Researchers develop a simple processing technique that could cut the cost of organic photovoltaics and wearable electronics
With a new technique for manufacturing single-layer organic polymer solar cells, scientists at UC Santa Barbara and three other universities might very well move organic photovoltaics into a whole new generation of wearable devices and enable small-scale distributed power generation.
The simple solution-based doping process involves briefly immersing organic semiconductor films in a dopant solution at room temperature. This technique, which could replace a more complex approach that requires vacuum processing, has the potential to affect many device platforms, including organic printed electronics, sensors, photodetectors and light-emitting diodes. The researchers’ findings appear in the journal Nature Materials.
“Because the new process is simple to use, general in terms of applicability and should be scalable to mass production, it has the potential to greatly accelerate the widespread implementation of plastic electronics, of which solar cells are one example,” said co-author Guillermo Bazan, director of UCSB’s Center for Polymers and Organic Solids. “One can see impacts in technologies ranging from light-emitting devices to transistors to transparent solar cells that can be incorporated into building design or greenhouses.”
Studied in many academic and industrial laboratories for two decades, organic solar cells have seen continuous, steady improvement in their power conversion efficiency, with laboratory values reaching 13 percent, compared to around 20 percent for commercial silicon-based cells. Though polymer-based cells are currently less efficient, they require less energy to produce than silicon cells and can be more easily recycled at the end of their lifetimes.
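For context, those percentages refer to power conversion efficiency (PCE), whose standard definition (general photovoltaics background, not a formula from the paper) is

```latex
\mathrm{PCE} = \frac{P_{\mathrm{out}}}{P_{\mathrm{in}}}
             = \frac{J_{\mathrm{sc}} \, V_{\mathrm{oc}} \, \mathrm{FF}}{P_{\mathrm{in}}}
```

where \(J_{\mathrm{sc}}\) is the short-circuit current density, \(V_{\mathrm{oc}}\) the open-circuit voltage, \(\mathrm{FF}\) the fill factor and \(P_{\mathrm{in}}\) the incident solar power density (about 100 mW/cm² under standard test conditions).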
This new method, which provides a way of inducing p-type electrical doping in organic semiconductor films, offers a simpler alternative to the air-sensitive molybdenum oxide layers used in the most efficient polymer solar cells. Thin films of organic semiconductors and their blends are immersed in polyoxometalate solutions in nitromethane for a brief time — on the order of minutes. The geometry of these new devices is unique as the functions of hole and electron collection are built into the light-absorbing active layer, resulting in the simplest single-layer geometry with few interfaces.
“High-performing organic solar cells require a multiple layer device structure,” said co-author Thuc-Quyen Nguyen, a professor in UCSB’s Department of Chemistry and Biochemistry. “The realization of single-layer photovoltaics with our approach will simplify the device fabrication process and therefore should reduce the cost. The initial lifetime testing of these single layer devices is promising. This exciting development will help transform organic photovoltaics into a commercial technology.”
Organic solar cells are unique within the context of providing transparent, flexible and easy-to-fabricate energy-producing devices. These could result in a host of novel applications, such as energy-harvesting windows and films that enable zero-cost farming by creating greenhouses that support crops and produce energy at the same time.
Learn more: Solar Cell Game Changer
UCSB electrical and computer engineers design an infinitesimal computing device
In 1959 renowned physicist Richard Feynman, in his talk “There’s Plenty of Room at the Bottom,” spoke of a future in which tiny machines could perform huge feats. Like many forward-looking concepts, his molecule- and atom-sized world remained for years in the realm of science fiction.
And then, scientists and other creative thinkers began to realize Feynman’s nanotechnological visions.
In the spirit of Feynman’s insight, and in response to the challenges he issued as a way to inspire scientific and engineering creativity, electrical and computer engineers at UC Santa Barbara have developed a design for a functional nanoscale computing device. The concept involves a dense, three-dimensional circuit operating on an unconventional type of logic that could, theoretically, be packed into a block no bigger than 50 nanometers on any side.
“Novel computing paradigms are needed to keep up with the demand for faster, smaller and more energy-efficient devices,” said Gina Adam, postdoctoral researcher at UCSB’s Department of Electrical and Computer Engineering and lead author of the paper “Optimized stateful material implication logic for three dimensional data manipulation,” published in the journal Nano Research. “In a regular computer, data processing and memory storage are separated, which slows down computation. Processing data directly inside a three-dimensional memory structure would allow more data to be stored and processed much faster.”
While efforts to shrink computing devices have been ongoing for decades — in fact, Feynman’s challenges as he presented them in his 1959 talk have been met — scientists and engineers continue to carve out room at the bottom for even more advanced nanotechnology. A nanoscale 8-bit adder operating within 50-by-50-by-50-nanometer dimensions, put forth as part of the current Feynman Grand Prize challenge by the Foresight Institute, has not yet been achieved. However, the continuing development and fabrication of progressively smaller components is bringing this virus-sized computing device closer to reality, said Dmitri Strukov, a UCSB professor of electrical and computer engineering.
“Our contribution is that we improved the specific features of that logic and designed it so it could be built in three dimensions,” he said.
Key to this development is the use of a logic system called material implication logic combined with memristors — circuit elements whose resistance depends on the history and direction of the currents that have flowed through them. Unlike the conventional computing logic and circuitry found in our present computers and other devices, in this form of computing, logic operation and information storage happen simultaneously and locally. This greatly reduces the need for components and space typically used to perform logic operations and to move data back and forth between operation and memory storage. The result of the computation is immediately stored in a memory element, which prevents data loss in the event of power outages — a critical function in autonomous systems such as robotics.
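To make the “compute where you store” idea concrete, here is a minimal Python sketch of stateful material implication logic. The IMPLY and FALSE primitives and the NAND construction below are the standard scheme from the memristor literature, not the paper’s circuit; real devices act on resistance states, which the booleans here only stand in for.

```python
# Minimal sketch of stateful IMPLY logic (illustrative, not the
# authors' implementation). Each boolean stands in for a memristor's
# high/low resistance state.

def imply(p: bool, q: bool) -> bool:
    """Stateful IMPLY: computes (NOT p) OR q; in hardware the result
    overwrites the state of memristor q."""
    return (not p) or q

def false_op() -> bool:
    """FALSE: unconditionally resets a memristor to logic 0."""
    return False

def nand(p: bool, q: bool) -> bool:
    """NAND from one FALSE and two IMPLY steps, the standard construction."""
    s = false_op()   # work memristor initialized to 0
    s = imply(p, s)  # s = NOT p
    s = imply(q, s)  # s = (NOT q) OR (NOT p) = NOT (p AND q)
    return s

# NAND is functionally complete, so larger circuits such as the
# 8-bit adder mentioned above can in principle be composed from just
# these two operations, with every intermediate result living in a
# memristor rather than shuttling to a separate memory.
for p in (False, True):
    for q in (False, True):
        assert nand(p, q) == (not (p and q))
```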
In addition, the researchers reconfigured the traditionally two-dimensional architecture of the memristor into a three-dimensional block, which could then be stacked and packed into the space required to meet the Feynman Grand Prize Challenge.
“Previous groups have shown that individual blocks can be scaled to very small dimensions, let’s say 10-by-10 nanometers,” said Strukov, who worked at technology company Hewlett-Packard’s labs when they ramped up development of memristors and material implication logic. By applying those results to his group’s developments, he said, the challenge could easily be met.
Tiny memristors are being heavily researched in academia and in industry for their promising uses in memory storage and neuromorphic computing. While implementations of material implication logic are still exotic and not yet mainstream, uses for it could emerge at any time, particularly in energy-scarce systems such as robotics and medical implants.
“Since this technology is still new, more research is needed to increase its reliability and lifetime and to demonstrate large scale three-dimensional circuits tightly packed in tens or hundreds of layers,” Adam said.
Learn more: A Tiny Machine
A group of scientists from Hong Kong University of Science and Technology; the University of California, Santa Barbara; Sandia National Laboratories and Harvard University were able to fabricate tiny lasers directly on silicon — a huge breakthrough for the semiconductor industry and well beyond.
For more than 30 years, the crystal lattice of silicon and of typical laser materials could not match up, making it impossible to integrate the two materials — until now.
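A quick back-of-envelope calculation shows the size of that mismatch, using standard room-temperature lattice constants (textbook values, not figures from the paper):

```python
# Lattice mismatch between GaAs (a typical laser material) and Si.
# Lattice constants are standard room-temperature textbook values.
a_gaas = 5.653  # angstroms
a_si = 5.431    # angstroms

mismatch = (a_gaas - a_si) / a_si
print(f"GaAs/Si lattice mismatch: {mismatch:.1%}")  # roughly 4%
```

A mismatch of roughly 4 percent is enough to seed the crystal defects that have historically ruined lasers grown directly on silicon.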
As the group reports in Applied Physics Letters, from AIP Publishing, integrating subwavelength cavities — the essential building blocks of their tiny lasers — onto silicon enabled them to create and demonstrate high-density on-chip light-emitting elements.
To do this, they first had to resolve silicon crystal lattice defects to a point where the cavities were essentially equivalent to those grown on lattice-matched gallium arsenide (GaAs) substrates. Nano-patterns created on silicon to confine the defects made the GaAs-on-silicon template nearly defect free and quantum confinement of electrons within quantum dots grown on this template made lasing possible.
The group was then able to use optical pumping, a process in which light, rather than electrical current, “pumps” electrons from a lower energy level in an atom or molecule to a higher level, to show that the devices work as lasers.
“Putting lasers on microprocessors boosts their capabilities and allows them to run at much lower powers, which is a big step toward photonics and electronics integration on the silicon platform,” said professor Kei May Lau, Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology.
Photonics advances allow us to be seen across the universe, with major implications for the search for extraterrestrial intelligence, says UC Santa Barbara physicist Philip Lubin
Looking up at the night sky — expansive and seemingly endless, stars and constellations blinking and glimmering like jewels just out of reach — it’s impossible not to wonder: Are we alone?
For many of us, the notion of intelligent life on other planets is as captivating as ideas come. Maybe in some other star system, maybe a billion light years away, there’s a civilization like ours asking the exact same question.
Imagine if we sent up a visible signal that could eventually be seen across the entire universe. Imagine if another civilization did the same.
The technology now exists to enable exactly that scenario, according to UC Santa Barbara physics professor Philip Lubin, whose new work applies his research and advances in directed-energy systems to the search for extraterrestrial intelligence (SETI). His recent paper “The Search for Directed Intelligence” appears in the journal REACH – Reviews in Human Space Exploration.
“If even one other civilization existed in our galaxy and had a similar or more advanced level of directed-energy technology, we could detect ‘them’ anywhere in our galaxy with a very modest detection approach,” said Lubin, who leads the UCSB Experimental Cosmology Group. “If we scale it up as we’re doing with directed-energy systems, how far could we detect a civilization equivalent to ours? The answer becomes that the entire universe is now open to us.
“Similar to the use of directed energy for relativistic interstellar probes and planetary defense that we have been developing, take that same technology and ask yourself, ‘What are consequences of that technology in terms of us being detectable by another ‘us’ in some other part of the universe?’” Lubin added. “Could we see each other? Can we behave as a lighthouse, or a beacon, and project our presence to some other civilization somewhere else in the universe? The profound consequences are, of course, ‘Where are they?’ Perhaps they are shy like us and do not want to be seen, or they don’t transmit in a way we can detect, or perhaps ‘they’ do not exist.”
Eliminates the need for an external light source for mid-infrared silicon photonic devices or photonic circuits
A team of researchers from across the country, led by Alexander Spott of the University of California, Santa Barbara, has built the first quantum cascade laser on silicon. The advance may have applications that span from chemical bond spectroscopy and gas sensing to astronomy and free-space communications.
Integrating lasers directly on silicon chips is challenging, but it is much more efficient and compact than coupling external laser light to the chips. Silicon’s indirect bandgap makes it difficult to build a laser out of silicon itself, but diode lasers can be built with III-V materials such as InP or GaAs. By directly bonding a III-V layer on top of a silicon wafer and then using the III-V layers to generate gain for the laser, this same group has integrated a multiple-quantum-well laser on silicon that operates at 2 µm. Limitations in diode lasers prevent going to longer wavelengths, where there are many more applications, so the group turned its attention to quantum cascade lasers instead.
Building a quantum cascade laser on silicon was a challenging task made more difficult by the fact that silicon dioxide becomes heavily absorptive at longer wavelengths in the mid-infrared. “This meant that not only did we have to build a different type of laser on silicon, we had to build a different silicon waveguide too,” Spott explained. “We built a type of waveguide called a SONOI waveguide [silicon-on-nitride-on-insulator], which uses a layer of silicon nitride [SiN] underneath the silicon waveguide, rather than just SiO2.”
New sensor technology created at UCSB captures nanoscale images with high spatial resolution and sensitivity
If using a single atom to capture high-resolution images of nanoscale material sounds like science fiction, think again.
That’s exactly what the Quantum Sensing and Imaging Group at UC Santa Barbara has achieved. Members of physicist Ania Jayich’s lab worked for two years to develop a radically new sensor technology capable of nanometer-scale spatial resolution and exquisite sensitivity. Their findings appear in the journal Nature Nanotechnology.
“This is the first tool of its kind,” said Jayich, UCSB’s Bruker Endowed Chair in Science and Engineering and associate director of the campus’s Materials Research Lab. “It operates from room temperature down to low temperatures where a lot of interesting physics happens. When thermal energy is low enough, the effects of electron interactions, for instance, become observable, leading to new phases of matter. And we can now probe these with unprecedented spatial resolution.”
Global leaders recently convened at the UN to sign the Paris climate agreement and committed to a wide range of strategies to reduce greenhouse gas (GHG) emissions. Expectations are particularly high for two of the most widely recognized solutions to climate change: energy efficiency and renewable energy.
But how much do we know about the environmental impacts of a large deployment of these technologies — including the benefits or costs from a life-cycle perspective? And by how much can the gains from energy-efficient technologies be multiplied if GHG emissions from electricity production are also reduced?
In a special issue, Yale’s Journal of Industrial Ecology aims to advance our understanding of the life cycle environmental and natural resource implications of energy efficiency technologies. This special issue, “Environmental Impacts of Demand-Side Technologies and Strategies for Carbon Mitigation,” was prepared in collaboration with the International Resource Panel (IRP) of the United Nations Environment Programme (UNEP). Several of the studies in this special issue will serve as technical appendices to a forthcoming IRP report on the life cycle implications of energy efficiency technologies.
“If we are to meet climate change targets,” said Peter Crane, dean of the Yale School of Forestry & Environmental Studies, “we need to dramatically decarbonize electricity generation and use technologies for energy-efficiency on a very large scale. To do this effectively, we need to know not only the impact on greenhouse gas emissions, but also the environmental and natural resource implications of these changes.”
Previous research on individual technologies used different assumptions and data or focused on a single region, making comparisons difficult. The contributions to this special issue seek to overcome this problem by employing comparable scenarios, assumptions and data, and by covering all nine global regions as much as possible.
“Research confirms that energy efficiency in general is great, but not without some ‘fine print,’” said Sangwon Suh, an associate professor at the University of California, Santa Barbara and lead editor of the special issue. “Demand-side technologies reduce greenhouse gas emissions as well as many other environmental impacts. However, the magnitudes of those improvements vary widely among technologies and regions. In some cases, demand-side technologies may even increase resource consumption and GHG emissions. It is crucial to understand where, when and in which technology investment should be placed to maximize the benefits.”
Contributions to this special issue cover a range of technologies — for both production and consumption of energy — including:
· building energy management,
· copper smelting,
· industrial symbiosis (by-product exchange and resource sharing), and
· transportation and logistics.
This special issue also addresses a number of key questions about the impact of energy efficiency technologies, such as the rebound effect.
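As a concrete illustration of the rebound effect (the standard textbook definition, not a calculation from the special issue): if an efficiency upgrade is expected to cut energy use by 30 percent but consumption only falls by 21 percent once behavior adjusts, 30 percent of the anticipated savings have been “taken back”:

```python
# Standard rebound-effect calculation with made-up example numbers.
expected_savings = 0.30  # engineering estimate of energy saved
actual_savings = 0.21    # observed savings after behavioral response

rebound = 1 - actual_savings / expected_savings
print(f"Rebound effect: {rebound:.0%}")  # 30% of expected savings lost
```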
The Journal of Industrial Ecology is a peer-reviewed, international bimonthly journal that examines the environmental impact of production and consumption from the perspective of the growing field of industrial ecology. It is owned by Yale University, headquartered at the Yale School of Forestry & Environmental Studies, and published by Wiley-Blackwell.
Articles in the special issue will be freely downloadable on the Web for a limited time.
Physicist Dirk Bouwmeester discovers a promising route for combined optical and solid state-based quantum information processing
Tiny units of matter and chemistry that they are, atoms constitute the entire universe. Some rare atoms can store quantum information, an important phenomenon for scientists in their ongoing quest for a quantum Internet.
New research from UC Santa Barbara scientists and their Dutch colleagues exploits a system that has the potential to transfer optical quantum information to a locally stored solid-state quantum format, a requirement of quantum communication. The team’s findings appear in the journal Nature Photonics.
“Our research aims at creating a quantum analog of current fiber optic technology in which light is used to transfer classical information — bits with values zero or one — between computers,” said author Dirk Bouwmeester, a professor in UCSB’s Department of Physics. “The rare earth atoms we’re studying can store the superpositions of zero and one used in quantum computation. In addition, the light by which we communicate with these atoms can also store quantum information.”
Atoms are each composed of a nucleus typically surrounded by inner shells full of electrons and often have a partially filled outer electron shell. The optical and chemical properties of the atoms are mainly determined by the electrons in the outer shell.
Rare earth atoms such as erbium and ytterbium have the opposite composition: a partially filled inner shell surrounded by filled outer shells. This special configuration is what enables these atoms to store quantum information.
However, the unique composition of rare earth atoms leads to electronic transitions so well shielded from the surrounding atoms that optical interactions are extremely weak. Even when implanted in a host material, these atoms maintain those shielded transitions, which in principle can be addressed optically in order to store and retrieve quantum information.
Bouwmeester collaborated with John Bowers, a professor in UCSB’s Department of Electrical and Computer Engineering, and investigators at Leiden University in the Netherlands to strengthen these weak interactions by implanting ytterbium into ultra-high-quality optical storage rings on a silicon chip.
“The presence of the high-quality optical ring resonator — even if no light is injected — changes the fundamental optical properties of the embedded atoms, which leads to an order of magnitude increase in optical interaction strength with the ytterbium,” Bouwmeester said. “This increase, known as the Purcell effect, has an intricate dependence on the geometry of the optical light confinement.”
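For background, the textbook form of the Purcell factor (standard cavity-QED theory; the article itself does not quote a formula) is

```latex
F_P = \frac{3}{4\pi^{2}} \left(\frac{\lambda}{n}\right)^{3} \frac{Q}{V}
```

where \(Q\) is the resonator quality factor, \(V\) its optical mode volume, \(\lambda\) the free-space wavelength and \(n\) the refractive index. The \(1/V\) scaling is why shrinking the storage rings, as described below, strengthens the optical interaction.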
The team’s findings indicate that new samples currently under development at UCSB can enable optical communication to a single ytterbium atom inside optical circuits on a silicon chip, a phenomenon of significant interest for quantum information storage. The experiments also explore the way in which the Purcell effect enhances optical interaction with an ensemble of a few hundred rare earth atoms. The grouping itself has interesting collective properties that can also be explored for the storage of quantum information.
Key is an effect called a photon echo, the result of two distinct light pulses, the first of which causes the ytterbium atoms to become partially excited.
“The first light pulse creates a set of atoms we ‘talk’ to in a specific state and we call that state ‘in phase’ because all the atoms are created at the same time by this optical pulse,” Bouwmeester explained. “However, the individual atoms have slightly different frequencies because of residual coupling to neighboring atoms, which affects their time evolution and causes decoherence in the system.” Decoherence is the inability to keep track of how the system evolves in all its details.
“The trick is that the second light pulse changes the state of the system so that it evolves backwards, causing the atoms to return to the initial phase,” he continued. “This makes everything coherent and causes the atoms to collectively emit the light they absorbed from the first pulse.”
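A toy numerical model makes this dephasing-and-rephasing picture concrete. Everything below is illustrative — the atom count, detuning spread and pulse timing are arbitrary stand-ins, not experimental parameters:

```python
import numpy as np

# Toy photon-echo model: N atoms dephase at slightly different
# frequencies; an idealized second pulse at t = tau conjugates their
# phases, so the collective coherence revives ("echoes") at t = 2*tau.
rng = np.random.default_rng(0)
n_atoms = 300
detunings = rng.normal(0.0, 1.0, n_atoms)  # random frequency offsets
tau = 5.0                                  # time of the second pulse

def coherence(t: np.ndarray) -> np.ndarray:
    """Magnitude of the summed atomic phase factors at each time."""
    phase = np.where(t[:, None] < tau,
                     detunings * t[:, None],              # free dephasing
                     detunings * (t[:, None] - 2 * tau))  # phase-conjugated evolution
    return np.abs(np.exp(1j * phase).sum(axis=1)) / n_atoms

t = np.linspace(0, 2.5 * tau, 501)
signal = coherence(t)
mask = t > tau
peak_t = t[mask][np.argmax(signal[mask])]
print(f"Collective emission revives near t = {peak_t:.1f} (2*tau = {2 * tau:.1f})")
```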
The strength of the photon echo contains important information about the fundamental properties of the ytterbium in the host material. “By analyzing the strength of these photon echoes, we are learning about the fundamental interactions of ytterbium with its surroundings,” Bouwmeester said. “Now we’re working on strengthening the Purcell effect by making the storage rings we use smaller and smaller.”
According to Bouwmeester, quantum computation needs to be compatible with optical communication for information to be shared and transmitted. “Our ultimate goal is to be able to communicate to a single ytterbium atom; then we can start transferring the quantum state of a single photon to a single ytterbium atom,” he added. “Coupling the quantum state of a photon to a quantum solid state is essential for the existence of a quantum Internet.”
Learn more: Rare Earth Atoms See the Light
UB research is part of a study arc to determine why this desensitization happens
Rapidly advancing technology has created ever more realistic video games. Images are sharp, settings have depth and detail, and the audio is crisp and authentic. At a glance, it appears real. So real, that research has consistently found that gamers feel guilty committing unjustified acts of violence within the game.
Now, a new University at Buffalo-led study suggests that the moral response produced by the initial exposure to a video game decreases as experience with the game develops.
The findings provide the first experimental evidence that repeatedly playing the same violent game reduces emotional responses — like guilt — not only to the original game, but to other violent video games as well.
Yet why this happens remains a mystery, according to Matthew Grizzard, assistant professor of communication and principal investigator of the study, published in the current issue of the journal “Media Psychology,” with co-authors Ron Tamborini and John L. Sherry of Michigan State University and René Weber of the University of California, Santa Barbara.
“What’s underlying this finding?” asks Grizzard. “Why do games lose their ability to elicit guilt, and why does this seemingly generalize to other, similar games?”
Grizzard, an expert in the psychological effects of media entertainment, has previously studied the ability of violent video games to elicit guilt. The current study builds upon that work.
Gamers often claim their actions in a video game are as meaningless to the real world as players capturing pawns on a chess board. Yet, previous research by Grizzard and others shows that immoral virtual actions can elicit higher levels of guilt than moral virtual actions. This finding would seem to contradict claims that virtual actions are completely divorced from the real world. Grizzard’s team wanted to replicate their earlier research and determine whether gamers’ claims that their virtual actions are meaningless actually reflects desensitization processes.
Although the findings of his study suggest that desensitization occurs, mechanisms underlying these findings are not entirely clear.
He says there are two arguments for the desensitization effect.
“One is that people are deadened because they’ve played these games over and over again,” he says. “This makes the gamers less sensitive to all guilt-inducing stimuli.”
The second argument is a matter of tunnel vision.
“This is the idea that gamers see video games differently than non-gamers, and this differential perception develops with repeated play.”
Non-gamers look at a particular game and process all that’s happening. For the non-gamer, the intensity of the scene trumps the strategies required to succeed. But gamers ignore much of the visual information in a scene as this information can be meaningless to their success in a game, according to Grizzard.
“This second argument says the desensitization we’re observing is not due to being numb to violence because of repeated play, but rather because the gamers’ perception has adapted and started to see the game’s violence differently.”
“Through repeated play, gamers may come to understand the artificiality of the environment and disregard the apparent reality provided by the game’s graphics.”
Grizzard says his future research is working toward answering these questions.
“This study is part of an overarching framework that I’ve been looking at in terms of the extent to which media can elicit moral emotions, like guilt, disgust and anger,” he says.
Since the beginning of spaceflight, humans have accomplished wonderful feats of exploration and showcased their drive to understand the universe.
Yet, in those 60 years, only one spacecraft, Voyager 1 (launched in 1977), has left the solar system. As remarkable as this is, humans will never reach even the nearest stars with our current propulsion technology. Instead, radically new strategies involving technology already available must be used.
We propose a roadmap to a program that will lead to sending relativistic probes to the nearest stars.
To do so requires a fundamental change in how we think about propulsion and in our definition of what a spacecraft is. In addition to larger spacecraft capable of human transportation, we consider “wafer sats,” wafer-scale systems weighing no more than a gram. The wafer sats would include integrated optical communications, optical systems and sensors. These craft, combined with directed-energy propulsion, could be capable of speeds greater than 0.25 c.
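The quoted speed is what puts the nearest stars within reach on human timescales. A rough cruise-time estimate using Proxima Centauri, about 4.24 light-years away (a standard astronomical figure, not a number from the roadmap):

```python
# Back-of-envelope travel time at the quoted 0.25c, ignoring
# acceleration time (directed-energy concepts accelerate quickly).
distance_ly = 4.24  # Proxima Centauri, in light-years
speed_c = 0.25      # cruise speed as a fraction of light speed

travel_time_years = distance_ly / speed_c
print(f"Cruise time to Proxima Centauri: ~{travel_time_years:.0f} years")  # ~17
```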
This program has applications for planetary defense, SETI and Kepler missions.
Rights-based approaches could double fish biomass and make 77 percent of world’s fisheries biologically healthy within a decade
New groundbreaking research shows that with improved fishing approaches — compared to business as usual — the majority of the world’s wild fisheries could be at healthy levels in just 10 years and global fish populations could double by 2050.
The study conducted by researchers from UC Santa Barbara, the University of Washington and the Environmental Defense Fund appears in the Proceedings of the National Academy of Sciences.
“This research shows that we really can have our fish and eat them, too,” said lead author Christopher Costello, a professor of environmental and resource economics at UCSB’s Bren School of Environmental Science & Management. “We no longer need to see ocean fisheries as a series of trade-offs. In fact, we show that we can have more fish in the water, more food on the plate and more prosperous fishing communities — and it can happen relatively quickly.”
The paper demonstrates that by 2050, applying the same improved fishing approaches could increase profits from the world’s ocean fisheries by 204 percent versus what can be expected under a business-as-usual approach. The increased harvest would be enough to provide a significant source of protein for an additional 500 million people. In the coming decades, with a projected 9.5 billion people competing for more food from maxed-out resources, finding sustainable ways to increase food production has become a critical challenge, Costello noted.
Calculations performed by the investigators showed that if reforms were implemented today, three-quarters of exploited fisheries worldwide could attain population goals within 10 years — and 98 percent by mid-century. The team used a massive database of 4,713 fisheries representing 78 percent of the ocean’s catch. This enabled a far more precise and more granular analysis than ever before.
“We’ve uncovered a really important insight: There is urgency and a tremendous upside in reforming thousands of small-scale, community fisheries around the world,” said co-author Ray Hilborn, a professor of marine biology and fisheries science at the University of Washington. “The research adds to the body of work showing that most of the world’s large fisheries are doing relatively well, but it emphasizes the critical need to rebuild local fisheries, most of which are in the developing world where millions depend on fisheries for food and their livelihoods.”
The analysis suggests that implementing reforms such as those based on secure fishing rights are critical to providing the combined benefits of increased fish populations, food production and profits. “Fishing rights” is a fishery management approach that ends the desperate race to fish by asking fishers to adhere to strict, science-based catch limits in exchange for a right to a share of the catch or to a traditional fishing area.
“We now have a clear roadmap for how to recover fisheries: Give fishermen secure fishing rights so they can control and protect their future,” said co-author Amanda Leland, senior vice president for oceans at the Environmental Defense Fund. “Countries from the U.S. to Belize to Namibia are leading a turnaround by implementing secure fishing rights and realizing benefits for people and the oceans.”
Since 2000, overfishing in U.S. federal waters has dropped by 70 percent as the number of species managed with fishing rights or “catch shares” has quadrupled. In the past three years, fishing industry jobs have increased 31 percent and fishing revenues have grown by 44 percent. In Belize, a fishing-rights program newly implemented by the government for small-scale fishermen has dramatically increased compliance and shows tremendous potential for recovering important local species.
“Our research reveals a stark choice: Either manage fisheries sustainably and realize the tremendous potential of the world’s oceans, or allow the status quo to continue to draw down the natural capital of our oceans,” said Costello.
Learn more: Better Global Ocean Management