With the ubiquity of lithium-ion batteries in smartphones and other rechargeable devices, it’s hard to imagine replacing them. But the rising price of lithium has spurred a search for alternatives. One up-and-coming battery technology uses abundant, readily available seawater. Now, a new report on a sodium-air, seawater battery brings this option one step closer to viability.
The study appears in the journal ACS Applied Materials & Interfaces.
Sodium-air — or sodium-oxygen — batteries are considered one of the most promising and cost-effective alternatives to today’s lithium-ion standby. But some challenges remain before they can become a commercial reality. Soo Min Hwang, Youngsik Kim and colleagues have been tackling these challenges, using seawater as the catholyte — an electrolyte and cathode combined. In batteries, the electrolyte is the component that allows an electrical charge to flow between the cathode and anode. A constant flow of seawater into and out of the battery provides the sodium ions and water responsible for producing a charge. The reactions have been sluggish, however, so the researchers wanted to find a way to speed them up.
For their new battery, the team prepared a catalyst using porous cobalt manganese oxide nanoparticles. The pores create a large surface area for encouraging the electrochemical reactions needed to produce a charge. A hard carbon electrode served as the anode. The resulting battery performed efficiently over 100 cycles with an average discharge voltage of about 2.7 volts. This doesn’t yet measure up to a lithium-ion cell, which can reach 3.6 to 4.0 volts, but the advance is getting close to bridging the gap, the researchers say.
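The voltage gap the researchers describe translates directly into stored energy: at a given charge capacity, cell energy scales linearly with average discharge voltage. The capacity figure below is an assumption chosen purely for illustration, not a value from the study.

```python
# Back-of-the-envelope: for the same charge capacity Q, stored energy
# scales with average discharge voltage (E = V * Q).
# The 2.0 Ah capacity is an illustrative assumption, not from the study.

def cell_energy_wh(voltage_v, capacity_ah):
    """Energy in watt-hours for a cell with the given average voltage."""
    return voltage_v * capacity_ah

capacity = 2.0                               # Ah, assumed identical for both cells
seawater = cell_energy_wh(2.7, capacity)     # sodium-air seawater cell (this study)
lithium = cell_energy_wh(3.7, capacity)      # typical Li-ion cell, mid-range voltage

print(f"seawater: {seawater:.1f} Wh, li-ion: {lithium:.1f} Wh")
print(f"ratio: {seawater / lithium:.2f}")    # ~0.73
```

On these assumed numbers the seawater cell stores roughly three-quarters of the energy of a comparable lithium-ion cell, which is what "getting close to bridging the gap" amounts to in practice.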
Ground-breaking research from the University of Surrey and Augmented Optics Ltd, in collaboration with the University of Bristol, has developed potentially transformational technology which could revolutionise the capabilities of appliances that have previously relied on battery power to work.
This development by Augmented Optics Ltd could translate into very high energy density super-capacitors, making it possible to recharge your mobile phone, laptop or other mobile devices in just a few seconds.
The technology could have a seismic impact across a number of industries, including transport, aerospace, energy generation, and household applications such as mobile phones, flat-screen electronic devices, and biosensors. It could also revolutionise electric cars, allowing them to recharge as quickly as a regular non-electric car refuels with petrol, where recharging currently takes approximately six to eight hours. Instead of being limited to a drive from London to Brighton, an electric car using the new technology could travel from London to Edinburgh without recharging, and when it did recharge, the operation would take just a few minutes.
Supercapacitor buses are already being used in China, but they have a very limited range, whereas this technology could allow them to travel a lot further between recharges. Instead of recharging every two to three stops, buses using this technology might only need to recharge every 20-30 stops, and each recharge would take only a few seconds.
Elon Musk, of Tesla and SpaceX, has previously stated his belief that supercapacitors are likely to be the technology for future electric air transportation. We believe that the present scientific advance could make that vision a reality.
The technology was adapted from the principles used to make soft contact lenses, which Dr Donald Highgate (of Augmented Optics, and an alumnus of the University of Surrey) developed following his postgraduate studies at Surrey 40 years ago. Supercapacitors, an alternative power source to batteries, store energy using electrodes and electrolytes and both charge and deliver energy quickly, unlike conventional batteries, which do so in a much slower, more sustained way. Supercapacitors can charge and discharge rapidly over very large numbers of cycles. However, because of their poor energy density per kilogramme (roughly one-twentieth that of existing battery technology), they have until now been unable to compete with conventional battery energy storage in many applications.
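The one-twentieth figure can be made concrete with the standard capacitor energy formula, E = ½CV². The capacitance, voltage, mass, and battery figure below are rough, assumed values of the kind found on commodity datasheets, chosen only to illustrate the scale of the gap; they are not measurements from this research.

```python
# Illustrative specific-energy (Wh/kg) comparison. All figures are rough,
# assumed values chosen to show the order-of-magnitude gap the article
# describes; they are not measurements from this research.

def capacitor_energy_j(capacitance_f, voltage_v):
    """Energy stored in a capacitor: E = 1/2 * C * V^2 (joules)."""
    return 0.5 * capacitance_f * voltage_v ** 2

# A 3000 F, 2.7 V supercapacitor cell weighing ~0.5 kg (assumed):
e_joules = capacitor_energy_j(3000, 2.7)
e_wh_per_kg = e_joules / 3600 / 0.5          # joules -> Wh, then per kg

battery_wh_per_kg = 150                      # rough Li-ion figure, assumed

print(f"supercapacitor: {e_wh_per_kg:.1f} Wh/kg")
print(f"battery/supercap gap: ~{battery_wh_per_kg / e_wh_per_kg:.0f}x")
```

On these assumed numbers the supercapacitor stores around 6 Wh/kg against roughly 150 Wh/kg for a battery, a gap of about twenty-fold, which is why closing it would be so significant.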
Dr Brendan Howlin of the University of Surrey, explained: “There is a global search for new energy storage technology and this new ultra capacity supercapacitor has the potential to open the door to unimaginably exciting developments.”
The ground-breaking research programme was conducted by researchers at the University of Surrey’s Department of Chemistry where the project was initiated by Dr Donald Highgate of Augmented Optics Ltd. The research team was co-led by the principal investigators Dr Ian Hamerton and Dr Brendan Howlin. Dr Hamerton continues to collaborate on the project in his new post at the University of Bristol, where the electrochemical testing to trial the research findings was carried out by fellow University of Bristol academic – David Fermin, Professor of Electrochemistry in the School of Chemistry.
Dr Ian Hamerton, Reader in Polymers and Composite Materials from the Department of Aerospace Engineering at the University of Bristol, said: “While this research has potentially opened the route to very high density supercapacitors, these *polymers have many other possible uses in which tough, flexible conducting materials are desirable, including bioelectronics, sensors, wearable electronics, and advanced optics. We believe that this is an extremely exciting and potentially game changing development.”
*The materials are based on large organic molecules composed of many repeated sub-units, bonded together to form a three-dimensional network.
Jim Heathcote, Chief Executive of both Augmented Optics Ltd and Supercapacitor Materials Ltd, said: “It is a privilege to work with the teams from the University of Surrey and the University of Bristol. The test results from the new polymers suggest that extremely high energy density supercapacitors could be constructed in the very near future. We are now actively seeking commercial partners in order to supply our polymers and offer assistance to build these ultra high energy density storage devices.”
LMU chemist Oliver Trapp has designed and synthesized a catalyst which flexibly molds the handedness of the reaction products with which it interacts.
Many chemical compounds contain so-called chiral centers to which functional groups can be attached in either of two orientations. This gives rise to two different forms of the product which are mirror images of one another: Their spatial conformations are related to each other in the same way as right and left hands. Moreover, such configurational pairs – generally referred to as enantiomers – may exhibit different properties. For this reason, synthetic chemists are often faced with the problem of ensuring that the final product has the correct enantiomeric form. Oliver Trapp (Professor of Organic Chemistry at LMU since September 2016) and Golo Storch (a member of his previous research group at Heidelberg University, and currently at Yale University) now report the development of a catalyst that dynamically adapts to the stereochemistry of the compounds with which it interacts, and can progressively select for the desired enantiomer. The work is described in a paper which has just appeared in the journal Nature Chemistry.
Their system is based on a pair of molecular backbones that are known to interact with one another with enantiomeric selectivity. One of these serves as the carrier of the desired product while the other is equipped with a metal catalyst and flexible binding sites that recognize the product. The catalyst interacts transiently and repeatedly with the products of its own action, and can swiftly adjust the configuration of its binding sites. “We ourselves were surprised at how rapidly the catalyst adapts,” Trapp says. These interactions effectively modify the structure of the catalyst in such a way that its stereoselectivity is enhanced. Once the catalyst has recognized the desired enantiomer, its selective efficiency improves with every further catalytic cycle. The final result of this self-amplifying action is that the end-products all have the same chiral structure.
This dynamic adaptability is of great interest in the context of the drug industry’s never-ending search for biologically active compounds. Not only that, it may throw new light on how stereoselective chemical reactions operate in biological systems, where one normally finds only one chiral form of any given compound. “The world in which we live is monochiral,” says Trapp. “Researchers have not yet found a convincing explanation for this. But it is conceivable that the functional principle of supermolecular interaction which we have exploited was also crucial for the origin of life.”
Researchers develop a simple processing technique that could cut the cost of organic photovoltaics and wearable electronics
With a new technique for manufacturing single-layer organic polymer solar cells, scientists at UC Santa Barbara and three other universities might very well move organic photovoltaics into a whole new generation of wearable devices and enable small-scale distributed power generation.
The simple doping solution-based process involves briefly immersing organic semiconductor films in a solution at room temperature. This technique, which could replace a more complex approach that requires vacuum processing, has the potential to affect many device platforms, including organic printed electronics, sensors, photodetectors and light-emitting diodes. The researchers’ findings appear in the journal Nature Materials.
“Because the new process is simple to use, general in terms of applicability and should be configurable for mass production, it has the potential to greatly accelerate the widespread implementation of plastic electronics, of which solar cells are one example,” said co-author Guillermo Bazan, director of UCSB’s Center for Polymers and Organic Solids. “One can see impacts in technologies ranging from light-emitting devices to transistors to transparent solar cells that can be incorporated into building design or greenhouses.”
Studied in many academic and industrial laboratories for two decades, organic solar cells have experienced a continuous and steady improvement in their power conversion efficiency with laboratory values reaching 13 percent compared to around 20 percent for commercial silicon-based cells. Though polymer-based cells are currently less efficient, they require less energy to produce than silicon cells and can be more easily recycled at the end of their lifetimes.
This new method, which provides a way of inducing p-type electrical doping in organic semiconductor films, offers a simpler alternative to the air-sensitive molybdenum oxide layers used in the most efficient polymer solar cells. Thin films of organic semiconductors and their blends are immersed in polyoxometalate solutions in nitromethane for a brief time — on the order of minutes. The geometry of these new devices is unique as the functions of hole and electron collection are built into the light-absorbing active layer, resulting in the simplest single-layer geometry with few interfaces.
“High-performing organic solar cells require a multiple layer device structure,” said co-author Thuc-Quyen Nguyen, a professor in UCSB’s Department of Chemistry and Biochemistry. “The realization of single-layer photovoltaics with our approach will simplify the device fabrication process and therefore should reduce the cost. The initial lifetime testing of these single layer devices is promising. This exciting development will help transform organic photovoltaics into a commercial technology.”
Organic solar cells are unique within the context of providing transparent, flexible and easy-to-fabricate energy-producing devices. These could result in a host of novel applications, such as energy-harvesting windows and films that enable zero-cost farming by creating greenhouses that support crops and produce energy at the same time.
Scientists have developed a new optical chip for a telescope that enables astronomers to have a clear view of alien planets that may support life.
Seeing a planet outside the solar system that, like Earth, orbits close to its host sun is very difficult with today’s standard astronomical instruments because of the sun’s brightness.
Associate Professor Steve Madden from The Australian National University (ANU) said the new chip removes light from the host sun, allowing astronomers for the first time to take a clear image of the planet.
“The ultimate aim of our work with astronomers is to be able to find a planet like Earth that could support life,” said Dr Madden from the ANU Research School of Physics and Engineering.
“To do this we need to understand how and where planets form inside dust clouds, and then use this experience to search for planets with an atmosphere containing ozone, which is a strong indicator of life.”
Physicists and astronomers at ANU worked on the optical chip with researchers at the University of Sydney and the Australian Astronomical Observatory.
Dr Madden said the optical chip worked in a similar way to noise cancelling headphones.
“This chip is an interferometer that adds equal but opposite light waves from a host sun which cancels out the light from the sun, allowing the much weaker planet light to be seen,” he said.
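The cancellation Dr Madden describes can be sketched numerically: add a bright wave to an equal-amplitude, opposite-phase copy of itself and only the faint second signal survives. The amplitudes and frequencies below are arbitrary assumptions for illustration, not parameters of the actual chip.

```python
import numpy as np

# Numerical sketch of the interferometric nulling idea: combine starlight
# with an equal-but-opposite (pi-phase-shifted) copy of itself, and only
# the much fainter planet signal remains. All values are arbitrary.

t = np.linspace(0.0, 1.0, 1000)
star = 1000.0 * np.sin(2 * np.pi * 5 * t)          # bright host sun
planet = 0.001 * np.sin(2 * np.pi * 5 * t + 0.3)   # faint, phase-shifted planet light

# One arm of the interferometer carries star + planet; the other arm
# delivers the inverted starlight. Summing the arms cancels the star.
combined = (star + planet) - star

print(np.allclose(combined, planet))  # True: only the planet light remains
```

In the real instrument the two waves are optical fields split and recombined on-chip rather than arrays subtracted in software, but the destructive-interference principle is the same.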
PhD student Harry-Dean Kenchington Goldsmith, who built the chip at the ANU Laser Physics Centre, said the technology works like thermal imaging that fire fighters rely on to see through smoke.
“The chip uses the heat emitted from the planet to peer through dust clouds and see planets forming. Ultimately the same technology will allow us to detect ozone on alien planets that could support life,” said Mr Kenchington Goldsmith from the ANU Research School of Physics and Engineering.
Black light does more than make posters glow. Cornell researchers have developed a chemical tool to control inflammation that is activated by ultraviolet (UV) light.
The method will allow scientists to study inflammation and the immune system, and may one day prove effective as a targeted therapy for inflammatory diseases, while minimizing side effects to healthy tissues.
The researchers, who reported their results in a study published in October in the journal Chemical Science, designed a small molecule that is capable of controlling an immune response when exposed to UV light radiation.
“Currently, there aren’t a lot of tools that are able to manipulate the immune system in a spatio-temporal fashion,” said Pamela Chang, assistant professor of microbiology and immunology, and the paper’s senior author. Bibudha Parasar, a graduate student in Chang’s lab, is the paper’s first author.
“We are pushing the forefront of developing new technologies to control inflammation and the immune system, with the ultimate goal of being able to study these biological pathways and perhaps develop therapies for inflammatory diseases,” Chang said.
Inflammation is caused by the immune system as one of the body’s responses to eliminate infectious pathogens and other foreign or dangerous agents. When inflammation becomes chronic, it can lead to a host of diseases, including asthma, multiple sclerosis, rheumatoid arthritis, colitis and Crohn’s disease.
The researchers created a chemical probe that inhibits a reaction mediated by enzymes called histone deacetylases (HDACs). These enzymes regulate genes that turn on when the immune system is challenged and promote inflammation. The probe carries an HDAC inhibitor that suppresses this inflammatory response, and it releases the active inhibitor only in the presence of UV light. This is especially useful because HDACs are ubiquitous and have other biological roles, and most drugs affect the entire system, leading to unintended consequences.
“If you turned off all the HDACs in the body, you would probably be hitting a lot of pathways that you didn’t want to turn off,” said Chang. “We can control when and where we turn off the HDACs using light. The idea is that you can actually target the tissue that has chronic inflammation and regulate it by selectively inhibiting HDACs in the tissue that’s affected.”
Because photodynamic therapies are already being developed and have existing clinical infrastructure, the new tool could potentially be used in the clinic to inhibit inflammation in patients with inflammatory diseases, Chang said.
The ancient Japanese art of flower arranging was the inspiration for a groundbreaking technique to create tiny “artificial brains” that could be used to develop personalized cancer treatments.
The organoids, clusters of thousands of human brain cells, cannot perform a brain’s basic functions, much less generate thought. But they provide a far more authentic model – the first of its kind – for studying how brain tumours grow, and how they can be stopped.
“This puts the tumour within the context of a brain, instead of a flat plastic dish,” said Christian Naus, a professor in the department of cellular and physiological sciences, who conceived the project with a Japanese company that specializes in bioprinting. He shared details about the technique at November’s annual Society for Neuroscience conference in San Diego. “When cells grow in three dimensions instead of two, adhering only to each other and not to plastic, an entirely different set of genes are activated.”
Naus studies glioblastoma, a particularly aggressive brain cancer that usually takes root deep inside the brain, and easily spreads. The standard care is surgery, followed by radiation and/or chemotherapy, but gliomas almost always return because a few malignant cells manage to leave the tumour and invade surrounding brain tissue. From the time of diagnosis, average survival is one year.
The idea for creating a more authentic model of glioblastoma originated when Naus partnered with a Japanese biotechnology company, Cyfuse, that has developed a particular technique for printing human tissues based on the Japanese art of flower arranging known as ikebana. In ikebana, artists use a heavy plate with brass needles sticking up, upon which the stems of flowers are affixed. Cyfuse’s bioprinting technique uses a much smaller plate covered with microneedles.
Working with Naus and research associate Wun Chey Sin, Kaori Harada of Cyfuse skewered small spheres of human neural stem cells on the microneedles. As the stem cells multiplied and differentiated into brain tissue, they merged and formed larger structures known as organoids, about two millimetres to three millimetres in diameter. Although the organoids lack blood vessels, they are small enough to allow oxygen and nutrients to permeate the tissue.
“The cells make their own environment,” said Naus, Canada Research Chair in Gap Junctions and Neurological Disorders. “We’re not doing anything except printing them, and then they self-assemble.”
The team then implanted cancerous glioma cells inside the organoids. Naus found that the gliomas spread into the surrounding normal cells.
Having shown that the tumour invades the surrounding tissue, Naus envisions that such a technique can be used with a patient’s own cells – both their normal brain cells and their cancerous cells – to grow a personalized organoid with a glioma at its core, and then test a variety of possible drugs or combinations of treatment to see if any of them stop the cancer from growing and invading.
“With this method, we can easily and authentically replicate a model of the patient’s brain, or at least some of the conditions under which a tumour grows in that brain,” said Naus. “Then we could feasibly test hundreds of different chemical combinations on that patient’s cells to identify a drug combination that shows the most promising result, offering a personalized therapy for brain cancer patients.”
Researchers discover long sought after mechanism in human cells that could help treat diseases caused by viruses, including influenza and Ebola
A team of researchers, co-led by a University of California, Riverside professor, has found a long-sought-after mechanism in human cells that creates immunity to influenza A virus, which causes annual seasonal epidemics and occasional pandemics.
The research, outlined in a paper published online today in the journal Nature Microbiology, could have broad implications for the immunological understanding of human diseases caused by RNA viruses including influenza, Ebola, West Nile, and Zika viruses.
“This opens up a new way to understand how humans respond to viral infections and develop new methods to control viral infections,” said Shou-Wei Ding, a professor of plant pathology and microbiology at UC Riverside, who is the co-corresponding author of the paper.
The findings build on more than 20 years of research by Ding on antiviral RNA interference (RNAi), which involves an organism producing small interfering RNAs (siRNAs) to clear a virus.
His initial research showed that RNAi is a common antiviral defense in plants, insects and nematodes and that viral infections in these organisms require active suppression of RNAi by specific viral proteins. That work led him to study RNAi as an antiviral defense in mammals.
In a 2013 paper in the journal Science he outlined findings that show mice use RNAi to destroy viruses. But, it remained an open debate as to whether the same was true in humans.
That open debate led Ding back to a key 2004 paper in which he described a new activity of a protein (non-structural protein 1, or NS1) in the influenza virus that can block the antiviral function of RNAi in fruit flies, a common model system used by scientists.
In the current Nature Microbiology paper, the researchers demonstrated that human cells produce abundant siRNAs to target the influenza A virus when the viral NS1 is not active.
They showed that the creation of viral siRNAs in infected human cells is mediated by an enzyme known as Dicer and is potently suppressed by both the NS1 protein of influenza A virus and a protein (virion protein 35, or VP35) found in Ebola and Marburg viruses.
The researchers in the lab of the co-corresponding author, Kate L. Jeffrey, an investigator in the Massachusetts General Hospital gastrointestinal unit and an assistant professor of medicine at Harvard Medical School, further demonstrated that infections of mature mammalian cells by influenza A virus and other RNA viruses are naturally inhibited by RNAi, using mouse cells specifically defective in RNAi.
“Our studies show that the antiviral function of RNAi is conserved in mammals against distinct RNA viruses, suggesting an immediate need to assess the role of antiviral RNAi in human infectious diseases caused by RNA viruses, including Ebola, West Nile, and Zika viruses,” Jeffrey said.
The Nature Microbiology paper is called “Induction and suppression of antiviral RNA interference by influenza A virus in mammalian cells.”
Addressing safety risks and energy loss for a society with hydrogen energy
A research group led by Professor Hiroyuki Nishide and Professor Kenichi Oyaizu from the Department of Applied Chemistry developed a hydrogen-carrying polymer, which can be molded as a tangible, safe, and compact plastic sheet.
Although technology development and research on realizing hydrogen as a major energy source are under way, conventional methods of storing and carrying hydrogen have been accompanied by safety risks such as explosions. Hence, hydrogen-exposed organic compounds have recently been studied as hydrogen storage materials for their ability to stably and reversibly store hydrogen by forming chemical bonds. However, these compounds require vessels or sealed tanks operated at high pressure and/or temperature and often encounter difficulty in separation from the evolved hydrogen gas. A much safer and more efficient system for storing and carrying hydrogen has been in demand.
The research group discovered that a ketone (fluorenone) polymer, which can be molded as a plastic sheet, can fix hydrogen via simple electrolytic hydrogenation at -1.5 V (versus Ag/AgCl) in water at room temperature. In turn, the fluorenol polymer, the hydrogenated alcohol derivative of the fluorenone polymer, releases hydrogen when heated at 80 degrees Celsius with an aqueous iridium catalyst. What’s more, the group demonstrated that this cycle of fixing and releasing hydrogen can be repeated under mild conditions without significant deterioration, leading to the development of a portable, hydrogen-carrying plastic ketone polymer that can be carried around in your pocket.
The advantages of the ketone/alcohol polymer include easy handling, moldability, robustness, non-flammability and low toxicity, and the research results are expected to make contributions in building distributed energy systems in regional areas.
The aim of this research project was to analyse learning with Augmented Reality (AR) technology and the related motivational and cognitive aspects in an informal learning context. The 146 participants were 12-year-old Finnish pupils visiting a science centre exhibition.
The results showed that the AR-technology experience was especially beneficial for pupils who otherwise belong to the lowest-achieving group at school: while learning science, they closed the gap with other students. The highest-achieving students, in turn, found the experience more challenging and attained higher-quality learning outcomes.
Augmented Reality (AR) differs from Virtual Reality (VR): VR is an entirely virtual illusion, whereas AR creates a mixed reality by adding visual elements to the real, physical environment around us. This research group from the University of Helsinki has been studying informal learning and Augmented Reality for more than a decade. Only after the Pokémon phenomenon has AR technology become known to wider audiences. However, many experts do not consider the Pokémon technology a true AR solution; a recent Scientific American article calls it “location-based entertainment”.
“There is much hype around utilizing ICT and digitalization in education. The majority of the comments are based on everyday knowledge and anecdotes. The number of evidence-based education research reports is astonishingly small,” says Professor Hannu Salmi from the University of Helsinki, Finland.
“Much ICT-based educational material is old-fashioned: text-books have merely been converted into digital form. The latest technologies, such as AR, allow the creation of totally new types of learning solutions rather than simply transporting old knowledge into a new format. Our team has been developing AR materials for clearly defined phenomena such as molecular movement in gases, gravity, sound waves, and aeroplane wing physics. This is not only about learning knowledge-based facts; learning by doing and making observations also develops thinking skills – learning to learn.”
“Video games, computer-based entertainment, and serious educational PC games have traditionally been more beneficial for boys. However, in this AR case there was no gap between boys and girls in post-knowledge testing; thus the girls benefitted more from the informal learning experience than the boys,” says Helena Thuneberg, senior researcher at the University of Helsinki. “Girls had a higher relative autonomy experience (RAI), an important background factor for high-performance learning. Meanwhile, situational motivation was much more strongly interconnected among the boys.”
AR also seems to be a good tool for different kinds of learners, bridging the gap between formal education and informal learning in an effective way.
Scientists at the University of Sussex have invented a ground-breaking new method that puts the construction of large-scale quantum computers within reach of current technology.
Quantum computers could solve certain problems – that would take the fastest supercomputer millions of years to calculate – in just a few milliseconds.
They have the potential to create new materials and medicines, as well as solve long-standing scientific and financial problems.
Universal quantum computers can be built in principle, but the technological challenges are tremendous: until now, the engineering required to build one has been considered more difficult than manned space travel to Mars.
Quantum computing on a small scale using trapped ions (charged atoms) is carried out by aligning individual laser beams onto individual ions with each ion forming a quantum bit.
However, a large-scale quantum computer would need billions of quantum bits, therefore requiring billions of precisely aligned lasers, one for each ion.
Instead, scientists at Sussex have invented a simple method where voltages are applied to a quantum computer microchip (without having to align laser beams) – to the same effect.
Professor Winfried Hensinger and his team also succeeded in demonstrating the core building block of this new method with an impressively low error rate at their quantum computing facility at Sussex.
Professor Hensinger said: “This development is a game changer for quantum computing making it accessible for industrial and government use. We will construct a large-scale quantum computer at Sussex making full use of this exciting new technology.”
Quantum computers may revolutionise society in a similar way as the emergence of classical computers. Dr Seb Weidt, part of the Ion Quantum Technology Group said: “Developing this step-changing new technology has been a great adventure and it is absolutely amazing observing it actually work in the laboratory.”
How the brain recognizes faces
MIT researchers and their colleagues have developed a new computational model of the human brain’s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed.
The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation — say, 45 degrees from center — but not the direction — left or right.
This property wasn’t built into the system; it emerged spontaneously from the training process. But it duplicates an experimentally observed feature of the primate face-processing mechanism. The researchers consider this an indication that their system and the brain are doing something similar.
“This is not a proof that we understand what’s going on,” says Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines (CBMM), a multi-institution research consortium funded by the National Science Foundation and headquartered at MIT. “Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it’s strong evidence that we are on the right track.”
Indeed, the researchers’ new paper includes a mathematical proof that the particular type of machine-learning system they use, which was intended to offer what Poggio calls a “biologically plausible” model of the nervous system, will inevitably yield intermediary representations that are indifferent to angle of rotation.
Poggio, who is also a primary investigator at MIT’s McGovern Institute for Brain Research, is the senior author on a paper describing the new work, which appeared today in the journal Computational Biology. He’s joined on the paper by several other members of both the CBMM and the McGovern Institute: first author Joel Leibo, a researcher at Google DeepMind, who earned his PhD in brain and cognitive sciences from MIT with Poggio as his advisor; Qianli Liao, an MIT graduate student in electrical engineering and computer science; Fabio Anselmi, a postdoc in the Laboratory for Computational and Statistical Learning, a joint venture of MIT and the Italian Institute of Technology; and Winrich Freiwald, an associate professor at the Rockefeller University.
The new paper is “a nice illustration of what we want to do in [CBMM], which is this integration of machine learning and computer science on one hand, neurophysiology on the other, and aspects of human behavior,” Poggio says. “That means not only what algorithms does the brain use, but what are the circuits in the brain that implement these algorithms.”
Poggio has long believed that the brain must produce “invariant” representations of faces and other objects, meaning representations that are indifferent to objects’ orientation in space, their distance from the viewer, or their location in the visual field. Magnetic resonance scans of human and monkey brains suggested as much, but in 2010, Freiwald published a study describing the neuroanatomy of macaque monkeys’ face-recognition mechanism in much greater detail.
Freiwald showed that information from the monkey’s optic nerves passes through a series of brain locations, each of which is less sensitive to face orientation than the last. Neurons in the first region fire only in response to particular face orientations; neurons in the final region fire regardless of the face’s orientation — an invariant representation.
But neurons in an intermediate region appear to be “mirror symmetric”: That is, they’re sensitive to the angle of face rotation without respect to direction. In the first region, one cluster of neurons will fire if a face is rotated 45 degrees to the left, and a different cluster will fire if it’s rotated 45 degrees to the right. In the final region, the same cluster of neurons will fire whether the face is rotated 30 degrees, 45 degrees, 90 degrees, or anywhere in between. But in the intermediate region, a particular cluster of neurons will fire if the face is rotated by 45 degrees in either direction, another if it’s rotated 30 degrees, and so on.
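The three response profiles described above can be sketched numerically. The following is an illustrative toy model only, not the tuning curves measured in the study; the Gaussian form, the preferred angles, and the tuning widths are all assumptions chosen for clarity.

```python
import numpy as np

def early_unit(angle, preferred=45.0, width=15.0):
    # View-specific: fires only near one signed orientation (e.g. +45 degrees),
    # so a face rotated the same amount in the opposite direction barely registers.
    return np.exp(-((angle - preferred) ** 2) / (2 * width ** 2))

def intermediate_unit(angle, preferred=45.0, width=15.0):
    # Mirror-symmetric: tuned to the magnitude of rotation, not its sign,
    # so -45 degrees and +45 degrees evoke exactly the same response.
    return np.exp(-((abs(angle) - preferred) ** 2) / (2 * width ** 2))

def late_unit(angle):
    # Invariant: responds equally to any orientation of the same face.
    return 1.0

for a in (-45.0, 45.0):
    print(early_unit(a), intermediate_unit(a), late_unit(a))
```

Note how only the intermediate unit gives identical responses at -45 and +45 degrees, which is the “mirror symmetric” behavior Freiwald observed.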
This is the behavior that the researchers’ machine-learning system reproduced. “It was not a model that was trying to explain mirror symmetry,” Poggio says. “This model was trying to explain invariance, and in the process, there is this other property that pops out.”
The researchers’ machine-learning system is a neural network, so called because it roughly approximates the architecture of the human brain. A neural network consists of very simple processing units, arranged into layers, that are densely connected to the processing units — or nodes — in the layers above and below. Data are fed into the bottom layer of the network, which processes them in some way and feeds them to the next layer, and so on. During training, the output of the top layer is correlated with some classification criterion — say, correctly determining whether a given image depicts a particular person.
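A minimal sketch of that layered, feed-forward structure is below. The layer sizes, random weights, and ReLU nonlinearity are placeholders, not the architecture used in the paper; the point is only the flow of data from the bottom layer upward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each entry is (input size, output size) for one densely connected layer.
# These dimensions are arbitrary stand-ins.
layer_shapes = [(64, 32), (32, 16), (16, 2)]
weights = [rng.normal(scale=0.1, size=shape) for shape in layer_shapes]

def forward(x):
    # Data enter at the bottom layer; each layer transforms its input
    # and feeds the result to the layer above.
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # simple ReLU nonlinearity
    return x

image = rng.normal(size=64)   # stand-in for a flattened face image
output = forward(image)       # top-layer output, compared against labels in training
print(output.shape)
```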
In earlier work, Poggio’s group had trained neural networks to produce invariant representations by, essentially, memorizing a representative set of orientations for just a handful of faces, which Poggio calls “templates.” When the network was presented with a new face, it would measure the new face’s difference from each of these templates. That difference would be smallest for the templates whose orientations were the same as that of the new face, and the output of their associated nodes would end up dominating the information signal by the time it reached the top layer. The measured difference between the new face and the stored faces gives the new face a kind of identifying signature.
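One common way to realize this template scheme is to compare a new face against every stored view of every template and pool the comparisons over orientation, leaving one number per template as the signature. The sketch below uses dot-product similarity and max pooling; the data are random stand-ins, and the details surely differ from the group’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical memory: 3 template faces, each stored at 5 orientations,
# with each view represented as a 100-dimensional feature vector.
templates = rng.normal(size=(3, 5, 100))   # (face, orientation, features)

def signature(new_face):
    # Compare the new face against every stored view of every template
    # (dot products), then pool over orientations. The pooled values
    # change little as the new face rotates, giving an invariant signature.
    sims = np.einsum('fov,v->fo', templates, new_face)  # shape (3, 5)
    return sims.max(axis=1)                             # one value per template

sig = signature(rng.normal(size=100))
print(sig.shape)
```

Because pooling keeps only the best-matching orientation per template, rotating the input mostly changes *which* stored view wins, not the pooled value itself.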
In experiments, this approach produced invariant representations: A face’s signature turned out to be roughly the same no matter its orientation. But the mechanism — memorizing templates — was not, Poggio says, biologically plausible.
So instead, the new network uses a variation on Hebb’s rule, which is often described in the neurological literature as “neurons that fire together wire together.” That means that during training, as the weights of the connections between nodes are being adjusted to produce more accurate outputs, nodes that react in concert to particular stimuli end up contributing more to the final output than nodes that react independently (or not at all).
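In its simplest form, Hebb’s rule strengthens a connection in proportion to the product of the activities on its two ends. The sketch below shows that basic update for a single linear layer; the learning rate, layer sizes, and linear activation are illustrative assumptions, not the variation used in the paper.

```python
import numpy as np

def hebbian_update(w, x, learning_rate=0.01):
    # Hebb's rule: "neurons that fire together wire together."
    # Each weight grows in proportion to the product of its input
    # unit's activity (x) and its output unit's activity (y).
    y = w @ x                                   # post-synaptic activity (linear unit)
    return w + learning_rate * np.outer(y, x)   # correlated activity strengthens weights

rng = np.random.default_rng(2)
w = rng.normal(scale=0.1, size=(4, 8))   # 8 inputs, 4 outputs (arbitrary sizes)
x = rng.normal(size=8)                   # one input pattern
w_new = hebbian_update(w, x)
print(w_new.shape)
```

Repeated over many stimuli, this update concentrates weight on input-output pairs that are consistently co-active, which is how concerted responses come to dominate the final output.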
This approach, too, ended up yielding invariant representations. But the middle layers of the network also duplicated the mirror-symmetric responses of the intermediate visual-processing regions of the primate brain.
“I think it’s a significant step forward,” says Christof Koch, president and chief scientific officer at the Allen Institute for Brain Science. “In this day and age, when everything is dominated by either big data or huge computer simulations, this shows you how a principled understanding of learning can explain some puzzling findings.”
“They’re very careful,” Koch adds. “They’re only looking at the feed-forward pathway — in other words, the first 80, 100 milliseconds. The monkey opens its eyes, and within 80 to 100 milliseconds, it can recognize a face and push a button signaling that. The question is what goes on in those 80 to 100 milliseconds, and the model that they have seems to explain that quite well.”