Graphene quantum dots may offer a simple way to recycle waste carbon dioxide into valuable fuel rather than release it into the atmosphere or bury it underground, according to Rice University scientists.
Nitrogen-doped graphene quantum dots (NGQDs) are an efficient electrocatalyst for converting carbon dioxide into complex hydrocarbons, according to the research team led by Rice materials scientist Pulickel Ajayan. Using electrocatalysis, his lab has demonstrated the conversion of the greenhouse gas into small batches of ethylene and ethanol.
The research is detailed this week in Nature Communications.
Though they don’t entirely understand the mechanism, the researchers found NGQDs worked nearly as efficiently as copper, which is also being tested as a catalyst to reduce carbon dioxide into liquid fuels and chemicals. And NGQDs keep their catalytic activity for a long time, they reported.
“It is surprising because people have tried all different kinds of catalysts. And there are only a few real choices such as copper,” Ajayan said. “I think what we found is fundamentally interesting, because it provides an efficient pathway to screen new types of catalysts to convert carbon dioxide to higher-value products.”
The problems posed by rising carbon dioxide emissions are hardly a secret. Atmospheric carbon dioxide rose above 400 parts per million earlier this year, the highest it has been in at least 800,000 years, as measured through ice-core analysis.
“If we can convert a sizable fraction of the carbon dioxide that is emitted, we could curb the rising levels of atmospheric carbon dioxide, which have been linked to climate change,” said co-author Paul Kenis of the University of Illinois.
In lab tests, NGQDs proved able to reduce carbon dioxide by up to 90 percent and convert 45 percent into either ethylene or alcohol, comparable to copper electrocatalysts.
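Efficiencies like these are conventionally reported as Faradaic efficiencies, the fraction of the electrical charge passed that ends up in a given product. A minimal sketch of that standard bookkeeping, with purely illustrative numbers (none of the values below come from the study):

```python
# Faradaic efficiency: fraction of total charge that formed a given product.
# FE = z * n * F / Q, where z = electrons per molecule of product,
# n = moles of product, F = Faraday constant, Q = total charge passed.
F = 96485.0  # C/mol, Faraday constant

def faradaic_efficiency(z, moles_product, total_charge):
    """Fraction of the charge passed that went into the product."""
    return z * moles_product * F / total_charge

# Illustrative numbers: reducing CO2 to ethylene (C2H4) takes 12 electrons
# per ethylene molecule. Suppose 1 coulomb passed and 3.5e-7 mol produced.
fe = faradaic_efficiency(z=12, moles_product=3.5e-7, total_charge=1.0)
print(round(fe, 3))
```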
Graphene quantum dots are atom-thick sheets of carbon atoms that have been split into particles about a nanometer thick and just a few nanometers wide. The addition of nitrogen atoms to the dots enables varying chemical reactions when an electric current is applied and a feedstock like carbon dioxide is introduced.
“Carbon is typically not a catalyst,” Ajayan said. “One of our questions is why this doping is so effective. When nitrogen is inserted into the hexagonal graphitic lattice, there are multiple positions it can take. Each of these positions, depending on where nitrogen sits, should have different catalytic activity. So it’s been a puzzle, and though people have written a lot of papers in the last five to 10 years on doped and defective carbon being catalytic, the puzzle is not really solved.”
“Our findings suggest that the pyridinic nitrogen (nitrogen bonded as in pyridine, a basic organic compound) sitting at the edge of graphene quantum dots leads the catalytic conversion of carbon dioxide to hydrocarbons,” said Rice postdoctoral researcher Jingjie Wu, co-lead author of the paper. “The next task is further increasing nitrogen concentration to help increase the yield of hydrocarbons.”
Ajayan noted that while electrocatalysis is effective at lab scales for now, industry relies on scalable thermal catalysis to produce fuels and chemicals. “For that reason, companies probably won’t use it any time soon for large-scale production. But electrocatalysis can be easily done in the lab, and we showed it will be useful in the development of new catalysts.”
Learn more: Carbon dots dash toward ‘green’ recycling role
Image-processing system learns largely on its own, much like a human baby
Neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine have taken inspiration from the human brain in creating a new “deep learning” method that enables computers to learn about the visual world largely on their own, much as human babies do.
In tests, the group’s “deep rendering mixture model” largely taught itself how to distinguish handwritten digits using a standard dataset of 10,000 digits written by federal employees and high school students. In results presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, the researchers described how they trained their algorithm by giving it just 10 correct examples of each handwritten digit between zero and nine and then presenting it with several thousand more examples that it used to further teach itself. In tests, the algorithm was more accurate at correctly distinguishing handwritten digits than almost all previous algorithms that were trained with thousands of correct examples of each digit.
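The semisupervised idea, a handful of labels followed by self-teaching on a large unlabeled pool, can be sketched with a toy self-training loop. This is a generic NumPy illustration on synthetic data, not the team's deep rendering mixture model:

```python
import numpy as np

# Toy self-training sketch of semisupervised learning (illustrative only;
# the paper's deep rendering mixture model is far more sophisticated).
rng = np.random.default_rng(0)

# Two synthetic "digit" classes as Gaussian clusters in 2-D.
labeled_x = np.vstack([rng.normal(0, 0.5, (10, 2)),    # 10 labeled examples of class 0
                       rng.normal(3, 0.5, (10, 2))])   # 10 labeled examples of class 1
labeled_y = np.array([0] * 10 + [1] * 10)
unlabeled = np.vstack([rng.normal(0, 0.5, (500, 2)),
                       rng.normal(3, 0.5, (500, 2))])

# Step 1: fit class centroids on the few labeled examples.
centroids = np.array([labeled_x[labeled_y == c].mean(axis=0) for c in (0, 1)])

# Step 2: pseudo-label the unlabeled pool with the nearest centroid,
# then refit the centroids on everything (one self-training round).
pseudo = np.argmin(((unlabeled[:, None, :] - centroids) ** 2).sum(-1), axis=1)
all_x = np.vstack([labeled_x, unlabeled])
all_y = np.concatenate([labeled_y, pseudo])
centroids = np.array([all_x[all_y == c].mean(axis=0) for c in (0, 1)])

# The refined centroids should sit near the true cluster means (0,0) and (3,3).
print(np.round(centroids, 1))
```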
“In deep-learning parlance, our system uses a method known as semisupervised learning,” said lead researcher Ankit Patel, an assistant professor with joint appointments in neuroscience at Baylor and electrical and computer engineering at Rice. “The most successful efforts in this area have used a different technique called supervised learning, where the machine is trained with thousands of examples: This is a one. This is a two.
“Humans don’t learn that way,” Patel said. “When babies learn to see during their first year, they get very little input about what things are. Parents may label a few things: ‘Bottle. Chair. Momma.’ But the baby can’t even understand spoken words at that point. It’s learning mostly unsupervised via some interaction with the world.”
Patel said he and graduate student Tan Nguyen, a co-author on the new study, set out to design a semisupervised learning system for visual data that didn’t require much “hand-holding” in the form of training examples. For instance, neural networks that use supervised learning would typically be given hundreds or even thousands of training examples of handwritten digits before they would be tested on the 10,000 handwritten digits in the Modified National Institute of Standards and Technology (MNIST) database.
The semisupervised Rice-Baylor algorithm is a “convolutional neural network,” a piece of software made up of layers of artificial neurons whose design was inspired by biological neurons. These artificial neurons, or processing units, are organized in layers, and the first layer scans an image and does simple tasks like searching for edges and color changes. The second layer examines the output from the first layer and searches for more complex patterns. Mathematically, this nested method of looking for patterns within patterns within patterns is referred to as a nonlinear process.
“It’s essentially a very simple visual cortex,” Patel said of the convolutional neural net. “You give it an image, and each layer processes the image a little bit more and understands it in a deeper way, and by the last layer, you’ve got a really deep and abstract understanding of the image. Every self-driving car right now has convolutional neural nets in it because they are currently the best for vision.”
Like human brains, neural networks start out as blank slates and become fully formed as they interact with the world. For example, each processing unit in a convolutional net starts the same and becomes specialized over time as it is exposed to visual stimuli.
“Edges are very important,” Nguyen said. “Many of the lower layer neurons tend to become edge detectors. They’re looking for patterns that are both very common and very important for visual interpretation, and each one trains itself to look for a specific pattern, like a 45-degree edge or a 30-degree red-to-blue transition.
“When they detect their particular pattern, they become excited and pass that on to the next layer up, which looks for patterns in their patterns, and so on,” he said. “The number of times you do a nonlinear transformation is essentially the depth of the network, and depth governs power. The deeper a network is, the more stuff it’s able to disentangle. At the deeper layers, units are looking for very abstract things like eyeballs or vertical grating patterns or a school bus.”
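The edge-detector behavior Nguyen describes can be made concrete with a single hand-set convolution filter. This NumPy sketch hard-codes a vertical-edge kernel, whereas a real network would learn such filters from data:

```python
import numpy as np

# A 6x6 image: dark left half, bright right half -- a vertical edge in the middle.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A hand-set vertical-edge filter, like those lower-layer neurons learn.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

# Valid 2-D convolution (really cross-correlation, as in most deep-learning libraries).
h = image.shape[0] - 2
w = image.shape[1] - 2
response = np.array([[np.sum(image[i:i+3, j:j+3] * kernel) for j in range(w)]
                     for i in range(h)])

# The response is largest in the columns straddling the edge and zero elsewhere,
# which is what "the neuron gets excited by its pattern" means concretely.
print(response)
```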
Nguyen began working with Patel in January as the latter began his tenure-track academic career at Rice and Baylor. Patel had already spent more than a decade studying and applying machine learning in jobs ranging from high-volume commodities trading to strategic missile defense, and he’d just wrapped up a four-year postdoctoral stint in the lab of Rice’s Richard Baraniuk, another co-author on the new study. In late 2015, Baraniuk, Patel and Nguyen published the first theoretical framework that could both derive the exact structure of convolutional neural networks and provide principled solutions to alleviate some of their limitations.
Baraniuk said a solid theoretical understanding is vital for designing convolutional nets that go beyond today’s state-of-the-art.
“Understanding video images is a great example,” Baraniuk said. “If I am looking at a video, frame by frame by frame, and I want to understand all the objects and how they’re moving and so on, that is a huge challenge. Imagine how long it would take to label every object in every frame of a video. No one has time for that. And in order for a machine to understand what it’s seeing in a video, it has to understand what objects are, the concept of three-dimensional space and a whole bunch of other really complicated stuff. We humans learn those things on our own and take them for granted, but they are totally missing in today’s artificial neural networks.”
Patel said the theory of artificial neural networks, which was refined in the NIPS paper, could ultimately help neuroscientists better understand the workings of the human brain.
“There seem to be some similarities about how the visual cortex represents the world and how convolutional nets represent the world, but they also differ greatly,” Patel said. “What the brain is doing may be related, but it’s still very different. And the key thing we know about the brain is that it mostly learns unsupervised.
“What I and my neuroscientist colleagues are trying to figure out is, What is the semisupervised learning algorithm that’s being implemented by the neural circuits in the visual cortex? and How is that related to our theory of deep learning?” he said. “Can we use our theory to help elucidate what the brain is doing? Because the way the brain is doing it is far superior to any neural network that we’ve designed.”
Rice researchers find molybdenum diselenide not as strong as they thought
Scientists at Rice University have discovered that an atom-thick material being eyed for flexible electronics and next-generation optical devices is more brittle than they expected.
The Rice team led by materials scientist Jun Lou tested the tensile strength of two-dimensional, semiconducting molybdenum diselenide and discovered that flaws as small as one missing atom can initiate catastrophic cracking under strain.
The team’s report appears this month in Advanced Materials.
The finding may cause industry to look more carefully at the properties of 2-D materials before incorporating them in new technologies, he said.
“It turns out not all 2-D crystals are equal,” said Lou, a Rice professor of materials science and nanoengineering. “Graphene is a lot more robust compared with some of the others we’re dealing with right now, like this molybdenum diselenide. We think it has something to do with defects inherent to these materials.”
The defects could be as small as a single atom that leaves a vacancy in the crystalline structure, he said. “It’s very hard to detect them,” he said. “Even if a cluster of vacancies makes a bigger hole, it’s difficult to find using any technique. It might be possible to see them with a transmission electron microscope, but that would be so labor-intensive that it wouldn’t be useful.”
Molybdenum diselenide is a dichalcogenide, a two-dimensional semiconducting material that appears as a graphene-like hexagonal array from above but is actually a sandwich of metallic atoms between two layers of chalcogen atoms, in this case, selenium. Molybdenum diselenide is being considered for use as transistors and in next-generation solar cells, photodetectors and catalysts as well as electronic and optical devices.
Lou and colleagues measured the material’s elastic modulus, a measure of its stiffness (how much stress it takes to stretch the material while still letting it return to its initial state), at 177.2 (plus or minus 9.3) gigapascals. Graphene’s modulus is more than five times higher. They attributed the large variation to pre-existing flaws of between 3.6 and 77.5 nanometers.
Its fracture strength, the stress at which the material breaks, was measured at 4.8 (plus or minus 2.9) gigapascals. Graphene is nearly 25 times stronger.
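Those two measurements fix the strain at which a flawed sample fails, since stress equals modulus times strain in the elastic regime. A quick check with the numbers above:

```python
# Strain at fracture follows from stress = modulus * strain.
elastic_modulus_gpa = 177.2   # measured for molybdenum diselenide
fracture_strength_gpa = 4.8   # measured

fracture_strain = fracture_strength_gpa / elastic_modulus_gpa
print(f"{fracture_strain:.1%}")  # about 2.7 percent, matching the 2-3 percent quoted later
```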
Part of the project led by Rice postdoctoral researcher Yingchao Yang required moving molybdenum diselenide from a growth chamber in a chemical vapor deposition furnace to a microscope without introducing more defects. Yang solved the problem using a dry transfer process in place of a standard acid washing that would have ruined the samples.
To test samples, Yang placed rectangles of molybdenum diselenide onto a sensitive electron microscope platform invented by the Lou group. Natural van der Waals forces held the samples in place on springy cantilever arms that measured the applied stress.
Lou said the group attempted to measure the material’s fracture toughness, an indicator of how likely cracks are to propagate, as they had in an earlier study on graphene. But they found that pre-cutting cracks into molybdenum diselenide resulted in it shattering before stress could be applied, he said.
“The important message of this work is the brittle nature of these materials,” Lou said. “A lot of people are thinking about using 2-D crystals because they’re inherently thin. They’re thinking about flexible electronics because they are semiconductors and their theoretical elastic strain should be very high. According to our calculations, they can be stretched up to 10 percent.
“But in reality, because of the inherent defects, you rarely can achieve that much strain. The samples we have tested so far broke at 2 to 3 percent strain at most,” Lou said. “That should still be fine for most flexible applications, but unless they find a way to quench the defects, it will be very hard to achieve the theoretical limits.”
Rice scientists part of international team demonstrating untethered 3-wheelers
Scientists at Rice University and at the University of Graz, Austria, are driving three-wheeled, single-molecule “nanoroadsters” with light and, for the first time, seeing how they move.
The Rice lab of nanocar inventor and chemist James Tour synthesized light-driven nanocars six years ago, but with the aid of experimental physicists in Austria, they’re now able to drive fleets of single-molecule vehicles at once.
A report on the work appears in the American Chemical Society journal ACS Nano.
“It is exciting to see that motorized nanoroadsters can be propelled by their light-activated motors,” said Tour, who introduced nanocars in 2005 and motorized them a year later. “These three-wheelers are the first example of light-powered nanovehicles being observed to propel across a surface by any method, let alone by scanning tunneling microscopy.”
Rather than drive them chemically or with the tip of a tunneling microscope, as they will do with other vehicles in the upcoming international NanoCar Race in Toulouse, France, the researchers used light at specific wavelengths to move their nanoroadsters along a copper surface. The vehicles have rear-wheel molecular motors that rotate in one direction when light hits them. The rotation propels the vehicle much like a paddle wheel on water.
The team led by Tour and Leonhard Grill, a professor at the University of Graz and formerly at the Fritz-Haber-Institute, Berlin, used wavelength-sensitive modified motors invented by Dutch scientist Bernard Feringa, who shared this year’s Nobel Prize in chemistry for his work on molecular machines.
Remote control is key to the cars’ useful abilities. “If we have to ‘wire’ the car to a power source, like an electron beam, we would lose a lot of the cars’ functionality,” Tour said. “Powering them with light frees them to be driven wherever one can shine a light — and eventually we hope they will carry cargo.”
Another advantage is the ability to activate fleets of nanocars at once. “This is precisely what we seek — to use a light to activate motors and have swarms of nanovehicles moving across the surface, made directional through electric field gradients,” Tour said. “This would permit us the future prospect of using nanomachines like ants that work collectively to perform construction.”
Grill said remote control by light eliminates the need for a local probe that would have to address the molecules one by one. “Additionally, no ‘fuel’ molecules are required that would contaminate the surface and modify the diffusion properties,” he said.
Tour has used modified Feringa motors to power his lab’s nanosubmersibles; in the nanoroadsters, the motor serves as the rear wheel. The three-wheeled configuration simplifies handling, he said, because larger nanocars are more difficult to place onto an imaging surface and, according to Grill, often dissociate during deposition under vacuum.
Experiments by lead author Alex Saywell of the Grill group on nanoroadsters made at Rice demonstrated a need for a fine balance of light and temperature to allow “enhanced diffusion” of the molecules in a vacuum.
Grill said using light to drive nanomachines offers a fundamental advantage — the ability to selectively induce motion because of the motors’ sensitivity to wavelength. Ultraviolet light at 266 nanometers doubled the roadsters’ movement compared with “control” roadster molecules without motors. At 355 nanometers, it tripled.
The roadsters, made of 112 atoms, reached a top speed of 23 nanometers per hour.
A surface activation temperature of 161 kelvins (minus 170 degrees Fahrenheit) proved best for driving conditions. If the surface is too cold, the roadsters stick to it; too warm, and they diffuse randomly without help from the motor.
“We were surprised by the very clear correlation of the enhanced motion to the presence of the motor, the need for both heat and light to activate this motion — in perfect agreement with the concept of the Feringa motor — and the wavelength sensitivity that nicely fit our expectations from spectroscopy in solution,” Grill said.
Co-authors are Rice alumni Víctor García-López and Pinn-Tsong Chiang, and Anne Bakker, Johannes Mielke, Takashi Kumagai and Martin Wolf of the Max Planck Society, Berlin. Saywell is now a Marie Curie Research Fellow at the University of Nottingham, United Kingdom. Tour is the T.T. and W.F. Chao Chair in Chemistry as well as a professor of computer science and of materials science and nanoengineering at Rice.
The National Science Foundation, the Marie Curie Intra-European Fellowship and the German Science Foundation supported the research.
Rice University wireless researchers are taking a page from radio inventor Guglielmo Marconi to create the first laser-free, wireless system capable of delivering 1 terabit of data per second.
Such a speed would be more than 20,000 times faster than today’s top 4G wireless networks and about 20 times faster than the U.S.’ speediest home internet services. A terabit is 1 trillion bits of information. A 1 terabit-per-second signal could simultaneously stream about 200,000 high-definition movies.
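The comparisons in this paragraph reduce to straightforward arithmetic, which is easy to sanity-check:

```python
terabit = 1e12  # bits per second

# 200,000 simultaneous HD streams implies about 5 Mbit/s per stream,
# a plausible bitrate for high-definition video.
per_stream = terabit / 200_000
print(per_stream / 1e6)  # per-stream rate in Mbit/s

# "More than 20,000 times faster than top 4G" implies 4G topping out
# around 50 Mbit/s, consistent with real-world 4G speeds.
implied_4g = terabit / 20_000
print(implied_4g / 1e6)  # implied 4G rate in Mbit/s
```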
“Breaking the terabit-per-second barrier with radio will enable an entirely new set of wireless applications and communication paradigms,” said Edward Knightly, professor and chair of Rice’s Department of Electrical and Computer Engineering and principal investigator on a new $1.3 million, three-year grant from the National Science Foundation (NSF) to develop terabit wireless technology.
The need for such speed is indicated by a 2016 Cisco study that found global mobile data traffic grew by 74 percent in 2015, rising to 3.7 exabytes (almost 30 million terabits) per month in December 2015. The same report found that smartphone data usage grew 43 percent in 2015, with the average user consuming 929 megabytes per month. That was driven in large part by the public’s rapidly growing appetite for watching videos on mobile devices. Cisco found that mobile video accounted for 55 percent of all mobile data traffic in 2015.
That level of demand led the NSF to invest more than $60 million in radio spectrum research over the past five years. The grant to Rice is part of a new $11 million round of investment announced by NSF today.
To hit the 1 terabit-per-second threshold, Knightly and co-principal investigator Aydin Babakhani plan to use pulse-based radio technology. That represents a break with the carrier-wave modulation technology that wireless companies have relied on for decades. Babakhani, assistant professor of electrical and computer engineering at Rice, said pulse-based technology is probably the only laser-free wireless technology that can support data rates in the 1-terabit-per-second range over a single channel, but his team must clear a number of hurdles to demonstrate that they can both send and receive 1 trillion high-frequency radio pulses per second.
“Pulse-based technology isn’t new,” Babakhani said. “Marconi first demonstrated it in the early 1900s. He used an antenna connected to a large capacitor. By charging that, he could cause the power to build up until the voltage difference ionized the air gap and caused all the power to be sent to the antenna at once. That was the first pulse-based communication network. He used it to show he could transmit long distances, and it was low-frequency.
“Our pulse-based system is inspired by Marconi’s invention, but instead of the power going to a large antenna through an air gap, like Marconi’s, ours goes to an on-chip antenna through a high-speed bipolar transistor,” he said. “We’re storing magnetic energy on the chip, and then using a simple digital trigger to release that. Once released, it radiates as a picosecond impulse. There is no oscillator: It’s direct digital-to-impulse radiation. Unlike laser-based pulse systems, which can send even shorter pulses, ours can send many pulses very fast, which translates to a high pulse-rate frequency, something that’s vital for achieving the data speeds we are targeting.”
Babakhani’s lab, which set a world record earlier this year for transmitting the shortest radio pulse of 1.9 picoseconds, will develop and fabricate a dinner-plate-sized transmitter that can send even shorter pulses at high frequencies ranging from 100 gigahertz to several terahertz. The transmitter will actually contain about 10,000 individual antennas, each of which is a separate microchip capable of sending out picosecond radio pulses. Babakhani said the number of antennas will boost the signal strength, making it possible to demonstrate the technology over distances up to a quarter mile. In addition, the antenna array also will allow the team to steer the signal with fine accuracy.
“Modulated, frequency-based communications technology has been perfect for the lower frequency radio waves that we have relied on over the past half-century, but everything changes at higher frequencies above 100 gigahertz,” Knightly said. “Instead of having signals that bounce off walls and are highly scattered throughout the environment, we’re moving to a regime where we only effectively have line-of-sight. The benefit is we’re going to blast all the bandwidth and all the information directly to a device with laser-sharp focus, and no one else will be able to intercept that signal because any receiver that’s offline simply won’t detect it. So, we’re focusing like a laser but we’re using radio. The challenge is to steer that beam to the right place at the right time and to follow users as they move.”
Rice University researchers say 2-D boron may be best for flexible electronics
Though they’re touted as ideal for electronics, two-dimensional materials like graphene may be too flat and hard to stretch to serve in flexible, wearable devices. “Wavy” borophene might be better, according to Rice University scientists.
The Rice lab of theoretical physicist Boris Yakobson and experimental collaborators observed examples of naturally undulating, metallic borophene, an atom-thick layer of boron, and suggested that transferring it onto an elastic surface would preserve the material’s stretchability along with its useful electronic properties.
Highly conductive graphene has promise for flexible electronics, Yakobson said, but it is too stiff for devices that also need to stretch, compress or even twist. But borophene deposited on a silver substrate develops nanoscale corrugations. Weakly bound to the silver, it could be moved to a flexible surface for use.
The research appears this month in the American Chemical Society journal Nano Letters.
Rice collaborated with experimentalists at Argonne National Laboratory and Northwestern University to study borophene, which has been made in small quantities. Under the microscope, borophene displays corrugations that demonstrate its wavy nature, meaning it can be highly stretched once removed from the substrate, or reattached to a soft one, Yakobson said.
The Rice group builds computer simulations to analyze the properties of materials from the atoms up. Simulations by first author Zhuhua Zhang, a postdoctoral researcher in Yakobson’s group, showed that hexagonal vacancies in borophene help soften the material to facilitate its corrugated form.
“Borophene is metallic in its typical state, with strong electron-phonon coupling to support possible superconductivity, and a rich band structure that contains Dirac cones, as in graphene,” Yakobson said.
There is a hitch: Borophene needs the underlying structure to make it wavy. When grown on a featureless surface, its natural form resembles graphene, the flat, chicken-wire arrays of carbon atoms. Zhang said borophene is better seen as a triangular lattice with periodic arrays of hexagonal vacancies.
Borophene prefers to be flat because that’s where its energy is lowest, Yakobson said. But surprisingly, when grown on silver, borophene adopts its accordion-like form while silver reconstructs itself to match. The corrugation can be retained by “re-gluing” boron onto another substrate.
“This wavy conformation so far seems unique due to the exceptional structural flexibility and particular interactions of borophene with silver, and may be initially triggered by a slight compression in the layer when a bit too many boron atoms get onto the surface,” Zhang said.
Rice physicists probe photon-electron interactions in vacuum cavity experiments
Where light and matter intersect, the world illuminates. Where light and matter interact so strongly that they become one, they illuminate a world of new physics, according to Rice University scientists.
Rice physicists are closing in on a way to create a new condensed matter state in which all the electrons in a material act as one by manipulating them with light and a magnetic field. The effect, made possible by a custom-built, finely tuned cavity for terahertz radiation, shows one of the strongest light-matter coupling phenomena ever observed.
The work by Rice physicist Junichiro Kono and his colleagues is described in Nature Physics. It could help advance technologies like quantum computers and communications by revealing new phenomena to those who study cavity quantum electrodynamics and condensed matter physics, Kono said.
Condensed matter in the general sense is anything solid or liquid, but condensed matter physicists study forms that are much more esoteric, like Bose-Einstein condensates. A Rice team was one of the first to make a Bose-Einstein condensate in 1995 when it prompted atoms to form a gas at ultracold temperatures in which all the atoms lose their individual identities and behave as a single unit.
The Kono team is working toward something similar, but with electrons that are strongly coupled, or “dressed,” with light. Qi Zhang, a former graduate student in Kono’s group and lead author of the paper, designed and constructed an extremely high-quality cavity to contain an ultrathin layer of gallium arsenide, a material they’ve used to study superfluorescence. By tuning the material with a magnetic field to resonate with a certain state of light in the cavity, they prompted the formation of polaritons that act in a collective manner.
“This is a nonlinear optical study of a two-dimensional electronic material,” said Zhang, who based his Ph.D. thesis on the work. “When you use light to probe a material’s electronic structure, you’re usually looking for light absorption or reflection or scattering to see what’s happening in the material. That light is just a weak probe and the process is called linear optics.”
The researchers employed a parameter known as vacuum Rabi splitting to measure the strength of the light-matter coupling. “In more than 99 percent of previous studies of light-matter coupling in cavities, this value is a negligibly small fraction of the photon energy of the light used,” said Xinwei Li, a co-author and graduate student in Kono’s group. “In our study, vacuum Rabi splitting is as large as 10 percent of the photon energy. That puts us in the so-called ultrastrong coupling regime.
“This is an important regime because, eventually, if the vacuum Rabi splitting becomes larger than the photon energy, the matter goes into a new ground state. That means we can induce a phase transition, which is an important element in condensed matter physics,” he said.
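As a sketch of the figure of merit the researchers describe (using conventional cavity-QED notation, which may differ from the paper's own symbols), the coupling strength is quoted as the ratio of the vacuum Rabi splitting to the photon energy:

```latex
\eta \;=\; \frac{\hbar\Omega_R}{\hbar\omega_c} \;=\; \frac{\Omega_R}{\omega_c},
\qquad
\eta \gtrsim 0.1 \ \text{(ultrastrong coupling)}, \qquad
\eta \gtrsim 1 \ \text{(new ground state possible)}
```

In this notation, the 10 percent figure cited above corresponds to $\eta \approx 0.1$, at the threshold of the ultrastrong regime. The collective coupling of $N$ electrons is also conventionally enhanced as $g\sqrt{N}$, one reason a solid-state cavity containing many electrons can reach ratios that single atoms cannot.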
Phase transitions are transitions between states of matter, like ice to water to vapor. The specific transition Kono’s team is looking for is the superradiant phase transition in which the polaritons go into an ordered state with macroscopic coherence.
Kono said the amount of terahertz light put into the cavity is very weak. “What we depend on is the vacuum fluctuation. Vacuum, in a classical sense, is an empty space. There’s nothing. But in a quantum sense, a vacuum is full of fluctuating photons, having so-called zero-point energy. These vacuum photons are actually what we are using to resonantly excite electrons in our cavity.
“This general subject is what’s known as cavity quantum electrodynamics (QED),” Kono said. “In cavity QED, the cavity enhances the light so that matter in the cavity resonantly interacts with the vacuum field. What is unique about solid-state cavity QED is that the light typically interacts with this huge number of electrons, which behave like a single gigantic atom.”
He said solid-state cavity QED is also key for applications that involve quantum information processing, like quantum computers. “The light-matter interface is important because that’s where so-called light-matter entanglement occurs. That way, the quantum information of matter can be transferred to light and light can be sent somewhere.
“For improving the utility of cavity QED in quantum information, the stronger the light-matter coupling, the better, and it has to use a scalable, solid-state system instead of atomic or molecular systems,” he said. “That’s what we’ve achieved here.”
The high-quality gallium arsenide materials used in the study were synthesized via molecular beam epitaxy by John Reno of Sandia National Laboratories and John Watson and Michael Manfra of Purdue University, all co-authors of the paper. Wei Pan of Sandia National Laboratories and Rice graduate student Minhan Lou, who participated in sample preparation and transport and terahertz measurements, are also co-authors.
Rice University lab studies 2-D hybrids to see how they differ from common electronics
The old rules don’t necessarily apply when building electronic components out of two-dimensional materials, according to scientists at Rice University.
The Rice lab of theoretical physicist Boris Yakobson analyzed hybrids that put 2-D materials like graphene and boron nitride side by side to see what happens at the border. They found that the electronic characteristics of such “co-planar” hybrids differ from bulkier components.
Their results appear this month in the American Chemical Society journal Nano Letters.
Shrinking electronics means shrinking their components. Academic labs and industries are studying how materials like graphene may enable the ultimate in thin devices by building all the necessary circuits into an atom-thick layer.
“Our work is important because semiconductor junctions are a big field,” Yakobson said. “There are books with iconic models of electronic behavior that are extremely well-developed and have become the established pillars of industry.
“But these are all for bulk-to-bulk interfaces between three-dimensional metals,” he said. “Now that people are actively working to make two-dimensional devices, especially with co-planar electronics, we realized that the rules have to be reconsidered. Many of the established models utilized in industry just don’t apply.”
The researchers, led by Rice graduate student Henry Yu, built computer simulations that analyze charge transfer between atom-thick materials.
“It was a logical step to test our theory on both metals and semiconductors, which have very different electronic properties,” Yu said. “This makes graphene, which is a metal — or a semimetal, to be precise — molybdenum disulfide and boron nitride, which are semiconductors, or even their hybrids ideal systems to study.
“In fact, these materials have been widely fabricated and used in the community for almost a decade, which makes analysis of them more appreciable in the field. Furthermore, both hybrids of graphene-molybdenum disulfide and graphene-boron nitride have been successfully synthesized recently, which means our study has practical meaning and can be tested in the lab now,” he said.
Yakobson said 3-D materials have a narrow region for charge transfer at the positive and negative (or p/n) junction. But the researchers found that 2-D interfaces created “a highly nonlocalized charge transfer” — and an electric field along with it — that greatly increased the junction size. That could give them an advantage in photovoltaic applications like solar cells, the researchers said.
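For contrast, the narrow bulk charge-transfer region the article alludes to can be sketched with the standard abrupt-junction depletion-width formula (a textbook result, not from this paper):

```latex
W \;=\; \sqrt{\frac{2\,\varepsilon\,V_{bi}}{q}\left(\frac{1}{N_A}+\frac{1}{N_D}\right)}
```

Here $\varepsilon$ is the permittivity, $V_{bi}$ the built-in potential, $q$ the elementary charge, and $N_A$, $N_D$ the acceptor and donor densities. In 3-D this width is fixed by doping, independent of device size; the delocalized 2-D charge transfer described above has no such fixed width, which is why the junction region can grow with the device.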
The lab built a simulation of a hybrid of graphene and molybdenum disulfide and also considered graphene-boron nitride and graphene in which half was doped to create a p/n junction. Their calculations predicted that the presence of an electric field should make 2-D Schottky (one-way) devices like transistors and diodes more tunable based on the size of the device itself.
How the atoms line up with each other is also important, Yakobson said. Graphene and boron nitride both feature hexagonal lattices, so they mesh perfectly. But molybdenum disulfide, another promising material, isn’t exactly flat, though it’s still considered 2-D.
“If the atomic structures don’t match, you get dangling bonds or defects along the borderline,” he said. “The structure has consequences for electronic behavior, especially for what is called Fermi level pinning.”
Pinning can degrade electrical performance by creating an energy barrier at the interface, Yakobson explained. “But your Schottky barrier (in which current moves in only one direction) doesn’t change as expected. This is a well-known phenomenon for semiconductors; it’s just that in two dimensions, it’s different, and in this case may favor 2-D over 3-D systems.”
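The conventional 3-D description of that pinning, which the article says must be reconsidered in two dimensions, is often written with a pinning factor $S$ (standard textbook notation, assumed here):

```latex
\phi_B \;=\; S\left(\Phi_M - \chi\right) \;+\; \left(1 - S\right)\phi_0,
\qquad
S \equiv \frac{d\phi_B}{d\Phi_M}
```

where $\phi_B$ is the Schottky barrier height, $\Phi_M$ the metal work function, $\chi$ the semiconductor electron affinity and $\phi_0$ the charge-neutrality level of the interface states. $S = 1$ recovers the ideal Schottky-Mott limit, while $S \to 0$ means the barrier is fully pinned by interface states regardless of the metal chosen.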
Yakobson said the principles put forth by the new paper will apply to patterned hybrids of two or more 2-D patches. “You can make something special, but the basic effects are always at the interfaces. If you want to have many transistors in the same plane, it’s fine, but you still have to consider effects at the junctions.
“There’s no reason we can’t build 2-D rectifiers, transistors or memory elements,” he said. “They’ll be the same as we use routinely in devices now. But unless we develop a proper fundamental knowledge of the physics, they may fail to do what we design or plan.”
Rice lab discovers titanium-gold alloy that is four times harder than pure titanium
Titanium is the leading material for artificial knee and hip joints because it’s strong, wear-resistant and nontoxic, but an unexpected discovery by Rice University physicists shows that the gold standard for artificial joints can be improved with the addition of some actual gold.
“It is about 3-4 times harder than most steels,” said Emilia Morosan, the lead scientist on a new study in Science Advances that describes the properties of a 3-to-1 mixture of titanium and gold with a specific atomic structure that imparts hardness. “It’s four times harder than pure titanium, which is what’s currently being used in most dental implants and replacement joints.”
Morosan, a physicist who specializes in the design and synthesis of compounds with exotic electronic and magnetic properties, said the new study is “a first for me in a number of ways. This compound is not difficult to make, and it’s not a new material.”
In fact, the atomic structure of the material — its atoms are tightly packed in a “cubic” crystalline structure that’s often associated with hardness — was previously known. It’s not even clear that Morosan and former graduate student Eteri Svanidze, the study’s lead co-author, were the first to make a pure sample of the ultrahard “beta” form of the compound. But due to a couple of lucky breaks, they and their co-authors are the first to document the material’s remarkable properties.
“This began from my core research,” said Morosan, professor of physics and astronomy, of chemistry and of materials science and nanoengineering at Rice. “We published a study not long ago on titanium-gold, a 1-to-1 ratio compound that was a magnetic material made from nonmagnetic elements. One of the things that we do when we make a new compound is try to grind it into powder for X-ray purposes. This helps with identifying the composition, the purity, the crystal structure and other structural properties.
“When we tried to grind up titanium-gold, we couldn’t,” she recalled. “I even bought a diamond (coated) mortar and pestle, and we still couldn’t grind it up.”
Morosan and Svanidze decided to do follow-up tests to determine exactly how hard the compound was, and while they were at it, they also decided to measure the hardness of the other compositions of titanium and gold that they had used as comparisons in the original study.
One of the extra compounds was a mixture of three parts titanium and one part gold that had been prepared at high temperature.
What the team didn’t know at the time was that making titanium-3-gold at relatively high temperature produces an almost pure crystalline form of the beta version of the alloy — the crystal structure that’s four times harder than titanium. At lower temperatures, the atoms tend to arrange in another cubic structure — the alpha form of titanium-3-gold. The alpha structure is about as hard as regular titanium. It appears that labs that had previously measured the hardness of titanium-3-gold had measured samples that largely consisted of the alpha arrangement of atoms.
The team measured the hardness of the beta form of the crystal in conjunction with colleagues at Texas A&M University’s Turbomachinery Laboratory and at the National High Magnetic Field Laboratory at Florida State University. Morosan and Svanidze also performed other comparisons with titanium. For biomedical implants, for example, two key measures are biocompatibility and wear resistance. Because titanium and gold by themselves are among the most biocompatible metals and are often used in medical implants, the team believed titanium-3-gold would be comparable. In fact, tests by colleagues at the University of Texas MD Anderson Cancer Center in Houston determined that the new alloy was even more biocompatible than pure titanium. The story proved much the same for wear resistance: Titanium-3-gold also outperformed pure titanium.
Morosan said she has no plans to become a materials scientist or dramatically alter her lab’s focus, but she said her group is planning to conduct follow-up tests to further investigate the crystal structure of beta titanium-3-gold and to see if chemical dopants might improve its hardness even further.
Technology marries light-harvesting nanoantennas to high-reaction-rate catalysts
In a find that could transform some of the world’s most energy-intensive manufacturing processes, researchers at Rice University’s Laboratory for Nanophotonics have unveiled a new method for uniting light-capturing photonic nanomaterials and high-efficiency metal catalysts.
Each year, chemical producers spend billions of dollars on metal catalysts, materials that spur or speed up chemical reactions. Catalysts are used to produce trillions of dollars worth of chemical products. Unfortunately, most catalysts work only at high temperatures, high pressures or both. For example, the U.S. Energy Information Administration estimated that in 2010, just one segment of the U.S. chemical industry, plastic resin production, used almost 1 quadrillion British thermal units of energy, about the same amount of energy contained in 8 billion gallons of gasoline.
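The gasoline comparison above is easy to sanity-check. The sketch below assumes roughly 120,000 BTU per gallon of gasoline, a commonly cited approximate figure (the exact value varies by blend and source):

```python
# Sanity check of the article's energy comparison:
# ~1 quadrillion BTU of process energy vs. gallons of gasoline.

BTU_PER_GALLON = 120_000       # approximate energy content of one gallon of gasoline
RESIN_ENERGY_BTU = 1e15        # "almost 1 quadrillion BTU" (2010 estimate)

equivalent_gallons = RESIN_ENERGY_BTU / BTU_PER_GALLON
print(f"{equivalent_gallons / 1e9:.1f} billion gallons")  # prints "8.3 billion gallons"
```

The result, about 8.3 billion gallons, matches the article's "8 billion gallons" figure to within the precision of the heat-content assumption.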
Nanotechnology researchers have long been interested in capturing some of the worldwide catalysis market with energy-efficient photonic materials, metallic materials that are tailor-made with atomic precision to harvest energy from sunlight. Unfortunately, the best nanomaterials for harvesting light — gold, silver and aluminum — aren’t very good catalysts, and the best catalysts — palladium, platinum and rhodium — are poor at capturing solar energy.
The new catalyst, which is described in a study this week in the Proceedings of the National Academy of Sciences, is the latest innovation from LANP, a multidisciplinary, multi-investigator research group headed by photonics pioneer Naomi Halas. Halas, who also directs Rice’s Smalley-Curl Institute, said a number of studies in recent years have shown that light-activated “plasmonic” nanoparticles can be used to increase the amount of light absorbed by adjacent dark nanoparticles. Plasmons are waves of electrons that slosh like a fluid across the surface of tiny metallic nanoparticles. Depending upon the frequency of their sloshing, these plasmonic waves can interact with and harvest the energy from passing light.
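The frequency dependence of that "sloshing" can be sketched with the textbook quasi-static (Fröhlich) resonance condition for a small metal sphere, offered here as an illustration rather than the paper's own model:

```latex
\mathrm{Re}\!\left[\varepsilon_{\text{metal}}(\omega)\right] \;=\; -2\,\varepsilon_{\text{medium}}
```

When the metal's dielectric function satisfies this condition, the dipolar plasmon is driven resonantly and light absorption peaks; changing the particle's size, shape, material or surroundings shifts the frequency at which this occurs, which is what lets plasmonic nanoparticles be "tailor-made" to harvest particular bands of sunlight.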
Energy-stingy tech could give wearable computers continuous vision
Rice University researchers have just the thing for the age of information overload: an app that sees all and remembers only what it should.
RedEye, new technology from Rice’s Efficient Computing Group that was unveiled today at the International Symposium on Computer Architecture (ISCA 2016) conference in Seoul, South Korea, could provide computers with continuous vision — a first step toward allowing the devices to see what their owners see and keep track of what they need to remember.
“The concept is to allow our computers to assist us by showing them what we see throughout the day,” said group leader Lin Zhong, professor of electrical and computer engineering at Rice and a co-author of a new study about RedEye. “It would be like having a personal assistant who can remember someone you met, where you met them, what they told you and other specific information like prices, dates and times.”
Zhong said RedEye is an example of the kind of technology the computing industry is developing for use with wearable, hands-free, always-on devices that are designed to support people in their daily lives. The trend, which is sometimes referred to as “pervasive computing” or “ambient intelligence,” centers on technology that can recognize and even anticipate what someone needs and provide it right away.
“The pervasive-computing movement foresees devices that are personal assistants, which help us in big and small ways at almost every moment of our lives,” Zhong said. “But a key enabler of this technology is equipping our devices to see what we see and hear what we hear. Smell, taste and touch may come later, but vision and sound will be the initial sensory inputs.”