Since graphene's discovery in 2004, scientists have believed that the material may have an innate ability to superconduct. Now Cambridge researchers have found a way to activate that previously dormant potential.
Researchers have found a way to trigger the innate, but previously hidden, ability of graphene to act as a superconductor – meaning that it can be made to carry an electrical current with zero resistance.
The finding, reported in Nature Communications, further enhances the potential of graphene, which is already widely seen as a material that could revolutionise industries such as healthcare and electronics. Graphene is a two-dimensional sheet of carbon atoms and combines several remarkable properties; for example, it is very strong, but also light and flexible, and highly conductive.
Since graphene's discovery in 2004, scientists have speculated that it may also have the capacity to be a superconductor. Until now, superconductivity in graphene has only been achieved by doping it with, or by placing it on, a superconducting material – a process which can compromise some of its other properties.
But in the new study, researchers at the University of Cambridge managed to activate the dormant potential for graphene to superconduct in its own right. This was achieved by coupling it with a material called praseodymium cerium copper oxide (PCCO).
Superconductors are already used in numerous applications. Because they generate large magnetic fields they are an essential component in MRI scanners and levitating trains. They could also be used to make energy-efficient power lines and devices capable of storing energy for millions of years.
Superconducting graphene opens up yet more possibilities. The researchers suggest, for example, that graphene could now be used to create new types of superconducting quantum devices for high-speed computing. Intriguingly, it might also be used to prove the existence of a mysterious form of superconductivity known as “p-wave” superconductivity, which academics have been struggling to verify for more than 20 years.
The research was led by Dr Angelo Di Bernardo and Dr Jason Robinson, Fellows at St John's College, University of Cambridge, alongside collaborators Professor Andrea Ferrari of the Cambridge Graphene Centre; Professor Oded Millo of the Hebrew University of Jerusalem; and Professor Jacob Linder of the Norwegian University of Science and Technology in Trondheim.
“It has long been postulated that, under the right conditions, graphene should undergo a superconducting transition, but can’t,” Robinson said. “The idea of this experiment was, if we couple graphene to a superconductor, can we switch that intrinsic superconductivity on? The question then becomes how do you know that the superconductivity you are seeing is coming from within the graphene itself, and not the underlying superconductor?”
Similar approaches have been taken in previous studies using metallic-based superconductors, but with limited success. “Placing graphene on a metal can dramatically alter the properties so it is technically no longer behaving as we would expect,” Di Bernardo said. “What you see is not graphene’s intrinsic superconductivity, but simply that of the underlying superconductor being passed on.”
PCCO is an oxide from a wider class of superconducting materials called “cuprates”. It also has well-understood electronic properties, and using a technique called scanning tunnelling microscopy, the researchers were able to distinguish the superconductivity in PCCO from the superconductivity observed in graphene.
Superconductivity is characterised by the way the electrons interact: within a superconductor electrons form pairs, and the spin alignment between the electrons of a pair may be different depending on the type – or “symmetry” – of superconductivity involved. In PCCO, for example, the pairs’ spin state is misaligned (antiparallel), in what is known as a “d-wave state”.
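As a rough, textbook-level illustration (a simplification, not a detail taken from the study itself), the spin structure of the pairs differs between these symmetries: d-wave pairs are spin singlets with antiparallel spins, while p-wave pairs are spin triplets:

$$\lvert\text{singlet}\rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert\uparrow\downarrow\rangle - \lvert\downarrow\uparrow\rangle\bigr) \qquad \text{(antiparallel spins, as in d-wave cuprates such as PCCO)}$$

$$\lvert\text{triplet}\rangle \in \Bigl\{\lvert\uparrow\uparrow\rangle,\ \lvert\downarrow\downarrow\rangle,\ \tfrac{1}{\sqrt{2}}\bigl(\lvert\uparrow\downarrow\rangle + \lvert\downarrow\uparrow\rangle\bigr)\Bigr\} \qquad \text{(spin-triplet states, as in p-wave superconductors)}$$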
By contrast, when graphene was coupled to superconducting PCCO in the Cambridge-led experiment, the results suggested that the electron pairs within graphene were in a p-wave state. “What we saw in the graphene was, in other words, a very different type of superconductivity than in PCCO,” Robinson said. “This was a really important step because it meant that we knew the superconductivity was not coming from outside it and that the PCCO was therefore only required to unleash the intrinsic superconductivity of graphene.”
It remains unclear what type of superconductivity the team activated, but their results strongly indicate that it is the elusive “p-wave” form. If so, the study could transform the ongoing debate about whether this mysterious type of superconductivity exists, and – if so – what exactly it is.
In 1994, researchers in Japan fabricated a triplet superconductor that may have p-wave symmetry using a material called strontium ruthenate (SRO). SRO's p-wave symmetry has never been fully verified, partly because SRO is a bulky crystal, which makes it challenging to fabricate into the type of devices necessary to test theoretical predictions.
“If p-wave superconductivity is indeed being created in graphene, graphene could be used as a scaffold for the creation and exploration of a whole new spectrum of superconducting devices for fundamental and applied research areas,” Robinson said. “Such experiments would necessarily lead to new science through a better understanding of p-wave superconductivity, and how it behaves in different devices and settings.”
The study also has further implications. For example, it suggests that graphene could be used to make a transistor-like device in a superconducting circuit, and that its superconductivity could be incorporated into molecular electronics. “In principle, given the variety of chemical molecules that can bind to graphene’s surface, this research can result in the development of molecular electronics devices with novel functionalities based on superconducting graphene,” Di Bernardo added.
A UCF researcher has combined cutting-edge nanoscience with a magnetic phenomenon discovered more than 170 years ago to create a method for speedy medical tests.
The discovery, if commercialized, could lead to faster test results for HIV, Lyme disease, syphilis, rotavirus and other infectious conditions.
“I see no reason why a variation of this technique couldn’t be in every hospital throughout the world,” said Shawn Putnam, an assistant professor in the University of Central Florida’s College of Engineering & Computer Science.
At the core of the research, recently published in the academic journal Small, are nanoparticles – particles on the scale of a billionth of a meter. Putnam's team coated the nanoparticles with the antibody to BSA, or bovine serum albumin, a protein commonly used as the basis of a variety of diagnostic tests.
When the nanoparticles are mixed into a test solution – such as one used for a blood test – the BSA proteins preferentially bind to the antibodies that coat the nanoparticles, like a lock and key.
That reaction was already well known. But Putnam’s team came up with a novel way of measuring the quantity of proteins present. He used nanoparticles with an iron core and applied a magnetic field to the solution, causing the particles to align in a particular formation. As proteins bind to the antibody-coated particles, the rotation of the particles becomes sluggish, which is easy to detect with laser optics.
The interaction of a magnetic field and light is known as Faraday rotation, a principle discovered by scientist Michael Faraday in 1845. Putnam adapted it for biological use.
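For reference, the textbook relation for Faraday rotation (a standard result, not a figure from Putnam's paper) gives the rotation angle of the light's polarization as

$$\theta = V\,B\,d,$$

where V is the Verdet constant of the medium, B is the magnetic flux density along the direction of propagation, and d is the path length. In a setup like the one described, slower rotation of the protein-laden magnetic particles would show up as a change in this measured rotation signal.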
“It’s an old theory, but no one has actually applied this aspect of it,” he said.
Other antigens and their unique antibodies could be substituted for the BSA protein used in the research, allowing medical tests for a wide array of infectious diseases.
The proof of concept shows the method could be used to produce biochemical immunology test results in as little as 15 minutes, compared to several hours for ELISA, or enzyme-linked immunosorbent assay, which is currently a standard approach for biomolecule detection.
Scientific investigations sometimes produce serendipitous discoveries that shift the work from one focus to another. In the case of researchers at Mayo Clinic in Arizona, studies addressing obesity's impact on cancer treatment led to an unexpected discovery that shifted the focus from cancer to obesity. The investigators observed that two common cancer-fighting drugs sparked significant weight loss in obese mice, even though the mice continued their excessive consumption of a high-fat diet.
These results, which are part of a Mayo Clinic study, were reported in the Jan. 17 edition of Oncotarget.
“We were surprised to observe that when morbidly obese mice were treated with certain cancer-fighting drugs, the drugs not only targeted their cancers, but also tended to spontaneously resolve their obesity even with undiminished gorging on a high-fat diet,” says Mayo Clinic cancer immunotherapist Peter Cohen, M.D., who co-led the study with postdoctoral fellow Cheryl Myers, Ph.D., and Mayo Clinic immunologist Sandra Gendler, Ph.D.
“Importantly, two chemotherapy agents, methotrexate and cyclophosphamide, could be dosed to completely reverse obesity without detectable toxicity, even in mice without cancer,” explains Dr. Myers. “Interestingly, these drugs are already used to treat some noncancerous conditions, such as rheumatoid arthritis.”
More research needs to be done to see if the same outcome can be achieved in morbidly obese patients.
“The ease with which this weight loss was achieved in mice – even with continued caloric binging – is in stark contrast to the Herculean difficulties morbidly obese patients experience trying to preserve weight loss through dietary restraint,” adds Dr. Gendler.
The weight reduction observed in the obese mice was not attributable to trivial explanations, such as a decrease in dietary intake, increased energy expenditure or malabsorption. Instead, the investigators identified multiple effects of methotrexate or cyclophosphamide that worked together to expedite loss of excessive weight in mice. Much like chemotherapy’s well-known ability to decrease red and white blood cell precursors transiently, methotrexate or cyclophosphamide depleted fat cell precursors, leading to much decreased fat storage. “This meant that excessive dietary calories had to go somewhere else in the body instead, such as to the liver,” explains Dr. Cohen.
“Surprisingly, the liver maintained a robust level of metabolic activity during methotrexate or cyclophosphamide treatment, but was nearly shut down in regards to fat production and fat storage,” adds Dr. Myers.
“Based on our composite data,” explains Dr. Gendler, “it appears that methotrexate or cyclophosphamide can induce the livers of obese mice to burn off rather than accumulate excessive dietary fat. This results in desirable weight reduction instead of increased obesity, even with continued caloric binging.”
The study sets the stage for further research, exploring how these metabolic mechanisms could reduce the need for severe dietary constraints in morbidly obese individuals.
Medical implants like stents, catheters and tubing introduce a risk of blood clotting and infection – a perpetual problem for many patients.
Colorado State University engineers offer a potential solution: A specially grown, “superhemophobic” titanium surface that’s extremely repellent to blood. The material could form the basis for surgical implants with lower risk of rejection by the body.
Biomedical, materials approaches
It’s an outside-the-box innovation achieved at the intersection of two disciplines: biomedical engineering and materials science. The work, recently published in Advanced Healthcare Materials, is a collaboration between the labs of Arun Kota, assistant professor of mechanical engineering and biomedical engineering; and Ketul Popat, associate professor in the same departments.
Kota, an expert in novel, “superomniphobic” materials that repel virtually any liquid, joined forces with Popat, an innovator in tissue engineering and bio-compatible materials. Starting with sheets of titanium, commonly used for medical devices, their labs grew chemically altered surfaces that act as perfect barriers between the titanium and blood. Their teams conducted experiments showing very low levels of platelet adhesion, a biological process that leads to blood clotting and eventual rejection of a foreign material.
A material “phobic” (repellent) to blood might seem counterintuitive, the researchers say, since biomedical scientists often use materials “philic” (with affinity) to blood to make them biologically compatible. “What we are doing is the exact opposite,” Kota said. “We are taking a material that blood hates to come in contact with, in order to make it compatible with blood.” The key innovation is that the surface is so repellent that blood is, in effect, tricked into behaving as though there were virtually no foreign material there at all.
The undesirable interaction of blood with foreign materials is an ongoing problem in medical research, Popat said. Over time, stents can develop clots and obstructions that lead to heart attacks or embolisms. Often patients need blood-thinning medications for the rest of their lives – and the drugs aren’t foolproof.
“The reason blood clots is because it finds cells in the blood to go to and attach,” Popat said. “Normally, blood flows in vessels. If we can design materials where blood barely contacts the surface, there is virtually no chance of clotting, which is a coordinated set of events. Here, we’re targeting the prevention of the first set of events.”
The researchers analyzed variations of titanium surfaces, including different textures and chemistries, and they compared the extent of platelet adhesion and activation. Fluorinated nanotubes offered the best protection against clotting, and they plan to conduct follow-up experiments.
Growing a surface and testing it in the lab is only the beginning, the researchers say. They want to continue examining other clotting factors, and eventually, to test real medical devices.
SF State astronomer Stephen Kane searches for signs of life in one of the extrasolar systems closest to Earth
Is there anybody out there? The question of whether Earthlings are alone in the universe has puzzled everyone from biologists and physicists to philosophers and filmmakers. It’s also the driving force behind San Francisco State University astronomer Stephen Kane’s research into exoplanets — planets that exist outside Earth’s solar system.
As one of the world’s leading “planet hunters,” Kane focuses on finding “habitable zones,” areas where water could exist in a liquid state on a planet’s surface if there’s sufficient atmospheric pressure. Kane and his team, including former undergraduate student Miranda Waters, examined the habitable zone of a planetary system 14 light years away. Their findings will appear in the next issue of the Astrophysical Journal in a paper titled “Characterization of the Wolf 1061 Planetary System.”
“The Wolf 1061 system is important because it is so close and that gives other opportunities to do follow-up studies to see if it does indeed have life,” Kane said.
But it’s not just Wolf 1061’s proximity to Earth that made it an attractive subject for Kane and his team. One of the three known planets in the system, a rocky planet called Wolf 1061c, is entirely within the habitable zone. With assistance from collaborators at Tennessee State University and in Geneva, Switzerland, they were able to measure the star around which the planet orbits to gain a clearer picture of whether life could exist there.
When scientists search for planets that could sustain life, they are basically looking for a planet with nearly identical properties to Earth, Kane said. Like Earth, the planet would have to exist in a sweet spot often referred to as the “Goldilocks zone” where conditions are just right for life. Simply put, the planet can’t be too close or too far from its parent star. A planet that’s too close would be too hot. If it’s too far, it may be too cold and any water would freeze, which is what happens on Mars, Kane added.
Conversely, when planets warm, a “runaway greenhouse effect” can occur, in which heat gets trapped in the atmosphere. Scientists believe this is what happened on Earth’s twin, Venus, which, according to NASA, once had oceans; because of its proximity to the sun, the planet became so hot that all the water evaporated. Since water vapor is extremely effective at trapping heat, the surface of the planet grew even hotter. The surface temperature on Venus now reaches a scalding 880 degrees Fahrenheit.
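To make the "too close, too far" logic concrete, a planet's equilibrium temperature can be estimated from its star's luminosity and its orbital distance. The sketch below is purely illustrative: the luminosity and distance plugged in are assumed, round-number values for a dim star and a close-in planet, not measurements from the Wolf 1061 study.

```python
# Illustrative sketch only: blackbody equilibrium temperature of a planet,
# the quantity around which habitable-zone estimates are built.
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.496e11            # astronomical unit, m

def equilibrium_temp(luminosity_solar, distance_au, albedo=0.3):
    """Equilibrium temperature in kelvin, ignoring any greenhouse warming."""
    flux = luminosity_solar * L_SUN / (4 * math.pi * (distance_au * AU) ** 2)
    return ((1 - albedo) * flux / (4 * SIGMA)) ** 0.25

# Assumed values: a red dwarf with ~1% of the Sun's luminosity and a planet at 0.08 AU.
print(round(equilibrium_temp(0.01, 0.08)))  # ~285 K before any greenhouse effect
```

A runaway greenhouse, as on Venus, can push the actual surface temperature far above this baseline estimate.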
Since Wolf 1061c is close to the inner edge of the habitable zone, meaning closer to the star, it could be that the planet has an atmosphere that’s more similar to Venus. “It’s close enough to the star where it’s looking suspiciously like a runaway greenhouse,” Kane said.
Kane and his team also observed that, unlike Earth, which experiences climatic changes such as ice ages because of slow variations in its orbit around the sun, Wolf 1061c’s orbit changes at a much faster rate, which could make the climate there quite chaotic. “It could cause the frequency of the planet freezing over or heating up to be quite severe,” Kane said.
These findings all raise the question: Is life possible on Wolf 1061c? One possibility is that the short timescales over which Wolf 1061c’s orbit changes could actually be enough to cool the planet off, Kane said. But fully understanding what’s happening on the planet’s surface will take more research.
In the coming years, new telescopes such as the James Webb Space Telescope, the successor to the Hubble Space Telescope, will launch, Kane said, and they will be able to detect atmospheric components of exoplanets and reveal what’s happening on their surfaces.
Making A.I. Systems that See the World as Humans Do
A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.
“The model performs in the 75th percentile for American adults, making it better than average,” said Northwestern Engineering’s Ken Forbus. “The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition.”
The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus’ laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner’s structure-mapping theory. (Gentner received the 2016 David E. Rumelhart Prize for her work on this theory.)
Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern’s McCormick School of Engineering, developed the model with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research was published online this month in the journal Psychological Review.
The ability to solve complex visual problems is one of the hallmarks of human intelligence. Developing artificial intelligence systems that have this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but could also help shrink the gap between computer and human cognition.
While Forbus and Lovett’s system can be used to model general visual problem-solving phenomena, they specifically tested it on Raven’s Progressive Matrices, a nonverbal standardized test that measures abstract reasoning. All of the test’s problems consist of a matrix with one image missing. The test taker is given six to eight choices with which to best complete the matrix. Forbus and Lovett’s computational model performed better than the average American.
“The Raven’s test is the best existing predictor of what psychologists call ‘fluid intelligence,’ or the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships,” said Lovett, now a researcher at the US Naval Research Laboratory. “Our results suggest that the ability to flexibly use relational representations, comparing and reinterpreting them, is important for fluid intelligence.”
The ability to use and understand sophisticated relational representations is a key to higher-order cognition. Relational representations connect entities and ideas such as “the clock is above the door” or “pressure differences cause water to flow.” These types of comparisons are crucial for making and understanding analogies, which humans use to solve problems, weigh moral dilemmas, and describe the world around them.
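As a toy illustration of what comparing relational representations can look like in code (a simplified sketch, not CogSketch or the published model; the scenes, relations, and mapping below are made up):

```python
# Toy sketch: two scenes described as relational tuples, with an analogy scored by how
# many relations survive translation through a hypothetical entity mapping.
base   = {("above", "clock", "door"), ("left_of", "window", "door")}
target = {("above", "lamp", "desk"), ("left_of", "plant", "desk")}

mapping = {"clock": "lamp", "door": "desk", "window": "plant"}  # assumed correspondence

def analogy_score(base, target, mapping):
    """Count base relations that still hold after mapping entities into the target scene."""
    translated = {(rel, mapping.get(a, a), mapping.get(b, b)) for rel, a, b in base}
    return len(translated & target)

print(analogy_score(base, target, mapping))  # 2: both relations align under this mapping
```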
“Most artificial intelligence research today concerning vision focuses on recognition, or labeling what is in a scene rather than reasoning about it,” Forbus said. “But recognition is only useful if it supports subsequent reasoning. Our research provides an important step toward understanding visual reasoning more broadly.”
Researchers find no evidence of an overall reduction in the world’s consumption of materials.
Are humans taking more resources from the Earth than the planet can safely produce? The answer lies partly in whether we can “dematerialize,” or reduce the amount of materials needed to produce goods and services.
While some scientists believe that the world can achieve significant dematerialization through improvements in technology, a new MIT-led study finds that technological advances alone will not bring about dematerialization and, ultimately, a sustainable world.
The researchers found that no matter how much more efficient and compact a product is made, consumers will only demand more of that product and in the long run increase the total amount of materials used in making that product.
Take, for instance, one of the world’s fastest-improving technologies: silicon-based semiconductors. Over the last few decades, technological improvements in the efficiency of semiconductors have greatly reduced the amount of material needed to make a single transistor. As a result, today’s smartphones, tablets, and computers are far more powerful and compact than computers built in the 1970s.
Nonetheless, the researchers find that consumers’ demand for silicon has outpaced the rate of its technological change, and that the world’s consumption of silicon has grown by 345 percent over the last four decades. As others have found, by 2005, there were more transistors used than printed text characters.
“Despite how fast technology is racing, there’s actually more silicon used today, because we now just put more stuff on, like movies, and photos, and things we couldn’t even think of 20 years ago,” says Christopher Magee, a professor of the practice of engineering systems in MIT’s Institute for Data, Systems, and Society.
“So we’re still using a little more material all the time.”
The researchers found similar trends in 56 other materials, goods, and services, from basic resources such as aluminum and formaldehyde to hardware and energy technologies such as hard disk drives, transistors, wind energy, and photovoltaics. In all cases, they found no evidence of dematerialization, or an overall reduction in their use, despite technological improvements to their performance.
“There is a techno-optimist’s position that says technological change will fix the environment,” Magee observes. “This says, probably not.”
Magee and his co-author, Tessaleno Devezas, a professor at the University of Beira Interior, in Portugal, published their findings recently in the journal Technological Forecasting and Social Change.
Tracking a rebound
In their research, Magee and Devezas examined whether the world’s use of materials has been swayed by an effect known as Jevons’ Paradox. In 1865, the English economist William Stanley Jevons observed that as improvements to coal-fired steam engines reduced the price of coal, England’s consumption of coal actually increased.
While experts believed technological improvements would reduce coal consumption, Jevons countered the opposite was true: Improving coal-fired power’s efficiency would only increase consumer demand for electricity and further deplete coal reserves.
Magee and Devezas looked to see whether Jevons’ Paradox, and consumer demand in general, has prevented dematerialization of today’s goods and services. They sought to identify a general relationship between dematerialization, technological change, and Jevons’ Paradox — also referred to as a rebound effect.
The team developed a simple model, or equation, to calculate whether dematerialization is taking place for a given product. The model considers a number of variables, including population and economic growth, a product’s yearly increase in technological performance, and demand elasticity — the degree to which demand for a product varies with its price.
Not surprisingly, the researchers’ model indicates that dematerialization is more likely when demand elasticity for a product is relatively low and the rate of its technological improvement is high. But when they applied the equation to common goods and services used today, they found that demand elasticity and technological change worked against each other — the better a product was made to perform, the more consumers wanted it.
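A minimal, illustrative version of that trade-off (an assumed toy model, not the equation published by Magee and Devezas) can be written as a single balance between efficiency gains and the demand they induce:

```python
# Toy model: approximate yearly change in total material use when efficiency gains
# lower the effective cost of a product and demand rebounds in response.
def material_use_growth(tech_rate, demand_elasticity, economy_rate):
    """
    tech_rate: yearly fractional improvement in performance per unit of material
    demand_elasticity: strength of the demand response to that improvement
    economy_rate: combined population and per-capita income growth
    Returns the approximate yearly fractional change in total material consumed.
    """
    rebound = demand_elasticity * tech_rate       # extra demand induced by improvement
    return economy_rate + rebound - tech_rate     # net change in material throughput

# Dematerialization requires a negative result; that is rare when elasticity is high.
print(material_use_growth(tech_rate=0.10, demand_elasticity=1.2, economy_rate=0.03))  # +0.05
print(material_use_growth(tech_rate=0.10, demand_elasticity=0.3, economy_rate=0.03))  # -0.04
```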
“It seems we haven’t seen a saturation in demand,” Magee says. “People haven’t said, ‘That’s enough,’ at least in anything that we can get data to test for.”
A growing appetite
Magee and Devezas gathered data for 57 common goods and services, including widely used chemical components such as ammonia, formaldehyde, polyester fiber, and styrene, along with hardware and energy technologies such as transistors, laser diodes, crude oil, photovoltaics, and wind energy. They worked the data for each product into their equation, and, despite seeing technological improvements in almost all cases, they failed to find a single case in which dematerialization — an overall reduction in materials — was taking place.
In follow-up work, the researchers were eventually able to identify six cases in which an absolute decline in materials usage has occurred. However, these cases mostly include toxic chemicals such as asbestos and thallium, whose dematerialization was due not to technological advances, but to government intervention.
There was one other case in which researchers observed dematerialization: wool. The material’s usage has significantly fallen, due to innovations in synthetic alternatives, such as nylon and polyester fabrics. In this case, Magee argues that substitution, and not dematerialization, has occurred. In other words, wool has simply been replaced by another material to fill the same function.
So what will it take to reduce our materials consumption and achieve a sustainable world?
“What it’s going to take is much more difficult than just letting technological change do it,” Magee says. “Social and cultural change, people talking to each other, cooperating, might do it. That’s not the way we’re going right now, but that doesn’t mean we can’t do it.”
However, others are more hopeful that technology will bring about sustainability, albeit at significant cost.
“[Technology] will get us to a sustainable world — it has to,” says J. Doyne Farmer, a professor of mathematics at the University of Oxford who was not involved in the research. “I say this not only because we need it, but because there is only so much we can suck out of the Earth, and eventually we will be forced into a sustainable world, one way or another. The question is whether we can do that without great pain. Magee’s paper shows that we need to expect more pain than some of us thought.”
Mobile phone, computer and TV displays all use very expensive colour filters and other components which cannot be easily recycled. German and Spanish scientists have designed a new screen that is cheaper and more environmentally friendly because it uses a hybrid material: its luminescent proteins can serve both in backlighting systems and in colour filters made using a 3D printing technique.
The success of liquid crystal displays (LCDs), used in many portable devices such as telephones, laptops and TVs, is largely due to their use of inorganic white light-emitting diodes (LEDs), which provide exceptional image quality and are energy-efficient.
However, these displays have several drawbacks: the high cost of the colour filters, limited brightness and contrast levels, and the difficulty of recycling the materials used to make them.
In response to these problems, a team of scientists at the University of Erlangen-Nurnberg (Germany), led by the Spanish researcher Rubén D. Costa, has designed a new display made of natural components: luminescent proteins, “which will enable, in the not too distant future, the manufacturing of these devices to be done ecologically and at a low cost”.
The researchers use these proteins in two parts of the display. The first is the display’s backlighting: a Bio-LED that produces white light from differently coloured luminescent proteins. As well as representing a new design for display backlighting, this technology, details of which were published last year, can be used as a substitute for the inorganic phosphors – rare and expensive materials – used in traditional LED displays.
“The proteins have a photoluminescence quantum yield of more than 75%, so high efficiency is guaranteed,” says Costa. “In addition, they have a narrow emission bandwidth (30-50 nm), ensuring high colour quality, and degradation does not produce significant colour changes.”
Protein colour filter
Details of another protein component to the display, the colour filter, have been published by the scientists in the journal ‘Advanced Functional Materials’. Here, the proteins are stored in a polymeric matrix with micrometric resolution using a 3D printing technique, which enables them to maintain their luminescent properties and optimum stability.
“This colour filter meets the requirements necessary to improve the displays currently in use in terms of contrast and brightness, within the quality standards demanded across the market,” says Costa. “This new material will allow for the development of energy-efficient Bio-displays for TVs and mobile telephones, with low production costs, high image quality and ecological sustainability. Also, these filters are not rigid, allowing them to be used in devices which are flexible and light.”
Researchers at the University of Bath suggest developments in 3D printing techniques could open the door to the advancement of membrane capabilities.
This work is part of the University’s Centre for Advanced Separations Engineering (CASE) and is the first time the properties of the different 3D printing techniques available for membrane fabrication have been assessed.
Wide ranging applications
Membranes are semi-permeable, selective barriers that separate the molecules in a gas or liquid mixture into two streams; a key example is the separation of salt from water for desalination using reverse osmosis membranes.
3D printing, otherwise known as Additive Manufacturing, has the ability to create almost any geometrically complex shape or feature in a range of materials across different scales. It has applications in various areas including medicine, art, manufacturing and engineering. However, its use in separation membrane engineering is relatively new.
Membranes are currently restricted mainly to tubular/hollow fibre and flat surface configurations due to the limitations of current manufacturing processes. As a result, the precision with which present membranes can achieve certain separations is limited.
Innovative, more accurate membranes
3D printing offers novel preparation routes that can produce membranes of different shapes, types and designs, which can be more precisely designed, fabricated and controlled than with any other membrane fabrication method currently available.
The paper, which evaluates existing knowledge of the advantages and drawbacks of different 3D printing methods as well as the potential developments of membrane fabrication, identifies a bright future in which 3D printing will enable innovative and far more accurate membranes.
These potential increased capabilities could have significant implications for a number of key industries, including the water industry. New membranes with designer pores and surface shapes that enhance micro-mixing and shear flow across the membrane surface could be used to reduce the energy and down-time associated with cleaning blockages and fouling of the membranes.
Director of the Centre for Advanced Separations Engineering at the University of Bath, Dr Darrell Patterson, commented: “This review is the first to explore the possibility and challenges of using 3D printing for producing separation membranes.
“Although 3D printing technology is not quite well enough developed to yet produce large scale membranes that will be cost competitive with existing products, this work does signal what the future possibilities are with 3D printing, to produce membranes beyond that which are currently available, including controlled complex pore structures, integrated surface patterns and membranes based on nature.”
Lower energy, more sustainable molecular separations
Up to 15 per cent of the energy used globally goes into the separation and purification of industrial products such as gases, fine chemicals and fresh water. Separation processes also account for 40 to 70 per cent of industrial capital and operating costs. Membrane technology potentially offers lower-energy, more sustainable molecular separations that can be applied to a wide range of gas and liquid separations. It is therefore a key technology that could help decrease the carbon footprint and costs within industry.
It comes down to privacy — biomedical research can’t proceed without human genomic data sharing, and genomic data sharing can’t proceed without some reasonable level of assurance that de-identified data from patients and other research participants will stay de-identified after they’re released for research.
Data use agreements that carry penalties for attempted re-identification of participants may be a deterrent, but they’re hardly a guarantee of privacy. Genomic data can be partially suppressed as they’re released, addressing vulnerabilities and rendering individual records unrecognizable, but suppression quickly spoils a data set’s scientific usefulness.
A new study from Vanderbilt University presents an unorthodox approach to re-identification risk, showing how optimal trade-offs between risk and scientific utility can be struck as genomic data are released for research.
The study appears in the American Journal of Human Genetics.
Doctoral candidate Zhiyu Wan, Bradley Malin, Ph.D., and colleagues draw on game theory to simulate the behavior of would-be data privacy adversaries, and show how marrying data use agreements with a more sensitive, scalpel-like data suppression policy can provide greater discretion and control as data are released. Their framework can be used to suppress just enough genomic data to persuade would-be snoops that their best privacy attacks will be unprofitable.
“Experts in the privacy field are prone to assume the worst-case scenario, an attacker with unlimited capability and no aversion to financial losses. But that may not happen in the real world, so you would tend to overestimate the risk and not share anything,” Wan said. “We developed an approach that gives a better estimate of the risk.”
Malin agrees that failure to come to grips with real-world risk scenarios could stifle genomic data sharing.
“Historically, people have argued that it’s too difficult to represent privacy adversaries. But the game theoretic perspective says you really just have to represent all the ways people can interact with each other around the release of data, and if you can do that, then you’re going to see the solution. You’re doing a simulation of what happens in the real world, and the question just becomes whether you’ve represented the rules of the game correctly,” said Malin, associate professor of Biomedical Informatics, Biostatistics and Computer Science.
To date, no one has faced prosecution for attacking the privacy of de-identified genomic data. Privacy experts nevertheless assume a contest of computerized algorithms as de-identified data are released, with privacy algorithms patrolling the ramparts while nefarious re-identification algorithms try to scale them.
Re-identification attacks have occurred, but according to earlier research by Malin and colleagues, the perpetrators appear to be motivated by curiosity and academic advancement rather than by criminal self-interest. They’re sitting at computers just down the hall, so to speak, overpowering your data set’s de-identification measures, then publishing an academic paper saying just how they did it. It’s all very bloodless and polite.
The new study is something different, more tough-minded, situating data sharing and privacy algorithms in the real world, where people go to jail or are fined for violations. Here the envisaged privacy adversary doesn’t wear elbow patches, lacks government backing and is simply out to make a buck through the illicit sale of private information.
De-identified genotype records are linked to de-identified medical, biometric and demographic information. In what the study refers to as “the game,” the attacker is assumed already to have some named genotype data in hand, and will attempt to match this identified data to de-identified genotype records as study data are released.
To bring these prospective attackers out of the shadows, the authors present a detailed case study involving release of genotype data from some 8,000 patients. They painstakingly assign illicit economic rewards for the criminal re-identification of research data. Based on costs for generating data, they also assign economic value to the scientific utility of study data.
On the way to estimating risk and the attacker’s costs, the authors estimate the likelihood that any named individual genotype record already held by the attacker is included in the de-identified data set slated for release; according to the authors, this key estimate is often neglected in re-identification risk assessments.
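In spirit (the simple form and the numbers below are assumptions for illustration, not figures from the case study), the attacker's side of the game reduces to an expected-profit calculation of the following kind:

```python
# Illustrative sketch: expected profit to an adversary of trying to re-identify one
# named genotype record against a released, de-identified data set.
def expected_attack_payoff(p_in_dataset, p_match_given_in, reward, attack_cost, penalty=0.0):
    """Expected profit of one linkage attempt; a rational attacker acts only if this is > 0."""
    p_success = p_in_dataset * p_match_given_in
    return p_success * reward - attack_cost - penalty

# With a low prior that the named individual is even in the released set,
# the attack can easily run at a loss, so a rational adversary stays out.
print(expected_attack_payoff(p_in_dataset=0.02, p_match_given_in=0.8,
                             reward=150.0, attack_cost=5.0))  # -2.6
```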
The authors measure the utility of a study’s genomic data in terms of the frequencies of genetic variants: for a given variant, the greater the difference between its frequency in the study group and its frequency in the general population (based on available reference data), the greater its scientific utility. This approach to utility triumphed recently when Wan and Malin won the 2016 iDASH Healthcare Privacy Protection Challenge. Their winning algorithm proved best at preserving the scientific utility of a genomic data set while thwarting a privacy attack.
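A minimal sketch of that utility idea, with made-up variant frequencies and an assumed absolute-difference form rather than the paper's exact metric:

```python
# Toy utility measure: a released variant is more scientifically useful the more its
# frequency in the study cohort departs from the reference-population frequency.
study_freqs     = {"rs001": 0.30, "rs002": 0.05, "rs003": 0.12}
reference_freqs = {"rs001": 0.10, "rs002": 0.04, "rs003": 0.12}

def data_utility(study, reference):
    """Sum of absolute frequency differences over all variants left in the release."""
    return sum(abs(study[v] - reference[v]) for v in study)

print(round(data_utility(study_freqs, reference_freqs), 2))  # 0.21; rs001 contributes most
```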
For any genomic data set, before any data are released in a game’s opening move, the sharer can use the game to compare various data sharing policies in terms of risk and utility. In the case study, the game theoretic policy provides the best payoff to the sharer, vastly outperforming a conventional data suppression policy and edging out a data use agreement policy.
No matter where parameters are set regarding illicit financial rewards or information that’s likely to be wielded by an attacker, the authors show that the game theoretic approach generally provides the best payoff to the sharer. They sketch how their approach could serve the release of data from other sources, including the federal government’s upcoming Precision Medicine Initiative.