The neural structure we use to store and process information in verbal working memory is more complex than previously understood, finds a new study by researchers at New York University. It shows that processing information in working memory involves two different networks in the brain rather than one—a discovery that has implications for the creation of artificial intelligence (AI) systems, such as speech translation tools.
“Our results show there are at least two brain networks that are active when we are manipulating speech and language information in our minds,” explains Bijan Pesaran, an associate professor at New York University’s Center for Neural Science and the senior author of the research.
The work appears in the journal Nature Neuroscience.
Past studies had emphasized how a single “Central Executive” oversaw manipulations of information stored in working memory. The distinction is an important one, Pesaran observes, because current AI systems that replicate human speech typically assume computations involved in verbal working memory are performed by a single neural network.
“Artificial intelligence is gradually becoming more human-like,” says Pesaran. “By better understanding intelligence in the human brain, we can suggest ways to improve AI systems. Our work indicates that AI systems with multiple working memory networks are needed.”
The paper’s first author was Greg Cogan, an NYU postdoctoral fellow at the time of the study and now a postdoctoral fellow at Duke University. Other co-authors were Orrin Devinsky, professor and director of the Comprehensive Epilepsy Center at NYU Langone Medical Center; Werner Doyle, an associate professor in NYU Langone’s Department of Neurosurgery; Dan Friedman, an associate professor in NYU Langone’s Department of Neurology; and Lucia Melloni, an assistant professor in NYU Langone’s Department of Neurology.
The study focused on a form of working memory that is critical for thinking, planning, and creative reasoning: holding in mind and transforming the information necessary for speech and language.
The researchers examined patients undergoing brain monitoring as treatment for drug-resistant epilepsy. Specifically, they decoded neural activity recorded from the surface of these patients’ brains as the patients listened to speech sounds and then spoke after a short delay. The task required the subjects to apply a rule provided by the researchers to transform the speech sounds they heard into different spoken utterances: on some trials the patients were told to repeat the sound they had heard, while on others they were instructed to listen to the sound and produce a different utterance.
The researchers decoded the neural activity in each patient’s brain as the patients applied the rule to convert what they heard into what they needed to say. The results revealed that manipulating information held in working memory involved the operation of two brain networks. One network encoded the rule that the patients were using to guide the utterances they made (the rule network). Surprisingly, however, the rule network did not encode the details of how the subjects converted what they heard into what they said. That process was handled by a second network (the transformation network), whose activity could be used to track, moment by moment, how the input (what was heard) was being converted into the output (what was spoken).
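As a rough illustration of this kind of decoding analysis – not the study’s actual pipeline, whose preprocessing and decoders are not described here – one can train a classifier at each time bin of (simulated) electrode recordings and ask when rule information becomes readable; all data below are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: 200 trials, 64 electrodes, 50 time bins.
# Labels encode which rule was applied on each trial
# (0 = "repeat what you heard", 1 = "produce a different utterance").
rng = np.random.default_rng(0)
n_trials, n_electrodes, n_bins = 200, 64, 50
X = rng.standard_normal((n_trials, n_electrodes, n_bins))
y = rng.integers(0, 2, size=n_trials)

# Inject a weak rule signal into a few electrodes mid-trial, standing
# in for the "rule network" present in the real recordings.
X[y == 1, :8, 20:30] += 0.5

# Decode the rule separately at each time bin: above-chance accuracy
# in a window indicates when rule information is present.
for t in range(0, n_bins, 10):
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    print(f"time bin {t:2d}: decoding accuracy = {acc:.2f}")
```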
Translating what you hear in one language to speak in another language involves applying a similar set of abstract rules. People with impairments of verbal working memory find it difficult to learn new languages. Modern intelligent machines also have trouble learning languages, the researchers add.
“One way we can enhance the development of more intelligent systems is with a fuller understanding of how the human brain and mind works,” notes Pesaran. “Diagnosing and treating working memory impairments in people involves psychological assessments. By analogy, machine psychology may one day be useful for diagnosing and treating impairments in the intelligence of our machines. This research examines a uniquely human form of intelligence, verbal working memory, and suggests new ways to make machines more intelligent.”
The elusive and complex components of creativity have been identified by computer experts at the University of Kent.
Dr Anna Jordanous, lecturer in the School of Computing, worked with language expert Dr Bill Keller (University of Sussex) to define the language people use when talking about creativity. With that knowledge, it becomes possible to make computer programs use this language too.
Dr Jordanous and Dr Keller looked at how people describe creativity in academic discussions across various disciplines – psychology, the arts, business, and computational creativity.
In an article entitled “Modelling Creativity: Identifying key components through a corpus-based approach,” published in PLOS ONE, they describe a unique approach to modelling how creative behaviour emerges, one based on the words people use to describe it. Computational creativity is a relatively new field of research into computer systems that exhibit creative behaviours.
Using language-analysis software, they identified words associated with creativity and grouped them into clusters, yielding 14 components of creativity. These clusters have been used to evaluate the creativity of computational systems and are expected to be a useful resource for other researchers in computational creativity, as well as forming a basis for the automated evaluation of creative systems.
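As a toy illustration of the corpus-based idea – not the authors’ actual pipeline – one could represent each creativity-related word by its distribution over documents and group the words with a standard clustering algorithm; the corpus below is an invented stand-in:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical stand-in corpus: snippets from academic discussions of
# creativity (the study used a far larger, real corpus).
corpus = [
    "creativity requires novelty and originality of ideas",
    "creative value emerges from social evaluation and domain knowledge",
    "spontaneity and unconscious thought drive creative intention",
    "novelty originality and surprise characterise creative products",
]

# Represent each word by the documents it occurs in, a crude proxy for
# the distributional similarity a corpus-based analysis relies on.
vectorizer = CountVectorizer()
doc_term = vectorizer.fit_transform(corpus)   # documents x words
word_vectors = doc_term.T.toarray()           # words x documents

# Group words into clusters; the study arrived at 14 components,
# but 3 suffices for this toy corpus.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(word_vectors)
for cluster_id in range(3):
    words = [w for w, c in zip(vectorizer.get_feature_names_out(),
                               km.labels_) if c == cluster_id]
    print(cluster_id, words)
```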
With the help of artificial intelligence, chemists from the University of Basel in Switzerland have computed the characteristics of about two million crystals made up of four chemical elements. The researchers were able to identify 90 previously unknown thermodynamically stable crystals that can be regarded as new materials. They report on their findings in the scientific journal Physical Review Letters.
Elpasolite is a glassy, transparent, shiny and soft mineral with a cubic crystal structure. First discovered in El Paso County (Colorado, USA), it can also be found in the Rocky Mountains, Virginia and the Apennines (Italy). In experimental databases, elpasolite is one of the most frequently found quaternary crystals (crystals made up of four chemical elements). Depending on its composition, it can be a metallic conductor, a semiconductor or an insulator, and may also emit light when exposed to radiation.
These characteristics make elpasolite an interesting candidate for use in scintillators – some aspects of which have already been demonstrated – and other applications. Its chemical complexity, however, means that it is practically impossible to use quantum mechanics to predict the properties of every theoretically viable combination of four elements in the elpasolite structure.
Machine learning aids statistical analysis
Thanks to modern artificial intelligence, Felix Faber, a doctoral student in Prof. Anatole von Lilienfeld’s group at the University of Basel’s Department of Chemistry, has now succeeded in solving this material design problem. First, using quantum mechanics, he generated predictions for thousands of elpasolite crystals with randomly determined chemical compositions. He then used the results to train statistical machine learning models (ML models). The improved algorithmic strategy achieved a predictive accuracy equivalent to that of standard quantum mechanical approaches.
ML models have the advantage of being several orders of magnitude quicker than corresponding quantum mechanical calculations. Within a day, the ML model was able to predict the formation energy – an indicator of chemical stability – of all two million elpasolite crystals that theoretically can be obtained from the main group elements of the periodic table. In contrast, performance of the calculations by quantum mechanical means would have taken a supercomputer more than 20 million hours.
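The workflow – fit a fast statistical model to a limited number of expensive quantum mechanical reference calculations, then screen millions of candidates with cheap predictions – might be sketched as follows, using kernel ridge regression on placeholder features and synthetic energies (the group’s actual descriptors and settings differ):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# Placeholder data: each crystal is represented by a feature vector
# derived from its four constituent elements (random numbers here);
# targets stand in for quantum mechanical formation energies.
rng = np.random.default_rng(42)
n_crystals, n_features = 2_000, 16
X = rng.standard_normal((n_crystals, n_features))
y = X @ rng.standard_normal(n_features) + 0.1 * rng.standard_normal(n_crystals)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on the "expensive" reference calculations...
model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=0.05)
model.fit(X_train, y_train)

# ...after which screening a new composition is a fast prediction,
# orders of magnitude cheaper than a quantum mechanical calculation.
mae = np.abs(model.predict(X_test) - y_test).mean()
print(f"test MAE: {mae:.3f} (arbitrary units)")
```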
Unknown materials with interesting characteristics
An analysis of the characteristics computed by the model offers new insights into this class of materials. The researchers were able to detect basic trends in formation energy and identify 90 previously unknown crystals that should be thermodynamically stable, according to quantum mechanical predictions.
On the basis of these predicted characteristics, elpasolite has been entered into the Materials Project materials database, which plays a key role in the Materials Genome Initiative. The initiative was launched by the US government in 2011 with the aim of using computational support to accelerate the discovery and experimental synthesis of interesting new materials.
Some of the newly discovered elpasolite crystals display exotic electronic characteristics and unusual compositions. “The combination of artificial intelligence, big data, quantum mechanics and supercomputing opens up promising new avenues for deepening our understanding of materials and discovering new ones that we would not consider if we relied solely on human intuition,” says study director von Lilienfeld.
This year-long exercise in scientific introspection yields a report meant to spur discussion about ‘how the fruits of an AI-dominated economy should be shared’
A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.
Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.
“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”
The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.
“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”
The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.
“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”
The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.
The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.
The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation, where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends, which touches on the likelihood of rapid changes in jobs and incomes.
“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.
“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”
The eight sections discuss:
- Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop, and create new patterns of life and leisure in cities.
- Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.
- Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.
- Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.
- Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.
- Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distribution could spread AI benefits to the underserved.
- Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.
- Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes, with many existing jobs lost and new ones created.
“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”
Grosz said she hopes the AI100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”
Researchers at Houston Methodist have developed artificial intelligence (AI) software that reliably interprets mammograms, assisting doctors with a quick and accurate prediction of breast cancer risk. According to a new study published in Cancer (early online Aug. 29), the software translates patient charts into diagnostic information at 30 times human speed and with 99 percent accuracy.
“This software intelligently reviews millions of records in a short amount of time, enabling us to determine breast cancer risk more efficiently using a patient’s mammogram. This has the potential to decrease unnecessary biopsies,” says Stephen T. Wong, Ph.D., P.E., chair of the Department of Systems Medicine and Bioengineering at Houston Methodist Research Institute.
The team led by Wong and Jenny C. Chang, M.D., director of the Houston Methodist Cancer Center used the AI software to evaluate mammograms and pathology reports of 500 breast cancer patients. The software scanned patient charts, collected diagnostic features and correlated mammogram findings with breast cancer subtype. Clinicians used results, like the expression of tumor proteins, to accurately predict each patient’s probability of breast cancer diagnosis.
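Houston Methodist’s software itself is not public, but the general idea – extract diagnostic features from free-text chart entries and relate them to an outcome – can be sketched with a simple bag-of-words classifier; the report snippets and labels below are invented, and none of this reflects the team’s actual implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical snippets of mammogram report text with labels
# (1 = biopsy-confirmed cancer, 0 = benign); real charts are far richer.
reports = [
    "spiculated mass in upper outer quadrant, BI-RADS 5",
    "scattered fibroglandular densities, no suspicious findings",
    "clustered pleomorphic calcifications, BI-RADS 4",
    "benign-appearing cyst, stable since prior exam",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a linear classifier: a crude stand-in
# for the feature extraction and risk correlation described above.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(reports, labels)

new_report = ["irregular spiculated mass with associated calcifications"]
print("estimated cancer probability:",
      model.predict_proba(new_report)[0, 1])
```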
In the United States, 12.1 million mammograms are performed annually, according to the Centers for Disease Control and Prevention (CDC). Fifty percent yield false-positive results, according to the American Cancer Society (ACS), meaning one in every two healthy women is told she may have cancer.
Currently, when mammograms fall into the suspicious category, a broad range of 3 to 95 percent cancer risk, patients are recommended for biopsies.
Over 1.6 million breast biopsies are performed annually nationwide, and about 20 percent are performed unnecessarily due to false-positive mammogram results in cancer-free breasts, estimates the ACS.
The Houston Methodist team hopes this artificial intelligence software will help physicians better define the percent risk requiring a biopsy, equipping doctors with a tool to decrease unnecessary breast biopsies.
Manual review of 50 charts took two clinicians 50 to 70 hours; at that pace, 500 charts would have taken roughly 500 to 700 hours. The AI reviewed all 500 charts in a few hours, saving over 500 physician hours.
“Accurate review of this many charts would be practically impossible without AI,” says Wong.
Artificial intelligence recently won out during simulated aerial combat against U.S. expert tacticians. Importantly, it did so using no more than the processing power available in a tiny, affordable computer (Raspberry Pi) that retails for as little as $35.
Artificial intelligence (AI) developed by a University of Cincinnati doctoral graduate was recently assessed by subject-matter expert and retired United States Air Force Colonel Gene Lee – who holds extensive aerial combat experience as an instructor and Air Battle Manager with considerable fighter aircraft expertise – in a high-fidelity air combat simulator.
The artificial intelligence, dubbed ALPHA, was the victor in that simulated scenario and, according to Lee, is “the most aggressive, responsive, dynamic and credible AI I’ve seen to date.” Not only was Lee unable to score a kill against ALPHA after repeated attempts; he was shot out of the air every time during protracted engagements in the simulator.
The Defense Advanced Research Projects Agency (DARPA) announced on Friday the launch of Data-Driven Discovery of Models (D3M), which aims to help non-experts bridge what it calls the “data-science expertise gap” by allowing artificial assistants to help people with machine learning. DARPA calls it a “virtual data scientist” assistant.
This software is doubly important because there’s a lack of data scientists right now and a greater demand than ever for more data-driven solutions. DARPA says experts project 2016 deficits of 140,000 to 190,000 data scientists worldwide, and increasing shortfalls in coming years.
For example, in order to construct a model of how different weather, school, location, and crime factors affect congestion for ride-sharing services in downtown Manhattan, a team of NYU students spent the equivalent of more than 90 months of work hours. DARPA sees problems like this all the time, and the D3M program will strive to drastically reduce the time and expertise needed to build such models in the future.
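A “virtual data scientist” in miniature might look like the following sketch – a bare-bones illustration of the idea, not DARPA’s system – which simply cross-validates a few candidate model families on the user’s data and keeps the best performer:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

# Stand-in dataset; a D3M-style assistant would take the user's own data.
X, y = load_diabetes(return_X_y=True)

# Automate one step a data scientist does by hand: try several model
# families and report which generalizes best under cross-validation.
candidates = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "knn": KNeighborsRegressor(n_neighbors=10),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("selected model:", best)
```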
New research explains why so many biological networks, including the human brain (a network of neurons), exhibit a hierarchical structure – a finding that could also improve attempts to create artificial intelligence.
The study, published in PLOS Computational Biology, demonstrates this by showing that the evolution of hierarchy – a simple system of ranking – in biological networks may arise because of the costs associated with network connections.
Like large businesses, many biological networks are hierarchically organised, including gene, protein, neural, and metabolic networks. This means they have separate units that can each be repeatedly divided into smaller and smaller subunits. For example, the human brain has separate areas for motor control and tactile processing, and each of these areas consists of sub-regions that govern different parts of the body.
But why do so many biological networks evolve to be hierarchical? The results of this paper suggest that hierarchy evolves not because it produces more efficient networks, but instead because hierarchically wired networks have fewer connections. This is because connections in biological networks are expensive – they have to be built, housed, maintained, etc. – and there is therefore an evolutionary pressure to reduce the number of connections.
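A toy version of the key ingredient – charging a fitness price per connection so that selection favours sparse wiring – can be sketched as below; the “task” here is an invented stand-in, and the actual study evolved networks on much richer problems and measured hierarchy directly:

```python
import random

# Toy illustration: fitness rewards task performance but charges a
# price per connection, so evolution favours sparse wiring.
N = 20                      # potential connections in the network
REQUIRED = {0, 3, 7, 12}    # connections the invented "task" needs
LAMBDA = 0.05               # cost per connection

def fitness(genome):
    present = {i for i, g in enumerate(genome) if g}
    performance = 1.0 if REQUIRED <= present else 0.0
    return performance - LAMBDA * sum(genome)

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(N)] for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]                    # truncation selection
    population = [mutate(random.choice(survivors)) for _ in range(100)]

# Evolution converges toward networks that keep only the useful links.
best = max(population, key=fitness)
print("connections kept:", sum(best), "fitness:", round(fitness(best), 3))
```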
In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings may also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.
The robot, called OceanOne, is powered by artificial intelligence and haptic feedback systems, giving human pilots an unprecedented ability to explore the depths of the oceans in high fidelity.
Oussama Khatib held his breath as he swam through the wreck of La Lune, 100 meters below the Mediterranean. The flagship of King Louis XIV sank here in 1664, 20 miles off the southern coast of France, and no human had touched the ruins – or the countless treasures and artifacts the ship once carried – in the centuries since.
With guidance from a team of skilled deep-sea archaeologists who had studied the site, Khatib, a professor of computer science at Stanford, spotted a grapefruit-size vase. He hovered precisely over the vase, reached out, felt its contours and weight, and stuck a finger inside to get a good grip. He swam over to a recovery basket, gently laid down the vase and shut the lid. Then he stood up and high-fived the dozen archaeologists and engineers who had been crowded around him.
This entire time Khatib had been sitting comfortably in a boat, using a set of joysticks to control OceanOne, a humanoid diving robot outfitted with human vision, haptic force feedback and an artificial brain – in essence, a virtual diver.
The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?
Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” – to be unveiled at the AAAI-16 Conference in Phoenix, Ariz. (Feb. 12 – 17). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.
“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behavior. It builds upon Riedl’s prior research – the Scheherazade system – which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet.
Scheherazade learns what a normal or “correct” plot graph looks like. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviors and punishes others during trial-and-error learning. In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of acting randomly or like the antagonist.
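A minimal sketch of that reward-shaping step, assuming the plot graph reduces to a single ordered list of acceptable events (the real Quixote system handles far richer graphs), might look like this:

```python
import random

# Toy reward signal derived from a "plot graph": an invented socially
# acceptable sequence for an errand, plus one antagonist-like action.
PLOT = ["enter_pharmacy", "wait_in_line", "pay", "leave"]
ACTIONS = PLOT + ["grab_and_run"]

def reward(step, action):
    if action == "grab_and_run":
        return -10.0                  # punished: antagonist-like behavior
    return 1.0 if action == PLOT[step] else -1.0

# Trial-and-error learning over positions in the plot sequence.
Q = {(s, a): 0.0 for s in range(len(PLOT)) for a in ACTIONS}
random.seed(1)
for episode in range(2000):
    for step in range(len(PLOT)):
        action = (random.choice(ACTIONS) if random.random() < 0.2
                  else max(ACTIONS, key=lambda a: Q[(step, a)]))
        Q[(step, action)] += 0.1 * (reward(step, action) - Q[(step, action)])

# The learned behavior follows the protagonist's sequence.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(len(PLOT))]
print("learned behavior:", policy)
```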
Making a computer that learns and remembers like a human brain is a daunting challenge.
The complex organ has 86 billion neurons and trillions of connections — or synapses — that can grow stronger or weaker over time. But now scientists report in ACS’ journal Nano Letters the development of a first-of-its-kind synthetic synapse that mimics the plasticity of the real thing, bringing us one step closer to human-like artificial intelligence.