Their theoretical work, tested in driving-simulator experiments, should help advance the development of safe semi-autonomous systems (SAS) such as self-driving cars. Such systems rely on human supervision and the occasional transfer of control between the human and the automated system, Zilberstein explains. With substantial support from the National Science Foundation and the auto industry, his lab is developing new approaches to SAS in which a person and a machine control the system collaboratively, each capitalizing on their distinct abilities.
“Self-driving cars are coming,” says Zilberstein, “but the world is fairly chaotic and not many autonomous systems can cope with that yet. My sense is that we’re pretty far from having fully autonomous systems in cars.” This is because artificial intelligence techniques for sensing and decision-making are still limited: no matter how much training and design go into them, there is no sufficiently accurate model of the real world to let such systems operate reliably on their own.
For example, he suggests, “Trains might be next as a candidate for autonomy, but even then, with a downed branch on the track during a storm, a person may be needed to judge how to proceed safely.”
The researcher says the example highlights a significant challenge that SAS research must address: transferring control quickly, safely, and smoothly between the system and the person supervising it. Most systems designed to date do not accomplish this. “Paradoxically,” says Zilberstein, “as we introduce more autonomy, people become less engaged with the operation of the system and it becomes harder for them to take over control.” In the paper presented today, to be published in the conference proceedings, the researchers establish precise requirements to ensure that controlling entities can act reliably.