Simple system can recognize sixty percent of human touches
A squeeze of the arm, a pat on the shoulder, or a slap in the face – touch is an important part of social interaction between people. Social touch, however, is a relatively unknown field when it comes to robots, even though robots operate with increasing frequency in society at large, rather than just in the controlled environment of a factory.
Merel Jung is conducting research at the University of Twente's CTIT research institute into social touch interaction with robots. Using a relatively simple system – a mannequin's arm fitted with pressure sensors and connected to a computer – she has succeeded in getting it to recognize sixty percent of all touches. The research has been published in the Journal on Multimodal User Interfaces.
Robots are becoming more and more social. A well-known example of a social robot is Paro, a robot seal that is used in care homes, where it has a calming effect on the elderly residents and stimulates their senses. Positive results have been achieved with the robot for this target group, but we still have a long way to go before robots can correctly recognize, interpret, and respond to different types of social touch in the way that people can. It is a relatively little-explored area of science, but one in which much could be achieved in the long term. Examples that come to mind are robots that assist children with autism in improving their social contacts, or robots that train medical students for real-life situations.
Merel Jung is therefore carrying out research at the University of Twente into social touch interaction between humans and robots. In order to enable a robot to respond correctly to being touched, she has identified four different stages: the robot must perceive the touch, recognize it, interpret it, and then respond in the right way. In this phase of her research, Jung focused on the first two stages – perceiving and recognizing. With a relatively simple experiment, involving a mannequin's arm fitted with 64 pressure sensors, she succeeded in distinguishing sixty percent of almost 8,000 touches, distributed over fourteen different types of touch at three levels of intensity.

Sixty percent does not seem very high on the face of it, but it is a good figure if you bear in mind that there was absolutely no social context and that various touches are very similar to each other: think of the difference between grabbing and squeezing, or between stroking roughly and rubbing gently. In addition, the people touching the mannequin's arm had been given no instructions on how to 'perform' their touches, and the computer system was not able to 'learn' how the individual 'touchers' operated. In similar circumstances, people would not be able to correctly recognize every single touch either.

In her follow-up research, which Jung is currently undertaking, she is concentrating on how robots can interpret touch in a social context. The expectation is that, by interpreting the context, robots will be better able to respond to touch correctly, bringing the touch robot one step closer to reality.
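To make the recognition task concrete, here is a minimal sketch of how frames from a 64-sensor pressure grid could be classified into gesture types. The dataset, features, and choice of classifier are illustrative assumptions, not Jung's actual pipeline, which the article does not detail.

```python
# Minimal sketch of touch-gesture classification from an 8x8 pressure grid
# (64 sensors). Data shapes, features, and classifier are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical dataset: 8,000 touch recordings, each summarized as one
# 8x8 frame of mean pressure, labeled with one of 14 gesture types.
X = rng.random((8000, 8 * 8))          # flattened pressure frames
y = rng.integers(0, 14, size=8000)     # gesture labels (stand-in data)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf")                # a common baseline classifier
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On real sensor data (rather than the random stand-in above), the reported sixty percent accuracy over 42 classes would be far above the chance level of roughly two percent.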
Learn more: First Steps Towards The Touch Robot
More than a decade ago, Ralph Hollis invented the ballbot, an elegantly simple robot whose tall, thin body glides atop a sphere slightly smaller than a bowling ball. The latest version, called SIMbot, has an equally elegant motor with just one moving part: the ball.
The only other active moving part of the robot is the body itself.
The spherical induction motor (SIM), invented by Hollis, a research professor in Carnegie Mellon University's Robotics Institute, and Masaaki Kumagai, a professor of engineering at Tohoku Gakuin University in Tagajo, Japan, eliminates the mechanical drive systems that were used on previous ballbots. Because of this extreme mechanical simplicity, SIMbot requires less routine maintenance and is less likely to suffer mechanical failures.
The new motor can move the ball in any direction using only electronic controls. These movements keep SIMbot’s body balanced atop the ball.
Early comparisons between SIMbot and a mechanically driven ballbot suggest the new robot is capable of similar speed — about 1.9 meters per second, or the equivalent of a very fast walk — but is not yet as efficient, said Greg Seyfarth, a former member of Hollis’ lab who recently completed his master’s degree in robotics.
Induction motors are nothing new; they use magnetic fields to induce electric current in the motor’s rotor, rather than through an electrical connection. What is new here is that the rotor is spherical and, thanks to some fancy math and advanced software, can move in any combination of three axes, giving it omnidirectional capability. In contrast to other attempts to build a SIM, the design by Hollis and Kumagai enables the ball to turn all the way around, not just move back and forth a few degrees.
Though Hollis said it is too soon to compare the cost of the experimental motor with conventional motors, he said long-range trends favor the technologies at its heart.
“This motor relies on a lot of electronics and software,” he explained. “Electronics and software are getting cheaper. Mechanical systems are not getting cheaper, or at least not as fast as electronics and software are.”
SIMbot’s mechanical simplicity is a significant advance for ballbots, a type of robot that Hollis maintains is ideally suited for working with people in human environments. Because the robot’s body dynamically balances atop the motor’s ball, a ballbot can be as tall as a person, but remain thin enough to move through doorways and in between furniture. This type of robot is inherently compliant, so people can simply push it out of the way when necessary. Ballbots also can perform tasks such as helping a person out of a chair, helping to carry parcels and physically guiding a person.
Until now, moving the ball to maintain the robot’s balance has relied on mechanical means. Hollis’ ballbots, for instance, have used an “inverse mouse ball” method, in which four motors actuate rollers that press against the ball so that it can move in any direction across a floor, while a fifth motor controls the yaw motion of the robot itself.
“But the belts that drive the rollers wear out and need to be replaced,” said Michael Shomin, a Ph.D. student in robotics. “And when the belts are replaced, the system needs to be recalibrated.” He said the new motor’s solid-state system would eliminate that time-consuming process.
The rotor of the spherical induction motor is a precisely machined hollow iron ball with a copper shell. Current is induced in the ball with six laminated steel stators, each with three-phase wire windings. The stators are positioned just next to the ball and are oriented slightly off vertical.
The six stators generate travelling magnetic waves in the ball, causing the ball to move in the direction of the wave. The direction of the magnetic waves can be steered by altering the currents in the stators.
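As a rough illustration of how three-phase windings produce a steerable travelling wave, the sketch below sums three sinusoidal contributions offset by 120 degrees in both space and time; flipping the sign of the temporal offsets reverses the wave's direction. The frequencies and geometry are generic textbook values, not SIMbot's actual parameters.

```python
# Sketch: how three-phase currents produce a travelling magnetic wave.
# Illustrative numbers only; not SIMbot's design values.
import numpy as np

def stator_field(x, t, direction=+1, k=2 * np.pi, omega=2 * np.pi * 50):
    """Sum of three windings spaced 1/3 wavelength apart, driven 120 degrees
    apart in time. The result is proportional to cos(k*x - direction*omega*t),
    i.e. a wave travelling in the chosen direction."""
    total = 0.0
    for m in range(3):
        phase = 2 * np.pi * m / 3
        # Winding m has spatial pattern cos(k*x - phase) and carries
        # current cos(omega*t - direction*phase).
        total += np.cos(k * x - phase) * np.cos(omega * t - direction * phase)
    return total  # equals 1.5 * cos(k*x - direction*omega*t)

# The wave crest starts at x = 0 and moves as time advances:
for t in (0.0, 0.001, 0.002):
    xs = np.linspace(0.0, 1.0, 1000)
    print(f"t={t:.3f}s  crest at x={xs[np.argmax(stator_field(xs, t))]:.3f}")
```

Steering the currents (here, the `direction` argument and, more generally, the amplitudes across the six stators) changes where the wave travels, which is what lets the controller move the ball in any direction purely electronically.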
Hollis and Kumagai jointly designed the motor. Ankit Bhatia, a Ph.D. student in robotics, and Olaf Sassnick, a visiting scientist from Salzburg University of Applied Sciences, adapted it for use in ballbots.
Getting rid of the mechanical drive eliminates a lot of the friction of previous ballbot models, but virtually all friction could be eliminated by eventually installing an air bearing, Hollis said. The robot body would then be separated from the motor ball with a cushion of air, rather than passive rollers.
“Even without optimizing the motor’s performance, SIMbot has demonstrated impressive performance,” Hollis said. “We expect SIMbot technology will make ballbots more accessible and more practical for wide adoption.”
The R2-D2 robot from Star Wars doesn't communicate in human language but is, nevertheless, capable of showing its intentions. For human-robot interaction, the robot does not have to be a true 'humanoid', provided that its signals are designed in the right way, says UT researcher Daphne Karreman.
A human being will only be capable of communicating with a robot if that robot has many human characteristics – that, at least, is the common idea. But mimicking natural movements and expressions is complicated, and some of our nonverbal communication is not really suitable for robots: wide arm gestures, for example. Humans prove to be capable of responding in a social way even to machines that look like machines. We have a natural tendency to translate machine movements and signals to the human world: two simple lenses on a machine can be enough to make people wave to it.
Knowing that, designing intuitive signals is challenging. In her research, Daphne Karreman focused on a robot functioning as a guide in a museum or a zoo. If the robot doesn't have arms, can it still point to something the visitors should look at? With speech, written language, a screen, the projection of images on a wall, and specific movements, the robot has quite a number of 'modalities' that humans don't have. Add to this playing with light and colour, and even a 'low-anthropomorphic' robot can be equipped with strong communication skills. That goes far beyond R2-D2, which communicates using beeps that first have to be translated. Karreman's PhD thesis is therefore entitled 'Beyond R2-D2'.
In the wild
Karreman analysed a huge amount of video data to see how humans respond to a robot. Until now, this type of research was mainly done in controlled lab situations, without other people present, or after the test person had been informed about what was going to happen. In this case, the robot was introduced 'in the wild' and in an unstructured way: people could simply come across it in the Real Alcázar Palace in Sevilla, for example, and decide for themselves whether they wanted to be guided by a robot. What makes them keep their distance? Do people recognize what the robot is capable of?
To analyse these video data, Karreman developed a tool called the Data Reduction Event Analysis Method (DREAM). The robot, called the Fun Robotic Outdoor Guide (FROG), has a screen, communicates using spoken language and light signals, and has a small pointer on its 'head'. All by itself, FROG recognizes whether people are interested in interaction and guidance. Thanks to the powerful DREAM tool, it is possible for the first time to analyse and classify human-robot interaction in a fast and reliable way. Unlike other methods, DREAM does not interpret all signals immediately; instead, it compares the annotations of several 'coders' to reach a reliable and reproducible result.
How many people show interest? Do they join the robot for the entire tour? Do they respond as expected? It is possible to evaluate this using questionnaires, but that places the robot in a special position: people primarily come to visit the expo or the zoo, not to meet a robot. Using the DREAM tool, spontaneous interaction becomes more visible, and robot behaviour can therefore be optimized.
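The coder comparison at the heart of such a method can be illustrated with a standard inter-rater agreement statistic. Whether DREAM uses exactly this measure is an assumption; the sketch below simply shows the kind of check involved, using Cohen's kappa on hypothetical event labels.

```python
# Sketch of comparing two human coders' annotations of the same footage
# before trusting them. Labels are hypothetical; Cohen's kappa is a
# standard agreement measure, not necessarily the one DREAM uses.
from sklearn.metrics import cohen_kappa_score

# Hypothetical event labels from two coders watching the same video.
coder_a = ["approach", "ignore", "follow", "approach", "leave", "follow"]
coder_b = ["approach", "ignore", "follow", "ignore",   "leave", "follow"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```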
Learn more: Robot Doesn't Have to Be a Human Look-Alike
Students at Bielefeld University of Applied Sciences have developed “Ourobot”.
Their project was supervised by a professor at the Bielefeld University of Applied Sciences and a CITEC researcher.
It looks like a bicycle chain, but has just twelve segments, each about the size of a fist and containing a motor. That pretty much describes the robot developed by four bachelor's students in Computer Engineering (Johann Schröder, Adrian Gucze, Simon Beyer and Matthäus Wiltzok) at Bielefeld University of Applied Sciences. The project was supervised by Professor Dr. Axel Schneider of the Bielefeld University of Applied Sciences and Jan Paskarbeit from Bielefeld University. A new video introduces the robot.
What distinguishes "Ourobot" from other comparable robots are the pressure sensors in its chain segments, which enable it to detect and overcome obstacles. The name of the robot, by the way, was inspired by the Ouroboros, an ancient Egyptian symbol depicting a serpent eating its own tail.

"At the moment Ourobot can only move straight ahead and cannot manage curves yet, but its sensors can detect obstacles, such as a book, and it can traverse them", explains Jan Paskarbeit. The control mechanism behind this, i.e. the way the individual chain links interact in order to roll over an obstacle, is a complex mathematical task; a simplified version of the underlying geometric constraint is sketched below. "It is remarkable how the students have solved this", says Axel Schneider. The professor is a co-opted member of CITEC and leads a large project at the Centre of Excellence developing "Hector", a walking robot.

"There is no concrete application for Ourobot at the moment. It is a feasibility study, meaning basic research", explains Schneider. This also makes the project exceptional, as bachelor's projects at the University of Applied Sciences are usually application-oriented. "However, this does not rule out fundamental research projects, quite the opposite; we integrate the students into research projects early on", adds Schneider.
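As promised above, here is a toy version of that constraint: a closed loop of twelve rigid segments is a polygon, so the exterior angles at its twelve joints must always sum to 360 degrees, and bending one joint to conform to an obstacle forces the others to compensate. This redistribution rule is an illustration of the constraint only, not the students' actual controller.

```python
# Toy model of the closed-chain constraint behind Ourobot-style rolling:
# twelve joints whose exterior angles must always sum to 360 degrees.
def redistribute(angles, joint, new_angle):
    """Set one joint's exterior angle and spread the difference evenly
    over the other joints so the loop stays closed (sum == 360)."""
    n = len(angles)
    delta = new_angle - angles[joint]
    out = [a - delta / (n - 1) for a in angles]
    out[joint] = new_angle
    assert abs(sum(out) - 360.0) < 1e-9  # loop closure preserved
    return out

# Start as a regular 12-gon (30 degrees per joint), then bend joint 0
# sharply, as if conforming to the edge of an obstacle such as a book.
angles = [30.0] * 12
angles = redistribute(angles, joint=0, new_angle=90.0)
print([round(a, 2) for a in angles])
```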
The collaboration with Bielefeld University continues with the master's degree in BioMechatronics, offered jointly by Bielefeld University and the Bielefeld University of Applied Sciences. Matthäus Wiltzok, who worked on the project, is now enrolled in this course. He and his colleagues have been infected by the "robot virus", and all are keen to continue working in this area.
A highlight for the team was attending the international robotics conference ICRA in Stockholm, which took place in May this year. The research paper on Ourobot was met with great interest there. There is still a long way to go, however, before the Ourobot project is concluded, as it is continually in development. The supervisors' vision is to take the present robot, which works in two dimensions, "into the third dimension", as Schneider explains. "We would like to develop a robot that actively changes its form, which can adapt to its environment like an amoeba, capable of stretching and shrinking again", says the professor. In this way, Ourobot could move through narrow terrain and overcome obstacles by means of different movements. The team has designed different variations of the new 3D version of Ourobot, similar to a ball or a snake. In this area, however, there is still much research to do.
New Ben-Gurion University of the Negev Robot has applications in medicine, homeland security and search and rescue
The first single actuator wave-like robot (SAW) has been developed by engineers at Ben-Gurion University of the Negev (BGU). The 3D-printed robot can move forward or backward in a wave-like motion, much as a worm does, by propagating a perpendicular wave along its body.
SAW can climb over obstacles or crawl through unstable terrain like sand, grass and gravel, reaching a top speed of 22.5 inches (57 centimeters) per second, five times faster than similar robots. Its minimalistic mechanical design produces an advancing sine wave with a large amplitude, using only a single motor with no internal straight spine. The breakthrough was published in Bioinspiration & Biomimetics in July.
“Researchers all over the world have been trying to create a wave movement for 90 years,” says Dr. David Zarrouk, of BGU’s Department of Mechanical Engineering, and head of the Bio-Inspired and Medical Robotics Lab.
“We succeeded by finding a simple, unique solution that enables the robot to be built in different sizes for different purposes. For example, it can be scaled up for search and rescue and maintenance, or miniaturized to a diameter of one centimeter or less to travel within the human body for medical purposes, such as imaging and biopsies of the digestive system.”
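A minimal sketch of the advancing sine wave behind SAW's motion may help: the body approximates a travelling wave y = A sin(kx - wt), and in an ideal no-slip case the robot advances at the wave speed w/k. The amplitude, wavelength, and frequency below are illustrative assumptions, not the published design values.

```python
# Sketch of the advancing sine wave that gives SAW its motion. A single
# motor advances the wave phase; here the dimensions are assumed values.
import numpy as np

A = 0.04                    # wave amplitude in meters (assumed)
wavelength = 0.16           # meters (assumed)
k = 2 * np.pi / wavelength
omega = 2 * np.pi * 4.0     # wave cycles per second (assumed)

def body_shape(t, n_links=20, length=0.32):
    """Return (x, y) points that the wave-shaped links follow at time t."""
    x = np.linspace(0.0, length, n_links)
    return x, A * np.sin(k * x - omega * t)

# In the ideal no-slip case the robot advances at the wave speed omega/k.
print("wave speed:", omega / k, "m/s")
```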
Robots today can perform space missions, solve a Rubik’s cube, sort hospital medication and even make pancakes. But most can’t manage the simple act of grasping a pencil and spinning it around to get a solid grip.
Intricate tasks that require dexterous in-hand manipulation — rolling, pivoting, bending, sensing friction and other things humans do effortlessly with our hands — have proved notoriously difficult for robots.
Now, a University of Washington team of computer scientists and engineers has built a robot hand that can not only perform dexterous manipulation but also learn from its own experience without needing humans to direct it.
Their latest results are detailed in a paper to be presented May 17 at the IEEE International Conference on Robotics and Automation.
“Hand manipulation is one of the hardest problems that roboticists have to solve,” said lead author Vikash Kumar, a UW doctoral student in computer science and engineering. “A lot of robots today have pretty capable arms but the hand is as simple as a suction cup or maybe a claw or a gripper.”
By contrast, the UW research team spent years custom building one of the most highly capable five-fingered robot hands in the world. Then they developed an accurate simulation model that enables a computer to analyze movements in real time. In their latest demonstration, they apply the model to the hardware and real-world tasks like rotating an elongated object.
With each attempt, the robot hand gets progressively more adept at spinning the tube, thanks to machine learning algorithms that help it model both the basic physics involved and plan which actions it should take to achieve the desired result.
This autonomous learning approach, developed by the UW Movement Control Laboratory, contrasts with robotics demonstrations that require people to program each individual movement of the robot's hand in order to complete a single task.
“Usually people look at a motion and try to determine what exactly needs to happen — the pinky needs to move that way, so we’ll put some rules in and try it and if something doesn’t work, oh the middle finger moved too much and the pen tilted, so we’ll try another rule,” said senior author and lab director Emo Todorov, UW associate professor of computer science and engineering and of applied mathematics.
“It’s almost like making an animated film — it looks real but there was an army of animators tweaking it,” Todorov said. “What we are using is a universal approach that enables the robot to learn from its own movements and requires no tweaking from us.”
Building a dexterous, five-fingered robot hand poses challenges, both in design and control. The first involved building a mechanical hand with enough speed, strength, responsiveness and flexibility to mimic the basic behaviors of a human hand.
The UW’s dexterous robot hand — which the team built at a cost of roughly $300,000 — uses a Shadow Hand skeleton actuated with a custom pneumatic system and can move faster than a human hand. It is too expensive for routine commercial or industrial use, but it allows the researchers to push core technologies and test innovative control strategies.
“There are a lot of chaotic things going on and collisions happening when you touch an object with different fingers, which is difficult for control algorithms to deal with,” said co-author Sergey Levine, UW assistant professor of computer science and engineering, who worked on the project as a postdoctoral fellow at the University of California, Berkeley. “The approach we took was quite different from a traditional controls approach.”
The team first developed algorithms that allowed a computer to model highly complex five-fingered behaviors and plan movements to achieve different outcomes — like typing on a keyboard or dropping and catching a stick — in simulation.
Most recently, the research team has transferred the models to work on the actual five-fingered hand hardware, which never proves to be exactly the same as a simulated scenario. As the robot hand performs different tasks, the system collects data from various sensors and motion capture cameras and employs machine learning algorithms to continually refine and develop more realistic models.
“It’s like sitting through a lesson, going home and doing your homework to understand things better and then coming back to school a little more intelligent the next day,” said Kumar.
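Kumar's analogy maps onto a generic model-based learning loop: run the current policy, fit a dynamics model to the logged sensor data, and replan against the refined model. The sketch below is a schematic of that loop with placeholder functions and dimensions; it is not the UW team's actual code or algorithm.

```python
# Schematic "collect data, refine the model, replan" loop. Function names,
# model class, and dimensions are placeholders, not the UW team's code.
import numpy as np
from sklearn.linear_model import Ridge

model = Ridge(alpha=1.0)   # stand-in dynamics model: predicts next state

def rollout(policy, horizon=100):
    """Placeholder for running the policy on the hardware and logging
    (state, action, next_state) triples from sensors and motion capture."""
    states = np.random.rand(horizon, 30)    # e.g. joint angles, pressures
    actions = np.random.rand(horizon, 24)   # e.g. pneumatic commands
    next_states = np.random.rand(horizon, 30)
    return states, actions, next_states

def improve_policy(policy, model):
    """Placeholder for planning against the refined model (for example,
    trajectory optimization); returns an updated policy."""
    return policy

policy = None
dataset = []
for iteration in range(5):
    s, a, s_next = rollout(policy)              # "sitting through a lesson"
    dataset.append((np.hstack([s, a]), s_next))
    X = np.vstack([d[0] for d in dataset])
    Y = np.vstack([d[1] for d in dataset])
    model.fit(X, Y)                             # "doing your homework"
    policy = improve_policy(policy, model)      # back "more intelligent"
```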
So far, the team has demonstrated local learning with the hardware system — which means the hand can continue to improve at a discrete task that involves manipulating the same object in roughly the same way. Next steps include beginning to demonstrate global learning — which means the hand could figure out how to manipulate an unfamiliar object or a new scenario it hasn’t encountered before.
Eliseo Ferrante and colleagues evolved complex robot behaviors using artificial evolution and detailed robotics simulations.
Darwinian selection can be used to evolve robot controllers able to efficiently self-organize their tasks. Taking inspiration from the way in which ants organize their work and divide up tasks, Eliseo Ferrante and colleagues evolved complex robot behaviors using artificial evolution and detailed robotics simulations.
Just like social insects such as ants, bees or termites, teams of robots can display a self-organized division of labor in which the different robots automatically specialize in carrying out different subtasks within the group, according to new research published in PLOS Computational Biology.
The field of 'swarm robotics' aims to use teams of small robots to explore complex environments, such as the moon or other planets. However, designing controllers that allow the robots to effectively organize themselves is no easy task.
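As a toy illustration of the evolutionary approach, the sketch below evolves a team of response thresholds with a simple genetic algorithm whose fitness rewards an even split between two subtasks. The setup is an assumption for illustration, not Ferrante and colleagues' actual simulation.

```python
# Toy artificial evolution of division of labor. Each genome is a list of
# per-robot response thresholds; robots whose threshold is below a shared
# stimulus pick subtask A, the rest pick B. Fitness rewards an even split.
# This setup is illustrative only, not the study's actual model.
import random

TEAM, GENERATIONS, STIMULUS = 10, 50, 0.5

def fitness(genome):
    n_a = sum(1 for t in genome if t < STIMULUS)   # robots on subtask A
    return -abs(n_a - len(genome) / 2)             # best when split is even

population = [[random.random() for _ in range(TEAM)] for _ in range(20)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # truncation selection
    children = [[t + random.gauss(0, 0.05) for t in p] for p in parents]
    population = parents + children                # elitism plus mutants

best = max(population, key=fitness)
print("best team fitness:", fitness(best))
```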