Robots, long common in manufacturing, are increasingly spreading into office professions: humanoid or human-like robots already perform tasks in hotels, stores, and restaurants. They cook, serve, or advise customers, and they communicate like humans via speech, gestures, and sometimes even facial expressions. In Japan, China, and increasingly in the US, a veritable robot hype can be observed – in the US and Japan almost half, and in the UK around a third, of today’s occupations are at risk of being replaced by robots.
This trend is on the verge of reaching Germany, driven by the prospect of dramatically reducing personnel costs. “But many companies are deploying robots without reflection, not knowing beforehand what these changes mean for employees, corporate culture, or customer relationships,” warns Professor Ruth Stock-Homburg of the Institute of Marketing & Human Resource Management, Department of Law and Economics, TU Darmstadt. These dramatic changes are happening even though employees and corporate culture are in fact among the top success drivers of today’s companies.
Reluctance against Robotic Team Members
“Whether deploying robots in office and service professions makes sense depends heavily on the characteristics of the task,” Stock-Homburg states. 82 % of respondents considered robots valuable support for daily, routine work tasks, but only two out of three respondents believe they would have fun working with robots, and around half believe robots are easy to use. When it comes to creativity and emotions in a work context, the perceptions are striking: more than 80 % of respondents believe that robots can express emotions, and more than 30 % even believe that robots are able to recognize feelings or be creative. These are surprisingly high figures given that, so far, both the creative and emotional behaviors of robots are largely programmed and do not emerge autonomously. Here, today’s technical knowledge and subjective perceptions of robots diverge significantly. In addition, many respondents anticipate that “artificial intelligence” will soon enable self-learning and autonomous behaviors in robots.
The extent to which today’s office workers would accept robots was investigated in a cultural comparison between Germany and the US: more than 60 % of the respondents in both countries can imagine being supported by a robot assistant. In this case, however, the robot is expected to perform rather repetitive, unpleasant tasks, such as filing and documentation, appointment booking, and messenger or research services.
Interestingly, 21 % of respondents would trust a robot more than a human counterpart, citing lower error rates, higher predictability, and consistency of behavior. However, the majority of respondents would not want a robot to express emotions at work: “Otherwise, I would turn that thing off,” was the tenor of the responses on this issue. Only one in three would accept a robot as a colleague on equal footing: “Robots can only make programmed decisions; their autonomy is very limited,” said one respondent. Respondents can imagine robots providing information in meetings, taking notes, or acting as a company database that delivers facts on demand, but the majority believe robots are unable to link complex issues or hold detailed conversations with employees.
As leaders, robots are almost taboo: only 15 % of American respondents and 8 % of German respondents would accept a humanoid robot boss. Why so few? “A robot has no empathy for my family situation or other concerns that radiate into the job,” one interviewee explains. “A machine cannot judge a man… and cannot serve as a role model,” argues another respondent. Those who can imagine a robo-boss cite lower error rates and less subjectivity as reasons. “Robots are just and less moody,” elaborates one respondent.
According to Professor Stock-Homburg, robotics will make many classic jobs expendable. “But new and more conceptual jobs for our future generations will automatically be created. Companies should identify these future jobs at an early stage and create new occupational fields before deploying robots.” The Darmstadt Future of Work Study (2016) also shows that companies that engage intensively with new occupational fields are more successful.
New Service Age
Will robots start a new service era? The answer according to the study series is “Definitely yes,” says Moritz Merkle, member of the research team in Darmstadt: “75 % of our respondents would accept services by a robot as a customer.” And in Merkle’s experimental series with around 300 participants, a humanoid reception robot scored nearly identically in terms of customer satisfaction and only slightly lower in terms of service quality than its human counterparts.
Most respondents can imagine service robots as receptionists at information desks, as cashiers in supermarkets and car rentals, at the counters of train stations, airports, or even banks, as well as in catering. However, more than 80 % of the respondents prefer personal contact with people for sensitive, personal services, such as complex financial consultations or psychological and medical care. “In the future, people will remain central – robots will initially stay a means to an end,” says Jasmine Plechatsch, CEO of Leap in Time and co-founder of the Future Innovation Lab.
Facts about the study series [email protected]:
· 2 experiments with 300 participants overall
· 2 surveys with more than 400 managers and employees in Germany and in the US
· 3 qualitative studies with 80 interviewees
Simple system can recognize sixty percent of human touches
A squeeze of the arm, a pat on the shoulder, or a slap in the face – touch is an important part of social interaction between people. Social touch, however, is a relatively unknown field when it comes to robots, even though robots operate with increasing frequency in society at large, rather than just in the controlled environment of a factory.
Merel Jung is conducting research at the University of Twente CTIT research institute into social touch interaction with robots. Using a relatively simple system – a mannequin’s arm with pressure sensors, connected to a computer – she has succeeded in getting it to recognize sixty percent of all touches. The research is published in the Journal on Multimodal User Interfaces.
Robots are becoming more and more social. A well-known example of a social robot is Paro, a robot seal that is used in care homes, where it has a calming effect on elderly residents and stimulates their senses. Positive results have been achieved with the robot for this target group, but we still have a long way to go before robots can correctly recognize, interpret, and respond to different types of social touch in the way that people can. It is a relatively little explored area in science, but one in which much could be achieved in the long term. Examples that come to mind are robots that assist children with autism in improving their social contacts, or robots that train medical students for real-life situations.
Merel Jung is therefore carrying out research at the University of Twente into social touch interaction between humans and robots. In order to enable a robot to respond in the correct manner to being touched, she has identified four different stages: the robot must perceive the touch, recognize it, interpret it, and then respond in the correct way. In this phase of her research, Jung focused on the first two stages – perceiving and recognizing.

With a relatively simple experiment, involving a mannequin’s arm fitted with 64 pressure sensors, she succeeded in distinguishing sixty percent of almost 8,000 touches (distributed over fourteen different types of touch at three levels of intensity). Sixty percent does not seem very high on the face of it, but it is a good figure if you bear in mind that there was no social context at all and that various touches are very similar to each other – the difference between grabbing and squeezing, for example, or between stroking roughly and rubbing gently. In addition, the people touching the mannequin’s arm had been given no instructions on how to ‘perform’ their touches, and the computer system was not able to ‘learn’ how the individual ‘touchers’ operated. In similar circumstances, people too would be unable to correctly recognize every single touch.

In her follow-up research, Jung is concentrating on how robots can interpret touch in a social context. The expectation is that, by interpreting context, robots will be better able to respond to touch correctly, bringing the touch robot one step closer to reality.
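The perceive-and-recognize pipeline can be sketched in a few lines. The following is a hypothetical illustration, not Jung's actual system: it generates synthetic pressure frames from a 64-sensor array for two invented touch types ("pat" and "poke") and labels them with a simple nearest-centroid rule. All names, touch types, and parameters here are assumptions made for the example.

```python
# Hypothetical sketch: classifying touch gestures from a 64-sensor
# pressure strip with a nearest-centroid classifier (not Jung's code).
import random

random.seed(0)
NUM_SENSORS = 64

def synth_touch(center, spread, intensity):
    """One synthetic pressure frame: a touch around a sensor index, with noise."""
    frame = [0.0] * NUM_SENSORS
    for i in range(NUM_SENSORS):
        dist = abs(i - center)
        if dist <= spread:
            frame[i] = intensity * (1 - dist / (spread + 1)) + random.gauss(0, 0.05)
    return frame

def make_dataset(n):
    """n examples each of two invented touch types: broad 'pat', narrow 'poke'."""
    data = []
    for _ in range(n):
        data.append((synth_touch(center=32, spread=20, intensity=0.5), "pat"))
        data.append((synth_touch(center=32, spread=3, intensity=1.0), "poke"))
    return data

def centroids(train):
    """Average the frames of each label into a per-class centroid."""
    sums, counts = {}, {}
    for frame, label in train:
        acc = sums.setdefault(label, [0.0] * NUM_SENSORS)
        for i, v in enumerate(frame):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(frame, cents):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda lab: dist2(frame, cents[lab]))

train, test = make_dataset(50), make_dataset(20)
cents = centroids(train)
accuracy = sum(classify(f, cents) == lab for f, lab in test) / len(test)
print(f"accuracy: {accuracy:.2f}")
```

With only two well-separated synthetic classes the toy classifier does far better than the sixty percent reported in the study; the real task, with fourteen similar touch types, three intensities, and uninstructed human touchers, is much harder.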
Learn more: First Steps Towards The Touch Robot
When roboticists create behaviors for teams of robots, they first build algorithms that focus on the intended task. Then they wrap safety behaviors around those primary algorithms to keep the machines from running into each other. Each robot is essentially surrounded by an invisible bubble that other robots must keep out of. As long as nothing touches the bubble, the robots move around without any issues. But the bubbles themselves are where the problems begin.
“When you have too many robots together, they get so focused on not colliding with each other that they eventually just stop moving,” said Georgia Tech roboticist Magnus Egerstedt, director of Georgia Tech’s Institute of Robotics and Intelligent Machines. “Their safety behaviors take over and the robots freeze. It’s impossible for them to go anywhere because any movement would cause their bubbles to pop.”
Egerstedt has created a solution. His team’s new algorithms allow any number of robots to move within inches of each other, without colliding, to complete their task — swapping locations on his lab floor. They are the first researchers to create such minimally invasive safety algorithms.
In technical speak, the bots are using a set of safe states and barrier certificates to ensure each stays in its own safe set throughout the entire maneuver.
“In everyday speak, we’ve shrunk the size of each robot’s bubble to make it as small as possible,” said Egerstedt. “Our system allows the robots to make the minimum amount of changes to their original behaviors in order to accomplish the task and not smack into each other.”
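The barrier-certificate idea can be sketched as a safety filter that minimally modifies a robot's nominal velocity. The code below is a simplified, hypothetical illustration, not the Georgia Tech implementation, which solves a quadratic program over all robots at once: it handles a single pair of planar robots, and the names `d_min` and `gamma` are assumed parameters.

```python
# Simplified sketch of a control-barrier-function safety filter (hypothetical;
# the real system optimizes over the whole team). For robots i and j, the
# barrier h = ||p_i - p_j||^2 - d_min^2 must satisfy  dh/dt >= -gamma * h,
# so h can approach zero but never go negative (no collision).

def safety_filter(p_i, p_j, v_i, v_j, d_min=0.1, gamma=1.0):
    """Minimally adjust robot i's velocity so the pairwise barrier holds."""
    dx = [p_i[0] - p_j[0], p_i[1] - p_j[1]]      # relative position
    h = dx[0] ** 2 + dx[1] ** 2 - d_min ** 2     # barrier value
    dv = [v_i[0] - v_j[0], v_i[1] - v_j[1]]      # relative velocity
    hdot = 2 * (dx[0] * dv[0] + dx[1] * dv[1])   # dh/dt
    slack = hdot + gamma * h
    if slack >= 0:
        return v_i  # nominal velocity already satisfies the constraint
    # Add the smallest multiple of dx to v_i that restores slack = 0:
    # hdot increases by 2 * c * ||dx||^2 when c * dx is added to v_i.
    norm2 = dx[0] ** 2 + dx[1] ** 2
    c = -slack / (2 * norm2)
    return [v_i[0] + c * dx[0], v_i[1] + c * dx[1]]

# Far apart: the filter leaves the velocity untouched.
print(safety_filter([0.0, 0.0], [10.0, 0.0], [1.0, 0.0], [0.0, 0.0]))
# Closing fast at close range: the filter brakes robot i along the contact line.
print(safety_filter([0.0, 0.0], [0.15, 0.0], [1.0, 0.0], [-1.0, 0.0]))
```

This captures the "minimally invasive" property in the quote: when the nominal behavior is already safe, the filter changes nothing, and otherwise it perturbs the velocity only along the direction between the two robots.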
In a demo with four robots, the lab’s machines approach from four different areas, meet in the middle, circle counterclockwise within inches of each other, then fan out into opposite directions. In another demonstration, eight robots perform the same task, this time circling clockwise before dispersing. Instead of keeping their distance and taking the long way around their neighbors, the robots move very independently wherever they wish.
Avoiding collisions isn’t anything new in robotics. And Google’s self-driving cars are almost crash-free.
“But we haven’t seen thousands of autonomous cars on the road together yet,” Egerstedt said.
“Robots are very conservative — they want to make sure they’re safe. You couldn’t pack the interstate with self-driving cars with today’s technology.”
Egerstedt also said that algorithms similar to these could be used for the next generation of air traffic control. Instead of people directing the flow, the aircraft themselves would be given authority within the airspace.
“They’ll have to be safer if we plan to pack the airspace more densely.”
Swarms could one day search the depths of fresh and saltwater
Researchers at Case Western Reserve University have combined tissues from a sea slug with flexible 3-D printed components to build “biohybrid” robots that crawl like sea turtles on the beach.
A muscle from the slug’s mouth provides the movement, which is currently controlled by an external electrical field. However, future iterations of the device will include ganglia, bundles of neurons and nerves that normally conduct signals to the muscle as the slug feeds, as an organic controller.
The researchers also manipulated collagen from the slug’s skin to build an organic scaffold to be tested in new versions of the robot.
In the future, swarms of biohybrid robots could be released for such tasks as locating the source of a toxic leak in a pond that would send animals fleeing, the scientists say. Or they could search the ocean floor for a black box flight data recorder, a potentially long process that could leave today’s robots stranded with dead batteries.
“We’re building a living machine—a biohybrid robot that’s not completely organic—yet,” said Victoria Webster, a PhD student who is leading the research. Webster will discuss mining the sea slug for materials and constructing the hybrid, which is a little under 2 inches long, at the Living Machines conference in Edinburgh, Scotland, this week.
ASU researcher creates system to control robots with the brain
A researcher at Arizona State University has discovered how to control multiple robotic drones using the human brain.
A controller wears a skull cap outfitted with 128 electrodes wired to a computer. The device records electrical brain activity. If the controller moves a hand or thinks of something, certain areas light up.
“I can see that activity from outside,” said Panagiotis Artemiadis (pictured above), director of the Human-Oriented Robotics and Control Lab and an assistant professor of mechanical and aerospace engineering in the School for Engineering of Matter, Transport and Energy in the Ira A. Fulton Schools of Engineering. “Our goal is to decode that activity to control variables for the robots.”
If the user is thinking about decreasing cohesion between the drones — spreading them out, in other words — “we know what part of the brain controls that thought,” Artemiadis said.
A wireless system sends the thought to the robots. “We have a motion-capture system that knows where the quads are, and we change their distance, and that’s it,” he said.
Up to four small robots, some of which fly, can be controlled with brain interfaces. Joysticks don’t work, because they can only control one craft at a time.
Duke University engineers and computer scientists develop a new computer processor specially designed for robotic motion planning
Once they’ve mastered the skills of toddlerhood, humans are pretty good at what roboticists call “motion planning” — reaching around obstacles to precisely pick up a soda in a crowded fridge, or slipping their hands around a screen to connect an unseen cable.
But for robots with multi-jointed arms, motion planning is a hard problem that requires time-consuming computation. Simply picking an object up in an environment that has not been pre-engineered for the robot may require several seconds of computation.
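The computational burden can be illustrated with a toy example. The sketch below is a hypothetical illustration, not the Duke team's algorithm: it plans a path with breadth-first search on a small 2-D grid, where even this tiny workspace forces the planner to explore many states. Real arms plan in high-dimensional joint space, where the number of states grows exponentially, which is the computation the specialized processor accelerates.

```python
# Toy illustration of why motion planning is expensive (not the Duke method):
# breadth-first search over a small occupancy grid. Real planners face the
# same search problem in a vastly larger configuration space.
from collections import deque

def plan(grid, start, goal):
    """Shortest obstacle-free path length between two cells, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), d = frontier.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), d + 1))
    return None  # goal unreachable

# A 5x5 workspace with two walls (1 = obstacle); the robot must zigzag.
grid = [
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
]
print(plan(grid, (0, 0), (4, 4)))  # 16 steps instead of the direct 8
```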
Duke University researchers have introduced a specially-designed computer processor for motion planning that can plan up to 10,000 times faster than existing approaches while consuming a small fraction of the power. The new processor is fast enough to plan and operate in real time, and power-efficient enough to be used in large-scale manufacturing environments with thousands of robots.
The company lays out five unsolved challenges that need to be addressed if smart machines such as domestic robots are to be safe.
Could machines become so intelligent and powerful they pose a threat to human life, or even humanity as a whole?
It’s a question that has become fashionable in some parts of Silicon Valley in recent years, despite being more or less irreconcilable with the simple robots and glitchy virtual assistants of today (see “AI Doomsayer Says His Ideas Are Catching On”). Some experts in artificial intelligence believe speculation about the dangers of future, super-intelligent software is harming the field.
Now Google, a company heavily invested in artificial intelligence, is trying to carve out a middle way. A new paper released today describes five problems that researchers should investigate to help make future smart software safer. In a blog post on the paper, Google researcher Chris Olah says they show how the debate over AI safety can be made more concrete and productive.
“Most previous discussion has been very hypothetical and speculative,” he writes. “We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”
From the kitchen to the metro, new robotic systems designed to assist the elderly, people with disabilities
Many new robots look less like the metal humanoids of pop culture and more like high-tech extensions of ourselves and our capabilities.
In the same way eyeglasses, wheelchairs, pacemakers and other items enable people to see and move more easily in the world, so will many cutting-edge robotic systems. Their aim is to help people be better, stronger and faster. Further, due to recent advances, most are far less expensive than the Six Million Dollar Man.
Greater access to assistive technologies is critical as the median age of the U.S. population rises. Already, there is an enormous need for such tools.
“The number of people with partial impairments is very large and continues to grow,” says Conor Walsh, a roboticist at Harvard University who is developing soft robotics technologies. “For example, these include people who are aging or have suffered a stroke. Overall, about 10 percent of individuals living in the U.S. have difficulty walking. That’s a tremendous problem when you think about it.”
Walsh and other researchers funded by the National Science Foundation (NSF) are working in labs across the country to ensure these technologies not only exist, but are reliable, durable, comfortable and personalized to users.
Can a robot help Germany integrate its influx of migrants? A new research project thinks the answer might be yes.
More than 1 million refugees reached Germany last year. Children represent around 25 percent of the refugees and migrants arriving in Europe, according to the International Organization for Migration.
That’s where “Nao” comes in. Researchers at Germany’s Bielefeld University are testing whether the high-tech, wide-eyed robot can help teach migrant children language skills.
“Kids respond very positively to the small humanoid robot Nao that we are programming,” said Stefan Kopp, an artificial intelligence expert working on the project. “They are highly motivated and it’s fun for them to interact with the technology.”
One of the most useful things about robots is that they don’t feel pain. Because of this, we have no problem putting them to work in dangerous environments or having them perform tasks that range from slightly unpleasant to definitely fatal for a human. And yet, a pair of German researchers believes that, in some cases, feeling and reacting to pain might be a good capability for robots to have.
The researchers, from Leibniz University of Hannover, are developing an “artificial robot nervous system to teach robots how to feel pain” and quickly respond in order to avoid potential damage to their motors, gears, and electronics. They described the project last week at the IEEE International Conference on Robotics and Automation (ICRA) in Stockholm, Sweden, and we were there to ask them what in the name of Asimov they were thinking when they came up with this concept.
Why is it a good idea for robots to feel pain? The same reason why it’s a good idea for humans to feel pain, said Johannes Kuehn, one of the researchers. “Pain is a system that protects us,” he told us. “When we evade from the source of pain, it helps us not get hurt.” Humans that don’t have the ability to feel pain get injured far more often, because their bodies don’t instinctively react to things that hurt them.
Carnegie Mellon Algorithm Balances “Pick And Place” With “Push And Shove”
Clutter is a special challenge for robots, but new Carnegie Mellon University software is helping robots cope, whether they’re beating a path across the moon or grabbing a milk jug from the back of the refrigerator.
The software not only helped a robot deal efficiently with clutter, but it also surprisingly revealed the robot’s creativity in solving problems.
“It was exploiting sort of superhuman capabilities,” Siddhartha Srinivasa, associate professor of robotics, said of his lab’s two-armed mobile robot, the Home Exploring Robot Butler, or HERB. “The robot’s wrist has a 270-degree range, which led to behaviors we didn’t expect. Sometimes, we’re blinded by our own anthropomorphism.”
In one case, the robot used the crook of its arm to cradle an object to be moved.
“We never taught it that,” Srinivasa added.