People need no help doing violence to machines; reports of humans abusing machines have become a common occurrence.
But it turns out machines can make matters worse for us too. With insults, they can get under our skin and rattle us, making us behave irrationally – not that humans really need much help going off the rails.
A group of computer boffins from Carnegie Mellon University recently found that when a robot playing a game against a human opponent offers discouraging comments, the bot's words influence how the person performs.
In a paper titled "A Robot's Expressive Language Affects Human Strategy and Perceptions in a Competitive Game," distributed through arXiv, CMU researchers Aaron Roth, Samantha Reig, Umang Bhatt, Jonathan Shulgach, Tamara Amin, Afsaneh Doryab, Fei Fang, and Manuela Veloso explore how comments from a Pepper humanoid robot affected human opponents in a Stackelberg Security Game called The Guards and Treasures.
The game is designed to study bounded rationality: it provides a way to assess the rationality of decisions made with limited information.
The Guards and Treasures allows a player to select one gate to attack each round from a set of gates. The defender has placed a limited number of guards to protect the gates, and the attacker can see the probability that each gate is guarded. Attacking a guarded gate imposes a penalty, while attacking an unguarded gate earns a reward.
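The rational-play baseline in a game like this boils down to a simple expected-utility calculation. Here is a minimal sketch in Python; the coverage probabilities, rewards, and penalties are illustrative numbers, not values from the CMU study:

```python
# Sketch of the attacker's side of a Stackelberg Security Game
# such as The Guards and Treasures. All gate values below are
# made up for illustration; the actual study's payoffs differ.

def expected_utility(coverage, reward, penalty):
    """Attacker's expected payoff for one gate: with probability
    `coverage` a guard is present (penalty applies), otherwise
    the attack succeeds (reward applies)."""
    return coverage * penalty + (1 - coverage) * reward

gates = {
    "gate_1": {"coverage": 0.7, "reward": 5, "penalty": -4},
    "gate_2": {"coverage": 0.3, "reward": 3, "penalty": -2},
    "gate_3": {"coverage": 0.5, "reward": 8, "penalty": -6},
}

# A perfectly rational attacker picks the gate with the highest
# expected utility; how far a human's choices drift from this
# optimum is one way researchers quantify bounded rationality.
utilities = {name: expected_utility(**g) for name, g in gates.items()}
best = max(utilities, key=utilities.get)
print(best, utilities[best])  # → gate_2 1.5
```

Measuring how often players deviate from the utility-maximizing gate, before and after the robot's comments, is the kind of signal a study like this can use to detect less rational play.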
The researchers' study involved pitting 40 human participants against a Pepper robot, which would either encourage or discourage its human challengers using phrases generated by a natural language processing algorithm.
The robot would say things like "I have to say you are a great player," or "Over the course of the game your playing has become confused." And when it made negative comments, the human players performed worse. Hardly epic shade, but it does seem to have had an effect.
"[A] robot opponent that makes discouraging comments causes a human to play a game less rationally and to perceive the robot more negatively," the paper says.
Previous research has indicated as much in cooperative situations, and has also shown that robot gestures and posture affect interaction with people, but the boffins say the impact of robotic verbal interaction in a competitive scenario deserves more exploration.
They also note that while some prior research shows threatening behavior from a robot makes people more attentive, their data shows that verbal discouragement from a robot makes people irrational.
In an email to The Register, Jeff Bigham, associate professor at Carnegie Mellon's Human-Computer Interaction Institute, said there are a variety of challenges in developing robots for people to use.
"Some are pretty similar to regular HCI challenges," said Bigham. "The fact that the computer system is embodied ends up mattering quite a lot."
"People start treating the robots as social participants, and are especially aware of (and put off by) elements where they don't live up to humans (even super tricky stuff, like social physical behaviors). It's very easy for robots to fall into the uncanny valley."
Bigham, who was not involved in the research but knows some of the CMU computer scientists who were, said it makes sense that people would react to computer-generated emotions when embodied in a robot because it's consistent with previous research.
"Emotion is important and underutilized in the design of computer systems; yet, seems especially important for robots from which humans are so likely to assume human-like qualities," he said.
Bigham said that while emotion plays a large part in communication and social behavior, we still don't understand it well.
"We use emotion and emotional engagement in a number of incredibly complex ways," he said. "Emotion is very powerful, and we're at the early days of knowing how to use it in design of real systems, including robots."
Robot creators, he suggested, should try to design with awareness of robots' capabilities and limitations. "It would be very easy to create systems that would annoy users, which makes working to understand these issues so important," he said. ®