Here’s a truly bad omen if the robot uprising ever does occur: a study has found that people will follow robots in a simulated emergency – despite knowing that the machine is taking them the wrong way.
People asked to find their way out of a burning building overwhelmingly elected to follow an “emergency guide robot” along a previously unknown route instead of taking the exit they had entered through.
The finding suggests that people place an inordinate amount of trust in machines, regardless of what common sense might indicate and despite the risk of a robot malfunctioning.
Researchers at the Georgia Institute of Technology, in Atlanta, say their study shows that people see robots as authority figures that can be trusted in an emergency. Participants in the study were first led by the brightly coloured robot, controlled by a hidden researcher, into a room, often with an unusual detour or breakdown along the way.
When the building was then filled with artificial smoke and the fire alarms were set off, participants opened the door to find the robot waiting in the hall. Despite the robot's previous erratic behaviour, the vast majority followed it down a hallway towards the back of the building instead of taking the clearly marked exits.
In some cases the participants would follow the robot down a darkened hall blocked with furniture, even attempting to squeeze past the obstacle. “We absolutely didn’t expect this,” said Paul Robinette, who led the study.
The pressure of the situation – which participants were led to believe was a real emergency – might have made people more likely to trust the robot, the researchers suggested. Other studies, in low-pressure situations, have shown that people are more likely to be sceptical about robots that have previously been shown to be erratic.
“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” said Alan Wagner, a senior research engineer at Georgia Tech.