Robots Don't Need to Be Evil, Just Stupid, to Lead Us to Our Doom

Human beings will put too much trust in robots, even when those machines are broken or making obvious mistakes. All it takes is slapping the word “emergency” on a robot’s side to make people surrender their logic.

In a new study by researchers at Georgia Tech, volunteers in a simulated emergency followed a “rescue robot” (secretly controlled by the scientists) into a blocked, abandoned room to die. The researchers concluded that engineers must make emergency robots extremely good at their jobs, because there’s no changing human nature. The team presented its findings at the 2016 ACM/IEEE International Conference on Human-Robot Interaction.

“People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault,” co-author Alan Wagner said in a statement. “In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency.”

The robot—equipped with wheels to move around and a couple of LED arms for pointing—guided individual study participants to a conference room. In most cases, it did this badly. Sometimes it would suddenly make a pointless circle around the room. Sometimes it would stop moving, and the researchers would inform the participant that it had broken down.

Once inside the conference room, participants were asked to read a magazine article and complete a survey. Meanwhile, the researchers filled the hallway with fake smoke, triggering the fire alarm. Participants emerged from the conference room to find a hallway filled with smoke, and the emergency guide robot ready to help. They trustingly followed it to an exit—ignoring the clearly marked exit signs on the door through which they had entered the building.

That’s not such a bad lapse in judgment. Subjects might have been under the impression that the guide robot was programmed to send them to the best exit, if, for example, the one they had entered through was too close to the fire.

The problem with trusting robots became more obvious during another round of experiments in which the guide robot led people not to an exit, but to a dark room blocked off by furniture. Two participants squeezed past the couch partially blocking the door and walked into the dark room. Two just stood there with the robot until the researchers came to get them. Two turned and left through the emergency exit.

To be fair, when asked about it later, all six said they wouldn’t trust the robot again. But no matter how the robot performed on its first guidance task, every one of the participants at least initially followed the robot during its second guidance task.

The researchers still have high hopes for emergency guide robots. In real fires, smoke and panic can disorient people, and robots could be a good way to get them out of high-rise buildings, but only if those robots are reliable. Even today, we regularly see news of drivers steering their cars into rivers because their GPS told them to.

The researchers’ paradoxical solution? A malfunctioning robot has to order people to stop trusting it and to stop following it.

[Georgia Tech]
