If your office caught on fire right now, how would you get out? We laugh at the prospect, but many of us don’t even move at the first fire alarm. But if you were caught in a fire, would you trust a robot to lead you to safety?
New research out of Georgia Tech suggests that in emergency situations, people will not only put their lives into artificially intelligent hands, but will continue to trust so-called "Emergency Guide Robots" even after those robots have been proven wrong or have appeared to be disabled.
This wasn't the case in every scenario. Under normal conditions, some of Georgia Tech's volunteers declined to trust robots that had already shown themselves to be unreliable. In a time-sensitive emergency, however, the robots became an authority figure of sorts, and most volunteers followed their directions.
The researchers had, admittedly, convinced most subjects that the walls around them were burning to the ground. Still, it's disturbing that people defaulted to the robot's bright, flashing arms even when it had, moments before, led them to several dead ends or appeared to be disabled.
The research is part of a long-term study that looks at how humans trust robots, which is particularly important as robots play a larger role in society.
At first, the researchers simply wanted to see whether these robots would be useful in emergency situations. The new findings have them reconsidering the study's scope and asking how we can keep humans from trusting robots too much.