Humans Appear Programmed to Obey Robots, Studies Suggest



Two eight-foot robots recently began directing traffic in Kinshasa, the capital of the Democratic Republic of Congo. The automatons are little more than traffic lights dressed up as campy 1960s robots, and yet drivers obey them more readily than the human officers who previously directed traffic there.

Maybe it’s because the robots are bigger than the average traffic cop. Maybe it’s their fearsome metallic glint. Or maybe it’s because, in addition to their LED signals and stilted hand waving, they have multiple cameras recording ne’er-do-wells.

“If a driver says that it is not going to respect the robot because it’s just a machine the robot is going to take that, and there will be a ticket for him,” Isaie Therese, the engineer behind the bots, told CCTV Africa.

The Congolese bots provide a fascinating glimpse into human-robot interaction. It’s rather surprising that humans so readily obey robots, even very simple ones, in certain situations. But the observation isn’t merely anecdotal; there’s research on the subject. (Hat tip to Motherboard for pointing out a notable study for us robot geeks.)

Nao robot sitting beside a human researcher.

Last year, scientists at the University of Manitoba observed 27 volunteers who were pressured to keep working through a docket of mundane tasks by either a 27-year-old human actor in a lab coat or an Aldebaran Nao robot, both named “Jim.”

Ever since the controversial 1971 Stanford prison experiment, wherein participants assigned the roles of guards and prisoners demonstrated just how situational human morality can be, similar behavioral work has been rare and fraught.

Even so, if carefully conducted with the participants’ well-being in mind, such studies can provide valuable behavioral insights. The results of the Stanford study are still taught over 40 years later.

In this case, the researchers gave participants a moderately uncomfortable situation, told them they were free to quit at any time, and debriefed them immediately following the test.

Each participant was paid C$10 to change file extensions from .jpg to .png as part of a “machine learning” experiment. To heighten their discomfort and the sense the task was endless, the workload began with a small batch of 10 files but grew each time the participant completed the assigned files (ultimately reaching a batch of 5,000).
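(For the curious, the chore itself is trivially scriptable, which underscores how artificial it was. Here is a minimal Python sketch of what a batch might have involved; the folder layout and the batch-growth rule are our assumptions, since the paper only says batches started at 10 files and eventually reached 5,000.)

    import os

    def rename_batch(folder):
        """Rename every .jpg file in `folder` to .png -- the participants' chore."""
        for name in os.listdir(folder):
            base, ext = os.path.splitext(name)
            if ext.lower() == ".jpg":
                os.rename(os.path.join(folder, name),
                          os.path.join(folder, base + ".png"))

    # The paper only says batches grew from 10 files to 5,000;
    # doubling after each completed batch is an assumption here.
    batch_size = 10
    while batch_size <= 5000:
        print(f"Next batch: {batch_size} files")  # participant renames this many
        batch_size *= 2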

Each time a participant protested, they were urged on by either the human or the robot. The human proved the more convincing authority figure, but the robot was far from feckless.

Ten of 13 participants said they viewed the robot as a legitimate authority, though they couldn’t explain why. Several tried to strike up a conversation with it, and one showed remorse when the robot said it was ending the experiment and notifying the lead researcher, exclaiming, “No! Don’t tell him that! Jim, I didn’t mean that…. I’m sorry.”

Participant obedience may have been directed more at the human pulling the strings than the robot itself.

The researchers write that the novelty of the robot’s design may have detracted from its perceived authority. And involving humans in the robot portion of the experiment may have led participants to shift their sense of responsibility from the robot to the human.

None of the participants, for example, listed pressure from the robot as a reason for their obedience. Instead, they cited factors like interest in future tasks, trusting the robot had been programmed by a qualified human researcher, and a feeling of obligation to the lead human scientist. 

Despite these caveats, the researchers write, the fact remains that “a small, child-like humanoid robot had enough authority to pressure 46% of participants to rename files for 80 minutes, even after indicating they wanted to quit.”

And in what may be the most disturbing result, a number of participants expressed concern that the robot might be broken or malfunctioning, yet they didn’t stop working. “They followed a possibly ‘broken’ robot to do something they would rather not do.”

Few studies outside the University of Manitoba paper specifically examine robots and authority; however, the scientists do note past research that appears to corroborate their findings.

“In the single previous case that uses a deterrent (embarrassment), the results are striking: a robot can push people to do embarrassing acts such as removing their clothing or putting a thermometer in their rectum.”

Of course, two studies (one with just 27 people) and an anecdotal example (the Congolese bots) don’t prove humans will dutifully yield the planet when the robots revolt.

How much of the behavior is due to fear of or respect for the humans behind the scenes? If the Congolese bots were simply traffic lights and cameras, would folks still readily obey them? Maybe the drivers know human cops can be argued with, ignored, or corrupted, but a machine (humanoid or not) won’t be similarly manipulated.

More study would be worthwhile, the University of Manitoba researchers argue. Human-robot interaction will grow in coming years, particularly in healthcare and the military. A greater body of behavioral research can inform future designs and potentially prevent harmful obedience. (Or, we might add, promote healthy disobedience.)

Image Credit: Mike Kabamba/YouTube; University of Manitoba; Ben Husmann/Flickr

Discussion — 15 Responses

  • Gear Mentation February 21, 2014 on 9:03 am

    Humans resent traffic cams precisely because, in addition to being intrusive, they can’t be manipulated and lack any human wiggle room. They would both obey and resent the robot for the same reason. Also, we would trust a robot more than a human because we would think that whoever programmed the robot gave it more thought than a person in the room with us giving us directions.

  • tina February 21, 2014 on 1:33 pm

    I watched Watson play Jeopardy at the Computer Museum a few years back. At first, everyone in the room rooted for the humans, but as the “robot” swept the table there was increasing impatience with the contestants, and in the end the audience had shifted entirely to the “winner” against our own kind. Fascinating.

  • Jorge Serrano February 21, 2014 on 4:34 pm

    I appreciate learning about the Congolese traffic lights and the University of Manitoba study, but the whole piece was spoiled for me by the phrase “the infamous and ethically questionable 1971 Stanford Prison Experiments”.

    The Stanford prison experiment — there was only one — has become infamous and ethically questionable only in the mouths of Professor Zimbardo’s detractors. That one experiment was left unfinished as soon as Zimbardo saw that his subjects were behaving far worse than he had hypothesized. That one experiment is famous even today, not only because it showed us the infamy of ordinary people, but also because it brought into the public forum the question of experimental ethics posed by Stanley Milgram’s work.

    A journalist who has recourse to negative attributive adjectives is a journalist who has not done his homework. And then we read “Few studies exist outside the University of Manitoba paper”. So, for starters, try googling “obedience to authority” to see what studies pop up. It is a formidable field of social psychology with an evolving ethos. Evidently, though, it is still acceptable to make one’s subjects behave like camwhores (nudity, rectal insertions), to judge by the uncritical reporting of that uncredited experiment.

    • Jason Dorrier Jorge Serrano February 22, 2014 on 10:44 am

      Hi Jorge. The intent was to indicate that the Stanford prison experiment, as the researchers in this particular study also noted, was controversial and that similar studies require care in how they’re designed. I think “controversial” is the better word and decided to use it instead. As you note, and I did too, the Stanford prison experiment remains influential to this day.

      As for your suggestion that “few studies exist” is misguided: the researchers note there are few studies that specifically deal with authority and human-robot interaction, not authority and obedience in general. The disturbing example they cite is the only other such example they found.

      Thanks for your comment.

      • Jorge Serrano Jason Dorrier February 22, 2014 on 1:18 pm

        Controversy is one thing: infamy and ethics are something else altogether. You should take more care in your choice of words, inasmuch as what you published is libelous.

        The phrase “few studies exist” was taken from the original article bylined by Jason Dorrier. I agree that was certainly misguided. My own comment pointed out that obedience to authority is now a broad area of social psychology — in other words, “many, many studies.” The field began shortly after World War II, when psychological investigators tried to learn why and how prisoners of Wehrmacht concentration camps might be induced to murder other prisoners of the same camps.

        There is no need to thank me for my comment. Thanks to your hack work it is now clear that Singularity Hub has jumped its shark. SH is clearly publishing fluff and does not deserve much respect. Because of your sloppy and disrespectful article, I will no longer recommend SH to my acquaintances.

  • trisn February 22, 2014 on 4:50 am

    People fundamentally don’t like being controlled by their peers. We obeyed the robots because, whether we think of them as inferior or superior, they’re not us.