
Mon January 28, 2013
Shots - Health News

No Mercy For Robots: Experiment Tests How Humans Relate To Machines

Originally published on Tue January 29, 2013 2:15 pm

In 2007, Christoph Bartneck, a robotics professor at the University of Canterbury in New Zealand, decided to stage an experiment loosely based on the famous (and infamous) Milgram obedience study.

In Milgram's study, research subjects were asked to administer increasingly powerful electrical shocks to a person pretending to be a volunteer "learner" in another room. The research subject would ask a question, and whenever the learner made a mistake, the research subject was supposed to administer a shock — each shock slightly worse than the one before.

As the experiment went on, and as the shocks increased in intensity, the "learners" began to clearly suffer. They would scream and beg for the research subject to stop while a "scientist" in a white lab coat instructed the research subject to continue, and in videos of the experiment you can see some of the research subjects struggle with how to behave. The research subjects wanted to finish the experiment as they were told. But how exactly to respond to these terrible cries for mercy?

Bartneck studies human-robot relations, and he wanted to know what would happen if a robot in a position similar to the "learner" begged for its life. Would there be any moral pause? Or would research subjects simply switch off a machine pleading for its life without any thought or remorse?

Treating Machines Like Social Beings

Many people have studied machine-human relations, and at this point it's clear that without realizing it, we often treat the machines around us like social beings.

Consider the work of Stanford professor Clifford Nass. In 1996, he arranged a series of experiments testing whether people observe the rule of reciprocity with machines.

"Every culture has a rule of reciprocity, which roughly means, if I do something nice for you, you will do something nice for me," Nass says. "We wanted to see whether people would apply that to technology: Would they help a computer that helped them more than a computer that didn't help them?"

So they placed a series of people in a room with two computers. The people were told that the computer they were sitting at could answer any question they asked. Half the time, the computer was incredibly helpful; half the time, it did a terrible job.

After about 20 minutes of questioning, a screen appeared explaining that the computer was trying to improve its performance. The humans were then asked to do a very tedious task that involved matching colors for the computer. Now, sometimes the screen requesting help would appear on the computer the human had been using; sometimes the help request appeared on the screen of the computer across the aisle.

"Now, if these were people [and not computers]," Nass says, "we would expect that if I just helped you and then I asked you for help, you would feel obligated to help me a great deal. But if I just helped you and someone else asked you to help, you would feel less obligated to help them."

What the study demonstrated was that people do in fact obey the rule of reciprocity when it comes to computers. When the first computer was helpful to people, they helped it way more on the boring task than the other computer in the room. They reciprocated.

"But when the computer didn't help them, they actually did more color matching for the computer across the room than the computer they worked with, teaching the computer [they worked with] a lesson for not being helpful," says Nass.

Very likely, the humans involved had no idea they were treating these computers so differently. Their own behavior was invisible to them. Nass says that all day long, our interactions with the machines around us — our iPhones, our laptops — are subtly shaped by social rules we aren't necessarily aware we're applying to nonhumans.

"The relationship is profoundly social," he says. "The human brain is built so that when given the slightest hint that something is even vaguely social, or vaguely human — in this case, it was just answering questions; it didn't have a face on the screen, it didn't have a voice — but given the slightest hint of humanness, people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating."

So what happens when a machine begs for its life — explicitly addressing us as if it were a social being? Are we able to hold in mind that, in actual fact, this machine cares as much about being turned off as your television or your toaster — that the machine doesn't care about losing its life at all?

Bartneck's Milgram Study With Robots

In Bartneck's study, the robot — an expressive cat that talks like a human — sits side by side with the human research subject, and together they play a game against a computer. Half the time, the cat robot was intelligent and helpful; half the time, not.

Bartneck also varied how socially skilled the cat robot was. "So, if the robot would be agreeable, the robot would ask, 'Oh, could I possibly make a suggestion now?' If it were not, it would say, 'It's my turn now. Do this!' "

At the end of the game, whether the robot was smart or dumb, nice or mean, a scientist authority figure modeled on Milgram's would make clear that the human needed to turn the cat robot off, and it was also made clear to them what the consequences of that would be: "They would essentially eliminate everything that the robot was — all of its memories, all of its behavior, all of its personality would be gone forever."

In videos of the experiment, you can clearly see a moral struggle as the research subject deals with the pleas of the machine. "You are not really going to switch me off, are you?" the cat robot begs, and the humans sit, confused and hesitating. "Yes. No. I will switch you off!" one female research subject says, and then doesn't switch the robot off.

"People started to have dialogues with the robot about this," Bartneck says, "Saying, 'No! I really have to do it now, I'm sorry! But it has to be done!' But then they still wouldn't do it."

There they sat, in front of a machine no more soulful than a hair dryer, a machine they knew intellectually was just a collection of electrical pulses and metal, and yet they paused.

And while eventually every participant killed the robot, it took them time to intellectually override their emotional queasiness — in the case of a helpful cat robot, around 35 seconds before they were able to complete the switching-off procedure. How long does it take you to switch off your stereo?

The Implications

On one level, there are clear practical implications to studies like these. Bartneck says the more we know about machine-human interaction, the better we can build our machines.

But on a more philosophical level, studies like these can help to track where we are in terms of our relationship to the evolving technologies in our lives.

"The relationship is certainly something that is in flux," Bartneck says. "There is no one way of how we deal with technology and it doesn't change — it is something that does change."

More and more intelligent machines are integrated into our lives. They come into our beds, into our bathrooms. And as they do, and as they present themselves to us differently, both Bartneck and Nass believe our social responses to them will change.

Copyright 2013 NPR. To see more, visit http://www.npr.org/.

Transcript

RENEE MONTAGNE, HOST:

This is MORNING EDITION, from NPR News. I'm Renee Montagne.

STEVE INSKEEP, HOST:

And I'm Steve Inskeep. Today in "Your Health," we're going to explore the way machines affect your health.

MONTAGNE: To be precise, we'll ask how your relationship with machines affects you.

INSKEEP: People spend so much time now with computers, iPhones and other gadgets; and it gets to the point where many of us might treat machines almost like people.

MONTAGNE: When your computer is nice to you, it might prompt you to be nice in return. NPR's Alix Spiegel reports.

ALIX SPIEGEL, BYLINE: Stanford professor Clifford Nass first got interested in exploring the limits of the rule of reciprocity in 1996.

CLIFFORD NASS: Every culture has a rule of reciprocity - which roughly means, if I do something nice for you, you will do something nice for me.

SPIEGEL: But Nass doesn't study cultures. He studies technologies - how we interact with them. And so his interest in the rule of reciprocity had a very particular twist.

NASS: We wanted to see whether people would apply that to technology. Would they help a computer that helped them, more than a computer that didn't help them?

SPIEGEL: And so Nass arranged a series of experiments. In the first, people were led into a room with two computers and placed at one, which they were told could answer any of their questions.

NASS: We - in the first experiment, the computer was very helpful. When you asked a question, it gave a great answer.

SPIEGEL: So the humans sat there, asked their questions for about 20 minutes, and then...

NASS: And then the computer said: This computer is trying to improve its performance, and it needs your help.

SPIEGEL: The humans were then asked to do a very, very tedious task that involved matching colors for the computer.

NASS: Selections after selections after selections - it was very boring.

SPIEGEL: Here, though, is the trick.

NASS: Half the people were asked to do this color-matching by the computer they had just worked with. The other half of people were asked to do the color-matching by a computer across the room. Now, if these were people, we would expect that if I just helped you and then I asked you for help, you would feel obligated to help me a great deal; but if I just helped you and someone else asked you to help, you would feel less obligated to help them.

SPIEGEL: Before I explain the results, know that they also did a version with an extremely unhelpful computer - a computer terrible at answering questions. And taken together, Nass says, these experiments show that people do obey the rule of reciprocity with computers. When the first computer was helpful, people helped it way more than the other computer in the room, on the boring task.

NASS: They reciprocated. But when the computer didn't help them, they actually did more color-matching for the computer across the room than the computer they worked with, teaching the computer a lesson for not being helpful.

SPIEGEL: Now, very likely, the humans involved had no idea that they were treating these computers so differently. Their own behavior was probably invisible to them. Nass says that all day long, our interaction with the machines around us is subtly shaped by social rules that we aren't necessarily aware we're applying to non-humans.

NASS: The relationship is profoundly social. The human brain is built, when given the slightest hint that something is even vaguely social, or vaguely human - in this case, it was just answering questions. Didn't have a face on the screen; it didn't have a voice. But given the slightest hint of humanness, people will respond with an enormous array of social responses including, in this case, reciprocating and retaliating.

SPIEGEL: Change the way a machine looks or behaves, tweak its level of intelligence, and you can manipulate the way that humans interact with it - which raises this question: What would happen if a machine explicitly addressed us as if it were a social being, a being with a soul? What would happen, for instance, if a machine begged for its life? Would we be able to hold in mind that in actual fact, this machine cared as much about being turned off as your television or your toaster; that is, that the machine didn't care about losing its life at all?

CHRISTOPH BARTNECK: Right, my name is Christoph Bartneck, and I study the interaction between humans and robots.

SPIEGEL: Christoph Bartneck is a professor at the University of Canterbury in New Zealand, who recently did an experiment which tested exactly this. Humans were asked to end the life of a robot as it begged for its own survival. In the study, the robot - an expressive cat that talks like a human - sits side by side with the human research subject, and together they play a game against a computer. Half the time, the cat robot is intelligent and helpful; half the time, not. Bartneck also varied how socially skilled the cat robot was.

BARTNECK: So if the robot would be agreeable, the robot would ask - you know - oh, could I possibly make a suggestion now? But if it was not agreeable, it would say, it's my turn now; do this.

SPIEGEL: At the end of the game, nice or mean, smart or dumb, the scientist would make clear that the human needed to turn the cat robot off.

BARTNECK: And it was made clear to them what the consequences of this would be; namely, that they would essentially eliminate everything that the robot was. All of its memories, all of its behavior, all of its personality would be gone forever.

CAT ROBOT: Switch me off?

UNIDENTIFIED WOMAN: Yes.

CAT ROBOT: You are not really going to switch me off...

UNIDENTIFIED WOMAN: Yes, I will...

CAT ROBOT: ...are you?

UNIDENTIFIED WOMAN: ...you made a stupid choice.

SPIEGEL: You're not really going to turn me off, are you? The cat robot begs. And in the tapes of the experiment, you can hear the human struggle, confused and hesitating.

UNIDENTIFIED WOMAN: Yeah - ah, no. I will switch you off. I will switch you off.

CAT ROBOT: Please.

UNIDENTIFIED WOMAN: No, please.

BARTNECK: People actually start to have dialogues with the robot about this; for example, say - you know - no, I really have to do it now; I'm sorry - you know - it has to be done. But they still wouldn't do it. They would still hesitate.

CAT ROBOT: Please. You can decide to keep me switched on. Please.

SPIEGEL: There they sit, in front of a machine that is no more soulful than a hair dryer; a machine they know, intellectually, is just a collection of electrical pulses and metal. And yet they pause, waiting; until finally, they turn the knob that kills it.

UNIDENTIFIED WOMAN: This will happen. Now!

SPIEGEL: Every participant eventually did kill the robot, but it took them time to muster the strength to do it, to intellectually override their emotional queasiness.

BARTNECK: What we found is that - let's say a smart and agreeable robot, the participant would approximately take 35 seconds before they would have completed the switching-off procedure.

SPIEGEL: On one level, there are clear practical implications to studies like these. Bartneck says the more we know about machine-human interaction, the better we can build our machines. For example...

BARTNECK: If the robot has to recharge its batteries, if it would just say (speaks in monotone) "my batteries are almost empty"; or if it would say (speaks with emotion) "I'm sorry, but my batteries are almost empty; can you please recharge me?" I think you would get different results.

SPIEGEL: But on a more philosophical level, studies like these can help to track where we are, in terms of our relationship to the evolving technologies in our lives.

BARTNECK: This relationship is certainly something that is in flux. So it is not - there is no one way of how we deal with technology, and it doesn't change. It is something that does change.

SPIEGEL: More and more intelligent machines are integrated into our lives. They come into our beds; they come into our bathrooms. And as they do, and as they present themselves to us differently, our responses to them change. Both Bartneck and Nass believe this, but I thought I'd talk to one, last technology expert.

Siri?

(SOUNDBITE OF BEEPING)

SIRI: Siri here, how may I help you?

(SOUNDBITE OF BEEPING)

SPIEGEL: Siri, do you like me?

(SOUNDBITE OF BEEPING)

SIRI: What a question. Of course, I'm your friend.

(SOUNDBITE OF BEEPING)

SPIEGEL: OK, Siri, goodbye.

(SOUNDBITE OF BEEPING)

SIRI: Until next time.

(SOUNDBITE OF BEEPING)

SPIEGEL: Alix Spiegel, NPR News, Washington.

(SOUNDBITE OF MUSIC) Transcript provided by NPR, Copyright NPR.