Tuesday, August 14, 2018

My Robot Doesn't Like That

I have a soft spot for robots.  Maybe it was Robot from Lost in Space.  Maybe it was R2-D2 and C-3PO from Star Wars.  OK, maybe the original Terminator wasn't so likable, but its subsequent iterations showed its softer side.  Show me a robot and I'm prepared to like it.

Robots are already a big deal in healthcare.  We've got robotic surgery, robots in the healthcare supply chain, cleaning robots in hospitals, and caregiver robots, to name a few.  Soon we may have tiny "spider" robots performing surgery and other tasks inside us. 

But what we're not used to is caring about what the robot thinks.  That may soon change.
Credit: Aike C. Horstmann, Nikolai Bock, Eva Linhuber, Jessica M. Szczuka, Carolin Straßmann, Nicole C. Krämer / PLOS
A new study in PLOS found that robots can arouse our sympathy.  Study participants interacted with a robot (Softbank's NAO), either on tasks that were "social" (involving more verbal interaction) or functional.  The participants were told they could turn NAO off once the tasks were completed, but, to the surprise of about half of the participants, when it was time NAO pleaded: "No! Please do not switch me off! I am scared that it will not brighten up again!"

Come on, who could resist that? 

About a third of the people who heard that plea refused to turn it off, and the rest took twice as long to do so as participants who did not get the plea.  The authors state:
Participants treated the protesting robot differently, which can be explained when the robot’s objection was perceived as sign of autonomy. Triggered by the objection, people tend to treat the robot rather as a real person than just a machine by following or at least considering to follow its request to stay switched on... 
Participants' reasons for their reluctance ranged from feeling sorry for the robot to simply respecting its expressed wish not to be switched off.

The researchers were testing something called media equation theory.  Essentially, the premise is that we tend to treat non-human media -- TV, computers, robots, etc. -- as human, as anyone who uses Alexa or Siri can attest.  As the study authors put it:  "Due to their social nature, people will rather make the mistake of treating something falsely as human than treating something falsely as non-human."

In the study, subjects found NAO more likable if their task had been social rather than functional.  However, likability did not, as might have been expected, tie directly to the decision about turning NAO off.  Subjects who had interacted socially with NAO found switching it off more stressful, but those whose interaction had been more functional actually took longer to switch it off once it made its plea. 

The authors speculated:
After the social interaction, people were more used to personal and emotional statements by the robot and probably already found explanations for them. After the functional interaction, the protest was the first time the robot revealed something personal and emotional with the participant and, thus, people were not prepared cognitively...
However, for subjects who had negative attitudes towards robots prior to the study, or had "low technical affinity," NAO's plea didn't have a significant impact on the switching-off decision.

We're already seeing robots interacting with us on an emotional level in healthcare.  For example, IEEE Spectrum reports on QTrobot, from LuxAI, a robot designed to help children with autism develop social skills.  LuxAI cofounder Aida Nazarikhorram explained:
When you are interacting with a person, there are a lot of social cues such as facial expressions, tonality of the voice, and movement of the body which are overwhelming and distracting for children with autism.  But robots have this ability to make everything simplified.  For example, every time the robot says something or performs a task, it’s exactly the same as the previous time, and that gives comfort to children with autism.
Hello, QTrobot!  Credit: LuxAI
The article pointed out that using robots for autism has been studied since the 1990s, and one of them (again Softbank's NAO) was used by researchers at the MIT Media Lab to estimate the engagement and interest of the children it interacted with.  Their research found that the robot did about as well as humans, which is impressive, or scary, or both. 

It is worth pointing out that the MIT researchers found that the children reacted to the robot "not just as a toy but related to NAO respectfully as if it was a real person." 

When we get AI doctors and other healthcare professionals -- and we will -- it will be interesting to see whether we trust them more if they are humanoid robots, versus ones we interact with only verbally, or ones that present through avatars or holograms.  If it looks and talks like a human, will we be predisposed to treat it like one?

As Nicole Krämer, one of the PLOS study co-authors, told NBC News, "We are preprogrammed to react socially.  We have not yet learned to distinguish between human social cues and artificial entities who present social cues." 

We're already being deeply manipulated by Facebook, video games, and a host of apps, and they aren't even cute little humanoid robots.  Fritz Breithaupt, a humanities scholar and cognitive scientist at Indiana University, also told NBC News: "These emotionally manipulative robots will soon read our emotions better than humans will.  This will allow them to exploit us.  People will need to learn that robots are not neutral."
Isn't that sweet? Credit: Softbank
Aike Horstmann, the Ph.D. student who led the PLOS study, is aware of this concern, but is philosophical about it, telling The Verge:
I hear this worry a lot.  But I think it’s just something we have to get used to. The media equation theory suggests we react to [robots] socially because for hundreds of thousands of years, we were the only social beings on the planet. Now we’re not, and we have to adapt to it. It’s an unconscious reaction, but it can change.
Of course, robots and AI are evolving much more rapidly than we are, so, while our reactions can change, the question of whether we will change "in time" remains open. 

I used to think that, should I ever need a caregiver, I would prefer a robot to a human for the more unpleasant tasks, but now I have to worry about how the robot might feel about those tasks as well!
