Robots are already a big deal in healthcare. We've got robotic surgery, robots in the healthcare supply chain, cleaning robots in hospitals, and caregiver robots, to name a few. Soon we may have tiny "spider" robots performing surgery and other tasks inside us.
But what we're not used to is caring about what the robot thinks. That may soon change.
Credit: Aike C. Horstmann, Nikolai Bock, Eva Linhuber, Jessica M. Szczuka, Carolin Straßmann, Nicole C. Krämer / PLOS
Come on, who could resist that?
About a third of the people who heard that plea refused to turn it off, and the rest took twice as long to do so as participants who did not get the plea. The authors state:
Participants treated the protesting robot differently, which can be explained when the robot’s objection was perceived as a sign of autonomy. Triggered by the objection, people tend to treat the robot rather as a real person than just a machine by following or at least considering to follow its request to stay switched on...

Here are some examples of reasons participants gave for their reluctance:
The researchers were testing something called media equation theory. Essentially, the premise is that we tend to treat non-human media -- TV, computers, robots, etc. -- as human, as anyone who uses Alexa or Siri can attest. As the study authors put it: "Due to their social nature, people will rather make the mistake of treating something falsely as human than treating something falsely as non-human."
In the study, subjects found NAO more likable if their task had been social rather than functional. However, likability did not, as might have been expected, tie directly to the decision about turning NAO off. Subjects who had interacted socially with NAO found turning it off more stressful, but those whose interaction had been more functional actually took longer to turn it off once it pleaded not to be.
The authors speculated:
After the social interaction, people were more used to personal and emotional statements by the robot and probably already found explanations for them. After the functional interaction, the protest was the first time the robot revealed something personal and emotional with the participant and, thus, people were not prepared cognitively.

However, for subjects who had negative attitudes towards robots prior to the study, or had "low technical affinity," NAO's plea didn't have a significant impact on the switching-off decision.
We're already seeing robots interacting with us on an emotional level in healthcare. For example, IEEE Spectrum reports on QTrobot, from LuxAI, a robot designed to help children with autism develop social skills. LuxAI cofounder Aida Nazarikhorram explained:
When you are interacting with a person, there are a lot of social cues such as facial expressions, tonality of the voice, and movement of the body which are overwhelming and distracting for children with autism. But robots have this ability to make everything simplified. For example, every time the robot says something or performs a task, it’s exactly the same as the previous time, and that gives comfort to children with autism.
Hello, QTrobot! Credit: LuxAI
It is worth pointing out that the MIT researchers found that the children reacted to the robot "not just as a toy but related to NAO respectfully as if it was a real person."
When we get AI doctors and other healthcare professionals -- and we will -- it will be interesting to see if we trust them more if they are humanoid robots, versus ones with whom we only have verbal interactions, or ones which present through avatars/holograms. If it looks and talks like a human, will we be predisposed to treat it like one?
As Nicole Krämer, one of the PLOS study co-authors, told NBC News, "We are preprogrammed to react socially. We have not yet learned to distinguish between human social cues and artificial entities who present social cues."
We're already being deeply manipulated by Facebook, video games, and a host of apps, and they aren't even cute little humanoid robots. Fritz Breithaupt, a humanities scholar and cognitive scientist at Indiana University, also told NBC News: "These emotionally manipulative robots will soon read our emotions better than humans will. This will allow them to exploit us. People will need to learn that robots are not neutral."
Isn't that sweet? Credit: Softbank
"I hear this worry a lot. But I think it’s just something we have to get used to. The media equation theory suggests we react to [robots] socially because for hundreds of thousands of years, we were the only social beings on the planet. Now we’re not, and we have to adapt to it. It’s an unconscious reaction, but it can change."

Of course, robots and AI are evolving much more rapidly than we are, so, while our reactions can change, the question of whether we will change "in time" remains open.
I used to think that, should I ever need a caregiver, I would prefer a robot rather than a human for the more unpleasant tasks, but now I have to worry about how they might feel about them as well!