Humanoid Robots May Appear to Have Mental States

According to research published by the American Psychological Association, when robots appear to interact with people and display human-like emotions, people may perceive them as capable of “thinking,” or acting on their own beliefs and desires rather than those of their programs.

“The relationship between anthropomorphic shape, human-like behavior, and the tendency to attribute independent thought and intentional behavior to robots remains unknown,” said study author and principal investigator Agnieszka Wykowska, PhD, of the Italian Institute of Technology. “As artificial intelligence becomes more prevalent in our lives, it is critical to understand how interacting with a robot that exhibits human-like behaviors may increase the likelihood of attribution of intentional agency to the robot.”

The research was published in the journal Technology, Mind, and Behavior.

Recent technological advances have enabled robots to express emotions. Yet even though emotional expression is valued in service settings, understanding how emotion functions in human-robot interaction remains a work in progress.

In three experiments involving 119 participants, the researchers examined how people perceived the human-like robot iCub after socializing with it and watching videos together. Before and after interacting with the robot, participants completed a questionnaire that showed pictures of the robot in various situations and asked them to choose whether the robot’s motivation in each situation was mechanical or intentional. For example, participants saw three photos of the robot selecting a tool and then chose whether it “grasped the closest object” or “was fascinated by tool use.”
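To make that scoring concrete, here is a minimal sketch of how responses to such a binary forced-choice questionnaire might be summarized. The items, responses, and scoring convention below are illustrative assumptions, not the study’s actual instrument or data.

```python
# Minimal sketch of scoring a "mechanical vs. intentional" questionnaire.
# The responses and scoring convention are illustrative assumptions,
# not the study's actual instrument or data.

def intentionality_score(responses):
    """Return the fraction of items answered "intentional" (0.0 to 1.0).

    responses: list of strings, each "mechanical" or "intentional".
    Higher scores indicate more attribution of mental states.
    """
    if not responses:
        raise ValueError("no responses to score")
    return sum(r == "intentional" for r in responses) / len(responses)

# Hypothetical pre- and post-interaction responses for one participant.
pre  = ["mechanical", "mechanical", "intentional", "mechanical"]
post = ["intentional", "mechanical", "intentional", "intentional"]

print(intentionality_score(pre))   # 0.25
print(intentionality_score(post))  # 0.75 -- a shift toward intentional framing
```

Comparing the pre- and post-interaction scores across participants is one simple way such a before/after design could quantify a shift toward mind attribution.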

In the first two experiments, the researchers remotely controlled iCub’s actions so it would behave gregariously, greeting participants, introducing itself and asking for the participants’ names. Cameras in the robot’s eyes were also able to recognize participants’ faces and maintain eye contact. The participants then watched three short documentary videos with the robot, which was programmed to respond to the videos with sounds and facial expressions of sadness, awe or happiness.

[Figure: Human-like robots may be perceived as having mental states]

In the third experiment, the researchers programmed iCub to behave more like a machine while it watched videos with the participants. The cameras in the robot’s eyes were deactivated so it could not maintain eye contact and it only spoke recorded sentences to the participants about the calibration process it was undergoing. All emotional reactions to the videos were replaced with a “beep” and repetitive movements of its torso, head and neck.
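Viewed as software, the contrast between the two conditions amounts to swapping one behavior policy for another while holding the stimuli constant. The sketch below makes that logic concrete with a hypothetical robot interface; none of these function names correspond to the real iCub software, and the events and reactions are simplified stand-ins for the behaviors described above.

```python
# Hypothetical sketch of the two experimental conditions as interchangeable
# behavior policies. These functions are not the real iCub software; the
# events and reactions are simplified stand-ins for the behaviors above.

def react_human_like(event):
    # Emotional reaction: a vocalization plus a matching facial expression.
    emotion = {"sad_scene": "sadness", "awe_scene": "awe"}.get(event, "happiness")
    print(f"[robot] vocalization + facial expression: {emotion}")

def react_machine_like(event):
    # Same trigger, but a flat beep and a repetitive torso/head/neck movement.
    print("[robot] beep")
    print("[robot] repetitive torso/head/neck movement")

def run_session(events, condition):
    if condition == "human-like":
        print("[robot] greets participant, introduces itself, asks their name")
        react = react_human_like
    else:
        print("[robot] plays recorded sentences about its calibration process")
        react = react_machine_like
    for event in events:
        react(event)

run_session(["sad_scene", "awe_scene"], condition="human-like")
run_session(["sad_scene", "awe_scene"], condition="machine-like")
```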

Participants who watched the videos with the human-like robot were more likely to rate its actions as intentional rather than programmed, whereas those who interacted with the machine-like robot were not. This suggests that a human-like appearance alone is not enough to convince people that a robot can think and feel; human-like behavior may be essential for being perceived as an intentional agent.

According to Wykowska, these findings suggest that when artificial intelligence appears capable of independent thought, people are more likely to believe it can behave like a human. She added that this insight could help shape the design of future social robots.

“Social bonding with robots might be beneficial in some contexts, like with socially assistive robots. For example, in elderly care, social bonding with robots might induce a higher degree of compliance with respect to following recommendations regarding taking medication,” Wykowska said. “Determining contexts in which social bonding and attribution of intentionality is beneficial for the well-being of humans is the next step of research in this area.”

“Our findings begin to demonstrate how robots can see the world through the eyes of another robot,” said lead author Boyuan Chen. Chen added that this ability is “possibly a primitive form of empathy.”

This capability will increase the utility of robots. But when robots can predict how humans think, they may also be able to manipulate those thoughts. “We recognize that robots will not remain passive instruction-following machines for long,” said Hod Lipson, the laboratory’s leader. “As with other forms of advanced AI, we hope that policymakers can help keep this kind of technology in check so that we can all benefit,” he said.

Researchers are also developing machine-learning systems that explain their decisions in human language. In one basic model, for example, an AI program determines whether an image depicts a healthy meal and explains its decision: “no,” because the image contains a hot dog, or “yes,” because it detects the presence of vegetables. Such programming is still in its early stages, however, and is far from common.
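As a toy illustration of that kind of label-based explanation, the sketch below classifies a meal from a set of detected food labels and justifies the verdict in plain language. The label sets and rules are invented for illustration; a real system would obtain the labels from an image model rather than receive them directly.

```python
# Toy version of the "healthy meal" explainer described above: classify a
# meal from a set of detected food labels and justify the verdict in plain
# language. The label sets and rules are invented for illustration; a real
# system would obtain the labels from an image model.

UNHEALTHY = {"hot dog", "fries", "soda"}
HEALTHY = {"vegetables", "salad", "fruit"}

def classify_meal(detected_labels):
    """Return a (verdict, explanation) pair for a set of food labels."""
    bad = UNHEALTHY & detected_labels
    good = HEALTHY & detected_labels
    if bad:
        return "no", f"the image contains {', '.join(sorted(bad))}"
    if good:
        return "yes", f"it detects the presence of {', '.join(sorted(good))}"
    return "unknown", "no recognizable food items were detected"

print(classify_meal({"hot dog", "bun"}))       # ('no', 'the image contains hot dog')
print(classify_meal({"vegetables", "rice"}))   # ('yes', 'it detects the presence of vegetables')
```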