
Researchers Have Developed A Robot With A “Primitive Form Of Empathy”


If robots are ever to interact socially with humans, they will first need to develop a Theory of Mind (ToM), which includes the ability to empathize with others. While such an advanced artificial intelligence (AI) system remains a long way off, researchers at Columbia University have succeeded in creating a robot that displays what they call a “visual theory of behavior.” Describing their work in the journal Scientific Reports, the study authors explain that this faculty probably arose in animals as an evolutionary precursor to ToM, and that recreating it could represent a major step toward AI with complex social capabilities.

Theory of Mind is a hallmark of human cognition and is thought to emerge in most children around the age of three. It allows us to understand the desires and intentions of the people around us, and it therefore underpins complex social activities such as playing games with rules, competing in business, and lying to one another.

In general, ToM relies on symbolic reasoning, whereby the brain explicitly analyzes inputs in detail using language, usually in order to predict another person’s future actions. This can only be achieved with impressive neural hardware such as the prefrontal cortex – something that all humans possess but that is far too advanced for today’s robots.

However, the study authors speculate that some of our evolutionary ancestors developed the ability to implicitly predict the actions of others simply by visualizing them in the mind’s eye, long before the power of symbolic reasoning emerged. They labeled this faculty the visual theory of behavior and set out to recreate it in an AI system.

To do this, they programmed a robot to move consistently toward one of two green spots in its visual field, always choosing whichever of the two appeared closest. From time to time, the researchers obscured the robot’s view with a red block, preventing it from seeing the nearest green spot and causing it to move toward the more distant one instead.
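
The paper’s actual controller is not spelled out here, but the behavior described above amounts to a greedy policy gated by a line-of-sight check. Below is a minimal sketch of that idea in Python; the coordinate scheme, the disc-shaped occluder, and all names are illustrative assumptions rather than the authors’ implementation.

```python
import math

def spot_visible(robot, spot, block_pos, block_radius):
    """Hypothetical line-of-sight test: the spot counts as hidden if the
    red block (modelled here as a disc) lies on the robot-to-spot line."""
    (rx, ry), (sx, sy), (bx, by) = robot, spot, block_pos
    dx, dy = sx - rx, sy - ry
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:
        return True
    # Closest point on the robot->spot segment to the block's centre
    t = max(0.0, min(1.0, ((bx - rx) * dx + (by - ry) * dy) / seg_len_sq))
    cx, cy = rx + t * dx, ry + t * dy
    return math.hypot(bx - cx, by - cy) > block_radius

def choose_target(robot, green_spots, block_pos, block_radius):
    """Greedy policy described in the article: head for the nearest
    *visible* green spot; when the red block hides the nearest one,
    this naturally falls back to the farther spot."""
    visible = [s for s in green_spots
               if spot_visible(robot, s, block_pos, block_radius)]
    if not visible:
        return None  # nothing visible: no target this step
    return min(visible,
               key=lambda s: math.hypot(s[0] - robot[0], s[1] - robot[1]))

# Example: the block hides the nearer spot, so the robot picks the far one.
print(choose_target((0.0, 0.0), [(2.0, 0.0), (0.0, 5.0)],
                    block_pos=(1.0, 0.0), block_radius=0.5))
```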

A second AI then continuously observed the first robot for two hours as it completed this task. Crucially, this observer robot had a bird’s-eye view and could therefore always see both green spots. Eventually, it developed the ability to predict exactly what the first robot would do, simply by looking at the arrangement of the green spots and the red block.
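
One way to read this setup is as ordinary supervised learning: overhead frames go in, and the action the first robot actually took supplies the training signal. The sketch below illustrates that framing with a small convolutional classifier in PyTorch. Everything in it is an assumption for illustration – the paper’s observer predicts the actor’s behavior from raw images rather than a three-way label, and its network is not this one.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class Observer(nn.Module):
    """Toy observer: encodes an overhead frame and guesses the actor's move."""
    def __init__(self, n_actions=3):  # assumed labels: near spot / far spot / stay
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_actions)

    def forward(self, frames):  # frames: (batch, 3, H, W) bird's-eye images
        return self.head(self.encoder(frames))

# Random stand-ins for two hours of (overhead frame, action taken) pairs.
frames = torch.rand(256, 3, 64, 64)
actions = torch.randint(0, 3, (256,))
loader = DataLoader(TensorDataset(frames, actions), batch_size=32)

model = Observer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for batch_frames, batch_actions in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_frames), batch_actions)
        loss.backward()
        optimizer.step()
```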

Despite lacking symbolic reasoning, the observer AI was able to predict the goal and actions of the first robot with 98.45 percent accuracy. “Our findings begin to demonstrate how robots can see the world from another robot’s perspective. The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether or not its partner could see the green circle from its vantage point, is perhaps a primitive form of empathy,” the study authors explained in a statement.

This image-based processing capability is obviously more primitive than language-based processing or other forms of symbolic reasoning, but the study authors speculate that it served as an evolutionary stepping stone toward ToM in humans and other primates.

“We speculate that, perhaps, our ancestor primates learned to process a form of behavioral prediction in purely visual form, long before learning to express internal mental states in language,” they explained in the paper.