
It’s Time for Humans to Take Advantage of Benevolent Artificial Intelligence


Humans expect AI to be benevolent and trustworthy. Yet according to a new study, humans are unwilling to cooperate and compromise with machines in return. They even take advantage of them.

Imagine driving down a narrow road in the near future when another car appears from a bend ahead. It is a self-driving vehicle with no passengers on board. Will you push forward and assert your right of way, or will you yield and let it pass? In such situations with other human drivers, most of us act kindly. Will we extend the same courtesy to self-driving cars?

An international team of researchers from LMU and the University of London used methods from behavioral game theory to conduct large-scale online studies, testing whether people behave as cooperatively toward artificial intelligence (AI) systems as they do toward fellow humans.

Cooperation is the glue that holds a society together. It frequently requires us to compromise with others and accept the risk that they will let us down. Traffic is a good example: we lose time by letting others pass in front of us, and we are outraged when others fail to reciprocate our kindness. Will we apply the same logic to machines?


Exploiting the machine without guilt

According to the study, which was published in the journal iScience, people have the same level of trust in AI as they do in humans: they expect to meet someone who is willing to cooperate.

The difference emerges later. People are far less willing to reciprocate with AI, preferring to take advantage of its generosity for their own gain. Returning to the traffic scenario, a human driver would yield to another human but not to a self-driving car. The study identifies this unwillingness to compromise with machines as a new challenge for the future of human-AI interactions.

“We put people in the shoes of someone who is interacting with an artificial agent for the first time, as might happen on the road,” explains Dr. Jurgis Karpus, the study’s first author and a behavioral game theorist and philosopher at LMU Munich. “We modeled various types of social encounters and found a consistent pattern. People expected artificial agents to cooperate just as humans do, but they did not reciprocate as generously themselves, exploiting the AI more than they exploited fellow humans.”

Humans are ready to take advantage of benevolent AI

Drawing on perspectives from game theory, cognitive science, and philosophy, the researchers found that ‘algorithm exploitation’ is a robust phenomenon. They replicated their findings across nine experiments with nearly 2,000 human participants.

Each experiment examined a different type of social interaction and allowed participants to choose whether to compromise and cooperate or to act selfishly. Their expectations of the other player were also assessed. In the well-known Prisoner’s Dilemma, players must trust that their counterpart will not let them down. Participants embraced that risk with humans and AI alike, but betrayed the AI’s trust far more often in order to win more money.
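To make the incentive concrete, here is a minimal Python sketch of a one-shot Prisoner’s Dilemma of the kind the study draws on. The payoff values and function names below are illustrative assumptions for this article, not the actual stakes or interface used in the experiments.

# Payoffs (my_points, their_points) indexed by (my_move, their_move).
# Illustrative values only; the study's real stakes may differ.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual compromise: both do well
    ("cooperate", "defect"):    (0, 5),  # I trusted and was exploited
    ("defect",    "cooperate"): (5, 0),  # I exploited a trusting partner
    ("defect",    "defect"):    (1, 1),  # mutual betrayal: both do poorly
}

def play(my_move: str, their_move: str) -> tuple[int, int]:
    """Return (my_points, their_points) for one round."""
    return PAYOFFS[(my_move, their_move)]

# The temptation behind "algorithm exploitation": if I expect my partner
# (human or AI) to cooperate, defecting earns 5 points instead of 3.
# Participants resisted that temptation with humans far more often than
# they did with AI partners.
print(play("defect", "cooperate"))       # (5, 0)
print(play("cooperate", "cooperate"))    # (3, 3)

The point of the payoff structure is that exploitation is individually rational whenever the other side is guaranteed to cooperate, which is exactly the expectation people held about the benevolent AI.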

“Cooperation is supported by a mutual bet: I trust you will be kind to me, and you trust me to be kind to you. The greatest concern in our field is that people will lose faith in machines. But we demonstrate that they do trust them!” says Prof. Bahador Bahrami, a social neuroscientist at LMU and one of the study’s senior researchers. “The big difference is that they are fine with letting the machine down. People don’t even report much guilt when they do,” he continues.

Benevolent AI can backfire

Biased and unethical AI has made numerous headlines, from the 2020 exam-grading fiasco in the United Kingdom to algorithmic bias in justice systems, but this new study raises a different caution. Industry and legislators are working hard to ensure that artificial intelligence is benevolent. However, benevolence can backfire.

If people believe an AI is programmed to be benevolent toward them, they will be less inclined to cooperate with it. Some accidents involving self-driving cars may already be real-life examples: drivers recognize an autonomous vehicle on the road and expect it to yield, while the vehicle expects the normal compromises between drivers to hold.

“Algorithm exploitation has long-term consequences. If humans are unwilling to let a polite self-driving car join from a side road, should the self-driving car be less polite and more aggressive in order to be useful?” asks Jurgis Karpus.

“Benevolent and trustworthy AI is a buzzword that everyone is excited about. But fixing the AI is not the end of the story. If we realize that the robot in front of us will be cooperative no matter what, we will use it to our selfish advantage,” says Professor Ophelia Deroy, a philosopher and senior author on the study, who also works with Norway’s Peace Research Institute Oslo on the ethical implications of integrating autonomous robot soldiers alongside human soldiers.

“Compromises are the oil that keeps society running. For each of us, it looks like only a small act of self-interest, but it could have far-reaching consequences for society as a whole. If no one lets autonomous cars join the traffic, they will create their own traffic jams on the side and will not make transport easier.”