Psychology

People Who Dislike Other People Have Higher Faith in Artificial Intelligence

A recent study found that a person’s distrust of other humans predicts greater trust in artificial intelligence’s ability to moderate online content. According to the researchers, the findings have practical implications for both designers and users of AI tools in social media.

“We discovered a systematic pattern of individuals who have less trust in other humans showing greater trust in AI’s classification,” said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. “Based on our analysis, this appears to be due to users invoking the idea that machines are accurate, objective, and free of ideological bias.”

The study, which was published in the journal New Media & Society, also found that “power users,” or experienced users of information technology, had the opposite tendency: they trusted the AI moderators less because they believed machines lack the ability to detect the nuances of human language.

The study found that individual differences such as distrust of others and power usage predict whether users invoke positive or negative machine characteristics when confronted with an AI-based content moderation system, which ultimately influences their trust in the system. The researchers believe that personalizing interfaces based on these individual differences can improve the user experience. Content moderation in the study involved monitoring social media posts for problematic content such as hate speech and suicidal ideation.

“One of the reasons why some may be hesitant to trust content moderation technology is that we are used to freely expressing our opinions online. We feel like content moderation may take that away from us,” said Maria D. Molina, an assistant professor of communication arts and sciences at Michigan State University and first author of the paper. “This study may offer a solution to that problem by suggesting that for people who hold negative stereotypes of AI for content moderation, it is important to reinforce human involvement when making a determination. On the other hand, for people with positive stereotypes of machines, we may reinforce the strength of the machine by highlighting elements like the accuracy of AI.”

The study also found that users with a conservative political ideology were more likely to trust AI-powered moderation. Molina and coauthor Sundar, who also co-directs Penn State’s Media Effects Research Laboratory, said this may stem from distrust of mainstream media and social media companies.

Image: People who distrust fellow humans show greater trust in artificial intelligence.

The researchers recruited 676 participants in the United States, who were told they were helping test a content-moderation system in development. Participants were given definitions of hate speech and suicidal ideation, then shown one of four different social media posts, each either flagged for fitting those definitions or not flagged. They were also told whether the decision to flag the post was made by AI, a human, or a combination of both.

The demonstration was followed by a questionnaire asking participants about their individual differences, including their distrust of others, political ideology, technological experience, and faith in AI.
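
As a rough illustration of the between-subjects setup described above, the Python sketch below randomly assigns each participant one post, a flagging decision, and an attributed moderation source, then lists the post-exposure measures named in the article. All identifiers, condition labels, and the assignment logic are hypothetical reconstructions, not the study’s actual materials.

```python
import random

# Hypothetical reconstruction of the experimental conditions described above:
# each participant sees one of four posts, either flagged or not, with the
# flagging decision attributed to AI, a human, or both. Labels are illustrative.
POSTS = ["post_1", "post_2", "post_3", "post_4"]   # hate speech / suicidal-ideation stimuli
FLAGGED = [True, False]                            # post flagged or left up
SOURCES = ["AI", "human", "AI + human"]            # attributed moderator

def assign_condition(participant_id: int) -> dict:
    """Randomly assign one participant to a condition (between-subjects)."""
    rng = random.Random(participant_id)            # seeded for reproducibility
    return {
        "participant": participant_id,
        "post": rng.choice(POSTS),
        "flagged": rng.choice(FLAGGED),
        "source": rng.choice(SOURCES),
    }

# Post-exposure questionnaire measures named in the article.
MEASURES = ["distrust_of_others", "political_ideology",
            "technological_experience", "faith_in_ai"]

if __name__ == "__main__":
    for pid in range(1, 4):                        # the actual study had 676 participants
        print(assign_condition(pid))
```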

“We are bombarded with so much problematic content, from misinformation to hate speech,” Molina said. “However, at the end of the day, it’s about how we can help users calibrate their trust in AI based on the actual attributes of the technology, rather than being swayed by those individual differences.”

According to Molina and Sundar, their findings could help shape future AI acceptance. Designers could alleviate skepticism and distrust in AI by creating systems tailored to the user.
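
To make the tailoring idea concrete, here is a minimal hypothetical sketch of how an interface might vary its moderation notice based on a user’s attitude toward machines, along the lines Molina suggests above. The `machine_heuristic_score` measure, the threshold, and the wording are assumptions for illustration, not taken from the study.

```python
def moderation_notice(machine_heuristic_score: float) -> str:
    """Pick interface wording from a (hypothetical) 0-1 score of how strongly
    a user believes machines are accurate, objective, and unbiased."""
    if machine_heuristic_score >= 0.5:
        # Positive stereotype of machines: highlight the AI's accuracy.
        return ("This post was reviewed by an AI system with high "
                "classification accuracy for hate speech.")
    # Negative stereotype of machines: reinforce human involvement.
    return ("This post was reviewed by trained human moderators, "
            "assisted by an AI system.")

print(moderation_notice(0.8))  # user inclined to trust machines
print(moderation_notice(0.2))  # power user skeptical of AI's grasp of nuance
```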

“A major practical implication of the study is to figure out communication and design strategies for assisting users in calibrating their trust in automated systems,” said Sundar, who is also the director of Penn State’s Center for Socially Responsible Artificial Intelligence. “Those who place too much faith in AI technology should be made aware of its limitations, and those who do not believe in its ability to moderate content should be fully informed about the extent to which humans are involved in the process.”