For Alerting Users to Problematic Content, They Trust AI as Much as Humans

Social media users may trust artificial intelligence (AI) as much as human editors to flag hate speech and harmful content, according to Penn State researchers. When users consider positive attributes of machines, such as accuracy and objectivity, they place more faith in AI; when they are reminded of machines’ inability to make subjective decisions, their trust diminishes.

The findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored or inaccurately classified, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.

“There is a dire need for content moderation on social media and, more broadly, online media,” said Sundar, who is also affiliated with Penn State’s Institute for Computational and Data Sciences. “We have news editors who act as gatekeepers in traditional media. However, the gates are so wide open online that human gatekeeping is not always feasible, especially given the volume of information being generated. As the industry shifts toward automated solutions, this study examines the differences in how people respond to human and automated content moderators.”

Both human and AI editors have benefits and drawbacks. According to Maria D. Molina, assistant professor of advertising and public relations at Michigan State and the study’s first author, humans tend to more accurately assess whether content is harmful, such as when it is racist or could potentially provoke self-harm. People, on the other hand, are unable to process the massive amounts of content that are now generated and shared online.


While AI editors can quickly analyze content, people frequently doubt their ability to make accurate recommendations and are concerned that the information will be censored.

“When we think about automated content moderation, it raises the question of whether artificial intelligence editors are impinging on a person’s freedom of expression,” said Molina. “This creates a dichotomy between the fact that we need content moderation – because people are sharing all of this problematic content – and, at the same time, people are worried about AI’s ability to moderate content. So, ultimately, we want to know how we can build AI content moderators that people can trust in a way that doesn’t impinge on that freedom of expression.”

Users trust AI as much as humans for flagging problematic content

Transparency and interactive transparency

Bringing people and AI together in the moderation process, according to Molina, may be one way to build a trusted moderation system. She went on to say that one approach to increasing trust in AI is transparency, or signaling to users that a machine is involved in moderation. Allowing users to make suggestions to the AIs, referred to by the researchers as “interactive transparency,” appears to increase user trust even further.

To study transparency and interactive transparency, among other variables, the researchers recruited 676 participants to interact with a content classification system. Participants were randomly assigned to one of 18 experimental conditions designed to test how the source of moderation (AI, human, or both) and the type of transparency (regular, interactive, or none) might affect participants’ trust in AI content editors.
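For readers doing the arithmetic, 18 conditions follows from crossing those two three-level factors with the two kinds of test content described below (harmful and hateful): 3 × 3 × 2 = 18. A minimal sketch of that factorial layout, with factor labels assumed for illustration rather than taken from the study materials:

```python
# Illustrative enumeration of the 3 x 3 x 2 factorial design.
# Factor labels are assumptions for illustration, not the study's exact wording.
from itertools import product

sources = ["AI", "human", "both"]                  # source of moderation
transparency = ["regular", "interactive", "none"]  # type of transparency
content_types = ["harmful (suicidal ideation)", "hateful (hate speech)"]

conditions = list(product(sources, transparency, content_types))
print(len(conditions))  # 3 x 3 x 2 = 18 experimental conditions

for i, (src, trans, content) in enumerate(conditions, start=1):
    print(f"Condition {i:2d}: source={src}, transparency={trans}, content={content}")
```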

The researchers tested classification decisions – whether the content was classified as “flagged” or “not flagged” for being harmful or hateful. The “harmful” test content dealt with suicidal ideation, while the “hateful” test content included hate speech.

Among other findings, the researchers found that users’ trust depends on whether the presence of an AI content moderator invokes positive attributes of machines, such as their accuracy and objectivity, or negative attributes, such as their inability to make subjective judgments about nuances in human language.

Giving users a chance to help the AI system decide whether online information is harmful may also boost their trust. The researchers said that study participants who added their own terms to the results of an AI-selected list of words used to classify posts trusted the AI editor just as much as they trusted a human editor.
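To make that mechanism concrete, here is a minimal sketch, assuming a simple keyword-matching classifier rather than the system used in the study, of how an interactive-transparency interface might expose its word list and fold in user-supplied terms. All class and function names are illustrative:

```python
# Minimal sketch of interactive transparency: the classifier exposes the terms behind
# each decision and lets users add their own. Purely illustrative; not the study's system.
class InteractiveFlagger:
    def __init__(self, seed_terms):
        # Terms the "AI" starts with (in practice these would come from a trained model).
        self.terms = set(t.lower() for t in seed_terms)

    def classify(self, post):
        """Return ("flagged" | "not flagged", matched terms) so the decision is transparent."""
        matched = [t for t in self.terms if t in post.lower()]
        return ("flagged" if matched else "not flagged"), matched

    def add_user_terms(self, new_terms):
        """The interactive part: users contribute terms to the classifier's word list."""
        self.terms.update(t.lower() for t in new_terms)


flagger = InteractiveFlagger(seed_terms=["worthless", "idiot"])
label, reasons = flagger.classify("You are worthless and nobody wants you here.")
print(label, reasons)  # flagged ['worthless']

# A user reviews the word list and suggests an additional term.
flagger.add_user_terms(["nobody wants you"])
print(flagger.classify("Honestly, nobody wants you at this meetup."))  # now flagged
```

The point of the sketch is the interaction pattern, showing users which terms drove a decision and letting them amend the list, rather than the matching logic itself, which in a real moderation system would be a trained model.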

Ethical concerns

Sundar believes that relieving humans of content review goes beyond simply providing workers with a break from a tedious task. Hiring human editors for the job exposes them to hours of hateful and violent images and content, he claims.

“There is an ethical need for automated content moderation,” said Sundar, who is also director of Penn State’s Center for Socially Responsible Artificial Intelligence. “There is a need to protect human content moderators – who are providing a social benefit by doing so – from constant exposure to harmful content on a daily basis.”

According to Molina, future research could focus on how to help people not only trust but also understand AI. She also mentioned that interactive transparency may be important in understanding AI.

“Not only is trust in systems important, but so is engaging people in such a way that they actually understand AI,” Molina said. “How can we use the concept of interactive transparency, as well as other methods, to help people better understand AI? How can we best present AI so that it elicits the appropriate balance of admiration for machine ability and skepticism about its flaws? These are worthy of investigation.”