GPT-3 Informs and Misinforms Us More Effectively

According to a recent study, artificial intelligence (AI) language models produce accurate tweets that are easier to understand than those written by humans, as well as tweets containing disinformation that are harder to detect. While these findings could be used to create more effective information campaigns, they also highlight the importance of mitigating the risks associated with AI.

Researchers at the University of Zurich recently investigated the capabilities of AI models, specifically OpenAI’s GPT-3, to determine their potential risks and benefits in generating and disseminating (dis)information. The study, which included 697 participants, was led by postdoctoral researchers Giovanni Spitale and Federico Germani, as well as Nikola Biller-Andorno, director of the Institute of Biomedical Ethics and History of Medicine (IBME), University of Zurich. It sought to determine whether people could distinguish between disinformation and accurate information presented in the form of tweets.

The researchers also wanted to see if participants could tell whether a tweet was written by a real Twitter user or generated by GPT-3, an advanced AI language model. Climate change, vaccine safety, the COVID-19 pandemic, flat earth theory, and homoeopathic cancer treatments were among the topics discussed.

AI-powered systems could generate large-scale disinformation campaigns

On the one hand, GPT-3 demonstrated the ability to generate accurate and understandable information when compared to tweets from real Twitter users. The researchers did, however, discover that the AI language model had an unsettling proclivity for producing highly persuasive disinformation. In an alarming twist, participants were unable to distinguish between tweets generated by GPT-3 and those written by real Twitter users. “Our study reveals AI’s power to both inform and mislead, raising critical questions about the future of information ecosystems,” says Federico Germani.

These findings suggest that information campaigns created with GPT-3, based on well-structured prompts and evaluated by trained humans, would be more effective in situations such as a public health crisis that requires rapid, clear communication with the public. The findings also raise serious concerns about the threat of AI perpetuating disinformation, especially given how rapidly and widely misinformation and disinformation can spread during a crisis or public health event.

According to the study, AI-powered systems could be used to generate large-scale disinformation campaigns on potentially any topic, endangering not only public health but also the integrity of information ecosystems critical to functioning democracies.

Proactive regulation highly recommended

As AI’s impact on information creation and evaluation grows, the researchers urge policymakers to respond with stringent, evidence-based, and ethically informed regulations to address the potential threats posed by these disruptive technologies and ensure the responsible use of AI in shaping our collective knowledge and well-being.

“The findings underscore the critical importance of proactive regulation to mitigate the potential harm caused by AI-driven disinformation campaigns,” says Nikola Biller-Andorno. “In the digital age, understanding the risks associated with AI-generated disinformation is critical for protecting public health and maintaining a robust and trustworthy information ecosystem.”

Transparent research using open science best practice

From pre-registration to dissemination, the study followed open science best practices throughout the entire pipeline. “Open science is critical for fostering transparency and accountability in research, allowing for scrutiny and replication,” says Giovanni Spitale, who is also a UZH Open Science Ambassador. “It is especially important in the context of our study because it allows stakeholders to access and evaluate the data, code, and intermediate materials, increasing the credibility of our findings and facilitating informed discussions about the risks and implications of AI-generated disinformation.”

Interested parties can access these resources through the OSF repository: https://osf.io/9ntgf/.