Technology

AI should be better understood and managed, according to new research

According to a Lancaster University academic, AI and algorithms can be, and are being, used to radicalize, polarize, and spread racism and political instability.

According to Joe Burton, Professor of International Security at Lancaster University, AI and algorithms are more than just tools used by national security agencies to prevent malicious online activity; they can also contribute to polarization, radicalism, and political violence, posing a threat to national security. Furthermore, he argues that securitization processes (presenting technology as an existential threat) have played a role in how AI has been designed and used, and in the negative outcomes it has produced.

Professor Burton’s article ‘Algorithmic extremism? The securitization of Artificial Intelligence (AI) and its impact on radicalism, polarization, and political violence’ is published in Elsevier’s high-impact journal Technology in Society.

“AI is often framed as a tool to be used to counter violent extremism,” says Professor Burton. “Here is the other side of the debate.”

The paper looks at how AI has been securitized throughout its history and in media and popular culture depictions, and explores modern examples of AI having polarizing, radicalizing effects that have contributed to political violence.

According to the article, the classic film series The Terminator, which depicted a holocaust committed by a ‘sophisticated and malignant’ artificial intelligence, did more than anything to frame popular awareness of AI and the fear that machine consciousness could lead to devastating consequences for humanity – in this case, a nuclear war and a deliberate attempt to exterminate a species.

“This lack of trust in machines, the fears associated with them, and their association with biological, nuclear, and genetic threats to humankind has contributed to a desire on the part of governments and national security agencies to influence the development of the technology, to mitigate risk and (in some cases) to harness its positive potentiality,” Professor Burton writes in his article.

Sophisticated drones, such as those being used in the war in Ukraine, are, says Professor Burton, now capable of full autonomy, including functions such as target identification and recognition. And, while there has been a broad and influential campaign, including debate at the UN, to ban ‘killer robots’ and to keep humans in the loop when it comes to life-or-death decision-making, the acceleration and integration of AI into armed drones has, he says, continued apace.

In cyber security (the security of computers and computer networks), AI is being used in a major way, with the most prevalent area being (dis)information and online psychological warfare.

The actions of Putin’s government against US electoral processes in 2016 and the ensuing Cambridge Analytica scandal demonstrated the potential for AI to be combined with big data (including social media) to produce political effects focused on polarization, the encouragement of radical beliefs, and the manipulation of identity groups. These events demonstrated AI’s potential to divide societies. During the pandemic, AI was seen as a positive in tracking and tracing the virus, but it also raised privacy and human rights concerns.

The article examines AI technology itself, arguing that there are flaws in its design, the data on which it is based, how it is used, and its outcomes and impacts. The paper concludes with a strong message to researchers working in cyber security and International Relations.

“AI is certainly capable of transforming societies in positive ways but also presents risks which need to be better understood and managed,” writes Professor Burton, an expert in cyber conflict and emerging technologies who is part of the University’s Security and Protection Science initiative.

“It is clear that understanding the divisive effects of technology at all stages of its development and use is critical,” he adds. “Scholars in cyber security and international relations have an opportunity to incorporate these considerations into the emerging AI research agenda and avoid treating AI as a politically neutral technology. In other words, concerns about the security of AI systems and how they are used in international, geopolitical conflicts should not override concerns about their social effects.”