Research output: Contribution to Journal/Magazine › Journal article › peer-review
Article number | 102262 |
---|---|
Journal publication date | 30/11/2023 |
Journal | Technology in Society |
Volume | 75 |
Publication Status | Published |
Early online date | 5/10/2023 |
Original language | English |
Artificial intelligence is often framed as a tool to be used to counter violent extremism. This paper seeks to contribute to scholarship on the use of AI by considering the other side of the debate: how AI and algorithms can be, and are being, used to radicalize, polarize, and spread racism and political instability. The central argument of the paper is that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but contributors to polarization, radicalism and political violence. Further, securitization processes have been instrumental in how AI has been designed and used, and in the harmful outcomes it has generated. The paper begins with an analysis of the connections between AI, polarization, radicalism and political violence. Drawing on the ‘Copenhagen School’ of International Relations theory, it then moves to an empirical assessment of how AI has been securitized: throughout its history, in media and popular culture depictions, and in a number of modern examples of AI having polarizing, radicalizing effects that have contributed to political violence. The third section of the article examines AI technology itself, arguing that problems exist in the design of AI, the data it relies on, how it is used, and in its outcomes and impacts. The final section draws conclusions and policy implications, arguing that a reconceptualisation of AI-enabled security is necessary, one that is more attuned to the human, social and psychological impacts of the technology.