

Algorithmic extremism?: The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence. / Burton, Joe.
In: Technology in Society, Vol. 75, 102262, 30.11.2023.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Vancouver

Burton J. Algorithmic extremism? The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence. Technology in Society. 2023 Nov 30;75:102262. Epub 2023 Oct 5. doi: 10.1016/j.techsoc.2023.102262

Bibtex

@article{4401501966254d22ab6e09306d364e11,
title = "Algorithmic extremism?: The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence",
abstract = "Artificial intelligence is often framed as a tool to be used to counter violent extremism. This paper seeks to make a contribution to scholarship on the use of AI by considering the other side of the debate: how AI and algorithms themselves can be and are being used to radicalize, polarize, and spread racism and political instability. The central argument of the paper is that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but contributors to polarization, radicalism and political violence. Further to this, securitization processes have been instrumental in how AI has been designed and used, and in the harmful outcomes that it has generated. The paper begins with an analysis of the connections between AI, polarization, radicalism and political violence. Drawing on the {\textquoteleft}Copenhagen School{\textquoteright} of International Relations theory, it then moves on to an empirical assessment of how AI has been securitized throughout its history and in media and popular culture depictions, and explores a number of modern examples of AI having polarizing, radicalizing effects that have contributed to political violence. The third section of the article examines AI technology itself, arguing that problems exist in the design of AI, the data that it relies on, how it is used, and in its outcomes and impacts. The final section draws conclusions and policy implications, arguing that a reconceptualisation of AI-enabled security is necessary, one that is more attuned to the human, social and psychological impacts of the technology.",
keywords = "Artificial Intelligence, Data, Extremism, Polarization, Securitisation, Securitization",
author = "Joe Burton",
year = "2023",
month = nov,
day = "30",
doi = "10.1016/j.techsoc.2023.102262",
language = "English",
volume = "75",
pages = "102262",
journal = "Technology in Society",
issn = "0160-791X",
publisher = "Elsevier Limited",
}

RIS

TY - JOUR

T1 - Algorithmic extremism?

T2 - The securitization of artificial intelligence (AI) and its impact on radicalism, polarization and political violence

AU - Burton, Joe

PY - 2023/11/30

Y1 - 2023/11/30

N2 - Artificial intelligence is often framed as a tool to be used to counter violent extremism. This paper seeks to make a contribution to scholarship on the use of AI by considering the other side of the debate: how AI and algorithms themselves can be and are being used to radicalize, polarize, and spread racism and political instability. The central argument of the paper is that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but contributors to polarization, radicalism and political violence. Further to this, securitization processes have been instrumental in how AI has been designed and used, and in the harmful outcomes that it has generated. The paper begins with an analysis of the connections between AI, polarization, radicalism and political violence. Drawing on the ‘Copenhagen School’ of International Relations theory, it then moves on to an empirical assessment of how AI has been securitized throughout its history and in media and popular culture depictions, and explores a number of modern examples of AI having polarizing, radicalizing effects that have contributed to political violence. The third section of the article examines AI technology itself, arguing that problems exist in the design of AI, the data that it relies on, how it is used, and in its outcomes and impacts. The final section draws conclusions and policy implications, arguing that a reconceptualisation of AI-enabled security is necessary, one that is more attuned to the human, social and psychological impacts of the technology.

AB - Artificial intelligence is often framed as a tool to be used to counter violent extremism. This paper seeks to make a contribution to scholarship on the use of AI by considering the other side of the debate: how AI and algorithms themselves can be and are being used to radicalize, polarize, and spread racism and political instability. The central argument of the paper is that AI and algorithms are not just tools deployed by national security agencies to prevent malicious activity online, but contributors to polarization, radicalism and political violence. Further to this, securitization processes have been instrumental in how AI has been designed and used, and in the harmful outcomes that it has generated. The paper begins with an analysis of the connections between AI, polarization, radicalism and political violence. Drawing on the ‘Copenhagen School’ of International Relations theory, it then moves on to an empirical assessment of how AI has been securitized throughout its history and in media and popular culture depictions, and explores a number of modern examples of AI having polarizing, radicalizing effects that have contributed to political violence. The third section of the article examines AI technology itself, arguing that problems exist in the design of AI, the data that it relies on, how it is used, and in its outcomes and impacts. The final section draws conclusions and policy implications, arguing that a reconceptualisation of AI-enabled security is necessary, one that is more attuned to the human, social and psychological impacts of the technology.

KW - Artificial Intelligence

KW - Data

KW - Extremism

KW - Polarization

KW - Securitisation

KW - Securitization

U2 - 10.1016/j.techsoc.2023.102262

DO - 10.1016/j.techsoc.2023.102262

M3 - Journal article

AN - SCOPUS:85173279521

VL - 75

JO - Technology in Society

JF - Technology in Society

SN - 0160-791X

M1 - 102262

ER -