
The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Chapter

Status: Published
Publication date: 4/11/2019
Host publication: Yearbook of International Humanitarian Law
Editors: Terry D. Gill, Robin Geiss, Heike Krieger, Christophe Paulussen
Place of publication: The Hague
Publisher: T.M.C. Asser Press
Pages: 99-135
Number of pages: 37
ISBN (electronic): 9789462653436
ISBN (print): 9789462653429
Original language: English

Publication series

Name: Yearbook of International Humanitarian Law
Publisher: T.M.C. Asser Press
Number: 2018
Volume: 21
ISSN (print): 1389-1359
ISSN (electronic): 1574-096X

Abstract

Deep learning is a method of machine learning that has advanced several headline-grabbing technologies, from self-driving cars to systems recognising mental health issues in medical data. Owing to these successes, its capabilities in image and target recognition are currently being researched for use in armed conflicts. However, this programming method has inherent limitations, including the resultant algorithms' inability to comprehend context and the near-impossibility for humans to understand the algorithms' decision-making processes. This can create the appearance that the algorithms are functioning as intended even when they are not. This chapter examines these problems, amongst others, with regard to the potential use of deep learning to programme automatic target recognition systems, which may be used in an autonomous weapon system during an armed conflict. It evaluates how the limitations of deep learning affect the ability of these systems to perform target recognition in compliance with the law of armed conflict. Ultimately, the chapter concludes that whilst there are some very narrow circumstances in which these algorithms could be used in compliance with targeting rules, there are significant risks of unlawful targets being selected. Further, these algorithms impair the exercise of legal duties by autonomous weapon system operators, commanders, and weapons reviewers. As such, the chapter concludes that deep learning-generated algorithms should not be used for target recognition by fully autonomous weapon systems in armed conflicts unless they can be made to understand the context of targeting decisions and to be explainable.