
The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Chapter

Published

Standard

The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods. / Hughes, Joshua.
Yearbook of International Humanitarian Law. ed. / Terry D. Gill; Robin Geiss; Heike Krieger; Christophe Paulussen. The Hague: T.M.C. Asser Press, 2019. p. 99-135 (Yearbook of International Humanitarian Law; Vol. 21, No. 2018).

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Chapter

Harvard

Hughes, J 2019, The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods. in TD Gill, R Geiss, H Krieger & C Paulussen (eds), Yearbook of International Humanitarian Law. Yearbook of International Humanitarian Law, no. 2018, vol. 21, T.M.C. Asser Press, The Hague, pp. 99-135. https://doi.org/10.1007/978-94-6265-343-6_4

APA

Hughes, J. (2019). The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods. In T. D. Gill, R. Geiss, H. Krieger, & C. Paulussen (Eds.), Yearbook of International Humanitarian Law (pp. 99-135). (Yearbook of International Humanitarian Law; Vol. 21, No. 2018). T.M.C. Asser Press. https://doi.org/10.1007/978-94-6265-343-6_4

Vancouver

Hughes J. The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods. In Gill TD, Geiss R, Krieger H, Paulussen C, editors, Yearbook of International Humanitarian Law. The Hague: T.M.C. Asser Press. 2019. p. 99-135. (Yearbook of International Humanitarian Law; 2018). doi: 10.1007/978-94-6265-343-6_4

Author

Hughes, Joshua. / The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods. Yearbook of International Humanitarian Law. editor / Terry D. Gill ; Robin Geiss ; Heike Krieger ; Christophe Paulussen. The Hague : T.M.C. Asser Press, 2019. pp. 99-135 (Yearbook of International Humanitarian Law; 2018).

BibTeX

@inbook{d5e6d5706cfe4426ae7b9d9a208acaf6,
title = "The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods",
abstract = "Deep learning is a method of machine learning which has advanced several headline-grabbing technologies, from self-driving cars to systems recognising mental health issues in medical data. Due to these successes, its capabilities in image and target recognition is currently being researched for use in armed conflicts. However, this programming method contains inherent limitations, including an inability for the resultant algorithms to comprehend context and the near impossibility for humans to understand the decision-making process of the algorithms. This can lead to the appearance that the algorithms are functioning as intended even when they are not. This chapter examines these problems, amongst others, with regard to the potential use of deep learning to programme automatic target recognition systems, which may be used in an autonomous weapon system during an armed conflict. This chapter evaluates how the limitations of deep learning affect the ability of these systems to perform target recognition in compliance with the law of armed conflict. Ultimately, this chapter concludes that whilst there are some very narrow circumstances where these algorithms could be used in compliance with targeting rules, there are significant risks of unlawful targets being selected. Further, these algorithms impair the exercise of legal duties by autonomous weapon system operators, commanders, and weapons reviewers. As such, this chapter concludes that deep learning-generated algorithms should not be used for target recognition by fully-autonomous weapon systems in armed conflicts, unless they can be made in such a way as to understand the context of targeting decisions and be explainable.",
keywords = "Deep Learning, Autonomous Weapon Systems, Artificial Intelligence, Killer Robots",
author = "Joshua Hughes",
year = "2019",
month = nov,
day = "4",
doi = "10.1007/978-94-6265-343-6_4",
language = "English",
isbn = "9789462653429",
series = "Yearbook of International Humanitarian Law",
publisher = "T.M.C. Asser Press",
number = "2018",
pages = "99--135",
editor = "Gill, {Terry D.} and Robin Geiss and Heike Krieger and Paulussen, {Christophe }",
booktitle = "Yearbook of International Humanitarian Law",
address = "Netherlands",

}

RIS

TY - CHAP

T1 - The Law of Armed Conflict Issues Created by Programming Automatic Target Recognition Systems Using Deep Learning Methods

AU - Hughes, Joshua

PY - 2019/11/4

Y1 - 2019/11/4

N2 - Deep learning is a method of machine learning which has advanced several headline-grabbing technologies, from self-driving cars to systems recognising mental health issues in medical data. Due to these successes, its capabilities in image and target recognition are currently being researched for use in armed conflicts. However, this programming method contains inherent limitations, including an inability for the resultant algorithms to comprehend context and the near impossibility for humans to understand the decision-making process of the algorithms. This can lead to the appearance that the algorithms are functioning as intended even when they are not. This chapter examines these problems, amongst others, with regard to the potential use of deep learning to programme automatic target recognition systems, which may be used in an autonomous weapon system during an armed conflict. This chapter evaluates how the limitations of deep learning affect the ability of these systems to perform target recognition in compliance with the law of armed conflict. Ultimately, this chapter concludes that whilst there are some very narrow circumstances where these algorithms could be used in compliance with targeting rules, there are significant risks of unlawful targets being selected. Further, these algorithms impair the exercise of legal duties by autonomous weapon system operators, commanders, and weapons reviewers. As such, this chapter concludes that deep learning-generated algorithms should not be used for target recognition by fully-autonomous weapon systems in armed conflicts, unless they can be made in such a way as to understand the context of targeting decisions and be explainable.

AB - Deep learning is a method of machine learning which has advanced several headline-grabbing technologies, from self-driving cars to systems recognising mental health issues in medical data. Due to these successes, its capabilities in image and target recognition are currently being researched for use in armed conflicts. However, this programming method contains inherent limitations, including an inability for the resultant algorithms to comprehend context and the near impossibility for humans to understand the decision-making process of the algorithms. This can lead to the appearance that the algorithms are functioning as intended even when they are not. This chapter examines these problems, amongst others, with regard to the potential use of deep learning to programme automatic target recognition systems, which may be used in an autonomous weapon system during an armed conflict. This chapter evaluates how the limitations of deep learning affect the ability of these systems to perform target recognition in compliance with the law of armed conflict. Ultimately, this chapter concludes that whilst there are some very narrow circumstances where these algorithms could be used in compliance with targeting rules, there are significant risks of unlawful targets being selected. Further, these algorithms impair the exercise of legal duties by autonomous weapon system operators, commanders, and weapons reviewers. As such, this chapter concludes that deep learning-generated algorithms should not be used for target recognition by fully-autonomous weapon systems in armed conflicts, unless they can be made in such a way as to understand the context of targeting decisions and be explainable.

KW - Deep Learning

KW - Autonomous Weapon Systems

KW - Artificial Intelligence

KW - Killer Robots

U2 - 10.1007/978-94-6265-343-6_4

DO - 10.1007/978-94-6265-343-6_4

M3 - Chapter

SN - 9789462653429

T3 - Yearbook of International Humanitarian Law

SP - 99

EP - 135

BT - Yearbook of International Humanitarian Law

A2 - Gill, Terry D.

A2 - Geiss, Robin

A2 - Krieger, Heike

A2 - Paulussen, Christophe

PB - T.M.C. Asser Press

CY - The Hague

ER -