
A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. / Huang, X.; Kroening, D.; Ruan, W. et al.
In: Computer Science Review, Vol. 37, 100270, 01.08.2020.

Vancouver

Huang X, Kroening D, Ruan W, Sharp J, Sun Y, Thamo E et al. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review. 2020 Aug 1;37:100270. Epub 2020 Jun 17. doi: 10.1016/j.cosrev.2020.100270

BibTeX

@article{43dc46cd695d4fd7923397e9cdd02311,
title = "A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability",
abstract = "In the past few years, significant progress has been made on deep neural networks (DNNs) in achieving human-level performance on several long-standing tasks. With the broader deployment of DNNs on various applications, the concerns over their safety and trustworthiness have been raised in public, especially after the widely reported fatal incidents involving self-driving cars. Research to address these concerns is particularly active, with a significant number of papers released in the past few years. This survey paper conducts a review of the current research effort into making DNNs safe and trustworthy, by focusing on four aspects: verification, testing, adversarial attack and defence, and interpretability. In total, we survey 202 papers, most of which were published after 2017.",
keywords = "Deep neural networks, Safety testing, Surveys, Fatal incidents, Human-level performance, Interpretability, Research efforts, Standing tasks, Neural networks",
author = "X. Huang and D. Kroening and W. Ruan and J. Sharp and Y. Sun and E. Thamo and M. Wu and X. Yi",
year = "2020",
month = aug,
day = "1",
doi = "10.1016/j.cosrev.2020.100270",
language = "English",
volume = "37",
journal = "Computer Science Review",
issn = "1574-0137",
publisher = "ELSEVIER IRELAND LTD",

}
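
For convenience, a minimal sketch of citing this work from a LaTeX document using the BibTeX record above (the file name references.bib and the plain bibliography style are assumptions for illustration; any standard style works):

% main.tex -- minimal document citing the survey via the BibTeX entry above,
% which is assumed to be saved in references.bib
\documentclass{article}
\begin{document}
Safety and trustworthiness of DNNs are surveyed
in~\cite{43dc46cd695d4fd7923397e9cdd02311}.
\bibliographystyle{plain}   % standard numeric style
\bibliography{references}   % loads references.bib
\end{document}

Compiling with pdflatex main, then bibtex main, then pdflatex twice more resolves the citation.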

RIS

TY - JOUR

T1 - A survey of safety and trustworthiness of deep neural networks

T2 - Verification, testing, adversarial attack and defence, and interpretability

AU - Huang, X.

AU - Kroening, D.

AU - Ruan, W.

AU - Sharp, J.

AU - Sun, Y.

AU - Thamo, E.

AU - Wu, M.

AU - Yi, X.

PY - 2020/8/1

Y1 - 2020/8/1

N2 - In the past few years, significant progress has been made on deep neural networks (DNNs) in achieving human-level performance on several long-standing tasks. With the broader deployment of DNNs on various applications, the concerns over their safety and trustworthiness have been raised in public, especially after the widely reported fatal incidents involving self-driving cars. Research to address these concerns is particularly active, with a significant number of papers released in the past few years. This survey paper conducts a review of the current research effort into making DNNs safe and trustworthy, by focusing on four aspects: verification, testing, adversarial attack and defence, and interpretability. In total, we survey 202 papers, most of which were published after 2017.

AB - In the past few years, significant progress has been made on deep neural networks (DNNs) in achieving human-level performance on several long-standing tasks. With the broader deployment of DNNs on various applications, the concerns over their safety and trustworthiness have been raised in public, especially after the widely reported fatal incidents involving self-driving cars. Research to address these concerns is particularly active, with a significant number of papers released in the past few years. This survey paper conducts a review of the current research effort into making DNNs safe and trustworthy, by focusing on four aspects: verification, testing, adversarial attack and defence, and interpretability. In total, we survey 202 papers, most of which were published after 2017.

KW - Deep neural networks

KW - Safety testing

KW - Surveys

KW - Fatal incidents

KW - Human-level performance

KW - Interpretability

KW - Research efforts

KW - Standing tasks

KW - Neural networks

U2 - 10.1016/j.cosrev.2020.100270

DO - 10.1016/j.cosrev.2020.100270

M3 - Journal article

VL - 37

JO - Computer Science Review

JF - Computer Science Review

SN - 1574-0137

M1 - 100270

ER -