Quantifying safety risks of deep neural networks

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Quantifying safety risks of deep neural networks. / Xu, Peipei; Ruan, Wenjie; Huang, Xiaowei.
In: Complex & Intelligent Systems, Vol. 9, No. 4, 31.08.2023, p. 3801-3818.

Harvard

Xu, P, Ruan, W & Huang, X 2023, 'Quantifying safety risks of deep neural networks', Complex & Intelligent Systems, vol. 9, no. 4, pp. 3801-3818. https://doi.org/10.1007/s40747-022-00790-x

APA

Xu, P., Ruan, W., & Huang, X. (2023). Quantifying safety risks of deep neural networks. Complex & Intelligent Systems, 9(4), 3801-3818. https://doi.org/10.1007/s40747-022-00790-x

Vancouver

Xu P, Ruan W, Huang X. Quantifying safety risks of deep neural networks. Complex & Intelligent Systems. 2023 Aug 31;9(4):3801-3818. Epub 2022 Jul 9. doi: 10.1007/s40747-022-00790-x

Author

Xu, Peipei ; Ruan, Wenjie ; Huang, Xiaowei. / Quantifying safety risks of deep neural networks. In: Complex & Intelligent Systems. 2023 ; Vol. 9, No. 4. pp. 3801-3818.

Bibtex

@article{426306c3b1874468b926c8fa0cc31b79,
title = "Quantifying safety risks of deep neural networks",
abstract = "Safety concerns on the deep neural networks (DNNs) have been raised when they are applied to critical sectors. In this paper, we define safety risks by requesting the alignment of network{\textquoteright}s decision with human perception. To enable a general methodology for quantifying safety risks, we define a generic safety property and instantiate it to express various safety risks. For the quantification of risks, we take the maximum radius of safe norm balls, in which no safety risk exists. The computation of the maximum safe radius is reduced to the computation of their respective Lipschitz metrics—the quantities to be computed. In addition to the known adversarial example, reachability example, and invariant example, in this paper, we identify a new class of risk—uncertainty example—on which humans can tell easily, but the network is unsure. We develop an algorithm, inspired by derivative-free optimization techniques and accelerated by tensor-based parallelization on GPUs, to support an efficient computation of the metrics. We perform evaluations on several benchmark neural networks, including ACSC-Xu, MNIST, CIFAR-10, and ImageNet networks. The experiments show that our method can achieve competitive performance on safety quantification in terms of the tightness and the efficiency of computation. Importantly, as a generic approach, our method can work with a broad class of safety risks and without restrictions on the structure of neural networks.",
keywords = "Adversarial examples, Lipschitz metrics, Neural networks, Robustness, Safety, Uncertainty",
author = "Peipei Xu and Wenjie Ruan and Xiaowei Huang",
year = "2023",
month = aug,
day = "31",
doi = "10.1007/s40747-022-00790-x",
language = "English",
volume = "9",
pages = "3801--3818",
journal = "Complex & Intelligent Systems",
issn = "2199-4536",
publisher = "Springer Science and Business Media LLC",
number = "4",

}

RIS

TY - JOUR

T1 - Quantifying safety risks of deep neural networks

AU - Xu, Peipei

AU - Ruan, Wenjie

AU - Huang, Xiaowei

PY - 2023/8/31

Y1 - 2023/8/31

N2 - Safety concerns about deep neural networks (DNNs) have been raised as they are applied in critical sectors. In this paper, we define safety risks by requiring the alignment of the network's decisions with human perception. To enable a general methodology for quantifying safety risks, we define a generic safety property and instantiate it to express various safety risks. For the quantification of risks, we take the maximum radius of safe norm balls, within which no safety risk exists. The computation of the maximum safe radius is reduced to the computation of the respective Lipschitz metrics, which are the quantities to be computed. In addition to the known adversarial example, reachability example, and invariant example, we identify in this paper a new class of risk, the uncertainty example, on which humans can decide easily but the network is unsure. We develop an algorithm, inspired by derivative-free optimization techniques and accelerated by tensor-based parallelization on GPUs, to support an efficient computation of the metrics. We perform evaluations on several benchmark neural networks, including ACAS-Xu, MNIST, CIFAR-10, and ImageNet networks. The experiments show that our method achieves competitive performance on safety quantification in terms of the tightness and the efficiency of the computation. Importantly, as a generic approach, our method can work with a broad class of safety risks and without restrictions on the structure of neural networks.

AB - Safety concerns about deep neural networks (DNNs) have been raised as they are applied in critical sectors. In this paper, we define safety risks by requiring the alignment of the network's decisions with human perception. To enable a general methodology for quantifying safety risks, we define a generic safety property and instantiate it to express various safety risks. For the quantification of risks, we take the maximum radius of safe norm balls, within which no safety risk exists. The computation of the maximum safe radius is reduced to the computation of the respective Lipschitz metrics, which are the quantities to be computed. In addition to the known adversarial example, reachability example, and invariant example, we identify in this paper a new class of risk, the uncertainty example, on which humans can decide easily but the network is unsure. We develop an algorithm, inspired by derivative-free optimization techniques and accelerated by tensor-based parallelization on GPUs, to support an efficient computation of the metrics. We perform evaluations on several benchmark neural networks, including ACAS-Xu, MNIST, CIFAR-10, and ImageNet networks. The experiments show that our method achieves competitive performance on safety quantification in terms of the tightness and the efficiency of the computation. Importantly, as a generic approach, our method can work with a broad class of safety risks and without restrictions on the structure of neural networks.

KW - Adversarial examples

KW - Lipschitz metrics

KW - Neural networks

KW - Robustness

KW - Safety

KW - Uncertainty

U2 - 10.1007/s40747-022-00790-x

DO - 10.1007/s40747-022-00790-x

M3 - Journal article

VL - 9

SP - 3801

EP - 3818

JO - Complex & Intelligent Systems

JF - Complex & Intelligent Systems

SN - 2199-4536

IS - 4

ER -
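
Note on the method described in the abstract: the paper reduces the computation of a maximum safe radius to the computation of a Lipschitz metric, estimated with a derivative-free algorithm accelerated on GPUs. The sketch below is only an illustration of that general idea, not the authors' algorithm: it estimates a local Lipschitz constant of a toy network's classification margin by random sampling (a simple derivative-free scheme) and derives a rough, heuristic safe-radius estimate from it. The network, function names, and parameters are all hypothetical.

# Illustrative sketch only: sampling-based local Lipschitz estimate and a
# heuristic safe-radius bound. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

# A tiny, randomly initialised 2-layer network standing in for a trained DNN.
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def net(x):
    """Forward pass returning class scores (logits)."""
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return W2 @ h + b2

def margin(x, c):
    """Score of class c minus the best competing class; positive means c wins."""
    s = net(x)
    return s[c] - np.max(np.delete(s, c))

def estimate_lipschitz(x0, c, radius, n_samples=5000):
    """Derivative-free estimate: sample point pairs in the L-inf ball around x0
    and keep the largest observed slope of the margin function. This only
    lower-bounds the true local Lipschitz constant."""
    best = 0.0
    for _ in range(n_samples):
        a = x0 + rng.uniform(-radius, radius, size=x0.shape)
        b = x0 + rng.uniform(-radius, radius, size=x0.shape)
        dist = np.max(np.abs(a - b))
        if dist > 1e-12:
            best = max(best, abs(margin(a, c) - margin(b, c)) / dist)
    return best

x0 = rng.normal(size=8)
c = int(np.argmax(net(x0)))              # predicted class at x0
L = estimate_lipschitz(x0, c, radius=0.1)

# If the margin stays positive throughout the ball, no misclassification occurs,
# so margin(x0) / L gives a rough safe-radius estimate (heuristic here, because
# L is only estimated by sampling rather than bounded rigorously).
print("estimated safe radius:", margin(x0, c) / L)

In the paper, by contrast, the Lipschitz metric is evaluated by a dedicated derivative-free optimization algorithm with tensor-based GPU parallelization, and the reduction covers several risk classes (adversarial, reachability, invariant, and uncertainty examples), not only the misclassification margin used in this toy sketch.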