
Electronic data

  • A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees

    Rights statement: This is the author’s version of a work that was accepted for publication in Theoretical Computer Science. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Theoretical Computer Science, 807, 2019 DOI: 10.1016/j.tcs.2019.05.046

    Accepted author manuscript, 6.16 MB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License


A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees. / Wu, Min; Wicker, Matthew; Ruan, Wenjie et al.
In: Theoretical Computer Science, Vol. 807, 06.02.2020, p. 298-329.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Wu, M, Wicker, M, Ruan, W, Huang, X & Kwiatkowska, M 2020, 'A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees', Theoretical Computer Science, vol. 807, pp. 298-329. https://doi.org/10.1016/j.tcs.2019.05.046

APA

Wu, M., Wicker, M., Ruan, W., Huang, X., & Kwiatkowska, M. (2020). A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees. Theoretical Computer Science, 807, 298-329. https://doi.org/10.1016/j.tcs.2019.05.046

Vancouver

Wu M, Wicker M, Ruan W, Huang X, Kwiatkowska M. A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees. Theoretical Computer Science. 2020 Feb 6;807:298-329. Epub 2019 Jul 18. doi: 10.1016/j.tcs.2019.05.046

Author

Wu, Min; Wicker, Matthew; Ruan, Wenjie et al. / A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees. In: Theoretical Computer Science. 2020 ; Vol. 807. pp. 298-329.

Bibtex

@article{85fdd488f08c4be7b562751fa7097ba4,
title = "A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees",
abstract = "Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. In this paper, we study two variants of pointwise robustness, the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations. We demonstrate that, under the assumption of Lipschitz continuity, both problems can be approximated using finite optimisation by discretising the input space, and the approximation has provable guarantees, i.e., the error is bounded. We then show that the resulting optimisation problems can be reduced to the solution of two-player turn-based games, where the first player selects features and the second perturbs the image within the feature. While the second player aims to minimisethe distance to an adversarial example, depending on the optimisation objective the first player can be cooperative or competitive. We employ an anytimeapproach to solve the games, in the sense of approximating the value of a gameby monotonically improving its upper and lower bounds. The Monte Carlo treesearch algorithm is applied to compute upper bounds for both games, and theAdmissible A* and the Alpha-Beta Pruning algorithms are, respectively, usedto compute lower bounds for the maximum safety radius and feature robustnessgames. When working on the upper bound of the maximum safe radius problem, our tool demonstrates competitive performance against existing adversarialexample crafting algorithms. Furthermore, we show how our framework can bedeployed to evaluate pointwise robustness of neural networks in safety-criticalapplications such as traffic sign recognition in self-driving cars.",
author = "Min Wu and Matthew Wicker and Wenjie Ruan and Xiaowei Huang and Marta Kwiatkowska",
note = "This is the author{\textquoteright}s version of a work that was accepted for publication in Theoretical Computer Science. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Theoretical Computer Science, 807, 2019 DOI: 10.1016/j.tcs.2019.05.046",
year = "2020",
month = feb,
day = "6",
doi = "10.1016/j.tcs.2019.05.046",
language = "English",
volume = "807",
pages = "298--329",
journal = "Theoretical Computer Science",
issn = "0304-3975",
publisher = "Elsevier",

}

RIS

TY - JOUR

T1 - A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees

AU - Wu, Min

AU - Wicker, Matthew

AU - Ruan, Wenjie

AU - Huang, Xiaowei

AU - Kwiatkowska, Marta

N1 - This is the author’s version of a work that was accepted for publication in Theoretical Computer Science. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Theoretical Computer Science, 807, 2019 DOI: 10.1016/j.tcs.2019.05.046

PY - 2020/2/6

Y1 - 2020/2/6

N2 - Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. In this paper, we study two variants of pointwise robustness, the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations. We demonstrate that, under the assumption of Lipschitz continuity, both problems can be approximated using finite optimisation by discretising the input space, and the approximation has provable guarantees, i.e., the error is bounded. We then show that the resulting optimisation problems can be reduced to the solution of two-player turn-based games, where the first player selects features and the second perturbs the image within the feature. While the second player aims to minimise the distance to an adversarial example, depending on the optimisation objective the first player can be cooperative or competitive. We employ an anytime approach to solve the games, in the sense of approximating the value of a game by monotonically improving its upper and lower bounds. The Monte Carlo tree search algorithm is applied to compute upper bounds for both games, and the Admissible A* and the Alpha-Beta Pruning algorithms are, respectively, used to compute lower bounds for the maximum safety radius and feature robustness games. When working on the upper bound of the maximum safe radius problem, our tool demonstrates competitive performance against existing adversarial example crafting algorithms. Furthermore, we show how our framework can be deployed to evaluate pointwise robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.

AB - Despite the improved accuracy of deep neural networks, the discovery of adversarial examples has raised serious safety concerns. In this paper, we study two variants of pointwise robustness, the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations. We demonstrate that, under the assumption of Lipschitz continuity, both problems can be approximated using finite optimisation by discretising the input space, and the approximation has provable guarantees, i.e., the error is bounded. We then show that the resulting optimisation problems can be reduced to the solution of two-player turn-based games, where the first player selects features and the second perturbs the image within the feature. While the second player aims to minimise the distance to an adversarial example, depending on the optimisation objective the first player can be cooperative or competitive. We employ an anytime approach to solve the games, in the sense of approximating the value of a game by monotonically improving its upper and lower bounds. The Monte Carlo tree search algorithm is applied to compute upper bounds for both games, and the Admissible A* and the Alpha-Beta Pruning algorithms are, respectively, used to compute lower bounds for the maximum safety radius and feature robustness games. When working on the upper bound of the maximum safe radius problem, our tool demonstrates competitive performance against existing adversarial example crafting algorithms. Furthermore, we show how our framework can be deployed to evaluate pointwise robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.

U2 - 10.1016/j.tcs.2019.05.046

DO - 10.1016/j.tcs.2019.05.046

M3 - Journal article

VL - 807

SP - 298

EP - 329

JO - Theoretical Computer Science

JF - Theoretical Computer Science

SN - 0304-3975

ER -
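
For readers skimming the abstract above, a minimal formal sketch of the maximum safe radius problem and its discretised approximation may help; the notation below is assumed here for illustration and is not taken verbatim from the paper.

% Notation (assumed for illustration): N is the network's classification function,
% x a fixed input, \|\cdot\|_p the chosen distance metric, and G_\tau a finite grid
% over the input space with width \tau.
\mathrm{MSR}(x) \;=\; \min_{x'} \bigl\{\, \|x - x'\|_p \;:\; N(x') \neq N(x) \,\bigr\}
\qquad
\mathrm{MSR}_\tau(x) \;=\; \min_{x' \in G_\tau} \bigl\{\, \|x - x'\|_p \;:\; N(x') \neq N(x) \,\bigr\}

Under the Lipschitz-continuity assumption stated in the abstract, the gap between \mathrm{MSR}(x) and its grid-restricted counterpart \mathrm{MSR}_\tau(x) is bounded in terms of the Lipschitz constant and the grid width \tau, which is what makes the finite optimisation over the grid a provably guaranteed approximation.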