
Electronic data

  • Similarity_based_Deep_Neural_Network

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 956 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:


Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 30/01/2023
Host publication: Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022
Editors: Hisao Ishibuchi, Chee-Keong Kwoh, Ah-Hwee Tan, Dipti Srinivasan, Chunyan Miao, Anupam Trivedi, Keeley Crockett
Publisher: IEEE
Pages: 1028-1035
Number of pages: 8
ISBN (electronic): 9781665487689
Original language: English

Publication series

Name: Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022

Abstract

Deep neural networks (DNNs) have become essential for solving diverse complex problems and have achieved considerable success in tackling computer vision tasks. However, DNNs are vulnerable to human-imperceptible adversarial distortion/noise patterns that can detrimentally impact safety-critical applications such as autonomous driving. In this paper, we introduce a novel robust-by-design deep learning approach, Sim-DNN, that is able to detect adversarial attacks through its inner defense mechanism, which considers the degree of similarity between new data samples and autonomously chosen prototypes. The approach benefits from the abrupt drop of the similarity score to detect concept changes caused by distorted/noisy data when comparing their similarities against the set of prototypes. Due to the feed-forward prototype-based architecture of Sim-DNN, no re-training or adversarial training is required. To evaluate the robustness of the proposed method, we considered the recently introduced ImageNet-R dataset and different adversarial attack methods such as FGSM, PGD, and DDN. Different DNN methods were also considered in the analysis. Results show that the proposed Sim-DNN is able to detect adversarial attacks with better performance than its mainstream competitors. Moreover, as no adversarial training is required by Sim-DNN, its performance on clean and robust images is more stable than that of its competitors, which require an external defense mechanism to improve their robustness.
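
The detection idea described above — flagging an input whose similarity to every learned prototype drops abruptly — can be sketched in a few lines. The following is a minimal illustration only, not the authors' Sim-DNN implementation: the function names, the choice of cosine similarity, and the fixed threshold are all assumptions made for the example.

    # Minimal sketch of prototype-based adversarial detection (illustrative;
    # NOT the paper's Sim-DNN code). Assumes `features` is the embedding of a
    # new sample from some feature extractor and `prototypes` holds one or
    # more prototype embeddings per class, stacked row-wise.
    import numpy as np

    def detect_adversarial(features: np.ndarray,
                           prototypes: np.ndarray,
                           threshold: float = 0.7) -> bool:
        """Flag a sample as adversarial when its best similarity to any
        prototype falls below a threshold (the 'abrupt drop' signal)."""
        # Cosine similarity between the sample and each prototype row.
        sims = prototypes @ features / (
            np.linalg.norm(prototypes, axis=1) * np.linalg.norm(features) + 1e-12)
        # A clean sample should sit close to at least one prototype; an
        # imperceptibly perturbed one tends to fall away from all of them.
        return sims.max() < threshold

    # Hypothetical usage: 10 prototypes in a 512-dimensional feature space.
    protos = np.random.randn(10, 512)
    x = np.random.randn(512)
    print(detect_adversarial(x, protos))

Because the check is a feed-forward comparison against stored prototypes, it requires no re-training or adversarial training, consistent with the abstract's claim; in practice the similarity measure and threshold would be chosen per the paper's method rather than fixed as here.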