
Electronic data

  • Similarity_based_Deep_Neural_Network

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 956 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks. / Almeida Soares, Eduardo; Angelov, Plamen; Suri, Neeraj.
Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022. ed. / Hisao Ishibuchi; Chee-Keong Kwoh; Ah-Hwee Tan; Dipti Srinivasan; Chunyan Miao; Anupam Trivedi; Keeley Crockett. IEEE, 2023. p. 1028-1035 (Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022).


Harvard

Almeida Soares, E, Angelov, P & Suri, N 2023, Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks. in H Ishibuchi, C-K Kwoh, A-H Tan, D Srinivasan, C Miao, A Trivedi & K Crockett (eds), Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022. Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022, IEEE, pp. 1028-1035. https://doi.org/10.1109/SSCI51031.2022.10022016

APA

Almeida Soares, E., Angelov, P., & Suri, N. (2023). Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks. In H. Ishibuchi, C-K. Kwoh, A-H. Tan, D. Srinivasan, C. Miao, A. Trivedi, & K. Crockett (Eds.), Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022 (pp. 1028-1035). (Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022). IEEE. https://doi.org/10.1109/SSCI51031.2022.10022016

Vancouver

Almeida Soares E, Angelov P, Suri N. Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks. In Ishibuchi H, Kwoh C-K, Tan A-H, Srinivasan D, Miao C, Trivedi A, Crockett K, editors, Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022. IEEE. 2023. p. 1028-1035. (Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022). Epub 2022 Dec 4. doi: 10.1109/SSCI51031.2022.10022016

Author

Almeida Soares, Eduardo ; Angelov, Plamen ; Suri, Neeraj. / Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks. Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022. editor / Hisao Ishibuchi ; Chee-Keong Kwoh ; Ah-Hwee Tan ; Dipti Srinivasan ; Chunyan Miao ; Anupam Trivedi ; Keeley Crockett. IEEE, 2023. pp. 1028-1035 (Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022).

Bibtex

@inproceedings{0b843e79f7e14ded942190217ca27d08,
title = "Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks",
abstract = "Deep neural networks (DNNs) have become essential for solving diverse complex problems and have achieved considerable success in tackling computer vision tasks. However, DNNs are vulnerable to human-imperceptible adversarial distortion/noise patterns that can detrimentally impact safety-critical applications such as autonomous driving. In this paper, we introduce a novel robust-by-design deep learning approach, Sim-DNN, that is able to detect adversarial attacks through its inner defense mechanism, which considers the degree of similarity between new data samples and autonomously chosen prototypes. The approach benefits from the abrupt drop of the similarity score to detect concept changes caused by distorted/noisy data when comparing their similarities against the set of prototypes. Due to the feed-forward prototype-based architecture of Sim-DNN, no re-training or adversarial training is required. In order to evaluate the robustness of the proposed method, we considered the recently introduced ImageNet-R dataset and different adversarial attack methods such as FGSM, PGD, and DDN. Different DNN methods were also considered in the analysis. Results have shown that the proposed Sim-DNN is able to detect adversarial attacks with better performance than its mainstream competitors. Moreover, as no adversarial training is required by Sim-DNN, its performance on clean and robust images is more stable than that of its competitors, which require an external defense mechanism to improve their robustness.",
keywords = "adversarial attacks",
author = "{Almeida Soares}, Eduardo and Plamen Angelov and Neeraj Suri",
year = "2023",
month = jan,
day = "30",
doi = "10.1109/SSCI51031.2022.10022016",
language = "English",
series = "Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022",
publisher = "IEEE",
pages = "1028--1035",
editor = "Hisao Ishibuchi and Chee-Keong Kwoh and Ah-Hwee Tan and Dipti Srinivasan and Chunyan Miao and Anupam Trivedi and Keeley Crockett",
booktitle = "Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022",

}

RIS

TY - GEN

T1 - Similarity-based Deep Neural Network to Detect Imperceptible Adversarial Attacks

AU - Almeida Soares, Eduardo

AU - Angelov, Plamen

AU - Suri, Neeraj

PY - 2023/1/30

Y1 - 2023/1/30

N2 - Deep neural networks (DNNs) have become essential for solving diverse complex problems and have achieved considerable success in tackling computer vision tasks. However, DNNs are vulnerable to human-imperceptible adversarial distortion/noise patterns that can detrimentally impact safety-critical applications such as autonomous driving. In this paper, we introduce a novel robust-by-design deep learning approach, Sim-DNN, that is able to detect adversarial attacks through its inner defense mechanism, which considers the degree of similarity between new data samples and autonomously chosen prototypes. The approach benefits from the abrupt drop of the similarity score to detect concept changes caused by distorted/noisy data when comparing their similarities against the set of prototypes. Due to the feed-forward prototype-based architecture of Sim-DNN, no re-training or adversarial training is required. In order to evaluate the robustness of the proposed method, we considered the recently introduced ImageNet-R dataset and different adversarial attack methods such as FGSM, PGD, and DDN. Different DNN methods were also considered in the analysis. Results have shown that the proposed Sim-DNN is able to detect adversarial attacks with better performance than its mainstream competitors. Moreover, as no adversarial training is required by Sim-DNN, its performance on clean and robust images is more stable than that of its competitors, which require an external defense mechanism to improve their robustness.

AB - Deep neural networks (DNNs) have become essential for solving diverse complex problems and have achieved considerable success in tackling computer vision tasks. However, DNNs are vulnerable to human-imperceptible adversarial distortion/noise patterns that can detrimentally impact safety-critical applications such as autonomous driving. In this paper, we introduce a novel robust-by-design deep learning approach, Sim-DNN, that is able to detect adversarial attacks through its inner defense mechanism, which considers the degree of similarity between new data samples and autonomously chosen prototypes. The approach benefits from the abrupt drop of the similarity score to detect concept changes caused by distorted/noisy data when comparing their similarities against the set of prototypes. Due to the feed-forward prototype-based architecture of Sim-DNN, no re-training or adversarial training is required. In order to evaluate the robustness of the proposed method, we considered the recently introduced ImageNet-R dataset and different adversarial attack methods such as FGSM, PGD, and DDN. Different DNN methods were also considered in the analysis. Results have shown that the proposed Sim-DNN is able to detect adversarial attacks with better performance than its mainstream competitors. Moreover, as no adversarial training is required by Sim-DNN, its performance on clean and robust images is more stable than that of its competitors, which require an external defense mechanism to improve their robustness.

KW - adversarial attacks

U2 - 10.1109/SSCI51031.2022.10022016

DO - 10.1109/SSCI51031.2022.10022016

M3 - Conference contribution/Paper

T3 - Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022

SP - 1028

EP - 1035

BT - Proceedings of the 2022 IEEE Symposium Series on Computational Intelligence, SSCI 2022

A2 - Ishibuchi, Hisao

A2 - Kwoh, Chee-Keong

A2 - Tan, Ah-Hwee

A2 - Srinivasan, Dipti

A2 - Miao, Chunyan

A2 - Trivedi, Anupam

A2 - Crockett, Keeley

PB - IEEE

ER -
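
Illustration

For illustration, the detection idea described in the abstract can be sketched in a few lines of Python. This is a minimal, hypothetical reconstruction, not the authors' implementation: the feature extraction step, prototype set, similarity measure, and threshold below are stand-in assumptions, while Sim-DNN's actual architecture and prototype-selection procedure are described in the paper.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def looks_adversarial(features: np.ndarray,
                      prototypes: np.ndarray,
                      threshold: float = 0.5) -> bool:
    # Flag the input if its best similarity to any prototype falls below
    # the threshold -- the "abrupt drop" signal described in the abstract.
    # (The threshold value is an illustrative stand-in, not from the paper.)
    best = max(cosine_similarity(features, p) for p in prototypes)
    return best < threshold

# Toy usage with random stand-in features; a real pipeline would use
# embeddings from a pre-trained backbone and prototypes autonomously
# selected from the training data.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(10, 128))              # one prototype per class
clean = prototypes[3] + 0.05 * rng.normal(size=128)  # near a known prototype
print(looks_adversarial(clean, prototypes))          # False: high similarity
odd = rng.normal(size=128)                           # far from all prototypes
print(looks_adversarial(odd, prototypes))            # True: similarity drops

In this toy setup, an input close to some prototype passes, while a dissimilar input triggers the detector. The paper applies the same principle inside a feed-forward prototype-based architecture, which is why no re-training or adversarial training is required.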