
Electronic data

  • SSCI

    Accepted author manuscript, 1.54 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1109/SSCI52147.2023.10372061


Fuzzy Detectors Against Adversarial Attacks

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Forthcoming

Standard

Fuzzy Detectors Against Adversarial Attacks. / Li, Yi; Angelov, Plamen; Suri, Neeraj.
2023 IEEE Symposium Series on Computational Intelligence. Mexico: IEEE, 2023. p. 306-311.

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Harvard

Li, Y, Angelov, P & Suri, N 2023, Fuzzy Detectors Against Adversarial Attacks. in 2023 IEEE Symposium Series on Computational Intelligence. IEEE, Mexico, pp. 306-311, 2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023, Mexico City, Mexico, 5/12/23. https://doi.org/10.1109/SSCI52147.2023.10372061

APA

Li, Y., Angelov, P., & Suri, N. (in press). Fuzzy Detectors Against Adversarial Attacks. In 2023 IEEE Symposium Series on Computational Intelligence (pp. 306-311). IEEE. https://doi.org/10.1109/SSCI52147.2023.10372061

Vancouver

Li Y, Angelov P, Suri N. Fuzzy Detectors Against Adversarial Attacks. In 2023 IEEE Symposium Series on Computational Intelligence. Mexico: IEEE. 2023. p. 306-311. doi: 10.1109/SSCI52147.2023.10372061

Author

Li, Yi ; Angelov, Plamen ; Suri, Neeraj. / Fuzzy Detectors Against Adversarial Attacks. 2023 IEEE Symposium Series on Computational Intelligence. Mexico : IEEE, 2023. pp. 306-311

Bibtex

@inproceedings{ffa07cefe00c4f8c8c8048005d8af5f2,
title = "Fuzzy Detectors Against Adversarial Attacks",
abstract = "Deep learning-based methods have proved useful for adversarial attack detection. However, conventional detection algorithms exploit crisp set theory for classification boundary. Therefore, representing vague concepts is not available. Motivated by the recent success in fuzzy systems, we propose a fuzzy rule-based neural network to improve adversarial attack detection accuracy. The pre-trained ImageNet model is exploited to extract feature maps from clean and attacked images. Subsequently, the fuzzification network is used to obtain feature maps to produce fuzzy sets of difference degrees between clean and attacked images. The fuzzy rules control the intelligence that determines the detection boundaries. In the defuzzification layer, the fuzzy prediction from the intelligence is mapped back into the crisp model predictions for images. The loss between the prediction and label controls the rules to train the fuzzy detector. We show that the fuzzy rule-based network learns rich feature information than binary outputs and offers to obtain an overall performance gain. Our experiments, conducted over a wide range of images, show that the proposed method consistently performs better than conventional crisp set training in adversarial attack detection with various fuzzy system-based neural networks.",
author = "Yi Li and Plamen Angelov and Neeraj Suri",
year = "2023",
month = sep,
day = "15",
doi = "10.1109/SSCI52147.2023.10372061",
language = "English",
pages = "306--311",
booktitle = "2023 IEEE Symposium Series on Computational Intelligence",
publisher = "IEEE",
note = "2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023 ; Conference date: 05-12-2023 Through 08-12-2023",

}

RIS

TY - GEN

T1 - Fuzzy Detectors Against Adversarial Attacks

AU - Li, Yi

AU - Angelov, Plamen

AU - Suri, Neeraj

PY - 2023/9/15

Y1 - 2023/9/15

N2 - Deep learning-based methods have proved useful for adversarial attack detection. However, conventional detection algorithms rely on crisp set theory for the classification boundary and therefore cannot represent vague concepts. Motivated by the recent success of fuzzy systems, we propose a fuzzy rule-based neural network to improve adversarial attack detection accuracy. A pre-trained ImageNet model is used to extract feature maps from clean and attacked images. A fuzzification network then converts these feature maps into fuzzy sets of difference degrees between clean and attacked images. The fuzzy rules determine the detection boundaries, and in the defuzzification layer the fuzzy prediction is mapped back into crisp model predictions for the images. The loss between the prediction and the label tunes the rules to train the fuzzy detector. We show that the fuzzy rule-based network learns richer feature information than binary outputs and achieves an overall performance gain. Our experiments, conducted over a wide range of images, show that the proposed method consistently outperforms conventional crisp set training in adversarial attack detection with various fuzzy system-based neural networks.

AB - Deep learning-based methods have proved useful for adversarial attack detection. However, conventional detection algorithms rely on crisp set theory for the classification boundary and therefore cannot represent vague concepts. Motivated by the recent success of fuzzy systems, we propose a fuzzy rule-based neural network to improve adversarial attack detection accuracy. A pre-trained ImageNet model is used to extract feature maps from clean and attacked images. A fuzzification network then converts these feature maps into fuzzy sets of difference degrees between clean and attacked images. The fuzzy rules determine the detection boundaries, and in the defuzzification layer the fuzzy prediction is mapped back into crisp model predictions for the images. The loss between the prediction and the label tunes the rules to train the fuzzy detector. We show that the fuzzy rule-based network learns richer feature information than binary outputs and achieves an overall performance gain. Our experiments, conducted over a wide range of images, show that the proposed method consistently outperforms conventional crisp set training in adversarial attack detection with various fuzzy system-based neural networks.

U2 - 10.1109/SSCI52147.2023.10372061

DO - 10.1109/SSCI52147.2023.10372061

M3 - Conference contribution/Paper

SP - 306

EP - 311

BT - 2023 IEEE Symposium Series on Computational Intelligence

PB - IEEE

CY - Mexico

T2 - 2023 IEEE Symposium Series on Computational Intelligence, SSCI 2023

Y2 - 5 December 2023 through 8 December 2023

ER -
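
The abstract above describes a fuzzification → fuzzy rules → defuzzification pipeline on top of pre-trained feature maps. The sketch below is a minimal, hypothetical illustration of that general architecture, not the authors' implementation: all class, parameter, and variable names are assumptions, Gaussian membership functions and a product t-norm are one common choice for the fuzzification and rule layers, and the crisp clean/attacked decision is trained with a binary cross-entropy loss as the abstract's loss-driven rule tuning suggests.

```python
# Hypothetical sketch of a fuzzy rule-based detector head (not the paper's code).
import torch
import torch.nn as nn


class FuzzyDetectorHead(nn.Module):
    def __init__(self, feat_dim: int, n_rules: int = 16):
        super().__init__()
        # Fuzzification: one Gaussian membership function per (rule, feature) pair.
        self.centres = nn.Parameter(torch.randn(n_rules, feat_dim))
        self.log_sigma = nn.Parameter(torch.zeros(n_rules, feat_dim))
        # Defuzzification: map normalised rule firing strengths to a crisp logit.
        self.defuzz = nn.Linear(n_rules, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, feat_dim) pooled feature maps from a pre-trained backbone.
        diff = feats.unsqueeze(1) - self.centres            # (batch, n_rules, feat_dim)
        sigma = self.log_sigma.exp()
        membership = torch.exp(-0.5 * (diff / sigma) ** 2)  # memberships in [0, 1]
        # Rule firing strength: product t-norm over features, computed in log space.
        log_firing = membership.clamp_min(1e-8).log().sum(dim=-1)  # (batch, n_rules)
        weights = torch.softmax(log_firing, dim=-1)          # normalised rule strengths
        return self.defuzz(weights).squeeze(-1)              # crisp attacked-vs-clean logit


# Usage: score pooled backbone features and tune the rules with a BCE loss
# against clean (0) / attacked (1) labels.
if __name__ == "__main__":
    head = FuzzyDetectorHead(feat_dim=512)
    feats = torch.randn(8, 512)                  # stand-in for pooled ImageNet features
    labels = torch.randint(0, 2, (8,)).float()
    loss = nn.BCEWithLogitsLoss()(head(feats), labels)
    loss.backward()
```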