
Unlearnable Examples Detection via Iterative Filtering

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Unlearnable Examples Detection via Iterative Filtering. / Yu, Yi; Zheng, Qichen; Yang, Siyuan et al.
Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings. ed. / Michael Wand; Kristína Malinovská; Jürgen Schmidhuber; Igor V. Tetko. Cham: Springer Science and Business Media Deutschland GmbH, 2024. p. 241-256 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 15025 LNCS).

Harvard

Yu, Y, Zheng, Q, Yang, S, Yang, W, Liu, J, Lu, S, Tan, YP, Lam, KY & Kot, A 2024, Unlearnable Examples Detection via Iterative Filtering. in M Wand, K Malinovská, J Schmidhuber & IV Tetko (eds), Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 15025 LNCS, Springer Science and Business Media Deutschland GmbH, Cham, pp. 241-256, 33rd International Conference on Artificial Neural Networks, ICANN 2024, Lugano, Switzerland, 17/09/24. https://doi.org/10.1007/978-3-031-72359-9_18

APA

Yu, Y., Zheng, Q., Yang, S., Yang, W., Liu, J., Lu, S., Tan, Y. P., Lam, K. Y., & Kot, A. (2024). Unlearnable Examples Detection via Iterative Filtering. In M. Wand, K. Malinovská, J. Schmidhuber, & I. V. Tetko (Eds.), Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings (pp. 241-256). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 15025 LNCS). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-72359-9_18

Vancouver

Yu Y, Zheng Q, Yang S, Yang W, Liu J, Lu S et al. Unlearnable Examples Detection via Iterative Filtering. In Wand M, Malinovská K, Schmidhuber J, Tetko IV, editors, Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings. Cham: Springer Science and Business Media Deutschland GmbH. 2024. p. 241-256. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-031-72359-9_18

Author

Yu, Yi ; Zheng, Qichen ; Yang, Siyuan et al. / Unlearnable Examples Detection via Iterative Filtering. Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings. editor / Michael Wand ; Kristína Malinovská ; Jürgen Schmidhuber ; Igor V. Tetko. Cham : Springer Science and Business Media Deutschland GmbH, 2024. pp. 241-256 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).

Bibtex

@inproceedings{df466826cef645d0b6d1af2eb2d66579,
title = "Unlearnable Examples Detection via Iterative Filtering",
abstract = "Deep neural networks are known to be vulnerable to data poisoning attacks. Recently, a specific type of data poisoning attack known as an availability attack has prevented data from being usable for model learning by adding imperceptible perturbations to images. Detecting such poisoned samples, also known as Unlearnable Examples (UEs), in a mixed dataset is therefore both valuable and challenging. In response, we propose an Iterative Filtering approach for UE identification. This method leverages the distinction between inherent semantic mapping rules and shortcuts, without requiring any additional information. We verify that when a classifier is trained on a mixed dataset containing both UEs and clean data, the model adapts to the UEs much faster than to the clean data. Exploiting this accuracy gap between training on clean and poisoned samples, we employ a model that misclassifies clean samples while correctly identifying the poisoned ones. The incorporation of additional classes and iterative refinement enhances the model{\textquoteright}s ability to differentiate between clean and poisoned samples. Extensive experiments demonstrate the superiority of our method over state-of-the-art detection approaches across various attacks, datasets, and poison ratios, significantly reducing the Half Total Error Rate (HTER) compared to existing methods.",
keywords = "Detection, Unlearnable Examples",
author = "Yi Yu and Qichen Zheng and Siyuan Yang and Wenhan Yang and Jun Liu and Shijian Lu and Tan, {Yap Peng} and Lam, {Kwok Yan} and Alex Kot",
year = "2024",
month = sep,
day = "18",
doi = "10.1007/978-3-031-72359-9_18",
language = "English",
isbn = "9783031723582",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "241--256",
editor = "Michael Wand and Krist{\'i}na Malinovsk{\'a} and J{\"u}rgen Schmidhuber and Tetko, {Igor V.}",
booktitle = "Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings",
address = "Germany",
note = "33rd International Conference on Artificial Neural Networks, ICANN 2024 ; Conference date: 17-09-2024 Through 20-09-2024",

}

RIS

TY - GEN

T1 - Unlearnable Examples Detection via Iterative Filtering

AU - Yu, Yi

AU - Zheng, Qichen

AU - Yang, Siyuan

AU - Yang, Wenhan

AU - Liu, Jun

AU - Lu, Shijian

AU - Tan, Yap Peng

AU - Lam, Kwok Yan

AU - Kot, Alex

PY - 2024/9/18

Y1 - 2024/9/18

N2 - Deep neural networks are known to be vulnerable to data poisoning attacks. Recently, a specific type of data poisoning attack known as an availability attack has prevented data from being usable for model learning by adding imperceptible perturbations to images. Detecting such poisoned samples, also known as Unlearnable Examples (UEs), in a mixed dataset is therefore both valuable and challenging. In response, we propose an Iterative Filtering approach for UE identification. This method leverages the distinction between inherent semantic mapping rules and shortcuts, without requiring any additional information. We verify that when a classifier is trained on a mixed dataset containing both UEs and clean data, the model adapts to the UEs much faster than to the clean data. Exploiting this accuracy gap between training on clean and poisoned samples, we employ a model that misclassifies clean samples while correctly identifying the poisoned ones. The incorporation of additional classes and iterative refinement enhances the model’s ability to differentiate between clean and poisoned samples. Extensive experiments demonstrate the superiority of our method over state-of-the-art detection approaches across various attacks, datasets, and poison ratios, significantly reducing the Half Total Error Rate (HTER) compared to existing methods.

AB - Deep neural networks are known to be vulnerable to data poisoning attacks. Recently, a specific type of data poisoning attack known as an availability attack has prevented data from being usable for model learning by adding imperceptible perturbations to images. Detecting such poisoned samples, also known as Unlearnable Examples (UEs), in a mixed dataset is therefore both valuable and challenging. In response, we propose an Iterative Filtering approach for UE identification. This method leverages the distinction between inherent semantic mapping rules and shortcuts, without requiring any additional information. We verify that when a classifier is trained on a mixed dataset containing both UEs and clean data, the model adapts to the UEs much faster than to the clean data. Exploiting this accuracy gap between training on clean and poisoned samples, we employ a model that misclassifies clean samples while correctly identifying the poisoned ones. The incorporation of additional classes and iterative refinement enhances the model’s ability to differentiate between clean and poisoned samples. Extensive experiments demonstrate the superiority of our method over state-of-the-art detection approaches across various attacks, datasets, and poison ratios, significantly reducing the Half Total Error Rate (HTER) compared to existing methods.

KW - Detection

KW - Unlearnable Examples

U2 - 10.1007/978-3-031-72359-9_18

DO - 10.1007/978-3-031-72359-9_18

M3 - Conference contribution/Paper

AN - SCOPUS:85205384828

SN - 9783031723582

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 241

EP - 256

BT - Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings

A2 - Wand, Michael

A2 - Malinovská, Kristína

A2 - Schmidhuber, Jürgen

A2 - Tetko, Igor V.

PB - Springer Science and Business Media Deutschland GmbH

CY - Cham

T2 - 33rd International Conference on Artificial Neural Networks, ICANN 2024

Y2 - 17 September 2024 through 20 September 2024

ER -