Ensuring User Privacy and Model Security via Machine Unlearning: A Review

Research output: Contribution to Journal/Magazine › Review article › peer-review

Published

Standard

Ensuring User Privacy and Model Security via Machine Unlearning: A Review. / Tang, Y.; Cai, Z.; Liu, Q. et al.
In: Computers, Materials and Continua, Vol. 77, No. 2, 29.11.2023, p. 2645-2656.

Harvard

Tang, Y, Cai, Z, Liu, Q, Zhou, T & Ni, Q 2023, 'Ensuring User Privacy and Model Security via Machine Unlearning: A Review', Computers, Materials and Continua, vol. 77, no. 2, pp. 2645-2656. https://doi.org/10.32604/cmc.2023.032307

APA

Tang, Y., Cai, Z., Liu, Q., Zhou, T., & Ni, Q. (2023). Ensuring User Privacy and Model Security via Machine Unlearning: A Review. Computers, Materials and Continua, 77(2), 2645-2656. https://doi.org/10.32604/cmc.2023.032307

Vancouver

Tang Y, Cai Z, Liu Q, Zhou T, Ni Q. Ensuring User Privacy and Model Security via Machine Unlearning: A Review. Computers, Materials and Continua. 2023 Nov 29;77(2):2645-2656. doi: 10.32604/cmc.2023.032307

Author

Tang, Y.; Cai, Z.; Liu, Q. et al. / Ensuring User Privacy and Model Security via Machine Unlearning: A Review. In: Computers, Materials and Continua. 2023; Vol. 77, No. 2. pp. 2645-2656.

Bibtex

@article{e9ee5669d6aa4f1ebfbe2fca970da896,
title = "Ensuring User Privacy and Model Security via Machine Unlearning: A Review",
abstract = "As an emerging discipline, machine learning has been widely applied in artificial intelligence, education, meteorology, and other fields. Training machine learning models requires a large amount of real-world data, which inevitably involves user privacy. Moreover, by polluting the training data, a malicious adversary can poison the model and thus compromise model security. Data providers expect the model trainer to prove the confidentiality of the model to them, and the trainer will be required to withdraw data when that trust collapses. Likewise, trainers wish to forget injected data and regain security when crafted poisoned data is discovered after model training. We therefore focus on forgetting systems, whose process we call machine unlearning, which are capable of forgetting specific data entirely and efficiently. In this paper, we present the first comprehensive survey of this realm. We summarize and categorize existing machine unlearning methods based on their characteristics and analyze the relation between machine unlearning and relevant fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly summarize the existing research directions.",
keywords = "Machine learning, machine unlearning, privacy protection, trusted data deletion",
author = "Y. Tang and Z. Cai and Q. Liu and T. Zhou and Q. Ni",
year = "2023",
month = nov,
day = "29",
doi = "10.32604/cmc.2023.032307",
language = "English",
volume = "77",
pages = "2645--2656",
journal = "Computers, Materials and Continua",
issn = "1546-2218",
publisher = "Tech Science Press",
number = "2",
}

RIS

TY - JOUR

T1 - Ensuring User Privacy and Model Security via Machine Unlearning

T2 - A Review

AU - Tang, Y.

AU - Cai, Z.

AU - Liu, Q.

AU - Zhou, T.

AU - Ni, Q.

PY - 2023/11/29

Y1 - 2023/11/29

N2 - As an emerging discipline, machine learning has been widely applied in artificial intelligence, education, meteorology, and other fields. Training machine learning models requires a large amount of real-world data, which inevitably involves user privacy. Moreover, by polluting the training data, a malicious adversary can poison the model and thus compromise model security. Data providers expect the model trainer to prove the confidentiality of the model to them, and the trainer will be required to withdraw data when that trust collapses. Likewise, trainers wish to forget injected data and regain security when crafted poisoned data is discovered after model training. We therefore focus on forgetting systems, whose process we call machine unlearning, which are capable of forgetting specific data entirely and efficiently. In this paper, we present the first comprehensive survey of this realm. We summarize and categorize existing machine unlearning methods based on their characteristics and analyze the relation between machine unlearning and relevant fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly summarize the existing research directions.

AB - As an emerging discipline, machine learning has been widely applied in artificial intelligence, education, meteorology, and other fields. Training machine learning models requires a large amount of real-world data, which inevitably involves user privacy. Moreover, by polluting the training data, a malicious adversary can poison the model and thus compromise model security. Data providers expect the model trainer to prove the confidentiality of the model to them, and the trainer will be required to withdraw data when that trust collapses. Likewise, trainers wish to forget injected data and regain security when crafted poisoned data is discovered after model training. We therefore focus on forgetting systems, whose process we call machine unlearning, which are capable of forgetting specific data entirely and efficiently. In this paper, we present the first comprehensive survey of this realm. We summarize and categorize existing machine unlearning methods based on their characteristics and analyze the relation between machine unlearning and relevant fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly summarize the existing research directions.

KW - Machine learning

KW - machine unlearning

KW - privacy protection

KW - trusted data deletion

U2 - 10.32604/cmc.2023.032307

DO - 10.32604/cmc.2023.032307

M3 - Review article

VL - 77

SP - 2645

EP - 2656

JO - Computers, Materials and Continua

JF - Computers, Materials and Continua

SN - 1546-2218

IS - 2

ER -