Final published version
Licence: CC BY: Creative Commons Attribution 4.0 International License
Research output: Contribution to Journal/Magazine › Review article › peer-review
TY - JOUR
T1 - Ensuring User Privacy and Model Security via Machine Unlearning
T2 - A Review
AU - Tang, Y.
AU - Cai, Z.
AU - Liu, Q.
AU - Zhou, T.
AU - Ni, Q.
PY - 2023/11/29
Y1 - 2023/11/29
N2 - As an emerging discipline, machine learning has been widely applied in artificial intelligence, education, meteorology, and other fields. Training a machine learning model requires large amounts of real-world data, which inevitably involves user privacy. Moreover, by polluting the training data, a malicious adversary can poison the model and thereby compromise its security. Data providers expect model trainers to prove that their data is kept confidential, and trainers may be required to withdraw that data when this trust collapses. Likewise, when trainers discover crafted poisoned data after training, they want to forget the injected data so as to restore the model's security. We therefore focus on forgetting systems, whose underlying process we call machine unlearning, which can forget specific data completely and efficiently. In this paper, we present the first comprehensive survey of this field. We summarize and categorize existing machine unlearning methods according to their characteristics and analyze the relationship between machine unlearning and related fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly summarize existing research directions.
AB - As an emerging discipline, machine learning has been widely applied in artificial intelligence, education, meteorology, and other fields. Training a machine learning model requires large amounts of real-world data, which inevitably involves user privacy. Moreover, by polluting the training data, a malicious adversary can poison the model and thereby compromise its security. Data providers expect model trainers to prove that their data is kept confidential, and trainers may be required to withdraw that data when this trust collapses. Likewise, when trainers discover crafted poisoned data after training, they want to forget the injected data so as to restore the model's security. We therefore focus on forgetting systems, whose underlying process we call machine unlearning, which can forget specific data completely and efficiently. In this paper, we present the first comprehensive survey of this field. We summarize and categorize existing machine unlearning methods according to their characteristics and analyze the relationship between machine unlearning and related fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly summarize existing research directions.
KW - Machine learning
KW - machine unlearning
KW - privacy protection
KW - trusted data deletion
U2 - 10.32604/cmc.2023.032307
DO - 10.32604/cmc.2023.032307
M3 - Review article
VL - 77
SP - 2645
EP - 2656
JO - Computers, Materials and Continua
JF - Computers, Materials and Continua
SN - 1546-2218
IS - 2
ER -