
GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients. / Lin, Shen; Zhang, Xiaoyu; Susilo, Willy et al.
Proceedings of the 32nd ACM International Conference on Multimedia. New York: ACM, 2024. p. 9087-9095 (Proceedings of the 32nd ACM International Conference on Multimedia).

Harvard

Lin, S, Zhang, X, Susilo, W, Chen, X & Liu, J 2024, GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients. in Proceedings of the 32nd ACM International Conference on Multimedia. Proceedings of the 32nd ACM International Conference on Multimedia, ACM, New York, pp. 9087-9095. https://doi.org/10.1145/3664647.3680775

APA

Lin, S., Zhang, X., Susilo, W., Chen, X., & Liu, J. (2024). GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients. In Proceedings of the 32nd ACM International Conference on Multimedia (pp. 9087-9095). (Proceedings of the 32nd ACM International Conference on Multimedia). ACM. https://doi.org/10.1145/3664647.3680775

Vancouver

Lin S, Zhang X, Susilo W, Chen X, Liu J. GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients. In Proceedings of the 32nd ACM International Conference on Multimedia. New York: ACM. 2024. p. 9087-9095. (Proceedings of the 32nd ACM International Conference on Multimedia). doi: 10.1145/3664647.3680775

Author

Lin, Shen ; Zhang, Xiaoyu ; Susilo, Willy et al. / GDR-GMA : Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients. Proceedings of the 32nd ACM International Conference on Multimedia. New York : ACM, 2024. pp. 9087-9095 (Proceedings of the 32nd ACM International Conference on Multimedia).

Bibtex

@inproceedings{77897ee5e0f14497b34d34ea78334601,
title = "GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients",
abstract = "As concerns over privacy protection grow and relevant laws come into effect, machine unlearning (MU) has emerged as a pivotal research area. Due to the complexity of the forgetting data distribution, sample-wise MU remains an open challenge. Gradient ascent, as the inverse of gradient descent, is naturally applied to machine unlearning, which is itself the inverse process of machine learning. However, the straightforward gradient ascent MU method suffers from a trade-off between effectiveness, fidelity, and efficiency. In this work, we analyze the gradient ascent MU process from a multi-task learning (MTL) view. This perspective reveals two problems that cause the trade-off, i.e., the gradient direction problem and the gradient dominant problem. To address these problems, we propose a novel MU method, namely GDR-GMA, consisting of Gradient Direction Rectification (GDR) and Gradient Magnitude Adjustment (GMA). For the gradient direction problem, GDR rectifies the direction between the conflicting gradients by projecting a gradient onto the orthonormal plane of the conflicting gradient. For the gradient dominant problem, GMA dynamically adjusts the magnitude of the update gradients by assigning a dynamic magnitude weight parameter to the update gradients. Furthermore, we evaluate GDR-GMA against several baseline methods in three sample-wise MU scenarios: random data forgetting, sub-class forgetting, and class forgetting. Extensive experimental results demonstrate the superior performance of GDR-GMA in effectiveness, fidelity, and efficiency.",
author = "Shen Lin and Xiaoyu Zhang and Willy Susilo and Xiaofeng Chen and Jun Liu",
year = "2024",
month = oct,
day = "28",
doi = "10.1145/3664647.3680775",
language = "English",
isbn = "9798400706868",
series = "Proceedings of the 32nd ACM International Conference on Multimedia",
publisher = "ACM",
pages = "9087--9095",
booktitle = "Proceedings of the 32nd ACM International Conference on Multimedia",

}
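The abstract describes two mechanisms: rectifying conflicting gradients by projecting one onto the plane orthogonal to the other, and reweighting gradient magnitudes so neither update dominates. The paper's exact formulation is in the DOI-linked text; as a rough illustration of the general idea, a minimal NumPy sketch (PCGrad-style conflict projection, plus an inverse-norm weighting that is an assumption here, not the paper's scheme) might look like:

```python
import numpy as np

def rectify_direction(g_forget: np.ndarray, g_retain: np.ndarray) -> np.ndarray:
    """If the two gradients conflict (negative inner product), project
    g_forget onto the plane orthogonal to g_retain, removing only the
    conflicting component. Non-conflicting gradients pass through unchanged."""
    dot = float(np.dot(g_forget, g_retain))
    if dot < 0:
        g_forget = g_forget - (dot / np.dot(g_retain, g_retain)) * g_retain
    return g_forget

def combine_with_magnitude_weights(g_forget: np.ndarray, g_retain: np.ndarray,
                                   eps: float = 1e-12) -> np.ndarray:
    """Combine the two update gradients with dynamic weights so the
    larger-norm gradient does not dominate the update. The inverse-norm
    weighting below is illustrative, not the paper's exact parameter."""
    n_f, n_r = np.linalg.norm(g_forget), np.linalg.norm(g_retain)
    w_f = n_r / (n_f + n_r + eps)  # smaller weight for the larger gradient
    return w_f * g_forget + (1.0 - w_f) * g_retain
```

After rectification, the projected forgetting gradient has zero inner product with the retaining gradient, so ascending on it no longer directly undoes descent on the retained data.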

RIS

TY - GEN

T1 - GDR-GMA

T2 - Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients

AU - Lin, Shen

AU - Zhang, Xiaoyu

AU - Susilo, Willy

AU - Chen, Xiaofeng

AU - Liu, Jun

PY - 2024/10/28

Y1 - 2024/10/28

N2 - As concerns over privacy protection grow and relevant laws come into effect, machine unlearning (MU) has emerged as a pivotal research area. Due to the complexity of the forgetting data distribution, sample-wise MU remains an open challenge. Gradient ascent, as the inverse of gradient descent, is naturally applied to machine unlearning, which is itself the inverse process of machine learning. However, the straightforward gradient ascent MU method suffers from a trade-off between effectiveness, fidelity, and efficiency. In this work, we analyze the gradient ascent MU process from a multi-task learning (MTL) view. This perspective reveals two problems that cause the trade-off, i.e., the gradient direction problem and the gradient dominant problem. To address these problems, we propose a novel MU method, namely GDR-GMA, consisting of Gradient Direction Rectification (GDR) and Gradient Magnitude Adjustment (GMA). For the gradient direction problem, GDR rectifies the direction between the conflicting gradients by projecting a gradient onto the orthonormal plane of the conflicting gradient. For the gradient dominant problem, GMA dynamically adjusts the magnitude of the update gradients by assigning a dynamic magnitude weight parameter to the update gradients. Furthermore, we evaluate GDR-GMA against several baseline methods in three sample-wise MU scenarios: random data forgetting, sub-class forgetting, and class forgetting. Extensive experimental results demonstrate the superior performance of GDR-GMA in effectiveness, fidelity, and efficiency.

AB - As concerns over privacy protection grow and relevant laws come into effect, machine unlearning (MU) has emerged as a pivotal research area. Due to the complexity of the forgetting data distribution, sample-wise MU remains an open challenge. Gradient ascent, as the inverse of gradient descent, is naturally applied to machine unlearning, which is itself the inverse process of machine learning. However, the straightforward gradient ascent MU method suffers from a trade-off between effectiveness, fidelity, and efficiency. In this work, we analyze the gradient ascent MU process from a multi-task learning (MTL) view. This perspective reveals two problems that cause the trade-off, i.e., the gradient direction problem and the gradient dominant problem. To address these problems, we propose a novel MU method, namely GDR-GMA, consisting of Gradient Direction Rectification (GDR) and Gradient Magnitude Adjustment (GMA). For the gradient direction problem, GDR rectifies the direction between the conflicting gradients by projecting a gradient onto the orthonormal plane of the conflicting gradient. For the gradient dominant problem, GMA dynamically adjusts the magnitude of the update gradients by assigning a dynamic magnitude weight parameter to the update gradients. Furthermore, we evaluate GDR-GMA against several baseline methods in three sample-wise MU scenarios: random data forgetting, sub-class forgetting, and class forgetting. Extensive experimental results demonstrate the superior performance of GDR-GMA in effectiveness, fidelity, and efficiency.

U2 - 10.1145/3664647.3680775

DO - 10.1145/3664647.3680775

M3 - Conference contribution/Paper

SN - 9798400706868

T3 - Proceedings of the 32nd ACM International Conference on Multimedia

SP - 9087

EP - 9095

BT - Proceedings of the 32nd ACM International Conference on Multimedia

PB - ACM

CY - New York

ER -