
GDR-GMA: Machine Unlearning via Direction-Rectified and Magnitude-Adjusted Gradients

Research output: Contribution in Book/Report/Proceedings with ISBN/ISSN › Conference contribution/Paper › Peer-reviewed

Publication status: Published

Authors:
  • Shen Lin
  • Xiaoyu Zhang
  • Willy Susilo
  • Xiaofeng Chen
  • Jun Liu
Publication date: 28/10/2024
Host publication: Proceedings of the 32nd ACM International Conference on Multimedia
Place of publication: New York
Publisher: ACM
Pages: 9087-9095
Number of pages: 9
ISBN (print): 9798400706868
Original language: English

Publication series

Name: Proceedings of the 32nd ACM International Conference on Multimedia
Publisher: ACM

Abstract

As concerns over privacy protection grow and relevant laws come into effect, machine unlearning (MU) has emerged as a pivotal research area. Due to the complexity of the forgetting data distribution, sample-wise MU remains an open challenge. Gradient ascent, as the inverse of gradient descent, applies naturally to machine unlearning, which is itself the inverse process of machine learning. However, the straightforward gradient ascent MU method suffers from a trade-off among effectiveness, fidelity, and efficiency. In this work, we analyze the gradient ascent MU process from a multi-task learning (MTL) perspective. This perspective reveals two problems that cause the trade-off: the gradient direction problem and the gradient dominant problem. To address these problems, we propose a novel MU method, GDR-GMA, consisting of Gradient Direction Rectification (GDR) and Gradient Magnitude Adjustment (GMA). For the gradient direction problem, GDR rectifies the direction between conflicting gradients by projecting one gradient onto the plane orthogonal to the other. For the gradient dominant problem, GMA dynamically adjusts the magnitude of the update gradients by assigning a dynamic magnitude weight parameter to each. Furthermore, we evaluate GDR-GMA against several baseline methods in three sample-wise MU scenarios: random data forgetting, sub-class forgetting, and class forgetting. Extensive experimental results demonstrate the superior performance of GDR-GMA in effectiveness, fidelity, and efficiency.
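
The two operations named in the abstract correspond to standard gradient-surgery ideas from multi-task learning. Below is a minimal sketch in NumPy, assuming flattened gradient vectors; the function names, the inner-product conflict test, and the norm-ratio weight in adjust_magnitude are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def rectify_direction(g, g_conflict):
    # GDR-style step: if g conflicts with g_conflict (negative inner
    # product), project g onto the plane orthogonal to g_conflict,
    # removing the conflicting component.
    dot = np.dot(g, g_conflict)
    if dot < 0:
        g = g - dot / np.dot(g_conflict, g_conflict) * g_conflict
    return g

def adjust_magnitude(g_forget, g_retain):
    # GMA-style step: rebalance the two update gradients with a dynamic
    # weight so neither dominates; this norm-ratio weight is an assumed
    # placeholder for the paper's dynamic magnitude weight parameter.
    w = np.linalg.norm(g_retain) / (np.linalg.norm(g_forget) + 1e-12)
    return w * g_forget + g_retain

# Toy usage on 3-D "gradients": the ascent direction for the forgetting
# data versus the descent direction for the retained data.
g_forget = np.array([1.0, -2.0, 0.5])
g_retain = np.array([-0.5, 1.0, 1.0])
g_forget = rectify_direction(g_forget, g_retain)
update = adjust_magnitude(g_forget, g_retain)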