Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - GradMDM
T2 - Adversarial Attack on Dynamic Networks
AU - Pan, Jianhong
AU - Foo, Lin Geng
AU - Zheng, Qichen
AU - Fan, Zhipeng
AU - Rahmani, Hossein
AU - Ke, Qiuhong
AU - Liu, Jun
N1 - ©2023 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
PY - 2023/9/1
Y1 - 2023/9/1
N2 - Dynamic neural networks can greatly reduce computation redundancy without compromising accuracy by adapting their structures based on the input. In this paper, we explore the robustness of dynamic neural networks against energy-oriented attacks targeted at reducing their efficiency. Specifically, we attack dynamic models with our novel algorithm GradMDM. GradMDM is a technique that adjusts the direction and the magnitude of the gradients to effectively find a small perturbation for each input that will activate more computational units of dynamic models during inference. We evaluate GradMDM on multiple datasets and dynamic models, where it outperforms previous energy-oriented attack techniques, significantly increasing computation complexity while reducing the perceptibility of the perturbations.
AB - Dynamic neural networks can greatly reduce computation redundancy without compromising accuracy by adapting their structures based on the input. In this paper, we explore the robustness of dynamic neural networks against energy-oriented attacks targeted at reducing their efficiency. Specifically, we attack dynamic models with our novel algorithm GradMDM. GradMDM is a technique that adjusts the direction and the magnitude of the gradients to effectively find a small perturbation for each input that will activate more computational units of dynamic models during inference. We evaluate GradMDM on multiple datasets and dynamic models, where it outperforms previous energy-oriented attack techniques, significantly increasing computation complexity while reducing the perceptibility of the perturbations.
U2 - 10.1109/TPAMI.2023.3263619
DO - 10.1109/TPAMI.2023.3263619
M3 - Journal article
VL - 45
SP - 11374
EP - 11381
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
SN - 0162-8828
IS - 9
ER -