Robust Knowledge Distillation in Federated Learning: Counteracting Backdoor Attacks

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Forthcoming

Standard

Robust Knowledge Distillation in Federated Learning: Counteracting Backdoor Attacks. / Alharbi, Ebtisaam; Soriano Marcolino, Leandro; Ni, Qiang et al.
Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, 2024.

Harvard

Alharbi, E, Soriano Marcolino, L, Ni, Q & Gouglidis, A 2024, Robust Knowledge Distillation in Federated Learning: Counteracting Backdoor Attacks. in Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE.

APA

Alharbi, E., Soriano Marcolino, L., Ni, Q., & Gouglidis, A. (in press). Robust Knowledge Distillation in Federated Learning: Counteracting Backdoor Attacks. In Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE.

Vancouver

Alharbi E, Soriano Marcolino L, Ni Q, Gouglidis A. Robust Knowledge Distillation in Federated Learning: Counteracting Backdoor Attacks. In: Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE. 2024.

Author

Alharbi, Ebtisaam ; Soriano Marcolino, Leandro ; Ni, Qiang et al. / Robust Knowledge Distillation in Federated Learning : Counteracting Backdoor Attacks. Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, 2024.

Bibtex

@inproceedings{129f48abe145422fb0adccecf26e5911,
title = "Robust Knowledge Distillation in Federated Learning: Counteracting Backdoor Attacks",
abstract = "Federated Learning (FL) enables collaborative model training across multiple devices while preserving data privacy. However, it remains susceptible to backdoor attacks, where malicious participants can compromise the global model. Existing defence methods are limited by strict assumptions on data heterogeneity (Non-Independent and Identically Distributed data) and the proportion of malicious clients, reducing their practicality and effectiveness. To overcome these limitations, we propose Robust Knowledge Distillation (RKD), a novel defence mechanism that enhances model integrity without relying on restrictive assumptions. RKD integrates clustering and model selection techniques to identify and filter out malicious updates, forming a reliable ensemble of models. It then employs knowledge distillation to transfer the collective insights from this ensemble to a global model. Extensive evaluations demonstrate that RKD effectively mitigates backdoor threats while maintaining high model performance, outperforming current state-of-the-art defence methods across various scenarios.",
author = "Ebtisaam Alharbi and {Soriano Marcolino}, Leandro and Qiang Ni and Antonios Gouglidis",
year = "2024",
month = dec,
day = "12",
language = "English",
booktitle = "Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)",
publisher = "IEEE",
}

RIS

TY - GEN

T1 - Robust Knowledge Distillation in Federated Learning

T2 - Counteracting Backdoor Attacks

AU - Alharbi, Ebtisaam

AU - Soriano Marcolino, Leandro

AU - Ni, Qiang

AU - Gouglidis, Antonios

PY - 2024/12/12

Y1 - 2024/12/12

N2 - Federated Learning (FL) enables collaborative model training across multiple devices while preserving data privacy. However, it remains susceptible to backdoor attacks, where malicious participants can compromise the global model. Existing defence methods are limited by strict assumptions on data heterogeneity (Non-Independent and Identically Distributed data) and the proportion of malicious clients, reducing their practicality and effectiveness. To overcome these limitations, we propose Robust Knowledge Distillation (RKD), a novel defence mechanism that enhances model integrity without relying on restrictive assumptions. RKD integrates clustering and model selection techniques to identify and filter out malicious updates, forming a reliable ensemble of models. It then employs knowledge distillation to transfer the collective insights from this ensemble to a global model. Extensive evaluations demonstrate that RKD effectively mitigates backdoor threats while maintaining high model performance, outperforming current state-of-the-art defence methods across various scenarios.

AB - Federated Learning (FL) enables collaborative model training across multiple devices while preserving data privacy. However, it remains susceptible to backdoor attacks, where malicious participants can compromise the global model. Existing defence methods are limited by strict assumptions on data heterogeneity (Non-Independent and Identically Distributed data) and the proportion of malicious clients, reducing their practicality and effectiveness. To overcome these limitations, we propose Robust Knowledge Distillation (RKD), a novel defence mechanism that enhances model integrity without relying on restrictive assumptions. RKD integrates clustering and model selection techniques to identify and filter out malicious updates, forming a reliable ensemble of models. It then employs knowledge distillation to transfer the collective insights from this ensemble to a global model. Extensive evaluations demonstrate that RKD effectively mitigates backdoor threats while maintaining high model performance, outperforming current state-of-the-art defence methods across various scenarios.

M3 - Conference contribution/Paper

BT - Proceedings of the IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)

PB - IEEE

ER -
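
The defence summarized in the abstract, clustering and filtering client updates and then distilling a trusted ensemble into the global model, can be illustrated with a toy sketch. This is not the paper's algorithm: it substitutes a simple distance-to-median filter for RKD's clustering and model-selection step, and the names `filter_updates` and `soft_targets` are hypothetical.

```python
# Illustrative toy only -- NOT the RKD algorithm from the paper.
# Step 1: drop outlying client updates (stand-in for clustering/selection).
# Step 2: average the surviving ensemble's logits into softened
#         distillation targets for the global model.
import numpy as np

def filter_updates(updates, keep_frac=0.8):
    """Keep the keep_frac of flattened client updates closest to the
    coordinate-wise median (a crude stand-in for RKD's clustering +
    model-selection step)."""
    median = np.median(updates, axis=0)
    dist = np.linalg.norm(updates - median, axis=1)
    k = max(1, int(round(keep_frac * len(updates))))
    keep = np.argsort(dist)[:k]
    return updates[keep]

def soft_targets(ensemble_logits, temperature=2.0):
    """Average the ensemble members' logits and soften them with a
    temperature, yielding teacher targets for knowledge distillation."""
    mean = ensemble_logits.mean(axis=0) / temperature
    e = np.exp(mean - mean.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy round: 8 benign updates near 0, 2 poisoned updates far away.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(8, 4))
poisoned = np.full((2, 4), 5.0)
kept = filter_updates(np.vstack([benign, poisoned]), keep_frac=0.8)
print(kept.shape)  # (8, 4) -- both poisoned updates are dropped
```

In the paper's setting the filtering operates on model updates and the distillation trains the global model against the ensemble's soft predictions; here both are reduced to small numpy arrays purely to show the data flow.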