
Electronic data

  • Robust_FL_short_version_2_final

    Accepted author manuscript, 703 KB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI:


RAFL: A Robust and Adaptive Federated Meta-Learning Framework Against Adversaries

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 1/11/2023
Host publication: 2023 IEEE 20th International Conference on Mobile Ad Hoc and Smart Systems (MASS)
Publisher: IEEE
ISBN (electronic): 9798350324334
ISBN (print): 9798350324341
Original language: English

Abstract

With the emergence of data silos and growing privacy awareness, traditional centralized machine learning offers limited support. Federated learning (FL), a promising alternative, leverages distributed personalized datasets from multiple clients to train a shared global model in a privacy-preserving manner. However, FL systems are vulnerable both to attacker-controlled adversarial clients that upload unreliable model updates and to benign clients that unintentionally upload low-quality models; either degrades FL performance and reduces resilience to attacks. In this paper, we propose RAFL, a robust-by-design federated meta-learning framework capable of mitigating adversarial model updates on non-IID data. RAFL leverages 1) a residual rule-based detection method combined with a Variational AutoEncoder (VAE) learning-based detection method to distinguish adversarial clients from benign clients; 2) a similarity-based model aggregation method to reduce the likelihood of incorporating adversarial models from adversarial clients; and 3) multiple learning loops that collaboratively train multiple personalized detection models against adversaries. Experimental results demonstrate that our proposed FL framework is robust by design and outperforms other defensive methods against adversaries in terms of model accuracy and efficiency.
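To make the screening-plus-aggregation idea in the abstract concrete, the sketch below combines a residual rule-based filter (updates far from the coordinate-wise median are flagged) with similarity-weighted averaging over the surviving updates. This is a minimal NumPy sketch of the general pattern, not the authors' implementation: the VAE-based detector is omitted for brevity, and all names (`aggregate_updates`, `residual_flags`, the threshold `tau`) are hypothetical.

```python
import numpy as np

def residual_flags(updates: np.ndarray, tau: float = 3.0) -> np.ndarray:
    """Rule-based residual check: flag updates far from the coordinate-wise median.

    updates: (n_clients, n_params) array of flattened model updates.
    Returns a boolean mask; True means the update is treated as benign.
    """
    median = np.median(updates, axis=0)
    residuals = np.linalg.norm(updates - median, axis=1)
    # Scale the cutoff by the median residual so tau is unitless.
    cutoff = tau * np.median(residuals)
    return residuals <= cutoff

def cosine_weights(updates: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Similarity-based weighting: updates aligned with the reference direction
    (here, the mean of surviving updates) receive larger aggregation weights."""
    norms = np.linalg.norm(updates, axis=1) * np.linalg.norm(reference)
    sims = updates @ reference / np.clip(norms, 1e-12, None)
    sims = np.clip(sims, 0.0, None)  # ignore anti-aligned updates entirely
    total = sims.sum()
    return sims / total if total > 0 else np.full(len(updates), 1.0 / len(updates))

def aggregate_updates(updates: np.ndarray, tau: float = 3.0) -> np.ndarray:
    """Filter suspected adversarial updates, then similarity-weight the rest.

    A learned detector (e.g. a VAE scoring reconstruction error on each
    update) would slot in alongside residual_flags; it is left out here.
    """
    mask = residual_flags(updates, tau)
    survivors = updates[mask] if mask.any() else updates  # fail open if all flagged
    reference = survivors.mean(axis=0)
    weights = cosine_weights(survivors, reference)
    return weights @ survivors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = rng.normal(0.0, 0.1, size=(8, 100))    # 8 benign clients
    poisoned = rng.normal(5.0, 0.1, size=(2, 100))  # 2 adversarial clients
    agg = aggregate_updates(np.vstack([benign, poisoned]))
    print("aggregated update norm:", np.linalg.norm(agg))
```

In this toy run the two poisoned updates sit far from the median of the ten submissions, so the rule-based filter drops them before the weighted average is taken; the paper's full pipeline additionally uses the VAE detector and per-client learning loops described above.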