
Electronic data

  • 1129Alharbi: Accepted author manuscript, 3.95 MB, PDF document


Robust Federated Learning Method against Data and Model Poisoning Attacks with Heterogeneous Data Distribution

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Forthcoming
Publication date: 15/07/2023
Host publication: 26th European Conference on Artificial Intelligence ECAI 2023 - IOS Press
Number of pages: 8
Original language: English
Event: 26th European Conference on Artificial Intelligence ECAI 2023 - Kraków, Poland
Duration: 30/09/2023 - 4/10/2023
Conference number: 26
https://ecai2023.eu/

Conference

Conference: 26th European Conference on Artificial Intelligence ECAI 2023
Abbreviated title: ECAI 23
Country/Territory: Poland
City: Kraków
Period: 30/09/23 - 4/10/23
Internet address: https://ecai2023.eu/

Abstract

Federated Learning (FL) is essential for building global models across distributed environments. However, it is significantly vulnerable to data and model poisoning attacks that can critically compromise the accuracy and reliability of the global model. These vulnerabilities become more pronounced in heterogeneous environments, where clients’ data distributions vary broadly, creating a challenging setting for maintaining model integrity. Furthermore, malicious attacks can exploit this heterogeneity, manipulating the learning process to degrade the model or even induce it to learn incorrect patterns. In response to these challenges, we introduce RFCL, a novel Robust Federated aggregation method that leverages CLustering and cosine similarity to select similar cluster models, effectively defending against data and model poisoning attacks even amidst high data heterogeneity. Our experiments assess RFCL’s performance under varying numbers of attackers and degrees of non-IID data. The findings reveal that RFCL outperforms existing robust aggregation methods and demonstrates the capability to defend against multiple attack types.
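
The abstract describes the approach only at a high level: cluster the client updates, then use cosine similarity to decide which cluster models to aggregate. The sketch below is an illustration of that general idea, not the authors' RFCL implementation; the choice of KMeans, the "largest cluster is benign" rule, the similarity threshold, and all function names are assumptions made for this example. The intuition for combining clustering with a similarity check is that a threshold-only defence can mis-flag benign clients whose updates differ simply because their local data distributions differ, which is presumably why the abstract emphasises robustness under heterogeneity.

    # Illustrative sketch only -- NOT the authors' RFCL code. Assumes each client
    # update has been flattened into a 1-D numpy vector. The clustering algorithm
    # (KMeans), the honest-majority "largest cluster" rule, and the similarity
    # threshold are hypothetical choices made for this example.
    import numpy as np
    from sklearn.cluster import KMeans

    def robust_aggregate(updates: np.ndarray, n_clusters: int = 3,
                         sim_threshold: float = 0.0) -> np.ndarray:
        """Cluster client updates, keep the largest cluster, then drop members
        whose cosine similarity to that cluster's centroid is too low."""
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(updates)
        # Honest-majority assumption: treat the largest cluster as benign.
        largest = np.argmax(np.bincount(labels, minlength=n_clusters))
        members = updates[labels == largest]
        centroid = members.mean(axis=0)
        # Cosine similarity of each member to the cluster centroid.
        sims = members @ centroid / (
            np.linalg.norm(members, axis=1) * np.linalg.norm(centroid) + 1e-12)
        kept = members[sims > sim_threshold]
        return kept.mean(axis=0) if len(kept) else centroid

    # Toy example: 8 honest clients plus 2 sign-flipping (model poisoning) clients.
    rng = np.random.default_rng(0)
    honest = rng.normal(1.0, 0.1, size=(8, 5))
    poisoned = -rng.normal(1.0, 0.1, size=(2, 5))
    print(robust_aggregate(np.vstack([honest, poisoned])))  # close to the honest mean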