
Electronic data

  • 1129Alharbi

    Accepted author manuscript, 3.95 MB, PDF document


Robust Federated Learning Method against Data and Model Poisoning Attacks with Heterogeneous Data Distribution

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Forthcoming

Standard

Robust Federated Learning Method against Data and Model Poisoning Attacks with Heterogeneous Data Distribution. / Alharbi, Ebtisaam; Soriano Marcolino, Leandro; Gouglidis, Antonios et al.
26th European Conference on Artificial Intelligence ECAI 2023 - IOS Press. 2023.


Harvard

Alharbi, E, Soriano Marcolino, L, Gouglidis, A & Ni, Q 2023, Robust Federated Learning Method against Data and Model Poisoning Attacks with Heterogeneous Data Distribution. in 26th European Conference on Artificial Intelligence ECAI 2023 - IOS Press. 26th European Conference on Artificial Intelligence ECAI 2023, Kraków, Poland, 30/09/23.


Bibtex

@inproceedings{a0443dc5fd8c4e3292846082954af135,
title = "Robust Federated Learning Method against Data and Model Poisoning Attacks with Heterogeneous Data Distribution",
abstract = "Federated Learning (FL) is essential for building global models across distributed environments. However, it is significantly vulnerable to data and model poisoning attacks that can critically compromise the accuracy and reliability of the global model. These vulnerabilities become more pronounced in heterogeneous environments, where clients{\textquoteright} data distributions vary broadly, creating a challenging setting for maintaining model integrity. Furthermore, malicious attacks can exploit this heterogeneity, manipulating the learning process to degrade the model or even induce it to learn incorrect patterns. In response to these challenges, we introduce RFCL, a novel Robust Federated aggregation method that leverages CLustering and cosine similarity to select similar cluster models, effectively defending against data and model poisoning attacks even amidst high data heterogeneity. Our experiments assess RFCL{\textquoteright}s performance against various attacker numbers and Non-IID degrees. The findings reveal that RFCL outperforms existing robust aggregation methods and demonstrates the capability to defend against multiple attack types.",
author = "Ebtisaam Alharbi and {Soriano Marcolino}, Leandro and Antonios Gouglidis and Qiang Ni",
year = "2023",
month = jul,
day = "15",
language = "English",
booktitle = "26th European Conference on Artificial Intelligence ECAI 2023 - IOS Press",
note = "26th European Conference on Artificial Intelligence ECAI 2023, ECAI 23 ; Conference date: 30-09-2023 Through 04-10-2023",
url = "https://ecai2023.eu/",

}

RIS

TY - GEN

T1 - Robust Federated Learning Method against Data and Model Poisoning Attacks with Heterogeneous Data Distribution

AU - Alharbi, Ebtisaam

AU - Soriano Marcolino, Leandro

AU - Gouglidis, Antonios

AU - Ni, Qiang

N1 - Conference code: 26

PY - 2023/7/15

Y1 - 2023/7/15

N2 - Federated Learning (FL) is essential for building global models across distributed environments. However, it is significantly vulnerable to data and model poisoning attacks that can critically compromise the accuracy and reliability of the global model. These vulnerabilities become more pronounced in heterogeneous environments, where clients’ data distributions vary broadly, creating a challenging setting for maintaining model integrity. Furthermore, malicious attacks can exploit this heterogeneity, manipulating the learning process to degrade the model or even induce it to learn incorrect patterns. In response to these challenges, we introduce RFCL, a novel Robust Federated aggregation method that leverages CLustering and cosine similarity to select similar cluster models, effectively defending against data and model poisoning attacks even amidst high data heterogeneity. Our experiments assess RFCL’s performance against various attacker numbers and Non-IID degrees. The findings reveal that RFCL outperforms existing robust aggregation methods and demonstrates the capability to defend against multiple attack types.

AB - Federated Learning (FL) is essential for building global models across distributed environments. However, it is significantly vulnerable to data and model poisoning attacks that can critically compromise the accuracy and reliability of the global model. These vulnerabilities become more pronounced in heterogeneous environments, where clients’ data distributions vary broadly, creating a challenging setting for maintaining model integrity. Furthermore, malicious attacks can exploit this heterogeneity, manipulating the learning process to degrade the model or even induce it to learn incorrect patterns. In response to these challenges, we introduce RFCL, a novel Robust Federated aggregation method that leverages CLustering and cosine similarity to select similar cluster models, effectively defending against data and model poisoning attacks even amidst high data heterogeneity. Our experiments assess RFCL’s performance against various attacker numbers and Non-IID degrees. The findings reveal that RFCL outperforms existing robust aggregation methods and demonstrates the capability to defend against multiple attack types.

M3 - Conference contribution/Paper

BT - 26th European Conference on Artificial Intelligence ECAI 2023 - IOS Press

T2 - 26th European Conference on Artificial Intelligence ECAI 2023

Y2 - 30 September 2023 through 4 October 2023

ER -
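
To make the abstract's core idea concrete: RFCL is described as a robust federated aggregation method that clusters client models and uses cosine similarity to select similar cluster models before aggregating. Below is a minimal illustrative sketch of such a clustering-plus-cosine-similarity aggregation step. The choice of k-means, the cluster count, and the keep-the-most-similar-half selection rule are assumptions made for illustration, not the authors' exact algorithm.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def robust_aggregate(client_updates, n_clusters=3):
    """Aggregate flattened client model updates (one 1-D array per client)."""
    X = np.vstack(client_updates)

    # Step 1 (assumed): group client updates into clusters with k-means.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    cluster_means = np.vstack(
        [X[labels == k].mean(axis=0) for k in range(n_clusters)]
    )

    # Step 2: score each cluster model by its average cosine similarity
    # to the other cluster models; poisoned updates tend to point away
    # from the honest majority, so their clusters score low.
    sim = cosine_similarity(cluster_means)
    np.fill_diagonal(sim, 0.0)
    scores = sim.sum(axis=1) / max(1, n_clusters - 1)

    # Step 3 (assumed rule): keep the most mutually similar half of the
    # clusters and average their means into the new global update.
    keep = np.argsort(scores)[-max(1, n_clusters // 2):]
    return cluster_means[keep].mean(axis=0)

# Example: eight honest clients plus two clients sending poisoned updates.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=10) for _ in range(8)]
poisoned = [rng.normal(-5.0, 0.1, size=10) for _ in range(2)]
global_update = robust_aggregate(honest + poisoned)  # tracks the honest mean

Averaging within clusters before comparing them is what a scheme of this shape contributes under high data heterogeneity: individual Non-IID clients may diverge from one another, but honest cluster means still align more closely with each other than with poisoned ones.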