
Electronic data

  • 2025EbtisaamPhD

    Final published version, 14.1 MB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Text available via DOI: 10.17635/lancaster/thesis/2884


Robust federated learning framework for defending against malicious attacks

Research output: Thesis › Doctoral Thesis

Published

Standard

Robust federated learning framework for defending against malicious attacks. / Alharbi, Ebtisaam.
Lancaster University, 2025. 173 p.

Research output: Thesis › Doctoral Thesis


Vancouver

Alharbi E. Robust federated learning framework for defending against malicious attacks. Lancaster University, 2025. 173 p. doi: 10.17635/lancaster/thesis/2884


BibTeX

@phdthesis{0d977099173d44189ae14a4e9793f34e,
title = "Robust federated learning framework for defending against malicious attacks",
abstract = "Federated Learning (FL) has emerged as a decentralized machine learning paradigm that enables collaborative model training while preserving data privacy. However, its reliance on distributed and unverified client updates makes it highly vulnerable to adversarial attacks such as data poisoning, model poisoning, and backdoor attacks. These threats can degrade performance, compromise integrity, and introduce hidden malicious behaviors, raising serious concerns for FL deployment in safety-critical domains such as healthcare, finance, and IoT. Addressing these challenges requires defense mechanisms that are both effective and privacy-preserving.This thesis presents three novel defense frameworks that enhance the security and reliability of FL. First, we propose Robust Federated Clustering (RFCL), a multi-centre clustering-based aggregation strategy that groups client models by similarity to filter out adversarial updates. RFCL improves resilience to poisoning attacks under highly Non-IID (Non-independent and identically distributed) settings by isolating malicious updates while retaining benign diversity.Second, we introduce Robust Knowledge Distillation (RKD) to mitigate backdoor threats. RKD integrates unsupervised clustering, median model selection, and knowledge distillation to suppress compromised client updates during global aggregation. This approach enables robust learning without requiring access to labeled reference data.Third, we develop Synthetic Data-Driven Conformity Scoring for FL (SD-CSFL), an anomaly detection framework that uses synthetic calibration data, entropy-based nonconformity scoring, and adaptive thresholds to detect gradient manipulation and stealthy backdoors. SD-CSFL operates without accessing client data and remains effective in heterogeneous and adaptive attack scenarios.The proposed methods are evaluated on diverse FL benchmarks—MNIST, Fashion-MNIST, EMNIST, CIFAR-10, and Birds—across a broad spectrum of adversarial settings. Results demonstrate that RFCL, RKD, and SD-CSFL consistently outperform existing defenses, significantly improving FL robustness while preserving model performance and data privacy.",
author = "Ebtisaam Alharbi",
year = "2025",
doi = "10.17635/lancaster/thesis/2884",
language = "English",
publisher = "Lancaster University",
school = "Lancaster University",

}
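
The abstract above outlines RFCL as a multi-centre, clustering-based aggregation rule that groups client models by similarity and filters out adversarial updates. As a rough illustration of that idea only (the thesis implementation is not reproduced here; the function names, the k-means choice, and the majority-cluster heuristic are all assumptions), a minimal Python sketch might look like this:

    # Illustrative sketch of clustering-based robust aggregation; NOT the
    # RFCL implementation from the thesis. The k-means choice and the
    # majority-cluster rule are assumptions made for this example.
    import numpy as np
    from sklearn.cluster import KMeans

    def clustered_aggregate(client_updates, n_clusters=3):
        """Cluster flattened client update vectors by similarity, then
        average only the most populated cluster, discarding outlying
        (possibly poisoned) updates instead of trusting every client."""
        X = np.stack(client_updates)                    # (n_clients, n_params)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
        majority = np.bincount(labels).argmax()         # presumed-benign group
        return X[labels == majority].mean(axis=0)       # aggregated update

    # Example: eight benign updates near zero, two "poisoned" ones far away.
    rng = np.random.default_rng(0)
    benign = [rng.normal(0.0, 0.1, size=100) for _ in range(8)]
    poisoned = [rng.normal(5.0, 0.1, size=100) for _ in range(2)]
    aggregate = clustered_aggregate(benign + poisoned)
    print(np.abs(aggregate).max())  # small: the poisoned cluster was dropped

Note that the abstract calls RFCL multi-centre, so the actual method presumably retains several benign clusters to preserve non-IID diversity; this sketch collapses that to the simplest majority-cluster case.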

RIS

TY - THES

T1 - Robust federated learning framework for defending against malicious attacks

AU - Alharbi, Ebtisaam

PY - 2025

Y1 - 2025

N2 - Federated Learning (FL) has emerged as a decentralized machine learning paradigm that enables collaborative model training while preserving data privacy. However, its reliance on distributed and unverified client updates makes it highly vulnerable to adversarial attacks such as data poisoning, model poisoning, and backdoor attacks. These threats can degrade performance, compromise integrity, and introduce hidden malicious behaviors, raising serious concerns for FL deployment in safety-critical domains such as healthcare, finance, and IoT. Addressing these challenges requires defense mechanisms that are both effective and privacy-preserving. This thesis presents three novel defense frameworks that enhance the security and reliability of FL. First, we propose Robust Federated Clustering (RFCL), a multi-centre clustering-based aggregation strategy that groups client models by similarity to filter out adversarial updates. RFCL improves resilience to poisoning attacks under highly non-IID (non-independent and identically distributed) settings by isolating malicious updates while retaining benign diversity. Second, we introduce Robust Knowledge Distillation (RKD) to mitigate backdoor threats. RKD integrates unsupervised clustering, median model selection, and knowledge distillation to suppress compromised client updates during global aggregation. This approach enables robust learning without requiring access to labeled reference data. Third, we develop Synthetic Data-Driven Conformity Scoring for FL (SD-CSFL), an anomaly detection framework that uses synthetic calibration data, entropy-based nonconformity scoring, and adaptive thresholds to detect gradient manipulation and stealthy backdoors. SD-CSFL operates without accessing client data and remains effective in heterogeneous and adaptive attack scenarios. The proposed methods are evaluated on diverse FL benchmarks (MNIST, Fashion-MNIST, EMNIST, CIFAR-10, and Birds) across a broad spectrum of adversarial settings. Results demonstrate that RFCL, RKD, and SD-CSFL consistently outperform existing defenses, significantly improving FL robustness while preserving model performance and data privacy.

AB - Federated Learning (FL) has emerged as a decentralized machine learning paradigm that enables collaborative model training while preserving data privacy. However, its reliance on distributed and unverified client updates makes it highly vulnerable to adversarial attacks such as data poisoning, model poisoning, and backdoor attacks. These threats can degrade performance, compromise integrity, and introduce hidden malicious behaviors, raising serious concerns for FL deployment in safety-critical domains such as healthcare, finance, and IoT. Addressing these challenges requires defense mechanisms that are both effective and privacy-preserving. This thesis presents three novel defense frameworks that enhance the security and reliability of FL. First, we propose Robust Federated Clustering (RFCL), a multi-centre clustering-based aggregation strategy that groups client models by similarity to filter out adversarial updates. RFCL improves resilience to poisoning attacks under highly non-IID (non-independent and identically distributed) settings by isolating malicious updates while retaining benign diversity. Second, we introduce Robust Knowledge Distillation (RKD) to mitigate backdoor threats. RKD integrates unsupervised clustering, median model selection, and knowledge distillation to suppress compromised client updates during global aggregation. This approach enables robust learning without requiring access to labeled reference data. Third, we develop Synthetic Data-Driven Conformity Scoring for FL (SD-CSFL), an anomaly detection framework that uses synthetic calibration data, entropy-based nonconformity scoring, and adaptive thresholds to detect gradient manipulation and stealthy backdoors. SD-CSFL operates without accessing client data and remains effective in heterogeneous and adaptive attack scenarios. The proposed methods are evaluated on diverse FL benchmarks (MNIST, Fashion-MNIST, EMNIST, CIFAR-10, and Birds) across a broad spectrum of adversarial settings. Results demonstrate that RFCL, RKD, and SD-CSFL consistently outperform existing defenses, significantly improving FL robustness while preserving model performance and data privacy.

U2 - 10.17635/lancaster/thesis/2884

DO - 10.17635/lancaster/thesis/2884

M3 - Doctoral Thesis

PB - Lancaster University

ER -
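
Two other mechanisms named in the abstract can be sketched in the same hedged spirit. RKD is said to use median model selection before knowledge distillation; assuming client updates are flattened parameter vectors, the simplest robust reference of that kind is a coordinate-wise median, which a compromised minority of clients cannot shift:

    # Hypothetical median-selection step in the spirit of RKD; the thesis
    # pairs this with unsupervised clustering and knowledge distillation,
    # neither of which is reproduced here.
    import numpy as np

    def median_reference_model(client_updates):
        """Coordinate-wise median across client update vectors. As long as
        fewer than half of the clients are compromised, every coordinate
        of the result is anchored by benign values."""
        return np.median(np.stack(client_updates), axis=0)

SD-CSFL is described as combining synthetic calibration data, entropy-based nonconformity scoring, and adaptive thresholds. The sketch below assumes the nonconformity score is the mean prediction entropy on synthetic inputs and the adaptive threshold is an empirical quantile of calibration scores; both are illustrative guesses at details the abstract does not spell out:

    # Illustrative conformity-scoring pipeline in the spirit of SD-CSFL;
    # the scoring function and threshold rule are assumptions, not the
    # thesis's exact definitions.
    import numpy as np

    def prediction_entropy(probs, eps=1e-12):
        """Shannon entropy of each row of a (batch, classes) softmax output."""
        return -np.sum(probs * np.log(probs + eps), axis=1)

    def adaptive_threshold(calibration_scores, alpha=0.05):
        """(1 - alpha) empirical quantile of nonconformity scores measured
        on synthetic calibration data; scores above it count as anomalous."""
        return np.quantile(calibration_scores, 1 - alpha)

    def flag_clients(client_scores, threshold):
        """Indices of clients whose mean nonconformity exceeds the threshold."""
        return [i for i, s in enumerate(client_scores) if np.mean(s) > threshold]

Because scoring happens entirely on synthetic data held by the server, a check of this shape is consistent with the abstract's claim that SD-CSFL operates without accessing client data.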