Electronic data

  • Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 644 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:

Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Wentao Liu
  • Xiaolong Xu
  • Dejuan Li
  • Lianyong Qi
  • Fei Dai
  • Wanchun Dou
  • Qiang Ni
Journal publication date: 15/04/2023
Journal: IEEE Internet of Things Journal
Issue number: 8
Volume: 10
Number of pages: 13
Pages (from-to): 7343-7355
Publication status: Published
Early online date: 14/12/2022
Original language: English

Abstract

Benefiting from the powerful data analysis and prediction capabilities of artificial intelligence (AI), data generated at the edge is often transferred to the cloud center for centralized training to obtain an accurate model. To resist the risk of privacy leakage caused by frequent data transmission between the edge and the cloud, federated learning (FL) is applied in the edge paradigm: instead of transferring raw data, each edge server (ES) uploads its locally updated model to the central server for aggregation. However, an adversarial ES can infer the updates of other ESs from the aggregated model, and those updates may still expose characteristics of the other ESs' data. Moreover, adversarial ESs may disrupt the entire aggregation by uploading malicious updates. In this paper, a privacy-preserving FL scheme with robust aggregation in edge computing, named FL-RAEC, is proposed. First, a hybrid privacy-preserving mechanism is constructed to preserve the integrity and privacy of the data uploaded by the ESs. For robust model aggregation, a phased aggregation strategy is proposed. In the first stage, autoencoder-based anomaly detection is performed and some ESs are selected for anonymous trust verification. In the next stage, the trust score of each ES is assessed via multiple rounds of random verification to identify malicious participants. Finally, FL-RAEC is evaluated in detail, showing that it achieves strong robustness and high accuracy under different attacks.
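The abstract only summarizes the scheme, but the two-stage idea can be illustrated with a small sketch. The following Python snippet simulates one aggregation round in the spirit of FL-RAEC's phased strategy: a small autoencoder, fitted on assumed historical benign updates, screens the current uploads by reconstruction error, and a toy random-verification loop then adjusts per-ES trust scores before trust-weighted averaging. All dimensions, thresholds, and the trust-update rule are illustrative assumptions; the paper's hybrid privacy-preserving mechanism and anonymous trust verification protocol are not reproduced here.

```python
# Hypothetical sketch of one FL-RAEC-style aggregation round:
# (1) autoencoder-based anomaly screening of uploaded updates,
# (2) trust scores refined by random verification, then trust-weighted averaging.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
DIM, N_ES, N_MAL = 64, 10, 2          # update size / edge servers / attackers (assumed)

# Simulated uploads: honest updates cluster around a shared direction,
# malicious ones are large random vectors (a crude poisoning model).
base = rng.normal(0.0, 1.0, DIM)
honest_hist = base + rng.normal(0.0, 0.1, (40, DIM))          # past benign updates (assumed)
honest_now = base + rng.normal(0.0, 0.1, (N_ES - N_MAL, DIM))
malicious_now = rng.normal(0.0, 5.0, (N_MAL, DIM))
updates = np.vstack([honest_now, malicious_now]).astype(np.float32)

# Stage 1: fit a small autoencoder on historical benign updates and flag
# current uploads whose reconstruction error is far above the benign range.
class AE(nn.Module):
    def __init__(self, dim, hidden=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)
    def forward(self, x):
        return self.dec(self.enc(x))

h = torch.from_numpy(honest_hist.astype(np.float32))
x = torch.from_numpy(updates)
ae = AE(DIM)
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(h), h)
    loss.backward()
    opt.step()

with torch.no_grad():
    hist_err = ((ae(h) - h) ** 2).mean(dim=1).numpy()
    cur_err = ((ae(x) - x) ** 2).mean(dim=1).numpy()
suspect = cur_err > hist_err.mean() + 3 * hist_err.std()

# Stage 2: several rounds of random verification adjust per-ES trust scores;
# here "verification" is simply distance to the median of unflagged updates.
trust = np.ones(N_ES)
ref = np.median(updates[~suspect], axis=0)
dists = np.linalg.norm(updates - ref, axis=1)
cutoff = 3.0 * np.median(dists)
for _ in range(5):
    for i in rng.choice(N_ES, size=3, replace=False):
        trust[i] *= 0.5 if dists[i] > cutoff else 1.05
trust = np.clip(trust, 0.0, 1.5)

# Final aggregation: drop flagged uploads, weight the rest by trust.
weights = trust * ~suspect
weights /= weights.sum()
aggregated = (weights[:, None] * updates).sum(axis=0)
print("flagged as anomalous:", np.where(suspect)[0])
print("aggregated update norm:", round(float(np.linalg.norm(aggregated)), 3))
```

In this sketch the autoencoder is trained only on presumed-benign historical updates; fitting it on the current (possibly poisoned) uploads could let high-magnitude malicious updates dominate the loss and mask their own anomaly scores.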
