
Electronic data

  • Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 644 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1109/jiot.2022.3229122


Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing. / Liu, Wentao; Xu, Xiaolong; Li, Dejuan et al.
In: IEEE Internet of Things Journal, Vol. 10, No. 8, 15.04.2023, p. 7343-7355.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Liu, W, Xu, X, Li, D, Qi, L, Dai, F, Dou, W & Ni, Q 2023, 'Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing', IEEE Internet of Things Journal, vol. 10, no. 8, pp. 7343-7355. https://doi.org/10.1109/jiot.2022.3229122

APA

Liu, W., Xu, X., Li, D., Qi, L., Dai, F., Dou, W., & Ni, Q. (2023). Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing. IEEE Internet of Things Journal, 10(8), 7343-7355. https://doi.org/10.1109/jiot.2022.3229122

Vancouver

Liu W, Xu X, Li D, Qi L, Dai F, Dou W et al. Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing. IEEE Internet of Things Journal. 2023 Apr 15;10(8):7343-7355. Epub 2022 Dec 14. doi: 10.1109/jiot.2022.3229122

Author

Liu, Wentao ; Xu, Xiaolong ; Li, Dejuan et al. / Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing. In: IEEE Internet of Things Journal. 2023 ; Vol. 10, No. 8. pp. 7343-7355.

Bibtex

@article{ac7d24674b014d9892d3579cee6b60d4,
title = "Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing",
abstract = "Benefiting from the powerful data analysis and prediction capabilities of artificial intelligence (AI), the data on the edge is often transferred to the cloud center for centralized training to obtain an accurate model. To resist the risk of privacy leakage due to frequent data transmission between the edge and the cloud, federated learning (FL) is engaged in the edge paradigm, uploading the model updated on the edge server (ES) to the central server for aggregation, instead of transferring data directly. However, the adversarial ES can infer the update of other ESs from the aggregated model and the update may still expose some characteristics of data of other ESs. Besides, there is a certain probability that the entire aggregation is disrupted by the adversarial ESs through uploading a malicious update. In this paper, a privacy-preserving FL scheme with robust aggregation in edge computing is proposed, named FL-RAEC. First, the hybrid privacy-preserving mechanism is constructed to preserve the integrity and privacy of the data uploaded by the ESs. For the robust model aggregation, a phased aggregation strategy is proposed. Specifically, anomaly detection based on autoencoder is performed while some ESs are selected for anonymous trust verification at the beginning. In the next stage, via multiple rounds of random verification, the trust score of each ES is assessed to identify the malicious participants. Eventually, FL-RAEC is evaluated in detail, depicting that FL-RAEC has strong robustness and high accuracy under different attacks.",
keywords = "Computer Networks and Communications, Computer Science Applications, Hardware and Architecture, Information Systems, Signal Processing",
author = "Wentao Liu and Xiaolong Xu and Dejuan Li and Lianyong Qi and Fei Dai and Wanchun Dou and Qiang Ni",
note = "{\textcopyright}2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. ",
year = "2023",
month = apr,
day = "15",
doi = "10.1109/jiot.2022.3229122",
language = "English",
volume = "10",
pages = "7343--7355",
journal = "IEEE Internet of Things Journal",
issn = "2327-4662",
publisher = "IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC",
number = "8",

}
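
Illustrative sketch: anomaly-filtered aggregation (Python)

The abstract in the entry above describes a first aggregation stage in which updates uploaded by edge servers (ESs) are screened by autoencoder-based anomaly detection before being combined. The snippet below is a minimal sketch of that idea only, written against the abstract rather than the paper; robust_aggregate, reconstruct, threshold and the toy data are hypothetical names and stand-ins, not the authors' FL-RAEC implementation.

import numpy as np

def robust_aggregate(updates, reconstruct, threshold):
    """Average only the ES updates whose autoencoder reconstruction error is below `threshold`.

    updates     : list of 1-D numpy arrays (flattened model updates from ESs)
    reconstruct : callable mapping an update to its autoencoder reconstruction
    threshold   : reconstruction-error cutoff above which an update is treated as anomalous
    """
    errors = [float(np.linalg.norm(u - reconstruct(u))) for u in updates]
    kept = [u for u, e in zip(updates, errors) if e <= threshold]
    if not kept:                      # degenerate case: everything flagged, fall back to all
        kept = updates
    return np.mean(kept, axis=0), errors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=8) for _ in range(4)]
    poisoned = [rng.normal(5.0, 0.1, size=8)]     # a scaled, malicious update
    # Stand-in for an autoencoder trained on benign updates: it "reconstructs"
    # everything to the benign mean (~0), so the poisoned update scores a large error.
    reconstruct = lambda u: np.zeros_like(u)
    aggregate, errors = robust_aggregate(honest + poisoned, reconstruct, threshold=1.0)
    print("reconstruction errors:", [round(e, 2) for e in errors])
    print("aggregate:", np.round(aggregate, 3))

In this toy run the four honest updates score small reconstruction errors and are kept, while the poisoned one scores far above the threshold and is dropped before averaging; the paper's actual detector, thresholding and hybrid privacy mechanism are more involved.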

RIS

TY - JOUR

T1 - Privacy Preservation for Federated Learning with Robust Aggregation in Edge Computing

AU - Liu, Wentao

AU - Xu, Xiaolong

AU - Li, Dejuan

AU - Qi, Lianyong

AU - Dai, Fei

AU - Dou, Wanchun

AU - Ni, Qiang

N1 - ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2023/4/15

Y1 - 2023/4/15

N2 - Benefiting from the powerful data analysis and prediction capabilities of artificial intelligence (AI), the data on the edge is often transferred to the cloud center for centralized training to obtain an accurate model. To resist the risk of privacy leakage due to frequent data transmission between the edge and the cloud, federated learning (FL) is engaged in the edge paradigm, uploading the model updated on the edge server (ES) to the central server for aggregation, instead of transferring data directly. However, the adversarial ES can infer the update of other ESs from the aggregated model and the update may still expose some characteristics of data of other ESs. Besides, there is a certain probability that the entire aggregation is disrupted by the adversarial ESs through uploading a malicious update. In this paper, a privacy-preserving FL scheme with robust aggregation in edge computing is proposed, named FL-RAEC. First, the hybrid privacy-preserving mechanism is constructed to preserve the integrity and privacy of the data uploaded by the ESs. For the robust model aggregation, a phased aggregation strategy is proposed. Specifically, anomaly detection based on autoencoder is performed while some ESs are selected for anonymous trust verification at the beginning. In the next stage, via multiple rounds of random verification, the trust score of each ES is assessed to identify the malicious participants. Eventually, FL-RAEC is evaluated in detail, depicting that FL-RAEC has strong robustness and high accuracy under different attacks.

AB - Benefiting from the powerful data analysis and prediction capabilities of artificial intelligence (AI), the data on the edge is often transferred to the cloud center for centralized training to obtain an accurate model. To resist the risk of privacy leakage due to frequent data transmission between the edge and the cloud, federated learning (FL) is engaged in the edge paradigm, uploading the model updated on the edge server (ES) to the central server for aggregation, instead of transferring data directly. However, the adversarial ES can infer the update of other ESs from the aggregated model and the update may still expose some characteristics of data of other ESs. Besides, there is a certain probability that the entire aggregation is disrupted by the adversarial ESs through uploading a malicious update. In this paper, a privacy-preserving FL scheme with robust aggregation in edge computing is proposed, named FL-RAEC. First, the hybrid privacy-preserving mechanism is constructed to preserve the integrity and privacy of the data uploaded by the ESs. For the robust model aggregation, a phased aggregation strategy is proposed. Specifically, anomaly detection based on autoencoder is performed while some ESs are selected for anonymous trust verification at the beginning. In the next stage, via multiple rounds of random verification, the trust score of each ES is assessed to identify the malicious participants. Eventually, FL-RAEC is evaluated in detail, depicting that FL-RAEC has strong robustness and high accuracy under different attacks.

KW - Computer Networks and Communications

KW - Computer Science Applications

KW - Hardware and Architecture

KW - Information Systems

KW - Signal Processing

U2 - 10.1109/jiot.2022.3229122

DO - 10.1109/jiot.2022.3229122

M3 - Journal article

VL - 10

SP - 7343

EP - 7355

JO - IEEE Internet of Things Journal

JF - IEEE Internet of Things Journal

SN - 2327-4662

IS - 8

ER -
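
Illustrative sketch: trust-weighted aggregation (Python)

The abstract also describes a second stage in which trust scores, assessed over multiple rounds of random verification, are used to identify malicious participants. The toy code below only illustrates that general shape; the scoring rule (+0.1 for a passed check, -0.3 for a failed one), the trust-weighted average, and every identifier are assumptions made for illustration, not the paper's anonymous verification protocol.

import random
import numpy as np

def update_trust(trust, verified_ids, passed):
    """Raise or lower the trust score of each verified ES by a fixed step (assumed rule)."""
    for es_id, ok in zip(verified_ids, passed):
        trust[es_id] = min(1.0, trust[es_id] + 0.1) if ok else max(0.0, trust[es_id] - 0.3)
    return trust

def trust_weighted_average(updates, trust):
    """Weight each ES update by its trust score (uniform weights if all scores are zero)."""
    weights = np.array([trust[i] for i in range(len(updates))], dtype=float)
    if weights.sum() == 0.0:
        weights = np.ones_like(weights)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

if __name__ == "__main__":
    random.seed(1)
    rng = np.random.default_rng(1)
    n_es = 5
    trust = {i: 0.5 for i in range(n_es)}         # start every ES at neutral trust
    malicious = {4}                               # ES 4 fails its checks in this toy run
    for _ in range(10):                           # multiple rounds of random verification
        sampled = random.sample(range(n_es), k=2)
        passed = [i not in malicious for i in sampled]
        trust = update_trust(trust, sampled, passed)
    updates = [rng.normal(0.0, 0.1, 8) for _ in range(n_es - 1)] + [rng.normal(5.0, 0.1, 8)]
    print("trust scores:", trust)
    print("aggregate:", np.round(trust_weighted_average(updates, trust), 3))

As verification rounds accumulate, the malicious ES's score is pushed down relative to the honest ones, so its update contributes little to the final weighted average.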