
Electronic data

  • Accepted author manuscript (3591354), 2.19 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI: 10.1145/3591354

Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph. / Lu, Yang; Yu, Zhengxin; Suri, Neeraj.
In: ACM Transactions on Privacy and Security, Vol. 26, No. 3, 33, 31.08.2023, p. 1-39.

Vancouver

Lu Y, Yu Z, Suri N. Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph. ACM Transactions on Privacy and Security. 2023 Aug 31;26(3):1-39. 33. Epub 2023 Apr 6. doi: 10.1145/3591354

Bibtex

@article{542ab8fefeca42f0a54ad85f7ddae3ef,
title = "Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph",
abstract = "Establishing how a set of learners can provide privacy-preserving federated learning in a fully decentralized (peer-to-peer, no coordinator) manner is an open problem. We propose the first privacy-preserving consensus-based algorithm for the distributed learners to achieve decentralized global model aggregation in an environment of high mobility, where participating learners and the communication graph between them may vary during the learning process. In particular, whenever the communication graph changes, the Metropolis-Hastings method [69] is applied to update the weighted adjacency matrix based on the current communication topology. In addition, Shamir{\textquoteright}s secret sharing scheme [61] is integrated to facilitate privacy in reaching consensus of the global model. The paper establishes the correctness and privacy properties of the proposed algorithm. The computational efficiency is evaluated by a simulation built on a federated learning framework with a real-world dataset.",
keywords = "Privacy, Mobility, Federated learning, Decentralized aggregation",
author = "Yang Lu and Zhengxin Yu and Neeraj Suri",
year = "2023",
month = aug,
day = "31",
doi = "10.1145/3591354",
language = "English",
volume = "26",
pages = "1--39",
journal = "ACM Transactions on Privacy and Security",
issn = "2471-2574",
publisher = "Association for Computing Machinery (ACM)",
number = "3",
}

RIS

TY - JOUR

T1 - Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph

AU - Lu, Yang

AU - Yu, Zhengxin

AU - Suri, Neeraj

PY - 2023/8/31

Y1 - 2023/8/31

AB - Establishing how a set of learners can provide privacy-preserving federated learning in a fully decentralized (peer-to-peer, no coordinator) manner is an open problem. We propose the first privacy-preserving consensus-based algorithm for the distributed learners to achieve decentralized global model aggregation in an environment of high mobility, where participating learners and the communication graph between them may vary during the learning process. In particular, whenever the communication graph changes, the Metropolis-Hastings method [69] is applied to update the weighted adjacency matrix based on the current communication topology. In addition, Shamir’s secret sharing scheme [61] is integrated to facilitate privacy in reaching consensus of the global model. The paper establishes the correctness and privacy properties of the proposed algorithm. The computational efficiency is evaluated by a simulation built on a federated learning framework with a real-world dataset.

KW - Privacy

KW - Mobility

KW - Federated learning

KW - Decentralized aggregation

U2 - 10.1145/3591354

DO - 10.1145/3591354

M3 - Journal article

VL - 26

SP - 1

EP - 39

JO - ACM Transactions on Privacy and Security

JF - ACM Transactions on Privacy and Security

SN - 2471-2574

IS - 3

M1 - 33

ER -
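For the second building block named in the abstract, here is a toy sketch of Shamir's (t, n) secret sharing over a prime field, assuming the textbook scheme rather than the paper's exact instantiation; the field modulus and function names are illustrative choices:

```python
import random

PRIME = 2**61 - 1  # illustrative prime field modulus; secrets must be < PRIME

def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares such that any t of them reconstruct it.
    A random degree-(t-1) polynomial with constant term `secret` is
    evaluated at x = 1..n."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):     # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = (num * (-xj)) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Shares produced this way are additively homomorphic: summing the learners' shares pointwise and reconstructing yields the sum of the underlying secrets, so only the aggregate model ever becomes visible, which is the kind of privacy-in-consensus property the abstract describes.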