Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review
TY - GEN
T1 - PPFM: An Adaptive and Hierarchical Peer-to-Peer Federated Meta-Learning Framework
T2 - 18th International Conference on Mobility, Sensing and Networking (MSN 2022)
AU - Yu, Zhengxin
AU - Lu, Yang
AU - Angelov, Plamen
AU - Suri, Neeraj
N1 - Conference code: 18th
PY - 2023/3/29
Y1 - 2023/3/29
N2 - With advances in Machine Learning (ML) techniques, a wide range of applications that leverage ML have emerged across research, industry, and society to improve application performance. However, existing ML schemes used within such applications struggle to attain high model accuracy due to the heterogeneous and distributed nature of the data they generate. In this paper, we address this challenge by proposing PPFM: an adaptive and hierarchical Peer-to-Peer Federated Meta-learning framework. Instead of relying on a conventional static ML scheme, PPFM uses multiple learning loops to dynamically self-adapt its own architecture, improving its training effectiveness for different generated data characteristics. This approach also allows PPFM to remove the reliance on a fixed centralized server in a distributed environment by utilizing a peer-to-peer Federated Learning (FL) framework. Our results demonstrate that PPFM provides a significant improvement in model accuracy across multiple datasets when compared to contemporary ML approaches.
AB - With advances in Machine Learning (ML) techniques, a wide range of applications that leverage ML have emerged across research, industry, and society to improve application performance. However, existing ML schemes used within such applications struggle to attain high model accuracy due to the heterogeneous and distributed nature of the data they generate. In this paper, we address this challenge by proposing PPFM: an adaptive and hierarchical Peer-to-Peer Federated Meta-learning framework. Instead of relying on a conventional static ML scheme, PPFM uses multiple learning loops to dynamically self-adapt its own architecture, improving its training effectiveness for different generated data characteristics. This approach also allows PPFM to remove the reliance on a fixed centralized server in a distributed environment by utilizing a peer-to-peer Federated Learning (FL) framework. Our results demonstrate that PPFM provides a significant improvement in model accuracy across multiple datasets when compared to contemporary ML approaches.
U2 - 10.1109/MSN57253.2022.00086
DO - 10.1109/MSN57253.2022.00086
M3 - Conference contribution/Paper
SN - 9781665464581
SP - 502
EP - 509
BT - 2022 18th International Conference on Mobility, Sensing and Networking (MSN)
PB - IEEE
Y2 - 14 December 2022 through 16 December 2022
ER -
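
Note: the abstract describes a peer-to-peer Federated Learning setup in which peers exchange model updates directly rather than through a central server. The sketch below is a minimal, hypothetical illustration of that general idea (gossip-style averaging over a peer topology); it is not taken from the paper, and the function names (local_update, peer_to_peer_round) and the toy topology are assumptions made purely for illustration.

# Minimal sketch of peer-to-peer federated averaging (illustrative only, not the PPFM algorithm).
# Each peer trains on its own local data, then averages its parameters with its direct
# neighbours instead of uploading updates to a central aggregation server.
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    # One step of logistic-regression gradient descent on a peer's local data.
    preds = 1.0 / (1.0 + np.exp(-data @ weights))
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def peer_to_peer_round(peer_weights, adjacency):
    # Each peer averages its model with the models of its direct neighbours.
    new_weights = []
    for i, w in enumerate(peer_weights):
        neighbours = [peer_weights[j] for j in adjacency[i]] + [w]
        new_weights.append(np.mean(neighbours, axis=0))
    return new_weights

# Toy run: 3 peers on a fully connected topology, each with its own synthetic data.
rng = np.random.default_rng(0)
dim, n = 5, 20
datasets = [(rng.normal(size=(n, dim)), rng.integers(0, 2, size=n)) for _ in range(3)]
weights = [np.zeros(dim) for _ in range(3)]
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

for _ in range(10):  # communication rounds
    weights = [local_update(w, X, y) for w, (X, y) in zip(weights, datasets)]
    weights = peer_to_peer_round(weights, adjacency)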