Federated learning based proactive content caching in edge computing

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Federated learning based proactive content caching in edge computing. / Yu, Zhengxin; Hu, Jia; Min, Geyong et al.
2018 IEEE Global Communications Conference (GLOBECOM). IEEE, 2019. p. 1-6 (IEEE Global Communications Conference (GLOBECOM)).

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Harvard

Yu, Z, Hu, J, Min, G, Lu, H, Zhao, Z, Wang, H & Georgalas, N 2019, Federated learning based proactive content caching in edge computing. in 2018 IEEE Global Communications Conference (GLOBECOM). IEEE Global Communications Conference (GLOBECOM), IEEE, pp. 1-6. https://doi.org/10.1109/GLOCOM.2018.8647616

APA

Yu, Z., Hu, J., Min, G., Lu, H., Zhao, Z., Wang, H., & Georgalas, N. (2019). Federated learning based proactive content caching in edge computing. In 2018 IEEE Global Communications Conference (GLOBECOM) (pp. 1-6). (IEEE Global Communications Conference (GLOBECOM)). IEEE. https://doi.org/10.1109/GLOCOM.2018.8647616

Vancouver

Yu Z, Hu J, Min G, Lu H, Zhao Z, Wang H et al. Federated learning based proactive content caching in edge computing. In 2018 IEEE Global Communications Conference (GLOBECOM). IEEE. 2019. p. 1-6. (IEEE Global Communications Conference (GLOBECOM)). Epub 2018 Dec 9. doi: 10.1109/GLOCOM.2018.8647616

Author

Yu, Zhengxin ; Hu, Jia ; Min, Geyong et al. / Federated learning based proactive content caching in edge computing. 2018 IEEE Global Communications Conference (GLOBECOM). IEEE, 2019. pp. 1-6 (IEEE Global Communications Conference (GLOBECOM)).

Bibtex

@inproceedings{e1d78b6f9686405a9885418394232aed,
title = "Federated learning based proactive content caching in edge computing",
abstract = "Content caching is a promising approach in edge computing to cope with the explosive growth of mobile data on 5G networks, where contents are typically placed in local caches for fast and repetitive data access. Due to the limited capacity of caches, it is essential to predict the popularity of files and cache the popular ones. However, the fluctuating popularity of files makes this prediction highly challenging. To tackle this challenge, many recent works propose learning-based approaches that gather users' data centrally for training, but this raises a significant issue: users may not trust the central server and thus hesitate to upload their private data. To address this issue, we propose a Federated learning based Proactive Content Caching (FPCC) scheme, which does not require gathering users' data centrally for training. FPCC is based on a hierarchical architecture in which the server aggregates the users' updates using federated averaging, and each user performs training on its local data using hybrid filtering on stacked autoencoders. The experimental results demonstrate that, without gathering users' private data, our scheme still outperforms other learning-based caching algorithms such as m-epsilon-greedy and Thompson sampling in terms of cache efficiency.",
author = "Zhengxin Yu and Jia Hu and Geyong Min and Haochuan Lu and Zhiwei Zhao and Haozhe Wang and Nektarios Georgalas",
year = "2019",
month = feb,
day = "21",
doi = "10.1109/GLOCOM.2018.8647616",
language = "English",
isbn = "9781538647288",
series = "IEEE Global Communications Conference (GLOBECOM)",
publisher = "IEEE",
pages = "1--6",
booktitle = "2018 IEEE Global Communications Conference (GLOBECOM)",
}

RIS

TY - GEN

T1 - Federated learning based proactive content caching in edge computing

AU - Yu, Zhengxin

AU - Hu, Jia

AU - Min, Geyong

AU - Lu, Haochuan

AU - Zhao, Zhiwei

AU - Wang, Haozhe

AU - Georgalas, Nektarios

PY - 2019/2/21

Y1 - 2019/2/21

N2 - Content caching is a promising approach in edge computing to cope with the explosive growth of mobile data on 5G networks, where contents are typically placed in local caches for fast and repetitive data access. Due to the limited capacity of caches, it is essential to predict the popularity of files and cache the popular ones. However, the fluctuating popularity of files makes this prediction highly challenging. To tackle this challenge, many recent works propose learning-based approaches that gather users' data centrally for training, but this raises a significant issue: users may not trust the central server and thus hesitate to upload their private data. To address this issue, we propose a Federated learning based Proactive Content Caching (FPCC) scheme, which does not require gathering users' data centrally for training. FPCC is based on a hierarchical architecture in which the server aggregates the users' updates using federated averaging, and each user performs training on its local data using hybrid filtering on stacked autoencoders. The experimental results demonstrate that, without gathering users' private data, our scheme still outperforms other learning-based caching algorithms such as m-epsilon-greedy and Thompson sampling in terms of cache efficiency.

AB - Content caching is a promising approach in edge computing to cope with the explosive growth of mobile data on 5G networks, where contents are typically placed in local caches for fast and repetitive data access. Due to the limited capacity of caches, it is essential to predict the popularity of files and cache the popular ones. However, the fluctuating popularity of files makes this prediction highly challenging. To tackle this challenge, many recent works propose learning-based approaches that gather users' data centrally for training, but this raises a significant issue: users may not trust the central server and thus hesitate to upload their private data. To address this issue, we propose a Federated learning based Proactive Content Caching (FPCC) scheme, which does not require gathering users' data centrally for training. FPCC is based on a hierarchical architecture in which the server aggregates the users' updates using federated averaging, and each user performs training on its local data using hybrid filtering on stacked autoencoders. The experimental results demonstrate that, without gathering users' private data, our scheme still outperforms other learning-based caching algorithms such as m-epsilon-greedy and Thompson sampling in terms of cache efficiency.

U2 - 10.1109/GLOCOM.2018.8647616

DO - 10.1109/GLOCOM.2018.8647616

M3 - Conference contribution/Paper

SN - 9781538647288

T3 - IEEE Global Communications Conference (GLOBECOM)

SP - 1

EP - 6

BT - 2018 IEEE Global Communications Conference (GLOBECOM)

PB - IEEE

ER -
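
The abstract's core aggregation step, federated averaging, can be sketched as follows. This is an illustrative sketch of the general federated-averaging idea, not the authors' released code; all names (`federated_average`, `client_weights`, `client_sizes`) are hypothetical.

```python
# Sketch of federated averaging: each client trains on its own private data,
# then the server combines the resulting model weights as a weighted average,
# with weights proportional to each client's local dataset size. No raw user
# data ever leaves the clients; only model updates are shared.

def federated_average(client_weights, client_sizes):
    """Aggregate per-client model weights into a global model.

    client_weights: list of per-client weight vectors (lists of floats).
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += (size / total) * w
    return global_weights

# Example: two clients, the second holding three times as much local data,
# so its weights dominate the average.
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]
print(federated_average(clients, sizes))  # [2.5, 3.5]
```

In the scheme described in the abstract, the vectors being averaged would be the parameters of each user's locally trained stacked autoencoder; the server never sees the underlying user data.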