
Electronic data

  • Author accepted final manuscript

    Rights statement: ©2021 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 1.52 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: 10.1109/TVT.2021.3135332


On the Design of Federated Learning in Latency and Energy Constrained Computation Offloading Operations in Vehicular Edge Computing Systems

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

On the Design of Federated Learning in Latency and Energy Constrained Computation Offloading Operations in Vehicular Edge Computing Systems. / Shinde, S.S.; Bozorgchenani, A.; Tarchi, D. et al.
In: IEEE Transactions on Vehicular Technology, Vol. 71, No. 2, 28.02.2022, p. 2041-2057.


Vancouver

Shinde SS, Bozorgchenani A, Tarchi D, Ni Q. On the Design of Federated Learning in Latency and Energy Constrained Computation Offloading Operations in Vehicular Edge Computing Systems. IEEE Transactions on Vehicular Technology. 2022 Feb 28;71(2):2041-2057. Epub 2021 Dec 14. doi: 10.1109/TVT.2021.3135332

Author

Shinde, S.S. ; Bozorgchenani, A. ; Tarchi, D. et al. / On the Design of Federated Learning in Latency and Energy Constrained Computation Offloading Operations in Vehicular Edge Computing Systems. In: IEEE Transactions on Vehicular Technology. 2022 ; Vol. 71, No. 2. pp. 2041-2057.

Bibtex

@article{4d3237b3c16445fd80257ae5f2c97007,
title = "On the Design of Federated Learning in Latency and Energy Constrained Computation Offloading Operations in Vehicular Edge Computing Systems",
abstract = "With the advent of smart vehicles, several new latency-critical and data-intensive applications have emerged in Vehicular Networks (VNs). Computation offloading has emerged as a viable option, allowing vehicles to resort to nearby edge servers for remote processing within a requested service latency requirement. Despite several advantages, computation offloading over resource-limited edge servers, together with vehicular mobility, is still a challenging problem to be solved. In particular, in order to avoid additional latency due to out-of-coverage operations, the mobility of Vehicular Users (VUs) introduces a bound on the amount of data to be offloaded towards nearby edge servers. Therefore, several approaches have been used for finding the correct amount of data to be offloaded. Among others, Federated Learning (FL) has been highlighted as one of the most promising solution techniques, given the data privacy concerns in VNs and limited communication resources. However, FL consumes resources during its operation and therefore incurs an additional burden on resource-constrained VUs. In this work, we aim to optimize the VN performance in terms of latency and energy consumption by considering both the FL and the computation offloading processes while selecting the proper number of FL iterations to be implemented. To this end, we first propose an FL-inspired distributed learning framework for computation offloading in VNs, and then develop a constrained optimization problem to jointly minimize the overall latency and the energy consumed. An evolutionary Genetic Algorithm is proposed for solving the problem at hand and compared with some benchmarks. The simulation results show the effectiveness of the proposed approach in terms of latency and energy consumption.",
keywords = "Computation Offloading, Costs, Delays, Edge computing, Energy consumption, Federated Learning, Genetic Algorithm, Latency, Optimization, Resource management, Servers, Task analysis, Vehicular Edge Computing, Constrained optimization, Data privacy, Genetic algorithms, Information management, Job analysis, Problem solving, Computation offloading, Delay, Energy-consumption, Federated learning, Optimisations, Vehicular edge computing, Energy utilization",
author = "S.S. Shinde and A. Bozorgchenani and D. Tarchi and Q. Ni",
note = "{\textcopyright}2021 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. ",
year = "2022",
month = feb,
day = "28",
doi = "10.1109/TVT.2021.3135332",
language = "English",
volume = "71",
pages = "2041--2057",
journal = "IEEE Transactions on Vehicular Technology",
issn = "0018-9545",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "2",
}

RIS

TY - JOUR

T1 - On the Design of Federated Learning in Latency and Energy Constrained Computation Offloading Operations in Vehicular Edge Computing Systems

AU - Shinde, S.S.

AU - Bozorgchenani, A.

AU - Tarchi, D.

AU - Ni, Q.

N1 - ©2021 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2022/2/28

Y1 - 2022/2/28

N2 - With the advent of smart vehicles, several new latency-critical and data-intensive applications have emerged in Vehicular Networks (VNs). Computation offloading has emerged as a viable option, allowing vehicles to resort to nearby edge servers for remote processing within a requested service latency requirement. Despite several advantages, computation offloading over resource-limited edge servers, together with vehicular mobility, is still a challenging problem to be solved. In particular, in order to avoid additional latency due to out-of-coverage operations, the mobility of Vehicular Users (VUs) introduces a bound on the amount of data to be offloaded towards nearby edge servers. Therefore, several approaches have been used for finding the correct amount of data to be offloaded. Among others, Federated Learning (FL) has been highlighted as one of the most promising solution techniques, given the data privacy concerns in VNs and limited communication resources. However, FL consumes resources during its operation and therefore incurs an additional burden on resource-constrained VUs. In this work, we aim to optimize the VN performance in terms of latency and energy consumption by considering both the FL and the computation offloading processes while selecting the proper number of FL iterations to be implemented. To this end, we first propose an FL-inspired distributed learning framework for computation offloading in VNs, and then develop a constrained optimization problem to jointly minimize the overall latency and the energy consumed. An evolutionary Genetic Algorithm is proposed for solving the problem at hand and compared with some benchmarks. The simulation results show the effectiveness of the proposed approach in terms of latency and energy consumption.

AB - With the advent of smart vehicles, several new latency-critical and data-intensive applications have emerged in Vehicular Networks (VNs). Computation offloading has emerged as a viable option, allowing vehicles to resort to nearby edge servers for remote processing within a requested service latency requirement. Despite several advantages, computation offloading over resource-limited edge servers, together with vehicular mobility, is still a challenging problem to be solved. In particular, in order to avoid additional latency due to out-of-coverage operations, the mobility of Vehicular Users (VUs) introduces a bound on the amount of data to be offloaded towards nearby edge servers. Therefore, several approaches have been used for finding the correct amount of data to be offloaded. Among others, Federated Learning (FL) has been highlighted as one of the most promising solution techniques, given the data privacy concerns in VNs and limited communication resources. However, FL consumes resources during its operation and therefore incurs an additional burden on resource-constrained VUs. In this work, we aim to optimize the VN performance in terms of latency and energy consumption by considering both the FL and the computation offloading processes while selecting the proper number of FL iterations to be implemented. To this end, we first propose an FL-inspired distributed learning framework for computation offloading in VNs, and then develop a constrained optimization problem to jointly minimize the overall latency and the energy consumed. An evolutionary Genetic Algorithm is proposed for solving the problem at hand and compared with some benchmarks. The simulation results show the effectiveness of the proposed approach in terms of latency and energy consumption.

KW - Computation Offloading

KW - Costs

KW - Delays

KW - Edge computing

KW - Energy consumption

KW - Federated Learning

KW - Genetic Algorithm

KW - Latency

KW - Optimization

KW - Resource management

KW - Servers

KW - Task analysis

KW - Vehicular Edge Computing

KW - Constrained optimization

KW - Data privacy

KW - Genetic algorithms

KW - Information management

KW - Job analysis

KW - Problem solving

KW - Computation offloading

KW - Delay

KW - Energy-consumption

KW - Federated learning

KW - Optimisations

KW - Vehicular edge computing

KW - Energy utilization

U2 - 10.1109/TVT.2021.3135332

DO - 10.1109/TVT.2021.3135332

M3 - Journal article

VL - 71

SP - 2041

EP - 2057

JO - IEEE Transactions on Vehicular Technology

JF - IEEE Transactions on Vehicular Technology

SN - 0018-9545

IS - 2

ER -
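
Illustrative sketch

The abstract describes an evolutionary Genetic Algorithm that jointly minimizes latency and energy while choosing per-user offloading amounts and the number of FL rounds. As a rough, self-contained illustration of that idea only (not the authors' actual system model or constraints; see doi:10.1109/TVT.2021.3135332), the Python sketch below evolves per-VU offloading fractions together with an FL-round count against a toy weighted latency-plus-energy cost. Every rate, energy coefficient, and weight is a placeholder assumption.

# Illustrative sketch only: a toy genetic algorithm that jointly picks, for each
# vehicular user (VU), the fraction of task data to offload and a shared number
# of federated-learning (FL) rounds, minimizing a weighted latency + energy cost.
# All parameters below are hypothetical and NOT taken from the paper.
import random

random.seed(0)

N_VUS = 5                    # number of vehicular users (assumed)
TASK_BITS = [8e6] * N_VUS    # task size per VU in bits (assumed)
LOCAL_RATE = 2e6             # local processing rate, bits/s (assumed)
EDGE_RATE = 10e6             # edge processing rate, bits/s (assumed)
UPLINK_RATE = 5e6            # radio uplink rate, bits/s (assumed)
E_LOCAL_PER_BIT = 2e-7       # local computation energy per bit, J (assumed)
E_TX_PER_BIT = 5e-8          # transmission energy per bit, J (assumed)
FL_ROUND_LATENCY = 0.05      # latency added per FL round, s (assumed)
FL_ROUND_ENERGY = 0.02       # energy added per FL round per VU, J (assumed)
MAX_FL_ROUNDS = 20
ALPHA = 0.5                  # weight trading off latency vs. energy (assumed)

def cost(chromosome):
    """Weighted sum of total latency and total energy for one candidate.
    chromosome = ([offload_fraction per VU], fl_rounds)."""
    fractions, fl_rounds = chromosome
    latency = fl_rounds * FL_ROUND_LATENCY
    energy = N_VUS * fl_rounds * FL_ROUND_ENERGY
    for bits, f in zip(TASK_BITS, fractions):
        off, loc = f * bits, (1 - f) * bits
        # Per-VU latency: local and offloaded parts are processed in parallel.
        latency += max(loc / LOCAL_RATE, off / UPLINK_RATE + off / EDGE_RATE)
        energy += loc * E_LOCAL_PER_BIT + off * E_TX_PER_BIT
    return ALPHA * latency + (1 - ALPHA) * energy

def random_chromosome():
    return ([random.random() for _ in range(N_VUS)],
            random.randint(1, MAX_FL_ROUNDS))

def crossover(a, b):
    # Single-point crossover on the offloading fractions.
    cut = random.randint(1, N_VUS - 1)
    return (a[0][:cut] + b[0][cut:], random.choice([a[1], b[1]]))

def mutate(c, rate=0.2):
    fractions = [min(1.0, max(0.0, f + random.gauss(0, 0.1)))
                 if random.random() < rate else f for f in c[0]]
    rounds = c[1]
    if random.random() < rate:
        rounds = max(1, min(MAX_FL_ROUNDS, rounds + random.choice([-1, 1])))
    return (fractions, rounds)

def evolve(pop_size=40, generations=100):
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 4]  # keep the fittest quarter
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
print("best offload fractions:", [round(f, 2) for f in best[0]])
print("best FL rounds:", best[1], " cost:", round(cost(best), 4))

The weighted-sum scalarization with ALPHA is only one simple way to trade latency against energy; the paper instead formulates a constrained optimization over the vehicular edge computing system, so this toy objective should be read as a conceptual illustration rather than a reproduction of the published method.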