With the advent of smart vehicles, several new latency-critical and data-intensive applications have emerged in Vehicular Networks (VNs). Computation offloading has emerged as a viable option, allowing vehicles to resort to nearby edge servers for remote processing within a requested service latency requirement. Despite its several advantages, computation offloading over resource-limited edge servers, combined with vehicular mobility, remains a challenging problem. In particular, to avoid the additional latency caused by out-of-coverage operations, Vehicular User (VU) mobility imposes a bound on the amount of data that can be offloaded to nearby edge servers. Therefore, several approaches have been proposed to find the correct amount of data to offload. Among others, Federated Learning (FL) has been highlighted as one of the most promising techniques, given the data privacy concerns in VNs and the limited communication resources. However, FL consumes resources during its operation and therefore places an additional burden on resource-constrained VUs. In this work, we aim to optimize VN performance in terms of latency and energy consumption by considering both the FL and the computation offloading processes, while selecting the proper number of FL iterations to perform. To this end, we first propose an FL-inspired distributed learning framework for computation offloading in VNs, and then formulate a constrained optimization problem to jointly minimize the overall latency and the energy consumed. An evolutionary Genetic Algorithm is proposed to solve the problem at hand and is compared with several benchmarks. Simulation results show the effectiveness of the proposed approach in terms of latency and energy consumption.
©2021 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.