Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning
AU - Yao, Liang
AU - Xu, Xiaolong
AU - Bilal, Muhammad
AU - Wang, Huihui
PY - 2022/6/6
Y1 - 2022/6/6
N2 - Recent developments in the Internet of Vehicles (IoV) have enabled a plethora of data-intensive and latency-sensitive vehicular applications, posing significant challenges to traditional cloud computing. Vehicular edge computing (VEC), as an emerging paradigm, enables vehicles to utilize the resources of edge servers to reduce the data transfer burden and computing stress. Although VEC provides favourable support for IoV applications, vehicle mobility and other factors further complicate the design and implementation of such systems, leading to increased delay and energy consumption. Recently, there have been attempts to integrate deep reinforcement learning (DRL) approaches with IoV-based systems to facilitate real-time decision-making and prediction. We demonstrate the potential of such an approach in this paper. Specifically, the dynamic computation offloading problem is formulated as a Markov decision process (MDP). Then, the twin delayed deep deterministic policy gradient (TD3) algorithm is utilized to obtain the optimal offloading strategy. Finally, simulation results demonstrate the potential of our proposed approach.
AB - Recent developments in the Internet of Vehicles (IoV) have enabled a plethora of data-intensive and latency-sensitive vehicular applications, posing significant challenges to traditional cloud computing. Vehicular edge computing (VEC), as an emerging paradigm, enables vehicles to utilize the resources of edge servers to reduce the data transfer burden and computing stress. Although VEC provides favourable support for IoV applications, vehicle mobility and other factors further complicate the design and implementation of such systems, leading to increased delay and energy consumption. Recently, there have been attempts to integrate deep reinforcement learning (DRL) approaches with IoV-based systems to facilitate real-time decision-making and prediction. We demonstrate the potential of such an approach in this paper. Specifically, the dynamic computation offloading problem is formulated as a Markov decision process (MDP). Then, the twin delayed deep deterministic policy gradient (TD3) algorithm is utilized to obtain the optimal offloading strategy. Finally, simulation results demonstrate the potential of our proposed approach.
KW - Computational modeling
KW - deep reinforcement learning
KW - Delays
KW - Dynamic scheduling
KW - Edge computing
KW - Internet of Vehicles
KW - Processor scheduling
KW - Task analysis
KW - Vehicle dynamics
U2 - 10.1109/TITS.2022.3178759
DO - 10.1109/TITS.2022.3178759
M3 - Journal article
AN - SCOPUS:85131735975
SP - 1
EP - 9
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
SN - 1524-9050
ER -