

Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

E-pub ahead of print
Journal publication date: 6/06/2022
Journal: IEEE Transactions on Intelligent Transportation Systems
Number of pages: 9
Pages (from-to): 1-9
Publication status: E-pub ahead of print
Early online date: 6/06/2022
Original language: English


Recent developments in the Internet of Vehicles (IoV) have enabled the emergence of a plethora of data-intensive and latency-sensitive vehicular applications, posing significant challenges to traditional cloud computing. Vehicular edge computing (VEC), as an emerging paradigm, enables vehicles to utilize the resources of edge servers to reduce the data transfer burden and computing stress. Although VEC provides favourable support for IoV applications, vehicle mobility and other factors further complicate the design and implementation of such systems, leading to increased delay and energy consumption. Recently, there have been attempts to integrate deep reinforcement learning (DRL) approaches with IoV-based systems to facilitate real-time decision-making and prediction. We demonstrate the potential of such an approach in this paper. Specifically, the dynamic computation offloading problem is formulated as a Markov decision process (MDP). Then, the twin delayed deep deterministic policy gradient (TD3) algorithm is employed to learn the optimal offloading strategy. Finally, simulation results demonstrate the potential of our proposed approach.
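The TD3-based offloading loop sketched in the abstract can be illustrated in miniature. The following is a hedged sketch, not the authors' system: it assumes a hypothetical one-dimensional offloading MDP (state = task size, action = fraction of the task offloaded, reward = delay saved by offloading, with the edge link assumed twice as fast as local computation) and uses linear function approximators in place of neural networks. It does, however, exercise TD3's three signature mechanisms: twin critics with a clipped double-Q target, target policy smoothing, and delayed actor/target updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical offloading MDP (illustration only): state = task size,
# action a in [0, 1] = fraction offloaded to the edge server.
def step(state, action):
    reward = 0.5 * state * action          # delay saved by offloading
    next_state = rng.uniform(0.5, 1.5)     # size of the next arriving task
    return next_state, reward

# Linear actor and affine twin critics (TD3 proper uses neural networks).
def act(w, s):
    return np.clip(w * s, 0.0, 1.0)        # deterministic policy mu(s)

def q(params, s, a):
    return params[0] + params[1] * s * a   # critic Q(s, a)

w_actor = 0.0
q1, q2 = rng.normal(size=2) * 0.1, rng.normal(size=2) * 0.1
w_actor_t, q1_t, q2_t = w_actor, q1.copy(), q2.copy()

gamma, tau, lr = 0.5, 0.05, 0.05
policy_delay = 2                           # delayed actor updates
state = 1.0
for t in range(2000):
    # Behaviour policy: exploration noise on top of the deterministic actor.
    a = float(np.clip(act(w_actor, state) + rng.normal(0, 0.3), 0.0, 1.0))
    next_state, r = step(state, a)

    # Target policy smoothing: clipped noise on the target action.
    eps = np.clip(rng.normal(0, 0.2), -0.2, 0.2)
    a_t = np.clip(act(w_actor_t, next_state) + eps, 0.0, 1.0)

    # Clipped double-Q: bootstrap from the smaller of the two target critics.
    y = r + gamma * min(q(q1_t, next_state, a_t), q(q2_t, next_state, a_t))

    # TD updates for both critics.
    feats = np.array([1.0, state * a])
    q1 += lr * (y - q(q1, state, a)) * feats
    q2 += lr * (y - q(q2, state, a)) * feats

    if t % policy_delay == 0:
        # Deterministic policy gradient through Q1 (clip ignored for brevity):
        # dQ1/dw_actor = q1[1] * s * d(mu)/dw_actor = q1[1] * s * s.
        w_actor += lr * q1[1] * state * state
        # Polyak averaging of the target networks.
        w_actor_t += tau * (w_actor - w_actor_t)
        q1_t += tau * (q1 - q1_t)
        q2_t += tau * (q2 - q2_t)

    state = next_state
```

Under these assumed dynamics, offloading is always cheaper than local computation, so the learned policy `act(w_actor, s)` converges toward offloading the entire task; a real VEC model would instead trade off transmission delay, server load, and vehicle mobility as the paper describes.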