Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

E-pub ahead of print

Standard

Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning. / Yao, Liang; Xu, Xiaolong; Bilal, Muhammad et al.
In: IEEE Transactions on Intelligent Transportation Systems, 06.06.2022, p. 1-9.

Harvard

Yao, L, Xu, X, Bilal, M & Wang, H 2022, 'Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning', IEEE Transactions on Intelligent Transportation Systems, pp. 1-9. https://doi.org/10.1109/TITS.2022.3178759

APA

Yao, L., Xu, X., Bilal, M., & Wang, H. (2022). Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning. IEEE Transactions on Intelligent Transportation Systems, 1-9. Advance online publication. https://doi.org/10.1109/TITS.2022.3178759

Vancouver

Yao L, Xu X, Bilal M, Wang H. Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning. IEEE Transactions on Intelligent Transportation Systems. 2022 Jun 6;1-9. Epub 2022 Jun 6. doi: 10.1109/TITS.2022.3178759

Author

Yao, Liang ; Xu, Xiaolong ; Bilal, Muhammad et al. / Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning. In: IEEE Transactions on Intelligent Transportation Systems. 2022 ; pp. 1-9.

Bibtex

@article{32ca95c48e3c46a5b7fb339fe0c51508,
title = "Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning",
abstract = "Recent developments in the Internet of Vehicles (IoV) have enabled the emergence of a plethora of data-intensive and latency-sensitive vehicular applications, posing significant difficulties for traditional cloud computing. Vehicular edge computing (VEC), as an emerging paradigm, enables vehicles to utilize the resources of edge servers to reduce the data transfer burden and computing stress. Although VEC offers favourable support for IoV applications, vehicle mobility and other factors further complicate the challenge of designing and implementing such systems, leading to increased delay and energy consumption. Recently, there have been attempts to integrate deep reinforcement learning (DRL) approaches with IoV-based systems to facilitate real-time decision-making and prediction. We demonstrate the potential of such an approach in this paper. Specifically, the dynamic computation offloading problem is constructed as a Markov decision process (MDP). Then, the twin delayed deep deterministic policy gradient (TD3) algorithm is utilized to achieve the optimal offloading strategy. Finally, findings from the simulation demonstrate the potential of our proposed approach.",
keywords = "Computational modeling, deep reinforcement learning, Delays, Dynamic scheduling, Edge computing, Internet of Vehicles, Processor scheduling, Task analysis, Vehicle dynamics",
author = "Liang Yao and Xiaolong Xu and Muhammad Bilal and Huihui Wang",
year = "2022",
month = jun,
day = "6",
doi = "10.1109/TITS.2022.3178759",
language = "English",
pages = "1--9",
journal = "IEEE Transactions on Intelligent Transportation Systems",
issn = "1524-9050",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

RIS

TY - JOUR

T1 - Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning

AU - Yao, Liang

AU - Xu, Xiaolong

AU - Bilal, Muhammad

AU - Wang, Huihui

PY - 2022/6/6

Y1 - 2022/6/6

N2 - Recent developments in the Internet of Vehicles (IoV) have enabled the emergence of a plethora of data-intensive and latency-sensitive vehicular applications, posing significant difficulties for traditional cloud computing. Vehicular edge computing (VEC), as an emerging paradigm, enables vehicles to utilize the resources of edge servers to reduce the data transfer burden and computing stress. Although VEC offers favourable support for IoV applications, vehicle mobility and other factors further complicate the challenge of designing and implementing such systems, leading to increased delay and energy consumption. Recently, there have been attempts to integrate deep reinforcement learning (DRL) approaches with IoV-based systems to facilitate real-time decision-making and prediction. We demonstrate the potential of such an approach in this paper. Specifically, the dynamic computation offloading problem is constructed as a Markov decision process (MDP). Then, the twin delayed deep deterministic policy gradient (TD3) algorithm is utilized to achieve the optimal offloading strategy. Finally, findings from the simulation demonstrate the potential of our proposed approach.

AB - Recent developments in the Internet of Vehicles (IoV) have enabled the emergence of a plethora of data-intensive and latency-sensitive vehicular applications, posing significant difficulties for traditional cloud computing. Vehicular edge computing (VEC), as an emerging paradigm, enables vehicles to utilize the resources of edge servers to reduce the data transfer burden and computing stress. Although VEC offers favourable support for IoV applications, vehicle mobility and other factors further complicate the challenge of designing and implementing such systems, leading to increased delay and energy consumption. Recently, there have been attempts to integrate deep reinforcement learning (DRL) approaches with IoV-based systems to facilitate real-time decision-making and prediction. We demonstrate the potential of such an approach in this paper. Specifically, the dynamic computation offloading problem is constructed as a Markov decision process (MDP). Then, the twin delayed deep deterministic policy gradient (TD3) algorithm is utilized to achieve the optimal offloading strategy. Finally, findings from the simulation demonstrate the potential of our proposed approach.

KW - Computational modeling

KW - deep reinforcement learning

KW - Delays

KW - Dynamic scheduling

KW - Edge computing

KW - Internet of Vehicles

KW - Processor scheduling

KW - Task analysis

KW - Vehicle dynamics

U2 - 10.1109/TITS.2022.3178759

DO - 10.1109/TITS.2022.3178759

M3 - Journal article

AN - SCOPUS:85131735975

SP - 1

EP - 9

JO - IEEE Transactions on Intelligent Transportation Systems

JF - IEEE Transactions on Intelligent Transportation Systems

SN - 1524-9050

ER -
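
The abstract describes the approach only at a high level: offloading is modeled as an MDP and solved with TD3. As an illustration of what the MDP ingredients (state, action, reward) mean in an offloading setting — not the paper's actual model; every state variable, cost term, and constant below is hypothetical — a toy environment might look like:

```python
import random

class OffloadEnv:
    """Toy vehicular-offloading MDP (illustrative sketch only).

    State : (task_size_mbits, uplink_rate_mbps, server_load)
    Action: fraction of the task offloaded to the edge server, in [0, 1]
    Reward: negative weighted sum of latency and transmit energy.
    """

    LOCAL_SPEED = 500.0     # hypothetical local compute speed
    EDGE_SPEED = 4000.0     # hypothetical edge compute speed
    TX_POWER_W = 1.0        # hypothetical transmit power
    LATENCY_WEIGHT = 0.7
    ENERGY_WEIGHT = 0.3

    def reset(self, seed=None):
        self.rng = random.Random(seed)
        return self._new_state()

    def _new_state(self):
        self.state = (
            self.rng.uniform(1.0, 10.0),   # task size in Mbits
            self.rng.uniform(5.0, 50.0),   # uplink rate in Mbps
            self.rng.uniform(0.0, 0.9),    # edge server load in [0, 1)
        )
        return self.state

    def step(self, action):
        frac = min(max(action, 0.0), 1.0)
        size, rate, load = self.state
        # Local and offloaded portions run in parallel; latency is the max.
        # Compute-cycles-per-bit constants are folded into the speeds above.
        t_local = (1.0 - frac) * size * 1e3 / self.LOCAL_SPEED
        t_tx = frac * size / rate
        t_edge = t_tx + frac * size * 1e3 / (self.EDGE_SPEED * (1.0 - load))
        latency = max(t_local, t_edge)
        energy = self.TX_POWER_W * t_tx  # transmit energy only, for simplicity
        reward = -(self.LATENCY_WEIGHT * latency + self.ENERGY_WEIGHT * energy)
        return self._new_state(), reward
```

A DRL agent such as TD3 would then learn a policy mapping each state to an offload fraction that maximizes the expected reward; this environment exists only to make the MDP formulation mentioned in the abstract concrete.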