Final published version
Research output: Contribution to Journal/Magazine › Journal article › peer-review
| Journal publication date | 1/02/2022 |
| --- | --- |
| Journal | IEEE Transactions on Industrial Informatics |
| Issue number | 2 |
| Volume | 18 |
| Number of pages | 10 |
| Pages (from-to) | 1414-1423 |
| Publication Status | Published |
| Original language | English |
With the potential to support computing-intensive applications, edge computing is combined with digital twinning (DT)-empowered Internet of vehicles (IoV) to enhance intelligent transportation capabilities. By updating the digital twins of vehicles and offloading services to edge computing devices (ECDs), the insufficiency of vehicles' computational resources can be complemented. However, owing to the computational intensity of DT-empowered IoV, ECDs may become overloaded under excessive service requests, which deteriorates the quality of service (QoS). To address this problem, this article analyzes a multiuser offloading system in which QoS is reflected by the response time of services. A service offloading (SOL) method with deep reinforcement learning is then proposed for DT-empowered IoV in edge computing. To obtain optimized offloading decisions, SOL leverages a deep Q-network (DQN), which combines the value-function approximation of deep learning with reinforcement learning. Eventually, experiments with comparative methods indicate that SOL is effective and adaptable in diverse environments.
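To illustrate the kind of value-based offloading decision the abstract describes, below is a minimal toy sketch. It uses tabular Q-learning rather than the paper's actual DQN (the DQN replaces the table with a neural value-function approximator), and every environment detail (load levels, response-time model, hyperparameters) is an invented assumption, not the paper's system model.

```python
import random

# Toy offloading environment (all parameters are illustrative assumptions):
# state = current load level of the ECD (0..4); action = 0 (run locally) or 1 (offload).
# Offloading is fast when the ECD is lightly loaded and slow when it is overloaded,
# mirroring the abstract's point that overloaded ECDs deteriorate QoS (response time).
N_LOAD_LEVELS = 5
LOCAL_TIME = 3.0  # hypothetical fixed local response time

def step(load, action):
    """Return (response_time, next_load); offloading raises ECD load, local work lets it drain."""
    if action == 1:  # offload to the ECD
        resp = 1.0 + load  # response time grows with current ECD load
        next_load = min(load + 1, N_LOAD_LEVELS - 1)
    else:            # execute on the vehicle
        resp = LOCAL_TIME
        next_load = max(load - 1, 0)
    return resp, next_load

# Tabular Q-learning stand-in for the DQN's value-function approximation.
Q = [[0.0, 0.0] for _ in range(N_LOAD_LEVELS)]
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate
random.seed(0)
load = 0
for _ in range(20000):
    # Epsilon-greedy action selection over the two offloading choices.
    if random.random() < eps:
        a = random.randrange(2)
    else:
        a = max((0, 1), key=lambda x: Q[load][x])
    resp, nxt = step(load, a)
    reward = -resp  # minimizing response time == maximizing negative response time
    Q[load][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[load][a])
    load = nxt

# Greedy policy per load level; in this toy setup it learns to offload only
# when the ECD is lightly loaded, avoiding the overload regime.
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(N_LOAD_LEVELS)]
print(policy)
```

The design point the sketch captures is the trade-off in the abstract: offloading complements the vehicle's limited compute, but indiscriminate offloading overloads the ECD and hurts response time, so the learned value function must condition the decision on ECD load.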