
Optimizing CNN Inference Speed over Big Social Data through Efficient Model Parallelism for Sustainable Web of Things

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • Yuhao Hu
  • Xiaolong Xu
  • Muhammad Bilal
  • Weiyi Zhong
  • Yuwen Liu
  • Huaizhen Kou
  • Lingzhen Kong
Article number: 104927
Journal publication date: 31/10/2024
Journal: Journal of Parallel and Distributed Computing
Volume: 192
Publication status: Published
Early online date: 8/06/24
Original language: English

Abstract

The rapid development of artificial intelligence and networking technologies has catalyzed the popularity of intelligent services based on deep learning in recent years, which in turn fosters the advancement of the Web of Things (WoT). Big social data (BSD) plays an important role in the processing of intelligent services in WoT. However, intelligent BSD services are computationally intensive and require ultra-low latency, and end or edge devices with limited computing power cannot achieve the extremely low response latency those services demand. Distributed inference of deep neural networks (DNNs) across multiple devices is considered a feasible solution, as it allocates the computing load of a DNN to several devices. In this work, an efficient model parallelism method that couples convolution layer (Conv) split with resource allocation is proposed. First, given a random computing resource allocation strategy, the Conv split decision is made through a mathematical analysis method to realize the parallel inference of convolutional neural networks (CNNs). Next, deep reinforcement learning is used to obtain the optimal computing resource allocation strategy, maximizing the resource utilization rate and minimizing the CNN inference latency. Finally, simulation results show that our approach outperforms the baselines and is applicable to BSD services in WoT under high workloads.
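The abstract's core idea, splitting a convolution layer so several devices compute disjoint slices of its output in parallel, can be illustrated with a minimal sketch. The code below is not the paper's analytical split method; it only demonstrates, under simplifying assumptions (single channel, 'valid' padding, row-wise partitioning), why a Conv split is possible: each output chunk depends only on a local input slice plus a small halo of kh-1 overlapping rows, so partial results computed independently can be concatenated into the exact full output.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive single-channel 2-D 'valid' convolution (cross-correlation)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def split_conv2d(x, k, n_parts):
    """Row-wise Conv split: partition the OUTPUT rows into n_parts chunks.

    Output rows [s, e) need only input rows [s, e + kh - 1), so each
    device gets its slice plus a (kh - 1)-row halo and computes
    independently; stacking the partial outputs reproduces the full result.
    """
    kh = k.shape[0]
    H_out = x.shape[0] - kh + 1
    bounds = np.linspace(0, H_out, n_parts + 1, dtype=int)
    parts = [conv2d_valid(x[s:e + kh - 1], k)        # one chunk per "device"
             for s, e in zip(bounds[:-1], bounds[1:])]
    return np.vstack(parts)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))
k = rng.standard_normal((3, 3))
# The split-and-concatenate result matches the monolithic convolution.
assert np.allclose(conv2d_valid(x, k), split_conv2d(x, k, 4))
```

In a real deployment the chunk boundaries would be set by the resource allocation strategy (faster devices receive larger slices), which is the coupling the paper optimizes with deep reinforcement learning.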