
Electronic data

  • main

    Rights statement: © ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Embedded Computing Systems Volume 19, Issue 1 February 2020 https://dl.acm.org/doi/abs/10.1145/3371154

    Accepted author manuscript, 2 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: 10.1145/3371154


Optimizing Deep Learning Inference on Embedded Systems Through Adaptive Model Selection

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Optimizing Deep Learning Inference on Embedded Systems Through Adaptive Model Selection. / Sanz Marco, Vicent; Taylor, Ben; Wang, Zheng et al.
In: ACM Transactions on Embedded Computing Systems, Vol. 19, No. 1, 2, 01.02.2020.

Research output: Contribution to Journal/Magazine › Journal article › peer-review


Bibtex

@article{ddc3151ce3c34d2ca02141abc60171f8,
title = "Optimizing Deep Learning Inference on Embedded Systems Through Adaptive Model Selection",
abstract = "Deep neural networks (DNNs) are becoming a key enabling technique for many application domains. However, on-device inference on battery-powered, resource-constrained embedding systems is often infeasible due to prohibitively long inferencing time and resource requirements of many DNNs. Offloading computation into the cloud is often unacceptable due to privacy concerns, high latency, or the lack of connectivity. Although compression algorithms often succeed in reducing inferencing times, they come at the cost of reduced accuracy.This article presents a new, alternative approach to enable efficient execution of DNNs on embedded devices. Our approach dynamically determines which DNN to use for a given input by considering the desired accuracy and inference time. It employs machine learning to develop a low-cost predictive model to quickly select a pre-trained DNN to use for a given input and the optimization constraint. We achieve this first by offline training a predictive model and then using the learned model to select a DNN model to use for new, unseen inputs. We apply our approach to two representative DNN domains: image classification and machine translation. We evaluate our approach on a Jetson TX2 embedded deep learning platform and consider a range of influential DNN models including convolutional and recurrent neural networks. For image classification, we achieve a 1.8x reduction in inference time with a 7.52% improvement in accuracy over the most capable single DNN model. For machine translation, we achieve a 1.34x reduction in inference time over the most capable single model with little impact on the quality of translation.",
author = "{Sanz Marco}, Vicent and Ben Taylor and Zheng Wang and Yehia Elkhatib",
note = "{\textcopyright} ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Embedded Computing Systems Volume 19, Issue 1 February 2020 https://dl.acm.org/doi/abs/10.1145/3371154",
year = "2020",
month = feb,
day = "1",
doi = "10.1145/3371154",
language = "English",
volume = "19",
journal = "ACM Transactions on Embedded Computing",
issn = "1539-9087",
publisher = "Association for Computing Machinery (ACM)",
number = "1",

}

RIS

TY - JOUR

T1 - Optimizing Deep Learning Inference on Embedded Systems Through Adaptive Model Selection

AU - Sanz Marco, Vicent

AU - Taylor, Ben

AU - Wang, Zheng

AU - Elkhatib, Yehia

N1 - © ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Embedded Computing Systems Volume 19, Issue 1 February 2020 https://dl.acm.org/doi/abs/10.1145/3371154

PY - 2020/2/1

Y1 - 2020/2/1

N2 - Deep neural networks (DNNs) are becoming a key enabling technique for many application domains. However, on-device inference on battery-powered, resource-constrained embedded systems is often infeasible due to prohibitively long inferencing time and resource requirements of many DNNs. Offloading computation into the cloud is often unacceptable due to privacy concerns, high latency, or the lack of connectivity. Although compression algorithms often succeed in reducing inferencing times, they come at the cost of reduced accuracy. This article presents a new, alternative approach to enable efficient execution of DNNs on embedded devices. Our approach dynamically determines which DNN to use for a given input by considering the desired accuracy and inference time. It employs machine learning to develop a low-cost predictive model to quickly select a pre-trained DNN to use for a given input and the optimization constraint. We achieve this first by offline training a predictive model and then using the learned model to select a DNN model to use for new, unseen inputs. We apply our approach to two representative DNN domains: image classification and machine translation. We evaluate our approach on a Jetson TX2 embedded deep learning platform and consider a range of influential DNN models including convolutional and recurrent neural networks. For image classification, we achieve a 1.8x reduction in inference time with a 7.52% improvement in accuracy over the most capable single DNN model. For machine translation, we achieve a 1.34x reduction in inference time over the most capable single model with little impact on the quality of translation.

AB - Deep neural networks (DNNs) are becoming a key enabling technique for many application domains. However, on-device inference on battery-powered, resource-constrained embedded systems is often infeasible due to prohibitively long inferencing time and resource requirements of many DNNs. Offloading computation into the cloud is often unacceptable due to privacy concerns, high latency, or the lack of connectivity. Although compression algorithms often succeed in reducing inferencing times, they come at the cost of reduced accuracy. This article presents a new, alternative approach to enable efficient execution of DNNs on embedded devices. Our approach dynamically determines which DNN to use for a given input by considering the desired accuracy and inference time. It employs machine learning to develop a low-cost predictive model to quickly select a pre-trained DNN to use for a given input and the optimization constraint. We achieve this first by offline training a predictive model and then using the learned model to select a DNN model to use for new, unseen inputs. We apply our approach to two representative DNN domains: image classification and machine translation. We evaluate our approach on a Jetson TX2 embedded deep learning platform and consider a range of influential DNN models including convolutional and recurrent neural networks. For image classification, we achieve a 1.8x reduction in inference time with a 7.52% improvement in accuracy over the most capable single DNN model. For machine translation, we achieve a 1.34x reduction in inference time over the most capable single model with little impact on the quality of translation.

U2 - 10.1145/3371154

DO - 10.1145/3371154

M3 - Journal article

VL - 19

JO - ACM Transactions on Embedded Computing Systems

JF - ACM Transactions on Embedded Computing Systems

SN - 1539-9087

IS - 1

M1 - 2

ER -
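
Illustrative sketch

The abstract above describes an offline-trained, low-cost predictive model (a "premodel") that, for each new input, selects which pre-trained DNN to run given an accuracy or latency constraint. The sketch below is a minimal, hypothetical illustration of that idea only; the choice of a k-nearest-neighbours premodel, the feature extraction, and all names (extract_features, CANDIDATE_DNNS, train_premodel, select_dnn) are assumptions for illustration, not the authors' implementation.

# Minimal sketch of adaptive model selection (illustrative only; not the
# authors' implementation).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical inventory of pre-trained DNNs, ordered roughly from
# fastest/least accurate to slowest/most accurate.
CANDIDATE_DNNS = ["mobilenet_v1", "resnet_v1_50", "inception_v4"]

def extract_features(image):
    # Cheap, hand-picked per-input features (placeholders): mean brightness,
    # contrast, and pixel count.
    return np.array([image.mean(), image.std(), image.size], dtype=np.float32)

def train_premodel(train_images, best_dnn_index):
    # Offline phase: each training input is labelled with the index of the
    # cheapest candidate DNN that satisfies the accuracy/latency constraint
    # for that input, and a low-cost classifier is fitted on cheap features.
    X = np.stack([extract_features(img) for img in train_images])
    premodel = KNeighborsClassifier(n_neighbors=5)
    premodel.fit(X, best_dnn_index)
    return premodel

def select_dnn(premodel, image):
    # Online phase: one inexpensive prediction picks which pre-trained DNN
    # to invoke for this input; only that DNN is then run.
    idx = int(premodel.predict(extract_features(image)[None, :])[0])
    return CANDIDATE_DNNS[idx]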