
Electronic data

  • main

    Accepted author manuscript, 2 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: 10.1145/3371154

Optimizing Deep Learning Inference on Embedded Systems Through Adaptive Model Selection

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Publication status: Published
Article number: 2
Journal publication date: 1/02/2020
Journal: ACM Transactions on Embedded Computing Systems
Issue number: 1
Volume: 19
Number of pages: 28
Original language: English

Abstract

Deep neural networks (DNNs) are becoming a key enabling technology for many application domains. However, on-device inference on battery-powered, resource-constrained embedded systems is often infeasible due to the prohibitively long inference times and resource requirements of many DNNs. Offloading computation to the cloud is often unacceptable due to privacy concerns, high latency, or lack of connectivity. Although compression algorithms often succeed in reducing inference times, they come at the cost of reduced accuracy.

This article presents a new, alternative approach to enable efficient execution of DNNs on embedded devices. Our approach dynamically determines which DNN to use for a given input by considering the desired accuracy and inference time. It employs machine learning to develop a low-cost predictive model that quickly selects a pre-trained DNN to use for a given input under the optimization constraint. We achieve this by first training the predictive model offline and then using the learned model to select a DNN for new, unseen inputs. We apply our approach to two representative DNN domains: image classification and machine translation. We evaluate our approach on the Jetson TX2 embedded deep learning platform and consider a range of influential DNN models, including convolutional and recurrent neural networks. For image classification, we achieve a 1.8x reduction in inference time with a 7.52% improvement in accuracy over the most capable single DNN model. For machine translation, we achieve a 1.34x reduction in inference time over the most capable single model, with little impact on the quality of translation.
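For illustration only, the following is a minimal sketch of this style of adaptive model selection, not the authors' implementation: a low-cost "premodel" (here a k-nearest-neighbour classifier from scikit-learn, used as a stand-in) is trained offline on cheap per-input features labelled with the cheapest candidate DNN that met the accuracy target during profiling, and is then queried at inference time to decide which pre-trained DNN to run. The feature extractor, model names, helper functions, and placeholder data below are all assumptions made for the sketch.

```python
# Hypothetical sketch of per-input DNN selection via a low-cost premodel.
# Assumes: a fixed set of pre-trained DNNs (cheapest first), cheap per-input
# features, and offline profiling labels saying which DNN suffices per input.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical candidate DNNs, indexed 0..2, ordered cheapest to most capable.
DNN_NAMES = ["mobilenet", "resnet50", "inception_v4"]

def extract_features(inputs):
    """Stand-in for cheap per-input features (e.g. simple image statistics)."""
    return np.asarray(inputs, dtype=float)

# --- Offline phase: train the premodel on profiling data --------------------
# train_labels[i] = index of the cheapest DNN that was accurate enough for
# input i (obtained by running all candidate DNNs offline). Placeholders here.
train_inputs = np.random.rand(200, 8)
train_labels = np.random.randint(0, 3, size=200)

premodel = KNeighborsClassifier(n_neighbors=5)
premodel.fit(extract_features(train_inputs), train_labels)

# --- Online phase: pick a DNN for each new, unseen input ---------------------
def select_dnn(raw_input):
    """Return the name of the DNN the premodel predicts for this input."""
    features = extract_features([raw_input])
    choice = int(premodel.predict(features)[0])
    return DNN_NAMES[choice]

print(select_dnn(np.random.rand(8)))  # e.g. "mobilenet"
```

In this sketch the premodel's prediction cost is negligible next to running even the smallest DNN, which is the property that lets selection reduce average inference time while preserving (or improving) accuracy relative to always running the most capable model.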

Bibliographic note

© ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Embedded Computing Systems Volume 19, Issue 1 February 2020 https://dl.acm.org/doi/abs/10.1145/3371154