
Electronic data

  • ispa18

    Rights statement: ©2018 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 1 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI:


To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper

Published
  • Qin Qing
  • Jialong Yu
  • Jie Ren
  • Ling Gao
  • Hai Wang
  • Jie Zheng
  • Yansong Feng
  • Jianbin Fang
  • Zheng Wang
Publication date: 11/12/2018
Host publication: The 16th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA)
Publisher: IEEE
Pages: 729-736
Number of pages: 8
ISBN (Electronic): 9781728111414
ISBN (Print): 9781728111421
Original language: English
Event: 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications - Melbourne, Australia
Duration: 11/12/2018 - 13/12/2018

Conference

Conference: 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications
Abbreviated title: ISPA/IUCC/BDCloud/SocialCom/SustainCom
Country: Australia
City: Melbourne
Period: 11/12/18 - 13/12/18

Abstract

The recent advances in deep neural networks (DNNs) make them attractive for embedded systems. However, it can take a long time for DNNs to make an inference on resource-constrained computing devices. Model compression techniques can address the computation issue of deep inference on embedded devices. These techniques are highly attractive, as they do not rely on specialized hardware or on computation offloading, which is often infeasible due to privacy concerns or high latency. However, it remains unclear how model compression techniques perform across a wide range of DNNs. To design efficient embedded deep learning solutions, we need to understand their behaviors. This work develops a quantitative approach to characterize model compression techniques on a representative embedded deep learning architecture, the NVIDIA Jetson TX2. We perform extensive experiments by considering 11 influential neural network architectures from the image classification and natural language processing domains. We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and the implications of compression techniques for the model storage size, inference time, energy consumption and performance metrics. We demonstrate that there are opportunities to achieve fast deep inference on embedded systems, but one must carefully choose the compression settings. Our results provide insights on when and how to apply model compression techniques and guidelines for designing efficient embedded deep learning systems.
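To make the two compression techniques studied in the paper concrete, the sketch below illustrates their core ideas on a toy weight vector: symmetric linear (data) quantization of floating-point weights to int8, and magnitude-based pruning. The helper names (`quantize_int8`, `prune_by_magnitude`) and the example weights are illustrative assumptions, not code or settings from the paper.

```python
# Toy illustration of the two compression techniques the paper evaluates.
# Not the authors' implementation; real frameworks apply these per-layer
# and usually fine-tune the model afterwards.

def quantize_int8(weights):
    """Symmetric linear quantization: map floats onto the int range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights; error is bounded by scale / 2."""
    return [q * scale for q in quantized]

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction of the weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.02, -0.51, 0.33, -0.08, 0.97, -0.26]

q, s = quantize_int8(weights)          # 8-bit codes plus one float scale
restored = dequantize(q, s)            # approximate reconstruction
pruned = prune_by_magnitude(weights, sparsity=0.5)  # half the weights zeroed
```

Quantization shrinks storage by replacing each 32-bit float with an 8-bit code (plus a shared scale), while pruning trades accuracy for sparsity; the paper's point is that the right settings for either depend on the network and the metric (size, latency, energy, accuracy) one cares about.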
