
DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 31/07/2023
Journal: IEEE Transactions on Parallel and Distributed Systems
Issue number: 7
Volume: 34
Number of pages: 13
Pages (from-to): 2208-2220
Publication status: Published
Early online date: 23/05/23
Original language: English

Abstract

The heterogeneity of Deep Learning models, libraries, and hardware poses an important challenge for improving model inference performance. Auto-tuners address this challenge via automatic tensor program optimization towards a target-device. However, auto-tuners incur a substantial time cost to complete, because their design necessitates measuring tensor program candidates serially on an isolated target-device to minimize latency measurement inaccuracy. In this article, we propose DOPpler, a parallel auto-tuning measurement infrastructure. DOPpler allows for considerable auto-tuning speedup over conventional approaches whilst maintaining high-quality tensor program optimization. DOPpler accelerates the auto-tuning process with a parallel execution engine that efficiently executes candidate tensor programs in parallel across the CPU-host and GPU target-device, and overcomes measurement inaccuracy with a high-precision on-device technique for measuring tensor program kernel latency. DOPpler automatically calculates the optimal degree of parallelism to provision fast and accurate auto-tuning for different tensor programs, auto-tuners and target-devices. Experiment results show that DOPpler reduces total auto-tuning time by 50.5% on average whilst achieving optimization gains equivalent to conventional auto-tuning infrastructure.
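The two mechanisms the abstract names lend themselves to a short illustration: timing a kernel with events recorded on the GPU stream itself (rather than host-side timers, which fold in launch and scheduling noise), and overlapping CPU-host candidate preparation with device measurement. The sketch below is not DOPpler's code; it assumes PyTorch's CUDA event API as a stand-in for the on-device measurement technique, and build_candidate, the candidate set, and the fixed pool width are hypothetical illustrations of the idea, whereas DOPpler calculates its degree of parallelism automatically.

from concurrent.futures import ThreadPoolExecutor
import torch

def measure_on_device(kernel_fn, repeats: int = 20) -> float:
    # Record start/end events on the GPU stream itself, so the reading
    # reflects kernel latency rather than host launch/scheduling overhead.
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    kernel_fn()                       # warm-up launch
    torch.cuda.synchronize()
    start.record()
    for _ in range(repeats):
        kernel_fn()
    end.record()
    torch.cuda.synchronize()          # wait until both events have fired
    return start.elapsed_time(end) / repeats   # mean latency in ms

def build_candidate(size: int):
    # Hypothetical CPU-host stage: prepare one candidate tensor program.
    # Here it simply closes over the operands of a square matmul.
    a = torch.randn(size, size, device="cuda")
    b = torch.randn(size, size, device="cuda")
    return lambda: a @ b

if __name__ == "__main__":
    sizes = [256, 512, 1024, 2048]
    # Worker threads prepare candidates while the main thread measures
    # already-built ones on the GPU: a fixed-width version of the
    # host/device overlap that DOPpler provisions automatically.
    with ThreadPoolExecutor(max_workers=2) as pool:
        for size, fn in zip(sizes, pool.map(build_candidate, sizes)):
            print(f"{size}x{size} matmul: {measure_on_device(fn):.3f} ms")

Event-based timing is what makes the overlap tolerable: because the timestamps are taken on the device, concurrent host-side work does not distort the measured kernel latency, which is the property that allows the degree of parallelism to be raised without sacrificing measurement accuracy.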