

DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs. / Borowiec, Damian; Yeung, Ging-Fung; Friday, Adrian et al.
In: IEEE Transactions on Parallel and Distributed Systems, Vol. 34, No. 7, 31.07.2023, p. 2208-2220.
Vancouver

Borowiec D, Yeung G-F, Friday A, Harper RHR, Garraghan P. DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs. IEEE Transactions on Parallel and Distributed Systems. 2023 Jul 31;34(7):2208-2220. Epub 2023 May 23. doi: 10.1109/TPDS.2023.3279233

Author

Borowiec, Damian ; Yeung, Ging-Fung ; Friday, Adrian et al. / DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs. In: IEEE Transactions on Parallel and Distributed Systems. 2023 ; Vol. 34, No. 7. pp. 2208-2220.

BibTeX

@article{323b84049fe742fda8f6b5e8b84fab24,
title = "DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs",
abstract = "The heterogeneity of Deep Learning models, libraries, and hardware poses an important challenge for improving model inference performance. Auto-tuners address this challenge via automatic tensor program optimization towards a target-device. However, auto-tuners incur a substantial time cost to complete given their design necessitates performing tensor program candidate measurements serially within an isolated target-device to minimize latency measurement inaccuracy. In this article we propose DOPpler, a parallel auto-tuning measurement infrastructure. DOPpler allows for considerable auto-tuning speedup over conventional approaches whilst maintaining high-quality tensor program optimization. DOPpler accelerates the auto-tuning process by proposing a parallel execution engine to efficiently execute candidate tensor programs in parallel across the CPU-host and GPU target-device, and overcomes measurement inaccuracy by introducing a high-precision on-device measurement technique when measuring tensor program kernel latency. DOPpler is designed to automatically calculate the optimal degree of parallelism to provision fast and accurate auto-tuning for different tensor programs, auto-tuners and target-devices. Experiment results show that DOPpler reduces total auto-tuning time by 50.5% on average whilst achieving optimization gains equivalent to conventional auto-tuning infrastructure.",
keywords = "Measuring Program Latency, Program Auto-tuning, Deep Learning Compilers, Deep Learning Systems",
author = "Damian Borowiec and Ging-Fung Yeung and Adrian Friday and R.H.R. Harper and Peter Garraghan",
year = "2023",
month = jul,
day = "31",
doi = "10.1109/TPDS.2023.3279233",
language = "English",
volume = "34",
pages = "2208--2220",
journal = "IEEE Transactions on Parallel and Distributed Systems",
issn = "1045-9219",
publisher = "IEEE Computer Society",
number = "7",
}

RIS

TY - JOUR

T1 - DOPpler: Parallel Measurement Infrastructure for Auto-tuning Deep Learning Tensor Programs

AU - Borowiec, Damian

AU - Yeung, Ging-Fung

AU - Friday, Adrian

AU - Harper, R.H.R.

AU - Garraghan, Peter

PY - 2023/7/31

Y1 - 2023/7/31

N2 - The heterogeneity of Deep Learning models, libraries, and hardware poses an important challenge for improving model inference performance. Auto-tuners address this challenge via automatic tensor program optimization towards a target-device. However, auto-tuners incur a substantial time cost to complete given their design necessitates performing tensor program candidate measurements serially within an isolated target-device to minimize latency measurement inaccuracy. In this article we propose DOPpler, a parallel auto-tuning measurement infrastructure. DOPpler allows for considerable auto-tuning speedup over conventional approaches whilst maintaining high-quality tensor program optimization. DOPpler accelerates the auto-tuning process by proposing a parallel execution engine to efficiently execute candidate tensor programs in parallel across the CPU-host and GPU target-device, and overcomes measurement inaccuracy by introducing a high-precision on-device measurement technique when measuring tensor program kernel latency. DOPpler is designed to automatically calculate the optimal degree of parallelism to provision fast and accurate auto-tuning for different tensor programs, auto-tuners and target-devices. Experiment results show that DOPpler reduces total auto-tuning time by 50.5% on average whilst achieving optimization gains equivalent to conventional auto-tuning infrastructure.

AB - The heterogeneity of Deep Learning models, libraries, and hardware poses an important challenge for improving model inference performance. Auto-tuners address this challenge via automatic tensor program optimization towards a target-device. However, auto-tuners incur a substantial time cost to complete given their design necessitates performing tensor program candidate measurements serially within an isolated target-device to minimize latency measurement inaccuracy. In this article we propose DOPpler, a parallel auto-tuning measurement infrastructure. DOPpler allows for considerable auto-tuning speedup over conventional approaches whilst maintaining high-quality tensor program optimization. DOPpler accelerates the auto-tuning process by proposing a parallel execution engine to efficiently execute candidate tensor programs in parallel across the CPU-host and GPU target-device, and overcomes measurement inaccuracy by introducing a high-precision on-device measurement technique when measuring tensor program kernel latency. DOPpler is designed to automatically calculate the optimal degree of parallelism to provision fast and accurate auto-tuning for different tensor programs, auto-tuners and target-devices. Experiment results show that DOPpler reduces total auto-tuning time by 50.5% on average whilst achieving optimization gains equivalent to conventional auto-tuning infrastructure.

KW - Measuring Program Latency

KW - Program Auto-tuning

KW - Deep Learning Compilers

KW - Deep Learning Systems

U2 - 10.1109/TPDS.2023.3279233

DO - 10.1109/TPDS.2023.3279233

M3 - Journal article

VL - 34

SP - 2208

EP - 2220

JO - IEEE Transactions on Parallel and Distributed Systems

JF - IEEE Transactions on Parallel and Distributed Systems

SN - 1045-9219

IS - 7

ER -