
Electronic data

  • ICA3PP - Horus - Yeung (Accepted)

    Accepted author manuscript, 882 KB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Horus: An Interference-aware Resource Manager for Deep Learning Systems

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper

Forthcoming

Standard

Horus : An Interference-aware Resource Manager for Deep Learning Systems. / Yeung, Gingfung; Borowiec, Damian; Yang, Renyu; Friday, Adrian; Harper, R.H.R.; Garraghan, Peter.

20th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2020). 2020.

Harvard

Yeung, G, Borowiec, D, Yang, R, Friday, A, Harper, RHR & Garraghan, P 2020, Horus: An Interference-aware Resource Manager for Deep Learning Systems. in 20th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2020).

APA

Yeung, G., Borowiec, D., Yang, R., Friday, A., Harper, R. H. R., & Garraghan, P. (Accepted/In press). Horus: An Interference-aware Resource Manager for Deep Learning Systems. In 20th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2020)

Vancouver

Yeung G, Borowiec D, Yang R, Friday A, Harper RHR, Garraghan P. Horus: An Interference-aware Resource Manager for Deep Learning Systems. In 20th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2020). 2020

Author

Yeung, Gingfung ; Borowiec, Damian ; Yang, Renyu ; Friday, Adrian ; Harper, R.H.R. ; Garraghan, Peter. / Horus : An Interference-aware Resource Manager for Deep Learning Systems. 20th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2020). 2020.

Bibtex

@inproceedings{424f183a402a49379d52cda3496cfa8b,
title = "Horus: An Interference-aware Resource Manager for Deep Learning Systems",
abstract = "Deep Learning (DL) models are deployed as jobs within machines containing GPUs. These DL systems - ranging from a single GPU device to machine clusters - require state-of-the-art resource management to increase resource utilization and job throughput. While co-location - multiple jobs sharing the same GPU - has been identified as an effective means to achieve this, it incurs performance interference that directly degrades DL training and inference performance. Existing approaches to mitigate interference require resource-intensive and time-consuming kernel profiling, ill-suited for runtime scheduling decisions, and current DL system resource managers are not designed to deal with these problems. This paper proposes Horus, an interference-aware resource manager for DL systems. Instead of leveraging expensive kernel profiling, our approach estimates job resource utilization and co-location patterns to determine effective DL job placements that minimize the likelihood of interference and improve system resource utilization and makespan. Our analysis shows that interference causes up to a 3.2x DL job slowdown. We integrated our approach within the Kubernetes resource manager and conducted experiments in a DL cluster by training 2,500 DL jobs using 13 different model types. Results demonstrate that Horus outperforms other DL resource managers by up to 61.5% for resource utilization and 33.6% for makespan.",
keywords = "Machine Learning Systems, Performance Interference, Deep Learning, GPU Scheduling, Cluster resource management",
author = "Gingfung Yeung and Damian Borowiec and Renyu Yang and Adrian Friday and R.H.R. Harper and Peter Garraghan",
year = "2020",
month = jul,
day = "11",
language = "English",
booktitle = "20th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2020)",

}
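The abstract describes placing DL jobs by estimating resource utilization rather than kernel profiling. The following is a minimal illustrative sketch of that idea only, not the paper's actual scheduler: it assumes a hypothetical additive utilization model and invented names (`place_job`, `predicted_utilization`), and simply places each job on the GPU with the lowest predicted combined utilization.

```python
# Illustrative sketch of interference-aware placement: score candidate
# GPUs by the predicted combined utilization of co-located jobs, and
# place the job where interference is least likely. The additive
# utilization model and all names here are assumptions for exposition.

def predicted_utilization(gpu_jobs, new_job):
    """Estimate combined utilization if new_job joins this GPU."""
    return sum(job["util"] for job in gpu_jobs) + new_job["util"]

def place_job(cluster, new_job, threshold=1.0):
    """Pick the GPU with the lowest predicted utilization, skipping
    placements whose prediction exceeds the interference threshold.
    Returns the chosen GPU name, or None if no safe placement exists."""
    best_gpu, best_score = None, float("inf")
    for gpu, jobs in cluster.items():
        score = predicted_utilization(jobs, new_job)
        if score <= threshold and score < best_score:
            best_gpu, best_score = gpu, score
    if best_gpu is not None:
        cluster[best_gpu].append(new_job)
    return best_gpu

# Example: gpu1 is less loaded, so the new job lands there.
cluster = {"gpu0": [{"util": 0.6}], "gpu1": [{"util": 0.2}]}
print(place_job(cluster, {"util": 0.3}))
```

In the paper itself, the utilization estimates come from a learned prediction model and placement also accounts for co-location patterns and makespan; the greedy threshold rule above is only a stand-in for that logic.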

RIS

TY - GEN

T1 - Horus

T2 - An Interference-aware Resource Manager for Deep Learning Systems

AU - Yeung, Gingfung

AU - Borowiec, Damian

AU - Yang, Renyu

AU - Friday, Adrian

AU - Harper, R.H.R.

AU - Garraghan, Peter

PY - 2020/7/11

Y1 - 2020/7/11

N2 - Deep Learning (DL) models are deployed as jobs within machines containing GPUs. These DL systems - ranging from a single GPU device to machine clusters - require state-of-the-art resource management to increase resource utilization and job throughput. While co-location - multiple jobs sharing the same GPU - has been identified as an effective means to achieve this, it incurs performance interference that directly degrades DL training and inference performance. Existing approaches to mitigate interference require resource-intensive and time-consuming kernel profiling, ill-suited for runtime scheduling decisions, and current DL system resource managers are not designed to deal with these problems. This paper proposes Horus, an interference-aware resource manager for DL systems. Instead of leveraging expensive kernel profiling, our approach estimates job resource utilization and co-location patterns to determine effective DL job placements that minimize the likelihood of interference and improve system resource utilization and makespan. Our analysis shows that interference causes up to a 3.2x DL job slowdown. We integrated our approach within the Kubernetes resource manager and conducted experiments in a DL cluster by training 2,500 DL jobs using 13 different model types. Results demonstrate that Horus outperforms other DL resource managers by up to 61.5% for resource utilization and 33.6% for makespan.

AB - Deep Learning (DL) models are deployed as jobs within machines containing GPUs. These DL systems - ranging from a single GPU device to machine clusters - require state-of-the-art resource management to increase resource utilization and job throughput. While co-location - multiple jobs sharing the same GPU - has been identified as an effective means to achieve this, it incurs performance interference that directly degrades DL training and inference performance. Existing approaches to mitigate interference require resource-intensive and time-consuming kernel profiling, ill-suited for runtime scheduling decisions, and current DL system resource managers are not designed to deal with these problems. This paper proposes Horus, an interference-aware resource manager for DL systems. Instead of leveraging expensive kernel profiling, our approach estimates job resource utilization and co-location patterns to determine effective DL job placements that minimize the likelihood of interference and improve system resource utilization and makespan. Our analysis shows that interference causes up to a 3.2x DL job slowdown. We integrated our approach within the Kubernetes resource manager and conducted experiments in a DL cluster by training 2,500 DL jobs using 13 different model types. Results demonstrate that Horus outperforms other DL resource managers by up to 61.5% for resource utilization and 33.6% for makespan.

KW - Machine Learning Systems

KW - Performance Interference

KW - Deep Learning

KW - GPU Scheduling

KW - Cluster resource management

M3 - Conference contribution/Paper

BT - 20th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2020)

ER -