
Electronic data

  • middleware17-paper74

    Rights statement: © Owner/Author ACM, 2017. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Middleware '17 Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference http://dx.doi.org/10.1145/3135974.3135984

    Accepted author manuscript, 14 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Improving Spark Application Throughput Via Memory Aware Task Co-location: A Mixture of Experts Approach

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Improving Spark Application Throughput Via Memory Aware Task Co-location: A Mixture of Experts Approach. / Sanz Marco, Vicent; Taylor, Ben; Porter, Barry Francis et al.
Middleware '17 Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference. New York: ACM, 2017. p. 95-108.


Vancouver

Sanz Marco V, Taylor B, Porter BF, Wang Z. Improving Spark Application Throughput Via Memory Aware Task Co-location: A Mixture of Experts Approach. In Middleware '17 Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference. New York: ACM. 2017. p. 95-108 doi: 10.1145/3135974.3135984

Author

Sanz Marco, Vicent; Taylor, Ben; Porter, Barry Francis et al. / Improving Spark Application Throughput Via Memory Aware Task Co-location: A Mixture of Experts Approach. Middleware '17 Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference. New York: ACM, 2017. pp. 95-108

Bibtex

@inproceedings{8a949fe783d4467dbb923f5974177f04,
title = "Improving Spark Application Throughput Via Memory Aware Task Co-location: A Mixture of Experts Approach",
abstract = "Data analytic applications built upon big data processing frameworks such as Apache Spark are an important class of applications. Many of these applications are not latency-sensitive and thus can run as batch jobs in data centers. By running multiple applications on a computing host, task co-location can significantly improve the server utilization and system throughput. However, effective task co-location is a non-trivial task, as it requires an understanding of the computing resource requirement of the co-running applications, in order to determine what tasks, and how many of them, can be co-located. State-of-the-art co-location schemes either require the user to supply the resource demands which are often far beyond what is needed; or use a one-size-fits-all function to estimate the requirement, which, unfortunately, is unlikely to capture the diverse behaviors of applications. In this paper, we present a mixture-of-experts approach to model the memory behavior of Spark applications. We achieve this by learning, off-line, a range of specialized memory models on a range of typical applications; we then determine at runtime which of the memory models, or experts, best describes the memory behavior of the target application. We show that by accurately estimating the resource level that is needed, a co-location scheme can effectively determine how many applications can be co-located on the same host to improve the system throughput, by taking into consideration the memory and CPU requirements of co-running application tasks. Our technique is applied to a set of representative data analytic applications built upon the Apache Spark framework. We evaluated our approach for system throughput and average normalized turnaround time on a multi-core cluster. Our approach achieves over 83.9% of the performance delivered using an ideal memory predictor.
We obtain, on average, 8.69x improvement on system throughput and a 49% reduction on turnaround time over executing application tasks in isolation, which translates to a 1.28x and 1.68x improvement over a state-of-the-art co-location scheme for system throughput and turnaround time respectively.",
author = "{Sanz Marco}, Vicent and Ben Taylor and Porter, {Barry Francis} and Zheng Wang",
note = "{\textcopyright} Owner/Author ACM, 2017. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Middleware '17 Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference http://dx.doi.org/10.1145/3135974.3135984",
year = "2017",
month = dec,
day = "11",
doi = "10.1145/3135974.3135984",
language = "English",
isbn = "9781450347204",
pages = "95--108",
booktitle = "Middleware '17 Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference",
publisher = "ACM",
address = "New York",
}

RIS

TY - GEN

T1 - Improving Spark Application Throughput Via Memory Aware Task Co-location

T2 - A Mixture of Experts Approach

AU - Sanz Marco, Vicent

AU - Taylor, Ben

AU - Porter, Barry Francis

AU - Wang, Zheng

N1 - © Owner/Author ACM, 2017. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Middleware '17 Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference http://dx.doi.org/10.1145/3135974.3135984

PY - 2017/12/11

Y1 - 2017/12/11

N2 - Data analytic applications built upon big data processing frameworks such as Apache Spark are an important class of applications. Many of these applications are not latency-sensitive and thus can run as batch jobs in data centers. By running multiple applications on a computing host, task co-location can significantly improve the server utilization and system throughput. However, effective task co-location is a non-trivial task, as it requires an understanding of the computing resource requirement of the co-running applications, in order to determine what tasks, and how many of them, can be co-located. State-of-the-art co-location schemes either require the user to supply the resource demands which are often far beyond what is needed; or use a one-size-fits-all function to estimate the requirement, which, unfortunately, is unlikely to capture the diverse behaviors of applications. In this paper, we present a mixture-of-experts approach to model the memory behavior of Spark applications. We achieve this by learning, off-line, a range of specialized memory models on a range of typical applications; we then determine at runtime which of the memory models, or experts, best describes the memory behavior of the target application. We show that by accurately estimating the resource level that is needed, a co-location scheme can effectively determine how many applications can be co-located on the same host to improve the system throughput, by taking into consideration the memory and CPU requirements of co-running application tasks. Our technique is applied to a set of representative data analytic applications built upon the Apache Spark framework. We evaluated our approach for system throughput and average normalized turnaround time on a multi-core cluster. Our approach achieves over 83.9% of the performance delivered using an ideal memory predictor.
We obtain, on average, 8.69x improvement on system throughput and a 49% reduction on turnaround time over executing application tasks in isolation, which translates to a 1.28x and 1.68x improvement over a state-of-the-art co-location scheme for system throughput and turnaround time respectively.

AB - Data analytic applications built upon big data processing frameworks such as Apache Spark are an important class of applications. Many of these applications are not latency-sensitive and thus can run as batch jobs in data centers. By running multiple applications on a computing host, task co-location can significantly improve the server utilization and system throughput. However, effective task co-location is a non-trivial task, as it requires an understanding of the computing resource requirement of the co-running applications, in order to determine what tasks, and how many of them, can be co-located. State-of-the-art co-location schemes either require the user to supply the resource demands which are often far beyond what is needed; or use a one-size-fits-all function to estimate the requirement, which, unfortunately, is unlikely to capture the diverse behaviors of applications. In this paper, we present a mixture-of-experts approach to model the memory behavior of Spark applications. We achieve this by learning, off-line, a range of specialized memory models on a range of typical applications; we then determine at runtime which of the memory models, or experts, best describes the memory behavior of the target application. We show that by accurately estimating the resource level that is needed, a co-location scheme can effectively determine how many applications can be co-located on the same host to improve the system throughput, by taking into consideration the memory and CPU requirements of co-running application tasks. Our technique is applied to a set of representative data analytic applications built upon the Apache Spark framework. We evaluated our approach for system throughput and average normalized turnaround time on a multi-core cluster. Our approach achieves over 83.9% of the performance delivered using an ideal memory predictor.
We obtain, on average, 8.69x improvement on system throughput and a 49% reduction on turnaround time over executing application tasks in isolation, which translates to a 1.28x and 1.68x improvement over a state-of-the-art co-location scheme for system throughput and turnaround time respectively.

U2 - 10.1145/3135974.3135984

DO - 10.1145/3135974.3135984

M3 - Conference contribution/Paper

SN - 9781450347204

SP - 95

EP - 108

BT - Middleware '17 Proceedings of the 18th ACM/IFIP/USENIX Middleware Conference

PB - ACM

CY - New York

ER -
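
The mixture-of-experts idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the expert models, feature vectors, and numbers below are all hypothetical. The sketch captures only the high-level scheme the abstract states: several specialized memory models are trained off-line, the expert that best matches a short runtime profiling sample of the target application is selected, and its per-task memory estimate bounds how many tasks can be co-located on a host.

```python
# Hypothetical sketch of mixture-of-experts memory prediction for
# task co-location. Each "expert" is a simple linear model mapping
# runtime features to an estimated per-task memory demand (in MB).

def predict(expert, features):
    """Evaluate a linear expert: dot(weights, features) + bias."""
    weights, bias = expert
    return sum(w * f for w, f in zip(weights, features)) + bias

def select_expert(experts, sample_features, observed_mb):
    """Pick the expert whose prediction is closest to the memory use
    observed during a brief profiling run of the target application."""
    return min(experts,
               key=lambda e: abs(predict(e, sample_features) - observed_mb))

def max_colocated_tasks(expert, features, host_memory_mb):
    """Bound how many tasks fit in the host's memory budget, given the
    selected expert's per-task memory estimate."""
    per_task = predict(expert, features)
    return max(1, int(host_memory_mb // per_task))

# Toy experts (weights, bias), each trained off-line on a different
# class of workload -- the values here are made up for illustration.
experts = [
    ([2.0, 0.5], 100.0),   # e.g. a memory-hungry, shuffle-heavy class
    ([0.5, 0.1], 50.0),    # e.g. a lighter, CPU-bound class
]

features = [400.0, 200.0]  # hypothetical runtime features
observed = 950.0           # memory observed during profiling (MB)

best = select_expert(experts, features, observed)
print(max_colocated_tasks(best, features, host_memory_mb=16384))
```

With these toy numbers the first expert predicts 1000 MB (error 50) and the second 270 MB (error 680), so the first is selected and 16 tasks fit in a 16 GB budget. The real system additionally accounts for CPU demand and co-runner interference, which this sketch omits.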