
Electronic data

  • cgo_17_2

    Accepted author manuscript, 1.32 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Synthesizing benchmarks for predictive modeling

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Synthesizing benchmarks for predictive modeling. / Cummins, Chris; Petoumenos, Pavlos; Wang, Zheng et al.
CGO '17 Proceedings of the 2017 International Symposium on Code Generation and Optimization. New York: ACM, 2017. p. 86-99.


Harvard

Cummins, C, Petoumenos, P, Wang, Z & Leather, H 2017, Synthesizing benchmarks for predictive modeling. in CGO '17 Proceedings of the 2017 International Symposium on Code Generation and Optimization. ACM, New York, pp. 86-99. <http://dl.acm.org/citation.cfm?id=3049843&CFID=749213426>

APA

Cummins, C., Petoumenos, P., Wang, Z., & Leather, H. (2017). Synthesizing benchmarks for predictive modeling. In CGO '17 Proceedings of the 2017 International Symposium on Code Generation and Optimization (pp. 86-99). ACM. http://dl.acm.org/citation.cfm?id=3049843&CFID=749213426

Vancouver

Cummins C, Petoumenos P, Wang Z, Leather H. Synthesizing benchmarks for predictive modeling. In CGO '17 Proceedings of the 2017 International Symposium on Code Generation and Optimization. New York: ACM. 2017. p. 86-99

Author

Cummins, Chris; Petoumenos, Pavlos; Wang, Zheng et al. / Synthesizing benchmarks for predictive modeling. CGO '17 Proceedings of the 2017 International Symposium on Code Generation and Optimization. New York: ACM, 2017. pp. 86-99

Bibtex

@inproceedings{d66c38cc5ebf4e459d400f585e3c72eb,
title = "Synthesizing benchmarks for predictive modeling",
abstract = "Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27{\texttimes}. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30{\texttimes}.",
author = "Chris Cummins and Pavlos Petoumenos and Zheng Wang and Hugh Leather",
year = "2017",
month = feb,
day = "4",
language = "English",
isbn = "9781509049318",
pages = "86--99",
booktitle = "CGO '17 Proceedings of the 2017 International Symposium on Code Generation and Optimization",
publisher = "ACM",

}
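
The abstract above outlines the approach: learn a model of how developers write OpenCL code from mined open-source repositories, then sample that model to synthesize new runnable benchmarks. As a rough illustration only (the paper trains a deep language model on a large GitHub corpus; the toy character-level n-gram model and seed kernels below are placeholders and do not reflect CLgen's actual implementation or API), the sketch shows the basic train-then-sample loop:

import random
from collections import defaultdict

# Placeholder "corpus": the real system mines thousands of OpenCL files from GitHub.
SEED_KERNELS = [
    "__kernel void add(__global float* a, __global float* b, __global float* c) {\n"
    "  int i = get_global_id(0);\n"
    "  c[i] = a[i] + b[i];\n"
    "}\n",
    "__kernel void scale(__global float* a, float k) {\n"
    "  int i = get_global_id(0);\n"
    "  a[i] = a[i] * k;\n"
    "}\n",
]

ORDER = 8  # characters of context used to predict the next character

def train(corpus, order=ORDER):
    """Record which character follows each length-`order` context in the corpus."""
    model = defaultdict(list)
    for text in corpus:
        padded = "\0" * order + text
        for i in range(order, len(padded)):
            model[padded[i - order:i]].append(padded[i])
    return model

def sample(model, order=ORDER, max_len=400):
    """Generate one candidate kernel by repeatedly sampling the next character."""
    out, context = [], "\0" * order
    for _ in range(max_len):
        followers = model.get(context)
        if not followers:
            break
        ch = random.choice(followers)
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

model = train(SEED_KERNELS)
candidate = sample(model)
# A real pipeline would now reject candidates that fail to compile or execute,
# keeping only runnable programs as additional training benchmarks.
print(candidate)

In practice the sampled candidates would also be checked for compilability and executability, which is how the generator arrives at the "runnable training programs" the abstract describes.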

RIS

TY - GEN

T1 - Synthesizing benchmarks for predictive modeling

AU - Cummins, Chris

AU - Petoumenos, Pavlos

AU - Wang, Zheng

AU - Leather, Hugh

PY - 2017/2/4

Y1 - 2017/2/4

N2 - Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27×. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30×.

AB - Predictive modeling using machine learning is an effective method for building compiler heuristics, but there is a shortage of benchmarks. Typical machine learning experiments outside of the compilation field train over thousands or millions of examples. In machine learning for compilers, however, there are typically only a few dozen common benchmarks available. This limits the quality of learned models, as they have very sparse training data for what are often high-dimensional feature spaces. What is needed is a way to generate an unbounded number of training programs that finely cover the feature space. At the same time the generated programs must be similar to the types of programs that human developers actually write, otherwise the learning will target the wrong parts of the feature space. We mine open source repositories for program fragments and apply deep learning techniques to automatically construct models for how humans write programs. We sample these models to generate an unbounded number of runnable training programs. The quality of the programs is such that even human developers struggle to distinguish our generated programs from hand-written code. We use our generator for OpenCL programs, CLgen, to automatically synthesize thousands of programs and show that learning over these improves the performance of a state of the art predictive model by 1.27×. In addition, the fine covering of the feature space automatically exposes weaknesses in the feature design which are invisible with the sparse training examples from existing benchmark suites. Correcting these weaknesses further increases performance by 4.30×.

M3 - Conference contribution/Paper

SN - 9781509049318

SP - 86

EP - 99

BT - CGO '17 Proceedings of the 2017 International Symposium on Code Generation and Optimization

PB - ACM

CY - New York

ER -
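
The abstract also reports that training over the synthesized programs improves a state-of-the-art predictive model by 1.27×. Purely as a workflow sketch, assuming a decision-tree classifier that predicts the best device (CPU or GPU) for an OpenCL kernel from a few numeric features, the idea is to fit the same kind of model on the hand-written benchmarks augmented with the synthesized ones. The feature names and values below are invented placeholders, not data or results from the paper.

# Workflow sketch only: invented features and labels, not the paper's data.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-kernel features: [compute-to-memory ratio, bytes transferred, work-group size]
handwritten_X = [[2.1, 4096, 64], [0.4, 1048576, 256]]      # only a few dozen exist in practice
handwritten_y = ["GPU", "CPU"]                               # measured best device per kernel

synthetic_X = [[1.7, 8192, 128], [0.2, 524288, 64], [3.5, 2048, 32]]  # thousands, via the generator
synthetic_y = ["GPU", "CPU", "GPU"]

# Fit the predictive model on the combined (hand-written + synthesized) training set.
clf = DecisionTreeClassifier().fit(handwritten_X + synthetic_X,
                                   handwritten_y + synthetic_y)
print(clf.predict([[1.0, 16384, 128]]))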