

Automatic and portable mapping of data parallel programs to OpenCL for GPU-based heterogeneous systems

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Article number: 42
Journal publication date: 01/2015
Journal: ACM Transactions on Architecture and Code Optimization
Volume: 11
Issue number: 4
Number of pages: 26
Publication status: Published
Original language: English

Abstract

General-purpose GPU-based systems are highly attractive because they promise massive performance at little cost, but realizing that potential is challenging due to the complexity of programming them. This article presents a compiler-based approach that automatically generates optimized OpenCL code from data-parallel OpenMP programs for GPUs. A key feature of our scheme is that it leverages existing transformations, especially data transformations, to improve performance on GPU architectures, and it uses machine learning to automatically build a predictive model that determines whether it is worthwhile to run the OpenCL code on the GPU or the OpenMP code on the multicore host. We applied our approach to the entire NAS parallel benchmark suite and evaluated it on two distinct GPU-based systems. Over a sequential baseline, we achieved average speedups of 4.51× and 4.20× (up to 143× and 67×) on Core i7/NVIDIA GeForce GTX 580 and Core i7/AMD Radeon 7970 platforms, respectively. Our approach also achieves, on average, greater than 10× speedups over two state-of-the-art automatic GPU code generators.
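To make the mapping concrete, the sketch below is an illustration only, not output of the paper's compiler: it shows the kind of data-parallel OpenMP loop taken as input, the OpenCL kernel a source-to-source translator of this kind might emit (one work-item per loop iteration), and a placeholder for the learned predictor that decides whether to offload to the GPU or keep the OpenMP version on the multicore host. All names, the kernel text, and the decision rule are assumptions for illustration.

```c
/* Illustrative sketch only -- not the paper's actual compiler output.
 * Kernel text, feature choices, and the decision rule are assumptions. */
#include <stddef.h>

/* Input: a data-parallel OpenMP loop (vector addition). */
void vadd_omp(const float *a, const float *b, float *c, size_t n) {
    #pragma omp parallel for
    for (size_t i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

/* Output: an OpenCL kernel a source-to-source translator could emit,
 * mapping each loop iteration to one work-item. */
static const char *vadd_cl =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c,\n"
    "                   const ulong n) {\n"
    "    size_t i = get_global_id(0);\n"
    "    if (i < n) c[i] = a[i] + b[i];\n"
    "}\n";

/* Stand-in for the learned predictive model: given simple code/data
 * features, decide whether offloading to the GPU is worthwhile.
 * The real model is built automatically from training data; this
 * hand-written threshold is purely hypothetical. */
int run_on_gpu(size_t iterations, size_t bytes_transferred) {
    double work_per_byte =
        (double)iterations / (double)(bytes_transferred + 1);
    return iterations > (1u << 20) && work_per_byte > 0.05;
}
```

At runtime, generated code in this style would extract the loop's features, query the predictor, and dispatch either the OpenMP version on the host or the OpenCL kernel on the GPU, which is the device-selection decision the abstract describes.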