
Towards a holistic approach to auto-parallelization: integrating profile-driven parallelism detection and machine-learning based mapping

Research output: Contribution in Book/Report/Proceedings › Paper

Published

  • Georgios Tournavitis
  • Zheng Wang
  • Björn Franke
  • Michael F.P. O'Boyle
Publication date: 2009
Host publication: Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2009)
Place of publication: New York, NY, USA
Publisher: ACM
Pages: 177-187
Number of pages: 11
ISBN (Print): 978-1-60558-392-1
Original language: English

Conference

Conference: PLDI 2009, the 2009 ACM SIGPLAN Conference on Programming Language Design and Implementation
Country: Ireland
City: Dublin
Period: 15/06/09 – 20/06/09

Abstract

Compiler-based auto-parallelization is a much-studied area, yet it has still not found widespread application. This is largely due to poor exploitation of application parallelism, which results in performance levels far below those a skilled expert programmer could achieve. We identify two weaknesses in traditional parallelizing compilers and propose a novel, integrated approach that yields significant performance improvements in the generated parallel code. Using profile-driven parallelism detection, we overcome the limitations of static analysis, enabling us to identify more application parallelism and rely on the user only for final approval. In addition, we replace the traditional target-specific and inflexible mapping heuristics with a machine-learning based prediction mechanism, resulting in better mapping decisions while providing more scope for adaptation to different target architectures. We have evaluated our parallelization strategy on the NAS and SPEC OMP benchmarks and on two different multi-core platforms (a dual quad-core Intel Xeon SMP and a dual-socket QS20 Cell blade). We demonstrate that our approach not only yields significant improvements over state-of-the-art parallelizing compilers, but comes close to, and sometimes exceeds, the performance of manually parallelized codes. On average, our methodology achieves 96% of the performance of the hand-tuned OpenMP NAS and SPEC parallel benchmarks on the Intel Xeon platform and gains a significant speedup on the IBM Cell platform, demonstrating the potential of profile-guided, machine-learning based parallelization for complex multi-core platforms.
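The abstract's machine-learning based mapping idea can be illustrated with a minimal sketch: a classifier trained on profiled loop features predicts whether parallelizing a candidate loop is likely to be profitable on a given target. The features, training data, and nearest-neighbour model below are purely illustrative assumptions, not the paper's actual model or feature set.

```python
import math

# Hypothetical profiled loops: (features, label). Features (both assumptions):
# (log10 of iteration count, average work per iteration in cycles).
# Label: True = parallelizing paid off on the target, False = serial was faster.
TRAINING = [
    ((6.0, 500.0), True),   # many iterations, heavy body -> parallel won
    ((5.5, 200.0), True),
    ((2.0, 50.0),  False),  # few iterations, light body -> overhead dominated
    ((3.0, 10.0),  False),
    ((4.0, 300.0), True),
    ((1.0, 400.0), False),
]

def predict_parallel(features, training=TRAINING, k=3):
    """k-nearest-neighbour vote: should this candidate loop be parallelized?"""
    # Sort training points by Euclidean distance to the new loop's features.
    dists = sorted((math.dist(features, f), label) for f, label in training)
    votes = [label for _, label in dists[:k]]
    return votes.count(True) > k // 2

# A hot loop with many iterations and substantial per-iteration work:
print(predict_parallel((5.8, 350.0)))  # True under this toy training set
# A short, light loop where spawning threads would likely cost more than it saves:
print(predict_parallel((1.5, 30.0)))   # False
```

Retargeting such a predictor to a new architecture then amounts to collecting new training runs rather than hand-rewriting heuristics, which is the flexibility the abstract claims over fixed, target-specific mapping rules.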