
Electronic data

  • aspmv

    Rights statement: The final publication is available at Springer via http://dx.doi.org/10.1007/s10766-018-00625-8

    Accepted author manuscript, 1 MB, PDF document

    Embargo ends: 1/01/20

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Optimizing Sparse Matrix-Vector Multiplications on an ARMv8-based Many-Core Architecture

Research output: Contribution to journal › Journal article

Published
  • Donglin Chen
  • Jianbin Fang
  • Shizhao Chen
  • Chuanfu Xu
  • Zheng Wang
Journal publication date: 1/06/2019
Journal: International Journal of Parallel Programming
Issue number: 3
Volume: 47
Number of pages: 15
Pages (from-to): 418-432
Publication status: Published
Early online date: 1/01/19
Original language: English

Abstract

Sparse matrix–vector multiplication (SpMV) is common in scientific and HPC applications but is hard to optimize. While ARMv8-based processor IP is emerging as an alternative to the traditional x64 HPC processor design, there has been little study of SpMV performance on such new many-cores. To design efficient HPC software and hardware, we need to understand how well SpMV performs. This work develops a quantitative approach to characterize SpMV performance on a recent ARMv8-based many-core architecture, the Phytium FT-2000 Plus (FTP). We conduct extensive experiments involving over 9,500 distinct profiling runs on 956 sparse datasets and five mainstream sparse matrix storage formats, and compare FTP against the Intel Knights Landing many-core. We show experimentally that picking the optimal sparse matrix storage format and parameters is non-trivial, as the correct decision requires expert knowledge of both the input matrix and the hardware. We address this problem by proposing a machine-learning-based model that predicts the best storage format and parameters from input matrix features. The model automatically specializes to the many-core architectures we considered. Experimental results show that our approach achieves, on average, 93% of the best-available performance without incurring runtime profiling overhead.
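To make the two ingredients of the abstract concrete, here is a minimal sketch (not the authors' code, and independent of any particular hardware): an SpMV kernel over the CSR storage format, one of the mainstream formats the paper evaluates, plus the kind of cheap structural features of the input matrix that a machine-learning model could use to predict the best-performing format. The feature names are illustrative assumptions, not the paper's actual feature set.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a sparse matrix A stored in CSR form.

    values  : nonzero values, row by row
    col_idx : column index of each nonzero
    row_ptr : row_ptr[i]..row_ptr[i+1] spans row i's nonzeros
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y


def matrix_features(row_ptr, n_cols):
    """Illustrative structural features of the sparsity pattern,
    of the sort a format-prediction model might consume."""
    n_rows = len(row_ptr) - 1
    nnz = row_ptr[-1]
    nnz_per_row = [row_ptr[i + 1] - row_ptr[i] for i in range(n_rows)]
    return {
        "nnz": nnz,
        "density": nnz / (n_rows * n_cols),
        "max_nnz_per_row": max(nnz_per_row),
        "mean_nnz_per_row": nnz / n_rows,
    }


# 3x3 example matrix:
# [[1, 0, 2],
#  [0, 3, 0],
#  [4, 0, 5]]
values = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
x = [1.0, 1.0, 1.0]
print(spmv_csr(values, col_idx, row_ptr, x))  # [3.0, 3.0, 9.0]
print(matrix_features(row_ptr, 3))
```

In practice, features like these would be fed to a trained classifier that outputs the predicted best format (e.g. CSR vs. ELL vs. COO), avoiding the cost of profiling every format at runtime as the abstract describes.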

Bibliographic note

The final publication is available at Springer via http://dx.doi.org/10.1007/s10766-018-00625-8