SpV8: Pursuing Optimal Vectorization and Regular Computation Pattern in SpMV
| Published in: | 2021 58th ACM/IEEE Design Automation Conference (DAC), pp. 661-666 |
|---|---|
| Main Authors: | |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 05.12.2021 |
| Summary: | Sparse Matrix-Vector Multiplication (SpMV) plays an important role in many scientific and industrial applications, and remains a well-known challenge due to its high sparsity and irregularity. Most existing research on SpMV pursues high vectorization efficiency. However, such approaches may suffer from a non-negligible speculation penalty caused by their irregular computation patterns. In this paper, we propose SpV8, a novel approach that optimizes both speculation and vectorization in SpMV. Specifically, SpV8 analyzes the data distribution in different matrices and row panels, and accordingly applies the optimization method that achieves maximal vectorization with regular computation patterns. We evaluate SpV8 on an Intel Xeon CPU and compare it with multiple state-of-the-art SpMV algorithms on 71 sparse matrices. The results show that SpV8 achieves up to 10× speedup (average 2.8×) over the standard MKL SpMV routine, and up to 2.4× speedup (average 1.4×) over the best existing approach. Moreover, SpV8 has very low preprocessing overhead among all compared approaches, which indicates that SpV8 is highly applicable in real-world applications. |
| DOI: | 10.1109/DAC18074.2021.9586251 |
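
To make the CSR and row-panel terminology in the summary concrete, below is a minimal, self-contained C sketch of a plain CSR SpMV kernel that visits the matrix in fixed-height row panels. This is not the authors' SpV8 implementation: the function name `spmv_csr_panels`, the panel height of 8, and the example matrix are illustrative assumptions, and the per-panel strategy selection that SpV8 performs (choosing a vectorization scheme based on the nonzero distribution inside each panel) is only indicated in a comment.

```c
/*
 * Minimal CSR SpMV sketch (not the SpV8 implementation).
 * Rows are visited in fixed-height "panels" only to make the
 * row-panel terminology from the abstract concrete; the panel
 * height of 8 is an illustrative assumption.
 */
#include <stdio.h>

#define PANEL_ROWS 8  /* assumed panel height, chosen for illustration */

/* y = A * x, with A stored in CSR form (row_ptr, col_idx, val). */
static void spmv_csr_panels(int n_rows,
                            const int *row_ptr,
                            const int *col_idx,
                            const double *val,
                            const double *x,
                            double *y)
{
    for (int panel = 0; panel < n_rows; panel += PANEL_ROWS) {
        int panel_end = panel + PANEL_ROWS;
        if (panel_end > n_rows)
            panel_end = n_rows;

        /* A SpV8-style kernel would pick a per-panel strategy here
         * (e.g. vectorize across rows or within a row) based on how
         * the nonzeros are distributed inside the panel. This sketch
         * just performs the plain scalar reduction per row. */
        for (int r = panel; r < panel_end; ++r) {
            double sum = 0.0;
            for (int k = row_ptr[r]; k < row_ptr[r + 1]; ++k)
                sum += val[k] * x[col_idx[k]];
            y[r] = sum;
        }
    }
}

int main(void)
{
    /* 3x3 example matrix [[4,0,1],[0,2,0],[3,0,5]] in CSR form. */
    int row_ptr[] = {0, 2, 3, 5};
    int col_idx[] = {0, 2, 1, 0, 2};
    double val[]  = {4.0, 1.0, 2.0, 3.0, 5.0};
    double x[]    = {1.0, 2.0, 3.0};
    double y[3];

    spmv_csr_panels(3, row_ptr, col_idx, val, x, y);
    printf("%.1f %.1f %.1f\n", y[0], y[1], y[2]);  /* expected: 7.0 4.0 18.0 */
    return 0;
}
```

The panel loop is the natural place to hang per-panel decisions, which is why the abstract's claim of low preprocessing overhead matters: any per-panel analysis has to be cheap enough to amortize over a single SpMV call or a small number of repeated calls.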