Locality-aware parallel block-sparse matrix-matrix multiplication using the Chunks and Tasks programming model


Bibliographic Details
Published in: Parallel Computing, Vol. 57, pp. 87–106
Main Authors: Rubensson, Emanuel H., Rudberg, Elias
Format: Journal Article
Language:English
Published: Elsevier B.V., 01.09.2016
ISSN: 0167-8191, 1872-7336
Description
Summary:

Highlights:
• We present a method for parallel block-sparse matrix-matrix multiplication.
• A distributed quadtree matrix representation allows exploitation of data locality.
• The quadtree structure is implemented using the Chunks and Tasks programming model.
• Data locality is exploited without prior information about the matrix sparsity pattern.
• Constant communication per node on average is achieved in weak scaling tests.

We present a method for parallel block-sparse matrix-matrix multiplication on distributed memory clusters. By using a quadtree matrix representation, data locality is exploited without prior information about the matrix sparsity pattern. A distributed quadtree matrix representation is straightforward to implement due to our recent development of the Chunks and Tasks programming model [Parallel Comput. 40, 328 (2014)]. The quadtree representation combined with the Chunks and Tasks model leads to favorable weak and strong scaling of the communication cost with the number of processes, as shown both theoretically and in numerical experiments. Matrices are represented by sparse quadtrees of chunk objects. The leaves in the hierarchy are block-sparse submatrices. Sparsity is dynamically detected by the matrix library and may occur at any level in the hierarchy and/or within the submatrix leaves. In case graphics processing units (GPUs) are available, both CPUs and GPUs are used for leaf-level multiplication work, thus making use of the full computing capacity of each node. The performance is evaluated for matrices with different sparsity structures, including examples from electronic structure calculations. Compared to methods that do not exploit data locality, our locality-aware approach reduces communication significantly, achieving essentially constant communication per node in weak scaling tests.
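The core idea in the abstract, a sparse quadtree whose zero quadrants are pruned so that recursive multiplication skips structurally zero blocks at every level, can be illustrated with a minimal serial sketch. This is not the authors' Chunks and Tasks implementation (which distributes chunks over a cluster and uses block-sparse leaves); the class and function names here are hypothetical, and square power-of-two dimensions are assumed for brevity.

```python
import numpy as np

LEAF_SIZE = 2  # hypothetical smallest block size; the paper uses block-sparse leaves


class QuadTreeMatrix:
    """Minimal serial sketch of a sparse quadtree matrix.

    A node is either a dense leaf block or four child quadrants
    [upper-left, upper-right, lower-left, lower-right]. A structurally
    zero quadrant is represented by None and pruned from the tree.
    Assumes square matrices with power-of-two dimensions.
    """

    def __init__(self, leaf=None, children=None):
        self.leaf = leaf
        self.children = children

    @classmethod
    def from_dense(cls, a):
        if not a.any():
            return None  # dynamic sparsity detection: zero blocks are dropped
        n = a.shape[0]
        if n <= LEAF_SIZE:
            return cls(leaf=a.copy())
        h = n // 2
        return cls(children=[
            cls.from_dense(a[:h, :h]), cls.from_dense(a[:h, h:]),
            cls.from_dense(a[h:, :h]), cls.from_dense(a[h:, h:]),
        ])

    def to_dense(self, n):
        if self.leaf is not None:
            return self.leaf
        h = n // 2
        out = np.zeros((n, n))
        for child, (i, j) in zip(self.children, [(0, 0), (0, 1), (1, 0), (1, 1)]):
            if child is not None:
                out[i * h:(i + 1) * h, j * h:(j + 1) * h] = child.to_dense(h)
        return out


def qt_add(x, y):
    """Add two quadtrees; None stands for a zero block."""
    if x is None:
        return y
    if y is None:
        return x
    if x.leaf is not None:
        return QuadTreeMatrix(leaf=x.leaf + y.leaf)
    return QuadTreeMatrix(children=[qt_add(p, q) for p, q in zip(x.children, y.children)])


def qt_multiply(a, b):
    """Recursive quadtree multiply: C_ij = A_i0 B_0j + A_i1 B_1j.

    Zero (None) operands short-circuit, so no arithmetic (and, in a
    distributed setting, no communication) is generated for structurally
    zero blocks at any level of the hierarchy.
    """
    if a is None or b is None:
        return None
    if a.leaf is not None:
        return QuadTreeMatrix(leaf=a.leaf @ b.leaf)
    c = [qt_add(qt_multiply(a.children[2 * i], b.children[j]),
                qt_multiply(a.children[2 * i + 1], b.children[2 + j]))
         for i in range(2) for j in range(2)]
    return None if all(q is None for q in c) else QuadTreeMatrix(children=c)
```

In the paper, the same recursion is expressed with tasks operating on chunk identifiers, so the runtime can place work where the data already resides; the sketch above only shows how pruned quadrants make the multiplication cost follow the sparsity structure rather than the full matrix dimension.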
DOI:10.1016/j.parco.2016.06.005