Programming parallel dense matrix factorizations and inversion for new-generation NUMA architectures


Bibliographic Details
Published in: Journal of Parallel and Distributed Computing, Vol. 175, pp. 51–65
Main Authors: Catalán, Sandra, Igual, Francisco D., Herrero, José R., Rodríguez-Sánchez, Rafael, Quintana-Ortí, Enrique S.
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.05.2023
ISSN: 0743-7315, 1096-0848
Description
Summary: We propose a methodology to address the programmability issues derived from the emergence of new-generation shared-memory NUMA architectures. For this purpose, we employ dense matrix factorizations and matrix inversion (DMFI) as a use case, and we target two modern architectures (AMD Rome and Huawei Kunpeng 920) that exhibit configurable NUMA topologies. Our methodology pursues performance portability across different NUMA configurations by proposing multi-domain implementations for DMFI plus a hybrid task- and loop-level parallelization that configures multi-threaded executions to fix core-to-data binding, exploiting locality at the expense of minor code modifications. In addition, we introduce a generalization of the multi-domain implementations for DMFI that offers support for virtually any NUMA topology in present and future architectures. Our experimentation on the two target architectures for three representative dense linear algebra operations validates the proposal, reveals insights on the necessity of adapting both the codes and their execution to improve data access locality, and reports performance across architectures and inter- and intra-socket NUMA configurations competitive with state-of-the-art message-passing implementations, maintaining the ease of development usually associated with shared-memory programming.

Highlights:
• Exposure of the performance penalty introduced by NUMA-oblivious implementations.
• Demonstration that a high-level approach can largely diminish the programming effort.
• Demonstration of performance boost when algorithms span across several NUMA domains.
• Validation via matrix factorization and inversion on state-of-the-art NUMA servers.
DOI: 10.1016/j.jpdc.2023.01.004