Automatic generation of ARM NEON micro-kernels for matrix multiplication

Bibliographic Details
Published in: The Journal of supercomputing, Vol. 80, no. 10, pp. 13873-13899
Main Authors: Alaejos, Guillermo, Martínez, Héctor, Castelló, Adrián, Dolz, Manuel F., Igual, Francisco D., Alonso-Jordá, Pedro, Quintana-Ortí, Enrique S.
Format: Journal Article
Language: English
Published: New York: Springer US, 01.07.2024
Springer Nature B.V.
ISSN: 0920-8542, 1573-0484
Description
Summary: General matrix multiplication (gemm) is a fundamental kernel in scientific computing and in current frameworks for deep learning. Modern realisations of gemm are mostly written in C, on top of a small, highly tuned micro-kernel that is usually encoded in assembly. High-performance realisations of gemm in linear algebra libraries generally include a single micro-kernel per architecture, usually implemented by an expert. In this paper, we explore two paths to automatically generate gemm micro-kernels: C++ templates with vector intrinsics, and high-level Python scripts that directly produce assembly code. Both solutions can integrate high-performance software techniques, such as loop unrolling and software pipelining, accommodate any data type, and easily generate micro-kernels of any requested dimension. The performance of this solution is tested on three ARM-based cores and compared with state-of-the-art libraries for these processors: BLIS, OpenBLAS and ArmPL. The experimental results show that the auto-generation approach is highly competitive, mainly due to the possibility of adapting the micro-kernel to the problem dimensions.
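The Python-generator path described in the abstract can be illustrated with a toy sketch. The generator below emits fully unrolled AArch64 NEON assembly for the inner update C += A * B of an mr x nr micro-kernel, broadcasting each element of a B row and issuing one fused multiply-add (fmla) per vector register of C. The function name, register assignment, and calling convention (A panel in x0, B panel in x1) are illustrative assumptions, not the authors' actual scripts, which handle packing, loop pipelining, and edge cases.

```python
def gen_microkernel(mr: int, nr: int, vlen: int = 4) -> str:
    """Emit unrolled AArch64 NEON assembly text for one rank-1 update of an
    mr x nr single-precision gemm micro-kernel (hypothetical sketch, not the
    paper's generator). vlen = floats per 128-bit NEON vector register."""
    assert mr % vlen == 0, "micro-kernel rows must fill whole vector registers"
    rows = mr // vlen                    # vector registers per column of C
    lines = []
    # Load one mr-element column of the packed A micro-panel (base in x0).
    for i in range(rows):
        lines.append(f"ldr q{i}, [x0, #{16 * i}]"
                     f"              // A[{vlen * i}:{vlen * (i + 1)}]")
    # For each of the nr elements of the B row: broadcast it, then accumulate
    # into the corresponding column of C kept in registers.
    for j in range(nr):
        lines.append(f"ld1r {{v{rows + j}.4s}}, [x1], #4"
                     f"      // broadcast B[{j}]")
        for i in range(rows):
            c = rows + nr + j * rows + i   # accumulator register for C[:, j]
            lines.append(f"fmla v{c}.4s, v{i}.4s, v{rows + j}.4s"
                         f"  // C[{vlen * i}:{vlen * (i + 1)},{j}] += A*B[{j}]")
    return "\n".join(lines)

# Example: an 8x4 micro-kernel -> 2 A loads, 4 broadcasts, 8 fmla instructions.
print(gen_microkernel(8, 4))
```

Because the generator is parametrised in mr and nr, the same script can produce a micro-kernel matched to the problem dimensions, which is the adaptability the abstract credits for the competitive performance.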
DOI: 10.1007/s11227-024-05955-8