Performance Portability Assessment in Gaia
Saved in:
| Title: | Performance Portability Assessment in Gaia |
|---|---|
| Authors: | Giulio Malenza, Valentina Cesare, Marco Edoardo Santimaria, Robert Birke, Alberto Vecchiato, Ugo Becciani, Marco Aldinucci |
| Source: | IEEE Transactions on Parallel and Distributed Systems, 36:2045-2057 |
| Publisher information: | Institute of Electrical and Electronics Engineers (IEEE), 2025. |
| Publication year: | 2025 |
| Subjects: | High-performance computing, performance portability, portable languages, GPU programming, CPU and GPU architectures, astrometry |
| Description: | Modern scientific experiments produce ever-increasing amounts of data, soon requiring ExaFLOP computing capacities for analysis. Reaching such performance requires purpose-built supercomputers with O(10^3) nodes, each hosting multicore CPUs and multiple GPUs, and applications designed to exploit this hardware optimally. Given that each supercomputer is generally a one-off project, the need for computing frameworks portable across diverse CPU and GPU architectures without performance losses is increasingly compelling. We investigate the performance portability (Φ) of a real-world application: the solver module of the AVU–GSR pipeline for the ESA Gaia mission. This code finds the astrometric parameters of ∼10^8 stars in the Milky Way using the LSQR iterative algorithm. LSQR is widely used to solve linear systems of equations across a wide range of high-performance computing applications, elevating the study beyond its astrophysical relevance. The code is memory-bound, with six main compute kernels implementing sparse matrix-by-vector products. We optimize the previous CUDA implementation and port the code to six further GPU-acceleration frameworks: C++ PSTL, SYCL, OpenMP, HIP, KOKKOS, and OpenACC. We evaluate each framework's performance portability across multiple GPUs (NVIDIA and AMD) and problem sizes in terms of application and architectural efficiency. Architectural efficiency is estimated through the roofline model of the six most computationally expensive GPU kernels. Our results show that C++ library-based (C++ PSTL and KOKKOS), pragma-based (OpenMP and OpenACC), and language-specific (CUDA, HIP, and SYCL) frameworks achieve increasingly better performance portability across the supported platforms with larger problem sizes due to higher GPU occupancies. (The Φ metric and a representative SpMV kernel are sketched after this record.) |
| Document type: | Article |
| File description: | application/pdf |
| ISSN: | 2161-9883, 1045-9219 |
| DOI: | 10.1109/tpds.2025.3591452 |
| Rights: | CC BY |
| Accession number: | edsair.doi.dedup.....b9432e6fd1aecf26c9d569ced6a3a16c |
| Database: | OpenAIRE |
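
For reference, the Φ reported in the abstract most likely follows the widely used harmonic-mean definition of performance portability by Pennycook et al., with architectural efficiency bounded by the roofline model. The formulas below are a sketch under that assumption, not an excerpt from the paper.

```latex
% Performance portability of application a solving problem p over platform set H:
% the harmonic mean of per-platform efficiencies e_i, or 0 if any platform is unsupported.
\Phi(a, p, H) =
\begin{cases}
  \dfrac{|H|}{\sum_{i \in H} \dfrac{1}{e_i(a, p)}} & \text{if } a \text{ is supported on every } i \in H,\\[1.5ex]
  0 & \text{otherwise.}
\end{cases}

% Roofline bound used to estimate architectural efficiency: attainable performance is
% limited either by peak compute or by peak memory bandwidth times arithmetic intensity I.
P_{\mathrm{attainable}}(I) = \min\!\left(P_{\mathrm{peak}},\; B_{\mathrm{peak}} \cdot I\right)
```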
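The six dominant kernels are sparse matrix-by-vector products. As a purely illustrative sketch (not the authors' code; `CsrMatrix` and `spmv_pstl` are hypothetical names), a CSR-format SpMV expressed with the C++ parallel STL, one of the seven frameworks compared in the paper, looks roughly as follows:

```cpp
#include <algorithm>
#include <cstddef>
#include <execution>
#include <numeric>
#include <vector>

// Hypothetical CSR storage for the sparse system matrix.
struct CsrMatrix {
    std::vector<std::size_t> row_ptr;  // nrows + 1 offsets into col_idx/values
    std::vector<std::size_t> col_idx;  // column index of each nonzero
    std::vector<double>      values;   // nonzero coefficients
};

// y = A * x, one parallel task per row of the matrix.
void spmv_pstl(const CsrMatrix& A, const std::vector<double>& x, std::vector<double>& y) {
    std::vector<std::size_t> rows(y.size());
    std::iota(rows.begin(), rows.end(), std::size_t{0});
    std::for_each(std::execution::par_unseq, rows.begin(), rows.end(),
                  [&](std::size_t r) {
                      double acc = 0.0;
                      for (std::size_t k = A.row_ptr[r]; k < A.row_ptr[r + 1]; ++k)
                          acc += A.values[k] * x[A.col_idx[k]];
                      y[r] = acc;
                  });
}
```

Compilers such as nvc++ with `-stdpar=gpu` can offload parallel algorithms like this to the GPU; the low arithmetic intensity of the loop (one multiply-add per two loads) is what makes such kernels memory-bound in the roofline sense.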