AMA: Asynchronous Management of Accelerators for Task-based Programming Models

Bibliographic Details
Published in: Procedia Computer Science, Vol. 51, pp. 130-139
Main authors: Planas, Judit; Badia, Rosa M.; Ayguade, Eduard; Labarta, Jesus
Format: Journal Article
Language: English
Published: Elsevier B.V., 2015
ISSN: 1877-0509
Online access: Full text
Abstract: Computational science has benefited in recent years from emerging accelerators that increase the performance of scientific simulations, but using these devices complicates the programming task. This paper presents AMA, a set of optimization techniques to efficiently manage multi-accelerator systems. AMA maximizes the overlap of computation and communication in a blocking-free way, so that the time spent waiting for device operations can be used for other work. Implemented on top of a task-based framework, AMA matches the performance of a hand-tuned native CUDA code in our experimental evaluation on a quad-GPU node, with the advantage of fully hiding the device management. In addition, we obtain speed-ups of more than 2x with respect to the original framework implementation.
DOI: 10.1016/j.procs.2015.05.212
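
The abstract above describes overlapping computation and communication in a blocking-free way, so the host can do other work while waiting for device operations. The following minimal CUDA sketch illustrates that general pattern with streams, asynchronous copies, and non-blocking event polling. It is a generic illustration under assumed names (scale_kernel, NSTREAMS, CHUNK), not the AMA implementation or the paper's task-based framework.

// Generic sketch of computation/communication overlap with non-blocking
// progress checks. NOT the AMA implementation; kernel and sizes are
// illustrative assumptions.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale_kernel(float *d, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

int main() {
    const int N = 1 << 20, NSTREAMS = 4;
    const int CHUNK = N / NSTREAMS;

    float *h, *d;
    cudaMallocHost(&h, N * sizeof(float));   // pinned host memory enables async copies
    cudaMalloc(&d, N * sizeof(float));
    for (int i = 0; i < N; ++i) h[i] = 1.0f;

    cudaStream_t s[NSTREAMS];
    cudaEvent_t  e[NSTREAMS];
    for (int k = 0; k < NSTREAMS; ++k) {
        cudaStreamCreate(&s[k]);
        cudaEventCreateWithFlags(&e[k], cudaEventDisableTiming);
    }

    // Enqueue copy-in, kernel, and copy-out per chunk on its own stream,
    // so transfers of one chunk overlap with computation of another.
    for (int k = 0; k < NSTREAMS; ++k) {
        size_t off = (size_t)k * CHUNK;
        cudaMemcpyAsync(d + off, h + off, CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, s[k]);
        scale_kernel<<<(CHUNK + 255) / 256, 256, 0, s[k]>>>(d + off, CHUNK, 2.0f);
        cudaMemcpyAsync(h + off, d + off, CHUNK * sizeof(float),
                        cudaMemcpyDeviceToHost, s[k]);
        cudaEventRecord(e[k], s[k]);   // marks completion of this chunk's work
    }

    // Blocking-free progress loop: poll events and do other host work
    // (here just a counter) while the device operations complete.
    int done = 0;
    long other_work = 0;
    bool finished[NSTREAMS] = {false};
    while (done < NSTREAMS) {
        for (int k = 0; k < NSTREAMS; ++k) {
            if (!finished[k] && cudaEventQuery(e[k]) == cudaSuccess) {
                finished[k] = true;
                ++done;
            }
        }
        ++other_work;   // stand-in for useful host-side work
    }
    printf("all chunks done; host did %ld units of other work meanwhile\n", other_work);

    for (int k = 0; k < NSTREAMS; ++k) { cudaStreamDestroy(s[k]); cudaEventDestroy(e[k]); }
    cudaFreeHost(h);
    cudaFree(d);
    return 0;
}

Polling cudaEventQuery instead of calling a blocking cudaEventSynchronize is what keeps the host thread free for other work; that is the kind of blocking-free behavior the abstract refers to, here shown at the raw CUDA level rather than through a task-based runtime.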