A visual performance analysis framework for task‐based parallel applications running on hybrid clusters

Bibliographic Details
Published in: Concurrency and Computation: Practice and Experience, Vol. 30, no. 18
Main Authors: Garcia Pinto, Vinícius, Mello Schnorr, Lucas, Stanisic, Luka, Legrand, Arnaud, Thibault, Samuel, Danjean, Vincent
Format: Journal Article
Language:English
Published: Hoboken: Wiley, 25.09.2018
ISSN:1532-0626, 1532-0634
Description
Summary: Programming paradigms in High‐Performance Computing have been shifting toward task‐based models that are capable of adapting readily to heterogeneous and scalable supercomputers. The performance of a task‐based application heavily depends on the runtime's scheduling heuristics and on its ability to exploit computing and communication resources. Unfortunately, traditional performance analysis strategies are unfit to fully understand task‐based runtime systems and applications: they expect a regular behavior with distinct communication and computation phases, while task‐based applications exhibit no clear phases. Moreover, the finer granularity of task‐based applications typically induces stochastic behavior that leads to irregular structures that are difficult to analyze. Furthermore, combining application structure, scheduler, and hardware information is generally essential to understand performance issues. This paper presents a flexible framework that enables one to combine several sources of information and to create custom visualization panels that help understand and pinpoint performance problems incurred by bad scheduling decisions in task‐based applications. Three case studies using StarPU‐MPI, a task‐based multi‐node runtime system, are detailed to show how our framework can be used to study the performance of the well‐known Cholesky factorization. Performance improvements include a better task partitioning among the multi‐(GPU, core) platform to get closer to theoretical lower bounds, improved MPI pipelining in the multi‐(node, core, GPU) case to reduce the slow start, and changes in the runtime system to increase MPI bandwidth, with gains of up to 13% in the total makespan.
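To make the task structure of the studied application concrete, the sketch below shows a tiled Cholesky factorization, the algorithm the case studies analyze. This is a minimal, self-contained illustration and not the authors' StarPU‐MPI implementation: it uses plain NumPy/SciPy and runs the tile kernels sequentially, whereas in a task-based runtime such as StarPU each kernel call (POTRF, TRSM, SYRK/GEMM) would be submitted as a task and the data dependencies between tiles would drive the scheduler. Tile size and loop structure are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_triangular

def tiled_cholesky(A, tile):
    """In-place lower-triangular tiled Cholesky: returns L with A = L @ L.T."""
    n = A.shape[0]
    assert n % tile == 0, "illustrative sketch: size must be a multiple of tile"
    T = n // tile
    t = lambda i, j: np.s_[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
    for k in range(T):
        # POTRF task: factor the diagonal tile.
        A[t(k, k)] = np.linalg.cholesky(A[t(k, k)])
        for i in range(k + 1, T):
            # TRSM task: A[i,k] <- A[i,k] @ inv(L[k,k]).T
            A[t(i, k)] = solve_triangular(A[t(k, k)], A[t(i, k)].T, lower=True).T
        for i in range(k + 1, T):
            for j in range(k + 1, i + 1):
                # SYRK (i == j) / GEMM (i != j) task: trailing-matrix update.
                A[t(i, j)] -= A[t(i, k)] @ A[t(j, k)].T
    return np.tril(A)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((8, 8))
    A = M @ M.T + 8 * np.eye(8)          # symmetric positive definite input
    L = tiled_cholesky(A.copy(), tile=2)
    print(np.allclose(L @ L.T, A))       # True: factorization is correct
```

Note how each iteration of the outer loop spawns a chain of dependent tile kernels; it is exactly this dependency graph, rather than fixed computation/communication phases, that a runtime scheduler exploits and that the paper's visualization panels are designed to expose.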
DOI:10.1002/cpe.4472