Multi-level load balancing with an integrated runtime approach

Published in: 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), pp. 31-40
Main authors: Bak, Seonmyeong; Menon, Harshitha; White, Sam; Diener, Matthias; Kale, Laxmikant
Format: Conference paper
Language: English
Published: Piscataway, NJ, USA: IEEE Press, 1 May 2018
Series: ACM Conferences
ISBN: 1538658151, 9781538658154
Description
Summary: The recent trend of increasing numbers of cores per chip has resulted in vast amounts of on-node parallelism. These high core counts result in hardware variability that introduces imbalance. Applications are also becoming more complex, resulting in dynamic load imbalance. Load imbalance of any kind can result in loss of performance and system utilization. We address the challenge of handling both transient and persistent load imbalances while maintaining locality with low overhead. In this paper, we propose an integrated runtime system that combines the Charm++ distributed programming model with concurrent tasks to mitigate load imbalances within and across shared memory address spaces. It utilizes a periodic assignment of work to cores based on load measurement, in combination with user-created tasks to handle load imbalance. We integrate OpenMP with Charm++ to enable creation of potential tasks via OpenMP's parallel loop construct. This is also available to MPI applications through the Adaptive MPI implementation. We demonstrate the benefits of our work on three applications. We show improvements of Lassen by 29.6% on Cori and 46.5% on Theta. We also demonstrate the benefits on a Charm++ application, ChaNGa, by 25.7% on Theta, as well as an MPI proxy application, Kripke, using Adaptive MPI.
DOI: 10.1109/CCGRID.2018.00018