Investigating different general-purpose and embedded multicores to achieve optimal trade-offs between performance and energy

Bibliographic details
Published in: Journal of Parallel and Distributed Computing, Vol. 95, pp. 107-123
Main authors: Lorenzon, Arthur Francisco; Cera, Márcia Cristina; Beck, Antonio Carlos Schneider
Format: Journal Article
Language: English
Published: Elsevier Inc., 1 September 2016
ISSN: 0743-7315, 1096-0848
Online access: Full text
Description
Abstract: Thread-level parallelism (TLP) is being widely exploited in embedded and general-purpose multicore processors (GPPs) to increase performance. However, parallelizing an application involves extra executed instructions and accesses to the shared memory in order to communicate and synchronize. The overhead of accessing the shared memory, which is very costly in terms of delay and energy because it is at the bottom of the hierarchy, varies depending on the communication model and the level of data exchange/synchronization of the application. On top of that, multicore processors are implemented using different architectures, organizations and memory subsystems. In this complex scenario, we evaluate 14 parallel benchmarks implemented with 4 different parallel programming interfaces (PPIs), with distinct communication rates and TLP, running on five representative multicore processors targeted at general-purpose and embedded systems. We show that while the former present the best performance and the latter are the most energy efficient, there is no single option that offers the best result for both. We also demonstrate that in applications with low levels of communication, what matters is the communication model, not a specific PPI. On the other hand, applications with high communication demands have a huge search space that can be explored. For those, Pthreads is the most efficient PPI for Intel processors, while OpenMP is the best for ARM ones. MPI is the worst choice in almost any scenario, and becomes very inefficient as the TLP increases. We also evaluate the energy-delay^x product (ED^xP), which weights performance against energy by varying the value of x. In a representative case where energy is the most important factor, three different processors can be the best alternative for different values of x. Finally, we explore how static power influences total energy consumption, showing that its increase brings benefits to ARM multiprocessors, with the opposite effect for Intel ones.
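
Note (reading aid, not part of the published record): the ED^xP metric mentioned in the abstract is conventionally defined as ED^xP = E × D^x, where E is the total energy consumed and D is the execution time (delay). With x = 0 the metric reduces to energy alone, x = 1 gives the classic energy-delay product (EDP), and larger x weights performance (delay) more heavily, which is why the best processor can change as x varies. Assuming that conventional definition, the short Python sketch below ranks a set of hypothetical configurations by ED^xP for several values of x; the configuration names and the energy/delay numbers are illustrative placeholders, not measurements from the article.

    # Rank candidate configurations by the energy-delay^x product, assuming
    # the conventional definition ED^xP = energy * delay**x.
    # All names and numbers below are illustrative placeholders (energy in
    # joules, delay in seconds), not data taken from the article.
    candidates = {
        "gpp_a":      {"energy": 25.0, "delay": 1.2},   # hypothetical GPP
        "gpp_b":      {"energy": 50.0, "delay": 0.7},   # hypothetical GPP
        "embedded_c": {"energy": 12.0, "delay": 3.0},   # hypothetical embedded core
    }

    def edxp(energy, delay, x):
        """Energy-delay^x product: x=0 -> energy only, x=1 -> EDP, x=2 -> ED^2P."""
        return energy * delay ** x

    for x in (0, 1, 2):
        scores = {name: edxp(v["energy"], v["delay"], x) for name, v in candidates.items()}
        best = min(scores, key=scores.get)
        print(f"x={x}: best by ED^{x}P is {best} (score {scores[best]:.2f})")

With these placeholder values a different configuration wins at each x (the embedded core when only energy counts, one GPP at x = 1, the other at x = 2), mirroring the effect described in the abstract as performance is weighted more heavily.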
DOI: 10.1016/j.jpdc.2016.04.003