Performance analysis of parallel programming models for C++
| Published in: | Journal of Physics: Conference Series, Vol. 2646, No. 1, pp. 12027-12036 |
|---|---|
| Author: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Bristol: IOP Publishing, 01.12.2023 |
| Subjects: | |
| ISSN: | 1742-6588, 1742-6596 |
| Online access: | Full text |
| Abstract: | As multicore processors become more common in today’s computing systems and the available parallel programming models grow richer, programmers must consider how to choose the appropriate parallel programming model when writing parallel code. The purpose of this paper is to compare and analyze the performance gap between different C++ parallel programming models, such as C++ standard library threads, OpenMP, and Pthreads, in terms of matrix operations. The experiments implement matrix multiplication with each library separately and then analyze their performance. The experimental data show that the data size has a significant impact on the performance of the different models. For very small matrices, whose dimension is less than or close to the number of threads, the parallel implementations perform far worse than the serial implementation. For small matrices whose dimension is larger than the number of threads, C++ standard library threads outperform Pthreads and OpenMP thanks to their lightweight threads. Pthreads shows the best performance on very large matrices due to its fine-grained control over thread management, communication, and synchronization. OpenMP’s performance is not as stable as that of the other two libraries, especially for smaller matrices. This paper provides a comparative analysis that can help programmers choose the most appropriate library for their specific computational needs. |
|---|---|
| DOI: | 10.1088/1742-6596/2646/1/012027 |
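
For readers unfamiliar with the programming models compared in the abstract, the following sketch illustrates one way a row-partitioned parallel matrix multiplication can be written with C++ standard library threads. It is not the paper's benchmark code; the matrix size, the static block partitioning, and the timing harness are illustrative assumptions.

```cpp
// A minimal sketch, assuming square matrices and a static row partition;
// this is NOT the paper's benchmark code. It shows the std::thread variant
// of the matrix multiplication kernel that the study compares across models.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Compute rows [row_begin, row_end) of C = A * B.
void multiply_rows(const Matrix& A, const Matrix& B, Matrix& C,
                   std::size_t row_begin, std::size_t row_end) {
    const std::size_t n = A.size();
    for (std::size_t i = row_begin; i < row_end; ++i)
        for (std::size_t k = 0; k < n; ++k)      // i-k-j order improves cache locality
            for (std::size_t j = 0; j < n; ++j)
                C[i][j] += A[i][k] * B[k][j];
}

// Parallel multiplication with C++ standard library threads:
// each worker handles one contiguous block of rows.
void multiply_std_thread(const Matrix& A, const Matrix& B, Matrix& C,
                         unsigned num_threads) {
    const std::size_t n = A.size();
    const std::size_t chunk = (n + num_threads - 1) / num_threads;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < num_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(n, begin + chunk);
        if (begin >= end) break;                 // fewer rows than threads
        workers.emplace_back(multiply_rows, std::cref(A), std::cref(B),
                             std::ref(C), begin, end);
    }
    for (auto& w : workers) w.join();
}

int main() {
    const std::size_t n = 512;                   // illustrative matrix size
    const unsigned threads =
        std::max(1u, std::thread::hardware_concurrency());

    Matrix A(n, std::vector<double>(n, 1.0));
    Matrix B(n, std::vector<double>(n, 2.0));
    Matrix C(n, std::vector<double>(n, 0.0));

    const auto start = std::chrono::steady_clock::now();
    multiply_std_thread(A, B, C, threads);
    const auto stop = std::chrono::steady_clock::now();

    std::cout << "std::thread with " << threads << " threads: "
              << std::chrono::duration<double>(stop - start).count()
              << " s\n";
}
```

An OpenMP variant of the same kernel would typically replace the manual thread management with a `#pragma omp parallel for` over the outer loop, and a Pthreads variant would create workers with `pthread_create` and join them with `pthread_join`; the performance differences reported in the paper stem from how each model manages and synchronizes these workers.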