Performance analysis of parallel programming models for C++

Detailed bibliography
Published in: Journal of Physics: Conference Series, Volume 2646, Issue 1, pp. 12027-12036
Main author: Zeng, Guang
Format: Journal Article
Language: English
Published: Bristol: IOP Publishing, 01.12.2023
ISSN:1742-6588, 1742-6596
Description
Summary: As multicore processors become more common in today's computing systems and parallel programming models grow richer, programmers must consider how to choose an appropriate parallel programming model when writing parallel code. The purpose of this paper is to compare and analyze the performance gap between different C++ parallel programming models, such as C++ standard library threads, OpenMP, and Pthreads, on matrix operations. The experiments implement matrix multiplication separately with each library and then analyze its performance. The experimental data show that the data size has a significant impact on the performance of the different models. For very small matrices, whose dimension is less than or close to the number of threads, the parallel implementations perform much worse than the serial implementation. For small matrices whose dimension is larger than the number of threads, C++ standard library threads outperform Pthreads and OpenMP thanks to their lightweight thread handling on relatively small matrices. Pthreads shows the best performance on very large matrices thanks to its fine-grained control over thread management, communication, and synchronization. OpenMP's performance is not as stable as that of the other two libraries, especially for smaller matrices. This paper provides a comparative analysis that can help programmers choose the most appropriate library for their specific computational needs.
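
For readers unfamiliar with the workload being benchmarked, the following is a minimal sketch of row-partitioned matrix multiplication using C++ standard library threads. It is not the paper's benchmark code: the 512x512 dimensions, the use of hardware_concurrency() for the thread count, the row-block partition, and the helper name multiply_rows are illustrative assumptions only.

// Minimal sketch (assumptions noted above), not the paper's actual benchmark.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Compute the rows [row_begin, row_end) of C = A * B.
void multiply_rows(const Matrix& A, const Matrix& B, Matrix& C,
                   std::size_t row_begin, std::size_t row_end) {
    const std::size_t n = B.size();     // inner dimension
    const std::size_t p = B[0].size();  // columns of B
    for (std::size_t i = row_begin; i < row_end; ++i)
        for (std::size_t k = 0; k < n; ++k)      // k outside j for better cache reuse
            for (std::size_t j = 0; j < p; ++j)
                C[i][j] += A[i][k] * B[k][j];
}

int main() {
    const std::size_t m = 512, n = 512, p = 512;  // assumed matrix dimensions
    const unsigned num_threads =
        std::max(1u, std::thread::hardware_concurrency());

    Matrix A(m, std::vector<double>(n, 1.0));
    Matrix B(n, std::vector<double>(p, 2.0));
    Matrix C(m, std::vector<double>(p, 0.0));

    // Partition the rows of the result matrix across the worker threads.
    std::vector<std::thread> workers;
    const std::size_t chunk = (m + num_threads - 1) / num_threads;
    for (unsigned t = 0; t < num_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(m, begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(multiply_rows, std::cref(A), std::cref(B),
                             std::ref(C), begin, end);
    }
    for (auto& w : workers) w.join();

    std::cout << "C[0][0] = " << C[0][0] << '\n';  // expect 2 * n = 1024
    return 0;
}

An OpenMP version of the same kernel would typically replace the manual thread partitioning with a "#pragma omp parallel for" on the outer loop, while a Pthreads version would create and join pthread_t workers explicitly; the paper compares the performance of such implementations across matrix sizes.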
DOI:10.1088/1742-6596/2646/1/012027