Performance evaluation of some MPI implementations on workstation clusters

Bibliographic Details
Published in: The Fourth International Conference/Exhibition on High-Performance Computing in the Asia-Pacific Region, Beijing, China, May 14-17, 2000: Proceedings, Vol. 1, pp. 392-394
Main Authors: Zhixin Ba, Haichang Zhou, Huai Zhang, Zhenxiao Yang
Format: Conference Proceeding
Language: English
Published: IEEE 2000
Subjects:
ISBN: 9780769505909, 0769505892, 0769505902
Description
Summary: The Message Passing Interface (MPI) has become a standard communication library for distributed-memory computing systems. Since the release of new versions of the MPI specification, several MPI implementations have been made publicly available, each employing a different approach. Selecting an appropriate MPI implementation is critical for message-passing-based applications, because communication performance is crucial to them. Our study is intended to provide a guideline on how to submit and perform a task economically and effectively on workstation clusters for high-performance computing. We investigate several MPI aspects, including its implementations, the supporting hardware environment, and derived datatypes, which affect communication performance. Finally, our results point out the strengths and weaknesses of the different implementations on our experimental system.
DOI: 10.1109/HPC.2000.846584
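
Note: The record itself carries no code. As a rough illustration of the kind of measurement the summary describes (communication performance with derived datatypes), the sketch below is a minimal MPI ping-pong benchmark in C that exchanges a strided buffer built with MPI_Type_vector and times the round trip with MPI_Wtime. The block count, repetition count, and stride are assumptions for illustration, not parameters taken from the paper.

/* Minimal sketch (not from the paper): time a ping-pong exchange of a
 * strided buffer described by an MPI derived datatype. The sizes below
 * (COUNT, REPS, stride of 2) are arbitrary assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define COUNT 1024   /* number of strided blocks of one double each */
#define REPS  100    /* ping-pong repetitions to average over */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 processes.\n");
        MPI_Finalize();
        return 1;
    }

    /* Buffer holds COUNT blocks of 1 double with a stride of 2 doubles. */
    double *buf = calloc(2 * COUNT, sizeof(double));
    MPI_Datatype strided;
    MPI_Type_vector(COUNT, 1, 2, MPI_DOUBLE, &strided);
    MPI_Type_commit(&strided);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, 1, strided, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1, strided, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 1, strided, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 1, strided, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round-trip time: %g s\n", (t1 - t0) / REPS);

    MPI_Type_free(&strided);
    free(buf);
    MPI_Finalize();
    return 0;
}

Built with an MPI compiler wrapper (e.g. mpicc) and launched on two processes, this reports an average round-trip time; running the same binary against different MPI implementations on the same cluster gives a simple point of comparison of the kind the study discusses.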