Message Passing Interface (MPI)

| Published in: | Advanced Computer Architecture and Parallel Processing, pp. 205-233 |
|---|---|
| Main Authors: | , |
| Format: | Book Chapter |
| Language: | English |
| Published: | Hoboken, NJ, USA: John Wiley & Sons, Inc., 17.12.2004 |
| Series: | Wiley Series on Parallel and Distributed Computing |
| ISBN: | 9780471467403, 0471467405 |

| Summary: | The goal of the Message Passing Interface (MPI) is to provide a standard library of routines for writing portable and efficient message passing programs. MPI is not a language; it is a specification of a library of routines that can be called from programs. MPI provides a rich collection of point-to-point communication routines and collective operations for data movement, global computation, and synchronization. The MPI standard has evolved with the work around MPI-2, which extended MPI to add more features including: dynamic processes, client-server support, one-sided communication, parallel I/O, and non-blocking collective communication functions. In this chapter, we discuss a number of the important functions and programming techniques introduced so far. An MPI application can be visualized as a collection of concurrent communicating tasks. A program includes code written by the application programmer that is linked with a function library provided by the MPI software implementation. Each task is assigned a unique rank within a certain context: an integer number between 0 and n-1 for an MPI application consisting of n tasks. These ranks are used by MPI tasks to identify each other in sending and receiving messages, to execute collective operations, and to cooperate in general. MPI tasks can run on the same processor or on different processors concurrently. |
|---|---|
| DOI: | 10.1002/0471478385.ch9 |
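
The summary above describes the basic MPI programming model: concurrent tasks identified by ranks 0 to n-1 that exchange point-to-point messages and take part in collective operations. As a minimal illustration of that model (a sketch, not code taken from the chapter), the C program below has every task query its rank and the communicator size, has non-zero ranks send their rank to task 0, and finishes with one collective reduction. The file name and the compile/run commands in the comment are assumptions that depend on the local MPI installation.

```c
/*
 * Minimal MPI sketch (not from the chapter): rank/size lookup,
 * point-to-point sends to rank 0, and one collective reduction.
 * Typical build/run (installation-dependent):
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime          */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this task's rank: 0 .. n-1     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* n = number of tasks            */

    if (rank != 0) {
        /* Point-to-point: each non-zero rank sends its rank to task 0. */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int src, value;
        for (src = 1; src < size; src++) {
            MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank %d\n", value, src);
        }
    }

    /* Collective: sum all ranks; only rank 0 receives the result. */
    int sum = 0;
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, sum);

    MPI_Finalize();                          /* shut down the MPI runtime      */
    return 0;
}
```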

