Message Passing Interface (MPI)

Detailed Bibliography
Published in: Advanced Computer Architecture and Parallel Processing, pp. 205-233
Main Authors: El-Rewini, Hesham; Abd-El-Barr, Mostafa
Format: Book Chapter
Language: English
Published: Hoboken, NJ, USA: John Wiley & Sons, Inc., December 17, 2004
Series: Wiley Series on Parallel and Distributed Computing
ISBN: 9780471467403, 0471467405
Description
Summary: The goal of the Message Passing Interface (MPI) is to provide a standard library of routines for writing portable and efficient message passing programs. MPI is not a language; it is a specification of a library of routines that can be called from programs. MPI provides a rich collection of point-to-point communication routines and collective operations for data movement, global computation, and synchronization. The MPI standard has evolved with the work around MPI-2, which extended MPI with additional features including dynamic processes, client-server support, one-sided communication, parallel I/O, and non-blocking collective communication functions. In this chapter, we discuss a number of the important functions and programming techniques introduced so far. An MPI application can be visualized as a collection of concurrent communicating tasks. A program includes code written by the application programmer that is linked with a function library provided by the MPI software implementation. Each task is assigned a unique rank within a certain context: an integer number between 0 and n-1 for an MPI application consisting of n tasks. These ranks are used by MPI tasks to identify each other in sending and receiving messages, to execute collective operations, and to cooperate in general. MPI tasks can run on the same processor or on different processors concurrently.
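
To make the task/rank model in the summary concrete, here is a minimal sketch in C. It uses only standard MPI calls (MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, MPI_Allreduce); the specific message pattern and the rank-sum reduction are illustrative choices, not taken from the chapter.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* every task joins the application */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this task's unique rank, 0..n-1  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* n, the total number of tasks     */

    if (rank != 0) {
        /* Point-to-point: each nonzero rank sends its rank to task 0. */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        int value, i;
        for (i = 1; i < size; i++) {
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("task 0 received a message from task %d\n", value);
        }
    }

    /* Collective operation: all tasks contribute their rank and all
       receive the global sum 0 + 1 + ... + (n-1). */
    int sum;
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("task %d of %d sees rank sum %d\n", rank, size, sum);

    MPI_Finalize();
    return 0;
}

Compiled and launched with an MPI implementation (for example, mpicc and mpirun -np 4; exact command names vary by implementation), each of the n launched processes executes this same program and branches on its rank.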
DOI: 10.1002/0471478385.ch9