Performance prediction for MPI programs executing on workstation clusters
| Title: | Performance prediction for MPI programs executing on workstation clusters |
|---|---|
| Authors: | Phillip M. Dickens (presenter), George K. Thiruvathukal |
| Contributors: | The Pennsylvania State University CiteSeerX Archives |
| Source: | http://etl.luc.edu/gkt/papers/prediction/prediction_pdpta98.pdf |
| Publication Year: | 1998 |
| Collection: | CiteSeerX |
| Subject Terms: | distributed simulation, direct-execution simulation, performance prediction |
| Description: | Performance prediction and/or scalability analysis of parallel programs is an important area of current research, especially as parallel computers have come to dominate the high performance computing arena. To date, most of the research in this area has concentrated on the performance of massively parallel machines such as the Intel Paragon and the IBM SP2. However, such machines are scarce, expensive, and unavailable to large segments of the research community, motivating the use of networks of workstations as large, distributed memory multicomputers. Even though this approach to distributed computing is widespread, we still understand little about the behavior of codes executing on this computational platform. For instance, we would like to understand how an application scales as the size of the program and the number of workstations are simultaneously increased. Also, we would like to predict the behavior of codes executing under differing network mediums and under varying network loads. The MPI (Message Passing Interface) message passing library is the emerging standard by which distributed computers communicate and synchronize, and we are therefore interested in performance prediction of codes executing on top of this library. In this paper, we investigate the use of direct-execution simulation to study the behavior of large codes, executing on networks of workstations, using the MPI message-passing library. We discuss the difficult issues encountered when building this kind of simulator, and the approach we will take to solve these problems. |
| Document Type: | text |
| File Description: | application/pdf |
| Language: | English |
| Relation: | http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.83.1257; http://etl.luc.edu/gkt/papers/prediction/prediction_pdpta98.pdf |
| Availability: | http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.83.1257 http://etl.luc.edu/gkt/papers/prediction/prediction_pdpta98.pdf |
| Rights: | Metadata may be used without restrictions as long as the oai identifier remains attached to it. |
| Accession Number: | edsbas.C3DF3DD4 |
| Database: | BASE |