pPython for Parallel Python Programming

Bibliographic Details
Published in: IEEE Conference on High Performance Extreme Computing (Online), pp. 1-6
Main Authors: Byun, Chansup, Arcand, William, Bestor, David, Bergeron, Bill, Gadepally, Vijay, Houle, Michael, Hubbell, Matthew, Jananthan, Hayden, Jones, Michael, Keville, Kurt, Klein, Anna, Michaleas, Peter, Milechin, Lauren, Morales, Guillermo, Mullen, Julie, Prout, Andrew, Reuther, Albert, Rosa, Antonio, Samsi, Siddharth, Yee, Charles, Kepner, Jeremy
Format: Conference Proceeding
Language: English
Published: IEEE, 19 September 2022
ISSN: 2643-1971
Online Access: Full text
Description
Abstract: pPython seeks to provide a parallel capability that delivers good speed-up without sacrificing the ease of programming in Python by implementing partitioned global array semantics (PGAS) on top of a simple file-based messaging library (PythonMPI) in pure Python. The core data structure in pPython is a distributed numerical array whose distribution onto multiple processors is specified with a 'map' construct. Communication operations between distributed arrays are abstracted away from the user and pPython transparently supports redistribution between any block-cyclic-overlapped distributions in up to four dimensions. pPython follows an SPMD (single program multiple data) model of computation. pPython runs on any combination of heterogeneous systems that support Python, including Windows, Linux, and MacOS operating systems. In addition to running transparently on a single node (e.g., a laptop), pPython provides a scheduler interface, so that pPython can be executed in a massively parallel computing environment. The initial implementation uses the Slurm scheduler. Performance of pPython on the HPC Challenge benchmark suite demonstrates both ease of programming and scalability.
DOI: 10.1109/HPEC55821.2022.9926365
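
To make the abstract's 'map' construct and SPMD model concrete, the following is a minimal, hypothetical sketch of PGAS-style block distribution in plain Python with NumPy. The names Map, local_slice, and spmd_body are illustrative assumptions and not pPython's actual API; the serial loop at the end merely emulates SPMD ranks, whereas pPython would launch real processes (e.g., through its Slurm scheduler interface) and handle communication through its file-based messaging layer.

import numpy as np

class Map:
    """Hypothetical descriptor of a 1-D block distribution over a set of ranks
    (an illustration of the 'map' idea, not pPython's actual map class)."""
    def __init__(self, global_shape, nranks):
        self.global_shape = global_shape  # shape of the conceptual global array
        self.nranks = nranks              # number of SPMD processes

    def local_slice(self, rank):
        """Slice of the global index space owned by `rank` (block distribution)."""
        n = self.global_shape[0]
        block = (n + self.nranks - 1) // self.nranks  # ceiling division
        start = rank * block
        return slice(start, min(start + block, n))

def spmd_body(rank, nranks, n=16):
    """Single Program Multiple Data: every rank runs this same function."""
    dmap = Map((n,), nranks)
    sl = dmap.local_slice(rank)
    # Each rank allocates and works on only its local block of the conceptual
    # global array; in a PGAS library, redistribution and communication between
    # such blocks would be handled transparently for the user.
    local = np.arange(sl.start, sl.stop, dtype=float)
    return rank, float(local.sum())

if __name__ == "__main__":
    # Emulate 4 SPMD ranks serially, purely for illustration.
    print([spmd_body(r, 4) for r in range(4)])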