mpi4py.futures: MPI-Based Asynchronous Task Execution for Python

Bibliographic Details
Title: mpi4py.futures: MPI-Based Asynchronous Task Execution for Python
Authors: Marcin Rogowski, Samar Aseeri, David Keyes, Lisandro Dalcin
Contributors: Applied Mathematics and Computational Science Program, Computer Science Program, Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division, Extreme Computing Research Center, Office of the President, Physical Science and Engineering (PSE) Division
Source: IEEE Transactions on Parallel and Distributed Systems. 34:611-622
Publisher Information: Institute of Electrical and Electronics Engineers (IEEE), 2023.
Publication Year: 2023
Subject Terms: high performance computing, parallelism, distributed computing, task execution, multiprocessing, MPI, parallel programming models, master-worker, Python
Description: We present mpi4py.futures, a lightweight, asynchronous task execution framework targeting the Python programming language and using the Message Passing Interface (MPI) for interprocess communication. mpi4py.futures follows the interface of the concurrent.futures package from the Python standard library and can be used as its drop-in replacement, while allowing applications to scale over multiple compute nodes. We discuss the design, implementation, and feature set of mpi4py.futures and compare its performance to other solutions on both shared and distributed memory architectures. On a shared-memory system, we show mpi4py.futures to consistently outperform Python's concurrent.futures with speedup ratios between 1.4X and 3.7X in throughput (tasks per second) and between 1.9X and 2.9X in bandwidth. On a Cray XC40 system, we compare mpi4py.futures to Dask, a well-known Python parallel computing package. Although we note more varied results, we show mpi4py.futures to outperform Dask in most scenarios.
Acknowledgments: The research reported in this paper was funded by King Abdullah University of Science and Technology (KAUST). We are thankful to the KAUST Supercomputing Laboratory for their computing resources. We would like to thank the Dask developer community, and especially John Kirkham, for their feedback. Some discussions in this work were inspired by a series of blog posts by Matthew Rocklin, the initial author of Dask.
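A minimal usage sketch (not part of the record itself) illustrates the drop-in relationship with concurrent.futures described in the abstract; the square task function, worker count, and launch command are illustrative assumptions, not taken from the paper:

    # Sketch: mpi4py.futures used through the concurrent.futures-style Executor API.
    # The task function and max_workers value are illustrative, not from the paper.
    from mpi4py.futures import MPIPoolExecutor

    def square(x):
        return x * x

    if __name__ == "__main__":
        with MPIPoolExecutor(max_workers=4) as executor:
            # map() and submit() mirror the concurrent.futures.Executor interface
            results = list(executor.map(square, range(16)))
            future = executor.submit(square, 7)
            print(results, future.result())

Such a script is typically launched with something like "mpiexec -n 5 python -m mpi4py.futures script.py", which runs the main script on one MPI process and uses the remaining processes as workers.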
Document Type: Article
File Description: application/pdf
ISSN: 2161-9883; 1045-9219
DOI: 10.1109/tpds.2022.3225481
Rights: IEEE Copyright
Accession Number: edsair.doi.dedup.....173f4f6dea2de1e0ec2a09da3c548010
Database: OpenAIRE