Fast, Cheap, and Turbulent—Global Ocean Modeling With GPU Acceleration in Python
| Published in: | Journal of Advances in Modeling Earth Systems, Volume 13, Issue 12 |
|---|---|
| Main Authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Washington: John Wiley & Sons, Inc., 01.12.2021 |
| Subjects: | |
| ISSN: | 1942-2466 |
| Online Access: | Get full text |
Summary
To this day, most Earth system models are written in Fortran, especially those used at the largest compute scales. Our ocean model Veros takes a different approach: it is implemented in the high-level programming language Python. Besides numerous usability advantages, this allows us to leverage modern high-performance frameworks that emerged in tandem with the machine learning boom. By interfacing with the JAX library, Veros is able to run high-performance simulations on both central processing units (CPUs) and graphical processing units (GPUs) through the same code base, with full support for distributed architectures. On CPU, Veros matches the performance of a Fortran reference, both on a single process and on hundreds of CPU cores. On GPU, we find that each device can replace dozens to hundreds of CPU cores, at a fraction of the energy consumption. We demonstrate the viability of using GPUs for Earth system modeling by integrating a global 0.1° eddy-resolving setup in single precision, achieving 1.3 model years per day on a single compute instance with 16 GPUs, comparable to over 2,000 Fortran processes.
Plain Language Summary
Climate models are an invaluable tool to understand the Earth system and inform policies to combat climate change. Climate simulations often run for thousands of model years, which consumes a substantial amount of resources, both in terms of electricity and the months researchers spend waiting for results. At the same time, climate models are highly complicated software projects that require countless person-hours from scientists and engineers to build. Ocean models are one of the main components of a climate model. Here, we present a new type of ocean model that combines strong performance with ease of use and development. We show that by using graphical processing units, we can perform realistic ocean simulations at high speed and with a fraction of the energy usage.
Key Points
We present a pure Python ocean model that leverages the JAX accelerator library to achieve competitive performance on CPU and GPU clusters
On CPU, performance is similar to Fortran. On GPU, each device can replace hundreds of CPU cores while using at least 3 times less energy
To show how GPUs can be used in practice, we integrate an eddying 0.1° global ocean setup on a single cloud compute instance with 16 GPUs
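The central technical point of the abstract is that numerical kernels written once in Python can run on either CPU or GPU by dispatching through JAX. The snippet below is a minimal sketch of that general pattern, not code taken from Veros; the diffusion kernel, grid size, and parameter values are illustrative assumptions.

```python
# Minimal sketch of the JAX pattern described in the abstract: a numerical
# kernel written as a pure NumPy-style function, compiled with jax.jit,
# runs unchanged on CPU or GPU depending on the installed backend.
import jax
import jax.numpy as jnp

@jax.jit
def diffuse(field, kappa, dt, dx):
    """One explicit step of 2-D horizontal diffusion on the interior grid."""
    lap = (
        field[:-2, 1:-1] + field[2:, 1:-1]
        + field[1:-1, :-2] + field[1:-1, 2:]
        - 4.0 * field[1:-1, 1:-1]
    ) / dx**2
    return field.at[1:-1, 1:-1].add(dt * kappa * lap)

# The same compiled function executes on whichever device JAX finds
# (CPU by default, GPU if a CUDA-enabled jaxlib is installed).
field = jnp.zeros((360, 180)).at[180, 90].set(1.0)
for _ in range(100):
    field = diffuse(field, kappa=1.0e3, dt=100.0, dx=1.0e4)
print(jax.devices(), float(field.sum()))
```

Because the kernel is a pure function of arrays, JAX can trace and compile it once for the available backend, which is what allows a single code base to target both CPUs and GPUs without modification.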
| ISSN: | 1942-2466 |
|---|---|
| DOI: | 10.1029/2021MS002717 |