Reverse-Mode Automatic Differentiation and Optimization of GPU Kernels via Enzyme

Bibliographic Details
Published in: SC21: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-18
Main Authors: Moses, William S., Churavy, Valentin, Paehler, Ludger, Hückelheim, Jan, Narayanan, Sri Hari Krishna, Schanen, Michel, Doerfert, Johannes
Format: Conference paper
Language: English
Published: ACM, 14 November 2021
ISSN: 2167-4337
Description
Summary: Computing derivatives is key to many algorithms in scientific computing and machine learning such as optimization, uncertainty quantification, and stability analysis. Enzyme is an LLVM compiler plugin that performs reverse-mode automatic differentiation (AD) and thus generates high-performance gradients of programs in languages including C/C++, Fortran, Julia, and Rust. Prior to this work, Enzyme and other AD tools were not capable of generating gradients of GPU kernels. Our paper presents a combination of novel techniques that make Enzyme the first fully automatic reverse-mode AD tool to generate gradients of GPU kernels. Since, unlike other tools, Enzyme performs automatic differentiation within a general-purpose compiler, we are able to introduce several novel GPU- and AD-specific optimizations. To show the generality and efficiency of our approach, we compute gradients of five GPU-based HPC applications, executed on NVIDIA and AMD GPUs. All benchmarks run within an order of magnitude of the original program's execution time. Without GPU- and AD-specific optimizations, gradients of GPU kernels either fail to run from a lack of resources or have infeasible overhead. Finally, we demonstrate that increasing the problem size by either increasing the number of threads or increasing the work per thread does not substantially impact the overhead from differentiation.
DOI: 10.1145/3458817.3476165
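
The abstract describes Enzyme differentiating GPU kernels by operating inside the compiler on LLVM IR. As a rough illustration of how such a gradient is requested from user code, the following minimal CUDA sketch uses Enzyme's __enzyme_autodiff marker; the device function square, the kernel grad_square, and the build flags mentioned below are illustrative assumptions, not listings from the paper.

// Device function to differentiate: out[i] = x[i] * x[i] for this thread.
__device__ void square(double* x, double* out) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  out[i] = x[i] * x[i];
}

// Declaration only: Enzyme rewrites calls to __enzyme_autodiff at the LLVM
// level, so no definition is ever linked in. A typed (non-variadic)
// prototype is used here because CUDA device code does not allow variadics.
extern __device__ void __enzyme_autodiff(void*, double*, double*, double*, double*);

// Gradient kernel: each active pointer is passed together with a shadow
// buffer, relying on Enzyme's default convention that a pointer argument is
// followed by its shadow. d_out holds the seed (e.g. all ones); Enzyme
// accumulates the derivative with respect to x into d_x.
__global__ void grad_square(double* x, double* d_x, double* out, double* d_out) {
  __enzyme_autodiff((void*)square, x, d_x, out, d_out);
}

In this sketch the gradient kernel is launched like any other kernel after zero-initializing d_x and seeding d_out; building it requires loading the Enzyme plugin into Clang's CUDA compilation (for example via -fplugin=ClangEnzyme-<version>.so), with the exact flags depending on the Enzyme and LLVM versions used.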