Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications
| Published in: | Annals of Nuclear Energy, Vol. 84, Issue C, pp. 122-130 |
|---|---|
| Main Authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: Elsevier Ltd, 01.10.2015 |
| ISSN: | 0306-4549, 1873-2100 |
| Summary: | •COBRA-TF was adopted by the Consortium for Advanced Simulation of LWRs.
•We have improved code performance to support running large-scale LWR simulations.
•Code optimization has led to reductions in execution time and memory usage.
•An MPI parallelization has reduced full-core simulation time from days to minutes.
This paper describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy's Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis.
A set of serial code optimizations, including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data-storage choices, is first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a “single program multiple data” parallelization strategy targeting distributed-memory “multiple instruction multiple data” platforms and built on domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard Message Passing Interface (MPI) calls at strategic points in the code. The domain decomposition assigns one MPI process to each fuel assembly, with each domain represented by its own CTF input file (a sketch of this exchange pattern follows the summary below). The creation of CTF input files, for both serial and parallel runs, is fully automated through a pressurized water reactor (PWR) pre-processor utility that uses a greatly simplified set of user input compared with the traditional CTF input.
To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Portable, Extensible Toolkit for Scientific Computation (PETSc), which is used to solve the global pressure matrix in parallel (a PETSc sketch also follows below). Results presented include a set of testing and verification calculations, along with performance tests assessing parallel scaling characteristics up to a full-core, pincell-resolved model of a PWR core containing 193 17×17 assemblies under hot full-power conditions. This model, representative of Watts Bar Unit 1 and containing about 56,000 pins, was modeled with roughly 59,000 subchannels, leading to about 2.8 million thermal–hydraulic control volumes in total. Results demonstrate that CTF can now perform full-core analysis of a PWR (not previously possible owing to excessively long run-times and memory requirements) with execution times on the order of 20 minutes. This new capability is useful not only to stand-alone CTF users, but is also being leveraged in support of coupled-code multi-physics calculations being done in the CASL program. |
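The one-rank-per-assembly decomposition the abstract describes can be pictured with a short MPI sketch. The C program below is a hedged illustration only, not CTF's actual implementation: the 2-D Cartesian layout, the `NB` boundary-buffer size, and the face-exchange contents are all assumptions introduced for demonstration. It shows the general pattern of each process owning one assembly and swapping boundary-subchannel data with its lateral neighbors.

```c
/* Hypothetical sketch of an assembly-wise domain decomposition:
 * one MPI rank per fuel assembly, halo exchange of boundary-subchannel
 * state with lateral neighbors. Layout and buffer contents are assumed,
 * not taken from CTF. Build: mpicc demo.c && mpiexec -n 4 ./a.out */
#include <mpi.h>
#include <stdio.h>

#define NB 17 /* boundary subchannels per assembly face (assumed) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Arrange assemblies on a 2-D Cartesian grid (e.g. 4 ranks = 2x2). */
    int dims[2] = {0, 0}, periods[2] = {0, 0};
    MPI_Dims_create(nprocs, 2, dims);
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);

    int west, east, south, north;
    MPI_Cart_shift(cart, 0, 1, &west, &east);
    MPI_Cart_shift(cart, 1, 1, &south, &north);

    /* Boundary-subchannel state on each face (stand-in values). */
    double send_e[NB], recv_w[NB], send_w[NB], recv_e[NB];
    for (int i = 0; i < NB; ++i) send_e[i] = send_w[i] = (double)rank;

    /* East/west halo exchange; MPI_PROC_NULL makes core-edge
     * assemblies no-ops automatically. */
    MPI_Sendrecv(send_e, NB, MPI_DOUBLE, east, 0,
                 recv_w, NB, MPI_DOUBLE, west, 0, cart, MPI_STATUS_IGNORE);
    MPI_Sendrecv(send_w, NB, MPI_DOUBLE, west, 1,
                 recv_e, NB, MPI_DOUBLE, east, 1, cart, MPI_STATUS_IGNORE);
    /* North/south exchange would follow the same pattern. */

    printf("rank %d: value received from west = %.0f\n", rank,
           west == MPI_PROC_NULL ? -1.0 : recv_w[0]);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```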
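The parallel pressure solve can likewise be sketched with PETSc's KSP interface. The example below is a minimal, assumed illustration: it assembles a stand-in 1-D Laplacian rather than CTF's actual global pressure matrix, distributes rows across ranks, and solves with a runtime-selectable Krylov method, which is the general pattern the abstract attributes to PETSc (requires PETSc 3.17 or later for `PetscCall`).

```c
/* Minimal PETSc sketch of a distributed linear solve, standing in for
 * CTF's global pressure-matrix solve. Matrix stencil and sizes are
 * illustrative assumptions. Run, e.g.:
 *   mpiexec -n 4 ./pressure -ksp_type gmres -pc_type bjacobi */
#include <petscksp.h>

int main(int argc, char **argv) {
    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    PetscInt n = 100; /* global unknowns (assumed) */
    Mat A;
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
    PetscCall(MatSetFromOptions(A));
    PetscCall(MatSetUp(A));

    /* Each rank fills only its locally owned rows. */
    PetscInt rstart, rend;
    PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
    for (PetscInt i = rstart; i < rend; ++i) {
        if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
        if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
        PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    Vec b, p; /* right-hand side and pressure-correction stand-ins */
    PetscCall(MatCreateVecs(A, &p, &b));
    PetscCall(VecSet(b, 1.0));

    /* Krylov solve; method and preconditioner are runtime options. */
    KSP ksp;
    PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPSetFromOptions(ksp));
    PetscCall(KSPSolve(ksp, b, p));

    PetscCall(KSPDestroy(&ksp));
    PetscCall(VecDestroy(&b));
    PetscCall(VecDestroy(&p));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
}
```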
| Bibliography: | USDOE Office of Science (SC); USDOE National Nuclear Security Administration (NNSA); contracts AC04-94AL85000, AC05-00OR22725; report SAND2016-11149J |
| DOI: | 10.1016/j.anucene.2014.11.005 |