Development and assessment of a parallel computing implementation of the Coarse Mesh Radiation Transport (COMET) method



Bibliographic Details
Published in: Annals of Nuclear Energy, Vol. 114, pp. 288-300
Main Authors: Remley, Kyle; Rahnema, Farzad
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.04.2018
ISSN:0306-4549, 1873-2100
Description
Summary:
• The COMET code solves whole-core reactor eigenvalue and power distribution problems.
• The COMET method/code is extended for parallel computing.
• An estimated parallel fraction of 0.98 is achieved for COMET, implying a high level of parallelism.
• Parallel COMET solves whole-core benchmark problems in minutes.

The reactor physics (neutronics) method of the Coarse Mesh Radiation Transport (COMET) code has been used to solve whole-core reactor eigenvalue and power distribution problems. COMET solutions reach Monte Carlo accuracy on a single processor at computational speeds several orders of magnitude faster. However, to extend the method to include on-the-fly depletion and incident-flux response expansion function calculations via Monte Carlo, an implementation for parallel execution of deterministic COMET calculations has been developed. COMET involves inner and outer iterations; the inner iterations contain local (i.e., response data) calculations that can be carried out independently, making the algorithm amenable to parallelization. Taking advantage of this fact, a distributed-memory algorithm featuring domain decomposition was developed. To allow an efficient parallel implementation of the distributed algorithm, changes to response data access and sweep order were made, along with considerations for communication between processors. These changes make the approach generalizable to many different problem types. A software implementation, COMET-MPI, was developed and applied to several benchmark problems. Analysis of the computational performance of COMET-MPI yielded an estimated parallel fraction of 0.98, implying a high level of parallelism. In addition, wall-clock times on the order of minutes are achieved when the code is used to solve whole-core benchmark problems, demonstrating vastly improved computational efficiency with the distributed-memory algorithm.
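The reported parallel fraction of 0.98 can be read through Amdahl's law, which bounds the speedup attainable as processors are added. The sketch below is ours, not from the paper (function and variable names are hypothetical); it only illustrates what a 0.98 parallel fraction implies for scaling:

```python
# Hypothetical illustration (not from the article): Amdahl's law speedup
# implied by a parallel fraction p on n processors.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup S(n) = 1 / ((1 - p) + p / n) for parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.98  # parallel fraction estimated for COMET-MPI in the abstract
    for n in (1, 8, 64, 512):
        print(f"{n:4d} processors -> speedup {amdahl_speedup(p, n):6.2f}")
    # The serial fraction caps the asymptotic speedup at 1 / (1 - p) = 50.
```

The 2% serial remainder limits the achievable speedup to 50x regardless of processor count, which is consistent with the abstract's characterization of 0.98 as "a high level of parallelism" for a deterministic transport sweep.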
DOI:10.1016/j.anucene.2017.12.048