AN EASILY IMPLEMENTED, BLOCK-BASED FAST MARCHING METHOD WITH SUPERIOR SEQUENTIAL AND PARALLEL PERFORMANCE.

Bibliographic Details
Title: AN EASILY IMPLEMENTED, BLOCK-BASED FAST MARCHING METHOD WITH SUPERIOR SEQUENTIAL AND PARALLEL PERFORMANCE.
Authors: JIANMING YANG
Source: SIAM Journal on Scientific Computing; 2019, Vol. 41 Issue 5, pC446-C478, 33p
Subject Terms: EIKONAL equation, MAGNITUDE (Mathematics), PERFORMANCE theory, PARALLEL algorithms
Abstract: The fast marching method is well known for its worst-case optimal computational complexity in solving the Eikonal equation, and it has been employed in numerous scientific and engineering fields. However, it has barely benefited from fast-advancing multicore and many-core architectures because of the challenges presented by its apparent sequential nature. In this paper, we present a straightforward block-based approach for a highly scalable parallelization of the fast marching method on shared-memory computers. Central to our new algorithm is a simplified restarted narrow band strategy, which is used to replace the global bound for terminating the front marching by an incremental bound that increases by a given stride in each restart. Our algorithm greatly reduces load imbalance among blocks through a synchronized exchanging step after the marching step. Furthermore, simple activation mechanisms are introduced to skip blocks not involved in either step. Thus the new algorithm mainly consists of two loops, each for performing one step on a group of blocks. Notably, it does not contain any data race conditions, which is ideal for a direct loop level parallelization using multiple threads. A systematic performance study is carried out for several point source problems with various speed functions on grids with up to 1024³ points using two different computers. Substantial parallel speedups, e.g., up to 3-5 times the number of cores on a 16-core/32-thread computer and up to 2 orders of magnitude on a 64-core/256-thread computer, are demonstrated with all cases. As an extra bonus, our new algorithm also gives much improved sequential performance in most scenarios. Detailed pseudocodes are provided to illustrate the modification procedure from the single-block to the multiblock sequential algorithm in a step-by-step manner. [ABSTRACT FROM AUTHOR]
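The single-block core of the method described in the abstract can be sketched as follows: a first-order fast marching solver on a uniform 2D grid whose heap is drained only up to an incremental bound that grows by a fixed stride at each restart, as in the restarted narrow band strategy. This is a minimal illustrative sketch, not the paper's pseudocode; the function name, the default stride, and the grid layout are assumptions, and the multiblock exchanging and activation steps are omitted.

```python
import heapq
import math


def fmm_restarted(speed, sources, h=1.0, stride=2.0):
    """First-order fast marching on a uniform 2D grid, organized around a
    restarted narrow-band bound: the heap is drained only up to an
    incremental bound that grows by `stride` at each restart.

    Solves |grad T| * F = 1 with T = 0 at the seed points in `sources`.
    """
    n, m = len(speed), len(speed[0])
    INF = float("inf")
    T = [[INF] * m for _ in range(n)]
    done = [[False] * m for _ in range(n)]
    heap = []
    for i, j in sources:
        T[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def solve(i, j):
        # Upwind value from the smaller accepted neighbor along each axis.
        tx = min(T[i - 1][j] if i > 0 else INF,
                 T[i + 1][j] if i < n - 1 else INF)
        ty = min(T[i][j - 1] if j > 0 else INF,
                 T[i][j + 1] if j < m - 1 else INF)
        f = h / speed[i][j]
        a, b = min(tx, ty), max(tx, ty)
        if b - a >= f:  # only the smaller neighbor is upwind
            return a + f
        # Both neighbors upwind: solve (t - a)^2 + (t - b)^2 = f^2.
        return 0.5 * (a + b + math.sqrt(2.0 * f * f - (b - a) ** 2))

    bound = 0.0
    while heap:
        bound += stride  # restart: raise the termination bound by one stride
        # In the paper's multiblock algorithm, each restart is where blocks
        # would synchronize and exchange boundary values (not shown here).
        while heap and heap[0][0] <= bound:
            t, i, j = heapq.heappop(heap)
            if done[i][j]:
                continue  # stale heap entry; the cell was already accepted
            done[i][j] = True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < m and not done[ni][nj]:
                    tn = solve(ni, nj)
                    if tn < T[ni][nj]:
                        T[ni][nj] = tn
                        heapq.heappush(heap, (tn, ni, nj))
    return T
```

For a single block, marching in bounded increments visits cells in the same order as the classic algorithm and returns the same arrival times; the payoff of the incremental bound appears only in the multiblock setting, where it limits how far any one block can run ahead before the synchronized exchanging step.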
Copyright of SIAM Journal on Scientific Computing is the property of Society for Industrial & Applied Mathematics and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
ISSN: 1064-8275
DOI: 10.1137/18M1213464