Multilevel approaches for FSAI preconditioning

Bibliographic Details
Published in: Numerical Linear Algebra with Applications, Vol. 25, No. 5
Main Authors: Magri, Victor A. P., Franceschini, Andrea, Ferronato, Massimiliano, Janna, Carlo
Format: Journal Article
Language:English
Published: Oxford: Wiley Subscription Services, Inc., 01.10.2018
ISSN: 1070-5325, 1099-1506
Description
Summary: Factorized sparse approximate inverse (FSAI) preconditioners are robust algorithms for symmetric positive definite matrices that are particularly attractive in parallel computational environments because of their inherent, almost perfect scalability. Their degree of parallelism in fact exceeds the capabilities of current computational architectures. In this work, we present two new FSAI-based approaches that aim to improve the algorithm's effectiveness by introducing some sequentiality into the native formulation. The first, denoted block tridiagonal FSAI, is based on a block tridiagonal factorization strategy, whereas the second, domain decomposition FSAI, is built by reordering the matrix graph according to a multilevel k-way partitioning method followed by a bandwidth minimization algorithm. We test these preconditioners on a set of symmetric positive definite problems arising from different engineering applications. The results are evaluated in terms of performance, scalability, and robustness, and show that both strategies converge faster than native FSAI, in both iteration count and total computational time, with no significant loss of parallel degree.
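As a rough illustration of the native FSAI kernel that both new variants build on, the sketch below (in Python with NumPy/SciPy) computes a lower triangular factor G such that G A Gᵀ approximates the identity. The function name fsai, the static tril(A) sparsity pattern, and the dense per-row solves are illustrative assumptions, not the authors' implementation; they are chosen only to make the source of the algorithm's parallelism visible: each row of G is an independent small SPD solve.

import numpy as np
import scipy.sparse as sp

def fsai(A):
    # Static-pattern FSAI sketch (assumed, not the paper's code): build a
    # sparse lower triangular G with the sparsity of tril(A) such that
    # G @ A @ G.T approximates the identity.
    A = sp.csr_matrix(A)
    n = A.shape[0]
    pattern = sp.tril(A, format="csr")
    Ad = A.toarray()  # dense copy for clarity only; real codes stay sparse
    rows, cols, vals = [], [], []
    for i in range(n):
        # Row pattern: column indices <= i, including the diagonal.
        P = np.sort(pattern.indices[pattern.indptr[i]:pattern.indptr[i + 1]])
        m = np.searchsorted(P, i)               # local index of the diagonal
        e = np.zeros(len(P)); e[m] = 1.0
        y = np.linalg.solve(Ad[np.ix_(P, P)], e)  # small dense SPD system,
                                                  # independent per row -> parallel
        g = y / np.sqrt(y[m])                   # scale so diag(G A G^T) = 1
        rows += [i] * len(P); cols += list(P); vals += list(g)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

Applied inside a conjugate gradient solver, the preconditioning step reduces to two sparse matrix-vector products, z = G.T @ (G @ r).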
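The reordering behind domain decomposition FSAI can be sketched in the same spirit: cluster the unknowns with a multilevel k-way graph partitioner, then apply a bandwidth-minimizing permutation to the clustered matrix. The abstract names only these two algorithmic ingredients; the concrete choices below (pymetis for the partitioning, SciPy's reverse Cuthill-McKee for the bandwidth reduction, and the helper name dd_reorder) are assumptions for illustration.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee
import pymetis  # assumed partitioner; any multilevel k-way tool would do

def dd_reorder(A, nparts=4):
    # Reordering sketch for domain decomposition FSAI (assumed, not the
    # paper's code): group unknowns by subdomain, then reduce the
    # bandwidth of the clustered matrix.
    A = sp.csr_matrix(A)
    n = A.shape[0]
    adj = [[j for j in A.indices[A.indptr[i]:A.indptr[i + 1]] if j != i]
           for i in range(n)]                   # adjacency list, no self-loops
    _, parts = pymetis.part_graph(nparts, adjacency=adj)
    perm = np.argsort(np.asarray(parts), kind="stable")  # cluster by subdomain
    B = A[perm][:, perm]                        # matrix in subdomain-blocked order
    rcm = reverse_cuthill_mckee(B, symmetric_mode=True)  # bandwidth minimization
    return perm[rcm]                            # composed permutation

The returned permutation would then be applied to A before building the FSAI factor, so the preconditioner operates on the subdomain-blocked, bandwidth-reduced ordering.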
DOI: 10.1002/nla.2183