Learning a preconditioner to accelerate compressed sensing reconstructions in MRI
| Published in: | Magnetic Resonance in Medicine, Volume 87, Issue 4, pp. 2063-2073 |
|---|---|
| Main authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: Wiley Subscription Services, Inc. / John Wiley and Sons Inc, 01.04.2022 |
| Subjects: | |
| ISSN: | 0740-3194, 1522-2594 |

Summary

Purpose
To learn a preconditioner that accelerates parallel imaging (PI) and compressed sensing (CS) reconstructions.
Methods
A convolutional neural network (CNN) with residual connections was used to train a preconditioning operator. Training and validation data were simulated using 50% brain images and 50% white Gaussian noise images. Each multichannel training example contains a simulated sampling mask, complex coil sensitivity maps, and two regularization parameter maps. The trained model was integrated into the preconditioned conjugate gradient (PCG) method as part of the split Bregman CS method. The acceleration performance was compared with that of a circulant PI‐CS preconditioner for varying undersampling factors, numbers of coil elements, and anatomies.
Results
The learned preconditioner reduces the number of PCG iterations by a factor of 4, yielding an acceleration similar to that of an efficient circulant preconditioner. The method generalizes well to different sampling schemes, coil configurations and anatomies.
Conclusion
It is possible to learn adaptable preconditioners for PI and CS reconstructions that meet the performance of state‐of‐the‐art preconditioners. Further acceleration could be achieved by optimizing the network architecture and the training set. Such a preconditioner could also be integrated in fully learned reconstruction methods to accelerate the training process of unrolled networks.
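
The Methods above describe plugging a trained CNN into the PCG inner solve of a split Bregman CS reconstruction. Below is a minimal sketch of that pattern, not the authors' implementation: `apply_normal_operator` and `learned_preconditioner` are hypothetical placeholders, standing in for the coil-weighted, undersampled encoding operator (plus regularization terms) and for the trained residual CNN, respectively.

```python
# Minimal sketch: PCG with a pluggable (e.g., learned) preconditioner.
# All operator names are illustrative placeholders, not the paper's code.
import numpy as np

def pcg(apply_A, apply_M, b, x0=None, max_iter=50, tol=1e-6):
    """Preconditioned conjugate gradient for A x = b.

    apply_A : applies the Hermitian positive definite system matrix,
              e.g. E^H E + regularization in a PI-CS normal-equation solve.
    apply_M : applies the preconditioner; in the paper's setting this would
              be one forward pass of the trained CNN on the residual.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)                  # initial residual
    z = apply_M(r)                      # preconditioned residual
    p = z.copy()
    rz_old = np.vdot(r, z)
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz_old / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * b_norm:
            break
        z = apply_M(r)                  # one preconditioner call per iteration
        rz_new = np.vdot(r, z)
        p = z + (rz_new / rz_old) * p
        rz_old = rz_new
    return x

# Hypothetical stand-ins for demonstration only:
def apply_normal_operator(x):
    # A real reconstruction would apply E^H E x plus regularization terms.
    return 2.0 * x  # trivially well-conditioned toy operator

def learned_preconditioner(r):
    # The paper uses a residual CNN trained on brain and noise images;
    # the identity here reduces PCG to plain CG, for illustration only.
    return r

b = np.random.randn(64 * 64).astype(np.complex64)
x_hat = pcg(apply_normal_operator, learned_preconditioner, b)
```

Since each `apply_M` call is one CNN forward pass, the reported four-fold reduction in PCG iterations translates into wall-clock savings only if that forward pass is cheap relative to applying the encoding operator.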

| ISSN: | 0740-3194, 1522-2594 |
|---|---|
| DOI: | 10.1002/mrm.29073 |