Ultrasound deep beamforming using a multiconstrained hybrid generative adversarial network
| Published in: | Medical Image Analysis, Vol. 71, p. 102086 |
|---|---|
| Main Authors: | , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Netherlands: Elsevier B.V., 01.07.2021 |
| ISSN: | 1361-8415, 1361-8423 |
| DOI: | 10.1016/j.media.2021.102086 |
Summary:

- To increase the ease of use of the MC-HGAN beamformer, we establish an end-to-end mapping between RF data and the output image through a hybrid GAN model.
- The hybrid GAN model is composed of an intrinsic learning module, a perceptual learning module, and a fusion module, which are embedded as an integral network to simultaneously capture complementary features.
- Unlike many existing deep beamforming methods, which adopt only multiple fully connected layers to process the RF signal, the hybrid GAN model uses several attention blocks to fully exploit both local and global information, preserving detailed structures and speckle patterns (see the sketch after this list).
- To increase the robustness of the beamforming performance, we introduce a multiconstrained training strategy that provides comprehensive guidance for the network by invoking intermediates to co-constrain the training process. This strategy ensures that the RF signal and image data act as complementary constraints rather than being mutually exclusive.
- To ensure wider applicability, the proposed MC-HGAN beamformer is designed as a generic method that can adapt to different ultrasound emission modes, including plane-wave (PW) and line-scan imaging.
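The two-branch structure and attention blocks described in the highlights can be summarized in code. The PyTorch sketch below is illustrative only: the module names, layer sizes, and the squeeze-and-excitation-style channel attention are assumptions, not the paper's exact design. It shows how an RF branch and an image branch can each apply attention (mixing local convolutional features with globally pooled context) before a fusion module combines them.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style attention (a hypothetical stand-in for
    the paper's attention blocks): reweights feature channels using globally
    pooled context, so local filters and global statistics both shape the output."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise reweighting

class HybridGenerator(nn.Module):
    """Minimal two-branch generator: an 'intrinsic' branch on RF data and a
    'perceptual' branch on a coarse image (e.g., a DAS reconstruction), fused
    into one output. Assumes both inputs share spatial dimensions; all layer
    sizes are illustrative."""
    def __init__(self, rf_ch=1, img_ch=1, feat=32):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
                ChannelAttention(feat),
                nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.intrinsic = branch(rf_ch)    # RF-based features
        self.perceptual = branch(img_ch)  # image-based features
        self.fusion = nn.Sequential(      # fusion module: concat + 1x1 conv
            nn.Conv2d(2 * feat, feat, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1),
        )

    def forward(self, rf, coarse_img):
        f = torch.cat([self.intrinsic(rf), self.perceptual(coarse_img)], dim=1)
        return self.fusion(f)
```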
Ultrasound beamforming is a principal factor in high-quality ultrasound imaging. The conventional delay-and-sum (DAS) beamformer generates images with high computational speed but low spatial resolution; many adaptive beamforming methods have therefore been introduced to improve image quality. However, these adaptive methods suffer from high computational complexity, which limits their practical application. Hence, an advanced beamformer that can overcome the spatiotemporal resolution bottleneck is highly desirable. In this paper, we propose a novel deep-learning-based algorithm, the multiconstrained hybrid generative adversarial network (MC-HGAN) beamformer, which rapidly achieves high-quality ultrasound imaging. The MC-HGAN beamformer directly establishes a one-shot mapping between radio-frequency (RF) signals and the reconstructed ultrasound images through a hybrid generative adversarial network (GAN) model. Through two dedicated branches, the hybrid GAN model extracts both RF-based and image-based features and integrates them through a fusion module. We also introduce a multiconstrained training strategy that provides comprehensive guidance for the network by invoking intermediates to co-constrain the training process. Moreover, the beamformer is designed to adapt to various ultrasound emission modes, which improves its generalizability for clinical applications. We conducted experiments on a variety of datasets scanned in line-scan and plane-wave emission modes and evaluated the results with both similarity-based and ultrasound-specific metrics. The comparisons demonstrate that the MC-HGAN beamformer generates ultrasound images of higher quality than those of other deep-learning-based methods and shows very high robustness across different clinical datasets. This technology also shows great potential for real-time imaging.
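For reference, the conventional DAS baseline that the abstract contrasts against can be written in a few lines. This NumPy sketch assumes a single zero-degree plane-wave transmit and nearest-sample delays; apodization, sub-sample interpolation, and angle compounding are omitted, and all names are hypothetical.

```python
import numpy as np

def das_beamform_pw(rf, fs, c, elem_x, z_axis, x_axis):
    """Delay-and-sum for one 0-degree plane-wave transmit (minimal sketch).
    rf             : (n_samples, n_elements) channel RF data
    fs             : sampling rate [Hz]
    c              : speed of sound [m/s]
    elem_x         : (n_elements,) lateral element positions [m]
    z_axis, x_axis : image grid coordinates [m]
    """
    n_samples, _ = rf.shape
    img = np.zeros((len(z_axis), len(x_axis)))
    for iz, z in enumerate(z_axis):
        for ix, x in enumerate(x_axis):
            # transmit delay: the plane wave reaches depth z at t = z / c;
            # receive delay: the echo travels from (x, z) back to each element
            t = (z + np.sqrt(z**2 + (x - elem_x) ** 2)) / c
            idx = np.round(t * fs).astype(int)  # nearest-sample lookup
            valid = idx < n_samples
            # sum the delayed samples across all valid channels
            img[iz, ix] = rf[idx[valid], np.nonzero(valid)[0]].sum()
    return img
```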
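The multiconstrained training strategy mentioned in both the highlights and the abstract amounts to co-constraining the generator with several loss terms at once. The sketch below is one plausible reading, not the paper's actual objective: the weights, the L1 reconstruction terms, and the use of a single intermediate constraint are all assumptions.

```python
import torch
import torch.nn.functional as F

def multiconstrained_loss(pred_img, target_img, inter_pred, inter_target,
                          d_fake_logits, w_adv=1e-3, w_img=1.0, w_inter=0.5):
    """Hypothetical composite generator loss: an adversarial term plus
    reconstruction terms on both the final image and an intermediate
    representation, so image- and RF-derived signals co-constrain training.
    Weights are illustrative, not from the paper."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))  # fool the discriminator
    img = F.l1_loss(pred_img, target_img)                # final-image fidelity
    inter = F.l1_loss(inter_pred, inter_target)          # intermediate constraint
    return w_adv * adv + w_img * img + w_inter * inter
```

In a standard GAN loop, this composite term would replace the plain adversarial objective in the generator update, while the discriminator keeps its usual real/fake loss.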