Sub-pixel multi-scale fusion network for medical image segmentation


Detailed bibliography
Published in: Multimedia Tools and Applications, Volume 83, Issue 41, pp. 89355-89373
Main authors: Li, Jing; Chen, Qiaohong; Fang, Xian
Format: Journal Article
Language: English
Publication details: New York: Springer US (Springer Nature B.V.), 01.12.2024
ISSN: 1380-7501, 1573-7721
Description
Summary: CNNs and Transformers have significantly advanced the domain of medical image segmentation. The integration of their strengths facilitates rich feature extraction but also introduces the challenge of mixed multi-scale feature fusion. To overcome this issue, we propose an innovative deep medical image segmentation framework termed Sub-pixel Multi-scale Fusion Network (SMFNet), which effectively incorporates the sub-pixel multi-scale feature fusion results of CNN and Transformer into the architecture. In particular, our design consists of three effective and practical modules. First, we utilize the Sub-pixel Convolutional Module to synchronize the extracted features at multiple scales to a consistent resolution. Next, we develop the Three-level Enhancement Module to learn features from adjacent layers and perform information exchange. Lastly, we leverage the Hierarchical Adaptive Gate to fuse information from other contextual levels through the Sub-pixel Convolutional Module. Extensive experiments on the Synapse, ACDC, and ISIC 2018 datasets demonstrate the effectiveness of the proposed SMFNet, and our method is superior to other competitive CNN-based or Transformer-based segmentation methods.
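
The abstract's Sub-pixel Convolutional Module brings encoder features at different scales to a common resolution before fusion. The sketch below only illustrates the general sub-pixel (pixel-shuffle) upsampling idea in PyTorch under assumed channel sizes, upscale factor, and class name; it is not the authors' implementation.

# Minimal sketch, not the SMFNet code: sub-pixel convolution used to lift a
# lower-resolution feature map to a common resolution before fusion.
# Channel counts and the upscale factor below are illustrative assumptions.
import torch
import torch.nn as nn

class SubPixelUpsample(nn.Module):
    """Upsample a feature map by `scale` via sub-pixel convolution (pixel shuffle)."""
    def __init__(self, in_channels: int, out_channels: int, scale: int):
        super().__init__()
        # 1x1 conv expands channels by scale**2; pixel shuffle moves them into space.
        self.expand = nn.Conv2d(in_channels, out_channels * scale ** 2, kernel_size=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.expand(x))

# Example: align a deep, coarse feature map with a shallow, fine one.
coarse = torch.randn(1, 256, 14, 14)        # low-resolution encoder features
fine = torch.randn(1, 64, 56, 56)           # high-resolution encoder features
up = SubPixelUpsample(in_channels=256, out_channels=64, scale=4)
aligned = up(coarse)                        # shape (1, 64, 56, 56), matches `fine`
fused = torch.cat([aligned, fine], dim=1)   # plain concatenation as a stand-in for fusion

Once the scales agree, the paper's Three-level Enhancement Module and Hierarchical Adaptive Gate would replace the simple concatenation shown here.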
DOI: 10.1007/s11042-024-20338-0