Boundary-Aware Context Neural Network for Medical Image Segmentation



Detailed bibliography
Published in: Medical Image Analysis, Volume 78, p. 102395
Main authors: Wang, Ruxin; Chen, Shuyuan; Ji, Chaojie; Fan, Jianping; Li, Ye
Format: Journal Article
Language: English
Published: Netherlands: Elsevier B.V., 01.05.2022
ISSN: 1361-8415, 1361-8423
Description
Summary:
• Proposes a boundary-aware context neural network for 2D medical image segmentation.
• The pyramid edge extraction module aggregates edge information at multiple granularities.
• The multi-task learning module enriches the context through different task branches.
• The cross feature fusion module selectively aggregates multi-level features.
• Achieves state-of-the-art performance on five medical image segmentation datasets.

Medical image segmentation can provide a reliable basis for further clinical analysis and disease diagnosis. With the development of convolutional neural networks (CNNs), medical image segmentation performance has advanced significantly. However, most existing CNN-based methods often produce unsatisfactory segmentation masks without accurate object boundaries. This problem is caused by the limited context information and inadequate discriminative feature maps that remain after consecutive pooling and convolution operations. Additionally, because medical images are characterized by high intra-class variation, inter-class indistinction and noise, extracting powerful context and aggregating discriminative features for fine-grained segmentation remain challenging. In this study, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation that captures richer context and preserves fine spatial information, built on an encoder-decoder architecture. In each stage of the encoder sub-network, a proposed pyramid edge extraction module first obtains multi-granularity edge information. A newly designed mini multi-task learning module then jointly learns to segment object masks and detect lesion boundaries, with a new interactive attention layer introduced to bridge the two tasks. In this way, information complementarity between the tasks is achieved, effectively leveraging boundary information to offer strong cues for better segmentation prediction.
Finally, a cross feature fusion module selectively aggregates multi-level features from the entire encoder sub-network. By cascading these three modules, richer context and fine-grained features of each stage are encoded and then delivered to the decoder. Extensive experiments on five datasets show that the proposed BA-Net outperforms state-of-the-art techniques.
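The abstract's central idea, an interactive attention layer that lets the segmentation and boundary-detection branches exchange cues, can be illustrated with a minimal numpy sketch. This is a hypothetical reconstruction for intuition only: the function name `interactive_attention`, the channel-averaged sigmoid attention maps, and the residual gating form are all assumptions, not the paper's actual formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interactive_attention(seg_feat, edge_feat):
    """Hypothetical sketch of an interactive attention layer.

    Each branch's (C, H, W) feature map is reduced to a spatial
    attention map, which then re-weights the *other* branch's
    features, so segmentation and boundary detection can share cues.
    """
    # Spatial attention from channel-averaged features (an assumption;
    # the paper's exact attention computation may differ).
    edge_attn = sigmoid(edge_feat.mean(axis=0, keepdims=True))
    seg_attn = sigmoid(seg_feat.mean(axis=0, keepdims=True))
    # Residual gating: each branch is enhanced by the other's attention.
    seg_out = seg_feat * (1.0 + edge_attn)
    edge_out = edge_feat * (1.0 + seg_attn)
    return seg_out, edge_out

# Toy usage on random 8-channel, 16x16 feature maps.
rng = np.random.default_rng(0)
seg = rng.standard_normal((8, 16, 16))
edge = rng.standard_normal((8, 16, 16))
seg_out, edge_out = interactive_attention(seg, edge)
print(seg_out.shape, edge_out.shape)  # (8, 16, 16) (8, 16, 16)
```

Because the sigmoid attention lies in (0, 1), the residual form scales each branch by a factor between 1 and 2, so the gating can only amplify, never suppress, the incoming features in this sketch.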
DOI: 10.1016/j.media.2022.102395