Semi-Supervised Representation Learning for Segmentation on Medical Volumes and Sequences

Detailed bibliography
Published in: IEEE Transactions on Medical Imaging, Volume 42, Issue 12, pp. 3972-3986
Main authors: Chen, Zejian; Zhuo, Wei; Wang, Tianfu; Cheng, Jun; Xue, Wufeng; Ni, Dong
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2023
Subjects:
ISSN: 0278-0062, 1558-254X
Online access: Get full text
Description
Summary: Benefiting from massive labeled samples, deep learning-based segmentation methods have achieved great success on two-dimensional natural images. However, segmenting high-dimensional medical volumes and sequences remains challenging, because large-scale annotation demands considerable effort from clinical experts. Self-/semi-supervised learning methods have been shown to improve performance by exploiting unlabeled data, but they still lack mining of local semantic discrimination and exploitation of volume/sequence structure. In this work, we propose a semi-supervised representation learning method with two novel modules that enhance the features in the encoder and decoder, respectively. For the encoder, based on the continuity between slices/frames and the common spatial layout of organs across subjects, we propose an asymmetric network with an attention-guided predictor that enables prediction between feature maps of different slices of unlabeled data. For the decoder, based on the semantic consistency between labeled and unlabeled data, we introduce a novel semantic contrastive learning scheme to regularize the feature maps in the decoder. The two parts are trained jointly on both labeled and unlabeled volumes/sequences in a semi-supervised manner. When evaluated on three benchmark datasets of medical volumes and sequences, our model outperforms existing methods by a large margin: 7.3% DSC on ACDC, 6.5% on Prostate, and 3.2% on CAMUS when only a few labeled samples are available. Further, results on the M&M dataset show that the proposed method yields improvements on data from an unknown domain without using any domain adaptation techniques. Extensive evaluations confirm the effectiveness of the representation mining and the superior performance of our method. The code is available at https://github.com/CcchenzJ/BootstrapRepresentation .
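
The abstract does not give implementation details, but the encoder-side idea (an asymmetric online/target encoder pair with an attention-guided predictor that predicts the feature map of a neighbouring slice of unlabeled data) can be illustrated with a minimal PyTorch sketch. All names here (AttentionGuidedPredictor, CrossSlicePrediction), the channel-attention design, the momentum update, and the negative cosine objective are assumptions for illustration only, not the authors' method; their actual implementation is in the linked repository, and the decoder-side semantic contrastive regularization is not sketched here.

# Hedged sketch of a cross-slice prediction setup: an online encoder plus a
# momentum (stop-gradient) target encoder, with an attention-guided predictor
# mapping one slice's feature map to a neighbouring slice's. Illustrative only.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGuidedPredictor(nn.Module):
    """Toy predictor: channel attention followed by a 1x1 projection (assumed design)."""

    def __init__(self, channels: int):
        super().__init__()
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.project(feat * self.attention(feat))


class CrossSlicePrediction(nn.Module):
    """Asymmetric online/target encoders; only the online branch receives gradients."""

    def __init__(self, encoder: nn.Module, channels: int, momentum: float = 0.99):
        super().__init__()
        self.online = encoder
        self.target = copy.deepcopy(encoder)
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.predictor = AttentionGuidedPredictor(channels)
        self.momentum = momentum

    @torch.no_grad()
    def update_target(self):
        # Exponential moving average of online weights into the target encoder.
        for po, pt in zip(self.online.parameters(), self.target.parameters()):
            pt.mul_(self.momentum).add_(po.detach(), alpha=1 - self.momentum)

    def forward(self, slice_a: torch.Tensor, slice_b: torch.Tensor) -> torch.Tensor:
        pred = self.predictor(self.online(slice_a))  # predict the neighbour's features
        with torch.no_grad():
            tgt = self.target(slice_b)               # stop-gradient target features
        # Negative cosine similarity over flattened spatial feature maps.
        return -F.cosine_similarity(pred.flatten(1), tgt.flatten(1), dim=1).mean()


if __name__ == "__main__":
    encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
    model = CrossSlicePrediction(encoder, channels=16)
    a, b = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)  # two adjacent unlabeled slices
    loss = model(a, b)
    loss.backward()
    model.update_target()
    print(float(loss))

The stop-gradient target branch mirrors common asymmetric self-supervised setups; how the attention guidance and the joint training with the labeled segmentation loss are realised in the paper should be taken from the repository above.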
DOI: 10.1109/TMI.2023.3319973