Combining contrastive learning and shape awareness for semi-supervised medical image segmentation

Detailed Bibliography
Published in: Expert Systems with Applications, Vol. 242, Article 122567
Main Authors: Chen, Yaqi; Chen, Faquan; Huang, Chenxi
Format: Journal Article
Language: English
Published: Elsevier Ltd, 15 May 2024
ISSN: 0957-4174, 1873-6793
Description
Summary: For computer-aided diagnosis (CAD) to succeed, automatic segmentation must be reliable and efficient. Semi-supervised segmentation techniques make extensive use of unlabeled data to address the high cost of acquiring labeled medical data. However, different anatomical regions and boundaries in medical images may exhibit similar gray-level features, and current semi-supervised medical image segmentation algorithms disregard both the discrimination of such similar regions and geometric constraints on boundaries. In this work, we propose a multi-task pixel-level representation learning framework guided by certainty pixels. Specifically, we treat segmentation prediction as the primary task and shape-aware level-set representation as a collaborative task that enforces local boundary constraints on unlabeled data. We construct dual decoders to obtain predictions and uncertainty maps from different perspectives, which enhances the capacity to distinguish similar regions. In addition, we introduce certainty pixels to guide the computation of a pixel-level contrastive loss that strengthens correlations between pixels. Finally, experiments on two open datasets demonstrate that our strategy outperforms current approaches. The code will be released at https://github.com/yqimou/SAMT-PCL.
DOI: 10.1016/j.eswa.2023.122567
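
To make two ideas from the abstract concrete, the sketch below (plain NumPy/PyTorch, not the authors' released code; the function names, certainty threshold, temperature, and sampling size are all illustrative assumptions) shows a shape-aware level-set target built as a signed distance map of a binary mask, and a pixel-level contrastive (InfoNCE-style) loss restricted to high-certainty pixels.

# Minimal sketch of a level-set target and a certainty-guided
# pixel-level contrastive loss. Hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed distance transform of a binary mask: negative inside the
    object, positive outside, zero on the boundary."""
    if mask.sum() == 0 or mask.sum() == mask.size:
        return np.zeros_like(mask, dtype=np.float32)
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(1 - mask)
    return (outside - inside).astype(np.float32)

def certainty_pixel_contrastive_loss(feat, probs, labels,
                                     threshold=0.9, tau=0.1, n_pix=256):
    """InfoNCE-style loss computed only over 'certain' pixels.

    feat:   (N, C, H, W) pixel embeddings
    probs:  (N, H, W) predicted foreground probability
    labels: (N, H, W) pseudo/true labels in {0, 1}
    """
    N, C, H, W = feat.shape
    feat = F.normalize(feat, dim=1).permute(0, 2, 3, 1).reshape(-1, C)
    conf = torch.maximum(probs, 1 - probs).reshape(-1)  # certainty proxy
    labels = labels.reshape(-1)

    certain = (conf > threshold).nonzero(as_tuple=True)[0]
    if certain.numel() < 2:
        return feat.new_zeros(())
    # subsample certain pixels to keep the pairwise matrix small
    perm = torch.randperm(certain.numel(), device=certain.device)
    idx = certain[perm[:n_pix]]
    z, y = feat[idx], labels[idx]

    sim = z @ z.t() / tau                               # pairwise similarities
    pos = (y[:, None] == y[None, :]).float()
    pos.fill_diagonal_(0)                               # exclude self-pairs
    # mask the diagonal, softmax over all other pixels,
    # then average the log-probability over positive pairs
    logit = sim - torch.eye(len(idx), device=sim.device) * 1e9
    log_prob = F.log_softmax(logit, dim=1)
    denom = pos.sum(1).clamp(min=1)
    return -(pos * log_prob).sum(1).div(denom).mean()

In a training loop of the kind the abstract describes, signed_distance_map(gt_mask) would supervise the level-set (collaborative) head on labeled images, e.g. with an MSE regression loss, while the contrastive term would be applied to certain pixels of both labeled and unlabeled batches, with the dual decoders' disagreement serving as one possible source of the per-pixel certainty.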