Combining contrastive learning and shape awareness for semi-supervised medical image segmentation


Bibliographic Details
Published in: Expert Systems with Applications, Vol. 242, p. 122567
Main Authors: Chen, Yaqi; Chen, Faquan; Huang, Chenxi
Format: Journal Article
Language: English
Published: Elsevier Ltd, 15.05.2024
ISSN: 0957-4174, 1873-6793
Description
Summary: For computer-aided diagnosis (CAD) to be successful, automatic segmentation needs to be reliable and efficient. Semi-supervised segmentation (SSL) techniques make extensive use of unlabeled data to address the high acquisition cost of labeled medical data. However, different anatomical regions and boundaries in medical images may exhibit similar gray-level features. Current semi-supervised algorithms for segmenting medical images disregard both the discrimination of such similar regions and the geometric constraints on boundaries. In this work, we propose a framework for multi-task pixel-level representation learning that is guided by certainty pixels. Specifically, we treat segmentation prediction as the primary task and shape-aware level set representation as a collaborative task that enforces local boundary constraints on unlabeled data. We construct dual decoders to obtain predictions and uncertainty maps from different perspectives, which enhances the capacity to distinguish similar regions. In addition, we introduce certainty pixels to guide the computation of a pixel-level contrastive loss that strengthens the correlation between pixels. Finally, experiments on two open datasets demonstrate that our strategy outperforms current approaches. The code will be released at https://github.com/yqimou/SAMT-PCL.
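The two mechanisms named in the abstract have common concrete forms that help make the summary tangible; the paper's exact formulation may differ. A level set shape representation is typically realized as a signed distance map of the segmentation mask, and "certainty pixels" can be read as pixels where both decoders are confident and agree. The sketch below illustrates these two ideas under those assumptions (function names and the threshold `tau` are illustrative, not from the paper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Level set representation of a binary mask: negative inside the
    object, positive outside, magnitude = Euclidean distance to the
    nearest boundary pixel."""
    mask = mask.astype(bool)
    if not mask.any() or mask.all():
        # No boundary exists; the signed distance is undefined, return zeros.
        return np.zeros(mask.shape, dtype=np.float64)
    return distance_transform_edt(~mask) - distance_transform_edt(mask)


def certainty_pixels(prob_a: np.ndarray,
                     prob_b: np.ndarray,
                     tau: float = 0.9) -> np.ndarray:
    """Boolean map of 'certain' pixels: both decoders assign max class
    probability >= tau AND predict the same class.

    prob_a, prob_b: (C, H, W) softmax outputs of the two decoders.
    """
    conf_a, cls_a = prob_a.max(axis=0), prob_a.argmax(axis=0)
    conf_b, cls_b = prob_b.max(axis=0), prob_b.argmax(axis=0)
    return (conf_a >= tau) & (conf_b >= tau) & (cls_a == cls_b)
```

In a training loop of this kind, the signed distance map would serve as the regression target of the collaborative task on labeled data, while the certainty mask would select which pixel embeddings participate in the contrastive loss on unlabeled data.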
DOI: 10.1016/j.eswa.2023.122567