Momentum Contrastive Voxel-wise Representation Learning for Semi-supervised Volumetric Medical Image Segmentation

Bibliographic Details
Published in:Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention Vol. 13434; p. 639
Main Authors: You, Chenyu, Zhao, Ruihan, Staib, Lawrence, Duncan, James S
Format: Journal Article
Language:English
Published: Germany 01.01.2022
Abstract Contrastive learning (CL) aims to learn useful representations without relying on expert annotations in the context of medical image segmentation. Existing approaches mainly contrast a single positive vector (i.e., an augmentation of the same image) against a set of negatives drawn from the entire remainder of the batch by simply mapping all input features into the same constant vector. Despite the impressive empirical performance, those methods have the following shortcomings: (1) it remains a formidable challenge to prevent collapse to trivial solutions; and (2) we argue that not all voxels within the same image are equally positive, since dissimilar anatomical structures exist within the same image. In this work, we present a novel Contrastive Voxel-wise Representation Learning (CVRL) method to effectively learn low-level and high-level features by capturing 3D spatial context and rich anatomical information along both the feature and the batch dimensions. Specifically, we first introduce a novel CL strategy to promote feature diversity among the 3D representation dimensions. We train the framework through bi-level contrastive optimization (i.e., low-level and high-level) on 3D images. Experiments on two benchmark datasets and different labeled settings demonstrate the superiority of our proposed framework. More importantly, we also prove that our method inherits the hardness-aware property of standard CL approaches. Code will be available soon.
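As a rough illustration of the voxel-wise contrastive objective described in the abstract, below is a minimal PyTorch sketch of an InfoNCE-style loss over voxel embeddings taken from two augmented views of the same 3D volume. The function name (voxel_infonce), tensor shapes, and temperature value are assumptions made for this sketch and are not taken from the authors' implementation.

import torch
import torch.nn.functional as F

def voxel_infonce(feat_q, feat_k, temperature=0.1):
    # feat_q, feat_k: (B, C, D, H, W) voxel-wise embeddings of two augmented views.
    # Each voxel in feat_q is pulled toward the voxel at the same spatial location
    # in feat_k (its positive) and pushed away from every other voxel (negatives).
    b, c = feat_q.shape[:2]
    q = F.normalize(feat_q.permute(0, 2, 3, 4, 1).reshape(-1, c), dim=1)
    k = F.normalize(feat_k.permute(0, 2, 3, 4, 1).reshape(-1, c), dim=1)
    logits = q @ k.t() / temperature                       # (N, N) voxel-to-voxel similarities
    targets = torch.arange(q.shape[0], device=q.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random 3D feature maps:
# loss = voxel_infonce(torch.randn(2, 16, 4, 8, 8), torch.randn(2, 16, 4, 8, 8))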
Author Zhao, Ruihan
Staib, Lawrence
You, Chenyu
Duncan, James S
Author_xml – sequence: 1
  givenname: Chenyu
  surname: You
  fullname: You, Chenyu
  organization: Electrical Engineering, Yale University, New Haven, CT USA
– sequence: 2
  givenname: Ruihan
  surname: Zhao
  fullname: Zhao, Ruihan
  organization: Electrical and Computer Engineering, The University of Texas at Austin, TX USA
– sequence: 3
  givenname: Lawrence
  surname: Staib
  fullname: Staib, Lawrence
  organization: Biomedical Engineering, Yale University, New Haven, CT USA
– sequence: 4
  givenname: James S
  surname: Duncan
  fullname: Duncan, James S
  organization: Biomedical Engineering, Yale University, New Haven, CT USA
BackLink https://www.ncbi.nlm.nih.gov/pubmed/37465615 (View this record in MEDLINE/PubMed)
ContentType Journal Article
DBID NPM
7X8
DOI 10.1007/978-3-031-16440-8_61
DatabaseName PubMed
MEDLINE - Academic
DatabaseTitle PubMed
MEDLINE - Academic
DatabaseTitleList MEDLINE - Academic
PubMed
Database_xml – sequence: 1
  dbid: NPM
  name: PubMed
  url: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
– sequence: 2
  dbid: 7X8
  name: MEDLINE - Academic
  url: https://search.proquest.com/medline
  sourceTypes: Aggregation Database
DeliveryMethod no_fulltext_linktorsrc
ExternalDocumentID 37465615
Genre Journal Article
GrantInformation_xml – fundername: NCI NIH HHS
  grantid: R01 CA206180
GroupedDBID NPM
7X8
IEDL.DBID 7X8
ISICitedReferencesCount 80
IngestDate Thu Jul 10 18:19:43 EDT 2025
Thu Jan 02 22:33:32 EST 2025
IsDoiOpenAccess false
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords Contrastive Learning
Medical Image Segmentation
Semi-Supervised Learning
Language English
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
OpenAccessLink https://pmc.ncbi.nlm.nih.gov/articles/PMC10352821/pdf/nihms-1912991.pdf
PMID 37465615
PQID 2839740137
PQPubID 23479
ParticipantIDs proquest_miscellaneous_2839740137
pubmed_primary_37465615
PublicationCentury 2000
PublicationDate 2022-01-01
PublicationDateYYYYMMDD 2022-01-01
PublicationDate_xml – month: 01
  year: 2022
  text: 2022-01-01
  day: 01
PublicationDecade 2020
PublicationPlace Germany
PublicationPlace_xml – name: Germany
PublicationTitle Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention
PublicationTitleAlternate Med Image Comput Comput Assist Interv
PublicationYear 2022
SourceID proquest
pubmed
SourceType Aggregation Database
Index Database
StartPage 639
Title Momentum Contrastive Voxel-wise Representation Learning for Semi-supervised Volumetric Medical Image Segmentation
URI https://www.ncbi.nlm.nih.gov/pubmed/37465615
https://www.proquest.com/docview/2839740137
Volume 13434
linkProvider ProQuest