Entropy‐guided contrastive learning for semi‐supervised medical image segmentation

Bibliographic details
Published in: IET Image Processing, Vol. 18, Issue 2, pp. 312–326
Main authors: Xie, Junsong; Wu, Qian; Zhu, Renju
Format: Journal Article
Language: English
Published: Wiley, 1 February 2024
Keywords: contrastive learning; COVID-19; entropy-guided; medical image segmentation; semi-supervised learning
ISSN: 1751-9659; EISSN: 1751-9667
Online access: Full text
Abstract Accurately segmenting medical images is a critical step in clinical diagnosis and developing patient‐specific treatment plans. While supervised learning algorithms have achieved excellent performance in this area, they require a large amount of annotated data, which is often time‐consuming and difficult to obtain. As a result, semi‐supervised learning (SSL) has gained attention as it has the potential to alleviate this challenge by using not only limited labelled data but also a large amount of unlabelled data. A common approach in SSL is to filter out high‐entropy features and use the low‐entropy part to compute unsupervised loss. However, it is believed that the high‐entropy part is also beneficial for model training, and discarding it can lead to information loss. To address this issue, a simple yet efficient contrastive learning approach is proposed in this work for semi‐supervised medical image segmentation, called Entropy‐Guided Contrastive Learning Segmentation Network (EGCL‐Net). The proposed method separates the low‐entropy and high‐entropy features via the average of predictions, using contrastive loss to pull the intra‐class entropy representation distance close and push the inter‐class entropy representation distance away. Extensive experiments on the automated cardiac diagnosis challenge dataset, COVID‐19, and BraTS2019 datasets showed that: (1) EGCL‐Net can significantly improve performance by utilizing high‐entropy representation, and (2) the authors’ EGCL‐Net outperforms recent state‐of‐the‐art semi‐supervised methods in both qualitative and quantitative evaluations. The authors proposed a simple yet efficient contrastive learning approach to make sufficient use of unlabelled data for semi‐supervised medical image segmentation. The contrastive loss is employed to pull the intra‐class entropy representation distance close and push the inter‐class entropy representation distance away. Extensive experiments on the automated cardiac diagnosis challenge dataset, COVID‐19, and BraTS2019 datasets demonstrate the effectiveness of the proposed method.
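The abstract's description suggests a straightforward implementation pattern: average the predictions of two branches, split pixels into low- and high-entropy sets by their prediction entropy, build class prototypes from the confident (low-entropy) features, and contrast every representation, including the high-entropy ones, against those prototypes. The sketch below is a minimal illustration under these assumptions; the function name, the fixed entropy threshold, the temperature, and the prototype-based InfoNCE-style loss are hypothetical choices and not the authors' released code.

```python
# Minimal, illustrative sketch of an entropy-guided contrastive loss
# (hypothetical names and hyperparameters; not the authors' implementation).
import torch
import torch.nn.functional as F


def entropy_guided_contrastive_loss(probs_a, probs_b, features,
                                    entropy_thresh=0.5, tau=0.1):
    """probs_a, probs_b: (N, C) softmax outputs of two prediction branches.
    features: (N, D) embeddings from a projection head for the same N pixels."""
    # 1. Average the branch predictions and compute per-pixel entropy.
    probs = 0.5 * (probs_a + probs_b)                          # (N, C)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)    # (N,)

    # 2. Split pixels into a confident (low-entropy) set and an uncertain
    #    (high-entropy) set; pseudo-labels come from the averaged prediction.
    low_mask = entropy < entropy_thresh
    pseudo_labels = probs.argmax(dim=1)                        # (N,)

    # 3. Build one prototype per class from the low-entropy features only.
    feats = F.normalize(features, dim=1)
    num_classes = probs.shape[1]
    prototypes = []
    for c in range(num_classes):
        sel = low_mask & (pseudo_labels == c)
        proto = feats[sel].mean(dim=0) if sel.any() else feats.new_zeros(feats.shape[1])
        prototypes.append(F.normalize(proto, dim=0))
    prototypes = torch.stack(prototypes)                       # (C, D)

    # 4. Contrast every feature, including the high-entropy ones, against the
    #    prototypes: pull towards its own pseudo-class, push away from the rest.
    logits = feats @ prototypes.t() / tau                      # (N, C)
    return F.cross_entropy(logits, pseudo_labels)
```

In a semi-supervised training loop such a term would typically be added, with a ramp-up weight, to the supervised segmentation loss computed on the labelled batch.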
Authors:
Xie, Junsong (ORCID: 0000-0002-6845-0441), Anhui Medical University
Wu, Qian (email: ayd_wuqian@126.com), Anhui Medical University
Zhu, Renju, Hefei BOE Hospital
DOI: 10.1049/ipr2.12950
Copyright: 2023 The Authors. Published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
Funding: Nature Science Foundation of the Anhui Provincial Higher Education Institutions of China (2022AH050660; KJ2021A0265)
GroupedDBID .DC
0R~
1OC
24P
29I
5GY
6IK
8VB
AAHHS
AAHJG
AAJGR
ABQXS
ACCFJ
ACCMX
ACESK
ACGFS
ACIWK
ACXQS
ADZOD
AEEZP
AENEX
AEQDE
AIWBW
AJBDE
ALMA_UNASSIGNED_HOLDINGS
ALUQN
AVUZU
CS3
DU5
EBS
ESX
GROUPED_DOAJ
HZ~
IAO
IFIPE
IPLJI
ITC
JAVBF
LAI
MCNEO
MS~
O9-
OCL
OK1
P2P
QWB
RIE
RNS
ROL
RUI
ZL0
4.4
8FE
8FG
AAMMB
AAYXX
ABJCF
AEFGJ
AFFHD
AFKRA
AGXDD
AIDQK
AIDYY
ARAPS
BENPR
BGLVJ
CCPQU
CITATION
EJD
HCIFZ
IDLOA
K1G
L6V
M43
M7S
P62
PHGZM
PHGZT
PQGLB
PTHSS
S0W
WIN
ID FETCH-LOGICAL-c4420-fa006b914db462343bb7fdc46338aaad7fc059fc902e01156130f8670023f3213
IEDL.DBID 24P
ISICitedReferencesCount 1
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=001075788700001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 1751-9659
IngestDate Fri Oct 03 12:38:13 EDT 2025
Wed Nov 05 20:21:30 EST 2025
Tue Nov 18 22:12:25 EST 2025
Wed Jan 22 16:14:57 EST 2025
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 2
Language English
License: Attribution-NonCommercial-NoDerivs
Open access: https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fipr2.12950