Unsupervised domain adaptation for medical imaging segmentation with self-ensembling

Bibliographic Details
Published in: NeuroImage (Orlando, Fla.), Vol. 194, pp. 1-11
Main Authors: Perone, Christian S., Ballester, Pedro, Barros, Rodrigo C., Cohen-Adad, Julien
Format: Journal Article
Language:English
Published: United States: Elsevier Inc, 01.07.2019
ISSN: 1053-8119, 1095-9572
Description
Summary: Recent advances in deep learning methods have redefined the state-of-the-art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even within the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling to the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly available magnetic resonance imaging (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data.
Highlights:
•Deep learning models suffer from poor generalization when applied to data from other centers.
•Unsupervised domain adaptation can mitigate this issue.
•Here we show that the self-ensembling technique yields better performance even with a small amount of training data.
•An ablation study demonstrates that unlabeled data provides significant improvements.
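The self-ensembling approach described in the summary is commonly realized with a mean-teacher setup: a teacher model tracks an exponential moving average (EMA) of the student's weights, and a consistency loss on unlabeled target-domain inputs pulls the student toward the teacher's predictions. The sketch below illustrates that idea in minimal form; the function names, `alpha`, and `lam` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Update teacher weights as an exponential moving average (EMA)
    of the student weights. Higher alpha = slower-moving teacher."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def consistency_loss(student_pred, teacher_pred):
    """Mean squared error between student and teacher predictions on
    unlabeled target-domain inputs; no target labels are required."""
    return float(np.mean((student_pred - teacher_pred) ** 2))

def total_loss(supervised_loss, student_pred, teacher_pred, lam=1.0):
    """Hypothetical combined objective: supervised loss on labeled
    source data plus a weighted consistency term on unlabeled data."""
    return supervised_loss + lam * consistency_loss(student_pred, teacher_pred)
```

In a training loop one would update the student by gradient descent on `total_loss`, then refresh the teacher via `ema_update` after each step, so the teacher provides increasingly stable targets for the unlabeled data.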
DOI:10.1016/j.neuroimage.2019.03.026