Unsupervised domain adaptation for medical imaging segmentation with self-ensembling
| Published in: | NeuroImage (Orlando, Fla.), Volume 194, pp. 1-11 |
|---|---|
| Main authors: | Christian S. Perone, Pedro Ballester, Rodrigo C. Barros, Julien Cohen-Adad |
| Medium: | Journal Article |
| Language: | English |
| Published: | United States: Elsevier Inc., 01.07.2019 |
| ISSN: | 1053-8119, 1095-9572 |
| DOI: | 10.1016/j.neuroimage.2019.03.026 |
| Summary: | Recent advances in deep learning methods have redefined the state of the art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even within the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly available magnetic resonance imaging (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data. |

Highlights:
• Deep learning models suffer from poor generalization when applied to data from other centers.
• Unsupervised domain adaptation can mitigate this issue.
• The self-ensembling technique achieves better performance even with a small amount of training data.
• An ablation study demonstrates that unlabeled data provides significant improvements.
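For readers unfamiliar with self-ensembling, the sketch below illustrates the mean-teacher training loop the summary refers to: a teacher network tracks an exponential moving average (EMA) of the student's weights, and a consistency loss on unlabeled target-domain images is added to the supervised segmentation loss on the labeled source domain. This is a minimal PyTorch-style sketch, not the authors' implementation; `SegNet`, `alpha`, `lambda_cons`, and the data handling are illustrative assumptions, and the per-branch input augmentations used in practice are omitted for brevity.

```python
# Minimal self-ensembling (mean-teacher) sketch for unsupervised domain
# adaptation in segmentation. Illustrative only: names and hyperparameters
# are assumptions, not the paper's exact configuration.
import copy
import torch
import torch.nn.functional as F

def make_teacher(student: torch.nn.Module) -> torch.nn.Module:
    """The teacher starts as a frozen copy of the student."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)  # updated only via EMA, never by gradients
    return teacher

@torch.no_grad()
def ema_update(student, teacher, alpha=0.99):
    """teacher <- alpha * teacher + (1 - alpha) * student."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def train_step(student, teacher, optimizer,
               src_img, src_mask, tgt_img, lambda_cons=1.0):
    # Supervised segmentation loss on the labeled source domain.
    sup_loss = F.cross_entropy(student(src_img), src_mask)

    # Consistency loss on unlabeled target-domain images: the student's
    # predictions must agree with the EMA teacher's predictions. (In
    # practice each branch sees a differently augmented view.)
    with torch.no_grad():
        teacher_prob = F.softmax(teacher(tgt_img), dim=1)
    student_prob = F.softmax(student(tgt_img), dim=1)
    cons_loss = F.mse_loss(student_prob, teacher_prob)

    loss = sup_loss + lambda_cons * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(student, teacher)  # teacher slowly follows the student
    return float(loss)
```

Note that the consistency term requires no target-domain labels, which is what makes the scheme unsupervised on the target side and is consistent with the ablation finding above that unlabeled data alone provides significant improvements.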