Medical Image Segmentation Review: The Success of U-Net
| Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 46, No. 12, pp. 10076-10095 |
|---|---|
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 01.12.2024 |
| ISSN: | 0162-8828, 1939-3539, 2160-9292 |
| Online access: | Full text |
| Abstract | Automatic medical image segmentation is a crucial topic in the medical domain and, consequently, a critical component of the computer-aided diagnosis paradigm. U-Net is the most widespread image segmentation architecture due to its flexibility, optimized modular design, and success across all medical image modalities. Over the years, the U-Net model has received tremendous attention from academic and industrial researchers who have extended it to address the scale and complexity of medical tasks. These extensions commonly enhance the U-Net's backbone, bottleneck, or skip connections, incorporate representation learning, combine it with a Transformer architecture, or address probabilistic prediction of the segmentation map. Having a compendium of previously proposed U-Net variants makes it easier for machine learning researchers to identify relevant research questions and understand the challenges of the biological tasks the model must address. In this work, we discuss the practical aspects of the U-Net model and organize each variant into a taxonomy. Moreover, to measure the performance of these strategies in a clinical application, we propose fair evaluations of some unique and famous designs on well-known datasets. Furthermore, we provide a comprehensive implementation library with trained models. In addition, for ease of future studies, we created an online list of U-Net papers with their possible official implementations. |
|---|---|
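The abstract repeatedly refers to the skip connections that the surveyed U-Net variants modify: the links that carry encoder feature maps directly to the decoder at matching resolutions. As a shape-level illustration only (not code from the paper; the pooling, upsampling, and function names are generic assumptions), a minimal NumPy sketch of one encoder/decoder stage with a channel-wise skip concatenation:

```python
import numpy as np

def downsample(x):
    # 2x2 max pooling over a (C, H, W) feature map
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def upsample(x):
    # nearest-neighbour 2x upsampling back to the encoder resolution
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_skip_demo(x):
    """One U-Net stage at the shape level: encode, bottleneck,
    decode, then concatenate the skip connection along channels."""
    enc = x                        # encoder feature map, saved for the skip
    bottleneck = downsample(enc)   # spatial resolution halves
    dec = upsample(bottleneck)     # restored to encoder resolution
    # skip connection: channel-wise concatenation, the hallmark of U-Net
    return np.concatenate([enc, dec], axis=0)

x = np.random.rand(16, 64, 64)     # (channels, height, width)
out = unet_skip_demo(x)
print(out.shape)                   # (32, 64, 64): channels doubled by the skip
```

The channel count doubles after the concatenation, which is why the decoder convolutions of a real U-Net accept 2C input channels at each resolution; many of the variants catalogued in this review replace exactly this concatenation step with attention gates or other fusion schemes.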
| Authors | Reza Azad (Faculty of Electrical Engineering, Information Technology, RWTH Aachen University, Aachen, Germany); Ehsan Khodapanah Aghdam (Independent Researcher, Tabriz, Iran); Amelie Rauland (Faculty of Electrical Engineering, Information Technology, RWTH Aachen University, Aachen, Germany); Yiwei Jia (Faculty of Electrical Engineering, Information Technology, RWTH Aachen University, Aachen, Germany); Atlas Haddadi Avval (School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran); Afshin Bozorgpour (Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany); Sanaz Karimijafarbigloo (Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany); Joseph Paul Cohen (Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Palo Alto, CA, USA); Ehsan Adeli (Stanford University, Stanford, CA, USA); Dorit Merhof (Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; dorit.merhof@ur.de) |
| CODEN | ITPIDJ |
| ContentType | Journal Article |
| DOI | 10.1109/TPAMI.2024.3435571 |
| Discipline | Engineering; Computer Science |
| EISSN | 2160-9292 1939-3539 |
| EndPage | 10095 |
| Genre | orig-research Journal Article Review |
| ISICitedReferencesCount | 268 |
| ISSN | 0162-8828 1939-3539 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 12 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-5824-8821 0000-0002-6632-6121 0000-0002-1334-3059 0000-0002-4772-2161 0000-0002-1672-2185 0000-0002-3896-7810 0000-0002-2849-1070 0000-0002-8095-2073 0000-0003-1857-1058 0000-0002-0579-7763 |
| PMID | 39167505 |
| PageCount | 20 |
| PublicationDate | 2024-12-01 |
| PublicationPlace | United States |
| PublicationTitle | IEEE transactions on pattern analysis and machine intelligence |
| PublicationTitleAbbrev | TPAMI |
| PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
| PublicationYear | 2024 |
| Publisher | IEEE |
| SecondaryResourceType | review_article |
| StartPage | 10076 |
| SubjectTerms | Biomedical imaging; Computer architecture; Convolutional neural network; deep learning; Feature extraction; Humans; Image Interpretation, Computer-Assisted - methods; Image Processing, Computer-Assisted - methods; Image segmentation; Machine Learning; medical image segmentation; Neural Networks, Computer; Task analysis; Taxonomy; transformer; Transformers; U-Net |
| Title | Medical Image Segmentation Review: The Success of U-Net |
| URI | https://ieeexplore.ieee.org/document/10643318 https://www.ncbi.nlm.nih.gov/pubmed/39167505 https://www.proquest.com/docview/3095676483 |
| Volume | 46 |
| DOI | 10.1109/TPAMI.2024.3435571 |
| Authors | Azad, Reza; Aghdam, Ehsan Khodapanah; Rauland, Amelie; Jia, Yiwei |
| EndPage | 10095 |