TransAttUnet: Multi-Level Attention-Guided U-Net With Transformer for Medical Image Segmentation
| Published in: | IEEE Transactions on Emerging Topics in Computational Intelligence, Vol. 8, No. 1, pp. 55-68 |
|---|---|
| Main Authors: | Chen, Bingzhi; Liu, Yishu; Zhang, Zheng; Lu, Guangming; Kong, Adams Wai Kin |
| Format: | Journal Article |
| Language: | English |
| Published: | Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.02.2024 |
| Subjects: | Biomedical image processing; Computer architecture; Convolution; Convolutional neural networks; Decoding; Encoders-Decoders; Image quality; Image segmentation; Medical diagnostic imaging; Medical image segmentation; Medical imaging; multi-level guided attention; multi-scale skip connection; Semantics; Transformers |
| ISSN: | 2471-285X |
| Online Access: | Get full text |
| Abstract | Accurate segmentation of organs or lesions from medical images is crucial for reliable diagnosis of diseases and organ morphometry. In recent years, convolutional encoder-decoder solutions have achieved substantial progress in the field of automatic medical image segmentation. Due to the inherent bias in the convolution operations, prior models mainly focus on local visual cues formed by the neighboring pixels, but fail to fully model the long-range contextual dependencies. In this article, we propose a novel Transformer-based Attention Guided Network called TransAttUnet , in which the multi-level guided attention and multi-scale skip connection are designed to jointly enhance the performance of the semantical segmentation architecture. Inspired by Transformer, the self-aware attention (SAA) module with Transformer Self Attention (TSA) and Global Spatial Attention (GSA) is incorporated into TransAttUnet to effectively learn the non-local interactions among encoder features. Moreover, we also use additional multi-scale skip connections between decoder blocks to aggregate the upsampled features with different semantic scales. In this way, the representation ability of multi-scale context information is strengthened to generate discriminative features. Benefitting from these complementary components, the proposed TransAttUnet can effectively alleviate the loss of fine details caused by the stacking of convolution layers and the consecutive sampling operations, finally improving the segmentation quality of medical images. Extensive experiments were conducted on multiple medical image segmentation datasets from various imaging modalities, which demonstrate that the proposed method consistently outperforms the existing state-of-the-art methods. |
|---|---|
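The abstract describes the core of the SAA module's Transformer Self Attention (TSA): an encoder feature map is flattened into a sequence of pixel tokens so that every spatial position can attend to every other, recovering the long-range dependencies that stacked convolutions miss. Below is a minimal NumPy sketch of that idea; the single-head formulation, shapes, and random projection matrices are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def transformer_self_attention(feat, w_q, w_k, w_v):
    """Single-head self-attention over an (H, W, C) feature map."""
    h, w, c = feat.shape
    tokens = feat.reshape(h * w, c)              # flatten spatial grid to pixel tokens
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])      # scaled dot-product similarity
    scores -= scores.max(axis=-1, keepdims=True) # numerical stability for softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)     # each position attends to all others
    out = attn @ v                               # aggregate global context per position
    return out.reshape(h, w, -1)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 16))           # toy encoder feature map
w_q, w_k, w_v = (rng.standard_normal((16, 16)) * 0.1 for _ in range(3))
out = transformer_self_attention(feat, w_q, w_k, w_v)
print(out.shape)  # (8, 8, 16)
```

In TransAttUnet this globally attended output would be combined with the Global Spatial Attention (GSA) branch and fed through the multi-scale skip connections; the sketch covers only the attention computation itself.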
| Author | Liu, Yishu; Zhang, Zheng; Lu, Guangming; Chen, Bingzhi; Kong, Adams Wai Kin |
| Author_xml | – sequence: 1 givenname: Bingzhi orcidid: 0000-0002-2497-6214 surname: Chen fullname: Chen, Bingzhi email: chenbingzhi@m.scnu.edu.cn organization: School of Software, South China Normal University, Foshan, Guangdong, China – sequence: 2 givenname: Yishu orcidid: 0000-0003-0465-5381 surname: Liu fullname: Liu, Yishu email: liuyishu@stu.hit.edu.cn organization: Shenzhen Medical Biometrics Perception and Analysis Engineering Laboratory, Harbin Institute of Technology, Shenzhen, China – sequence: 3 givenname: Zheng orcidid: 0000-0003-1470-6998 surname: Zhang fullname: Zhang, Zheng email: zhengzhang@hit.edu.cn organization: Shenzhen Medical Biometrics Perception and Analysis Engineering Laboratory, Harbin Institute of Technology, Shenzhen, China – sequence: 4 givenname: Guangming orcidid: 0000-0003-1578-2634 surname: Lu fullname: Lu, Guangming email: luguangm@hit.edu.cn organization: Shenzhen Medical Biometrics Perception and Analysis Engineering Laboratory, Harbin Institute of Technology, Shenzhen, China – sequence: 5 givenname: Adams Wai Kin orcidid: 0000-0002-9728-9511 surname: Kong fullname: Kong, Adams Wai Kin email: adamskon@ntu.edu.sg organization: School of Computer Science and Engineering, Nanyang Technological University, Singapore |
| CODEN | ITETCU |
| CitedBy_id | crossref_primary_10_1109_TGRS_2024_3435963 crossref_primary_10_1109_JBHI_2024_3390241 crossref_primary_10_3390_s24144749 crossref_primary_10_1177_20552076241291306 crossref_primary_10_1016_j_visinf_2025_100268 crossref_primary_10_1007_s13042_024_02519_3 crossref_primary_10_1080_0954898X_2024_2323530 crossref_primary_10_1016_j_bspc_2024_106726 crossref_primary_10_1007_s12559_024_10304_1 crossref_primary_10_3390_jimaging10100239 crossref_primary_10_1007_s10489_025_06629_5 crossref_primary_10_1007_s11042_025_21003_w crossref_primary_10_1109_JSTARS_2025_3549678 crossref_primary_10_1016_j_compbiomed_2024_108671 crossref_primary_10_1049_ipr2_70203 crossref_primary_10_1016_j_bspc_2025_107931 crossref_primary_10_3390_a17040168 crossref_primary_10_3390_math12223580 crossref_primary_10_1002_ima_70207 crossref_primary_10_3390_app14104233 crossref_primary_10_1002_ima_70167 crossref_primary_10_1109_ACCESS_2024_3353142 crossref_primary_10_1016_j_neucom_2025_131413 crossref_primary_10_1002_ima_23160 crossref_primary_10_1016_j_eswa_2025_128409 crossref_primary_10_1109_ACCESS_2024_3433612 crossref_primary_10_1016_j_eswa_2025_128801 crossref_primary_10_1016_j_cageo_2025_105999 crossref_primary_10_1016_j_compbiomed_2024_109259 crossref_primary_10_3390_s24031005 crossref_primary_10_1007_s00371_025_03874_0 crossref_primary_10_3390_diagnostics14020191 crossref_primary_10_1016_j_bspc_2025_107722 crossref_primary_10_1109_TMI_2024_3362879 crossref_primary_10_1007_s11517_024_03278_7 crossref_primary_10_1051_itmconf_20257601004 crossref_primary_10_1016_j_jksuci_2024_102218 crossref_primary_10_1109_ACCESS_2024_3365048 crossref_primary_10_1109_TETCI_2025_3529896 crossref_primary_10_1109_TSMC_2025_3539573 crossref_primary_10_3390_a18090551 crossref_primary_10_3390_electronics13173501 crossref_primary_10_1109_RBME_2023_3297604 crossref_primary_10_1002_ima_70064 crossref_primary_10_1007_s10489_024_05900_5 crossref_primary_10_3390_rs17183253 
crossref_primary_10_1109_JBHI_2024_3426074 crossref_primary_10_1002_ima_70107 crossref_primary_10_1088_2631_8695_adebe0 crossref_primary_10_1117_1_JEI_33_1_013049 crossref_primary_10_1109_JBHI_2024_3504829 crossref_primary_10_1109_LSP_2025_3600374 crossref_primary_10_1002_ima_70103 crossref_primary_10_1007_s00371_024_03722_7 crossref_primary_10_1109_ACCESS_2024_3494241 crossref_primary_10_1007_s11517_024_03192_y crossref_primary_10_1109_JBHI_2024_3468904 crossref_primary_10_1145_3759254 crossref_primary_10_1016_j_neunet_2024_106489 crossref_primary_10_1109_MCI_2025_3564274 crossref_primary_10_1049_ipr2_13103 crossref_primary_10_1109_TIP_2025_3602739 crossref_primary_10_1016_j_bspc_2025_108439 crossref_primary_10_1038_s41598_025_02714_4 crossref_primary_10_1038_s41598_025_92715_0 crossref_primary_10_1016_j_cviu_2025_104471 crossref_primary_10_1080_21681163_2024_2387458 crossref_primary_10_1109_TIM_2024_3406804 crossref_primary_10_1186_s40537_025_01246_y crossref_primary_10_1007_s40747_024_01574_1 crossref_primary_10_1016_j_neunet_2025_107943 crossref_primary_10_1080_19393555_2025_2547206 crossref_primary_10_1186_s13636_024_00368_0 crossref_primary_10_1007_s13369_025_10443_z crossref_primary_10_1109_ACCESS_2024_3463713 crossref_primary_10_1016_j_jfoodeng_2024_112338 crossref_primary_10_1002_ima_70086 crossref_primary_10_1038_s41598_024_81703_5 crossref_primary_10_1137_23M1577663 crossref_primary_10_1109_TETCI_2025_3547635 crossref_primary_10_3390_agronomy15071568 crossref_primary_10_1109_JSEN_2025_3553904 crossref_primary_10_1109_ACCESS_2025_3563375 crossref_primary_10_3390_bioengineering11060575 crossref_primary_10_1016_j_eswa_2025_127637 crossref_primary_10_3390_app14031293 crossref_primary_10_3390_electronics13234594 crossref_primary_10_1016_j_compbiomed_2024_108005 crossref_primary_10_1007_s10489_025_06387_4 crossref_primary_10_1007_s10278_024_01116_8 crossref_primary_10_1186_s12886_024_03376_y crossref_primary_10_1007_s12559_024_10264_6 
crossref_primary_10_3389_fmed_2025_1542737 crossref_primary_10_1109_TIM_2024_3370816 crossref_primary_10_1109_TGRS_2024_3421899 crossref_primary_10_1177_14727978251366513 crossref_primary_10_3390_sym17040531 crossref_primary_10_1016_j_compeleceng_2025_110099 crossref_primary_10_1186_s12903_024_04193_x crossref_primary_10_3390_app14156765 crossref_primary_10_1016_j_jbi_2025_104827 crossref_primary_10_1109_ACCESS_2025_3592229 crossref_primary_10_1016_j_patcog_2025_112126 crossref_primary_10_1109_JBHI_2022_3184930 crossref_primary_10_1007_s11227_025_07313_8 crossref_primary_10_1016_j_engappai_2025_112085 crossref_primary_10_1109_TNSRE_2024_3442788 crossref_primary_10_1080_17538947_2024_2392845 crossref_primary_10_1109_TIM_2024_3476601 crossref_primary_10_1109_TETCI_2024_3449924 crossref_primary_10_1587_transinf_2024EDP7059 crossref_primary_10_1007_s42979_025_03799_4 crossref_primary_10_1109_TETCI_2024_3500025 crossref_primary_10_1038_s41598_025_92010_y crossref_primary_10_1038_s41598_025_87851_6 crossref_primary_10_1109_JBHI_2024_3523492 |
| Cites_doi | 10.1007/978-3-030-87193-2_4 10.1007/978-3-030-59719-1_36 10.1109/TNNLS.2022.3159394 10.1007/978-3-030-46640-4_25 10.1109/TETCI.2022.3174868 10.1117/12.2628519 10.48550/arXiv.2010.11929 10.1007/978-3-030-87193-2_31 10.3978/j.issn.2223-4292.2014.11.20 10.1007/978-3-030-87193-2_2 10.1109/TETCI.2021.3136587 10.48550/arXiv.1802.00368 10.1109/CVPR46437.2021.00681 10.3389/fbioe.2020.00670 10.1109/NAECON.2018.8556686 10.1109/ISM46123.2019.00049 10.1109/TBME.2018.2866166 10.24963/ijcai.2018/614 10.1016/j.isprsjprs.2020.01.013 10.1016/j.patcog.2019.107152 10.1016/j.patcog.2020.107404 10.3389/fgene.2019.01110 10.1609/aaai.v35i6.16614 10.1109/CVPR.2015.7298965 10.2214/ajr.174.1.1740071 10.1109/CBMS49503.2020.00111 10.1007/978-3-031-25066-8_9 10.1109/TETCI.2021.3132382 10.5114/pjr.2022.119027 10.1007/978-3-319-24574-4_28 10.1007/978-3-030-58452-8_13 10.15439/2020F175 10.1109/ICCVW.2019.00052 10.1109/TBME.2017.2734058 10.1007/978-3-030-00889-5_1 10.1038/s41592-019-0612-7 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| Copyright_xml | – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| DBID | 97E RIA RIE AAYXX CITATION 7SP 8FD L7M |
| DOI | 10.1109/TETCI.2023.3309626 |
| DatabaseName | IEEE Xplore (IEEE) IEEE All-Society Periodicals Package (ASPP) 1998-Present IEEE Electronic Library (IEL) CrossRef Electronics & Communications Abstracts Technology Research Database Advanced Technologies Database with Aerospace |
| DatabaseTitle | CrossRef Technology Research Database Advanced Technologies Database with Aerospace Electronics & Communications Abstracts |
| DatabaseTitleList | Technology Research Database |
| Database_xml | – sequence: 1 dbid: RIE name: IEEE Xplore Digital Library url: https://ieeexplore.ieee.org/ sourceTypes: Publisher |
| DeliveryMethod | fulltext_linktorsrc |
| EISSN | 2471-285X |
| EndPage | 68 |
| ExternalDocumentID | 10_1109_TETCI_2023_3309626 10244199 |
| Genre | orig-research |
| GrantInformation_xml | – fundername: Shenzhen Science and Technology Program grantid: RCYX20221008092852077 – fundername: Basic and Applied Basic Research Foundation of Guangdong Province; Guangdong Basic and Applied Basic Research Foundation grantid: 2023A1515010057 funderid: 10.13039/501100021171 – fundername: National Natural Science Foundation of China; NSFC grantid: 62176077; 62302172 funderid: 10.13039/501100001809 – fundername: Shenzhen Key Technical Project grantid: 2022N001 – fundername: Shenzhen Fundamental Research and Discipline Layout project; Shenzhen Fundamental Research Fund grantid: JCYJ20210324132210025 funderid: 10.13039/501100012271 – fundername: Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies grantid: 2022B1212010005 |
| GroupedDBID | 0R~ 97E AAJGR AASAJ AAWTH ABAZT ABJNI ABQJQ ABVLG ACGFS AGQYO AHBIQ AKJIK AKQYR ALMA_UNASSIGNED_HOLDINGS ATWAV BEFXN BFFAM BGNUA BKEBE BPEOZ EBS EJD IFIPE JAVBF OCL RIA RIE AAYXX CITATION 7SP 8FD L7M |
| ID | FETCH-LOGICAL-c296t-8a7029db42285ce6a172cb3ce18ec18b5fcd1b8482f30e85c879e83cfc24dfe23 |
| IEDL.DBID | RIE |
| ISICitedReferencesCount | 206 |
| ISICitedReferencesURI | http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=001068988200001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D |
| ISSN | 2471-285X |
| IngestDate | Sun Jun 29 16:32:56 EDT 2025 Tue Nov 18 21:19:29 EST 2025 Sat Nov 29 05:12:09 EST 2025 Wed Aug 27 03:03:19 EDT 2025 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 1 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| LinkModel | DirectLink |
| Notes | ObjectType-Article-1 SourceType-Scholarly Journals-1 ObjectType-Feature-2 content type line 14 |
| ORCID | 0000-0003-1470-6998 0000-0002-9728-9511 0000-0003-0465-5381 0000-0003-1578-2634 0000-0002-2497-6214 |
| PQID | 2918030461 |
| PQPubID | 4437216 |
| PageCount | 14 |
| ParticipantIDs | ieee_primary_10244199 proquest_journals_2918030461 crossref_citationtrail_10_1109_TETCI_2023_3309626 crossref_primary_10_1109_TETCI_2023_3309626 |
| PublicationCentury | 2000 |
| PublicationDate | 2024-02-01 |
| PublicationDateYYYYMMDD | 2024-02-01 |
| PublicationDate_xml | – month: 02 year: 2024 text: 2024-02-01 day: 01 |
| PublicationDecade | 2020 |
| PublicationPlace | Piscataway |
| PublicationPlace_xml | – name: Piscataway |
| PublicationTitle | IEEE transactions on emerging topics in computational intelligence |
| PublicationTitleAbbrev | TETCI |
| PublicationYear | 2024 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| References | ref35 ref34 ref15 ref37 ref36 Çiçek (ref22) 2016 ref30 ref11 ref33 ref32 ref2 ref1 ref17 ref39 ref16 ref38 ref19 ref18 Luo (ref7) 2016 ref24 Parmar (ref13) 2018 ref26 ref25 ref20 Tang (ref23) 2019 Codella (ref31) 2019 ref42 ref41 ref44 ref21 ref43 Oktay (ref10) 2019; 53 ref28 ref27 Chen (ref14) 2021 ref29 ref8 Jaderberg (ref12) 2015 ref9 ref4 ref3 ref6 ref5 ref40 Xie (ref45) 2021 |
| References_xml | – ident: ref15 doi: 10.1007/978-3-030-87193-2_4 – ident: ref37 doi: 10.1007/978-3-030-59719-1_36 – ident: ref39 doi: 10.1109/TNNLS.2022.3159394 – ident: ref21 doi: 10.1007/978-3-030-46640-4_25 – ident: ref2 doi: 10.1109/TETCI.2022.3174868 – year: 2021 ident: ref14 article-title: TransUNet: Transformers make strong encoders for medical image segmentation – ident: ref11 doi: 10.1117/12.2628519 – start-page: 424 volume-title: Proc. 19th Int. Conf. Med. Image Comput. Comput.- Assist. Interv. year: 2016 ident: ref22 article-title: 3D U-Net: Learning dense volumetric segmentation from sparse annotation – ident: ref26 doi: 10.48550/arXiv.2010.11929 – ident: ref30 doi: 10.1007/978-3-030-87193-2_31 – ident: ref33 doi: 10.3978/j.issn.2223-4292.2014.11.20 – ident: ref29 doi: 10.1007/978-3-030-87193-2_2 – ident: ref3 doi: 10.1109/TETCI.2021.3136587 – volume: 53 start-page: 197 issue: 2 year: 2019 ident: ref10 article-title: Attention U-net: Learning where to look for the pancreas publication-title: Med. Image Anal. – ident: ref40 doi: 10.48550/arXiv.1802.00368 – ident: ref28 doi: 10.1109/CVPR46437.2021.00681 – ident: ref20 doi: 10.3389/fbioe.2020.00670 – ident: ref38 doi: 10.1109/NAECON.2018.8556686 – ident: ref42 doi: 10.1109/ISM46123.2019.00049 – ident: ref17 doi: 10.1109/TBME.2018.2866166 – ident: ref16 doi: 10.24963/ijcai.2018/614 – ident: ref41 doi: 10.1016/j.isprsjprs.2020.01.013 – start-page: 2017 volume-title: Proc. 28th Int. Conf. Neural Inf. Process. Syst. year: 2015 ident: ref12 article-title: Spatial transformer networks – ident: ref4 doi: 10.1016/j.patcog.2019.107152 – start-page: 4905 volume-title: Proc. Int. Conf. Neural Inf. Process. Syst. year: 2016 ident: ref7 article-title: Understanding the effective receptive field in deep convolutional neural networks – start-page: 4055 volume-title: Proc. Int. Conf. Mach. Learn. 
year: 2018 ident: ref13 article-title: Image transformer – ident: ref25 doi: 10.1016/j.patcog.2020.107404 – ident: ref8 doi: 10.3389/fgene.2019.01110 – start-page: 168 volume-title: Proc. Int. Symp. Biomed. Imag. year: 2019 ident: ref31 article-title: Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC) – ident: ref34 doi: 10.1609/aaai.v35i6.16614 – ident: ref5 doi: 10.1109/CVPR.2015.7298965 – ident: ref32 doi: 10.2214/ajr.174.1.1740071 – ident: ref36 doi: 10.1109/CBMS49503.2020.00111 – ident: ref44 doi: 10.1007/978-3-031-25066-8_9 – ident: ref1 doi: 10.1109/TETCI.2021.3132382 – ident: ref24 doi: 10.5114/pjr.2022.119027 – ident: ref6 doi: 10.1007/978-3-319-24574-4_28 – start-page: 457 volume-title: Proc. Int. Conf. Med. Imag. Deep Learn. year: 2019 ident: ref23 article-title: XLSor: A robust and accurate lung segmentor on chest X-rays using criss-cross attention and customized radiorealistic abnormalities generation – start-page: 12077 volume-title: Proc. Adv. Neural Inf. Process. Syst. year: 2021 ident: ref45 article-title: Segformer: Simple and efficient design for semantic segmentation with transformers – ident: ref27 doi: 10.1007/978-3-030-58452-8_13 – ident: ref19 doi: 10.15439/2020F175 – ident: ref43 doi: 10.1109/ICCVW.2019.00052 – ident: ref18 doi: 10.1109/TBME.2017.2734058 – ident: ref9 doi: 10.1007/978-3-030-00889-5_1 – ident: ref35 doi: 10.1038/s41592-019-0612-7 |
| SSID | ssj0002951354 |
| SourceID | proquest crossref ieee |
| SourceType | Aggregation Database Enrichment Source Index Database Publisher |
| StartPage | 55 |
| SubjectTerms | Biomedical image processing Computer architecture Convolution Convolutional neural networks Decoding Encoders-Decoders Image quality Image segmentation Medical diagnostic imaging Medical image segmentation Medical imaging multi-level guided attention multi-scale skip connection Semantics transformer Transformers |
| Title | TransAttUnet: Multi-Level Attention-Guided U-Net With Transformer for Medical Image Segmentation |
| URI | https://ieeexplore.ieee.org/document/10244199 https://www.proquest.com/docview/2918030461 |
| Volume | 8 |
| WOSCitedRecordID | wos001068988200001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D |
| hasFullText | 1 |
| inHoldings | 1 |
| journalDatabaseRights | – providerCode: PRVIEE databaseName: IEEE Xplore Digital Library customDbUrl: eissn: 2471-285X dateEnd: 99991231 omitProxy: false ssIdentifier: ssj0002951354 issn: 2471-285X databaseCode: RIE dateStart: 20170101 isFulltext: true titleUrlDefault: https://ieeexplore.ieee.org/ providerName: IEEE |
| link | http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwlV3dS8MwEA9u-OCLHzhxOiUPvklmP9I28W2I04EMwQ33VtfLRQeuk63z7zdJWxmIgk8t5VKauyb3S3K_O0IupuBJ9DQyHWeKcUyACT9SzCAT4BIirmPlik0kw6GYTORjRVZ3XBhEdMFn2LW37ixfLWBtt8rMCDfOyJeyQRpJEpdkre8NlcBghTDiNTHGk1ej29HNoGvrg3fNql3GNoHChvNx1VR-TMHOr_T3_vlF-2S3ApC0V1r8gGxhfkhenMvpFcU4x-KaOlYte7DxQNQ8LCMa2d16plDRMRtiQZ9nxRsd1agVl9RcaHVqQwdzM8vQJ3ydV8ykvEXGfdPPe1bVTmAQyLhgYpoYjajMZviKAOOpASqQhYC-QPBFFmlQfia4CHTooRERiUQRgoaAK41BeESa-SLHY0J9SOKM8ygyDo-HWmYauAI1NeANIw-SNvFrpaZQJRa39S3eU7fA8GTqDJFaQ6SVIdrk8rvNR5lW40_pllX9hmSp9Tbp1MZLq6G3SgPpizKP_MkvzU7Jjnk7L2OvO6RZLNd4Rrbhs5itlufur_oCo-nL-w |
| linkProvider | IEEE |
| linkToHtml | http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwlV3dS8MwEA9-gb74gYrTqXnwTTL7kbSJb0P8GM4huKFvdb1cdOCqzM6_3yTtRBAFn1rKhTZ3Te6X5H53hBwNIVAYGGQmyTXjmAKTodDMIhPgCgQ3ifbFJtJeTz48qNuarO65MIjog8-w5W79Wb5-hanbKrMj3DqjUKl5sig4j4KKrvW1pRJZtBALPqPGBOqkf94_67RchfCWXberxKVQ-OZ-fD2VH5Ow9ywXa__8pnWyWkNI2q5svkHmsNgkj97ptMtyUGB5Sj2vlnVdRBC1D6uYRnY5HWnUdMB6WNL7UflM-zPcihNqL7Q-t6GdsZ1n6B0-jWtuUrFFBhe2n1esrp7AIFJJyeQwtRrRucvxJQCToYUqkMeAoUQIZS4M6DCXXEYmDtCKyFShjMFAxLXBKN4mC8VrgTuEhpAmOedCWJfHY6NyA1yDHlr4hiKAtEHCmVIzqFOLuwoXL5lfYgQq84bInCGy2hANcvzV5q1KrPGn9JZT_TfJSusN0pwZL6sH33sWqVBWmeR3f2l2SJav-jfdrNvpXe-RFfsmXkViN8lCOZniPlmCj3L0Pjnwf9gnaWjPQg |
| openUrl | ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=TransAttUnet%3A+Multi-Level+Attention-Guided+U-Net+With+Transformer+for+Medical+Image+Segmentation&rft.jtitle=IEEE+transactions+on+emerging+topics+in+computational+intelligence&rft.au=Chen%2C+Bingzhi&rft.au=Liu%2C+Yishu&rft.au=Zhang%2C+Zheng&rft.au=Lu%2C+Guangming&rft.date=2024-02-01&rft.pub=IEEE&rft.eissn=2471-285X&rft.volume=8&rft.issue=1&rft.spage=55&rft.epage=68&rft_id=info:doi/10.1109%2FTETCI.2023.3309626&rft.externalDocID=10244199 |