Rethinking Transformers for Semantic Segmentation of Remote Sensing Images
| Published in: | IEEE Transactions on Geoscience and Remote Sensing, Volume 61, pp. 1–15 |
|---|---|
| Main Authors: | Liu, Yuheng; Wang, Ye; Zhang, Yifan; Mei, Shaohui |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2023 |
| ISSN: | 0196-2892, 1558-0644 |
| Online Access: | Get full text |
| Abstract | Transformers have been widely applied in image processing tasks as a substitute for convolutional neural networks (CNNs) for feature extraction, owing to their superiority in global context modeling and flexibility in model generalization. However, existing transformer-based methods for semantic segmentation of remote sensing (RS) images still have several limitations, which can be summarized in two main aspects: 1) the transformer encoder is generally combined with a CNN-based decoder, leading to inconsistency in feature representations; and 2) the strategies for exploiting global and local context information are not sufficiently effective. Therefore, in this article, a global-local transformer segmentor (GLOTS) framework is proposed for the semantic segmentation of RS images, which acquires consistent feature representations by adopting transformers for both encoding and decoding: a masked image modeling (MIM) pretrained transformer encoder learns semantic-rich representations of input images, and a multiscale global-local transformer decoder is designed to fully exploit global and local features. Specifically, the transformer decoder uses a feature separation-aggregation module (FSAM) to adequately utilize features at different scales and adopts a global-local attention module (GLAM), containing a global attention block (GAB) and a local attention block (LAB), to capture global and local context information, respectively. Furthermore, a learnable progressive upsampling strategy (LPUS) is proposed to restore resolution progressively, which can flexibly recover fine-grained details in the upsampling process. Experimental results on three benchmark RS datasets demonstrate that the proposed GLOTS achieves better performance than some state-of-the-art methods, and the superiority of the proposed framework is further verified by ablation studies. 
The code will be available at https://github.com/lyhnsn/GLOTS . |
|---|---|
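The abstract's central idea is pairing full (global) self-attention with windowed (local) self-attention and fusing the two. The following NumPy sketch illustrates that split in the simplest possible form; the window size, single-head attention, and additive fusion are illustrative assumptions, not the authors' GAB/LAB implementation (see the linked repository for the real code).

```python
# Minimal sketch of a global/local attention split, assuming single-head
# attention, non-overlapping windows, and additive fusion (all hypothetical
# simplifications of the GAB/LAB design described in the abstract).
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over the token axis.
    scale = q.shape[-1] ** -0.5
    return softmax(q @ k.swapaxes(-1, -2) * scale) @ v

def global_attention(tokens):
    # Global block: every token attends to every other token.
    return attention(tokens, tokens, tokens)

def local_attention(tokens, window=4):
    # Local block: tokens attend only within non-overlapping windows,
    # capturing fine-grained context at lower cost.
    n, _ = tokens.shape
    out = np.empty_like(tokens)
    for start in range(0, n, window):
        w = tokens[start:start + window]
        out[start:start + window] = attention(w, w, w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))                   # 16 tokens, 8 channels
fused = global_attention(x) + local_attention(x)   # naive fusion of the two
print(fused.shape)
```

The point of the sketch is the complexity trade-off: global attention costs O(n²) in the token count, while windowed attention costs O(n·w), which is why transformer decoders typically combine the two rather than rely on either alone.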
| Author | Liu, Yuheng; Wang, Ye; Zhang, Yifan; Mei, Shaohui |
| Author Details | 1. Liu, Yuheng (ORCID: 0000-0001-5007-8533; hnlyh@mail.nwpu.edu.cn), School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China. 2. Zhang, Yifan (ORCID: 0000-0003-4533-3880; yifanzhang@nwpu.edu.cn), School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China. 3. Wang, Ye (ORCID: 0009-0009-5689-8271; wy2017263322@mail.nwpu.edu.cn), School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China. 4. Mei, Shaohui (ORCID: 0000-0002-8018-596X; meish@nwpu.edu.cn), School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China |
| CODEN | IGRSD2 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
| DOI | 10.1109/TGRS.2023.3302024 |
| Discipline | Engineering Physics |
| EISSN | 1558-0644 |
| EndPage | 15 |
| ExternalDocumentID | 10_1109_TGRS_2023_3302024 10209224 |
| Genre | orig-research |
| Grant Information | National Natural Science Foundation of China, Grant 62171381 (funder ID: 10.13039/501100001809) |
| ISICitedReferencesCount | 86 |
| ISSN | 0196-2892 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| PQID | 2851333892 |
| PQPubID | 85465 |
| PageCount | 15 |
| PublicationPlace | New York |
| PublicationTitle | IEEE transactions on geoscience and remote sensing |
| PublicationTitleAbbrev | TGRS |
| PublicationYear | 2023 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| StartPage | 1 |
| SubjectTerms | Ablation; Aggregation; Artificial neural networks; Coders; Context; Current transformers; Decoding; Encoder–decoder structure; Feature extraction; global-local transformer; Image acquisition; Image processing; Image segmentation; Information processing; Modelling; Modules; Neural networks; Remote sensing; remote sensing (RS); Representations; Semantic segmentation; Semantics; Task analysis; Visualization |
| Title | Rethinking Transformers for Semantic Segmentation of Remote Sensing Images |
| URI | https://ieeexplore.ieee.org/document/10209224 https://www.proquest.com/docview/2851333892 |
| Volume | 61 |