VSS-SpatioNet: a multi-scale feature fusion network for multimodal image integrations


Bibliographic details
Published in: Scientific Reports, Vol. 15, No. 1, Article 9306 (20 pages)
First author: Xiang, Zeyu
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 18 March 2025
ISSN: 2045-2322
Online access: Full text
Abstract Infrared and visible image fusion (vis-ir) enhances diagnostic accuracy in medical imaging and biological analysis. Existing CNN-based and Transformer-based methods face computational inefficiencies when modeling global dependencies. The author proposes VSS-SpatioNet, a lightweight architecture that replaces the self-attention of Transformers with a Visual State Space (VSS) module for efficient dependency modeling. The framework employs an asymmetric encoder-decoder with a multi-scale autoencoder and a novel VSS-Spatial (VS) fusion block for local-global feature integration. Evaluations on the TNO, Harvard Medical, and RoadScene datasets demonstrate superior performance. On TNO, VSS-SpatioNet achieves state-of-the-art entropy (En = 7.0058) and mutual information (MI = 14.0116), outperforming 12 benchmark methods. On RoadScene, it attains the best gradient-based fusion performance (Q_G = 0.5712), Piella's metric (Q_S = 0.7926), and average gradient (AG = 5.2994), surpassing prior works. On Harvard Medical, the VS strategy improves mean gradient by 18.7% (0.0224 vs. 0.0198) over FusionGAN, validating enhanced feature preservation. The results confirm the framework's efficacy in medical applications, particularly precise tissue characterization.
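The entropy (En) and average gradient (AG) figures quoted in the abstract are standard no-reference fusion quality metrics. A minimal NumPy sketch of both definitions (an independent illustration of the usual formulas, not the paper's own evaluation code) is:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (En) of an 8-bit grayscale image, in bits.

    Higher En means the fused image carries more information.
    """
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Average gradient (AG): mean local contrast via finite differences."""
    img = img.astype(np.float64)
    gx = img[:, 1:] - img[:, :-1]     # horizontal differences
    gy = img[1:, :] - img[:-1, :]     # vertical differences
    gx, gy = gx[:-1, :], gy[:, :-1]   # crop both to a common shape
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

A flat image scores En = 0 and AG = 0; a half-black, half-white image scores En = 1 bit, so larger values indicate richer, sharper fused output.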
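The efficiency gain the abstract claims for the VSS module over self-attention comes from replacing all-pairs token interactions (O(T^2) in sequence length T) with a linear recurrence scanned once over the sequence (O(T)). A toy scalar-state sketch of such a recurrence (illustrating the general state-space idea, not the paper's actual VSS block) is:

```python
import numpy as np

def ssm_scan(x, a=0.9, b=1.0, c=1.0):
    """Scan h[t] = a*h[t-1] + b*x[t]; emit y[t] = c*h[t].

    One pass over the sequence: O(T) time, O(1) state. Each output
    still depends on the entire history through the decaying state h,
    which is how state-space models capture global dependencies
    without attention's quadratic cost.
    """
    h = 0.0
    y = np.empty(len(x), dtype=np.float64)
    for t, xt in enumerate(x):
        h = a * h + b * xt
        y[t] = c * h
    return y
```

An impulse input decays geometrically through the state: `ssm_scan([1.0, 0.0, 0.0], a=0.5)` returns `[1.0, 0.5, 0.25]`.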
ArticleNumber 9306
Author Xiang, Zeyu
Author_xml – sequence: 1
  givenname: Zeyu
  surname: Xiang
  fullname: Xiang, Zeyu
  email: 221410060124@stu.haust.edu.cn, zeyuxiang@foxmail.com
  organization: College of Information Engineering, Henan University of Science and Technology
BackLink https://www.ncbi.nlm.nih.gov/pubmed/40102490 (view this record in MEDLINE/PubMed)
ContentType Journal Article
Copyright The Author(s) 2025
DOI 10.1038/s41598-025-93143-w
Discipline Biology
EISSN 2045-2322
EndPage 20
ExternalDocumentID oai_doaj_org_article_446b116c527e410690183ebd3803022c
PMC11920090
40102490
10_1038_s41598_025_93143_w
Genre Journal Article
GrantInformation_xml – fundername: Education Department of Henan Province
  grantid: 202410464018
  funderid: http://dx.doi.org/10.13039/501100009101
ISSN 2045-2322
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords Visual state space (VSS) module
Multi-scale feature integration
Medical imaging applications
Lightweight asymmetric encoder-decoder
Infrared and visible image fusion
Language English
License 2025. The Author(s).
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
PMID 40102490
PQID 3178421225
PQPubID 2041939
PageCount 20
PublicationDate 2025-03-18
PublicationPlace London
PublicationTitle Scientific reports
PublicationTitleAbbrev Sci Rep
PublicationTitleAlternate Sci Rep
PublicationYear 2025
Publisher Nature Publishing Group UK
Nature Publishing Group
Nature Portfolio
SSID ssj0000529419
Score 2.450786
Snippet Infrared and visible image fusion (vis-ir) enhances diagnostic accuracy in medical imaging and biological analysis. Existing CNN-based and Transformer-based...
SourceID doaj
pubmedcentral
proquest
pubmed
crossref
springer
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Enrichment Source
Publisher
StartPage 9306
SubjectTerms 631/114/1305
631/114/1564
631/114/2401
639/624/1075
639/705/117
639/705/258
Biological analysis
Deep learning
Efficiency
Humanities and Social Sciences
Infrared and visible image fusion
Lightweight asymmetric encoder-decoder
Medical imaging
Medical imaging applications
Multi-scale feature integration
multidisciplinary
Science
Science (multidisciplinary)
Sensors
Visual state space (VSS) module
Wavelet transforms
SummonAdditionalLinks – databaseName: DOAJ Directory of Open Access Journals
  dbid: DOA
  priority: 102
  providerName: Directory of Open Access Journals
Title VSS-SpatioNet: a multi-scale feature fusion network for multimodal image integrations
URI https://link.springer.com/article/10.1038/s41598-025-93143-w
https://www.ncbi.nlm.nih.gov/pubmed/40102490
https://www.proquest.com/docview/3178421225
https://www.proquest.com/docview/3178830772
https://pubmed.ncbi.nlm.nih.gov/PMC11920090
https://doaj.org/article/446b116c527e410690183ebd3803022c
Volume 15
WOSCitedRecordID wos001449568100004&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVAON
  databaseName: DOAJ Directory of Open Access Journals
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: DOA
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://www.doaj.org/
  providerName: Directory of Open Access Journals
– providerCode: PRVHPJ
  databaseName: ROAD: Directory of Open Access Scholarly Resources (ISSN International Center)
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: M~E
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://road.issn.org
  providerName: ISSN International Centre
– providerCode: PRVPQU
  databaseName: Biological Science Database (ProQuest)
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: M7P
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/biologicalscijournals
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Health & Medical Collection (ProQuest)
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: 7X7
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/healthcomplete
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: ProQuest Central
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: BENPR
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://www.proquest.com/central
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Publicly Available Content Database
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: PIMPY
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: http://search.proquest.com/publiccontent
  providerName: ProQuest
– providerCode: PRVPQU
  databaseName: Science Database (ProQuest)
  customDbUrl:
  eissn: 2045-2322
  dateEnd: 99991231
  omitProxy: false
  ssIdentifier: ssj0000529419
  issn: 2045-2322
  databaseCode: M2P
  dateStart: 20110101
  isFulltext: true
  titleUrlDefault: https://search.proquest.com/sciencejournals
  providerName: ProQuest
linkProvider ProQuest
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=VSS-SpatioNet%3A+a+multi-scale+feature+fusion+network+for+multimodal+image+integrations&rft.jtitle=Scientific+reports&rft.au=Xiang%2C+Zeyu&rft.date=2025-03-18&rft.issn=2045-2322&rft.eissn=2045-2322&rft.volume=15&rft.issue=1&rft_id=info:doi/10.1038%2Fs41598-025-93143-w&rft.externalDBID=n%2Fa&rft.externalDocID=10_1038_s41598_025_93143_w