VSS-SpatioNet: a multi-scale feature fusion network for multimodal image integrations

Published in: Scientific Reports, Vol. 15, No. 1, Article 9306 (20 pages)
Main author: Xiang, Zeyu
Medium: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 18 March 2025 (Nature Publishing Group; Nature Portfolio)
ISSN: 2045-2322

Description
Summary: Infrared and visible image fusion (VIS-IR) enhances diagnostic accuracy in medical imaging and biological analysis. Existing CNN-based and Transformer-based methods face computational inefficiencies when modeling global dependencies. The author proposes VSS-SpatioNet, a lightweight architecture that replaces the self-attention of Transformers with a Visual State Space (VSS) module for efficient dependency modeling. The framework employs an asymmetric encoder-decoder with a multi-scale autoencoder and a novel VSS-Spatial (VS) fusion block for local-global feature integration. Evaluations on the TNO, Harvard Medical, and RoadScene datasets demonstrate superior performance. On TNO, VSS-SpatioNet achieves state-of-the-art Entropy (En = 7.0058) and Mutual Information (MI = 14.0116), outperforming 12 benchmark methods. On RoadScene, it attains a gradient-based fusion performance score of 0.5712, a Piella's metric score of 0.7926, and an Average Gradient (AG) of 5.2994, surpassing prior works. On Harvard Medical, the VS strategy improves Mean Gradient by 13.1% over FusionGAN (0.0224 vs. 0.0198), validating enhanced feature preservation. The results confirm the framework's efficacy in medical applications, particularly precise tissue characterization.
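
The abstract's figures of merit are standard fusion-quality metrics. As an orientation aid, the sketch below shows one common formulation of three of them (Entropy, Mutual Information, Average Gradient) for 8-bit grayscale images held as NumPy arrays. The function names and normalizations are illustrative assumptions rather than the paper's exact implementations; in particular, fusion papers often report MI as the sum of MI(fused, infrared) and MI(fused, visible), which roughly doubles the single-pair value computed here.

```python
# Minimal sketch of three fusion-quality metrics (assumptions noted above);
# inputs are 8-bit grayscale images as NumPy arrays.
import numpy as np

def entropy(img: np.ndarray) -> float:
    """Shannon entropy (En) of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return float(-(p * np.log2(p)).sum())

def mutual_information(a: np.ndarray, b: np.ndarray) -> float:
    """MI between two images via their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=256,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def average_gradient(img: np.ndarray) -> float:
    """AG: mean magnitude of horizontal/vertical intensity differences."""
    f = img.astype(np.float64)
    gx = np.diff(f, axis=1)[:-1, :]   # trim both to a common shape
    gy = np.diff(f, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2.0)))

if __name__ == "__main__":
    # Smoke test on random images; real use would load fused/source images.
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    b = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(entropy(a), mutual_information(a, b), average_gradient(a))
```
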
DOI: 10.1038/s41598-025-93143-w