Convolutional Sparse Modular Fusion Algorithm for Non-Rigid Registration of Visible–Infrared Images

Published in: Applied Sciences, Vol. 15, No. 5, p. 2508
Main authors: Luo, Tao; Chen, Ning; Zhu, Xianyou; Yi, Heyuan; Duan, Weiwen
Format: Journal Article
Language: English
Publication details: Basel: MDPI AG, 01.03.2025
ISSN: 2076-3417
Description
Summary: Existing image fusion algorithms involve extensive models and high computational demands when processing source images that require non-rigid registration, which may not align with the practical needs of engineering applications. To tackle this challenge, this study proposes a comprehensive framework for convolutional sparse fusion in the context of non-rigid registration of visible–infrared images. Our approach begins with an attention-based convolutional sparse encoder to extract cross-modal feature encodings from source images. To enhance feature extraction, we introduce a feature-guided loss and an information entropy loss to guide the extraction of homogeneous and isolated features, resulting in a feature decomposition network. Next, we create a registration module that estimates the registration parameters from homogeneous feature pairs. Finally, we develop an image fusion module by applying homogeneous and isolated feature filtering to the feature groups, resulting in high-quality fused images with maximized information retention. Experimental results on multiple datasets indicate that, compared with similar studies, the proposed algorithm achieves an average improvement of 8.3% in image registration and 30.6% in fusion performance, as measured by mutual information. In addition, in downstream target recognition tasks, the fused images generated by the proposed algorithm show a maximum improvement of 20.1% in average relative accuracy compared with the original images. Importantly, our algorithm maintains a relatively lightweight computational and parameter load.
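For context, the fusion metric named in the summary, mutual information, follows the standard definition MI(F; S) = Σ_{f,s} p(f, s) log [ p(f, s) / (p(f) p(s)) ], computed between the fused image F and each source image S. Below is a minimal, hypothetical PyTorch sketch of the decomposition stage the summary describes: an attention-gated convolutional encoder that splits its codes into homogeneous features (shared across modalities, later used for registration) and isolated features (modality-specific, preserved during fusion), trained with a feature-guided loss and an information-entropy surrogate. This record gives no implementation details, so every module name, shape, loss form, and weight here is an assumption, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnSparseEncoder(nn.Module):
    # Toy stand-in for the attention-based convolutional sparse encoder.
    # Output channels are split into "homogeneous" features (shared across
    # modalities) and "isolated" features (modality-specific).
    def __init__(self, ch=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2 * ch, 3, padding=1),
        )
        # Simple channel-attention gate (an assumption; the paper's
        # attention mechanism is not specified in this record).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(2 * ch, 2 * ch, 1), nn.Sigmoid()
        )

    def forward(self, x):
        z = self.conv(x)
        z = z * self.attn(z)           # attention-weighted codes
        homo, iso = z.chunk(2, dim=1)  # homogeneous / isolated split
        return homo, iso

def feature_guided_loss(homo_vis, homo_ir):
    # Pull the homogeneous features of the two modalities together so they
    # can serve as registration landmarks (MSE is an assumed choice).
    return F.mse_loss(homo_vis, homo_ir)

def entropy_loss(z, eps=1e-8):
    # Information-entropy surrogate over per-sample activation maps;
    # maximizing it discourages degenerate, uninformative features.
    p = torch.softmax(z.flatten(1), dim=1)
    return -(p * (p + eps).log()).sum(dim=1).mean()

# Usage on dummy single-channel visible / infrared patches.
enc = AttnSparseEncoder()
vis = torch.rand(2, 1, 64, 64)
ir = torch.rand(2, 1, 64, 64)
homo_v, iso_v = enc(vis)
homo_i, iso_i = enc(ir)
# Combined objective: align homogeneous features while keeping isolated
# ones informative (the 0.1 weight and the sign convention are arbitrary).
loss = feature_guided_loss(homo_v, homo_i) - 0.1 * entropy_loss(iso_v)
loss.backward()

The registration module would then estimate warp parameters from the aligned homogeneous pairs, and the fusion module would recombine filtered homogeneous and isolated codes; neither is sketched here, as the record describes them only at this level of detail.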
DOI: 10.3390/app15052508