Convolutional Sparse Modular Fusion Algorithm for Non-Rigid Registration of Visible–Infrared Images

Bibliographic Details
Published in: Applied Sciences, Vol. 15, No. 5, p. 2508
Main Authors: Luo, Tao; Chen, Ning; Zhu, Xianyou; Yi, Heyuan; Duan, Weiwen
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.03.2025
ISSN: 2076-3417
Description
Summary: Existing image fusion algorithms involve large models and high computational demands when processing source images that require non-rigid registration, which may not align with the practical needs of engineering applications. To address this challenge, this study proposes a comprehensive framework for convolutional sparse fusion in the context of non-rigid registration of visible–infrared images. Our approach begins with an attention-based convolutional sparse encoder that extracts cross-modal feature encodings from the source images. To enhance feature extraction, we introduce a feature-guided loss and an information entropy loss that guide the extraction of homogeneous and isolated features, yielding a feature decomposition network. Next, we build a registration module that estimates the registration parameters from homogeneous feature pairs. Finally, we develop an image fusion module that applies homogeneous and isolated feature filtering to the feature groups, producing high-quality fused images with maximal information retention. Experimental results on multiple datasets indicate that, compared with similar studies, the proposed algorithm achieves an average improvement of 8.3% in image registration and 30.6% in fusion performance, as measured by mutual information. In downstream target recognition tasks, the fused images generated by the proposed algorithm show a maximum improvement of 20.1% in average relative accuracy compared with the original images. Importantly, our algorithm maintains a relatively lightweight computational and parameter load.
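
For orientation only, the following PyTorch-style sketch shows one plausible reading of the pipeline described above: an attention-weighted convolutional encoder with a soft-thresholding stand-in for sparse coding, simple candidate forms of the feature-guided and information-entropy losses, a homogeneous/isolated fusion rule, and the histogram-based mutual information metric used in the evaluation. All module names, the channel split, and the loss forms are assumptions, not the authors' implementation; the registration-parameter estimation step is omitted.

# Hypothetical sketch of the pipeline described in the abstract; every name,
# the homogeneous/isolated channel split, and the loss forms are illustrative
# assumptions. Registration-parameter estimation is omitted for brevity.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionSparseEncoder(nn.Module):
    """Attention-weighted convolutional encoder with a soft-thresholding
    step standing in for convolutional sparse coding."""
    def __init__(self, in_ch=1, feat_ch=64, thresh=0.01):
        super().__init__()
        self.thresh = thresh
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
        )
        # Channel attention re-weights feature maps before sparsification.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(feat_ch, feat_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.conv(x)
        f = f * self.attn(f)
        return torch.sign(f) * F.relu(f.abs() - self.thresh)  # soft shrinkage

def feature_guided_loss(code_vis, code_ir):
    # Pulls the assumed homogeneous (shared) half of the channels of the two
    # modalities together, leaving the isolated half modality-specific.
    c = code_vis.shape[1] // 2
    return F.l1_loss(code_vis[:, :c], code_ir[:, :c])

def entropy_loss(code, eps=1e-8):
    # An information-entropy term over the normalized code magnitudes; how
    # the paper signs and weights this term is an assumption here.
    p = code.abs().flatten(1)
    p = p / (p.sum(dim=1, keepdim=True) + eps)
    return -(p * (p + eps).log()).sum(dim=1).mean()

def fuse(code_vis, code_ir):
    # Homogeneous/isolated feature filtering: average the shared channels,
    # keep the stronger isolated response at each position.
    c = code_vis.shape[1] // 2
    homo = 0.5 * (code_vis[:, :c] + code_ir[:, :c])
    iso = torch.where(code_vis[:, c:].abs() > code_ir[:, c:].abs(),
                      code_vis[:, c:], code_ir[:, c:])
    return torch.cat([homo, iso], dim=1)

def mutual_information(img_a, img_b, bins=64):
    # Histogram-based mutual information, the metric behind the reported
    # registration and fusion improvements.
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

In a design of this kind, keeping the shared channels once while selecting the stronger isolated response per position is what lets the fused image retain information from both modalities without duplicating their common structure, which is consistent with the mutual-information gains the abstract reports.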
DOI: 10.3390/app15052508