Convolutional Sparse Modular Fusion Algorithm for Non-Rigid Registration of Visible–Infrared Images

Bibliographic Details
Published in: Applied Sciences, Volume 15, Issue 5, p. 2508
Main authors: Luo, Tao; Chen, Ning; Zhu, Xianyou; Yi, Heyuan; Duan, Weiwen
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.03.2025
ISSN: 2076-3417
Description
Summary: Existing image fusion algorithms involve extensive models and high computational demands when processing source images that require non-rigid registration, which may not align with the practical needs of engineering applications. To tackle this challenge, this study proposes a comprehensive framework for convolutional sparse fusion in the context of non-rigid registration of visible–infrared images. Our approach begins with an attention-based convolutional sparse encoder that extracts cross-modal feature encodings from the source images. To enhance feature extraction, we introduce a feature-guided loss and an information entropy loss that guide the extraction of homogeneous and isolated features, yielding a feature decomposition network. Next, we create a registration module that estimates the registration parameters from homogeneous feature pairs. Finally, we develop an image fusion module that applies homogeneous and isolated feature filtering to the feature groups, producing high-quality fused images with maximal information retention. Experimental results on multiple datasets indicate that, compared with similar studies, the proposed algorithm achieves an average improvement of 8.3% in image registration and 30.6% in fusion performance, as measured by mutual information. In addition, in downstream target recognition tasks, the fused images generated by the proposed algorithm show a maximum improvement of 20.1% in average relative accuracy compared with the original images. Importantly, our algorithm maintains a relatively lightweight computational and parameter load.
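
The abstract does not give the loss definitions, but a minimal PyTorch sketch can suggest one plausible reading of the two decomposition losses it names: an information entropy term computed over per-channel spatial distributions, and a feature-guided term that pulls the homogeneous (shared) features of the two modalities together while pushing the isolated (modality-specific) features apart. The function names, the cosine-similarity formulation, and the loss weighting below are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def entropy_loss(feat: torch.Tensor) -> torch.Tensor:
        """Shannon entropy of a feature map, treating each channel's
        spatial activations as a probability distribution.

        Hypothetical form of the paper's "information entropy loss";
        the exact definition and sign convention may differ.
        """
        b, c, h, w = feat.shape
        p = F.softmax(feat.reshape(b, c, h * w), dim=-1)   # per-channel spatial distribution
        ent = -(p * (p + 1e-8).log()).sum(dim=-1)          # entropy per channel, shape (b, c)
        return ent.mean()

    def feature_guided_loss(hom_vis, hom_ir, iso_vis, iso_ir) -> torch.Tensor:
        """Pull homogeneous (shared) features of the two modalities
        together and push isolated (modality-specific) features apart.

        One plausible reading of the paper's "feature-guided loss";
        the margin and similarity measure are assumptions.
        """
        pull = 1.0 - F.cosine_similarity(hom_vis.flatten(1), hom_ir.flatten(1)).mean()
        push = F.cosine_similarity(iso_vis.flatten(1), iso_ir.flatten(1)).mean().clamp(min=0.0)
        return pull + push

    # Toy check with random feature maps in place of encoder outputs.
    hom_v, hom_i = torch.randn(2, 16, 32, 32), torch.randn(2, 16, 32, 32)
    iso_v, iso_i = torch.randn(2, 16, 32, 32), torch.randn(2, 16, 32, 32)
    loss = feature_guided_loss(hom_v, hom_i, iso_v, iso_i) + 0.1 * (
        entropy_loss(iso_v) + entropy_loss(iso_i))  # 0.1 is an arbitrary weight
    loss.backward() if loss.requires_grad else None

In such a decomposition, the homogeneous features would feed the registration module (which the abstract says estimates registration parameters from homogeneous feature pairs), while both feature groups would pass through filtering in the fusion module; those stages are not sketched here.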
DOI: 10.3390/app15052508