A Novel Fusion Method Based on Online Convolutional Sparse Coding with Sample-Dependent Dictionary for Visible–Infrared Images


Detailed Bibliography
Published in: Arabian Journal for Science and Engineering (2011), Volume 48, Issue 8, pp. 10605–10615
Main Authors: Li, Haoyue; Zhang, Chengfang; He, Sidi; Feng, Ziliang; Yi, Liangzhong
Medium: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.08.2023
ISSN: 2193-567X, 1319-8025, 2191-4281
Description
Summary: As an important branch of information fusion, infrared–visible image fusion can generate scene information with rich texture details via signal processing technology. The fused images are more complete and of higher quality, which can significantly improve nighttime detection capabilities. Online convolutional sparse coding (OCSC) alleviates the high computational cost and low representation rate of convolutional sparse coding (CSC) and has been successfully introduced into multimodal image fusion tasks. However, the complexity of OCSC still depends on the number of filters, which is expensive and degrades fusion performance. Inspired by the idea of separable filters, sample-dependent convolutional sparse coding (SCSC) can update its model efficiently with low algorithmic complexity by means of online learning. In this paper, SCSC is applied to infrared–visible image fusion because of its superior algorithmic complexity. First, the original images are decomposed into high- and low-frequency layers; each layer is then fused by a different fusion rule, and the final image is reconstructed from the two fused layers. Compared with six other popular fusion methods on eight metrics, the experimental results show that the proposed method achieves more than 10% improvement on metrics such as MG (mean gradient) and EI (edge intensity), demonstrating the advantages of sample-dependent convolutional sparse coding in infrared–visible image fusion.
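The decompose–fuse–reconstruct pipeline described in the summary can be illustrated with a minimal two-scale sketch. This is a generic illustration, not the paper's SCSC method: the box-blur base layer, the averaging rule for low frequencies, and the max-absolute rule for high frequencies are all common stand-ins assumed here, and the function names are hypothetical.

```python
import numpy as np

def box_blur(img, k=9):
    """Separable box blur (assumed base-layer extractor for this sketch)."""
    ker = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="same"), 0, tmp)

def two_scale_fusion(ir, vis, k=9):
    """Generic two-scale fusion sketch (not the paper's SCSC-based rules).

    Each source image is split into a low-frequency base layer (blur)
    and a high-frequency detail layer (residual). Base layers are fused
    by averaging, detail layers by the max-absolute rule, and the fused
    image is the sum of the two fused layers.
    """
    base_ir, base_vis = box_blur(ir, k), box_blur(vis, k)
    detail_ir, detail_vis = ir - base_ir, vis - base_vis

    fused_base = 0.5 * (base_ir + base_vis)               # averaging rule
    fused_detail = np.where(np.abs(detail_ir) >= np.abs(detail_vis),
                            detail_ir, detail_vis)        # max-absolute rule
    return fused_base + fused_detail
```

The paper replaces this fixed decomposition with sample-dependent convolutional sparse coding and applies its own fusion rules per layer; the sketch only shows the overall two-layer structure.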
DOI: 10.1007/s13369-023-07716-w