Fusion of Visible and Infrared Images Using Efficient Online Convolutional Dictionary Learning


Saved in:
Detailed bibliography
Published in: 2023 12th International Conference of Information and Communication Technology (ICTech), pp. 117-121
Main authors: Zhang, ChengFang; Yi, Kai
Format: Conference paper
Language: English
Publication details: IEEE, 01.04.2023
Description
Summary: To balance fusion performance and time consumption, this paper proposes a fast and effective infrared and visible image fusion method using online convolutional dictionary learning (OCDL). First, the source images are decomposed into low-pass layers and detail layers using a low-pass filtering technique. Then the fused low-pass layer is obtained using the 'average' rule. A dictionary trained on the Flicker-large dataset with OCDL is applied to detail-layer fusion, which uses ADMM-based convolutional sparse coding (CSC) and the 'maximum' strategy to compute the fused high-pass components. Finally, the fused image is reconstructed from the fused low-pass and high-pass components. The subjective results and objective metrics of the infrared-visible image fusion experiments show that the designed fusion method preserves the quality of fused images while outperforming advanced online convolutional sparse coding (OCSC) based fusion methods in computational cost and memory requirements. On average, the method improves the objective metrics (cross entropy, NMI, PSNR, NABF, and CC) by 13.66%, 6.42%, 0.03%, 74.77%, and 0.21% over OCSC, and improves on OCSC by 10.1% in terms of time consumption.
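The two-scale pipeline in the abstract (low-pass decomposition, 'average' rule for the base layers, 'maximum' strategy for the details, then reconstruction) can be sketched as follows. This is a simplified illustration, not the paper's method: a box filter stands in for the unspecified low-pass filtering technique, and a max-absolute selection is applied directly to the detail layers, whereas the paper applies the 'maximum' strategy to CSC coefficients obtained with an OCDL-trained dictionary and an ADMM solver.

```python
import numpy as np

def box_blur(img, k=5):
    # Separable box filter as a stand-in low-pass decomposition step
    # (the paper only specifies a generic low-pass filtering technique).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # Filter along rows, then along columns.
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def fuse(visible, infrared, k=5):
    # Decompose each source image into a low-pass (base) layer
    # and a detail layer.
    base_v, base_i = box_blur(visible, k), box_blur(infrared, k)
    det_v, det_i = visible - base_v, infrared - base_i

    # 'average' rule for the fused low-pass layer.
    fused_base = 0.5 * (base_v + base_i)

    # Simplified 'maximum' strategy: pick, per pixel, the detail value
    # with the larger magnitude. (The paper applies the maximum rule to
    # sparse CSC coefficients rather than raw detail layers.)
    fused_detail = np.where(np.abs(det_v) >= np.abs(det_i), det_v, det_i)

    # Reconstruct the fused image from fused low-pass and
    # high-pass components.
    return fused_base + fused_detail
```

For two constant images (e.g. all ones and all zeros), the detail layers vanish and the result is the pixel-wise average of the bases, which is a quick sanity check on the decomposition and reconstruction.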
DOI:10.1109/ICTech58362.2023.00033