Fusion of Visible and Infrared Images Using Efficient Online Convolutional Dictionary Learning

Bibliographic Details
Published in: 2023 12th International Conference of Information and Communication Technology (ICTech), pp. 117-121
Main Authors: Zhang, ChengFang; Yi, Kai
Format: Conference Proceeding
Language: English
Published: IEEE 01.04.2023
Description
Summary: To balance fusion performance and time consumption, this paper proposes a fast and effective infrared and visible image fusion method using online convolutional dictionary learning (OCDL). First, the source images are decomposed into low-pass layers and detail layers using a low-pass filtering technique. Then the fused low-pass layer is obtained using the 'average' rule. A dictionary trained on the Flicker-large dataset with OCDL is applied to detail-layer fusion, which uses ADMM-based convolutional sparse coding (CSC) and the 'maximum' strategy to calculate the fused high-pass components. Finally, the fused image is reconstructed from the fused low-pass and high-pass components. The subjective results and objective metrics of the infrared-visible image fusion experiments show that the proposed fusion method preserves the quality of the fused images and outperforms advanced online convolutional sparse coding (OCSC) based fusion methods in terms of computational cost and memory requirements. On average, the method improves the objective metrics (cross entropy, NMI, PSNR, NABF, and CC) by 13.66%, 6.42%, 0.03%, 74.77%, and 0.21% over OCSC, and improves on OCSC by 10.1% in time consumption.
DOI:10.1109/ICTech58362.2023.00033
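The abstract describes a two-scale pipeline: low-pass/detail decomposition, an 'average' rule on the low-pass layers, a 'maximum' rule on the high-pass components, and reconstruction by recombining the fused components. The sketch below illustrates only that skeleton; it assumes grayscale float images, substitutes a simple box filter for the unspecified low-pass filtering technique, and applies the choose-max rule to the detail layers directly, since the OCDL-trained dictionary and the ADMM-based CSC solver are beyond a short example.

```python
# Minimal sketch of the two-scale fusion skeleton described in the abstract.
# Assumptions (not from the paper): grayscale float images in [0, 1], a box
# filter as the low-pass decomposition, and the choose-max rule applied to
# the detail layers directly rather than to CSC coefficients computed with
# the OCDL-trained dictionary.
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_fusion(ir, vis, size=31):
    """Fuse an infrared and a visible image (2-D arrays of equal shape)."""
    # 1. Decompose each source into a low-pass layer and a detail layer.
    low_ir, low_vis = uniform_filter(ir, size), uniform_filter(vis, size)
    det_ir, det_vis = ir - low_ir, vis - low_vis

    # 2. 'average' rule for the fused low-pass layer.
    low_fused = 0.5 * (low_ir + low_vis)

    # 3. 'maximum' strategy for the fused high-pass components (the paper
    #    applies this to ADMM-based CSC coefficients; here it is applied
    #    to the detail layers as a stand-in).
    det_fused = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)

    # 4. Reconstruct the fused image from the fused components.
    return np.clip(low_fused + det_fused, 0.0, 1.0)
```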