Multi-focus image fusion via online convolutional sparse coding

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 83, No. 6, pp. 17327–17356
Main Authors: Zhang, Chengfang, Zhang, Ziyou, Li, Haoyue, He, Sidi, Feng, Ziliang
Format: Journal Article
Language:English
Published: New York: Springer US (Springer Nature B.V.), 01.02.2024
ISSN: 1380-7501, 1573-7721
Description
Summary: Efficiently and completely eliminating out-of-focus pixels remains a persistent challenge in multi-focus image fusion. Previous approaches tend to pursue high-quality fusion results while ignoring running cost. Online Convolutional Sparse Coding (OCSC) is an online variant of Convolutional Sparse Coding (CSC) that avoids the heavy time and space costs of batch-mode CSC. In this paper, we use a parallel version of OCSC to alleviate the long running times of previous methods. Experiments on multi-focus grayscale and color images verify the superiority of the proposed method, which achieves excellent visual quality and strong objective evaluation scores. Running cost is reduced by roughly 95% compared with a fusion method based on online dictionary learning. A comprehensive analysis of subjective quality, objective metrics, and runtime shows that our method combines fast fusion with high reconstruction quality.
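
The abstract gives no implementation details, but a common decision rule in sparse-coding-based multi-focus fusion is to keep, at each pixel, the coefficients of the source image with the larger sparse-coefficient activity, then reconstruct the fused image from the selected coefficients. The NumPy sketch below illustrates that generic rule only; it is not the authors' method, and all names (fuse_csc_coefficients, reconstruct, coeffs_a, ...) are hypothetical. It assumes coefficient maps have already been computed by some convolutional sparse coding solver over a shared filter dictionary.

```python
import numpy as np
from scipy.signal import fftconvolve

def fuse_csc_coefficients(coeffs_a, coeffs_b):
    """Per pixel, keep the coefficient stack of the more 'active' image.

    coeffs_a, coeffs_b: arrays of shape (K, H, W) -- K coefficient maps
    per source image, as produced by a convolutional sparse coding solver.
    """
    # Activity measure: sum of absolute coefficients across the K maps.
    act_a = np.abs(coeffs_a).sum(axis=0)   # (H, W)
    act_b = np.abs(coeffs_b).sum(axis=0)   # (H, W)
    mask = act_a >= act_b                  # True where image A looks sharper
    return np.where(mask[None, :, :], coeffs_a, coeffs_b)

def reconstruct(coeffs, filters):
    """Sum of convolutions of each coefficient map with its filter."""
    # filters: (K, h, w), coeffs: (K, H, W) -> fused image (H, W)
    return sum(fftconvolve(c, f, mode="same") for c, f in zip(coeffs, filters))

# Example with random stand-in data (real coefficients would come from a solver).
K, H, W, h = 16, 64, 64, 8
rng = np.random.default_rng(0)
filters = rng.standard_normal((K, h, h))
coeffs_a = rng.standard_normal((K, H, W))
coeffs_b = rng.standard_normal((K, H, W))
fused = reconstruct(fuse_csc_coefficients(coeffs_a, coeffs_b), filters)
```

In practice the coefficient maps capture only detail content, so CSC-based fusion schemes typically handle a low-frequency (base) component separately; the sketch omits that step for brevity.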
DOI:10.1007/s11042-023-15972-z