Multi-focus image fusion via online convolutional sparse coding



Detailed bibliography
Published in: Multimedia Tools and Applications, Vol. 83, No. 6, pp. 17327-17356
Main authors: Zhang, Chengfang; Zhang, Ziyou; Li, Haoyue; He, Sidi; Feng, Ziliang
Format: Journal Article
Language: English
Published: New York: Springer US, 01.02.2024 (Springer Nature B.V.)
ISSN: 1380-7501; eISSN: 1573-7721
Description
Summary: Efficiently and completely eliminating out-of-focus pixels remains a persistent challenge in multi-focus image fusion. Previous approaches tend to focus on high-quality fusion results while ignoring running cost. Online Convolutional Sparse Coding (OCSC) is an online version of Convolutional Sparse Coding (CSC) that avoids the expensive time and space costs associated with batch-mode CSC. In this paper, we use a parallel version of OCSC to alleviate the long running times of previous methods. Multi-focus gray and color images are tested to verify the superiority of the proposed method, which achieves excellent visual effects and strong objective evaluations. The running cost is reduced by roughly 95% compared with a fusion method based on online dictionary learning. A comprehensive analysis of subjective quality, objective metrics, and running time shows that our method offers fast fusion and high reconstruction quality.
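
The summary only names the pipeline (sparse-code each source with a shared convolutional dictionary, fuse the coefficient maps, reconstruct), so a small sketch may help make it concrete. The following Python illustration is not the authors' code: the helper names (soft_threshold, csc_encode, fuse), the random unit-norm filters, the ISTA solver, and all parameter values are assumptions for illustration; in the paper the dictionary would be learned by parallel OCSC, and the exact fusion rule may differ.

    # Minimal sketch of CSC-based multi-focus fusion with a given dictionary.
    # All names and parameters are illustrative assumptions, not the paper's.
    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter
    from scipy.signal import fftconvolve

    def soft_threshold(x, t):
        # Proximal operator of the l1 norm (shrinkage).
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def csc_encode(img, filters, lam=0.01, n_iter=50):
        # ISTA for min_x 0.5*||sum_k d_k * x_k - img||^2 + lam*sum_k ||x_k||_1.
        maps = np.zeros((len(filters),) + img.shape)
        flipped = [f[::-1, ::-1] for f in filters]  # adjoint = correlation
        # Lipschitz constant of the data-term gradient, estimated in the
        # Fourier domain (exact for circular convolution; safe proxy here).
        freq = np.stack([np.abs(np.fft.fft2(f, img.shape)) ** 2 for f in filters])
        step = 1.0 / freq.sum(axis=0).max()
        for _ in range(n_iter):
            recon = sum(fftconvolve(m, f, mode="same")
                        for m, f in zip(maps, filters))
            residual = recon - img
            for k, f in enumerate(flipped):  # gradient step, one map per filter
                maps[k] -= step * fftconvolve(residual, f, mode="same")
            maps = soft_threshold(maps, lam * step)  # proximal step
        return maps

    def fuse(img_a, img_b, filters, win=5):
        # Split each registered source into a Gaussian base layer and a detail
        # layer; code the details with CSC; keep, per pixel, the coefficients
        # of the source with larger local l1 activity (choice-max rule).
        base_a = gaussian_filter(img_a, sigma=3)
        base_b = gaussian_filter(img_b, sigma=3)
        maps_a = csc_encode(img_a - base_a, filters)
        maps_b = csc_encode(img_b - base_b, filters)
        act_a = uniform_filter(np.abs(maps_a).sum(axis=0), size=win)
        act_b = uniform_filter(np.abs(maps_b).sum(axis=0), size=win)
        mask = act_a >= act_b  # True where source A looks more in focus
        fused_maps = np.where(mask[None], maps_a, maps_b)
        detail = sum(fftconvolve(m, f, mode="same")
                     for m, f in zip(fused_maps, filters))
        return detail + np.where(mask, base_a, base_b)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        filters = [rng.standard_normal((7, 7)) for _ in range(8)]
        filters = [f / np.linalg.norm(f) for f in filters]  # unit-norm atoms
        # Stand-ins for two registered multi-focus sources of the same scene.
        img_a = rng.random((64, 64))
        img_b = rng.random((64, 64))
        print(fuse(img_a, img_b, filters).shape)  # -> (64, 64)

Smoothing the l1 activity with a box filter before the per-pixel comparison is a common way to keep the decision mask from speckling near focus boundaries; the paper's actual focus measure and any consistency-verification step may differ.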
DOI: 10.1007/s11042-023-15972-z