Simultaneous image fusion and denoising with adaptive sparse representation

Bibliographic Details
Published in: IET Image Processing, Vol. 9, No. 5, pp. 347–357
Main Authors: Liu, Yu; Wang, Zengfu
Format: Journal Article
Language: English
Published: The Institution of Engineering and Technology, 01.05.2015
ISSN: 1751-9659, 1751-9667
Description
Summary: In this study, a novel adaptive sparse representation (ASR) model is presented for simultaneous image fusion and denoising. As a powerful signal modelling technique, sparse representation (SR) has been successfully employed in many image processing applications such as denoising and fusion. In traditional SR-based applications, a highly redundant dictionary is always needed to satisfy the signal reconstruction requirement, since structures vary significantly across different image patches. However, such a dictionary may introduce visual artefacts and incurs a high computational cost. In the proposed ASR model, instead of learning a single redundant dictionary, a set of more compact sub-dictionaries is learned from numerous high-quality image patches that have been pre-classified into several corresponding categories based on their gradient information. During the fusion and denoising processes, one of the sub-dictionaries is adaptively selected for a given set of source image patches. Experimental results on multi-focus and multi-modal image sets demonstrate that the ASR-based fusion method outperforms the conventional SR-based method in terms of both visual quality and objective assessment.
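
As a rough illustration of the adaptive selection step described in the summary, the Python sketch below classifies a patch by its gradient information (here, a magnitude-weighted dominant orientation) and then sparse-codes it over the matching sub-dictionary using orthogonal matching pursuit. The patch size, the number of categories, the smoothness threshold, and the exact classification rule are all illustrative assumptions rather than the paper's specification, and normalised random matrices stand in for sub-dictionaries that the paper learns from pre-classified high-quality patches.

```python
import numpy as np

# Illustrative constants (assumptions, not the paper's values): 8x8 patches
# and 6 categories (1 smooth + 5 gradient-orientation bins).
PATCH_SIZE = 8
NUM_CATEGORIES = 6

def classify_patch(patch, smooth_thresh=1.0):
    """Assign a patch to a gradient-based category.

    Category 0 collects near-smooth patches; the rest partition the
    magnitude-weighted dominant gradient orientation into equal angular
    bins. This is one plausible reading of gradient-based
    pre-classification, not the paper's exact rule.
    """
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    if magnitude.mean() < smooth_thresh:
        return 0
    # Double-angle trick: orientations differing by pi average correctly.
    angles = np.arctan2(gy, gx) % np.pi
    dominant = (np.angle(np.sum(magnitude * np.exp(2j * angles))) / 2) % np.pi
    bins = NUM_CATEGORIES - 1
    return 1 + min(int(dominant / (np.pi / bins)), bins - 1)

def omp(y, D, max_atoms=5, tol=1e-6):
    """Orthogonal matching pursuit: greedy sparse coding of y over D."""
    residual = y.astype(float).copy()
    support, coeffs = [], np.zeros(0)
    for _ in range(max_atoms):
        if np.linalg.norm(residual) < tol:
            break
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Usage sketch: in the paper, each sub-dictionary is learned from the
# patches of its own category; random matrices merely stand in here.
rng = np.random.default_rng(0)
sub_dicts = [rng.standard_normal((PATCH_SIZE**2, 128))
             for _ in range(NUM_CATEGORIES)]
sub_dicts = [D / np.linalg.norm(D, axis=0) for D in sub_dicts]

patch = rng.standard_normal((PATCH_SIZE, PATCH_SIZE))
k = classify_patch(patch)
code = omp(patch.ravel(), sub_dicts[k])
print(f"category {k}: {np.count_nonzero(code)} active atoms")
```

Per the summary, in the full method a single sub-dictionary is selected jointly for a given set of co-located source image patches, and the fusion and denoising then presumably operate on the resulting sparse coefficients before reconstructing the output image.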
DOI: 10.1049/iet-ipr.2014.0311