Multiscale and Multitopic Sparse Representation for Multisensor Infrared Image Superresolution

Bibliographic Details
Published in: Journal of Sensors, Vol. 2016, no. 2016, pp. 1-14
Main Authors: Yan, Binyu, Gan, Zhongliang, Liu, Kai, Yang, Xiaomin
Format: Journal Article
Language: English
Published: Hindawi Publishing Corporation, Cairo, Egypt, 01.01.2016; John Wiley & Sons, Inc.
ISSN: 1687-725X, 1687-7268
Description
Summary: Methods based on sparse coding have been successfully used in single-image superresolution (SR) reconstruction. However, traditional sparse representation-based SR reconstruction of infrared (IR) images usually suffers from three problems. First, IR images generally lack detailed information. Second, a traditional sparse dictionary is learned from patches of a fixed size, which may not capture the exact information of the images and ignores the fact that images naturally contain structures at different scales. Finally, traditional sparse dictionary learning methods aim to learn a single universal, overcomplete dictionary; yet many different local structural patterns exist, and one dictionary is inadequate for capturing all of them. We propose a novel IR image SR method to overcome these problems. First, we combine information from multiple sensors to improve the resolution of the IR image. Then, we use multiscale patches to represent the image more efficiently. Finally, we partition natural images into documents and group these documents to discover the inherent topics and to learn a sparse dictionary for each topic. Extensive experiments validate that the proposed method yields better results, both in quantitative evaluation and in visual perception, than many state-of-the-art algorithms.
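As a rough illustration of the sparse-coding step that the summary refers to, the following minimal sketch codes a synthetic "patch" over an overcomplete dictionary using Orthogonal Matching Pursuit. The dictionary here is random rather than learned, and the patch size, atom count, and sparsity level are arbitrary assumptions for the example, not values taken from the paper.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select dictionary atoms,
    refitting the coefficients by least squares at every step."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit restricted to the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
# hypothetical overcomplete dictionary: 64-dim patches, 128 atoms
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
true_code = np.zeros(128)
true_code[[5, 40, 99]] = [1.5, -2.0, 0.7]  # a 3-sparse code
y = D @ true_code                          # synthetic "patch"
x = omp(D, y, n_nonzero=3)
err = np.linalg.norm(y - D @ x)            # reconstruction error
```

In SR schemes of this family, the sparse code found for a low-resolution patch is applied to a coupled high-resolution dictionary to synthesize the missing detail; the multiscale, multitopic extension proposed here learns separate dictionaries per patch scale and per topic.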
Bibliography:ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
DOI:10.1155/2016/7036349