Deep Video Deblurring Using Sharpness Features from Exemplars

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 29, p. 1
Main Authors: Xiang, Xinguang; Wei, Hao; Pan, Jinshan
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020
ISSN: 1057-7149; EISSN: 1941-0042
Description
Summary: Video deblurring is a challenging problem, as the blur in videos is usually caused by camera shake, object motion, depth variation, etc. Existing methods usually impose handcrafted image priors or use end-to-end trainable networks to solve this problem. However, handcrafted image priors typically lead to highly non-convex optimization problems, while directly regressing sharp frames with end-to-end trainable networks tends to over-smooth details in the restored images. In this paper, we explore sharpness features from exemplars to aid blur removal and detail restoration. We first estimate optical flow to exploit the temporal information in neighboring frames. We then develop an encoder-decoder network that uses the sharpness features from exemplars to guide the restoration. We train the proposed algorithm in an end-to-end manner and show that using sharpness features from exemplars helps blur removal and detail restoration. Both quantitative and qualitative evaluations demonstrate that our method performs favorably against state-of-the-art approaches on benchmark video deblurring datasets and real-world images.
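
The pipeline described in the abstract (flow-based alignment of neighboring frames, an encoder-decoder, and exemplar sharpness features guiding restoration) can be illustrated with a minimal sketch. This is not the authors' implementation: the warping helper, the module names (ExemplarGuidedDeblur, exemplar_enc), the channel widths, fusion by channel concatenation, and the residual prediction are all illustrative assumptions, and the optical flow is assumed to be produced by a separate flow network.

# Minimal sketch (assumptions throughout, not the paper's code): warp the
# two neighboring frames toward the center frame with optical flow, extract
# features from a sharp exemplar, and fuse everything in a small
# encoder-decoder that predicts a residual correction to the blurry frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp `frame` (B,3,H,W) by `flow` (B,2,H,W); flow channel 0
    is assumed to be the x displacement, channel 1 the y displacement."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)  # (2,H,W)
    coords = grid.unsqueeze(0) + flow                             # displaced pixel coords
    # grid_sample expects sampling locations normalized to [-1, 1].
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)         # (B,H,W,2)
    return F.grid_sample(frame, norm_grid, align_corners=True)

class ExemplarGuidedDeblur(nn.Module):
    """Encoder-decoder over [center | warped neighbors | exemplar features];
    channel sizes and depth are placeholder choices."""
    def __init__(self, feat=32):
        super().__init__()
        # Shallow extractor for sharpness features from the exemplar frame.
        self.exemplar_enc = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        # Encoder: 3 RGB frames (9 channels) plus exemplar features.
        self.encoder = nn.Sequential(
            nn.Conv2d(9 + feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat * 2, feat * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        # Decoder back to full resolution; output is a residual image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1))

    def forward(self, center, prev, nxt, flow_prev, flow_next, exemplar):
        aligned_prev = warp(prev, flow_prev)  # temporal information from neighbors
        aligned_next = warp(nxt, flow_next)
        sharp_feat = self.exemplar_enc(exemplar)
        x = torch.cat([center, aligned_prev, aligned_next, sharp_feat], dim=1)
        return center + self.decoder(self.encoder(x))  # residual prediction

if __name__ == "__main__":
    b, h, w = 1, 64, 64
    frames = [torch.rand(b, 3, h, w) for _ in range(3)]
    flows = [torch.zeros(b, 2, h, w) for _ in range(2)]  # a flow network would supply these
    out = ExemplarGuidedDeblur()(frames[1], frames[0], frames[2], *flows, frames[1])
    print(out.shape)  # torch.Size([1, 3, 64, 64])

In this sketch the exemplar features are simply concatenated with the aligned frames before encoding; the paper's actual fusion and guidance mechanism, flow estimator, and network depth may differ.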
DOI: 10.1109/TIP.2020.3023534