Deep Video Deblurring Using Sharpness Features from Exemplars


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 29, p. 1
Main Authors: Xiang, Xinguang; Wei, Hao; Pan, Jinshan
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020
ISSN: 1057-7149, 1941-0042
Online Access: Full text
Description
Summary: Video deblurring is a challenging problem, as the blur in videos is usually caused by camera shake, object motion, depth variation, etc. Existing methods usually impose handcrafted image priors or use end-to-end trainable networks to solve this problem. However, using image priors usually leads to highly non-convex optimization problems, while directly using end-to-end trainable networks in a regression manner tends to over-smooth details in the restored images. In this paper, we explore sharpness features from exemplars to aid blur removal and detail restoration. We first estimate optical flow to exploit temporal information, which helps make full use of information from neighboring frames. We then develop an encoder-decoder network and use the sharpness features from exemplars to guide the network toward better image restoration. We train the proposed algorithm in an end-to-end manner and show that using sharpness features from exemplars helps blur removal and detail restoration. Both quantitative and qualitative evaluations demonstrate that our method performs favorably against state-of-the-art approaches on benchmark video deblurring datasets and real-world images.
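
The abstract only outlines the pipeline at a high level; the paper's exact architecture is not reproduced here. As a rough illustration of the idea, the sketch below (PyTorch; all module names, channel widths, and layer counts are assumptions, not the authors' design) shows the two stages the abstract describes: flow-based warping of neighboring frames toward the reference frame, followed by an encoder-decoder that fuses sharpness features extracted from a sharp exemplar.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(frame, flow):
    """Warp a neighboring frame toward the reference frame using optical flow.

    frame: (N, C, H, W); flow: (N, 2, H, W) with per-pixel (dx, dy) offsets.
    The flow estimator itself (e.g. a pretrained network) is assumed given.
    """
    n, _, h, w = frame.shape
    # Build a sampling grid normalized to [-1, 1], as grid_sample expects.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    grid_x = (xs + flow[:, 0]) * 2.0 / max(w - 1, 1) - 1.0
    grid_y = (ys + flow[:, 1]) * 2.0 / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)


class SharpnessGuidedDeblurNet(nn.Module):
    """Hypothetical encoder-decoder fusing exemplar sharpness features."""

    def __init__(self, in_frames=3, feat=64):
        super().__init__()
        # Encoder over the channel-stacked, flow-warped blurry frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * in_frames, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Shallow branch extracting sharpness features from the exemplar.
        self.exemplar_branch = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder restores the sharp reference frame from the fused features.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 3, 3, padding=1),
        )

    def forward(self, warped_frames, exemplar):
        # warped_frames: (N, 3 * in_frames, H, W); exemplar: (N, 3, H, W)
        blur_feat = self.encoder(warped_frames)
        sharp_feat = self.exemplar_branch(exemplar)
        fused = torch.cat((blur_feat, sharp_feat), dim=1)  # feature-level guidance
        return self.decoder(fused)


# Usage (shapes only): neighboring frames are first flow-warped toward the
# reference frame with `warp`, then channel-stacked and restored.
net = SharpnessGuidedDeblurNet(in_frames=3)
stacked = torch.rand(1, 9, 128, 128)   # 3 warped RGB frames
exemplar = torch.rand(1, 3, 128, 128)  # sharp exemplar frame
restored = net(stacked, exemplar)      # -> (1, 3, 128, 128)
```

The fusion here is a plain channel concatenation, chosen for simplicity; how the paper actually injects the exemplar's sharpness features into the decoder may differ.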
DOI: 10.1109/TIP.2020.3023534