Deep Video Deblurring Using Sharpness Features from Exemplars

Detailed bibliography
Published in: IEEE Transactions on Image Processing, Volume 29, p. 1
Main authors: Xiang, Xinguang; Wei, Hao; Pan, Jinshan
Format: Journal Article
Language: English
Publication details: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2020.3023534

Description
Abstract: Video deblurring is a challenging problem, as the blur in videos is usually caused by camera shake, object motion, depth variation, etc. Existing methods usually impose handcrafted image priors or use end-to-end trainable networks to solve this problem. However, using image priors usually leads to highly non-convex optimization problems, while directly using end-to-end trainable networks in a regression manner tends to over-smooth details in the restored images. In this paper, we explore sharpness features from exemplars to help remove blur and restore details. We first estimate optical flow to exploit temporal information, which helps make full use of neighboring frames. Then, we develop an encoder-decoder network and use the sharpness features from exemplars to guide the network toward better image restoration. We train the proposed algorithm in an end-to-end manner and show that using sharpness features from exemplars helps blur removal and detail restoration. Both quantitative and qualitative evaluations demonstrate that our method performs favorably against state-of-the-art approaches on benchmark video deblurring datasets and real-world images.
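
The abstract describes a two-stage pipeline: optical flow aligns neighboring blurry frames with the reference frame, and an encoder-decoder network uses sharpness features from a sharp exemplar to guide restoration. Below is a minimal PyTorch sketch of such a pipeline; the flow-based warping step, the layer sizes, and the concatenation-based fusion of exemplar features are illustrative assumptions, not the authors' published architecture.

# A minimal sketch of the pipeline outlined in the abstract.
# Layer sizes, the warping step, and the fusion strategy are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_with_flow(frame, flow):
    """Warp a neighboring frame toward the reference frame using optical flow.

    frame: (B, C, H, W); flow: (B, 2, H, W) pixel displacements (x, y).
    """
    b, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow  # (B, 2, H, W)
    # Normalize to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)


class SharpnessGuidedDeblurNet(nn.Module):
    """Encoder-decoder that fuses blurry-frame features with exemplar features."""

    def __init__(self, ch=32):
        super().__init__()
        # Encoder for the stacked reference and warped neighbor (6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Separate encoder extracting sharpness features from the exemplar.
        self.exemplar_encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder restores the sharp reference frame from the fused features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 4, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
        )

    def forward(self, blurry_ref, warped_neighbor, exemplar):
        feats = self.encoder(torch.cat([blurry_ref, warped_neighbor], dim=1))
        sharp_feats = self.exemplar_encoder(exemplar)
        fused = torch.cat([feats, sharp_feats], dim=1)
        # Residual prediction: the network learns a correction to the blurry input.
        return blurry_ref + self.decoder(fused)


if __name__ == "__main__":
    b, h, w = 1, 64, 64
    ref = torch.rand(b, 3, h, w)        # blurry reference frame
    neighbor = torch.rand(b, 3, h, w)   # blurry neighboring frame
    flow = torch.zeros(b, 2, h, w)      # placeholder; a separate flow estimator would supply this
    exemplar = torch.rand(b, 3, h, w)   # sharp exemplar providing sharpness features
    warped = warp_with_flow(neighbor, flow)
    restored = SharpnessGuidedDeblurNet()(ref, warped, exemplar)
    print(restored.shape)  # torch.Size([1, 3, 64, 64])

In the paper's setting, the optical flow would come from a dedicated estimator and the exemplar would be a sharp frame or patch; here both are random placeholders so the sketch runs standalone.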