Deep Video Deblurring Using Sharpness Features from Exemplars

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 29, p. 1
Main Authors: Xiang, Xinguang; Wei, Hao; Pan, Jinshan
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020
ISSN: 1057-7149
EISSN: 1941-0042

Abstract: Video deblurring is a challenging problem, as the blur in videos is usually caused by camera shake, object motion, depth variation, and so on. Existing methods usually impose handcrafted image priors or use end-to-end trainable networks to solve this problem. However, using image priors usually leads to highly non-convex problems, while directly using end-to-end trainable networks in a regression setting tends to over-smooth details in the restored images. In this paper, we explore sharpness features from exemplars to aid blur removal and detail restoration. We first estimate optical flow to exploit temporal information and make full use of neighboring frames. Then, we develop an encoder-decoder network and use the sharpness features from exemplars to guide the network toward better image restoration. We train the proposed algorithm in an end-to-end manner and show that using sharpness features from exemplars helps blur removal and detail restoration. Both quantitative and qualitative evaluations demonstrate that our method performs favorably against state-of-the-art approaches on benchmark video deblurring datasets and real-world images.
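The abstract describes a three-stage pipeline: align neighboring frames to the center frame with estimated optical flow, extract sharpness features from exemplar (sharp) frames, and fuse both inside an encoder-decoder that restores the center frame. The following is a minimal, hypothetical PyTorch sketch of how such exemplar-guided fusion could be wired up; the `warp` helper, the `ExemplarGuidedDeblur` module, the channel sizes, and the residual output are illustrative assumptions for this record, not the authors' released code.

```python
# Hypothetical sketch (not the authors' implementation): an encoder-decoder that
# restores the center frame of a blurry window, conditioned on exemplar features.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(frame, flow):
    """Backward-warp `frame` (N,C,H,W) using optical flow (N,2,H,W), flow[:,0]=dx, flow[:,1]=dy."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)   # (2,H,W) pixel coordinates
    coords = grid.unsqueeze(0) + flow                               # add flow offsets
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=3)            # (N,H,W,2)
    return F.grid_sample(frame, grid_norm, align_corners=True)


class ExemplarGuidedDeblur(nn.Module):
    """Encoder-decoder whose bottleneck is concatenated with exemplar sharpness features."""

    def __init__(self, num_frames=3, feat=64):
        super().__init__()
        # Encoder over the stacked center + aligned neighboring frames.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * num_frames, feat, 3, stride=1, padding=1), nn.ReLU(True),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(True),
        )
        # Separate branch that extracts sharpness features from the exemplar frame.
        self.exemplar_branch = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=1, padding=1), nn.ReLU(True),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(True),
        )
        # Decoder over the fused features (assumes even spatial dimensions).
        self.decoder = nn.Sequential(
            nn.Conv2d(feat * 4, feat * 2, 3, padding=1), nn.ReLU(True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(feat * 2, 3, 3, padding=1),
        )

    def forward(self, center, neighbors, flows, exemplar):
        # Align each neighboring frame to the center frame with its estimated flow.
        aligned = [warp(f, fl) for f, fl in zip(neighbors, flows)]
        x = torch.cat([center] + aligned, dim=1)
        fused = torch.cat([self.encoder(x), self.exemplar_branch(exemplar)], dim=1)
        return center + self.decoder(fused)                         # residual restoration
```

For a window of three frames, `neighbors` would hold the two adjacent blurry frames, `flows` their estimated flows toward the center frame, and `exemplar` a sharp frame (e.g., the sharpest frame detected nearby); the whole module can then be trained end-to-end against the ground-truth sharp center frame, in the spirit of the pipeline the abstract describes.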

Authors and Affiliations:
– Xiang, Xinguang: Nanjing University of Science and Technology, China
– Wei, Hao: Nanjing University of Science and Technology, China
– Pan, Jinshan: Nanjing University of Science and Technology, China (e-mail: jspan@njust.edu.cn)
CODEN: IIPRE4
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2020
DOI: 10.1109/TIP.2020.3023534
Discipline: Applied Sciences; Engineering
Grant Information:
– Natural Science Foundation of Jiangsu Province, grant BK20180471
– National Natural Science Foundation of China, grants 61872421, 61922043, U1611461
– National Key Research and Development Program of China, grant 2018AAA0102002
ORCID: 0000-0002-2344-6174; 0000-0003-0304-9507
PMID: 32936755
Subject Terms: Algorithms; Coders; Decoding; Estimation; exemplars; Feature extraction; Image restoration; Kernel; Learning systems; Object motion; optical flow; Optical flow (image analysis); Optical imaging; sharp feature fusion; Sharpness; Video deblurring
Online Access:
https://ieeexplore.ieee.org/document/9198913
https://www.ncbi.nlm.nih.gov/pubmed/32936755
https://www.proquest.com/docview/2446058655
https://www.proquest.com/docview/2443881985