Deep Video Deblurring Using Sharpness Features from Exemplars
| Published in: | IEEE Transactions on Image Processing, Volume 29, p. 1 |
|---|---|
| Main authors: | Xiang, Xinguang; Wei, Hao; Pan, Jinshan |
| Medium: | Journal Article |
| Language: | English |
| Published: | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2020 |
| ISSN: | 1057-7149, 1941-0042 |
| Abstract | Video deblurring is a challenging problem, as the blur in videos is usually caused by camera shake, object motion, depth variation, etc. Existing methods usually impose handcrafted image priors or use end-to-end trainable networks to solve this problem. However, using image priors usually leads to highly non-convex problems, while directly using end-to-end trainable networks for regression produces over-smoothed details in the restored images. In this paper, we explore sharpness features from exemplars to help blur removal and detail restoration. We first estimate optical flow to exploit temporal information, which makes full use of neighboring frames. Then, we develop an encoder-decoder network and use the sharpness features from exemplars to guide the network toward better image restoration. We train the proposed algorithm in an end-to-end manner and show that using sharpness features from exemplars helps blur removal and detail restoration. Both quantitative and qualitative evaluations demonstrate that our method performs favorably against state-of-the-art approaches on benchmark video deblurring datasets and real-world images. |
|---|---|
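The abstract's first stage, estimating optical flow and warping neighboring frames into alignment with the reference frame, can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the authors' implementation: the `backward_warp` function, nearest-neighbor sampling, and border clamping are all assumptions made here for brevity.

```python
import numpy as np

def backward_warp(frame, flow):
    """Warp a neighboring frame toward the reference frame using optical flow.

    frame: (H, W) grayscale neighbor frame.
    flow:  (H, W, 2) per-pixel displacement (dx, dy) from reference to neighbor.
    Uses nearest-neighbor sampling; out-of-bounds samples clamp to the border.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

# Toy example: the neighbor is the reference shifted right by one pixel,
# so a constant flow of (+1, 0) realigns it with the reference.
ref = np.zeros((4, 4))
ref[1:3, 1:3] = 1.0
neighbor = np.roll(ref, 1, axis=1)          # content moved right by 1
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0                          # dx = +1 everywhere
aligned = backward_warp(neighbor, flow)
```

In the paper's pipeline the aligned neighbors would then be fed, together with exemplar sharpness features, into the encoder-decoder network; here only the alignment step is sketched.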
| Author | Xiang, Xinguang; Wei, Hao; Pan, Jinshan |
| Author_xml | 1. Xinguang Xiang; 2. Hao Wei; 3. Jinshan Pan. All authors: Nanjing University of Science and Technology, China (corresponding e-mail: jspan@njust.edu.cn) |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/32936755 (View this record in MEDLINE/PubMed) |
| CODEN | IIPRE4 |
| CitedBy_id | crossref_primary_10_1109_TCSVT_2023_3262685 crossref_primary_10_1109_TPAMI_2023_3243059 crossref_primary_10_3390_app15031311 crossref_primary_10_1109_TIP_2024_3372454 crossref_primary_10_1109_TCSVT_2024_3369073 crossref_primary_10_1016_j_patcog_2023_109360 crossref_primary_10_1016_j_dsp_2023_104248 crossref_primary_10_1038_s42256_021_00392_1 crossref_primary_10_1109_TIP_2024_3482176 crossref_primary_10_1016_j_dsp_2025_105125 crossref_primary_10_1016_j_patcog_2024_110813 crossref_primary_10_1016_j_optlaseng_2023_107522 crossref_primary_10_1109_ACCESS_2023_3311033 crossref_primary_10_1109_TIP_2022_3142518 crossref_primary_10_1109_TIP_2024_3512362 crossref_primary_10_1117_1_JEI_34_2_023027 crossref_primary_10_1109_TCSVT_2022_3201045 crossref_primary_10_1109_TCSVT_2023_3325408 crossref_primary_10_1007_s11263_022_01708_3 crossref_primary_10_1051_0004_6361_202244904 crossref_primary_10_1007_s10489_022_04158_z crossref_primary_10_1109_TPAMI_2025_3557866 |
| Cites_doi | 10.1109/CVPR.2014.371 10.1609/aaai.v34i07.6818 10.1109/CVPR.2013.147 10.1109/CVPR.2016.180 10.1109/ICCV.2017.34 10.1109/ICCVW.2019.00475 10.1109/ICCV.2017.356 10.1109/CVPR.2014.374 10.1109/ICCV.2017.435 10.1109/ICCVW.2017.353 10.1109/ICCV.2019.00257 10.5201/ipol.2013.26 10.1109/CVPR.2018.00267 10.1007/978-3-319-10599-4_16 10.1109/CVPR.2017.304 10.1109/TCI.2016.2644865 10.1109/CVPRW.2019.00269 10.1109/CVPR.2018.00340 10.1109/TPAMI.2018.2832125 10.1109/CVPR.2019.00829 10.1007/978-3-642-33715-4_45 10.1109/CVPR.2013.140 10.1109/ICCV.2013.296 10.1109/CVPR.2018.00931 10.1109/CVPR.2010.5539938 10.1109/CVPR.2015.7299181 10.1109/CVPR.2015.7298677 10.1109/CVPRW.2019.00267 10.1109/CVPR.2017.35 10.1109/ICCV.2019.00567 10.1109/CVPR.2007.383214 10.1109/CVPR.2018.00853 10.1109/ICCV.2015.316 10.1007/978-3-030-01219-9_7 10.1109/CVPR.2019.00613 10.1109/CVPR.2019.00397 10.1109/ICCV.2017.274 10.1109/TIP.2003.819861 10.1109/CVPR.2017.33 10.1109/TIP.2018.2867733 |
| ContentType | Journal Article |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
| DOI | 10.1109/TIP.2020.3023534 |
| DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE/IET Electronic Library (IEL) (UW System Shared) CrossRef PubMed Computer and Information Systems Abstracts Electronics & Communications Abstracts Technology Research Database ProQuest Computer Science Collection Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional MEDLINE - Academic |
| DatabaseTitle | CrossRef PubMed Technology Research Database Computer and Information Systems Abstracts – Academic Electronics & Communications Abstracts ProQuest Computer Science Collection Computer and Information Systems Abstracts Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Professional MEDLINE - Academic |
| Discipline | Applied Sciences Engineering |
| EISSN | 1941-0042 |
| EndPage | 1 |
| ExternalDocumentID | 32936755 10_1109_TIP_2020_3023534 9198913 |
| Genre | orig-research Journal Article |
| GrantInformation_xml | – fundername: Natural Science Foundation of Jiangsu Province grantid: BK20180471 funderid: 10.13039/501100004608 – fundername: National Natural Science Foundation of China grantid: 61872421; 61922043; U1611461 funderid: 10.13039/501100001809 – fundername: National Key Research and Development Program of China grantid: 2018AAA0102002 funderid: 10.13039/501100012166 |
| ISICitedReferencesCount | 32 |
| ISSN | 1057-7149 1941-0042 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-2344-6174 0000-0003-0304-9507 |
| PMID | 32936755 |
| PQID | 2446058655 |
| PQPubID | 85429 |
| PageCount | 1 |
| PublicationCentury | 2000 |
| PublicationDate | 2020-01-01 |
| PublicationDecade | 2020 |
| PublicationPlace | United States (New York) |
| PublicationTitle | IEEE transactions on image processing |
| PublicationTitleAbbrev | TIP |
| PublicationTitleAlternate | IEEE Trans Image Process |
| PublicationYear | 2020 |
| Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
| References | ref35 ref13 ref34 ref12 ref15 ref36 ref14 ref31 ref30 ref33 ref11 kingma (ref39) 2015 ref32 ref10 ref2 ref1 ref17 ref38 ref16 ref19 ref18 ronneberger (ref37) 2015 paszke (ref40) 2017 wang (ref22) 2019 ref24 ref45 ref23 ref26 ref25 ref20 ref42 ref41 ref44 ref43 ref28 ref27 ref29 ref8 ref7 ref9 ref4 ref3 tian (ref21) 2018 ref6 ref5 |
| References_xml | – year: 2017 ident: ref40 article-title: Automatic differentiation in Pytorch publication-title: Proc Neural Inf Process Syst Autodiff Workshop – ident: ref7 doi: 10.1109/CVPR.2014.371 – ident: ref44 doi: 10.1609/aaai.v34i07.6818 – ident: ref8 doi: 10.1109/CVPR.2013.147 – ident: ref6 doi: 10.1109/CVPR.2016.180 – ident: ref23 doi: 10.1109/ICCV.2017.34 – ident: ref18 doi: 10.1109/ICCVW.2019.00475 – ident: ref4 doi: 10.1109/ICCV.2017.356 – ident: ref32 doi: 10.1109/CVPR.2014.374 – ident: ref24 doi: 10.1109/ICCV.2017.435 – ident: ref15 doi: 10.1109/ICCVW.2017.353 – ident: ref19 doi: 10.1109/ICCV.2019.00257 – ident: ref35 doi: 10.5201/ipol.2013.26 – ident: ref14 doi: 10.1109/CVPR.2018.00267 – ident: ref33 doi: 10.1007/978-3-319-10599-4_16 – ident: ref27 doi: 10.1109/CVPR.2017.304 – start-page: 234 year: 2015 ident: ref37 article-title: U-Net: Convolutional networks for biomedical image segmentation publication-title: Proc Int Conf Med Image Comput Comput -Assist Intervent (MICCAI) – ident: ref38 doi: 10.1109/TCI.2016.2644865 – ident: ref28 doi: 10.1109/CVPRW.2019.00269 – ident: ref20 doi: 10.1109/CVPR.2018.00340 – year: 2018 ident: ref21 article-title: TDAN: Temporally deformable alignment network for video super-resolution publication-title: arXiv 1812 02898 – ident: ref43 doi: 10.1109/TPAMI.2018.2832125 – ident: ref25 doi: 10.1109/CVPR.2019.00829 – ident: ref5 doi: 10.1007/978-3-642-33715-4_45 – ident: ref30 doi: 10.1109/CVPR.2013.140 – ident: ref45 doi: 10.1109/ICCV.2013.296 – ident: ref42 doi: 10.1109/CVPR.2018.00931 – ident: ref31 doi: 10.1109/CVPR.2010.5539938 – ident: ref34 doi: 10.1109/CVPR.2015.7299181 – ident: ref1 doi: 10.1109/CVPR.2015.7298677 – ident: ref2 doi: 10.1109/CVPRW.2019.00267 – ident: ref10 doi: 10.1109/CVPR.2017.35 – ident: ref12 doi: 10.1109/ICCV.2019.00567 – ident: ref3 doi: 10.1109/CVPR.2007.383214 – ident: ref9 doi: 10.1109/CVPR.2018.00853 – ident: ref36 doi: 10.1109/ICCV.2015.316 – ident: ref17 doi: 
10.1007/978-3-030-01219-9_7 – start-page: 1954 year: 2019 ident: ref22 article-title: EDVR: Video restoration with enhanced deformable convolutional networks publication-title: Proc IEEE/CVF Conf Comput Vis Pattern Recognit Workshops (CVPRW) – ident: ref13 doi: 10.1109/CVPR.2019.00613 – ident: ref11 doi: 10.1109/CVPR.2019.00397 – ident: ref29 doi: 10.1109/ICCV.2017.274 – year: 2015 ident: ref39 article-title: Adam: A method for stochastic optimization publication-title: Proc Int Conf Learn Represent (ICLR) – ident: ref41 doi: 10.1109/TIP.2003.819861 – ident: ref16 doi: 10.1109/CVPR.2017.33 – ident: ref26 doi: 10.1109/TIP.2018.2867733 |
| SourceID | proquest pubmed crossref ieee |
| SourceType | Aggregation Database Index Database Enrichment Source Publisher |
| StartPage | 1 |
| SubjectTerms | Algorithms Coders Decoding Estimation exemplars Feature extraction Image restoration Kernel Learning systems Object motion optical flow Optical flow (image analysis) Optical imaging sharp feature fusion Sharpness Video deblurring |
| Title | Deep Video Deblurring Using Sharpness Features from Exemplars |
| URI | https://ieeexplore.ieee.org/document/9198913 https://www.ncbi.nlm.nih.gov/pubmed/32936755 https://www.proquest.com/docview/2446058655 https://www.proquest.com/docview/2443881985 |
| Volume | 29 |