Reblurring-Guided Single Image Defocus Deblurring: A Learning Framework with Misaligned Training Pairs
Saved in:
| Published in: | International journal of computer vision, Volume 133, Issue 10, pp. 6953–6970 |
|---|---|
| Main authors: | Ren, Dongwei; Shu, Xinya; Li, Yu; Wu, Xiaohe; Li, Jin; Zuo, Wangmeng |
| Format: | Journal Article |
| Language: | English |
| Publication details: | New York: Springer US, 01.10.2025 (Springer Nature B.V) |
| Subject: | |
| ISSN: | 0920-5691, 1573-1405 |
| Online access: | Get full text |
| Abstract | For single image defocus deblurring, acquiring well-aligned training pairs (or training triplets), i.e., a defocus blurry image, an all-in-focus sharp image (and a defocus blur map), is a challenging task for developing effective deblurring models. Existing image defocus deblurring methods typically rely on training data collected by specialized imaging equipment, with the assumption that these pairs or triplets are perfectly aligned. However, in practical scenarios involving the collection of real-world data, direct acquisition of training triplets is infeasible, and training pairs inevitably encounter spatial misalignment issues. In this work, we introduce a reblurring-guided learning framework for single image defocus deblurring, enabling the learning of a deblurring network even with misaligned training pairs. By reconstructing spatially variant isotropic blur kernels, our reblurring module ensures spatial consistency between the deblurred image, the reblurred image and the input blurry image, thereby addressing the misalignment issue while effectively extracting sharp textures from the all-in-focus sharp image. Moreover, spatially variant blur can be derived from the reblurring module, and serve as pseudo supervision for the defocus blur map during training, interestingly transforming training pairs into training triplets. To leverage this pseudo supervision, we propose a lightweight defocus blur estimator coupled with a fusion block, which enhances deblurring performance through seamless integration with state-of-the-art deblurring networks. Additionally, we have collected a new dataset for single image defocus deblurring (SDD) with typical misalignments, which not only validates our proposed method but also serves as a benchmark for future research. The effectiveness of our method is validated by notable improvements in both quantitative metrics and visual quality across several datasets with real-world defocus blurry images, including DPDD, RealDOF, DED, and our SDD. The source code and dataset are available at https://github.com/ssscrystal/Reblurring-guided-JDRL. |
|---|---|
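The core idea named in the abstract, reblurring a sharp estimate with spatially variant isotropic kernels so it can be compared against the blurry input, can be illustrated with a minimal NumPy/SciPy sketch. This is a simplification under stated assumptions: the paper learns its reblurring module end to end, while the function below merely blends a small bank of uniformly Gaussian-blurred copies according to a per-pixel sigma map; all names (`reblur_spatially_variant`, `sigma_map`, the bank levels) are illustrative, not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reblur_spatially_variant(sharp, sigma_map, sigmas=(0.0, 1.0, 2.0, 4.0)):
    """Approximate spatially variant isotropic reblurring.

    Blurs `sharp` (H x W float array) with a small bank of isotropic
    Gaussians, then linearly interpolates between the two nearest bank
    levels at every pixel according to the per-pixel `sigma_map`.
    A hypothetical stand-in for a learned reblurring module.
    """
    sigmas = np.asarray(sigmas, dtype=np.float64)
    # Precompute the blur bank: one uniformly blurred copy per sigma level.
    bank = np.stack([sharp if s == 0 else gaussian_filter(sharp, s)
                     for s in sigmas])
    # Clip requested sigmas into the bank's range and locate each pixel's
    # enclosing [lo, hi] sigma interval.
    s = np.clip(sigma_map, sigmas[0], sigmas[-1])
    idx = np.clip(np.searchsorted(sigmas, s) - 1, 0, len(sigmas) - 2)
    lo, hi = sigmas[idx], sigmas[idx + 1]
    w = (s - lo) / np.maximum(hi - lo, 1e-8)
    # Per-pixel linear blend of the two nearest uniformly blurred copies.
    rows, cols = np.indices(sharp.shape)
    return (1 - w) * bank[idx, rows, cols] + w * bank[idx + 1, rows, cols]
```

With a zero sigma map the output is the sharp image itself, and with a constant sigma map it reduces to an ordinary uniform Gaussian blur; a spatially varying map yields the defocus-like, depth-dependent blur the paper's reblurring module is designed to reproduce.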
| Author | Ren, Dongwei; Li, Yu; Wu, Xiaohe; Zuo, Wangmeng; Shu, Xinya; Li, Jin |
| Author details | 1. Ren, Dongwei (ORCID 0000-0002-0965-6810; rendongweihit@gmail.com), College of Intelligence and Computing, Tianjin University. 2. Shu, Xinya, Faculty of Computing, Harbin Institute of Technology. 3. Li, Yu, Faculty of Computing, Harbin Institute of Technology. 4. Wu, Xiaohe, Faculty of Computing, Harbin Institute of Technology. 5. Li, Jin, School of Electrical and Information Engineering, Tianjin University. 6. Zuo, Wangmeng, Faculty of Computing, Harbin Institute of Technology |
| ContentType | Journal Article |
| Copyright | The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. |
| DOI | 10.1007/s11263-025-02522-3 |
| Discipline | Applied Sciences Computer Science |
| EISSN | 1573-1405 |
| EndPage | 6970 |
| GrantInformation_xml | – fundername: National Natural Science Foundation of China grantid: 62172127; U22B2035 funderid: http://dx.doi.org/10.13039/501100001809 |
| ISSN | 0920-5691 |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 10 |
| Keywords | Isotropic blur kernels; Reblurring model; Image deblurring; Defocus deblurring |
| Language | English |
| ORCID | 0000-0002-0965-6810 |
| PageCount | 18 |
| PublicationDate | 2025-10-01 |
| PublicationPlace | New York |
| PublicationTitle | International journal of computer vision |
| PublicationTitleAbbrev | Int J Comput Vis |
| PublicationYear | 2025 |
| Publisher | Springer US Springer Nature B.V |
| StartPage | 6953 |
| SubjectTerms | Artificial Intelligence; Blurring; Cameras; Computer Imaging; Computer Science; Datasets; Deep learning; Design; Effectiveness; Image acquisition; Image Processing and Computer Vision; Learning; Misalignment; Modules; Neural networks; Pattern Recognition; Pattern Recognition and Graphics; Source code; Supervision; Vision |
| Title | Reblurring-Guided Single Image Defocus Deblurring: A Learning Framework with Misaligned Training Pairs |
| URI | https://link.springer.com/article/10.1007/s11263-025-02522-3 https://www.proquest.com/docview/3259951235 |
| Volume | 133 |