Joint self-supervised and reference-guided learning for depth inpainting

Bibliographic details
Published in: Computational Visual Media (Beijing), Vol. 8, No. 4, pp. 597-612
Main authors: Wu, Heng; Fu, Kui; Zhao, Yifan; Song, Haokun; Li, Jia
Format: Journal Article
Language: English
Published: Tsinghua University Press (Beijing) and Springer Nature B.V., 1 December 2022
ISSN: 2096-0433; EISSN: 2096-0662
Online access: Full text

Abstract: Depth information can benefit various computer vision tasks on both images and videos. However, depth maps may suffer from invalid values at many pixels, as well as from large holes. To improve such data, we propose a joint self-supervised and reference-guided learning approach for depth inpainting. For the self-supervised learning strategy, we introduce an improved spatial convolutional sparse coding module in which total variation regularization is employed to enhance the structural information while preserving edge information. This module alternately learns a convolutional dictionary and sparse coding from a corrupted depth map. Then, both the learned convolutional dictionary and sparse coding are convolved to yield an initial depth map, which is effectively smoothed using local contextual information. The reference-guided learning part is inspired by the fact that adjacent pixels with close colors in the RGB image tend to have similar depth values. We thus construct a hierarchical joint bilateral filter module that uses the corresponding color image to fill in large holes. In summary, our approach integrates a convolutional sparse coding module to preserve local contextual information and a hierarchical joint bilateral filter module for filling using specific adjacent information. Experimental results show that the proposed approach works well for both invalid value restoration and large hole inpainting.
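
The self-supervised module described above can be read as fitting a convolutional dictionary and sparse coefficient maps to the corrupted depth map under a total variation (TV) penalty. The snippet below is only a hedged sketch of that kind of objective, not the authors' formulation: the masking of the data term to valid pixels, the L1 sparsity term, and the weights lam and gamma are assumptions made for illustration.

```python
# Sketch of a TV-regularized convolutional sparse coding objective for a
# corrupted depth map (illustrative assumptions, not the paper's code).
import numpy as np
from scipy.signal import convolve2d

def reconstruct(filters, codes):
    """Depth estimate: sum over k of filter_k convolved with coefficient map z_k."""
    return sum(convolve2d(z, d, mode="same") for d, z in zip(filters, codes))

def total_variation(img):
    """Anisotropic TV: sum of absolute horizontal and vertical differences."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def objective(depth, valid_mask, filters, codes, lam=0.1, gamma=0.05):
    """Masked reconstruction error + sparsity penalty + TV smoothness penalty."""
    est = reconstruct(filters, codes)
    data_term = ((valid_mask * (depth - est)) ** 2).sum()
    sparsity = lam * sum(np.abs(z).sum() for z in codes)
    smoothness = gamma * total_variation(est)
    return data_term + sparsity + smoothness
```

As the abstract states, the dictionary and the codes are learned alternately from the corrupted depth map; the reconstructed estimate then plays the role of the initial, locally smoothed depth map.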
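
The reference-guided module builds on the observation quoted above that adjacent pixels with close colors in the RGB image tend to have similar depth values, which is exactly what a joint bilateral filter encodes. The sketch below is a minimal single-scale illustration under assumed conventions (zero marks an invalid depth pixel, the RGB image is registered to the depth map, and the window radius and sigma values are placeholders); it is not the authors' hierarchical module.

```python
# Sketch of color-guided (joint bilateral) filling of invalid depth pixels.
import numpy as np

def joint_bilateral_fill(depth, rgb, radius=5, sigma_s=3.0, sigma_c=10.0):
    """Fill zero-valued depth pixels from valid neighbours, weighting each
    neighbour by spatial closeness and by color similarity in the RGB image."""
    h, w = depth.shape
    filled = depth.astype(np.float64)
    valid = depth > 0
    for y, x in zip(*np.where(~valid)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        win_valid = valid[y0:y1, x0:x1]
        if not win_valid.any():
            continue  # no valid neighbour at this scale; a coarser level would handle it
        yy, xx = np.mgrid[y0:y1, x0:x1]
        spatial = ((yy - y) ** 2 + (xx - x) ** 2) / (2.0 * sigma_s ** 2)
        diff = rgb[y0:y1, x0:x1].astype(np.float64) - rgb[y, x].astype(np.float64)
        color = (diff ** 2).sum(axis=-1) / (2.0 * sigma_c ** 2)
        weight = np.exp(-(spatial + color)) * win_valid
        filled[y, x] = (weight * depth[y0:y1, x0:x1]).sum() / weight.sum()
    return filled
```

The hierarchical module of the paper presumably organizes such color-guided filtering across levels so that holes wider than a single filter window can still be filled; the loop above only shows the per-pixel weighting.
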
Author details: all five authors are with the State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University. Contact email: jiali@buaa.edu.cn (Li, Jia).
Copyright: The Author(s) 2022. This work is published under the Creative Commons Attribution 4.0 license (http://creativecommons.org/licenses/by/4.0/).
DOI: 10.1007/s41095-021-0259-z
Keywords: self-supervised learning; reference-guided learning; depth inpainting
Open access full text: https://link.springer.com/10.1007/s41095-021-0259-z
Subject terms: Artificial Intelligence; Coding; Color imagery; Computer Graphics; Computer Science; Computer vision; Dictionaries; Image Processing and Computer Vision; Modules; Pixels; Regularization; Research Article; Supervised learning; User Interfaces and Human Computer Interaction