Learning 3D Shape Completion Under Weak Supervision
| Published in: | International Journal of Computer Vision, Volume 128, Issue 5, pp. 1162–1181 |
| Main authors: | David Stutz, Andreas Geiger |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: Springer US, 01.05.2020 (Springer; Springer Nature B.V.) |
| ISSN: | 0920-5691, 1573-1405 |
| Abstract | We address the problem of 3D shape completion from sparse and noisy point clouds, a fundamental problem in computer vision and robotics. Recent approaches are either data-driven or learning-based: data-driven approaches rely on a shape model whose parameters are optimized to fit the observations, whereas learning-based approaches avoid the expensive optimization step by learning to directly predict complete shapes from incomplete observations in a fully supervised setting. However, full supervision is often not available in practice. In this work, we propose a weakly supervised, learning-based approach to 3D shape completion which requires neither slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks, resulting in efficient shape completion without sacrificing accuracy. On synthetic benchmarks based on ShapeNet (Chang et al., Shapenet: an information-rich 3d model repository, 2015, arXiv:1512.03012) and ModelNet (Wu et al., in: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2015), as well as on real robotics data from KITTI (Geiger et al., in: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2012) and Kinect (Yang et al., 3d object dense reconstruction from a single depth view, 2018, arXiv:1802.00411), we demonstrate that the proposed amortized maximum likelihood approach competes with the fully supervised baseline of Dai et al. (in: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2017) and outperforms the data-driven approach of Engelmann et al. (in: Proceedings of the German conference on pattern recognition (GCPR), 2016), while requiring less supervision and being significantly faster. |
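The abstract's central idea, amortizing maximum likelihood fitting, can be sketched briefly. The notation below (incomplete observation x, latent shape code z, pre-trained shape prior with decoder p_θ and code prior p(z), encoder g_φ) is generic and chosen for illustration; it is not quoted from the paper.

```latex
% Minimal sketch of amortized maximum likelihood, under the generic notation above.

% Data-driven completion: fit the shape prior to each observation x by a slow
% per-instance optimization over the latent code z at test time:
\hat{z}(x) \;=\; \arg\max_{z} \; \log p_{\theta}(x \mid z) + \log p(z)

% Amortized maximum likelihood: train an encoder g_{\phi} once, over all
% unlabeled observations x \in \mathcal{X}, to predict the code directly,
% so that completion reduces to a single forward pass z = g_{\phi}(x):
\max_{\phi} \; \sum_{x \in \mathcal{X}} \Big[ \log p_{\theta}\big(x \mid g_{\phi}(x)\big) + \log p\big(g_{\phi}(x)\big) \Big]
```

Because the likelihood is evaluated only against the incomplete observations themselves, together with the prior over codes, no complete ground-truth shapes are needed to train the encoder; the shape prior itself is learned beforehand on synthetic data, as the abstract states.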
| Audience | Academic |
| Authors | David Stutz (Max Planck Institute for Informatics; david.stutz@mpi-inf.mpg.de; ORCID 0000-0002-6286-1805), Andreas Geiger (Max Planck Institute for Intelligent Systems and University of Tübingen) |
| ContentType | Journal Article |
| Copyright | The Author(s) 2018. © 2020 Springer. |
| DOI | 10.1007/s11263-018-1126-y |
| Discipline | Applied Sciences; Computer Science |
| EISSN | 1573-1405 |
| EndPage | 1181 |
| GrantInformation | Max Planck Institute for Informatics |
| ISSN | 0920-5691 |
| IsDoiOpenAccess | true |
| IsOpenAccess | true |
| IsPeerReviewed | true |
| IsScholarly | true |
| Issue | 5 |
| Keywords | Benchmark; 3D reconstruction; Weakly-supervised learning; 3D shape completion; Amortized inference |
| Language | English |
| ORCID | 0000-0002-6286-1805 |
| OpenAccessLink | https://link.springer.com/10.1007/s11263-018-1126-y |
| PageCount | 20 |
| PublicationDate | 2020-05-01 |
| PublicationPlace | New York |
| PublicationTitle | International journal of computer vision |
| PublicationTitleAbbrev | Int J Comput Vis |
| PublicationYear | 2020 |
| Publisher | Springer US; Springer; Springer Nature B.V. |
| SubjectTerms | Artificial Intelligence; Artificial neural networks; Computer Imaging; Computer Science; Computer vision; Image Processing and Computer Vision; Image reconstruction; Machine vision; Object recognition; Pattern Recognition; Pattern Recognition and Graphics; Robotics; Shape optimization; Special Issue on Deep Learning for Robotic Vision; Supervised learning; Supervision; Three dimensional models; Vision |
| URI | https://link.springer.com/article/10.1007/s11263-018-1126-y https://www.proquest.com/docview/2126731859 |