SoftPool++: An Encoder–Decoder Network for Point Cloud Completion

Published in: International Journal of Computer Vision, Vol. 130, No. 5, pp. 1145–1164
Main authors: Wang, Yida; Tan, David Joseph; Navab, Nassir; Tombari, Federico
Format: Journal Article
Language: English
Published: New York: Springer US, 01.05.2022
ISSN: 0920-5691 (print), 1573-1405 (electronic)
Abstract: We propose a novel convolutional operator for the task of point cloud completion. One striking characteristic of our approach is that, contrary to related work, it does not require any max-pooling or voxelization operation. Instead, the proposed operator, used to learn the point cloud embedding in the encoder, extracts permutation-invariant features from the point cloud via a soft-pooling of feature activations, which preserves fine-grained geometric details. These features are then passed on to a decoder architecture. Due to the compression in the encoder, a typical limitation of this type of architecture is that it tends to lose parts of the input shape structure. We propose to overcome this limitation with skip connections specifically devised for point clouds, which establish links between corresponding layers in the encoder and the decoder. As part of these connections, we introduce a transformation matrix that projects the features from the encoder to the decoder and vice versa. Quantitative and qualitative results on the task of object completion from partial scans on the ShapeNet dataset show that incorporating our approach achieves state-of-the-art performance in shape completion at both low and high resolutions.
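The permutation-invariant soft-pooling described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption of one simple way to realize a softmax-weighted pooling over per-point features, not the paper's exact operator; it only shows why such an aggregation is invariant to the ordering of input points while remaining softer than max-pooling.

```python
import numpy as np

def soft_pool(features, temperature=1.0):
    """Aggregate a set of per-point features into one descriptor.

    features: (N, C) array of N point features with C channels.
    Each channel is reduced by a softmax-weighted sum over the N
    points, so strong activations dominate without the hard argmax
    of max-pooling. The result is independent of point order.
    """
    # per-channel softmax weights over the N points
    w = np.exp(features / temperature)
    w = w / w.sum(axis=0, keepdims=True)
    return (w * features).sum(axis=0)  # shape (C,)

rng = np.random.default_rng(0)
pts = rng.random((1024, 64)).astype(np.float32)
desc = soft_pool(pts)

# permutation invariance: shuffling the points leaves the descriptor unchanged
perm = rng.permutation(1024)
assert np.allclose(desc, soft_pool(pts[perm]))
```

Lowering `temperature` pushes the operator toward max-pooling; raising it toward average-pooling, which is the intuition behind preserving finer geometric detail than a hard max.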
Audience Academic
Author Tan, David Joseph
Navab, Nassir
Tombari, Federico
Wang, Yida
Author_xml – sequence: 1
  givenname: Yida
  orcidid: 0000-0003-4519-9108
  surname: Wang
  fullname: Wang, Yida
  email: yida.wang@tum.de
  organization: Technische Universität München
– sequence: 2
  givenname: David Joseph
  surname: Tan
  fullname: Tan, David Joseph
  organization: Google
– sequence: 3
  givenname: Nassir
  surname: Navab
  fullname: Navab, Nassir
  organization: Technische Universität München
– sequence: 4
  givenname: Federico
  surname: Tombari
  fullname: Tombari, Federico
  organization: Technische Universität München, Google
ContentType Journal Article
Copyright The Author(s) 2022
COPYRIGHT 2022 Springer
The Author(s) 2022. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DOI 10.1007/s11263-022-01588-7
Discipline Applied Sciences
Computer Science
EISSN 1573-1405
EndPage 1164
ExternalDocumentID A701892049
10_1007_s11263_022_01588_7
GrantInformation_xml – fundername: Technische Universität München (1025)
ISSN 0920-5691
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 5
Keywords Skip-connection
SoftPool
Point cloud
Completion
Language English
ORCID 0000-0003-4519-9108
OpenAccessLink https://link.springer.com/10.1007/s11263-022-01588-7
PageCount 20
PublicationDate 2022-05-01
PublicationPlace New York
PublicationTitle International journal of computer vision
PublicationTitleAbbrev Int J Comput Vis
PublicationYear 2022
Publisher Springer US
Springer
Springer Nature B.V
References Wang, H., Liu, Q., Yue, X., Lasenby, J., & Kusner, M. J. (2021a). Unsupervised point cloud pre-training via occlusion completion. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 9782–9792).
XieHYaoHZhangSZhouSSunWPix2vox++: Multi-scale context-aware 3d object reconstruction from single and multiple imagesInternational Journal of Computer Vision2020128122919293510.1007/s11263-020-01347-6
Yu, X., Rao, Y., Wang, Z., Liu, Z., Lu, J., & Zhou, J. (2021). Pointr: Diverse point cloud completion with geometry-aware transformers. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 12498–12507).
Han, Z., Shang, M., Liu, Y. S., & Zwicker, M. (2019). View inter-prediction gan: Unsupervised representation learning for 3d shapes by learning global shape memories to support local view predictions. In Proceedings of the AAAI conference on artificial intelligence (Vol. 33, pp. 8376–8384).
Aoki, Y., Goforth, H., Srivatsan, R. A., & Lucey, S. (2019). Pointnetlk: Robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 7163–7172).
Azad, R., Asadi-Aghbolaghi, M., Fathy, M., & Escalera, S. (2019). Bi-directional ConvLSTM U-net with Densley connected convolutions. In Proceedings of the IEEE/CVF international conference on computer vision workshops (pp. 406–415).
PanLECG: Edge-aware point cloud completion with graph convolutionIEEE Robotics and Automation Letters2020534392439810.1109/LRA.2020.2994483
Wu, J., Zhang, C., Xue, T., Freeman, W. T., & Tenenbaum, J. B. (2016). Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Proceedings of the 30th international conference on neural information processing systems (pp. 82–90).
Yang, Z., Sun, Y., Liu, S., Shen, X., & Jia, J. (2019). STD: Sparse-to-dense 3d object detector for point cloud. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1951–1960).
Li, P., Wang, Q., & Zhang, L. (2013). A novel earth mover’s distance methodology for image matching with gaussian mixture models. In The IEEE international conference on computer vision (ICCV).
Li, Y., Bu, R., Sun, M., Wu, W., Di, X., & Chen, B. (2018). Pointcnn: Convolution on x-transformed points. In Advances in neural information processing systems (pp. 820–830).
Gong, B., Nie, Y., Lin, Y., Han, X., & Yu, Y. (2021). ME-PCN: Point completion conditioned on mask emptiness. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 12488–12497).
Zhou, L., Du, Y., & Wu, J. (2021). 3d shape generation and completion through point-voxel diffusion. arXiv preprint arXiv:2104.03670
Qi, C. R., Litany, O., He, K., & Guibas, L. J. (2019). Deep hough voting for 3d object detection in point clouds. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 9277–9286).
Choy, C. B., Xu, D., Gwak, J., Chen, K., & Savarese, S. (2016). 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In European conference on computer vision (pp. 628–644). Springer.
CortesCVapnikVSupport-vector networksMachine Learning19952032732970831.68098
Wang, L., Huang, Y., Hou, Y., Zhang, S., & Shan, J. (2019a). Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10296–10305).
Xu, X., & Lee, G. H. (2020). Weakly supervised semantic point cloud segmentation: Towards 10x fewer labels. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 13706–13715).
Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T., & Ronneberger, O. (2016). 3d u-net: Learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention (pp. 424–432). Springer.
Sauder, J., & Sievers, B. (2019). Self-supervised deep learning on point clouds by reconstructing space. arXiv preprint arXiv:1901.08396
Mazaheri, G., Mithun, N. C., Bappy, J. H., & Roy-Chowdhury, A. K. (2019). A skip connection architecture for localization of image manipulations. In CVPR workshops (pp. 119–129).
Lei, H., Akhtar, N., Mian, A. (2020). Seggcn: Efficient 3d point cloud segmentation with fuzzy spherical kernel. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11611–11620).
Noh, H., Hong, S., Han, B. (2015). Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE international conference on computer vision (pp. 1520–1528).
Shu, D. W., Park, S. W., & Kwon, J. (2019). 3d point cloud generative adversarial network based on tree structured graph convolutions. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 3859–3868).
Yang, Y., Feng, C., Shen, Y., & Tian, D. (2018b). Foldingnet: Point cloud auto-encoder via deep grid deformation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 206–215).
Mao, J., Wang, X., & Li, H.(2019). Interpolated convolutional networks for 3d point cloud understanding. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1578–1587).
Lin, Z. H., Huang, S. Y., & Wang, Y. C. F. (2020). Convolution in the cloud: Learning deformable kernels in 3d graph convolution networks for point cloud analysis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1800–1809).
Pan, L., Chen, X., Cai, Z., Zhang, J., Zhao, H., Yi, S., & Liu, Z. (2021). Variational relational point completion network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8524–8533).
Kirillov, A., Wu, Y., He, K., & Girshick, R. (2020). Pointrend: Image segmentation as rendering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9799–9808).
Yang, B., Wen, H., Wang, S., Clark, R., Markham, A., & Trigoni, N. (2017). 3d object reconstruction from a single depth view with adversarial learning. In Proceedings of the IEEE international conference on computer vision workshops (pp. 679–688).
Tchapmi, L. P., Kosaraju, V., Rezatofighi, H., Reid, I., & Savarese, S. (2019). Topnet: Structural point cloud decoder. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 383–392).
YangBRosaSMarkhamATrigoniNWenHDense 3d object reconstruction from a single depth viewIEEE Transactions on Pattern Analysis and Machine Intelligence2018412820283410.1109/TPAMI.2018.2868195
Qi, C. R., Yi, L., Su, H., & Guibas, L. J. (2017b). Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems (NIPS).
Zhang, W., Yan, Q., & Xiao, C. (2020b). Detail preserved point cloud completion via separated feature aggregation. arXiv preprint arXiv:2007.02374
Dai, A., Qi, C. R., & Nießner, M. (2017). Shape completion using 3d-encoder-predictor CNNS and shape synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR) (Vol. 3).
Wang, X., Ang Jr, M. H., & Lee, G. H. (2020a). Cascaded refinement network for point cloud completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
Gao, H., Tao, X., Shen, X., & Jia, J. (2019). Dynamic scene deblurring with parameter selective sharing and nested skip connections. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3848–3856).
Wang, Y., Tan, D. J., Navab, N., & Tombari, F. (2019b). Forknet: Multi-branch volumetric semantic completion from a single depth image. In Proceedings of the IEEE international conference on computer vision (pp. 8608–8617).
Park, J. J., Florence, P., Straub, J., Newcombe, R., & Lovegrove, S. (2018). Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 165–174).
Xie, H., Yao, H., Sun, X., Zhou, S., & Zhang, S. (2019). Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 2690–2698).
Xie, H., Yao, H., Zhou, S., Mao, J., Zhang, S., & Sun, W. (2020b). GRNET: Gridding residual network for dense point cloud completion. In A. Vedaldi, H. Bischof, T. Brox, & J. M. Frahm (Eds.), Computer vision—ECCV 2020 (pp. 365–381). Springer.
Shi, W., & Rajkumar, R. (2020). Point-GNN: Graph neural network for 3d object detection in a point cloud. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1711–1719).
Sharma, A., Grau, O., & Fritz, M. (2016). VCONV-DAE: Deep volumetric shape learning without object labels. In European conference on computer vision (pp. 236–250). Springer.
Zhirong W., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., & Xiao, J. (2015). 3d shapenets: A deep representation for volumetric shapes. In 2015 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1912–1920).
Mo, K., Zhu, S., Chang, A. X., Yi, L., Tripathi, S., Guibas, L. J., & Su, H. (2019). PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In The IEEE conference on computer vision and pattern recognition (CVPR).
Huang, T., Zou, H., Cui, J., Yang, X., Wang, M., Zhao, X., Zhang, J., Yuan, Y., Xu, Y., & Liu, Y. (2021). RFNET: Recurrent forward network for dense point cloud completion. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 12508–12517).
Chibane, J., Alldieck, T., & Pons-Moll, G. (2020). Implicit functions in feature space for 3d shape reconstruction and completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 6970–6981).
Groueix, T., Fisher, M., Kim, V. G., Russell, B. C., & Aubry, M. (2018). A Papier-Mâché approach to learning 3d
1588_CR48
1588_CR49
1588_CR46
1588_CR1
1588_CR47
1588_CR44
1588_CR45
1588_CR42
1588_CR43
1588_CR6
1588_CR40
1588_CR41
1588_CR8
1588_CR9
1588_CR2
1588_CR3
1588_CR4
1588_CR5
C Cortes (1588_CR7) 1995; 20
1588_CR39
1588_CR37
1588_CR38
1588_CR35
1588_CR36
1588_CR33
1588_CR34
1588_CR31
B Yang (1588_CR56) 2018; 41
1588_CR32
1588_CR30
1588_CR28
1588_CR29
1588_CR26
1588_CR24
1588_CR25
1588_CR22
1588_CR66
1588_CR23
1588_CR67
1588_CR20
1588_CR64
1588_CR21
1588_CR65
1588_CR62
1588_CR63
1588_CR60
1588_CR61
H Xie (1588_CR53) 2020; 128
1588_CR19
1588_CR17
1588_CR18
1588_CR15
1588_CR59
1588_CR16
1588_CR13
1588_CR57
1588_CR14
1588_CR58
1588_CR11
L Pan (1588_CR27) 2020; 5
1588_CR55
1588_CR12
1588_CR10
1588_CR54
1588_CR51
1588_CR52
1588_CR50
References_xml – reference: Sauder, J., & Sievers, B. (2019). Self-supervised deep learning on point clouds by reconstructing space. arXiv preprint arXiv:1901.08396
– reference: Yuan, W., Khot, T., Held, D., Mertz, C., & Hebert, M. (2018). PCN: Point completion network. In 2018 international conference on 3D vision (3DV) (pp. 728–737). IEEE.
– reference: XieHYaoHZhangSZhouSSunWPix2vox++: Multi-scale context-aware 3d object reconstruction from single and multiple imagesInternational Journal of Computer Vision2020128122919293510.1007/s11263-020-01347-6
– reference: Han, Z., Shang, M., Liu, Y. S., & Zwicker, M. (2019). View inter-prediction gan: Unsupervised representation learning for 3d shapes by learning global shape memories to support local view predictions. In Proceedings of the AAAI conference on artificial intelligence (Vol. 33, pp. 8376–8384).
– reference: Kim, J., Lee, J. K., & Lee, K. M. (2016). Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1637–1645).
– reference: Wu, J., Zhang, C., Xue, T., Freeman, W. T., & Tenenbaum, J. B. (2016). Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Proceedings of the 30th international conference on neural information processing systems (pp. 82–90).
– reference: YangBRosaSMarkhamATrigoniNWenHDense 3d object reconstruction from a single depth viewIEEE Transactions on Pattern Analysis and Machine Intelligence2018412820283410.1109/TPAMI.2018.2868195
– reference: Yang, Y., Feng, C., Shen, Y., & Tian, D. (2018b). Foldingnet: Point cloud auto-encoder via deep grid deformation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 206–215).
– reference: Chibane, J., Alldieck, T., & Pons-Moll, G. (2020). Implicit functions in feature space for 3d shape reconstruction and completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 6970–6981).
– reference: Liu, M., Sheng, L., Yang, S., Shao, J., & Hu, S. M. (2020). Morphing and sampling network for dense point cloud completion. In Proceedings of the AAAI conference on artificial intelligence (vol. 34, pp. 11596–11603).
– reference: Xie, H., Yao, H., Zhou, S., Mao, J., Zhang, S., & Sun, W. (2020b). GRNET: Gridding residual network for dense point cloud completion. In A. Vedaldi, H. Bischof, T. Brox, & J. M. Frahm (Eds.), Computer vision—ECCV 2020 (pp. 365–381). Springer.
– reference: Groueix, T., Fisher, M., Kim, V. G., Russell, B. C., & Aubry, M. (2018). A Papier-Mâché approach to learning 3d surface generation. In The IEEE conference on computer vision and pattern recognition (CVPR).
– reference: Li, Y., Bu, R., Sun, M., Wu, W., Di, X., & Chen, B. (2018). Pointcnn: Convolution on x-transformed points. In Advances in neural information processing systems (pp. 820–830).
– reference: Yang, B., Wen, H., Wang, S., Clark, R., Markham, A., & Trigoni, N. (2017). 3d object reconstruction from a single depth view with adversarial learning. In Proceedings of the IEEE international conference on computer vision workshops (pp. 679–688).
– reference: Wang, Y., Tan, D. J., Navab, N., & Tombari, F. (2020b). Softpoolnet: Shape descriptor for point cloud completion and classification. In A. Vedaldi, H. Bischof, T. Brox, & J. M. Frahm (Eds.), Computer vision—ECCV 2020 (pp. 70–85). Springer.
– reference: Kirillov, A., Wu, Y., He, K., & Girshick, R. (2020). Pointrend: Image segmentation as rendering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9799–9808).
– reference: Lim, I., Ibing, M., & Kobbelt, L. (2019). A convolutional decoder for point clouds using adaptive instance normalization. In Computer graphics forum (Vol. 38, pp. 99–108). Wiley Online Library.
– reference: Shen, Y., Feng, C., Yang, Y., & Tian, D. (2018). Mining point cloud local structures by kernel correlation and graph pooling. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4548–4557).
– reference: Chang, A. X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., Savarese, S., Savva, M., Song, S., Su, H., & Xiao, J. (2015). Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012.
– reference: Tchapmi, L. P., Kosaraju, V., Rezatofighi, H., Reid, I., & Savarese, S. (2019). Topnet: Structural point cloud decoder. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 383–392).
– reference: Yang, Z., Sun, Y., Liu, S., Shen, X., & Jia, J. (2019). STD: Sparse-to-dense 3d object detector for point cloud. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1951–1960).
– reference: Yang, B., Wang, S., Markham, A., & Trigoni, N. (2020). Robust attentional aggregation of deep feature sets for multi-view 3d reconstruction. International Journal of Computer Vision,128(1), 53–73.
– reference: Shi, W., & Rajkumar, R. (2020). Point-GNN: Graph neural network for 3d object detection in a point cloud. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1711–1719).
– reference: Zhou, L., Du, Y., & Wu, J. (2021). 3d shape generation and completion through point-voxel diffusion. arXiv preprint arXiv:2104.03670.
– reference: Yang, F., Sun, Q., Jin, H., & Zhou, Z. (2020). Superpixel segmentation with fully convolutional networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 13964–13973).
– reference: Lin, Z. H., Huang, S. Y., & Wang, Y. C. F. (2020). Convolution in the cloud: Learning deformable kernels in 3d graph convolution networks for point cloud analysis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1800–1809).
– reference: Xiang, P., Wen, X., Liu, Y. S., Cao, Y. P., Wan, P., Zheng, W., & Han, Z. (2021). Snowflakenet: Point cloud completion by snowflake point deconvolution with skip-transformer. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 5499–5509).
– reference: Yu, X., Rao, Y., Wang, Z., Liu, Z., Lu, J., & Zhou, J. (2021). Pointr: Diverse point cloud completion with geometry-aware transformers. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 12498–12507).
– reference: Wen, X., Li, T., Han, Z., & Liu, Y. S. (2020a). Point cloud completion by skip-attention network with hierarchical folding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
– reference: Zhang, J., Chen, W., Wang, Y., Vasudevan, R., & Johnson-Roberson, M. (2020a). Point set voting for partial point cloud analysis. arXiv preprint arXiv:2007.04537.
– reference: Gong, B., Nie, Y., Lin, Y., Han, X., & Yu, Y. (2021). ME-PCN: Point completion conditioned on mask emptiness. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 12488–12497).
– reference: Lei, H., Akhtar, N., & Mian, A. (2020). Seggcn: Efficient 3d point cloud segmentation with fuzzy spherical kernel. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11611–11620).
– reference: Aoki, Y., Goforth, H., Srivatsan, R. A., & Lucey, S. (2019). Pointnetlk: Robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 7163–7172).
– reference: Wen, X., Han, Z., Cao, Y. P., Wan, P., Zheng, W., & Liu, Y. S. (2021). Cycle4completion: Unpaired point cloud completion using cycle transformation with missing region coding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 13080–13089).
– reference: Qi, C. R., Yi, L., Su, H., & Guibas, L. J. (2017b). Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems (NIPS).
– reference: Li, P., Wang, Q., & Zhang, L. (2013). A novel earth mover’s distance methodology for image matching with gaussian mixture models. In The IEEE international conference on computer vision (ICCV).
– reference: Mao, J., Wang, X., & Li, H. (2019). Interpolated convolutional networks for 3d point cloud understanding. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1578–1587).
– reference: Mo, K., Zhu, S., Chang, A. X., Yi, L., Tripathi, S., Guibas, L. J., & Su, H. (2019). PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In The IEEE conference on computer vision and pattern recognition (CVPR).
– reference: Pan, L., Chen, X., Cai, Z., Zhang, J., Zhao, H., Yi, S., & Liu, Z. (2021). Variational relational point completion network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8524–8533).
– reference: Choy, C. B., Xu, D., Gwak, J., Chen, K., & Savarese, S. (2016). 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In European conference on computer vision (pp. 628–644). Springer.
– reference: Zhang, W., Yan, Q., & Xiao, C. (2020b). Detail preserved point cloud completion via separated feature aggregation. arXiv preprint arXiv:2007.02374.
– reference: Huang, Z., Yu, Y., Xu, J., Ni, F., & Le, X. (2020). PF-NET: Point fractal network for 3d point cloud completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 7662–7670).
– reference: Azad, R., Asadi-Aghbolaghi, M., Fathy, M., & Escalera, S. (2019). Bi-directional ConvLSTM U-net with Densley connected convolutions. In Proceedings of the IEEE/CVF international conference on computer vision workshops (pp. 406–415).
– reference: Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–297.
– reference: Park, J. J., Florence, P., Straub, J., Newcombe, R., & Lovegrove, S. (2018). Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 165–174).
– reference: Shu, D. W., Park, S. W., & Kwon, J. (2019). 3d point cloud generative adversarial network based on tree structured graph convolutions. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 3859–3868).
– reference: Wen, X., Xiang, P., Han, Z., Cao, Y. P., Wan, P., Zheng, W., & Liu, Y. S. (2020b). PMP-NET: Point cloud completion by learning multi-step point moving paths. arXiv preprint arXiv:2012.03408.
– reference: Pan, L. (2020). ECG: Edge-aware point cloud completion with graph convolution. IEEE Robotics and Automation Letters, 5(3), 4392–4398. https://doi.org/10.1109/LRA.2020.2994483.
– reference: Dai, A., Qi, C. R., & Nießner, M. (2017). Shape completion using 3d-encoder-predictor CNNS and shape synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR) (Vol. 3).
– reference: Xu, X., & Lee, G. H. (2020). Weakly supervised semantic point cloud segmentation: Towards 10x fewer labels. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 13706–13715).
– reference: Tatarchenko, M., Richter, S. R., Ranftl, R., Li, Z., Koltun, V., & Brox, T. (2019). What do single-view 3d reconstruction networks learn? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3405–3414).
– reference: Wang, X., Ang Jr, M. H., & Lee, G. H. (2020a). Cascaded refinement network for point cloud completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).
– reference: Noh, H., Hong, S., & Han, B. (2015). Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE international conference on computer vision (pp. 1520–1528).
– reference: Wang, Y., Tan, D. J., Navab, N., & Tombari, F. (2019b). Forknet: Multi-branch volumetric semantic completion from a single depth image. In Proceedings of the IEEE international conference on computer vision (pp. 8608–8617).
– reference: Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T., & Ronneberger, O. (2016). 3d u-net: Learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention (pp. 424–432). Springer.
– reference: Huang, T., Zou, H., Cui, J., Yang, X., Wang, M., Zhao, X., Zhang, J., Yuan, Y., Xu, Y., & Liu, Y. (2021). RFNET: Recurrent forward network for dense point cloud completion. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 12508–12517).
– reference: Sharma, A., Grau, O., & Fritz, M. (2016). VCONV-DAE: Deep volumetric shape learning without object labels. In European conference on computer vision (pp. 236–250). Springer.
– reference: Gao, H., Tao, X., Shen, X., & Jia, J. (2019). Dynamic scene deblurring with parameter selective sharing and nested skip connections. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3848–3856).
– reference: Wang, H., Liu, Q., Yue, X., Lasenby, J., & Kusner, M. J. (2021a). Unsupervised point cloud pre-training via occlusion completion. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 9782–9792).
– reference: Wu, Z., Song, S., Khosla, A., Yu, F., Zhang, L., Tang, X., & Xiao, J. (2015). 3d shapenets: A deep representation for volumetric shapes. In 2015 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1912–1920).
– reference: Park, J., Zhou, Q. Y., & Koltun, V. (2017). Colored point cloud registration revisited. In Proceedings of the IEEE international conference on computer vision (pp. 143–152).
– reference: Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017a). Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 652–660).
– reference: Qi, C. R., Litany, O., He, K., & Guibas, L. J. (2019). Deep hough voting for 3d object detection in point clouds. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 9277–9286).
– reference: Wang, L., Huang, Y., Hou, Y., Zhang, S., & Shan, J. (2019a). Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10296–10305).
– reference: Wang, X., Ang, M. H., & Lee, G. H. (2021b). Voxel-based network for shape completion by leveraging edge generation. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 13189–13198).
– reference: Mazaheri, G., Mithun, N. C., Bappy, J. H., & Roy-Chowdhury, A. K. (2019). A skip connection architecture for localization of image manipulations. In CVPR workshops (pp. 119–129).
– reference: Xie, H., Yao, H., Sun, X., Zhou, S., & Zhang, S. (2019). Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 2690–2698).
SubjectTerms Artificial Intelligence
Cloud computing
Coders
Computer Imaging
Computer Science
Feature extraction
Image Processing and Computer Vision
Neural networks
Operators (mathematics)
Pattern Recognition
Pattern Recognition and Graphics
Permutations
Special Issue on 3D Computer Vision
Vision
Title SoftPool++: An Encoder–Decoder Network for Point Cloud Completion
URI https://link.springer.com/article/10.1007/s11263-022-01588-7
https://www.proquest.com/docview/2655924570
Volume 130