3DPointCaps++: Learning 3D Representations with Capsule Networks

Detailed bibliography
Published in: International Journal of Computer Vision, Vol. 130, Issue 9, pp. 2321-2336
Main authors: Zhao, Yongheng; Fang, Guangchi; Guo, Yulan; Guibas, Leonidas; Tombari, Federico; Birdal, Tolga
Medium: Journal Article
Language: English
Publication details: New York: Springer US, 01.09.2022
ISSN: 0920-5691, 1573-1405
Description
Summary: We present 3DPointCaps++ for learning robust, flexible, and generalizable 3D object representations without requiring heavy annotation effort or supervision. Unlike conventional 3D generative models, our algorithm aims to build a structured latent space in which certain factors of shape variation, such as object parts, can be disentangled into independent sub-spaces. Our novel decoder then acts on these individual latent sub-spaces (i.e., capsules) using deconvolution operators to reconstruct 3D points in a self-supervised manner. We further introduce a cluster loss ensuring that the points reconstructed by a single capsule remain local and do not spread across the object uncontrollably. These contributions allow our network to tackle the challenging tasks of part segmentation, part interpolation/replacement, and correspondence estimation across rigid/non-rigid shapes, both across and within categories. Our extensive evaluations on ShapeNet objects and human scans demonstrate that our network learns generic representations that are robust and useful in many applications.
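The cluster loss mentioned in the summary is not spelled out in this record. As a rough illustration only, the sketch below shows one plausible way such a locality penalty could be written, assuming the decoder outputs a fixed number of points per capsule and compactness is measured as squared distance to each capsule's centroid. The tensor shapes, the function name `cluster_loss`, and the weighting `lambda_cluster` are assumptions for illustration, not the authors' implementation.

```python
import torch

def cluster_loss(points_per_capsule: torch.Tensor) -> torch.Tensor:
    """Hypothetical locality penalty (not the paper's exact formulation).

    points_per_capsule: tensor of shape (B, K, N, 3) -- B shapes in the batch,
    K capsules, N points decoded by each capsule, 3D coordinates.
    Returns the mean squared distance of each point to its capsule's centroid,
    which is small when every capsule's points stay spatially compact.
    """
    centroids = points_per_capsule.mean(dim=2, keepdim=True)        # (B, K, 1, 3)
    sq_dist = ((points_per_capsule - centroids) ** 2).sum(dim=-1)   # (B, K, N)
    return sq_dist.mean()

# Example usage (hypothetical training step): combine with a reconstruction term.
decoded = torch.rand(8, 16, 64, 3)             # stand-in for decoder output
lambda_cluster = 0.1                           # assumed weighting factor
loss = lambda_cluster * cluster_loss(decoded)  # + reconstruction loss (e.g. Chamfer)
```

In this sketch the penalty only discourages each capsule's points from spreading; the reconstruction term would still be responsible for covering the whole object.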
Communicated by Akihiro Sugimoto.
DOI: 10.1007/s11263-022-01632-6