Component-aware generative autoencoder for structure hybrid and shape completion

Detailed bibliography
Published in: Graphical Models, Volume 129, p. 101185
Main authors: Zhang, Fan; Fu, Qiang; Liu, Yang; Li, Xueming
Format: Journal Article
Language: English
Publication details: Elsevier Inc., 01.10.2023
ISSN: 1524-0703, 1524-0711
Description
Summary: Assembling components of man-made objects to create new structures or complete 3D shapes is a popular approach in 3D modeling. Recently, leveraging deep neural networks for assembly-based 3D modeling has been widely studied. However, exploring new component combinations, even across different categories, is still challenging for most deep-learning-based 3D modeling methods. In this paper, we propose a novel generative autoencoder that tackles component combination for 3D modeling of man-made objects. We use the segmented input objects to create component volumes that contain redundant components in random configurations. By training the autoencoder on the input objects and the associated component volumes, we obtain as the network output an object volume consisting of components with proper quality and structure. Such a generative autoencoder can be applied either to multiple object categories for structure hybrid or to a single object category for shape completion. We present a series of evaluations and experimental results to demonstrate the usability and practicability of our method.
DOI: 10.1016/j.gmod.2023.101185
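
The summary above outlines a volumetric pipeline: segmented components are assembled into noisy "component volumes" with redundant parts and random configurations, and an autoencoder is trained to map them, together with the input objects, to clean object volumes. The snippet below is only a minimal, hypothetical sketch of such a 3D convolutional autoencoder with a reconstruction loss; the architecture, the 64^3 occupancy-grid resolution, the module names, and the BCE loss are illustrative assumptions, not the method published in the paper.

```python
# Hypothetical sketch, not the authors' code: a 3D autoencoder that maps a
# noisy component volume (redundant parts, random configuration) to a clean
# object volume, trained with a plain reconstruction loss.
import torch
import torch.nn as nn

class VolumeAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 64^3 occupancy grid -> compact latent feature volume
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: latent feature volume -> reconstructed object volume
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, component_volume):
        return self.decoder(self.encoder(component_volume))

model = VolumeAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()

# Placeholder data: component_volumes stand in for noisy assemblies built from
# segmented components; object_volumes are the ground-truth target shapes.
component_volumes = torch.rand(2, 1, 64, 64, 64)
object_volumes = (torch.rand(2, 1, 64, 64, 64) > 0.5).float()

optimizer.zero_grad()
prediction = model(component_volumes)
loss = loss_fn(prediction, object_volumes)
loss.backward()
optimizer.step()
```

In this reading, structure hybrid would correspond to training the same network across several object categories, and shape completion to training it within a single category; both uses rely only on changing the training data, not the sketch above.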