Learning 3D Deformation of Animals from 2D Images

Published in: Computer Graphics Forum, Volume 35, Issue 2, pp. 365-374
Main Authors: Kanazawa, Angjoo; Kovalsky, Shahar; Basri, Ronen; Jacobs, David
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.05.2016
ISSN: 0167-7055, 1467-8659
Description
Summary: Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image. We show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than those produced by methods without learned stiffness.
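The record reproduces only the abstract; purely as an illustrative sketch (not the authors' exact formulation), a deformation objective with per-region stiffness bounds of the kind the summary describes could be written as

\[
\min_{V,\;\{s_i\}} \;\; \sum_{k}\bigl\|\,\Pi(V)_k - \hat{p}_k\,\bigr\|^2 \;+\; \lambda \sum_{i} s_i
\qquad \text{s.t.} \qquad \operatorname{dist}\bigl(T_i(V)\bigr) \le 1 + s_i,\;\; s_i \ge 0 \;\;\forall i,
\]

where V is the deformed template mesh, \Pi(V)_k its 2D projection at landmark k, \hat{p}_k the corresponding user-clicked point, T_i(V) the local affine map of region i, \operatorname{dist}(\cdot) a measure of its distortion, and s_i the learned per-region stiffness bound. The penalty on s_i keeps regions stiff unless the clicked landmarks require extra distortion, mirroring the abstract's joint learning of local stiffness bounds; the symbols and the form of the distortion measure used here are assumptions, not taken from the record.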
Article ID: CGF12838
DOI: 10.1111/cgf.12838