Human Shape from Silhouettes Using Generative HKS Descriptors and Cross-Modal Neural Networks


Bibliographic Details
Published in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5504-5514
Main Authors: Dibra, Endri; Jain, Himanshu; Oztireli, Cengiz; Ziegler, Remo; Gross, Markus
Format: Conference Paper
Language: English
Published: IEEE, 01.07.2017
ISSN: 1063-6919
Description
Summary: In this work, we present a novel method for capturing human body shape from a single scaled silhouette. We combine deep correlated features capturing different 2D views, and embedding spaces based on 3D cues in a novel convolutional neural network (CNN) based architecture. We first train a CNN to find a richer body shape representation space from pose invariant 3D human shape descriptors. Then, we learn a mapping from silhouettes to this representation space, with the help of a novel architecture that exploits correlation of multi-view data during training time, to improve prediction at test time. We extensively validate our results on synthetic and real data, demonstrating significant improvements in accuracy as compared to the state-of-the-art, and providing a practical system for detailed human body measurements from a single image.
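
As a rough illustration of the two-stage pipeline outlined in the summary, the sketch below first defines an encoder that maps pose-invariant HKS (Heat Kernel Signature) descriptors into a body-shape embedding space, and then trains a small CNN to regress a single silhouette onto that embedding. This is a minimal sketch assuming PyTorch; the module names (hks_encoder, SilhouetteCNN), dimensions, and dummy tensors are illustrative assumptions, and it omits the paper's multi-view correlated branches and the generative modelling of the HKS descriptors.

    # Conceptual sketch of the two-stage idea described in the summary.
    # Assumes PyTorch; all sizes, names, and data below are illustrative.
    import torch
    import torch.nn as nn

    EMB_DIM = 64    # assumed size of the learned shape embedding
    HKS_DIM = 100   # assumed length of an HKS descriptor vector

    # Stage 1 (stand-in): an encoder from pose-invariant HKS descriptors
    # to a body-shape embedding space; its own training is not shown.
    hks_encoder = nn.Sequential(
        nn.Linear(HKS_DIM, 256), nn.ReLU(),
        nn.Linear(256, EMB_DIM),
    )

    # Stage 2: map a single binary silhouette into the same embedding space.
    class SilhouetteCNN(nn.Module):
        def __init__(self, emb_dim=EMB_DIM):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, emb_dim)

        def forward(self, x):  # x: (B, 1, H, W) silhouette images
            return self.head(self.features(x).flatten(1))

    silhouette_net = SilhouetteCNN()

    # Stage-2 training step: regress the silhouette embedding onto the
    # (frozen) HKS-derived embedding of the corresponding 3D body shape.
    silhouette = torch.rand(8, 1, 128, 128)   # dummy batch of silhouettes
    hks = torch.rand(8, HKS_DIM)              # dummy matching HKS descriptors
    with torch.no_grad():
        target = hks_encoder(hks)             # stage-1 embedding as target
    loss = nn.functional.mse_loss(silhouette_net(silhouette), target)
    loss.backward()

Recovering actual body shape or measurements from the predicted embedding would require an additional decoding step, which this sketch does not cover.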
DOI: 10.1109/CVPR.2017.584