Markerless Shape and Motion Capture From Multiview Video Sequences

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 21, No. 3, pp. 320-334
Main Authors: Li, Kun; Dai, Qionghai; Xu, Wenli
Format: Journal Article
Language: English
Published: New York, NY: IEEE, 01.03.2011
ISSN: 1051-8215, 1558-2205
Online Access: Full text
Description
Abstract: We propose a new markerless shape and motion capture approach from multiview video sequences. The shape recovery method consists of two steps: separating and merging. In the separating step, a depth map, represented as a point cloud, is generated for each view by solving a proposed variational model, which is regularized by four constraints to ensure the accuracy and completeness of the reconstruction. In the merging step, the point clouds of all views are merged and reconstructed into a 3-D mesh using a marching cubes method with silhouette constraints. Experiments show that geometric details are faithfully preserved in each estimated depth map, and that the 3-D meshes reconstructed from the estimated depth maps are watertight and exhibit rich geometric details, even for non-convex objects. Taking the reconstructed 3-D mesh as the underlying scene representation, a volumetric deformation method with a new positional-constraint computation scheme is proposed to automatically capture the motion of the 3-D object. Our method can capture non-rigid motions, even of loosely dressed humans, without the aid of markers.
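
The merging step is only named at a high level in the abstract. As a rough illustration, the sketch below carves a fused volumetric field with a silhouette constraint and extracts a mesh via marching cubes (scikit-image's marching_cubes). The sphere signed-distance field and the single orthographic silhouette are hypothetical stand-ins for the paper's fused multiview point clouds and calibrated cameras; this is a generic sketch, not the authors' formulation.

    import numpy as np
    from skimage.measure import marching_cubes

    RES = 96
    grid = np.linspace(-1.2, 1.2, RES)
    X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")

    # Stand-in for the fused depth data: the signed distance field of a
    # unit sphere (negative inside). In the real pipeline this field would
    # be built from the point clouds merged across all views.
    sdf = np.sqrt(X**2 + Y**2 + Z**2) - 1.0

    # Silhouette constraint, sketched as visual-hull carving: voxels that
    # project outside a view's silhouette are forced outside the surface.
    # One orthographic "view" along +Z with an elliptical silhouette stands
    # in for the real calibrated cameras.
    outside_silhouette = (X**2 / 1.0 + Y**2 / 0.64) > 1.0
    sdf = np.where(outside_silhouette, np.maximum(sdf, 0.05), sdf)

    # Marching cubes on the zero level set yields a closed triangle mesh.
    verts, faces, _, _ = marching_cubes(sdf, level=0.0)
    print(f"mesh: {len(verts)} vertices, {len(faces)} faces")

Because the carved field stays positive on the volume boundary, the extracted level set is a closed surface, matching the watertight meshes the abstract reports.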
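The motion-capture step deforms the reconstructed mesh under positional constraints. The toy example below uses a generic Laplacian least-squares formulation, not the paper's volumetric scheme or its positional-constraint computation, to show how a few constrained points can drive a shape while preserving its local differential coordinates; the chain geometry, constraint weight, and target positions are all hypothetical.

    import numpy as np

    n = 10
    # Rest pose: a straight 2-D chain of n vertices.
    rest = np.stack([np.linspace(0.0, 9.0, n), np.zeros(n)], axis=1)

    # Uniform graph Laplacian of the chain.
    L = np.zeros((n, n))
    for i in range(n):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        L[i, i] = len(nbrs)
        for j in nbrs:
            L[i, j] = -1.0

    delta = L @ rest  # differential coordinates of the rest pose

    # Positional constraints: pin the first vertex, lift the last by 3.
    w = 10.0  # soft-constraint weight (hypothetical)
    constraints = {0: rest[0], n - 1: rest[n - 1] + np.array([0.0, 3.0])}

    # Stack the Laplacian rows with weighted constraint rows and solve
    # the over-determined system in the least-squares sense.
    rows, rhs = [L], [delta]
    for i, p in constraints.items():
        e = np.zeros((1, n))
        e[0, i] = w
        rows.append(e)
        rhs.append(w * p[None, :])
    deformed, *_ = np.linalg.lstsq(np.vstack(rows), np.vstack(rhs), rcond=None)
    print(np.round(deformed, 2))

The least-squares solve distributes the constrained displacement smoothly along the chain; in the paper, such positional constraints are computed automatically from the video, which is what makes the capture markerless.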
DOI: 10.1109/TCSVT.2011.2106251