Reconstructing Three-Dimensional Models of Interacting Humans

Detailed Bibliography
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 47, Issue 12, pp. 10870-10881
Main Authors: Fieraru, Mihai; Zanfir, Mihai; Oneata, Elisabeta; Popa, Alin-Ionut; Olaru, Vlad; Sminchisescu, Cristian
Format: Journal Article
Language: English
Published: IEEE, United States, 01.12.2025
ISSN: 0162-8828, 1939-3539, 2160-9292
Description
Summary: Understanding 3D human interactions is fundamental for fine-grained scene analysis and behavioral modeling. However, most existing models predict incorrect, lifeless 3D estimates that miss the subtle human contact aspects, the essence of the event, and are of little use for detailed behavioral understanding. This paper addresses such issues with several contributions: (1) we introduce models for interaction signature estimation (ISP) encompassing contact detection, segmentation, and 3D contact signature prediction; (2) we show how such components can be leveraged to ensure contact consistency during 3D reconstruction; (3) we construct several large datasets for learning and evaluating 3D contact prediction and reconstruction methods; specifically, we introduce CHI3D, a lab-based, accurate 3D motion-capture dataset with 631 sequences containing 2,525 contact events and 728,664 ground-truth 3D poses, as well as FlickrCI3D, a dataset of 11,216 images with 14,081 processed pairs of people and 81,233 facet-level surface correspondences. Finally, (4) we propose a methodology for recovering the ground-truth pose and shape of interacting people in a controlled setup, and (5) we annotate all 3D interaction motions in CHI3D with textual descriptions.
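
Contribution (2), contact-consistent reconstruction, can be illustrated with a minimal sketch. Assuming a predicted facet-level contact signature, i.e. pairs of corresponding triangle facets on two body meshes (in the spirit of the facet-level surface correspondences annotated in FlickrCI3D), a simple consistency term penalizes the distance between the centers of corresponding facets during fitting. The function names and the squared-distance form below are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def facet_centers(vertices: np.ndarray, facets: np.ndarray) -> np.ndarray:
        # vertices: (V, 3) float array of 3D positions.
        # facets:   (F, 3) int array of vertex indices per triangle.
        # Returns the (F, 3) centroids of all triangular facets.
        return vertices[facets].mean(axis=1)

    def contact_consistency_loss(verts_a, facets_a, verts_b, facets_b, signature):
        # signature: (K, 2) int array; row (i, j) marks facet i on person A
        # as being in contact with facet j on person B.
        # Returns the mean squared distance between corresponding facet
        # centers, which an optimizer would drive toward zero while fitting
        # the two body models (assumed loss form, for illustration only).
        ca = facet_centers(verts_a, facets_a)[signature[:, 0]]  # (K, 3)
        cb = facet_centers(verts_b, facets_b)[signature[:, 1]]  # (K, 3)
        return float(np.mean(np.sum((ca - cb) ** 2, axis=1)))

    # Toy usage with random meshes and a random contact signature.
    rng = np.random.default_rng(0)
    va, vb = rng.normal(size=(100, 3)), rng.normal(size=(100, 3))
    fa = rng.integers(0, 100, size=(50, 3))
    fb = rng.integers(0, 100, size=(50, 3))
    sig = np.stack([rng.integers(0, 50, 10), rng.integers(0, 50, 10)], axis=1)
    print(contact_consistency_loss(va, fa, vb, fb, sig))

In an actual reconstruction pipeline such a term would be one component of a larger fitting objective (alongside image-reprojection and body-prior terms) rather than optimized in isolation.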
DOI: 10.1109/TPAMI.2025.3601974