Reconstructing Three-Dimensional Models of Interacting Humans

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 47, No. 12, pp. 10870-10881
Main Authors: Fieraru, Mihai, Zanfir, Mihai, Oneata, Elisabeta, Popa, Alin-Ionut, Olaru, Vlad, Sminchisescu, Cristian
Format: Journal Article
Language: English
Published: United States, IEEE, 01.12.2025
ISSN: 0162-8828, 1939-3539, 2160-9292
Description
Summary: Understanding 3D human interactions is fundamental for fine-grained scene analysis and behavioral modeling. However, most existing models predict incorrect, lifeless 3D estimates that miss the subtle human contact aspects, the essence of the event, and are of little use for detailed behavioral understanding. This paper addresses these issues with several contributions: (1) we introduce models for interaction signature estimation (ISP), encompassing contact detection, segmentation, and 3D contact signature prediction; (2) we show how such components can be leveraged to ensure contact consistency during 3D reconstruction; (3) we construct several large datasets for learning and evaluating 3D contact prediction and reconstruction methods; specifically, we introduce CHI3D, a lab-based, accurate 3D motion-capture dataset with 631 sequences containing 2,525 contact events and 728,664 ground-truth 3D poses, as well as FlickrCI3D, a dataset of 11,216 images with 14,081 processed pairs of people and 81,233 facet-level surface correspondences; finally, (4) we propose a methodology for recovering the ground-truth pose and shape of interacting people in a controlled setup, and (5) we annotate all 3D interaction motions in CHI3D with textual descriptions.
DOI: 10.1109/TPAMI.2025.3601974