Uncertainty‐Aware Adjustment via Learnable Coefficients for Detailed 3D Reconstruction of Clothed Humans from Single Images


Bibliographic Details
Published in: Computer Graphics Forum, Vol. 44, No. 7
Main Authors: Yang, Yadan; Li, Yunze; Ying, Fangli; Phaphuangwittayakul, Aniwat; Dhuny, Riyad
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.10.2025
ISSN:0167-7055, 1467-8659
Description
Summary: Although single-image 3D human reconstruction has made significant progress in recent years, few of the current state-of-the-art methods can accurately restore the appearance and geometric details of loose clothing. To achieve high-quality reconstruction of a human body wearing loose clothing, we propose a learnable dynamic adjustment framework that integrates side-view features with the uncertainty of the parametric human body model, adaptively regulating the model's reliability according to the clothing type. Specifically, we first adopt a Vision Transformer as the encoder to capture features of the input image, and then employ SMPL-X to decouple the side-view body features. Second, to reduce the limitations imposed by the regularization of the parametric model, particularly for loose garments, we introduce a learnable coefficient that lessens the reliance on SMPL-X. This strategy effectively accommodates the large deformations caused by loose clothing, thereby accurately expressing the posture and clothing in the image. To evaluate its effectiveness, we validate our method on the public CLOTH4D and CAPE datasets, and the experimental results demonstrate better performance compared to existing approaches. The code is available at https://github.com/yyd0613/CoRe-Human.
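The learnable-coefficient idea in the summary can be illustrated as a gated blend between image-driven features and SMPL-X parametric features. This is a minimal sketch, not the paper's actual formulation: the function names, the sigmoid gating, and the feature shapes are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x: float) -> float:
    """Numerically simple logistic gate in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def blend_features(image_feat: np.ndarray,
                   smplx_feat: np.ndarray,
                   alpha_logit: float) -> np.ndarray:
    """Blend image features with SMPL-X parametric features.

    alpha = sigmoid(alpha_logit) plays the role of the learnable
    coefficient (a hypothetical parametrization): alpha near 1 trusts
    the SMPL-X prior (tight clothing), alpha near 0 relaxes it so the
    image features dominate (loose clothing, large deformations).
    """
    alpha = sigmoid(alpha_logit)
    return alpha * smplx_feat + (1.0 - alpha) * image_feat

# Toy 4-dimensional feature vectors for illustration only.
img_feat = np.array([1.0, 2.0, 3.0, 4.0])   # image-derived features
body_feat = np.zeros(4)                      # SMPL-X-derived features

loose = blend_features(img_feat, body_feat, alpha_logit=-4.0)  # image-dominated
tight = blend_features(img_feat, body_feat, alpha_logit=4.0)   # prior-dominated
```

In training, `alpha_logit` would be a learned parameter (or predicted per sample from clothing cues) rather than a fixed constant, so the network itself decides how much to rely on the parametric body prior.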
DOI: 10.1111/cgf.70239