Uncertainty‐Aware Adjustment via Learnable Coefficients for Detailed 3D Reconstruction of Clothed Humans from Single Images

Bibliographic Details
Published in: Computer Graphics Forum, Volume 44, Issue 7
Main Authors: Yang, Yadan; Li, Yunze; Ying, Fangli; Phaphuangwittayakul, Aniwat; Dhuny, Riyad
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.10.2025
ISSN: 0167-7055, 1467-8659
Description
Summary: Although single-image 3D human reconstruction has made significant progress in recent years, few current state-of-the-art methods can accurately restore the appearance and geometric details of loose clothing. To achieve high-quality reconstruction of a human body wearing loose clothing, we propose a learnable dynamic adjustment framework that integrates side-view features with the uncertainty of the parametric human body model, adaptively regulating the model's reliability according to the clothing type. Specifically, we first adopt a Vision Transformer as the encoder to capture features of the input image, and then employ SMPL-X to decouple the side-view body features. Second, to reduce the limitations imposed by the regularization of the parametric model, particularly for loose garments, we introduce a learnable coefficient that reduces the reliance on SMPL-X. This strategy effectively accommodates the large deformations caused by loose clothing and thereby accurately expresses the posture and clothing in the image. We validate our method on the public CLOTH4D and CAPE datasets, and the experimental results demonstrate better performance than existing approaches. The code is available at https://github.com/yyd0613/CoRe-Human.
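
The abstract describes a gating idea: features derived from the parametric SMPL-X body are fused with image features through a learnable coefficient, so the network can down-weight the body prior where loose clothing deviates strongly from it. The PyTorch sketch below illustrates one plausible form of such a gate; the module name UncertaintyGate, the MLP design, and the residual fusion are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

import torch
import torch.nn as nn

class UncertaintyGate(nn.Module):
    # Hypothetical sketch: a learnable, input-conditioned coefficient that
    # regulates how much the reconstruction relies on SMPL-X-derived features.
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # A small MLP predicts a per-point gate in (0, 1) from the
        # concatenated image and body features.
        self.gate = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, img_feat: torch.Tensor, smplx_feat: torch.Tensor) -> torch.Tensor:
        # img_feat, smplx_feat: (B, N, C) per-query feature vectors.
        alpha = self.gate(torch.cat([img_feat, smplx_feat], dim=-1))  # (B, N, 1)
        # Small alpha -> weak reliance on the parametric prior, which is
        # what large deformations from loose garments should induce.
        return img_feat + alpha * smplx_feat

# Toy usage: 2 images, 1024 query points, 256-d features.
gate = UncertaintyGate(feat_dim=256)
fused = gate(torch.randn(2, 1024, 256), torch.randn(2, 1024, 256))  # (2, 1024, 256)

Conditioning the coefficient on the features themselves, rather than learning a single global scalar, lets the reliance on SMPL-X vary per query point and per clothing type, matching the adaptive regulation the abstract describes.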
DOI: 10.1111/cgf.70239