DEGAS: Detailed Expressions on Full-Body Gaussian Avatars



Published in: Proceedings of the International Conference on 3D Vision (3DV), pp. 1529–1540
Main authors: Shao, Zhijing, Wang, Duotun, Tian, Qing-Yao, Yang, Yao-Dong, Meng, Hengyu, Cai, Zeyu, Dong, Bo, Zhang, Yu, Zhang, Kang, Wang, Zeyu
Format: Conference paper
Language: English
Publication details: IEEE, 25 March 2025
ISSN:2475-7888
Description
Summary: Although neural rendering has made significant advances in creating lifelike, animatable full-body and head avatars, incorporating detailed expressions into full-body avatars remains largely unexplored. We present DEGAS, the first 3D Gaussian Splatting (3DGS)-based modeling method for full-body avatars with rich facial expressions. Trained on multiview videos of a given subject, our method learns a conditional variational autoencoder that takes both the body motion and facial expression as driving signals to generate Gaussian maps in the UV layout. To drive the facial expressions, instead of the 3D Morphable Models (3DMMs) commonly used in 3D head avatars, we propose to adopt an expression latent space trained solely on 2D portrait images, bridging the gap between 2D talking faces and 3D avatars. Leveraging the rendering capability of 3DGS and the rich expressiveness of the expression latent space, the learned avatars can be reenacted to reproduce photorealistic renderings with subtle and accurate facial expressions. Experiments on an existing dataset and our newly proposed dataset of full-body talking avatars demonstrate the efficacy of our method. We also propose an audio-driven extension of our method with the help of 2D talking faces, opening new possibilities for interactive AI agents. Project page: https://initialneil.github.io/DEGAS.
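To illustrate the decoder structure the summary describes — a conditional model that maps a body-motion code, a 2D-portrait expression latent, and a sampled CVAE latent to per-texel 3D Gaussian parameters laid out in a UV map — here is a minimal, hypothetical sketch. The dimensions, the linear "decoder", and the 14-channel Gaussian layout are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only (not the DEGAS code): a toy conditional decoder
# mapping driving signals to a UV map of 3D Gaussian parameters.
UV_RES = 32       # assumed UV map resolution (real models use far more texels)
MOTION_DIM = 12   # assumed body-motion code size
EXPR_DIM = 8      # assumed 2D-portrait expression latent size (replaces a 3DMM code)
LATENT_DIM = 4    # assumed CVAE latent size
# Assumed per-texel Gaussian layout: 3 position offset + 4 rotation (quaternion)
# + 3 scale + 1 opacity + 3 color = 14 channels.
GAUSS_CH = 14

rng = np.random.default_rng(0)
W = rng.standard_normal((MOTION_DIM + EXPR_DIM + LATENT_DIM,
                         UV_RES * UV_RES * GAUSS_CH)) * 0.01

def decode_gaussian_map(motion, expr, z):
    """Toy linear 'decoder': conditioning vector -> UV Gaussian-parameter map."""
    cond = np.concatenate([motion, expr, z])   # condition on motion + expression
    out = np.tanh(cond @ W)                    # bounded raw parameters
    return out.reshape(UV_RES, UV_RES, GAUSS_CH)

motion = rng.standard_normal(MOTION_DIM)   # driving body motion
expr = rng.standard_normal(EXPR_DIM)       # driving expression latent
z = rng.standard_normal(LATENT_DIM)        # sampled CVAE latent
gmap = decode_gaussian_map(motion, expr, z)
print(gmap.shape)  # (32, 32, 14): one 14-parameter Gaussian per UV texel
```

In the actual method the decoder is learned from multiview video and each texel's Gaussian is splatted with a 3DGS renderer; the sketch only shows the shape of the conditioning-to-UV-map mapping.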
DOI:10.1109/3DV66043.2025.00143