Lip-Synchronized 3D Facial Animation Using Audio-Driven Graph Convolutional Autoencoder
| Published in: | Proceedings of the ... IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems (Online), Volume 1, pp. 346-351 |
|---|---|
| Main authors: | , , , |
| Medium: | Conference paper |
| Language: | English |
| Publisher: | IEEE, 07.09.2023 |
| ISSN: | 2770-4254 |
| Abstract: | The majority of state-of-the-art audio-driven facial animation methods implement a differentiable rendering phase within their models, and as such, their output is a 2D raster image. However, existing development pipelines for MR (Mixed Reality) applications utilize platform-specific render engines optimized for specific HMDs (Head-mounted displays), which in turn necessitates a technique that works directly on the facial mesh geometry. This work proposes a lip-synchronized, audio-driven 3D face animation method utilizing a graph convolutional autoencoder that learns detailed facial deformations of a talking subject while generating a compact latent representation of the 3D model. The representation is then conditioned on the processed audio data to achieve synchronized lip and jaw movement while retaining the subject's facial features. The audio processing involves the extraction of semantic features, which strongly correlate with facial deformation and expression. Qualitative and quantitative experiments demonstrate the method's potential for use in MR applications and shed light on some disadvantages of current approaches. |
| DOI: | 10.1109/IDAACS58523.2023.10348935 |
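
The abstract describes a graph convolutional autoencoder over the facial mesh whose compact latent code is conditioned on audio-derived semantic features. Below is a minimal PyTorch sketch of such a pipeline, written only from the abstract: the layer widths, the flatten-to-latent bottleneck, the concatenation-based audio conditioning, and all names (`GraphConv`, `AudioDrivenMeshAutoencoder`, `normalized_adjacency`) are illustrative assumptions, not the authors' published architecture.

```python
# Hedged sketch of an audio-conditioned graph convolutional autoencoder
# for mesh-based facial animation. All sizes and names are assumptions.
import torch
import torch.nn as nn


def normalized_adjacency(edges, n_verts):
    """Build the symmetric-normalized adjacency D^-1/2 (A + I) D^-1/2
    from an undirected edge list of the face mesh."""
    adj = torch.eye(n_verts)
    for i, j in edges:
        adj[i, j] = adj[j, i] = 1.0
    d_inv_sqrt = adj.sum(dim=1).rsqrt()
    return d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]


class GraphConv(nn.Module):
    """One graph convolution: mix each vertex's features with its neighbors'."""

    def __init__(self, in_dim, out_dim, norm_adj, act=True):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.act = act
        # Fixed, pre-normalized mesh adjacency (V x V) for this topology.
        self.register_buffer("norm_adj", norm_adj)

    def forward(self, x):  # x: (batch, V, in_dim)
        h = self.linear(self.norm_adj @ x)
        return torch.relu(h) if self.act else h


class AudioDrivenMeshAutoencoder(nn.Module):
    """Graph-convolutional autoencoder over mesh vertices whose latent code
    is conditioned on per-frame audio features to drive lip/jaw motion."""

    def __init__(self, norm_adj, n_verts, latent_dim=128, audio_dim=64):
        super().__init__()
        self.n_verts = n_verts
        self.encoder = nn.Sequential(
            GraphConv(3, 32, norm_adj),    # vertex positions -> features
            GraphConv(32, 64, norm_adj),
        )
        self.to_latent = nn.Linear(n_verts * 64, latent_dim)
        # Condition the compact latent on audio features. Concatenation is
        # an assumption; the abstract only states the latent is conditioned.
        self.fuse = nn.Linear(latent_dim + audio_dim, latent_dim)
        self.from_latent = nn.Linear(latent_dim, n_verts * 64)
        self.decoder = nn.Sequential(
            GraphConv(64, 32, norm_adj),
            GraphConv(32, 3, norm_adj, act=False),  # -> vertex displacements
        )

    def forward(self, verts, audio_feat):
        # verts: (B, V, 3) neutral mesh; audio_feat: (B, audio_dim), e.g.
        # semantic speech features from a pretrained speech model.
        z = self.to_latent(self.encoder(verts).flatten(1))
        z = torch.relu(self.fuse(torch.cat([z, audio_feat], dim=-1)))
        h = self.from_latent(z).view(-1, self.n_verts, 64)
        return verts + self.decoder(h)  # deformed (talking) mesh


# Toy usage: a 4-vertex ring "mesh" and random stand-in audio features.
A = normalized_adjacency([(0, 1), (1, 2), (2, 3), (3, 0)], n_verts=4)
model = AudioDrivenMeshAutoencoder(A, n_verts=4)
out = model(torch.randn(2, 4, 3), torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 4, 3])
```

In practice the random audio features would be replaced by per-frame semantic speech features, the toy ring graph by the real face-mesh topology with thousands of vertices, and the dense flatten-to-latent step would typically give way to mesh pooling for scalability; the sketch only illustrates how a graph-convolutional bottleneck can be conditioned on audio to produce mesh deformations directly, which is the property the abstract highlights for MR render pipelines.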