Audio2Gestures: Generating Diverse Gestures from Speech Audio with Conditional Variational Autoencoders

Bibliographic Details
Published in: Proceedings / IEEE International Conference on Computer Vision, pp. 11273-11282
Main Authors: Li, Jing; Kang, Di; Pei, Wenjie; Zhe, Xuefei; Zhang, Ying; He, Zhenyu; Bao, Linchao
Format: Conference Paper
Language: English
Published: IEEE, 01.10.2021
ISSN: 2380-7504
Description
Abstract: Generating conversational gestures from speech audio is challenging due to the inherent one-to-many mapping between audio and body motions. Conventional CNNs/RNNs assume a one-to-one mapping, and thus tend to predict the average of all possible target motions, resulting in plain/boring motions during inference. To overcome this problem, we propose a novel conditional variational autoencoder (VAE) that explicitly models the one-to-many audio-to-motion mapping by splitting the cross-modal latent code into a shared code and a motion-specific code. The shared code mainly models the strong correlation between audio and motion (such as the synchronized audio and motion beats), while the motion-specific code captures diverse motion information independent of the audio. However, splitting the latent code into two parts poses training difficulties for the VAE model. A mapping network facilitating random sampling, together with other techniques including a relaxed motion loss, a bicycle constraint, and a diversity loss, is designed to better train the VAE. Experiments on both 3D and 2D motion datasets verify that our method generates more realistic and diverse motions than state-of-the-art methods, both quantitatively and qualitatively. Finally, we demonstrate that our method can be readily used to generate motion sequences with user-specified motion clips on the timeline. Code and more results are at https://jingli513.github.io/audio2gestures.
DOI: 10.1109/ICCV48922.2021.01110
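
To make the split-latent design described in the abstract more concrete, below is a minimal PyTorch sketch of a conditional VAE whose motion encoder produces a shared code plus a sampled motion-specific code, while the audio encoder produces only a shared code; diverse gestures come from resampling the motion-specific part at inference. All module names, layer sizes, and the plain MLP encoders here are illustrative assumptions, not the authors' implementation, and the paper's mapping network, relaxed motion loss, bicycle constraint, and diversity loss are omitted; the official code is linked from the project page above.

```python
# Minimal sketch (assumed architecture, not the authors' code) of the
# split-latent conditional VAE idea: audio -> shared code only,
# motion -> shared code + motion-specific code, decoder(shared, specific) -> motion.
import torch
import torch.nn as nn


class SplitLatentVAE(nn.Module):
    """Illustrative split-latent conditional VAE for audio-to-motion."""

    def __init__(self, audio_dim=64, motion_dim=96, shared_dim=16, specific_dim=16):
        super().__init__()
        self.audio_enc = nn.Sequential(
            nn.Linear(audio_dim, 128), nn.ReLU(), nn.Linear(128, shared_dim))
        self.motion_enc = nn.Sequential(nn.Linear(motion_dim, 128), nn.ReLU())
        self.to_shared = nn.Linear(128, shared_dim)
        # The motion-specific branch predicts mean and log-variance for
        # reparameterized sampling, as in a standard VAE.
        self.to_mu = nn.Linear(128, specific_dim)
        self.to_logvar = nn.Linear(128, specific_dim)
        self.decoder = nn.Sequential(
            nn.Linear(shared_dim + specific_dim, 128), nn.ReLU(),
            nn.Linear(128, motion_dim))

    def forward(self, audio, motion):
        shared_audio = self.audio_enc(audio)        # shared code from audio
        h = self.motion_enc(motion)
        shared_motion = self.to_shared(h)           # shared code from motion
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        specific = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(torch.cat([shared_motion, specific], dim=-1))
        # Training would add a reconstruction loss, a KL term on (mu, logvar),
        # and a term pulling shared_audio toward shared_motion.
        return recon, shared_audio, shared_motion, mu, logvar

    @torch.no_grad()
    def generate(self, audio):
        # At inference, the audio-derived shared code is paired with a randomly
        # sampled motion-specific code, so one audio can yield diverse motions.
        shared_audio = self.audio_enc(audio)
        specific = torch.randn(audio.shape[0], self.to_mu.out_features,
                               device=audio.device)
        return self.decoder(torch.cat([shared_audio, specific], dim=-1))
```

For per-frame features, e.g. audio of shape (T, 64) and motion of shape (T, 96), `model(audio, motion)` returns the reconstruction and the codes, while repeated calls to `model.generate(audio)` draw different motion-specific codes and hence different gestures for the same audio, which is the one-to-many behavior the abstract describes.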