Generating robotic emotional body language with variational autoencoders

Detailed bibliography
Published in: International Conference on Affective Computing and Intelligent Interaction and Workshops, pp. 545-551
Main authors: Marmpena, Mina; Lim, Angelica; Dahl, Torbjorn S.; Hemion, Nikolas
Format: Conference paper
Language: English
Publication details: IEEE, 01.09.2019
ISSN: 2156-8111
Description
Summary: Humanoid robots in social environments can become more engaging by using their embodiment to display emotional body language. For such expressions to be effective in long-term interaction, they need to be characterized by variation and complexity, so that the robot can sustain the user's interest beyond the novelty effect period. Hand-coded, pose-to-pose robotic animations can be of high quality and interpretability, but the demanding process of creating them results in limited sets; therefore, after a while, the user will realize that the behavior is repetitive. This work proposes the application of deep learning methods, and more specifically the variational autoencoder framework, for generating numerous emotional body language animations for the Pepper robot after training on a few examples of hand-coded animations. Interestingly, the latent space of the model exhibits topological features that can be used to modulate the amplitude of the motion; we propose that this can be potentially useful for generating animations of a specific arousal level according to the dimensional theory of emotion.
DOI: 10.1109/ACII.2019.8925459
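
Note: The summary above describes the approach only at a high level. The following is a minimal illustrative sketch, not the authors' implementation, of a variational autoencoder for animations flattened into fixed-length joint-angle vectors, written in PyTorch. All names (PoseVAE, vae_loss), layer sizes, and the latent dimensionality are hypothetical placeholders.

    # Hypothetical minimal VAE sketch for pose-sequence data (not the paper's code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PoseVAE(nn.Module):
        def __init__(self, input_dim: int, hidden_dim: int = 256, latent_dim: int = 8):
            super().__init__()
            # Encoder maps a flattened animation to the parameters of q(z | x).
            self.enc = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            self.fc_mu = nn.Linear(hidden_dim, latent_dim)
            self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
            # Decoder maps a latent code back to joint-angle space.
            self.dec = nn.Sequential(
                nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, input_dim),
            )

        def encode(self, x):
            h = self.enc(x)
            return self.fc_mu(h), self.fc_logvar(h)

        def reparameterize(self, mu, logvar):
            # Sample z ~ N(mu, sigma^2) via the reparameterization trick.
            std = torch.exp(0.5 * logvar)
            return mu + std * torch.randn_like(std)

        def forward(self, x):
            mu, logvar = self.encode(x)
            z = self.reparameterize(mu, logvar)
            return self.dec(z), mu, logvar

    def vae_loss(x_hat, x, mu, logvar):
        # Reconstruction term plus KL divergence to the standard normal prior.
        recon = F.mse_loss(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

    # After training on a small set of hand-coded animations, new body-language
    # animations could be generated by decoding samples drawn from the prior:
    #   z = torch.randn(1, 8); new_animation = model.dec(z)

In such a setup, moving through the latent space (e.g., scaling or shifting z) would be the mechanism by which properties such as motion amplitude could be modulated, in the spirit of the latent-space observation reported in the summary.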