Generating robotic emotional body language with variational autoencoders

Detailed bibliography
Published in: International Conference on Affective Computing and Intelligent Interaction and Workshops, pp. 545–551
Main authors: Marmpena, Mina; Lim, Angelica; Dahl, Torbjorn S.; Hemion, Nikolas
Format: Conference paper
Language: English
Published: IEEE, 01.09.2019
ISSN: 2156-8111
Description
Summary: Humanoid robots in social environments can become more engaging by using their embodiment to display emotional body language. For such expressions to be effective in long-term interaction, they need to exhibit variation and complexity, so that the robot can sustain the user's interest beyond the novelty-effect period. Hand-coded, pose-to-pose robotic animations can be of high quality and interpretability, but the demanding process of creating them results in limited sets; after a while, the user will therefore notice that the behavior is repetitive. This work proposes the application of deep learning methods, specifically the variational autoencoder framework, to generate numerous emotional body language animations for the Pepper robot after training on a small set of hand-coded animations. Interestingly, the latent space of the model exhibits topological features that can be used to modulate the amplitude of the motion; we propose that this could be useful for generating animations at a specific arousal level, according to the dimensional theory of emotion.
DOI: 10.1109/ACII.2019.8925459
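
The summary above describes training a variational autoencoder on hand-coded animations and then sampling its latent space to generate new motions. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch; the sequence length, joint count, layer sizes, and class names are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch: a VAE over fixed-length sequences of joint angles.
# All dimensions below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

SEQ_LEN, N_JOINTS, LATENT_DIM = 30, 17, 8   # assumed sizes for illustration
INPUT_DIM = SEQ_LEN * N_JOINTS              # each animation flattened to one vector

class MotionVAE(nn.Module):
    """Encode a whole animation into a latent vector and decode it back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(INPUT_DIM, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, LATENT_DIM)
        self.to_logvar = nn.Linear(256, LATENT_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(), nn.Linear(256, INPUT_DIM))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    recon_err = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

# Generating a new animation: sample a latent vector and decode it.
# The paper additionally reports that particular directions/regions of the
# latent space modulate motion amplitude; only plain sampling is shown here.
model = MotionVAE()
z = torch.randn(1, LATENT_DIM)
new_animation = model.decoder(z).view(SEQ_LEN, N_JOINTS)
```

A flattened-sequence encoder keeps the sketch short; a sequence model (for example a recurrent encoder and decoder) would be a more natural fit for motion data, but the latent-sampling step would look the same.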