Generating robotic emotional body language with variational autoencoders

Bibliographic Details
Published in: International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII), pp. 545–551
Main Authors: Marmpena, Mina; Lim, Angelica; Dahl, Torbjørn S.; Hemion, Nikolas
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2019
ISSN: 2156-8111
Description
Summary: Humanoid robots in social environments can become more engaging by using their embodiment to display emotional body language. For such expressions to be effective in long-term interaction, they need to be characterized by variation and complexity, so that the robot can sustain the user's interest beyond the novelty-effect period. Hand-coded, pose-to-pose robotic animations can be of high quality and interpretability, but the demanding process of creating them results in limited sets; after a while, the user will therefore notice that the behavior is repetitive. This work proposes applying deep learning methods, specifically the variational autoencoder framework, to generate numerous emotional body language animations for the Pepper robot after training on a small set of hand-coded animations. Interestingly, the latent space of the model exhibits topological features that can be used to modulate the amplitude of the motion; we propose that this could be useful for generating animations of specific arousal levels according to the dimensional theory of emotion.
DOI: 10.1109/ACII.2019.8925459
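
The summary describes the technique only at a high level. As a rough illustration of how such a model is typically wired up, below is a minimal PyTorch sketch of a variational autoencoder over fixed-length joint-angle trajectories. The sequence length, joint count, latent size, and layer widths are illustrative assumptions, not the paper's architecture, and the latent scaling at the end is only a crude probe of the amplitude-modulating latent structure the abstract reports.

```python
# Minimal VAE sketch for generating fixed-length joint-angle trajectories.
# All sizes and names are illustrative assumptions, not the authors' model.
import torch
import torch.nn as nn
import torch.nn.functional as F

SEQ_LEN, N_JOINTS, LATENT = 60, 17, 16  # hypothetical trajectory/latent sizes


class MotionVAE(nn.Module):
    def __init__(self):
        super().__init__()
        d_in = SEQ_LEN * N_JOINTS
        self.enc = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)
        self.dec = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, d_in))

    def encode(self, x):
        # x: (batch, SEQ_LEN, N_JOINTS) -> Gaussian posterior parameters
        h = self.enc(x.flatten(1))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu/logvar
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return self.dec(z).view(-1, SEQ_LEN, N_JOINTS)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld


# Generating a new animation: draw z ~ N(0, I) and decode. Scaling z
# (here by 1.5) is one simple way to probe whether latent magnitude
# modulates motion amplitude, as the abstract suggests.
model = MotionVAE()
z = torch.randn(1, LATENT)
animation = model.decode(z * 1.5)  # shape: (1, SEQ_LEN, N_JOINTS)
```

With a trained model, each sampled trajectory would still need to be mapped onto Pepper's joint limits and timing before playback; that post-processing step is outside the scope of this sketch.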