Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives

Published in: Frontiers in Robotics and AI, Volume 7, p. 532279
Main authors: Spezialetti, Matteo; Placidi, Giuseppe; Rossi, Silvia
Format: Journal Article
Language: English
Publication details: Switzerland: Frontiers Media S.A., 21.12.2020
ISSN: 2296-9144
Description
Summary: A fascinating challenge in the field of human–robot interaction is the possibility of endowing robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the robot's capability to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human–machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human–robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies and offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.
Edited by: Pablo Vinicius Alves De Barros, Italian Institute of Technology (IIT), Italy
Reviewed by: Bruno José Torres Fernandes, Universidade de Pernambuco, Brazil; Maya Dimitrova, Bulgarian Academy of Sciences (BAS), Bulgaria; Nicolás Navarro-Guerrero, Aarhus University, Denmark
This article was submitted to Sensor Fusion and Machine Perception, a section of the journal Frontiers in Robotics and AI
DOI: 10.3389/frobt.2020.532279