Emotion recognition from unimodal to multimodal analysis: A review

Detailed bibliography
Published in: Information Fusion, Vol. 99, p. 101847
Main authors: Ezzameli, K., Mahersia, H.
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.11.2023
ISSN: 1566-2535, 1872-6305
Description
Summary: The omnipresence of numerous information sources in our daily life opens up new possibilities for emotion recognition in several domains, including e-health, e-learning, robotics, and e-commerce. Because of this variety of data, the research area of multimodal machine learning raises particular questions for computer scientists: how has the field of emotion recognition progressed in each modality, and what are the most common strategies for recognizing emotions? What part does deep learning play in this? What is multimodality, and how has it evolved? What are the methods of information fusion? What are the most widely used datasets in each modality and in multimodal recognition? Answering these questions allows the various methods to be understood and compared.

Highlights:
• This paper reviews the evolution of unimodal and multimodal emotion recognition.
• Binary classification of emotional dimensions is the most used in the literature.
• For each modality, deep learning gives better results than traditional methods.
• CNN and LSTM are the most effective for all modalities, and Transformers for text.
• Combining two or more modalities gives better results for emotion recognition.
DOI: 10.1016/j.inffus.2023.101847