MIST: Multimodal emotion recognition using DeBERTa for text, Semi-CNN for speech, ResNet-50 for facial, and 3D-CNN for motion analysis


Published in: Expert Systems with Applications, Volume 270, p. 126236
Main authors: Boitel, Enguerrand; Mohasseb, Alaa; Haig, Ella
Format: Journal Article
Language: English
Publisher: Elsevier Ltd, 25 April 2025
ISSN: 0957-4174
Description
Summary: Human emotion recognition is a rapidly evolving field in artificial intelligence, crucial for improving human–computer interaction. This paper introduces the MIST (Motion, Image, Speech, and Text) framework, a novel multimodal approach to emotion recognition that integrates diverse data modalities. Unlike existing models that focus on unimodal analysis, MIST leverages the complementary strengths of text (using DeBERTa), speech (using Semi-CNN), facial (using ResNet-50), and motion (using 3D-CNN) data to enhance accuracy and reliability. Our evaluation, conducted on the BAUM-1 and SAVEE datasets, demonstrates that MIST significantly outperforms traditional unimodal and some multimodal approaches in emotion recognition tasks. This research advances the field by providing a better understanding of emotional states, with potential applications in social robots, personal assistants, and educational technologies.
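The abstract describes combining per-modality backbones (DeBERTa, Semi-CNN, ResNet-50, 3D-CNN) into one classifier but does not specify the fusion mechanism. The sketch below illustrates one common choice, feature-level fusion by concatenation followed by a linear softmax classifier; the embedding dimensions, class count, and `fuse_and_classify` helper are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Hypothetical embedding sizes for each modality backbone; the paper's
# actual feature dimensions are not stated in this abstract.
DIMS = {"text": 768, "speech": 128, "face": 2048, "motion": 512}
NUM_EMOTIONS = 7  # e.g. the SAVEE label set (anger, disgust, fear,
                  # happiness, neutral, sadness, surprise)

rng = np.random.default_rng(0)

def fuse_and_classify(features, weights, bias):
    """Concatenate per-modality feature vectors (late feature fusion)
    and apply a linear classifier, returning softmax probabilities."""
    fused = np.concatenate([features[m] for m in sorted(features)])
    logits = fused @ weights + bias
    exp = np.exp(logits - logits.max())  # stable softmax
    return exp / exp.sum()

# Toy embeddings standing in for DeBERTa / Semi-CNN / ResNet-50 /
# 3D-CNN outputs on one sample.
features = {m: rng.standard_normal(d) for m, d in DIMS.items()}
total_dim = sum(DIMS.values())
weights = rng.standard_normal((total_dim, NUM_EMOTIONS)) * 0.01
bias = np.zeros(NUM_EMOTIONS)

probs = fuse_and_classify(features, weights, bias)
```

In practice the fusion stage and classifier head would be trained jointly with (or on top of) the four backbones; this sketch only shows the shape of the data flow.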
DOI: 10.1016/j.eswa.2024.126236