MIST: Multimodal emotion recognition using DeBERTa for text, Semi-CNN for speech, ResNet-50 for facial, and 3D-CNN for motion analysis

Bibliographic Details
Published in: Expert Systems with Applications, Vol. 270, p. 126236
Main Authors: Boitel, Enguerrand, Mohasseb, Alaa, Haig, Ella
Format: Journal Article
Language: English
Published: Elsevier Ltd, 25.04.2025
ISSN:0957-4174
Description
Summary: Human emotion recognition is a rapidly evolving field in artificial intelligence, crucial for improving human–computer interaction. This paper introduces the MIST (Motion, Image, Speech, and Text) framework, a novel multimodal approach to emotion recognition that integrates diverse data modalities. Unlike existing models that focus on unimodal analysis, MIST leverages the complementary strengths of text (using DeBERTa), speech (using Semi-CNN), facial-expression (using ResNet-50), and motion (using 3D-CNN) data to improve accuracy and reliability. Our evaluation on the BAUM-1 and SAVEE datasets demonstrates that MIST significantly outperforms traditional unimodal approaches, and some multimodal approaches, on emotion recognition tasks. This research advances the field by providing a better understanding of emotional states, with potential applications in social robots, personal assistants, and educational technologies.
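The abstract names the four modality branches but not how their outputs are combined. As a minimal sketch only, assuming a simple late-fusion scheme (averaging each branch's emotion-probability vector, then taking the arg-max), the idea can be illustrated as follows; the emotion label set and the averaging step are illustrative assumptions, not the fusion method described in the paper:

```python
# Hypothetical late-fusion sketch for a four-branch multimodal model.
# The branch names mirror MIST's modalities (text/DeBERTa, speech/Semi-CNN,
# face/ResNet-50, motion/3D-CNN); averaging probabilities is an assumption.

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def fuse_late(modality_probs: dict) -> list:
    """Average the probability vectors produced by each modality branch."""
    vectors = list(modality_probs.values())
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(EMOTIONS))]

def predict(modality_probs: dict) -> str:
    """Return the emotion with the highest fused probability."""
    fused = fuse_late(modality_probs)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: each branch emits a distribution over the seven emotions.
probs = {
    "text":   [0.05, 0.05, 0.05, 0.60, 0.05, 0.10, 0.10],
    "speech": [0.10, 0.05, 0.05, 0.50, 0.10, 0.10, 0.10],
    "face":   [0.05, 0.05, 0.05, 0.70, 0.05, 0.05, 0.05],
    "motion": [0.10, 0.10, 0.10, 0.40, 0.10, 0.10, 0.10],
}
print(predict(probs))  # prints "happiness"
```

Averaging weights all modalities equally; a learned weighting or a feature-level (early) fusion layer would be natural alternatives, and the paper should be consulted for MIST's actual design.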
DOI: 10.1016/j.eswa.2024.126236