SAE+LSTM: A New Framework for Emotion Recognition From Multi-Channel EEG

Bibliographic Details
Published in: Frontiers in Neurorobotics, Vol. 13, p. 37
Main Authors: Xing, Xiaofen; Li, Zhenqi; Xu, Tianyuan; Shu, Lin; Hu, Bin; Xu, Xiangmin
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation; Frontiers Media S.A., 12.06.2019
ISSN: 1662-5218
Description
Summary: EEG-based automatic emotion recognition can help brain-inspired robots improve their interactions with humans. This paper presents a novel framework for emotion recognition from multi-channel electroencephalogram (EEG) signals. The framework consists of a linear EEG mixing model and an emotion timing model: it decomposes the EEG source signals from the collected EEG recordings and improves classification accuracy by exploiting the context correlations of the EEG feature sequences. Specifically, a Stacked AutoEncoder (SAE) is used to build and solve the linear EEG mixing model, while the emotion timing model is based on a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN). In an emotion recognition experiment on the DEAP dataset, the framework achieved mean accuracies of 81.10% in valence and 74.38% in arousal, verifying its effectiveness. In these experiments, the framework outperformed the compared conventional approaches to emotion recognition from multi-channel EEG.
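The pipeline described in the summary can be sketched as a two-stage forward pass: an SAE encoder layer that approximates the unmixing of the linear EEG mixing model (channels to latent sources), followed by an LSTM that accumulates context over the resulting feature sequence. The sketch below is a minimal NumPy illustration of that data flow only; the layer sizes, the single-layer encoder, and the untrained random weights are assumptions for illustration and are not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 32 EEG channels (as in DEAP), 16 latent sources,
# a sequence of 10 feature frames, and an 8-unit LSTM state.
n_channels, n_sources, seq_len, hidden = 32, 16, 10, 8

def encoder(x, W, b):
    """One SAE encoder layer: approximates unmixing, s ~ f(W x + b)."""
    return np.tanh(W @ x + b)

def lstm_step(x, h, c, params):
    """A standard LSTM cell (input, forget, output gates + candidate)."""
    Wx, Wh, bias = params
    z = Wx @ x + Wh @ h + bias
    i, f, o, g = np.split(z, 4)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)   # update cell state
    h = o * np.tanh(c)           # emit hidden state
    return h, c

# Random (untrained) weights, just to trace shapes through the pipeline.
W_enc = rng.standard_normal((n_sources, n_channels)) * 0.1
b_enc = np.zeros(n_sources)
lstm_params = (rng.standard_normal((4 * hidden, n_sources)) * 0.1,
               rng.standard_normal((4 * hidden, hidden)) * 0.1,
               np.zeros(4 * hidden))

h, c = np.zeros(hidden), np.zeros(hidden)
for _ in range(seq_len):
    frame = rng.standard_normal(n_channels)    # one multi-channel EEG frame
    s = encoder(frame, W_enc, b_enc)           # SAE stage: channels -> sources
    h, c = lstm_step(s, h, c, lstm_params)     # LSTM stage: context over frames

# A final classification layer on h would produce the valence/arousal outputs.
print(h.shape)
```

In the paper's setting the SAE is trained to reconstruct the EEG so that its encoder learns the unmixing, and the LSTM-RNN is trained on the resulting feature sequences; here both stages are shown untrained purely to make the channels-to-sources-to-context flow concrete.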
Reviewed by: Sung Chan Jun, Gwangju Institute of Science and Technology, South Korea; Oluwarotimi Williams Samuel, Shenzhen Institutes of Advanced Technology (CAS), China
Edited by: Jan Babic, Jožef Stefan Institute (IJS), Slovenia
DOI:10.3389/fnbot.2019.00037