Subject-independent EEG emotion recognition with hybrid spatio-temporal GRU-Conv architecture

Bibliographic Details
Published in: Medical & Biological Engineering & Computing, Volume 61, Issue 1, pp. 61-73
Main Authors: Xu, Guixun; Guo, Wenhui; Wang, Yanjiang
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg (Springer Nature B.V.), 01.01.2023
ISSN: 0140-0118 (print); 1741-0444 (electronic)
Description
Abstract: Recently, various deep learning frameworks have shown excellent performance in decoding electroencephalogram (EEG) signals, especially for human emotion recognition. However, most of them focus only on temporal features and ignore features along the spatial dimension. The traditional gated recurrent unit (GRU) model performs well on time-series data, while the convolutional neural network (CNN) can extract spatial characteristics from its input. This paper therefore introduces a hybrid GRU and CNN deep learning framework, named GRU-Conv, to fully leverage the advantages of both. Unlike most previous GRU architectures, the model retains the output information of all GRU units, so GRU-Conv can extract crucial spatio-temporal features from EEG data. More specifically, the proposed model acquires the multi-dimensional features of multiple GRU units after temporal processing and then uses a CNN to extract spatial information from these temporal features. In this way, EEG signals with different characteristics can be classified more accurately. Finally, subject-independent experiments show that the model performs well on the SEED and DEAP databases: the average accuracy on SEED is 87.04%, and the mean accuracies on DEAP are 70.07% for arousal and 67.36% for valence.
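
The abstract outlines the core mechanism: a GRU processes the EEG sequence, the outputs of all GRU units (every time step) are retained rather than only the final hidden state, and a CNN then extracts spatial structure from that stacked temporal feature map. Below is a minimal PyTorch sketch of this idea; the 62-channel input matches SEED's electrode count, but the hidden size, convolution kernel, and pooling choices are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of a GRU-Conv-style model. Layer sizes and kernel
# choices are assumptions for illustration, not the paper's exact design.
import torch
import torch.nn as nn

class GRUConvSketch(nn.Module):
    def __init__(self, n_channels=62, hidden_size=64, n_classes=3):
        super().__init__()
        # GRU over the time axis; with batch_first=True the output has
        # shape (batch, time_steps, hidden_size).
        self.gru = nn.GRU(input_size=n_channels, hidden_size=hidden_size,
                          batch_first=True)
        # Key idea from the abstract: keep the outputs of *all* GRU units
        # (every time step) and treat them as a 2-D map for the CNN.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_channels), e.g. per-step EEG features
        out, _ = self.gru(x)        # (batch, time_steps, hidden_size)
        feat = out.unsqueeze(1)     # (batch, 1, time_steps, hidden_size)
        feat = self.conv(feat)      # spatial features over the GRU output map
        return self.classifier(feat.flatten(1))

# Usage: 62-channel SEED-style input, 200 time steps, 3 emotion classes
model = GRUConvSketch()
logits = model(torch.randn(8, 200, 62))  # -> (8, 3)
```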
DOI: 10.1007/s11517-022-02686-x