Cross-subject emotion recognition using visibility graph and genetic algorithm-based convolution neural network

Bibliographic Details
Published in: Chaos (Woodbury, N.Y.), Vol. 32, No. 9, p. 093110
Main Authors: Cai, Qing; An, Jian-Peng; Li, Hao-Yu; Guo, Jia-Yi; Gao, Zhong-Ke
Format: Journal Article
Language: English
Published: 01.09.2022
ISSN: 1089-7682
Description
Summary: An efficient emotion recognition model is an important research branch in electroencephalogram (EEG)-based brain-computer interfaces. However, the input of the emotion recognition model is often a whole set of EEG channels obtained by electrodes placed on subjects. The unnecessary information produced by redundant channels affects the recognition rate and depletes computing resources, thereby hindering the practical applications of emotion recognition. In this work, we aim to optimize the input of EEG channels using a visibility graph (VG) and genetic algorithm-based convolutional neural network (GA-CNN). First, we design an experiment to evoke three types of emotion states using movies and collect the multi-channel EEG signals of each subject under different emotion states. Then, we construct VGs for each EEG channel and derive nonlinear features representing each EEG channel. We employ the genetic algorithm (GA) to find the optimal subset of EEG channels for emotion recognition and use the recognition results of the CNN as fitness values. The experimental results show that the recognition performance of the proposed method using a subset of EEG channels is superior to that of the CNN using all channels for each subject. Last, based on the subset of EEG channels searched by the GA-CNN, we perform cross-subject emotion recognition tasks employing leave-one-subject-out cross-validation. These results demonstrate the effectiveness of the proposed method in recognizing emotion states using fewer EEG channels and further enrich the methods of EEG classification using nonlinear features.
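
As a rough illustration of the visibility-graph step described in the summary, the sketch below maps one channel's samples to a natural visibility graph and summarizes it with two graph measures. The paper's implementation is not given in this record, so the function name natural_visibility_graph, the 256-sample segment, and the choice of mean degree and average clustering as example features are assumptions used only to convey the general idea of turning a channel into nonlinear graph features.

```python
import numpy as np
import networkx as nx

def natural_visibility_graph(x):
    """Build the natural visibility graph of a 1-D time series.

    Nodes are sample indices; i and j are linked when every sample
    between them lies strictly below the straight line joining
    (i, x[i]) and (j, x[j]) (the standard visibility criterion).
    """
    n = len(x)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n - 1):
        for j in range(i + 1, n):
            # Visibility condition checked for all intermediate samples.
            visible = all(
                x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                g.add_edge(i, j)
    return g

# Example: derive simple graph-based features for one EEG channel segment.
segment = np.random.randn(256)            # stand-in for one channel's samples
vg = natural_visibility_graph(segment)
mean_degree = np.mean([d for _, d in vg.degree()])
clustering = nx.average_clustering(vg)
```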
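The channel-selection loop can likewise be sketched. In the paper the fitness of each candidate channel subset is the recognition accuracy of a CNN trained on those channels; here that evaluation is replaced by a clearly labeled placeholder so the genetic-algorithm skeleton (boolean channel masks, tournament selection, single-point crossover, bit-flip mutation) runs end to end. The channel count, population size, generation count, and mutation rate below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 62            # assumed channel count; the paper's montage may differ
POP_SIZE, N_GEN = 20, 30   # illustrative GA settings

def fitness(mask, features, labels):
    """Stand-in for the GA-CNN fitness: in the paper this would be the
    CNN's recognition accuracy trained on the channels selected by `mask`.
    A dummy score is returned here so the loop runs without a CNN."""
    if not mask.any():
        return 0.0
    return float(rng.random())  # replace with CNN validation accuracy

def select_channels(features, labels):
    # Each individual is a boolean mask over the EEG channels.
    pop = rng.random((POP_SIZE, N_CHANNELS)) < 0.5
    for _ in range(N_GEN):
        scores = np.array([fitness(ind, features, labels) for ind in pop])
        # Tournament selection of parents.
        parents = pop[np.array([
            max(rng.choice(POP_SIZE, 2, replace=False), key=lambda i: scores[i])
            for _ in range(POP_SIZE)
        ])]
        # Single-point crossover between consecutive parent pairs.
        children = parents.copy()
        for a, b in zip(children[::2], children[1::2]):
            cut = rng.integers(1, N_CHANNELS)
            a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
        # Bit-flip mutation.
        flip = rng.random(children.shape) < 0.02
        children ^= flip
        pop = children
    scores = np.array([fitness(ind, features, labels) for ind in pop])
    return pop[scores.argmax()]  # best channel subset found

# Usage sketch with random stand-in data (trials x channels x VG features).
feats = rng.random((100, N_CHANNELS, 8))
labs = rng.integers(0, 3, size=100)   # three emotion classes
best_mask = select_channels(feats, labs)
```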
DOI: 10.1063/5.0098454