EEG-Based Emotion Recognition of Deaf Subjects by Integrated Genetic Firefly Algorithm

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, Vol. 70, pp. 1-11
Main Authors: Tian, Zekun; Li, Dahua; Song, Yu; Gao, Qiang; Kang, Qiaoju; Yang, Yi
Format: Journal Article
Language: English
Published: New York: IEEE, 2021
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 0018-9456, 1557-9662
Online Access: Full text
Description
Abstract: In recent years, many researchers have explored different methods to obtain discriminative features for electroencephalogram-based (EEG-based) emotion recognition, but few studies have investigated deaf subjects. In this study, we have established a deaf EEG emotion dataset, which contains three kinds of emotions (positive, neutral, and negative) from 15 subjects. Ten kinds of time-frequency-domain features and eleven kinds of nonlinear dynamic system features were extracted from the EEG signals. To obtain the optimal feature combination and optimal classifier, an integrated genetic firefly algorithm (IGFA) was proposed. A multi-objective function with variable weights was utilized to balance two contradictory goals, classification accuracy and feature reduction ratio, in order to find brighter fireflies in each generation. To retain the historical optimal solution and reduce the feature dimension, an optimal population protection scheme and a subgroup generation scheme were carried out. The experimental results show that the average feature reduction rate of the proposed method is 0.959, and the average classification accuracy is 0.961. An investigation of important brain regions shows that deaf subjects share common areas in the frontal and temporal lobes for EEG emotion recognition, while individual differences occur in the occipital and parietal lobes.
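The abstract describes a weighted multi-objective fitness that trades off classification accuracy against the feature reduction ratio when scoring candidate feature subsets (fireflies). The sketch below is a minimal illustration of that idea only; the weight `w`, the helper names, and the binary-mask encoding are assumptions for illustration, not the paper's exact formulation.

```python
def fitness(mask, accuracy, w=0.9):
    """Score a candidate feature subset.

    mask     : list of 0/1 flags, 1 = feature kept (binary firefly encoding)
    accuracy : classification accuracy achieved with the kept features
    w        : illustrative weight balancing the two contradictory goals
    """
    n_total = len(mask)
    n_kept = sum(mask)
    # Fraction of features removed; higher means a more compact subset.
    reduction_ratio = (n_total - n_kept) / n_total
    # Weighted sum: accuracy dominates, but sparser subsets break ties.
    return w * accuracy + (1 - w) * reduction_ratio

# Example: 21 feature kinds (10 time-frequency + 11 nonlinear), keep 2,
# with a hypothetical accuracy of 0.96.
mask = [1, 1] + [0] * 19
print(round(fitness(mask, 0.96), 4))
```

A larger `w` favors accuracy over compactness; the abstract's "variable weight" suggests this balance is adjusted across generations rather than held fixed as it is here.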
DOI:10.1109/TIM.2021.3121473