Emotion Recognition Empowered Human-Computer Interaction With Domain Adaptation Network


Detailed Bibliography
Published in: IEEE Transactions on Consumer Electronics, Vol. 71, No. 2, pp. 6777-6786
Main authors: Xu, Xu; Fu, Chong; Chen, Junxin
Format: Journal Article
Language: English
Published: New York: IEEE, 01.05.2025
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 0098-3063, 1558-4127
Description
Summary: Multi-modal emotion recognition plays a vital role in human-computer interaction (HCI) for consumer electronics. Many studies have developed multi-modal fusion algorithms for this purpose. However, two challenging issues remain unsolved, i.e., inefficient multi-modal feature fusion and unclear distance in the feature space. To this end, we develop a novel framework, namely LAFDA-Net, for cross-subject emotion recognition using EEG and eye movement signals. It is based on a low-rank fusion and domain adaptation network. More specifically, the multi-modal signals are input into the feature extraction branches in parallel to generate features. Then, these features are fused by the low-rank fusion branch, reducing complexity and avoiding overfitting. Next, the fused features are flattened and sent to the classification branch to determine the emotion state. During training, these features are also input into the domain adaptation branch to bridge the gap between the source and target domains. Three benchmark datasets, i.e., SEED, SEED-IV, and SEED-V, are employed for performance validation. Extensive results demonstrate that the proposed LAFDA-Net is robust, effective, and has advantages over peer methods.
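The abstract describes a four-part architecture: parallel per-modality encoders for EEG and eye movement features, a low-rank fusion branch, an emotion classification branch, and a domain adaptation branch used during training. The following PyTorch sketch illustrates how such a pipeline can be wired together, assuming a rank-limited outer-product approximation for the fusion step and a DANN-style gradient reversal layer for the domain branch; the class names (LowRankFusion, LAFDANetSketch), feature dimensions, rank, and adversarial scheme are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a low-rank fusion + domain adaptation pipeline,
# loosely following the structure described in the abstract.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; reverses and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class LowRankFusion(nn.Module):
    """Approximates outer-product fusion of two modalities with rank-r factors."""
    def __init__(self, dim_eeg, dim_eye, out_dim, rank=4):
        super().__init__()
        # One factor matrix per modality; the appended 1 keeps unimodal terms.
        self.w_eeg = nn.Parameter(torch.randn(rank, dim_eeg + 1, out_dim) * 0.1)
        self.w_eye = nn.Parameter(torch.randn(rank, dim_eye + 1, out_dim) * 0.1)
        self.fusion_weights = nn.Parameter(torch.randn(1, rank) * 0.1)
        self.bias = nn.Parameter(torch.zeros(1, out_dim))

    def forward(self, eeg, eye):
        ones = torch.ones(eeg.size(0), 1, device=eeg.device)
        eeg = torch.cat([eeg, ones], dim=1)                  # (B, dim_eeg+1)
        eye = torch.cat([eye, ones], dim=1)                  # (B, dim_eye+1)
        # Project each modality with its rank-r factors, then fuse elementwise.
        proj_eeg = torch.einsum('bd,rdo->rbo', eeg, self.w_eeg)
        proj_eye = torch.einsum('bd,rdo->rbo', eye, self.w_eye)
        fused = proj_eeg * proj_eye                          # (rank, B, out_dim)
        # Weighted sum over the rank dimension.
        out = torch.matmul(self.fusion_weights, fused.permute(1, 0, 2)).squeeze(1)
        return out + self.bias


class LAFDANetSketch(nn.Module):
    """Illustrative network: parallel encoders, low-rank fusion, two heads."""
    def __init__(self, eeg_dim=310, eye_dim=33, hidden=64, n_classes=3):
        super().__init__()
        # Input sizes here are SEED-style guesses, not the published configuration.
        self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, hidden), nn.ReLU())
        self.eye_enc = nn.Sequential(nn.Linear(eye_dim, hidden), nn.ReLU())
        self.fusion = LowRankFusion(hidden, hidden, out_dim=hidden)
        self.classifier = nn.Linear(hidden, n_classes)       # emotion label head
        self.domain_clf = nn.Linear(hidden, 2)               # source vs. target head

    def forward(self, eeg, eye, lambd=1.0):
        fused = self.fusion(self.eeg_enc(eeg), self.eye_enc(eye))
        emotion_logits = self.classifier(fused)
        # The domain head sees gradient-reversed features, pushing the encoders
        # toward domain-invariant representations during training.
        domain_logits = self.domain_clf(GradReverse.apply(fused, lambd))
        return emotion_logits, domain_logits
```

In a training loop of this kind, the total loss would typically combine a cross-entropy emotion loss on labeled source samples with a domain classification loss on both source and target samples; the exact loss weighting used by LAFDA-Net is not specified in the abstract.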
DOI: 10.1109/TCE.2024.3524401