Multi-modal supervised domain adaptation with a multi-level alignment strategy and consistent decision boundaries for cross-subject emotion recognition from EEG and eye movement signals

Bibliographic Details
Published in: Knowledge-Based Systems, Vol. 315, Article 113238
Main Authors: Jiménez-Guarneros, Magdiel, Fuentes-Pineda, Gibran
Format: Journal Article
Language:English
Published: Elsevier B.V., 22.04.2025
Subjects:
ISSN:0950-7051
Abstract Multi-modal emotion recognition systems based on Electroencephalogram (EEG) and eye tracking signals overcome the limitation of the incomplete information expressed by a single modality by leveraging the complementarity of multiple modalities. However, the applicability of these systems to new users remains restricted, since signal patterns vary across subjects and degrade recognition performance. Supervised domain adaptation has emerged as an effective way to address this problem by reducing the distribution differences between multi-modal signals from known subjects and those from a new one. Nevertheless, existing works exhibit sub-optimal feature distribution alignment, which prevents correct knowledge transfer. Likewise, although multi-modal approaches gain robustness by learning a shared latent space, EEG data remain exposed to noise and perturbations, producing misclassifications near sensitive decision boundaries. To address these issues, we introduce a multi-modal supervised domain adaptation method, named Multi-level Alignment and Consistent Decision Boundaries (MACDB), which follows a three-fold strategy for multi-level feature alignment comprising modality-specific normalization, angular cosine distance, and Joint Maximum Mean Discrepancy to achieve (1) alignment per modality, (2) alignment between modalities, and (3) alignment across domains. In addition, robust decision boundaries are encouraged over the EEG feature space by enforcing consistent predictions under adversarial perturbations of the EEG data. We evaluated our proposal on three public datasets, SEED, SEED-IV, and SEED-V, using leave-one-subject-out cross-validation. Experiments showed that our proposal achieves average accuracies of 86.68%, 85.03%, and 86.48% on SEED, SEED-IV, and SEED-V across the three available sessions, outperforming state-of-the-art results.
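As a rough illustration of the loss terms named in the abstract, the Python sketch below pairs an angular cosine-distance alignment between modalities, a kernel MMD across domains (a simplified stand-in for the Joint Maximum Mean Discrepancy of the joint adaptation networks and kernel two-sample test cited as b35 and b44), and a virtual-adversarial-training-style consistency penalty on EEG inputs (cf. Miyato et al., cited as b45). It is a minimal sketch assuming PyTorch and is not the authors' implementation; all function names, network shapes, feature dimensions, and loss weights are hypothetical.

# Minimal, hypothetical sketch of the loss terms named in the abstract (assumes PyTorch).
import torch
import torch.nn.functional as F


def gaussian_mmd(x, y, sigmas=(1.0, 2.0, 4.0)):
    # Multi-kernel Maximum Mean Discrepancy between two feature batches;
    # a plain marginal MMD used here as a simplified stand-in for JMMD.
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)  # pairwise squared Euclidean distances
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()


def cosine_alignment(z_eeg, z_eye):
    # Angular (cosine-distance) alignment between paired modality embeddings.
    return (1.0 - F.cosine_similarity(z_eeg, z_eye, dim=1)).mean()


def vat_consistency(encoder, classifier, x_eeg, xi=1e-6, eps=1.0, n_iter=1):
    # VAT-style consistency (cf. b45): predictions should not change under a
    # small worst-case perturbation of the EEG input.
    with torch.no_grad():
        p = F.softmax(classifier(encoder(x_eeg)), dim=1)  # clean predictions
    d = torch.randn_like(x_eeg)
    for _ in range(n_iter):  # power iteration approximating the adversarial direction
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x_eeg)
        d.requires_grad_(True)
        p_hat = F.log_softmax(classifier(encoder(x_eeg + d)), dim=1)
        d = torch.autograd.grad(F.kl_div(p_hat, p, reduction="batchmean"), d)[0]
    r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x_eeg)
    p_adv = F.log_softmax(classifier(encoder(x_eeg + r_adv)), dim=1)
    return F.kl_div(p_adv, p, reduction="batchmean")


if __name__ == "__main__":
    torch.manual_seed(0)
    enc = torch.nn.Sequential(torch.nn.Linear(310, 64), torch.nn.ReLU())  # toy EEG encoder
    clf = torch.nn.Linear(64, 3)                                          # toy 3-class emotion head
    x_src, x_tgt = torch.randn(8, 310), torch.randn(8, 310)               # source / target EEG features
    z_eye = torch.randn(8, 64)                                            # toy eye-movement embedding
    loss = (cosine_alignment(enc(x_src), z_eye)      # alignment between modalities
            + gaussian_mmd(enc(x_src), enc(x_tgt))   # alignment across domains
            + vat_consistency(enc, clf, x_tgt))      # consistent decision boundaries
    print(float(loss))

In the paper's formulation these terms would be combined with the supervised classification loss and the modality-specific normalization; the weighting shown above is arbitrary and only meant to show how the pieces fit together.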
ArticleNumber 113238
Author Fuentes-Pineda, Gibran
Jiménez-Guarneros, Magdiel
Author_xml – sequence: 1
  givenname: Magdiel
  orcidid: 0000-0001-9675-7494
  surname: Jiménez-Guarneros
  fullname: Jiménez-Guarneros, Magdiel
  email: mjmnzg@gmail.com
– sequence: 2
  givenname: Gibran
  surname: Fuentes-Pineda
  fullname: Fuentes-Pineda, Gibran
CitedBy_id crossref_primary_10_1016_j_aei_2025_103744
Cites_doi 10.1016/j.inffus.2023.102129
10.1109/IJCNN48605.2020.9207625
10.1109/ACCESS.2022.3193768
10.1109/TAMD.2015.2431497
10.1088/1741-2552/ac5c8d
10.1016/j.inffus.2020.01.011
10.1109/TCYB.2018.2797176
10.1016/j.eswa.2021.115581
10.1016/j.compbiomed.2022.105907
10.1016/j.measurement.2022.112379
10.1109/JAS.2022.105515
10.1109/TCDS.2021.3071170
10.1016/j.eswa.2024.124001
10.18653/v1/D18-2029
10.1109/MSP.2021.3106895
10.1016/j.dsp.2023.104278
10.1145/1553374.1553497
10.1016/j.knosys.2021.107982
10.1088/1741-2552/ac49a7
10.1109/TAFFC.2017.2712143
10.1016/j.neunet.2023.03.039
10.1016/j.eswa.2024.125089
10.1109/TAFFC.2019.2916015
10.1016/j.bspc.2022.104314
10.1016/j.patcog.2020.107626
10.1007/s00521-022-07643-1
10.1109/TAFFC.2020.2994159
10.1016/j.ipm.2019.102185
10.1109/ACCESS.2023.3318751
10.1016/j.aei.2022.101601
10.1016/j.ins.2022.12.014
10.1109/JPROC.2015.2404941
10.1109/TPAMI.2018.2858821
10.1088/1741-2552/ad3987
ContentType Journal Article
Copyright 2025 Elsevier B.V.
Copyright_xml – notice: 2025 Elsevier B.V.
DBID AAYXX
CITATION
DOI 10.1016/j.knosys.2025.113238
DatabaseName CrossRef
DatabaseTitle CrossRef
DatabaseTitleList
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
ExternalDocumentID 10_1016_j_knosys_2025_113238
S0950705125002850
ISICitedReferencesCount 2
ISSN 0950-7051
IngestDate Tue Nov 18 22:20:17 EST 2025
Sat Nov 29 06:53:13 EST 2025
Sat May 24 17:04:56 EDT 2025
IsPeerReviewed true
IsScholarly true
Keywords Deep learning
Multi-modal emotion recognition
Multi-modal supervised domain adaptation
Electroencephalogram
Eye tracking
Language English
LinkModel OpenURL
ORCID 0000-0001-9675-7494
ParticipantIDs crossref_primary_10_1016_j_knosys_2025_113238
crossref_citationtrail_10_1016_j_knosys_2025_113238
elsevier_sciencedirect_doi_10_1016_j_knosys_2025_113238
PublicationCentury 2000
PublicationDate 2025-04-22
PublicationDateYYYYMMDD 2025-04-22
PublicationDate_xml – month: 04
  year: 2025
  text: 2025-04-22
  day: 22
PublicationDecade 2020
PublicationTitle Knowledge-based systems
PublicationYear 2025
Publisher Elsevier B.V
Publisher_xml – name: Elsevier B.V
References (b49) 2012
Shneiderman, Plaisant, Cohen, Jacobs, Elmqvist, Diakopoulos (b2) 2016
Wang, Qiu, Ma, He (b23) 2021; 110
Joyce (b37) 2011
Wang, Wang, Yang, Zhang (b4) 2022; 149
Zheng, Liu, Lu, Lu, Cichocki (b7) 2019; 49
Wang, Qiu, Li, Du, Lu, He (b20) 2022; 9
Y.-T. Lan, W. Liu, B.-L. Lu, Multimodal emotion recognition using deep generalized canonical correlation analysis with an attention mechanism, in: 2020 International Joint Conference on Neural Networks, IJCNN, 2020, pp. 1–6.
Zheng, Zhu, Lu (b19) 2019; 10
Yin, Wu, Yang, Li, Li, Liang, Lv (b17) 2024; 73
Zheng, Lu (b48) 2015; 7
Li, Wang, Huang, Qi, Pan (b47) 2023; 163
Bi, Wang, Yan, Ping, Wen (b26) 2022; 34
Wang, Wang, Yang, Zhang (b12) 2022; 149
Liu, Zheng, Li, Wu, Gan, Lu (b29) 2022; 19
Siddharth, Jung, Sejnowski (b5) 2022; 13
Tang, Ma, Gan, Zhang, Yin (b32) 2024; 103
Li, Qiu, Shen, Liu, He (b38) 2020; 50
Li, Bao, Li, Zhao (b13) 2020; 57
Tarvainen, Valpola (b46) 2017; vol. 30
Wu, Zheng, Li, Lu (b11) 2022; 19
Chen, Vong, Wang, Wang, Pang (b15) 2022; 239
Gong, Chen, Zhang (b31) 2023
Wang, Liu, Ruan, Wang, Wang (b24) 2021; 185
Zhang, Yin, Chen, Nichele (b3) 2020; 59
Zhu, Qi, Hu, Hao (b10) 2022; 52
Zhang, Liu, Wang, Zhang, Lou, Zheng, Quek (b14) 2024; 21
Li, Zhou, Liu, Jung, Wan, Duan, Li, Yu, Song, Dong, Wen (b40) 2024; 257
Dai, Yan, Cheng, Duan, Wang (b18) 2023; 623
L. Song, J. Huang, A. Smola, K. Fukumizu, Hilbert space embeddings of conditional distributions with applications to dynamical systems, in: Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, Association for Computing Machinery, New York, NY, USA, 2009, pp. 961–968.
Kim, Kim (b36) 2020
Tang, Jiang, Wang (b25) 2022; 10
Yu, Wang, Chen, Huang (b41) 2019
Foreman (b42) 2013
Zhu, Wu, Bai, Song, Gao (b39) 2024; 251
Liu, Qiu, Zheng, Lu (b8) 2021; 14
S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, in: International Conference on International Conference on Machine Learning, ICML’15, JMLR.Org, 2015, pp. 448–456.
Zhao, Jia, Yang, Ding, Keutzer (b1) 2021; 38
Gong, Chen, Li, Zhang (b16) 2024; 144
Zhang, Huang, Li, Zhang, Xia, Liu (b28) 2023
Zhong, Wang, Miao (b51) 2022; 13
D. Cer, Y. Yang, S.-y. Kong, N. Hua, N. Limtiaco, R.S. John, N. Constant, M. Guajardo-Cespedes, S. Yuan, C. Tar, et al., Universal sentence encoder for english, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2018, pp. 169–174.
Chen, She, Meng, Zhang, Zhang (b27) 2023; 80
Gong, Dong, Zhang (b30) 2023
Miyato, S.-i. Maeda, Koyama, Ishii (b45) 2018; 41
Gretton, Borgwardt, Rasch, Schölkopf, Smola (b44) 2012; 13
Li, Hou, Li, Qiu, Peng, Tian (b22) 2023; 207
Demšar (b50) 2006; 7
Long, Zhu, Wang, Jordan (b35) 2017
Lotte (b21) 2015; 103
Yildiz, Tanabe, Kobayashi, Tuncer, Barua, Dogan, Tuncer, Tan, Acharya (b6) 2023; 11
Li (10.1016/j.knosys.2025.113238_b47) 2023; 163
Yildiz (10.1016/j.knosys.2025.113238_b6) 2023; 11
Foreman (10.1016/j.knosys.2025.113238_b42) 2013
Kim (10.1016/j.knosys.2025.113238_b36) 2020
Miyato (10.1016/j.knosys.2025.113238_b45) 2018; 41
Lotte (10.1016/j.knosys.2025.113238_b21) 2015; 103
Yin (10.1016/j.knosys.2025.113238_b17) 2024; 73
Zhong (10.1016/j.knosys.2025.113238_b51) 2022; 13
Gong (10.1016/j.knosys.2025.113238_b30) 2023
(10.1016/j.knosys.2025.113238_b49) 2012
Wu (10.1016/j.knosys.2025.113238_b11) 2022; 19
Liu (10.1016/j.knosys.2025.113238_b8) 2021; 14
Zhu (10.1016/j.knosys.2025.113238_b10) 2022; 52
Chen (10.1016/j.knosys.2025.113238_b15) 2022; 239
Yu (10.1016/j.knosys.2025.113238_b41) 2019
Tarvainen (10.1016/j.knosys.2025.113238_b46) 2017; vol. 30
Zhu (10.1016/j.knosys.2025.113238_b39) 2024; 251
Siddharth (10.1016/j.knosys.2025.113238_b5) 2022; 13
Wang (10.1016/j.knosys.2025.113238_b12) 2022; 149
Zheng (10.1016/j.knosys.2025.113238_b48) 2015; 7
Gong (10.1016/j.knosys.2025.113238_b16) 2024; 144
Dai (10.1016/j.knosys.2025.113238_b18) 2023; 623
Wang (10.1016/j.knosys.2025.113238_b20) 2022; 9
Bi (10.1016/j.knosys.2025.113238_b26) 2022; 34
Zhang (10.1016/j.knosys.2025.113238_b28) 2023
Shneiderman (10.1016/j.knosys.2025.113238_b2) 2016
Li (10.1016/j.knosys.2025.113238_b13) 2020; 57
Li (10.1016/j.knosys.2025.113238_b22) 2023; 207
Tang (10.1016/j.knosys.2025.113238_b32) 2024; 103
Joyce (10.1016/j.knosys.2025.113238_b37) 2011
Wang (10.1016/j.knosys.2025.113238_b23) 2021; 110
10.1016/j.knosys.2025.113238_b43
Li (10.1016/j.knosys.2025.113238_b38) 2020; 50
Chen (10.1016/j.knosys.2025.113238_b27) 2023; 80
Li (10.1016/j.knosys.2025.113238_b40) 2024; 257
10.1016/j.knosys.2025.113238_b9
10.1016/j.knosys.2025.113238_b33
Gretton (10.1016/j.knosys.2025.113238_b44) 2012; 13
Liu (10.1016/j.knosys.2025.113238_b29) 2022; 19
Zhao (10.1016/j.knosys.2025.113238_b1) 2021; 38
Wang (10.1016/j.knosys.2025.113238_b4) 2022; 149
10.1016/j.knosys.2025.113238_b34
Zhang (10.1016/j.knosys.2025.113238_b14) 2024; 21
Zheng (10.1016/j.knosys.2025.113238_b19) 2019; 10
Wang (10.1016/j.knosys.2025.113238_b24) 2021; 185
Long (10.1016/j.knosys.2025.113238_b35) 2017
Zheng (10.1016/j.knosys.2025.113238_b7) 2019; 49
Zhang (10.1016/j.knosys.2025.113238_b3) 2020; 59
Tang (10.1016/j.knosys.2025.113238_b25) 2022; 10
Demšar (10.1016/j.knosys.2025.113238_b50) 2006; 7
Gong (10.1016/j.knosys.2025.113238_b31) 2023
References_xml – start-page: 591
  year: 2020
  end-page: 607
  ident: b36
  article-title: Attract, perturb, and explore: Learning a feature alignment network for semi-supervised domain adaptation
  publication-title: European Conference on Computer Vision
– volume: 10
  start-page: 78114
  year: 2022
  end-page: 78122
  ident: b25
  article-title: Deep neural network for emotion recognition based on meta-transfer learning
  publication-title: IEEE Access
– volume: 11
  start-page: 108705
  year: 2023
  end-page: 108715
  ident: b6
  article-title: Ff-btp model for novel sound-based community emotion detection
  publication-title: IEEE Access
– volume: 9
  start-page: 1612
  year: 2022
  end-page: 1626
  ident: b20
  article-title: Multi-modal domain adaptation variational autoencoder for eeg-based emotion recognition
  publication-title: IEEE/CAA J. Autom. Sin.
– volume: 19
  year: 2022
  ident: b11
  article-title: Investigating eeg-based functional connectivity patterns for multimodal emotion recognition
  publication-title: J. Neural Eng.
– year: 2012
  ident: b49
  publication-title: Neural Networks: Tricks of the Trade
– volume: 239
  year: 2022
  ident: b15
  article-title: Easy domain adaptation for cross-subject multi-view emotion recognition
  publication-title: Knowl.-Based Syst.
– year: 2016
  ident: b2
  article-title: Designing the user interface: strategies for effective human–computer interaction
  publication-title: Pearson
– reference: S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, in: International Conference on International Conference on Machine Learning, ICML’15, JMLR.Org, 2015, pp. 448–456.
– volume: 623
  start-page: 164
  year: 2023
  end-page: 183
  ident: b18
  article-title: Analysis of multimodal data fusion from an information theory perspective
  publication-title: Inform. Sci.
– volume: 7
  start-page: 162
  year: 2015
  end-page: 175
  ident: b48
  article-title: Investigating critical frequency bands and channels for eeg-based emotion recognition with deep neural networks
  publication-title: IEEE Trans. Auton. Ment. Dev.
– volume: 41
  start-page: 1979
  year: 2018
  end-page: 1993
  ident: b45
  article-title: Virtual adversarial training: a regularization method for supervised and semi-supervised learning
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
– volume: 13
  start-page: 1290
  year: 2022
  end-page: 1301
  ident: b51
  article-title: Eeg-based emotion recognition using regularized graph neural networks
  publication-title: IEEE Trans. Affect. Comput.
– reference: Y.-T. Lan, W. Liu, B.-L. Lu, Multimodal emotion recognition using deep generalized canonical correlation analysis with an attention mechanism, in: 2020 International Joint Conference on Neural Networks, IJCNN, 2020, pp. 1–6.
– volume: 207
  year: 2023
  ident: b22
  article-title: Tmlp+srdann: A domain adaptation method for eeg-based emotion recognition
  publication-title: Measurement
– start-page: 1
  year: 2023
  end-page: 14
  ident: b30
  article-title: CoDF-Net: coordinated-representation decision fusion network for emotion recognition with EEG and eye movement signals
  publication-title: Int. J. Mach. Learn. Cybern.
– volume: 38
  start-page: 59
  year: 2021
  end-page: 73
  ident: b1
  article-title: Emotion recognition from multiple modalities: Fundamentals and methodologies
  publication-title: IEEE Signal Process. Mag.
– volume: 52
  year: 2022
  ident: b10
  article-title: A new approach for product evaluation based on integration of eeg and eye-tracking
  publication-title: Adv. Eng. Inform.
– volume: 103
  year: 2024
  ident: b32
  article-title: Hierarchical multimodal-fusion of physiological signals for emotion recognition with scenario adaption and contrastive alignment
  publication-title: Inf. Fusion
– volume: 14
  start-page: 715
  year: 2021
  end-page: 729
  ident: b8
  article-title: Comparing recognition performance and robustness of multimodal deep learning models for multimodal emotion recognition
  publication-title: IEEE Trans. Cogn. Dev. Syst.
– volume: 80
  year: 2023
  ident: b27
  article-title: Similarity constraint style transfer mapping for emotion recognition
  publication-title: Biomed. Signal Process. Control.
– reference: L. Song, J. Huang, A. Smola, K. Fukumizu, Hilbert space embeddings of conditional distributions with applications to dynamical systems, in: Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, Association for Computing Machinery, New York, NY, USA, 2009, pp. 961–968.
– volume: 185
  year: 2021
  ident: b24
  article-title: Cross-subject eeg emotion classification based on few-label adversarial domain adaption
  publication-title: Expert Syst. Appl.
– volume: 257
  year: 2024
  ident: b40
  article-title: A radial basis deformable residual convolutional neural model embedded with local multi-modal feature knowledge and its application in cross-subject classification
  publication-title: Expert Syst. Appl.
– volume: 251
  year: 2024
  ident: b39
  article-title: EEG-eye movement based subject dependence, cross-subject, and cross-session emotion recognition with multidimensional homogeneous encoding space alignment
  publication-title: Expert Syst. Appl.
– start-page: 720
  year: 2011
  end-page: 722
  ident: b37
  article-title: Kullback–Leibler Divergence
– volume: 19
  year: 2022
  ident: b29
  article-title: Identifying similarities and differences in emotion recognition with EEG and eye movements among Chinese, German, and French people
  publication-title: J. Neural Eng.
– volume: 57
  year: 2020
  ident: b13
  article-title: Exploring temporal representations by leveraging attention-based bidirectional lstm-rnns for multi-modal emotion recognition
  publication-title: Inf. Process. Manage.
– volume: 34
  start-page: 22241
  year: 2022
  end-page: 22255
  ident: b26
  article-title: Multi-domain fusion deep graph convolution neural network for eeg emotion recognition
  publication-title: Neural Comput. Appl.
– reference: D. Cer, Y. Yang, S.-y. Kong, N. Hua, N. Limtiaco, R.S. John, N. Constant, M. Guajardo-Cespedes, S. Yuan, C. Tar, et al., Universal sentence encoder for english, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2018, pp. 169–174.
– volume: vol. 30
  start-page: 1195
  year: 2017
  end-page: 1204
  ident: b46
  article-title: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
  publication-title: Advances in Neural Information Processing Systems
– volume: 49
  start-page: 1110
  year: 2019
  end-page: 1122
  ident: b7
  article-title: Emotionmeter: A multimodal framework for recognizing human emotions
  publication-title: IEEE Trans. Cybern.
– start-page: 1
  year: 2023
  end-page: 12
  ident: b31
  article-title: Cross-cultural emotion recognition with EEG and Eye movement signals based on multiple stacked broad learning system
  publication-title: IEEE Trans. Comput. Soc. Syst.
– volume: 149
  year: 2022
  ident: b4
  article-title: Multi-modal emotion recognition using eeg and speech signals
  publication-title: Comput. Biol. Med.
– volume: 10
  start-page: 417
  year: 2019
  end-page: 429
  ident: b19
  article-title: Identifying stable patterns over time for emotion recognition from eeg
  publication-title: IEEE Trans. Affect. Comput.
– start-page: 2208
  year: 2017
  end-page: 2217
  ident: b35
  article-title: Deep transfer learning with joint adaptation networks
  publication-title: International Conference on Machine Learning
– volume: 13
  start-page: 96
  year: 2022
  end-page: 107
  ident: b5
  article-title: Utilizing deep learning towards multi-modal bio-sensing and vision-based affective computing
  publication-title: IEEE Trans. Affect. Comput.
– volume: 50
  start-page: 3281
  year: 2020
  end-page: 3293
  ident: b38
  article-title: Multisource transfer learning for cross-subject eeg emotion recognition
  publication-title: IEEE Trans. Cybern.
– start-page: 1
  year: 2023
  end-page: 12
  ident: b28
  article-title: Self-training maximum classifier discrepancy for eeg emotion recognition
  publication-title: CAAI Trans. Intell. Technol.
– volume: 7
  start-page: 1
  year: 2006
  end-page: 30
  ident: b50
  article-title: Statistical comparisons of classifiers over multiple data sets
  publication-title: J. Mach. Learn. Res.
– volume: 149
  year: 2022
  ident: b12
  article-title: Multi-modal emotion recognition using eeg and speech signals
  publication-title: Comput. Biol. Med.
– volume: 73
  start-page: 1
  year: 2024
  end-page: 12
  ident: b17
  article-title: Research on multimodal emotion recognition based on fusion of electroencephalogram and electrooculography
  publication-title: IEEE Trans. Instrum. Meas.
– volume: 13
  start-page: 723
  year: 2012
  end-page: 773
  ident: b44
  article-title: A kernel two-sample test
  publication-title: J. Mach. Learn. Res.
– volume: 59
  start-page: 103
  year: 2020
  end-page: 126
  ident: b3
  article-title: Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review
  publication-title: Inf. Fusion
– start-page: 778
  year: 2019
  end-page: 786
  ident: b41
  article-title: Transfer learning with dynamic adversarial adaptation network
  publication-title: 2019 IEEE International Conference on Data Mining
– volume: 163
  start-page: 195
  year: 2023
  end-page: 204
  ident: b47
  article-title: A novel semi-supervised meta learning method for subject-transfer brain–computer interface
  publication-title: Neural Netw.
– volume: 21
  year: 2024
  ident: b14
  article-title: Cross-modal credibility modelling for eeg-based multimodal emotion recognition
  publication-title: J. Neural Eng.
– volume: 110
  year: 2021
  ident: b23
  article-title: A prototype-based spd matrix network for domain adaptation eeg emotion recognition
  publication-title: Pattern Recognit.
– volume: 103
  start-page: 871
  year: 2015
  end-page: 890
  ident: b21
  article-title: Signal processing approaches to minimize or suppress calibration time in oscillatory activity-based brain–computer interfaces
  publication-title: Proc. IEEE
– year: 2013
  ident: b42
  article-title: Data Smart: Using Data Science To Transform Information Into Insight
– volume: 144
  year: 2024
  ident: b16
  article-title: Emotion recognition from multiple physiological signals using intra- and inter-modality attention fusion network
  publication-title: Digit. Signal Process.
– volume: 103
  year: 2024
  ident: 10.1016/j.knosys.2025.113238_b32
  article-title: Hierarchical multimodal-fusion of physiological signals for emotion recognition with scenario adaption and contrastive alignment
  publication-title: Inf. Fusion
  doi: 10.1016/j.inffus.2023.102129
– ident: 10.1016/j.knosys.2025.113238_b9
  doi: 10.1109/IJCNN48605.2020.9207625
– volume: 10
  start-page: 78114
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b25
  article-title: Deep neural network for emotion recognition based on meta-transfer learning
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2022.3193768
– start-page: 720
  year: 2011
  ident: 10.1016/j.knosys.2025.113238_b37
– year: 2016
  ident: 10.1016/j.knosys.2025.113238_b2
  article-title: Designing the user interface: strategies for effective human–computer interaction
  publication-title: Pearson
– volume: vol. 30
  start-page: 1195
  year: 2017
  ident: 10.1016/j.knosys.2025.113238_b46
  article-title: Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
– volume: 7
  start-page: 162
  issue: 3
  year: 2015
  ident: 10.1016/j.knosys.2025.113238_b48
  article-title: Investigating critical frequency bands and channels for eeg-based emotion recognition with deep neural networks
  publication-title: IEEE Trans. Auton. Ment. Dev.
  doi: 10.1109/TAMD.2015.2431497
– volume: 19
  issue: 2
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b29
  article-title: Identifying similarities and differences in emotion recognition with EEG and eye movements among Chinese, German, and French people
  publication-title: J. Neural Eng.
  doi: 10.1088/1741-2552/ac5c8d
– volume: 59
  start-page: 103
  year: 2020
  ident: 10.1016/j.knosys.2025.113238_b3
  article-title: Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review
  publication-title: Inf. Fusion
  doi: 10.1016/j.inffus.2020.01.011
– start-page: 1
  year: 2023
  ident: 10.1016/j.knosys.2025.113238_b30
  article-title: CoDF-Net: coordinated-representation decision fusion network for emotion recognition with EEG and eye movement signals
  publication-title: Int. J. Mach. Learn. Cybern.
– volume: 49
  start-page: 1110
  issue: 3
  year: 2019
  ident: 10.1016/j.knosys.2025.113238_b7
  article-title: Emotionmeter: A multimodal framework for recognizing human emotions
  publication-title: IEEE Trans. Cybern.
  doi: 10.1109/TCYB.2018.2797176
– volume: 185
  year: 2021
  ident: 10.1016/j.knosys.2025.113238_b24
  article-title: Cross-subject eeg emotion classification based on few-label adversarial domain adaption
  publication-title: Expert Syst. Appl.
  doi: 10.1016/j.eswa.2021.115581
– volume: 149
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b4
  article-title: Multi-modal emotion recognition using eeg and speech signals
  publication-title: Comput. Biol. Med.
  doi: 10.1016/j.compbiomed.2022.105907
– volume: 207
  year: 2023
  ident: 10.1016/j.knosys.2025.113238_b22
  article-title: Tmlp+srdann: A domain adaptation method for eeg-based emotion recognition
  publication-title: Measurement
  doi: 10.1016/j.measurement.2022.112379
– volume: 9
  start-page: 1612
  issue: 9
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b20
  article-title: Multi-modal domain adaptation variational autoencoder for eeg-based emotion recognition
  publication-title: IEEE/CAA J. Autom. Sin.
  doi: 10.1109/JAS.2022.105515
– start-page: 591
  year: 2020
  ident: 10.1016/j.knosys.2025.113238_b36
  article-title: Attract, perturb, and explore: Learning a feature alignment network for semi-supervised domain adaptation
– ident: 10.1016/j.knosys.2025.113238_b33
– volume: 14
  start-page: 715
  issue: 2
  year: 2021
  ident: 10.1016/j.knosys.2025.113238_b8
  article-title: Comparing recognition performance and robustness of multimodal deep learning models for multimodal emotion recognition
  publication-title: IEEE Trans. Cogn. Dev. Syst.
  doi: 10.1109/TCDS.2021.3071170
– volume: 50
  start-page: 3281
  issue: 7
  year: 2020
  ident: 10.1016/j.knosys.2025.113238_b38
  article-title: Multisource transfer learning for cross-subject eeg emotion recognition
  publication-title: IEEE Trans. Cybern.
– volume: 251
  year: 2024
  ident: 10.1016/j.knosys.2025.113238_b39
  article-title: EEG-eye movement based subject dependence, cross-subject, and cross-session emotion recognition with multidimensional homogeneous encoding space alignment
  publication-title: Expert Syst. Appl.
  doi: 10.1016/j.eswa.2024.124001
– ident: 10.1016/j.knosys.2025.113238_b34
  doi: 10.18653/v1/D18-2029
– volume: 38
  start-page: 59
  issue: 6
  year: 2021
  ident: 10.1016/j.knosys.2025.113238_b1
  article-title: Emotion recognition from multiple modalities: Fundamentals and methodologies
  publication-title: IEEE Signal Process. Mag.
  doi: 10.1109/MSP.2021.3106895
– volume: 144
  year: 2024
  ident: 10.1016/j.knosys.2025.113238_b16
  article-title: Emotion recognition from multiple physiological signals using intra- and inter-modality attention fusion network
  publication-title: Digit. Signal Process.
  doi: 10.1016/j.dsp.2023.104278
– start-page: 2208
  year: 2017
  ident: 10.1016/j.knosys.2025.113238_b35
  article-title: Deep transfer learning with joint adaptation networks
– year: 2013
  ident: 10.1016/j.knosys.2025.113238_b42
– ident: 10.1016/j.knosys.2025.113238_b43
  doi: 10.1145/1553374.1553497
– volume: 239
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b15
  article-title: Easy domain adaptation for cross-subject multi-view emotion recognition
  publication-title: Knowl.-Based Syst.
  doi: 10.1016/j.knosys.2021.107982
– volume: 19
  issue: 1
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b11
  article-title: Investigating eeg-based functional connectivity patterns for multimodal emotion recognition
  publication-title: J. Neural Eng.
  doi: 10.1088/1741-2552/ac49a7
– volume: 10
  start-page: 417
  issue: 3
  year: 2019
  ident: 10.1016/j.knosys.2025.113238_b19
  article-title: Identifying stable patterns over time for emotion recognition from eeg
  publication-title: IEEE Trans. Affect. Comput.
  doi: 10.1109/TAFFC.2017.2712143
– volume: 163
  start-page: 195
  year: 2023
  ident: 10.1016/j.knosys.2025.113238_b47
  article-title: A novel semi-supervised meta learning method for subject-transfer brain–computer interface
  publication-title: Neural Netw.
  doi: 10.1016/j.neunet.2023.03.039
– volume: 149
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b12
  article-title: Multi-modal emotion recognition using eeg and speech signals
  publication-title: Comput. Biol. Med.
  doi: 10.1016/j.compbiomed.2022.105907
– volume: 257
  year: 2024
  ident: 10.1016/j.knosys.2025.113238_b40
  article-title: A radial basis deformable residual convolutional neural model embedded with local multi-modal feature knowledge and its application in cross-subject classification
  publication-title: Expert Syst. Appl.
  doi: 10.1016/j.eswa.2024.125089
– start-page: 1
  year: 2023
  ident: 10.1016/j.knosys.2025.113238_b28
  article-title: Self-training maximum classifier discrepancy for eeg emotion recognition
  publication-title: CAAI Trans. Intell. Technol.
– volume: 13
  start-page: 96
  issue: 1
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b5
  article-title: Utilizing deep learning towards multi-modal bio-sensing and vision-based affective computing
  publication-title: IEEE Trans. Affect. Comput.
  doi: 10.1109/TAFFC.2019.2916015
– volume: 80
  year: 2023
  ident: 10.1016/j.knosys.2025.113238_b27
  article-title: Similarity constraint style transfer mapping for emotion recognition
  publication-title: Biomed. Signal Process. Control.
  doi: 10.1016/j.bspc.2022.104314
– volume: 110
  year: 2021
  ident: 10.1016/j.knosys.2025.113238_b23
  article-title: A prototype-based spd matrix network for domain adaptation eeg emotion recognition
  publication-title: Pattern Recognit.
  doi: 10.1016/j.patcog.2020.107626
– volume: 73
  start-page: 1
  year: 2024
  ident: 10.1016/j.knosys.2025.113238_b17
  article-title: Research on multimodal emotion recognition based on fusion of electroencephalogram and electrooculography
  publication-title: IEEE Trans. Instrum. Meas.
– start-page: 778
  year: 2019
  ident: 10.1016/j.knosys.2025.113238_b41
  article-title: Transfer learning with dynamic adversarial adaptation network
– volume: 34
  start-page: 22241
  issue: 24
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b26
  article-title: Multi-domain fusion deep graph convolution neural network for eeg emotion recognition
  publication-title: Neural Comput. Appl.
  doi: 10.1007/s00521-022-07643-1
– volume: 13
  start-page: 1290
  issue: 3
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b51
  article-title: Eeg-based emotion recognition using regularized graph neural networks
  publication-title: IEEE Trans. Affect. Comput.
  doi: 10.1109/TAFFC.2020.2994159
– year: 2012
  ident: 10.1016/j.knosys.2025.113238_b49
– volume: 57
  issue: 3
  year: 2020
  ident: 10.1016/j.knosys.2025.113238_b13
  article-title: Exploring temporal representations by leveraging attention-based bidirectional lstm-rnns for multi-modal emotion recognition
  publication-title: Inf. Process. Manage.
  doi: 10.1016/j.ipm.2019.102185
– volume: 11
  start-page: 108705
  year: 2023
  ident: 10.1016/j.knosys.2025.113238_b6
  article-title: Ff-btp model for novel sound-based community emotion detection
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2023.3318751
– volume: 7
  start-page: 1
  year: 2006
  ident: 10.1016/j.knosys.2025.113238_b50
  article-title: Statistical comparisons of classifiers over multiple data sets
  publication-title: J. Mach. Learn. Res.
– volume: 52
  year: 2022
  ident: 10.1016/j.knosys.2025.113238_b10
  article-title: A new approach for product evaluation based on integration of eeg and eye-tracking
  publication-title: Adv. Eng. Inform.
  doi: 10.1016/j.aei.2022.101601
– volume: 623
  start-page: 164
  year: 2023
  ident: 10.1016/j.knosys.2025.113238_b18
  article-title: Analysis of multimodal data fusion from an information theory perspective
  publication-title: Inform. Sci.
  doi: 10.1016/j.ins.2022.12.014
– volume: 103
  start-page: 871
  issue: 6
  year: 2015
  ident: 10.1016/j.knosys.2025.113238_b21
  article-title: Signal processing approaches to minimize or suppress calibration time in oscillatory activity-based brain–computer interfaces
  publication-title: Proc. IEEE
  doi: 10.1109/JPROC.2015.2404941
– volume: 41
  start-page: 1979
  issue: 8
  year: 2018
  ident: 10.1016/j.knosys.2025.113238_b45
  article-title: Virtual adversarial training: a regularization method for supervised and semi-supervised learning
  publication-title: IEEE Trans. Pattern Anal. Mach. Intell.
  doi: 10.1109/TPAMI.2018.2858821
– volume: 21
  issue: 2
  year: 2024
  ident: 10.1016/j.knosys.2025.113238_b14
  article-title: Cross-modal credibility modelling for eeg-based multimodal emotion recognition
  publication-title: J. Neural Eng.
  doi: 10.1088/1741-2552/ad3987
– volume: 13
  start-page: 723
  issue: 1
  year: 2012
  ident: 10.1016/j.knosys.2025.113238_b44
  article-title: A kernel two-sample test
  publication-title: J. Mach. Learn. Res.
– start-page: 1
  year: 2023
  ident: 10.1016/j.knosys.2025.113238_b31
  article-title: Cross-cultural emotion recognition with EEG and Eye movement signals based on multiple stacked broad learning system
  publication-title: IEEE Trans. Comput. Soc. Syst.
SourceID crossref
elsevier
SourceType Enrichment Source
Index Database
Publisher
StartPage 113238
SubjectTerms Deep learning
Electroencephalogram
Eye tracking
Multi-modal emotion recognition
Multi-modal supervised domain adaptation
Title Multi-modal supervised domain adaptation with a multi-level alignment strategy and consistent decision boundaries for cross-subject emotion recognition from EEG and eye movement signals
URI https://dx.doi.org/10.1016/j.knosys.2025.113238
Volume 315