Multi-View Self-Supervised Domain Adaptation for EEG-Based Emotion Recognition

Detailed Bibliography
Published in: IEEE Transactions on Affective Computing, Volume 16, Issue 4, pp. 3055-3066
Main Authors: Zhang, Lu; Shi, Hanwen; Li, Ziyi; Zheng, Wei-Long; Lu, Bao-Liang
Format: Journal Article
Language: English
Published: Piscataway: IEEE, 2025
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects:
ISSN: 1949-3045
Online access: Get full text
Abstract Research on emotion recognition based on EEG signals has made significant progress. Most existing studies have focused on supervised learning methods, but real-life data rarely come with high-quality labels. In addition, EEG signals exhibit individual variability and instability, which calls for transfer learning to enhance model generalization. In this paper, we propose a multi-view self-supervised domain adaptation model that combines self-supervised learning techniques with domain-adaptive transfer learning algorithms to address the latter two problems. Specifically, we add a multi-class domain discriminator to construct an adversarial relationship between the sub-networks, so that the distribution discrepancy across subjects can be reduced effectively. We conduct both subject-dependent and subject-independent experiments on the SEED and SEED-IV datasets to thoroughly evaluate the performance of our model. The results show that our model achieves outstanding emotion recognition performance even with limited labeled data. In the subject-dependent experiments, our model achieves accuracy rates of 85.91% and 87.19% on the two datasets, respectively, surpassing the original self-supervised masked autoencoder model by about 3%. In the subject-independent experiments, our model demonstrates strong adaptation to differing data distributions, achieving accuracies of 69.72% and 62.87% on the SEED and SEED-IV datasets, respectively, using only 90 samples. This effectively mitigates the accuracy degradation caused by differences in data distribution across subjects. Furthermore, our model is capable of extracting meaningful features from corrupted EEG data, highlighting its robustness and effectiveness.
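The abstract's core mechanism is an adversarial multi-class domain discriminator that pushes a shared encoder toward subject-invariant features. As a rough illustration only, the following PyTorch sketch shows one common way such a discriminator can be wired up with a gradient-reversal layer (DANN-style); the class names, layer sizes, number of subjects, and usage snippet are assumptions made for this sketch and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; scales and negates gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back toward the feature extractor.
        return -ctx.lambd * grad_output, None


class MultiClassDomainDiscriminator(nn.Module):
    """Predicts which subject (domain) a feature vector came from.

    Minimizing its cross-entropy loss trains the discriminator, while the
    reversed gradient pushes the upstream encoder toward subject-invariant
    features, shrinking the cross-subject distribution discrepancy.
    """

    def __init__(self, feature_dim: int, num_subjects: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_subjects),
        )

    def forward(self, features: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
        return self.net(GradientReversal.apply(features, lambd))


# Hypothetical usage: `encoder_features` would come from the self-supervised
# (masked-autoencoder-style) encoder; SEED has 15 subjects, hence 15 domains.
encoder_features = torch.randn(32, 256, requires_grad=True)  # batch of 32 feature vectors
subject_labels = torch.randint(0, 15, (32,))                 # subject index per sample
discriminator = MultiClassDomainDiscriminator(feature_dim=256, num_subjects=15)
domain_loss = F.cross_entropy(discriminator(encoder_features, lambd=0.5), subject_labels)
domain_loss.backward()  # reversed gradients reach the encoder features
```

In the actual model, this adversarial loss would be combined with the self-supervised reconstruction and emotion-classification objectives; that combination is not reproduced here.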
Authors
1. Zhang, Lu (ORCID: 0009-0002-0927-3673); email: 3216-0506@sjtu.edu.cn; School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
2. Shi, Hanwen (ORCID: 0009-0006-9534-3760); email: shihanwen@sjtu.edu.cn; School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
3. Li, Ziyi (ORCID: 0000-0002-8944-741X); email: liziyi@sjtu.edu.cn; School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
4. Zheng, Wei-Long (ORCID: 0000-0002-9474-6369); email: weilong@sjtu.edu.cn; School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
5. Lu, Bao-Liang (ORCID: 0000-0001-8359-0058); email: bllu@sjtu.edu.cn; School of Computer Science, Shanghai Jiao Tong University, Shanghai, China
CODEN ITACBQ
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2025
DOI 10.1109/TAFFC.2025.3574868
Discipline Computer Science
EISSN 1949-3045
EndPage 3066
Genre orig-research
GrantInformation
– National Natural Science Foundation of China, grant 62376158 (funder ID: 10.13039/501100001809)
– Shanghai Pujiang Program, grant 22PJ1408600
– Shanghai Pilot Program for Basic Research - Shanghai Jiao Tong University, grant 21TQ1400203
– Shanghai Jiao Tong University 2030 Initiative
– Shanghai Municipal Science and Technology Major Project, grant 2021SHZD ZX
– STI 2030-Major Projects, grant 2022ZD0208500
– Shanghai Jiao Tong University SCS-Shanghai EmoRays Technology Company Ltd. Joint Laboratory of Affective Brain-Computer Interfaces
– Medical-Engineering Interdisciplinary Research Foundation of Shanghai Jiao Tong University "Jiao Tong Star" Program, grants YG2023ZD25, YG2024ZD25, YG2024QNA03
IsPeerReviewed true
IsScholarly true
Issue 4
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PageCount 12
PublicationPlace Piscataway
PublicationTitle IEEE transactions on affective computing
PublicationTitleAbbrev TAFFC
PublicationYear 2025
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 3055
SubjectTerms Accuracy
Adaptation
Adaptation models
Adaptive algorithms
Brain modeling
Calibration
Data mining
Data models
Datasets
domain adaptation
EEG
Electroencephalography
Emotion recognition
Emotions
Experiments
Feature extraction
Machine learning
Performance evaluation
Self-supervised learning
Title Multi-View Self-Supervised Domain Adaptation for EEG-Based Emotion Recognition
URI https://ieeexplore.ieee.org/document/11036629
https://www.proquest.com/docview/3276023695
Volume 16