Multi-Modal Domain Adaptation Variational Autoencoder for EEG-Based Emotion Recognition

Detailed Bibliography
Published in: IEEE/CAA Journal of Automatica Sinica, Volume 9, Issue 9, pp. 1612-1626
Main Authors: Wang, Yixin; Qiu, Shuang; Li, Dan; Du, Changde; Lu, Bao-Liang; He, Huiguang
Format: Journal Article
Language: English
Publication Details: Piscataway: Chinese Association of Automation (CAA); The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.09.2022
ISSN: 2329-9266, 2329-9274
Online Access: Get full text
Abstract Traditional electroencephalograph (EEG)-based emotion recognition requires a large number of calibration samples to build a model for a specific subject, which restricts the application of the affective brain computer interface (BCI) in practice. We attempt to use the multi-modal data from the past session to realize emotion recognition in the case of a small amount of calibration samples. To solve this problem, we propose a multi-modal domain adaptive variational autoencoder (MMDA-VAE) method, which learns shared cross-domain latent representations of the multi-modal data. Our method builds a multi-modal variational autoencoder (MVAE) to project the data of multiple modalities into a common space. Through adversarial learning and cycle-consistency regularization, our method can reduce the distribution difference of each domain on the shared latent representation layer and realize the transfer of knowledge. Extensive experiments are conducted on two public datasets, SEED and SEED-IV, and the results show the superiority of our proposed method. Our work can effectively improve the performance of emotion recognition with a small amount of labelled multi-modal data.
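The abstract names three building blocks: a multi-modal VAE that projects the modalities into a shared latent space, adversarial learning that aligns source and target sessions in that space, and cycle-consistency regularization. The sketch below is only an illustrative reading of the first two blocks, not the authors' released implementation; the second modality is assumed to be eye-movement features (as in SEED-style recordings), and the input dimensions, layer widths, product-of-experts fusion, and loss weights are all assumptions. The cycle-consistency term is omitted.

```python
# Illustrative sketch only: a two-modality VAE with a shared latent space and a
# domain discriminator for adversarial alignment. Dimensions, architecture, and
# loss weights are assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    def __init__(self, in_dim, latent_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)


class Decoder(nn.Module):
    def __init__(self, latent_dim, out_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, out_dim))

    def forward(self, z):
        return self.body(z)


class MultiModalVAE(nn.Module):
    """Projects two modalities into one shared latent space."""

    def __init__(self, eeg_dim=310, eye_dim=33, latent_dim=64):
        super().__init__()
        self.enc_eeg, self.enc_eye = Encoder(eeg_dim, latent_dim), Encoder(eye_dim, latent_dim)
        self.dec_eeg, self.dec_eye = Decoder(latent_dim, eeg_dim), Decoder(latent_dim, eye_dim)

    @staticmethod
    def product_of_experts(mus, logvars):
        # Fuse per-modality Gaussian posteriors with a standard-normal prior expert.
        mus = torch.stack([torch.zeros_like(mus[0])] + mus)
        logvars = torch.stack([torch.zeros_like(logvars[0])] + logvars)
        precision = torch.exp(-logvars)
        joint_var = 1.0 / precision.sum(dim=0)
        joint_mu = (mus * precision).sum(dim=0) * joint_var
        return joint_mu, joint_var.log()

    def forward(self, eeg, eye):
        mu_a, lv_a = self.enc_eeg(eeg)
        mu_b, lv_b = self.enc_eye(eye)
        mu, logvar = self.product_of_experts([mu_a, mu_b], [lv_a, lv_b])
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec_eeg(z), self.dec_eye(z), mu, logvar, z


class DomainDiscriminator(nn.Module):
    """Classifies latents as source vs. target session; the encoders are trained to fool it."""

    def __init__(self, latent_dim=64, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, z):
        return self.body(z)  # logit: 1 = source domain, 0 = target domain


def vae_loss(recon_eeg, eeg, recon_eye, eye, mu, logvar):
    rec = F.mse_loss(recon_eeg, eeg) + F.mse_loss(recon_eye, eye)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld


if __name__ == "__main__":
    # Toy forward/backward pass with random tensors standing in for real features.
    mvae, disc = MultiModalVAE(), DomainDiscriminator()
    eeg, eye = torch.randn(8, 310), torch.randn(8, 33)
    recon_eeg, recon_eye, mu, logvar, z = mvae(eeg, eye)
    adv = F.binary_cross_entropy_with_logits(disc(z), torch.ones(8, 1))  # encoder tries to look "source"
    (vae_loss(recon_eeg, eeg, recon_eye, eye, mu, logvar) + 0.1 * adv).backward()
```

Product-of-experts fusion is one common way to combine modality-specific posteriors in a multi-modal VAE; this record does not say whether MMDA-VAE uses it, so treat that choice, like the rest of the sketch, as an assumption.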
Author Li, Dan
He, Huiguang
Qiu, Shuang
Wang, Yixin
Lu, Bao-Liang
Du, Changde
Author_xml – sequence: 1; fullname: Wang, Yixin; email: wangyxai@hotmail.com; organization: Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190
– sequence: 2; fullname: Qiu, Shuang; email: shuang.qiu@ia.ac.cn; organization: Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190
– sequence: 3; fullname: Li, Dan; email: danliai@hotmail.com; organization: Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190
– sequence: 4; fullname: Du, Changde; email: duchangde@gmail.com; organization: Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190
– sequence: 5; fullname: Lu, Bao-Liang; email: bllu@sjtu.edu.cn; organization: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
– sequence: 6; fullname: He, Huiguang; email: huiguang.he@ia.ac.cn; organization: Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190
CODEN IJASJC
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/JAS.2022.105515
Discipline Engineering
EISSN 2329-9274
EndPage 1626
Genre orig-research
GrantInformation_xml – fundername: National Natural Science Foundation of China
  grantid: 61976209,62020106015,U21A20388
  funderid: 10.13039/501100001809
ISSN 2329-9266
IsPeerReviewed true
IsScholarly true
Issue 9
Language English
PageCount 15
PublicationDate 2022-09-01
PublicationPlace Piscataway
PublicationTitle IEEE/CAA Journal of Automatica Sinica
PublicationTitleAbbrev JAS
PublicationYear 2022
Publisher Chinese Association of Automation (CAA)
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 1612
SubjectTerms Adaptation models
Brain modeling
Calibration
Cycle-consistency
Data models
domain adaptation
Domains
electroencephalograph (EEG)
Electroencephalography
Emotion recognition
Emotions
Human-computer interface
Image reconstruction
Knowledge management
Modal data
multi modality
Regularization
Representations
variational autoencoder
Title Multi-Modal Domain Adaptation Variational Autoencoder for EEG-Based Emotion Recognition
URI https://ieeexplore.ieee.org/document/9754329
https://www.proquest.com/docview/2705852627
Volume 9