Multi-Scale Masked Autoencoders for Cross-Session Emotion Recognition

Published in: IEEE Transactions on Neural Systems and Rehabilitation Engineering, Volume 32, pp. 1637-1646
Main authors: Pang, Miaoqi; Wang, Hongtao; Huang, Jiayang; Vong, Chi-Man; Zeng, Zhiqiang; Chen, Chuangquan
Format: Journal Article
Language: English
Published: United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2024
ISSN: 1534-4320 (print); 1558-0210 (electronic)
Abstract:
Affective brain-computer interfaces (aBCIs) have found widespread application, with remarkable advances in using electroencephalogram (EEG) technology for emotion recognition. However, the time-consuming process of annotating EEG data, inherent individual differences, the non-stationary characteristics of EEG signals, and noise artifacts in EEG data collection pose formidable challenges for developing subject-specific cross-session emotion recognition models. To address these challenges simultaneously, we propose a unified pre-training framework based on multi-scale masked autoencoders (MSMAE), which uses large-scale unlabeled EEG signals from multiple subjects and sessions to extract noise-robust, subject-invariant, and temporal-invariant features. We then fine-tune the learned generalized features with only a small amount of labeled data from a specific subject for personalization, enabling cross-session emotion recognition. Our framework emphasizes: 1) multi-scale representation to capture diverse aspects of EEG signals and obtain comprehensive information; 2) an improved masking mechanism for robust channel-level representation learning, addressing missing-channel issues while preserving inter-channel relationships; and 3) invariance learning for regional correlations in the spatial-level representation, minimizing inter-subject and inter-session variance. With these designs, the proposed MSMAE can decode emotional states from a different session of EEG data during the testing phase. Extensive experiments on two publicly available datasets, SEED and SEED-IV, demonstrate that MSMAE consistently achieves stable results and outperforms competitive baseline methods in cross-session emotion recognition.
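As context for the pre-training stage described in the abstract, the sketch below illustrates channel-level masked autoencoding on per-channel EEG features in PyTorch. Everything in it is an illustrative assumption rather than the authors' implementation: the 62-channel, 5-band feature layout, the plain Transformer backbone, the 50% mask ratio, and all module names are stand-ins, and the multi-scale and invariance-learning components of MSMAE are omitted for brevity.

```python
# Minimal sketch of channel-masked autoencoder pre-training (assumptions:
# 62 EEG channels, 5 band features per channel, plain Transformer backbone).
import torch
import torch.nn as nn

class ChannelMAE(nn.Module):
    def __init__(self, n_channels=62, feat_dim=5, d_model=64, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(feat_dim, d_model)             # per-channel features -> tokens
        self.pos = nn.Parameter(torch.zeros(1, n_channels, d_model))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=4)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(d_model, feat_dim)              # reconstruct channel features

    def forward(self, x):
        # x: (batch, n_channels, feat_dim), e.g. band-wise differential-entropy features
        B, C, _ = x.shape
        D = self.pos.size(-1)
        tokens = self.embed(x) + self.pos
        n_keep = int(C * (1 - self.mask_ratio))
        keep = torch.rand(B, C, device=x.device).argsort(1)[:, :n_keep]
        visible = torch.gather(tokens, 1, keep[..., None].expand(-1, -1, D))
        latent = self.encoder(visible)                        # encode visible channels only
        full = self.mask_token.expand(B, C, D).clone()        # masked slots = learnable token
        full.scatter_(1, keep[..., None].expand(-1, -1, D), latent)
        recon = self.head(self.decoder(full + self.pos))
        masked = torch.ones(B, C, device=x.device).scatter_(1, keep, 0.0)
        per_chan = ((recon - x) ** 2).mean(-1)                # MSE per channel
        return (per_chan * masked).sum() / masked.sum()       # loss on masked channels only

model = ChannelMAE()
loss = model(torch.randn(8, 62, 5))   # 8 unlabeled segments from any subject/session
loss.backward()
```

Masking whole channels, rather than time patches, mirrors the abstract's point about coping with missing channels: the decoder must infer a dropped channel from the surviving ones, which forces the encoder to internalize inter-channel relationships.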
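The third component, invariance learning for regional correlations, could plausibly take a form like the sketch below: channels are pooled into brain regions and the region-correlation structure is encouraged to agree across two sessions. The channel-to-region map, the correlation statistic, and the mean-squared discrepancy penalty are all hypothetical stand-ins; this is not the paper's actual formulation.

```python
# Hedged sketch of one plausible invariance-learning loss over regional
# correlations (region grouping and penalty are illustrative assumptions).
import torch

def region_corr(tokens, region_ids, n_regions):
    # tokens: (batch, n_channels, d). Average channels within each region,
    # then take the correlation matrix between region representations.
    B, _, D = tokens.shape
    regions = torch.zeros(B, n_regions, D, device=tokens.device)
    regions.index_add_(1, region_ids, tokens)                 # sum channels per region
    counts = torch.bincount(region_ids, minlength=n_regions).clamp(min=1)
    regions = regions / counts.view(1, -1, 1)                 # region means
    z = (regions - regions.mean(-1, keepdim=True)) / regions.std(-1, keepdim=True).clamp(min=1e-6)
    return z @ z.transpose(1, 2) / z.size(-1)                 # (batch, n_regions, n_regions)

def invariance_loss(tokens_a, tokens_b, region_ids, n_regions=8):
    # Encourage the region-correlation structure to match across two sessions.
    ca = region_corr(tokens_a, region_ids, n_regions)
    cb = region_corr(tokens_b, region_ids, n_regions)
    return ((ca - cb) ** 2).mean()

region_ids = torch.randint(0, 8, (62,))           # hypothetical channel-to-region map
ta, tb = torch.randn(4, 62, 64), torch.randn(4, 62, 64)
loss = invariance_loss(ta, tb, region_ids)
```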
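Finally, the personalization step (fine-tuning the pre-trained encoder with a small labeled set from a single subject) might look like the sketch below, which reuses the ChannelMAE encoder from the pre-training sketch above. The linear readout, channel mean-pooling, and optimizer settings are illustrative assumptions; the paper's fine-tuning details may differ.

```python
# Hedged sketch of subject-specific fine-tuning for cross-session recognition.
import torch
import torch.nn as nn

class EmotionClassifier(nn.Module):
    def __init__(self, pretrained: ChannelMAE, n_classes=3):  # SEED: 3 classes; SEED-IV: 4
        super().__init__()
        self.embed, self.pos, self.encoder = pretrained.embed, pretrained.pos, pretrained.encoder
        self.head = nn.Linear(pretrained.head.in_features, n_classes)

    def forward(self, x):                                     # x: (batch, n_channels, feat_dim)
        tokens = self.encoder(self.embed(x) + self.pos)       # no masking at fine-tuning time
        return self.head(tokens.mean(dim=1))                  # mean-pool channels -> logits

clf = EmotionClassifier(model)                                # 'model' from the pre-training sketch
opt = torch.optim.AdamW(clf.parameters(), lr=1e-4)
x, y = torch.randn(16, 62, 5), torch.randint(0, 3, (16,))    # small labeled set, one subject
opt.zero_grad()
nn.functional.cross_entropy(clf(x), y).backward()
opt.step()
```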
Authors and affiliations:
1. Miaoqi Pang, School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
2. Hongtao Wang (ORCID: 0000-0002-6564-5753), School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
3. Jiayang Huang, School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
4. Chi-Man Vong (ORCID: 0000-0001-7997-8279), Department of Computer and Information Science, University of Macau, Macau, China
5. Zhiqiang Zeng (ORCID: 0000-0002-9544-5605), School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
6. Chuangquan Chen (ORCID: 0000-0002-3811-296X; email: chenchuangquan87@163.com), School of Electronics and Information Engineering, Wuyi University, Jiangmen, China
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024
DOI: 10.1109/TNSRE.2024.3389037
Discipline: Occupational Therapy & Rehabilitation
Genre: Original research; Research Support, U.S. Gov't, Non-P.H.S.; Research Support, Non-U.S. Gov't; Journal Article
Funding:
- Projects for International Scientific and Technological Cooperation of Guangdong Province (2023A0505050144)
- Hong Kong and Macau Joint Research and Development Fund of Wuyi University (2021WGALH19)
- Guangdong Basic and Applied Basic Research Foundation (2023A1515011978; 2020A1515111154)
- National Natural Science Foundation of China (62201402)
- Department of Education of Guangdong Province; Educational Commission of Guangdong Province (2021KTSCX136)
License: Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/legalcode)
Open access: https://ieeexplore.ieee.org/document/10500357
PMID: 38619940
Subject terms: Adult; Algorithms; Brain modeling; Brain-Computer Interfaces; cross-session; Data collection; Data mining; Data models; EEG; EEG-based emotion recognition; Electroencephalography; Electroencephalography - methods; Emotion recognition; Emotional factors; Emotions; Emotions - physiology; Feature extraction; Female; Human-computer interface; Humans; Invariants; Machine Learning; Male; Neural Networks, Computer; Representations; Robustness; self-supervised learning; Task analysis; transformer