DMAE-EEG: A Pretraining Framework for EEG Spatiotemporal Representation Learning
Electroencephalography (EEG) plays a crucial role in neuroscience research and clinical practice, but it remains limited by nonuniform data, noise, and difficulty in labeling. To address these challenges, we develop a pretraining framework named DMAE-EEG, a denoising masked autoencoder for mining generalizable spatiotemporal representation from massive unlabeled EEG.
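The abstract (given in full in the record below) describes the core pretext task: EEG is split into fixed patches, a large fraction of the patches is masked, an asymmetric self-attention encoder processes only the visible patches, and a lightweight decoder reconstructs denoised targets at the masked positions. The PyTorch sketch below is a minimal, hypothetical illustration of that denoising masked-autoencoder idea only; the class name, dimensions, and masking details are assumptions for exposition, it is not the authors' implementation, and the paper's BRTH region-based patching and DPLG pseudo-label generator are not reproduced here.

```python
import torch
import torch.nn as nn


class TinyDenoisingMAE(nn.Module):
    """Toy asymmetric masked autoencoder over EEG patches (hypothetical, for illustration).

    Input patches have shape (batch, num_patches, patch_dim); the encoder attends only
    to visible (unmasked) patches, and the decoder reconstructs every patch position.
    """

    def __init__(self, patch_dim=64, embed_dim=128, enc_depth=2, dec_depth=1, heads=4):
        super().__init__()
        self.embed = nn.Linear(patch_dim, embed_dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True), enc_depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, heads, batch_first=True), dec_depth)
        self.head = nn.Linear(embed_dim, patch_dim)

    def forward(self, noisy_patches, clean_targets, mask_ratio=0.75):
        b, n, _ = noisy_patches.shape
        num_keep = int(n * (1 - mask_ratio))

        # Random per-sample shuffle; the first num_keep indices stay visible.
        idx = torch.rand(b, n, device=noisy_patches.device).argsort(dim=1)
        keep, hidden = idx[:, :num_keep], idx[:, num_keep:]

        x = self.embed(noisy_patches)
        visible = torch.gather(x, 1, keep.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        latent = self.encoder(visible)  # asymmetric: the encoder never sees masked patches

        # Re-insert learnable mask tokens, restore the original patch order, then decode.
        full = torch.cat([latent, self.mask_token.expand(b, n - num_keep, -1)], dim=1)
        restore = idx.argsort(dim=1)
        full = torch.gather(full, 1, restore.unsqueeze(-1).expand(-1, -1, full.size(-1)))
        recon = self.head(self.decoder(full))

        # Denoising objective: match the *clean* targets, scored on masked positions only.
        batch_idx = torch.arange(b, device=noisy_patches.device).unsqueeze(1)
        masked = torch.zeros(b, n, dtype=torch.bool, device=noisy_patches.device)
        masked[batch_idx, hidden] = True
        loss = ((recon - clean_targets) ** 2).mean(dim=-1)[masked].mean()
        return loss, recon
```

A toy call, with random tensors standing in for noisy EEG patches and their denoised pseudo-labels:

```python
model = TinyDenoisingMAE()
noisy = torch.randn(8, 30, 64)                 # 8 trials, 30 patches, 64 samples per patch
clean = noisy - 0.1 * torch.randn_like(noisy)  # stand-in for denoised reconstruction targets
loss, _ = model(noisy, clean)
loss.backward()
```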
| Published in: | IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 10, pp. 17664-17678 |
|---|---|
| Main Authors: | Zhang, Yifan; Yu, Yang; Li, Hao; Wu, Anqi; Chen, Xin; Liu, Jinfang; Zeng, Ling-Li; Hu, Dewen |
| Format: | Journal Article |
| Language: | English |
| Published: | United States: IEEE, 01.10.2025 |
| Subjects: | |
| ISSN: | 2162-237X, 2162-2388 |
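The abstract in the record below reports signal-quality results as normalized mean squared error (nMSE) and classification results as balanced accuracy. As a quick reference for how such metrics are typically computed, here is a small NumPy sketch; the exact normalization convention used in the paper is not stated in this record, so dividing by the mean power of the reference signal is an assumption.

```python
import numpy as np


def nmse(reconstructed: np.ndarray, reference: np.ndarray) -> float:
    """MSE normalized by the mean power of the reference signal (assumed convention)."""
    return float(np.mean((reconstructed - reference) ** 2) / np.mean(reference ** 2))


def balanced_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean per-class recall; insensitive to class imbalance across 2-6 class tasks."""
    classes = np.unique(y_true)
    per_class_recall = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(per_class_recall))
```

A relative nMSE reduction such as the reported 27.78%-50.00% is presumably computed as (nmse_baseline - nmse_model) / nmse_baseline per corruption level; the record itself does not spell out the baseline pairing.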
| Abstract | Electroencephalography (EEG) plays a crucial role in neuroscience research and clinical practice, but it remains limited by nonuniform data, noise, and difficulty in labeling. To address these challenges, we develop a pretraining framework named DMAE-EEG, a denoising masked autoencoder for mining generalizable spatiotemporal representation from massive unlabeled EEG. First, we propose a novel brain region topological heterogeneity (BRTH) division method to partition the nonuniform data into fixed patches based on neuroscientific priors. Second, we design a denoised pseudo-label generator (DPLG), which utilizes a denoising reconstruction pretext task to enable the learning of generalizable representations from massive unlabeled EEG, suppressing the influence of noise and artifacts. Furthermore, we utilize an asymmetric autoencoder with self-attention as the backbone in the proposed DMAE-EEG, which captures long-range spatiotemporal dependencies and interactions from unlabeled EEG data across 14 public datasets. The proposed DMAE-EEG is validated on both generative (signal quality enhancement) and discriminative tasks (motion intention recognition). In the quality enhancement, DMAE-EEG outperforms existing statistical methods with normalized mean squared error (nMSE) reduction of 27.78%-50.00% under corruption levels of 25%, 50%, and 75%, respectively. In motion intention recognition, DMAE-EEG achieves a relative improvement of 2.71%-6.14% in intrasession classification balanced accuracy across 2-6 class motor execution and imagery tasks, outperforming state-of-the-art methods. Overall, the results suggest that the pretraining framework DMAE-EEG can capture generalizable spatiotemporal representations from massive unlabeled EEG and enhance the knowledge transferability across sessions, subjects, and tasks in various downstream scenarios, advancing EEG-aided diagnosis and brain-computer communication and control, and other clinical practice. |
|---|---|
| Author | Zhang, Yifan; Yu, Yang; Li, Hao; Wu, Anqi; Chen, Xin; Liu, Jinfang; Zeng, Ling-Li; Hu, Dewen |
| Author_xml | 1. Zhang, Yifan (ORCID 0000-0002-4671-723X), College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China; 2. Yu, Yang (ORCID 0000-0002-8967-0427), College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China; 3. Li, Hao (ORCID 0000-0002-9542-787X), College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China; 4. Wu, Anqi, College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China; 5. Chen, Xin (ORCID 0000-0002-0672-0207), Department of Neurosurgery, Xiangya Hospital, National Clinical Medical Research Center for Geriatric Diseases, Central South University, Changsha, China; 6. Liu, Jinfang (ORCID 0000-0001-6986-1173), Department of Neurosurgery, Xiangya Hospital, National Clinical Medical Research Center for Geriatric Diseases, Central South University, Changsha, China; 7. Zeng, Ling-Li (ORCID 0000-0002-0515-256X, zengphd@nudt.edu.cn), College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China; 8. Hu, Dewen (ORCID 0000-0001-7357-0053, dwhu@nudt.edu.cn), College of Intelligence Science and Technology, National University of Defense Technology, Changsha, China |
| BackLink | https://www.ncbi.nlm.nih.gov/pubmed/40601454 (view this record in MEDLINE/PubMed) |
| CODEN | ITNNAL |
| ContentType | Journal Article |
| DOI | 10.1109/TNNLS.2025.3581991 |
| DatabaseName | IEEE Xplore (IEEE) IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Electronic Library (IEL) CrossRef Medline MEDLINE MEDLINE (Ovid) MEDLINE MEDLINE PubMed MEDLINE - Academic |
| DatabaseTitle | CrossRef MEDLINE Medline Complete MEDLINE with Full Text PubMed MEDLINE (Ovid) MEDLINE - Academic |
| DatabaseTitleList | MEDLINE MEDLINE - Academic |
| Discipline | Computer Science |
| EISSN | 2162-2388 |
| EndPage | 17678 |
| ExternalDocumentID | 40601454 10_1109_TNNLS_2025_3581991 11062976 |
| Genre | orig-research Journal Article |
| GrantInformation_xml | Science and Technology Innovation Program of Hunan Province (grants 2023RC1004, 2024QK2006; funder ID 10.13039/501100019081); STI 2030-Major Projects (grant 2022ZD0208903); National Natural Science Foundation of China (grants U24A20339, 62036013; funder ID 10.13039/501100001809) |
| ISICitedReferencesCount | 1 |
| ISSN | 2162-237X 2162-2388 |
| IsPeerReviewed | false |
| IsScholarly | true |
| Issue | 10 |
| Language | English |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| ORCID | 0000-0002-8967-0427 0000-0002-9542-787X 0000-0002-4671-723X 0000-0001-7357-0053 0000-0002-0515-256X 0000-0002-0672-0207 0000-0001-6986-1173 |
| PMID | 40601454 |
| PQID | 3226713554 |
| PQPubID | 23479 |
| PageCount | 15 |
| ParticipantIDs | ieee_primary_11062976 proquest_miscellaneous_3226713554 crossref_primary_10_1109_TNNLS_2025_3581991 pubmed_primary_40601454 |
| PublicationCentury | 2000 |
| PublicationDate | 2025-10-01 |
| PublicationDateYYYYMMDD | 2025-10-01 |
| PublicationDate_xml | – month: 10 year: 2025 text: 2025-10-01 day: 01 |
| PublicationDecade | 2020 |
| PublicationPlace | United States |
| PublicationPlace_xml | – name: United States |
| PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems |
| PublicationTitleAbbrev | TNNLS |
| PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
| PublicationYear | 2025 |
| Publisher | IEEE |
| Publisher_xml | – name: IEEE |
| Snippet | Electroencephalography (EEG) plays a crucial role in neuroscience research and clinical practice, but it remains limited by nonuniform data, noise, and... |
| SourceID | proquest pubmed crossref ieee |
| SourceType | Aggregation Database Index Database Publisher |
| StartPage | 17664 |
| SubjectTerms | Algorithms Autoencoders Brain - physiology Brain modeling Databases, Factual Decoding Electrodes Electroencephalography Electroencephalography (EEG) Electroencephalography - methods Feature extraction Humans Machine Learning masked autoencoder motion intention recognition Motors Neural Networks, Computer Noise Representation learning Signal Processing, Computer-Assisted signal quality enhancement Signal-To-Noise Ratio Spatio-Temporal Analysis Spatiotemporal phenomena |
| Title | DMAE-EEG: A Pretraining Framework for EEG Spatiotemporal Representation Learning |
| URI | https://ieeexplore.ieee.org/document/11062976 https://www.ncbi.nlm.nih.gov/pubmed/40601454 https://www.proquest.com/docview/3226713554 |
| Volume | 36 |