EEG generalizable representations learning via masked fractional Fourier domain modeling

Detailed bibliography
Published in: Applied Soft Computing, Volume 170, p. 112731
Main authors: Zhang, Shubin; An, Dong; Liu, Jincun; Wei, Yaoguang
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.02.2025
ISSN:1568-4946
Online access: Get full text
Abstract Deep learning methods currently represent the state of the art (SOTA) in electroencephalography (EEG) decoding, primarily through supervised models. However, most supervised methods are task-specific and cannot produce generalizable latent features for use across different BCI paradigms. Moreover, as subjects engage in diverse brain–computer interaction tasks, the distribution of the recorded EEG data varies with the specific cognitive paradigm involved, and collecting data and training a model for each task is time-consuming. One potential solution is to construct a pre-trained model capable of transferring knowledge across tasks. To improve the generalization ability of pre-trained models, we propose a novel masked autoencoder based on fractional Fourier domain reconstruction, denoted Masked Fractional Fourier Domain Modeling (MFrFM), for learning generalizable time–frequency features. We systematically explore the effects of different degradation methods used within the denoising autoencoder to enhance the robustness of the pre-trained model, and we examine the impact of various masking strategies on model performance. Our experiments demonstrate that the pre-trained MFrFM effectively captures generalizable representations. We also conduct a comprehensive evaluation of fine-tuning performance through both cross-task and intra-task experiments: MFrFM achieves a maximum accuracy of 98.09% when transferring from MI to SSVEP and 79.76% when transferring from SSVEP to MI. The code is available at https://github.com/zshubin/MFrFM-for-cross-task-EEG-pre-training.
Highlights: • An EEG generalizable representation learning model for cross-task transfer. • A pre-training model based on masked fractional Fourier domain modeling. • A masking strategy specific to EEG reconstruction-based pre-training. • A denoising mechanism based on various degradation methods.
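The two ingredients the abstract names, a fractional Fourier transform of the signal and a masking step whose reconstruction target lives in the fractional domain, can be sketched as follows. This is a minimal illustration only, not the authors' implementation (their frFT definition, masking strategy, and network are in the linked repository); `dfrft_matrix`, the toy signal, and the 50% zero-masking are all assumptions for the sketch, and the fractionalization used here (spectral projectors of the DFT) is just one of several discrete-frFT constructions in the literature.

```python
import numpy as np

def dfrft_matrix(n, a):
    """Order-`a` fractional power of the unitary n-point DFT matrix.

    Built from the DFT's four spectral projectors (its eigenvalues are
    exp(-1j*pi*j/2) for j = 0..3), so orders compose additively:
    dfrft_matrix(n, a) @ dfrft_matrix(n, b) ~= dfrft_matrix(n, a + b),
    a = 0 gives the identity, and a = 1 recovers the ordinary DFT.
    """
    F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)             # unitary DFT matrix
    powers = [np.linalg.matrix_power(F, m) for m in range(4)]  # F^0 .. F^3
    out = np.zeros((n, n), dtype=complex)
    for j in range(4):
        lam = np.exp(-1j * np.pi * j / 2)                      # j-th eigenvalue
        # Spectral projector onto the lam-eigenspace: (1/4) * sum_m lam^-m F^m
        proj = sum(lam ** (-m) * powers[m] for m in range(4)) / 4
        out += np.exp(-1j * np.pi * j * a / 2) * proj          # apply lam^a there
    return out

# Toy single-channel "EEG" segment: two sinusoids, 64 samples (assumed data).
n = 64
t = np.arange(n) / n
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 24 * t)

# Masked-reconstruction setup: zero-mask roughly half the samples at random,
# and take the clean signal's fractional-domain coefficients as the target a
# masked autoencoder would be trained to reconstruct from the masked input.
rng = np.random.default_rng(0)
masked = np.where(rng.random(n) < 0.5, 0.0, x)
target = dfrft_matrix(n, 0.5) @ x   # order 0.5: between time and frequency
```

Because order 0 is the pure time domain and order 1 the pure frequency domain, intermediate orders expose mixed time–frequency structure, which is the property a fractional-domain reconstruction target is meant to exploit.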
ArticleNumber 112731
Author Liu, Jincun
Wei, Yaoguang
An, Dong
Zhang, Shubin
Author_xml – sequence: 1
  givenname: Shubin
  surname: Zhang
  fullname: Zhang, Shubin
  email: zhangshubin@ouc.edu.cn
  organization: Fisheries College, Ocean University of China, Qingdao, Shandong, 266003, China
– sequence: 2
  givenname: Dong
  surname: An
  fullname: An, Dong
  email: andong@cau.edu.cn
  organization: National Innovation Center for Digital Fishery, Beijing, 100083, China
– sequence: 3
  givenname: Jincun
  surname: Liu
  fullname: Liu, Jincun
  email: liujincun@cau.edu.cn
  organization: National Innovation Center for Digital Fishery, Beijing, 100083, China
– sequence: 4
  givenname: Yaoguang
  surname: Wei
  fullname: Wei, Yaoguang
  email: wyg@cau.edu.cn
  organization: National Innovation Center for Digital Fishery, Beijing, 100083, China
BookMark eNp9kMtOwzAQRb0oEm3hB1j5BxL8yFNig6pSkCqxAYmdNbHHlUviVHaoBF9PorBi0dUs5p7R3LMiC997JOSOs5QzXtwfU4i9TgUTecq5KCVfkCXPiyrJ6qy4JqsYj2wM1qJako_tdkcP6DFA636gaZEGPAWM6AcYXO8jbRGCd_5Azw5oB_ETDbUB9LSFltr-KzgM1PQdOE-73mA7pm_IlYU24u3fXJP3p-3b5jnZv-5eNo_7REvGhsRiycX4o6hzFA2zoHUj88JUouKYWWkQoDC5zCWrTVaWVdMYmfOiaKSuLW_kmoj5rg59jAGtOgXXQfhWnKnJhzqqyYeafKjZxwhV_yDt5rpDANdeRh9mFMdS57G4itqh12hcQD0o07tL-C9bgYGK
CitedBy_id crossref_primary_10_1016_j_eswa_2025_129603
Cites_doi 10.1109/ICMLA55696.2022.00208
10.1109/JBHI.2022.3213171
10.1016/j.asoc.2023.110079
10.1109/JBHI.2023.3304646
10.1088/1741-2552/ab260c
10.1109/MLSP.2019.8918693
10.1088/1741-2552/abca18
10.1109/TNSRE.2020.3006180
10.1038/nature17435
10.1145/3534678.3539178
10.1109/TLA.2020.9099676
10.1109/CVPR52688.2022.01553
10.1145/3503161.3548243
10.1109/TAFFC.2022.3170428
10.1109/TNSRE.2018.2803066
10.1109/ICASSP49357.2023.10097183
10.1109/TCYB.2019.2905157
10.1109/ACCESS.2020.2994593
10.1109/ACCESS.2021.3078534
10.1088/1741-2560/12/4/046006
10.1007/978-3-031-16437-8_38
10.1109/JBHI.2024.3373019
ContentType Journal Article
Copyright 2025
Copyright_xml – notice: 2025
DBID AAYXX
CITATION
DOI 10.1016/j.asoc.2025.112731
DatabaseName CrossRef
DatabaseTitle CrossRef
DatabaseTitleList
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
ExternalDocumentID 10_1016_j_asoc_2025_112731
S1568494625000420
GrantInformation_xml – fundername: National Key Research and Development Program of China
  grantid: 2022YFD2001704
GroupedDBID --K
--M
.DC
.~1
0R~
1B1
1~.
1~5
23M
4.4
457
4G.
53G
5GY
5VS
6J9
7-5
71M
8P~
AABNK
AACTN
AAEDT
AAEDW
AAIKJ
AAKOC
AALRI
AAOAW
AAQFI
AAQXK
AAXKI
AAXUO
AAYFN
ABBOA
ABFNM
ABFRF
ABJNI
ABMAC
ABWVN
ABXDB
ACDAQ
ACGFO
ACGFS
ACNNM
ACRLP
ACRPL
ACZNC
ADBBV
ADEZE
ADJOM
ADMUD
ADNMO
ADTZH
AEBSH
AECPX
AEFWE
AEIPS
AEKER
AENEX
AFJKZ
AFKWA
AFTJW
AGHFR
AGUBO
AGYEJ
AHJVU
AHZHX
AIALX
AIEXJ
AIKHN
AITUG
AJOXV
AKRWK
ALMA_UNASSIGNED_HOLDINGS
AMFUW
AMRAJ
ANKPU
AOUOD
ASPBG
AVWKF
AXJTR
AZFZN
BJAXD
BKOJK
BLXMC
CS3
EBS
EFJIC
EJD
EO8
EO9
EP2
EP3
F5P
FDB
FEDTE
FGOYB
FIRID
FNPLU
FYGXN
G-Q
GBLVA
GBOLZ
HVGLF
HZ~
IHE
J1W
JJJVA
KOM
M41
MO0
N9A
O-L
O9-
OAUVE
OZT
P-8
P-9
P2P
PC.
Q38
R2-
RIG
ROL
RPZ
SDF
SDG
SES
SEW
SPC
SPCBC
SST
SSV
SSZ
T5K
UHS
UNMZH
~G-
9DU
AATTM
AAYWO
AAYXX
ACLOT
ACVFH
ADCNI
AEUPX
AFPUW
AGQPQ
AIGII
AIIUN
AKBMS
AKYEP
APXCP
CITATION
EFKBS
EFLBG
~HD
ID FETCH-LOGICAL-c300t-fe712273295e2b0faccb356d8281e4f3deaa6d535309d4778bbd35166b3c9f1b3
ISICitedReferencesCount 1
ISICitedReferencesURI http://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=Summon&SrcAuth=ProQuest&DestLinkType=CitingArticles&DestApp=WOS_CPL&KeyUT=001417868600001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
ISSN 1568-4946
IngestDate Sat Nov 29 08:09:04 EST 2025
Tue Nov 18 22:18:59 EST 2025
Sat Feb 15 15:52:05 EST 2025
IsPeerReviewed true
IsScholarly true
Keywords Self-supervised learning (SSL)
Motor imagery (MI)
Masked autoencoder (MAE)
Steady-state visual evoked potential (SSVEP)
Fractional Fourier transform (frFT)
Language English
LinkModel OpenURL
MergedId FETCHMERGED-LOGICAL-c300t-fe712273295e2b0faccb356d8281e4f3deaa6d535309d4778bbd35166b3c9f1b3
ParticipantIDs crossref_primary_10_1016_j_asoc_2025_112731
crossref_citationtrail_10_1016_j_asoc_2025_112731
elsevier_sciencedirect_doi_10_1016_j_asoc_2025_112731
PublicationCentury 2000
PublicationDate February 2025
2025-02-00
PublicationDateYYYYMMDD 2025-02-01
PublicationDate_xml – month: 02
  year: 2025
  text: February 2025
PublicationDecade 2020
PublicationTitle Applied soft computing
PublicationYear 2025
Publisher Elsevier B.V
Publisher_xml – name: Elsevier B.V
References Cheng, Fu, Li, Zhang, Huang, Peng, Chen, Fan (b2) 2023; 136
J. Chen, Y. Yang, T. Yu, Y. Fan, X. Mo, C. Yang, Brainnet: Epileptic wave detection from seeg with hierarchical graph diffusion learning, in: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 2741–2751.
Li, Chen, Li, Fu, Wu, Ji, Zhou, Niu, Shi, Zheng (b25) 2023; 14
Cheng, Zhang, Qin, Wang, Wu, Song (b32) 2024; 28
Banville, Chehab, Hyvärinen, Engemann, Gramfort (b22) 2021; 18
Brown, Mann, Ryder, Subbiah (b21) 2020
R. Li, Y. Wang, W. Zheng, B. Lu, A Multi-View Spectral-Spatial-Temporal Masked Autoencoder for Decoding Emotions with Self-Supervised Learning, in: Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 6–14.
K. He, X. Chen, S. Xie, Y. Li, P. Dollár, R. Girshick, Masked Autoencoders Are Scalable Vision Learners, in: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, New Orleans, LA, USA, 2022, pp. 15979–15988.
Devlin, Chang, Lee, Toutanova (b15) 2019
Li, Yu, Gu, Wu, Li, Jin (b5) 2018; 26
Li, Luo, Zhang, Zhang, Zhang, Lo (b26) 2023; 27
Emadeldeen (b33) 2021
Bouton, Shaikhouni, Annetta, Bockbrader, Friedenberg, Nielson, Sharma, Sederberg, Glenn, Mysiw, Morgan, Deogaonkar, Rezai (b1) 2016; 533
Yuan, Chen, Wang, Gao, Gao (b7) 2015; 12
Zhang, Yao, Chen, Wang, Chang, Liu (b27) 2020; 50
H. Banville, I. Albuquerque, A. Hyvärinen, G. Moffat, D.-A. Engemann, A. Gramfort, Self-Supervised Representation Learning from Electroencephalography Signals, in: 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing, MLSP, Pittsburgh, PA, USA, 2019, pp. 1–6.
Liu (b8) 2023; 35
V. Kumar, L. Reddy, S.K. Sharma, K. Dadi, C. Yarra, R.S. Bapi, S. Rajendran, mulEEG: a multi-view representation learning on EEG signals, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2022, pp. 398–407.
Wu, Ye, Gu, Zhang, Wang, He (b17) 2023
Deny, Cheon, Son, Choi (b3) 2023; 27
Y. Nie, H. Nguyen Nam, S. Phanwadee, K. Jayant, A Time Series is Worth 64 Words: Long-term Forecasting with Transformers, in: International Conference on Learning Representations, 2023.
Panwar, Rad, Jung, Huang (b29) 2020; 28
R. Peng, et al., WAVELET2VEC: A Filter Bank Masked Autoencoder for EEG-Based Seizure Subtype Classification, in: IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, Rhodes Island, Greece, 2023, pp. 1–5.
Mh (b34) 2019; 8
Kostas, Aroca-Ouellette, Rudzicz (b10) 2021; 15
Tian (b19) 2022
Jiahao, Wei, Xiaohang Zhan (b20) 2023
Weng, Gu, Guo, Ma, Yang, Liu, Chen (b14) 2024
Zhao, Dong, Zhou (b28) 2020; 8
Shin, Sun, Lee, Kim (b30) 2021; 9
Roy, Banville, Albuquerque, Gramfort, Falk, Faubert (b6) 2019; 16
İ.Y. Potter, G. Zerveas, C. Eickhoff, D. Duncan, Unsupervised Multivariate Time-Series Transformers for Seizure Identification on EEG, in: 2022 21st IEEE International Conference on Machine Learning and Applications, ICMLA, Nassau, Bahamas, 2022, pp. 1304–1311.
Siyuan, Di, Fang, Zelin (b18) 2023
Damian da Silva, da Cruz Júnior, Galvão Pinheiro Júnior (b4) 2020; 18
10.1016/j.asoc.2025.112731_b12
10.1016/j.asoc.2025.112731_b13
10.1016/j.asoc.2025.112731_b11
Bouton (10.1016/j.asoc.2025.112731_b1) 2016; 533
Damian da Silva (10.1016/j.asoc.2025.112731_b4) 2020; 18
Li (10.1016/j.asoc.2025.112731_b26) 2023; 27
10.1016/j.asoc.2025.112731_b31
Yuan (10.1016/j.asoc.2025.112731_b7) 2015; 12
Zhang (10.1016/j.asoc.2025.112731_b27) 2020; 50
Mh (10.1016/j.asoc.2025.112731_b34) 2019; 8
Devlin (10.1016/j.asoc.2025.112731_b15) 2019
Brown (10.1016/j.asoc.2025.112731_b21) 2020
Zhao (10.1016/j.asoc.2025.112731_b28) 2020; 8
Banville (10.1016/j.asoc.2025.112731_b22) 2021; 18
Weng (10.1016/j.asoc.2025.112731_b14) 2024
Cheng (10.1016/j.asoc.2025.112731_b32) 2024; 28
Liu (10.1016/j.asoc.2025.112731_b8) 2023; 35
Cheng (10.1016/j.asoc.2025.112731_b2) 2023; 136
10.1016/j.asoc.2025.112731_b16
Li (10.1016/j.asoc.2025.112731_b25) 2023; 14
10.1016/j.asoc.2025.112731_b9
10.1016/j.asoc.2025.112731_b23
Li (10.1016/j.asoc.2025.112731_b5) 2018; 26
10.1016/j.asoc.2025.112731_b24
Jiahao (10.1016/j.asoc.2025.112731_b20) 2023
Tian (10.1016/j.asoc.2025.112731_b19) 2022
Wu (10.1016/j.asoc.2025.112731_b17) 2023
Roy (10.1016/j.asoc.2025.112731_b6) 2019; 16
Shin (10.1016/j.asoc.2025.112731_b30) 2021; 9
Kostas (10.1016/j.asoc.2025.112731_b10) 2021; 15
Emadeldeen (10.1016/j.asoc.2025.112731_b33) 2021
Siyuan (10.1016/j.asoc.2025.112731_b18) 2023
Deny (10.1016/j.asoc.2025.112731_b3) 2023; 27
Panwar (10.1016/j.asoc.2025.112731_b29) 2020; 28
References_xml – volume: 27
  start-page: 5459
  year: 2023
  end-page: 5470
  ident: b3
  article-title: Hierarchical transformer for motor imagery-based brain computer interface
  publication-title: IEEE J. Biomed. Health Inf.
– volume: 15
  year: 2021
  ident: b10
  article-title: BENDR: Using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data
  publication-title: Front. Hum. Neurosci.
– year: 2021
  ident: b33
  article-title: Time-series representation learning via temporal and contextual contrasting
– reference: K. He, X. Chen, S. Xie, Y. Li, P. Dollár, R. Girshick, Masked Autoencoders Are Scalable Vision Learners, in: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, New Orleans, LA, USA, 2022, pp. 15979–15988.
– volume: 533
  start-page: 247
  year: 2016
  end-page: 250
  ident: b1
  article-title: Restoring cortical control of functional movement in a human with quadriplegia
  publication-title: Nature
– volume: 16
  year: 2019
  ident: b6
  article-title: Deep learning-based electroencephalography analysis: a systematic review
  publication-title: J. Neural Eng.
– volume: 14
  start-page: 2512
  year: 2023
  end-page: 2525
  ident: b25
  article-title: GMSS: Graph-based multi-task self-supervised learning for EEG emotion recognition
  publication-title: IEEE Trans. Affect. Comput.
– reference: H. Banville, I. Albuquerque, A. Hyvärinen, G. Moffat, D.-A. Engemann, A. Gramfort, Self-Supervised Representation Learning from Electroencephalography Signals, in: 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing, MLSP, Pittsburgh, PA, USA, 2019, pp. 1–6.
– volume: 9
  start-page: 70639
  year: 2021
  end-page: 70649
  ident: b30
  article-title: Complementary photoplethysmogram synthesis from electrocardiogram using generative adversarial network
  publication-title: IEEE Access
– reference: İ.Y. Potter, G. Zerveas, C. Eickhoff, D. Duncan, Unsupervised Multivariate Time-Series Transformers for Seizure Identification on EEG, in: 2022 21st IEEE International Conference on Machine Learning and Applications, ICMLA, Nassau, Bahamas, 2022, pp. 1304–1311.
– year: 2024
  ident: b14
  article-title: Self-supervised learning for electroencephalogram: A systematic survey
– year: 2023
  ident: b17
  article-title: Denoising masked AutoEncoders helps robust classification
  publication-title: 2023 ICLR
– volume: 28
  start-page: 2687
  year: 2024
  end-page: 2698
  ident: b32
  article-title: MaskCAE: Masked convolutional AutoEncoder via sensor data reconstruction for self-supervised human activity recognition
  publication-title: IEEE J. Biomed. Health Inf.
– year: 2022
  ident: b19
  article-title: Beyond masking: Demystifying token-based pre-training for vision transformers
– volume: 18
  start-page: 1000
  year: 2020
  end-page: 1007
  ident: b4
  article-title: A fast and accurate SSVEP brain machine interface using dry electrodes and high frequency stimuli by employing ensemble learning
  publication-title: IEEE Latin Am. Trans.
– year: 2020
  ident: b21
  article-title: Language models are few-shot learners
  publication-title: NeurIPS
– volume: 28
  start-page: 1720
  year: 2020
  end-page: 1730
  ident: b29
  article-title: Modeling EEG data distribution with a Wasserstein generative adversarial network to predict RSVP events
  publication-title: IEEE Trans. Neural Syst. Rehabil. Eng.
– volume: 12
  year: 2015
  ident: b7
  article-title: Enhancing performances of SSVEP-based brain-computer interfaces via exploiting inter-subject information
  publication-title: J. Neural Eng.
– reference: J. Chen, Y. Yang, T. Yu, Y. Fan, X. Mo, C. Yang, Brainnet: Epileptic wave detection from seeg with hierarchical graph diffusion learning, in: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 2741–2751.
– volume: 136
  year: 2023
  ident: b2
  article-title: Evolutionary computation-based multitask learning network for railway passenger comfort evaluation from EEG signals
  publication-title: Appl. Soft Comput.
– volume: 8
  year: 2019
  ident: b34
  article-title: EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy
  publication-title: Gigascience
– reference: V. Kumar, L. Reddy, S.K. Sharma, K. Dadi, C. Yarra, R.S. Bapi, S. Rajendran, mulEEG: a multi-view representation learning on EEG signals, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, 2022, pp. 398–407.
– reference: R. Li, Y. Wang, W. Zheng, B. Lu, A Multi-View Spectral-Spatial-Temporal Masked Autoencoder for Decoding Emotions with Self-Supervised Learning, in: Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 6–14.
– volume: 50
  start-page: 3033
  year: 2020
  end-page: 3044
  ident: b27
  article-title: Making sense of spatio-temporal preserving representations for EEG-based human intention recognition
  publication-title: IEEE Trans. Cybern.
– volume: 26
  start-page: 563
  year: 2018
  end-page: 572
  ident: b5
  article-title: A hybrid network for ERP detection and analysis based on restricted Boltzmann machine
  publication-title: IEEE Trans. Neural Syst. Rehabil. Eng.
– volume: 35
  start-page: 857
  year: 2023
  end-page: 876
  ident: b8
  article-title: Self-supervised learning: Generative or contrastive
  publication-title: IEEE Trans. Knowl. Data Eng.
– volume: 18
  year: 2021
  ident: b22
  article-title: Uncovering the structure of clinical EEG signals with self-supervised learning
  publication-title: J. Neural Eng.
– year: 2019
  ident: b15
  article-title: BERT: Pre-training of deep bidirectional transformers for language understanding
  publication-title: NAACL
– volume: 8
  start-page: 93907
  year: 2020
  end-page: 93921
  ident: b28
  article-title: Self-supervised learning from multi-sensor data for sleep recognition
  publication-title: IEEE Access
– reference: Y. Nie, H. Nguyen Nam, S. Phanwadee, K. Jayant, A Time Series is Worth 64 Words: Long-term Forecasting with Transformers, in: International Conference on Learning Representations, 2023.
– reference: R. Peng, et al., WAVELET2VEC: A Filter Bank Masked Autoencoder for EEG-Based Seizure Subtype Classification, in: IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, Rhodes Island, Greece, 2023, pp. 1–5.
– year: 2023
  ident: b20
  article-title: Masked frequency modeling for self-supervised visual pre-training
  publication-title: 2023 ICLR
– year: 2023
  ident: b18
  article-title: Architecture-agnostic masked image modeling - From ViT back to CNN
  publication-title: 2023 ICLR
– volume: 27
  start-page: 2647
  year: 2023
  end-page: 2655
  ident: b26
  article-title: MtCLSS: Multi-task contrastive learning for semi-supervised pediatric sleep staging
  publication-title: IEEE J. Biomed. Health Inf.
– ident: 10.1016/j.asoc.2025.112731_b12
  doi: 10.1109/ICMLA55696.2022.00208
– year: 2019
  ident: 10.1016/j.asoc.2025.112731_b15
  article-title: BERT: Pre-training of deep bidirectional transformers for language understanding
– volume: 27
  start-page: 2647
  issue: 6
  year: 2023
  ident: 10.1016/j.asoc.2025.112731_b26
  article-title: MtCLSS: Multi-task contrastive learning for semi-supervised pediatric sleep staging
  publication-title: IEEE J. Biomed. Health Inf.
  doi: 10.1109/JBHI.2022.3213171
– volume: 136
  year: 2023
  ident: 10.1016/j.asoc.2025.112731_b2
  article-title: Evolutionary computation-based multitask learning network for railway passenger comfort evaluation from EEG signals
  publication-title: Appl. Soft Comput.
  doi: 10.1016/j.asoc.2023.110079
– volume: 27
  start-page: 5459
  issue: 11
  year: 2023
  ident: 10.1016/j.asoc.2025.112731_b3
  article-title: Hierarchical transformer for motor imagery-based brain computer interface
  publication-title: IEEE J. Biomed. Health Inf.
  doi: 10.1109/JBHI.2023.3304646
– ident: 10.1016/j.asoc.2025.112731_b31
– volume: 16
  issue: 5
  year: 2019
  ident: 10.1016/j.asoc.2025.112731_b6
  article-title: Deep learning-based electroencephalography analysis: a systematic review
  publication-title: J. Neural Eng.
  doi: 10.1088/1741-2552/ab260c
– ident: 10.1016/j.asoc.2025.112731_b11
  doi: 10.1109/MLSP.2019.8918693
– volume: 18
  year: 2021
  ident: 10.1016/j.asoc.2025.112731_b22
  article-title: Uncovering the structure of clinical EEG signals with self-supervised learning
  publication-title: J. Neural Eng.
  doi: 10.1088/1741-2552/abca18
– volume: 15
  issue: 653659
  year: 2021
  ident: 10.1016/j.asoc.2025.112731_b10
  article-title: BENDR: Using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data
  publication-title: Front. Hum. Neurosci.
– year: 2023
  ident: 10.1016/j.asoc.2025.112731_b18
  article-title: Architecture-agnostic masked image modeling - From ViT back to CNN
– volume: 28
  start-page: 1720
  issue: 8
  year: 2020
  ident: 10.1016/j.asoc.2025.112731_b29
  article-title: Modeling EEG data distribution with a Wasserstein generative adversarial network to predict RSVP events
  publication-title: IEEE Trans. Neural Syst. Rehabil. Eng.
  doi: 10.1109/TNSRE.2020.3006180
– volume: 533
  start-page: 247
  year: 2016
  ident: 10.1016/j.asoc.2025.112731_b1
  article-title: Restoring cortical control of functional movement in a human with quadriplegia
  publication-title: Nature
  doi: 10.1038/nature17435
– year: 2023
  ident: 10.1016/j.asoc.2025.112731_b17
  article-title: Denoising masked AutoEncoders helps robust classification
– ident: 10.1016/j.asoc.2025.112731_b23
  doi: 10.1145/3534678.3539178
– volume: 18
  start-page: 1000
  issue: 06
  year: 2020
  ident: 10.1016/j.asoc.2025.112731_b4
  article-title: A fast and accurate SSVEP brain machine interface using dry electrodes and high frequency stimuli by employing ensemble learning
  publication-title: IEEE Latin Am. Trans.
  doi: 10.1109/TLA.2020.9099676
– year: 2022
  ident: 10.1016/j.asoc.2025.112731_b19
– year: 2023
  ident: 10.1016/j.asoc.2025.112731_b20
  article-title: Masked frequency modeling for self-supervised visual pre-training
– ident: 10.1016/j.asoc.2025.112731_b16
  doi: 10.1109/CVPR52688.2022.01553
– ident: 10.1016/j.asoc.2025.112731_b9
  doi: 10.1145/3503161.3548243
– volume: 14
  start-page: 2512
  issue: 3
  year: 2023
  ident: 10.1016/j.asoc.2025.112731_b25
  article-title: GMSS: Graph-based multi-task self-supervised learning for EEG emotion recognition
  publication-title: IEEE Trans. Affect. Comput.
  doi: 10.1109/TAFFC.2022.3170428
– year: 2020
  ident: 10.1016/j.asoc.2025.112731_b21
  article-title: Language models are few-shot learners
– year: 2021
  ident: 10.1016/j.asoc.2025.112731_b33
– volume: 26
  start-page: 563
  issue: 3
  year: 2018
  ident: 10.1016/j.asoc.2025.112731_b5
  article-title: A hybrid network for ERP detection and analysis based on restricted Boltzmann machine
  publication-title: IEEE Trans. Neural Syst. Rehabil. Eng.
  doi: 10.1109/TNSRE.2018.2803066
– ident: 10.1016/j.asoc.2025.112731_b13
  doi: 10.1109/ICASSP49357.2023.10097183
– volume: 50
  start-page: 3033
  issue: 7
  year: 2020
  ident: 10.1016/j.asoc.2025.112731_b27
  article-title: Making sense of spatio-temporal preserving representations for EEG-based human intention recognition
  publication-title: IEEE Trans. Cybern.
  doi: 10.1109/TCYB.2019.2905157
– volume: 8
  start-page: 93907
  year: 2020
  ident: 10.1016/j.asoc.2025.112731_b28
  article-title: Self-supervised learning from multi-sensor data for sleep recognition
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2020.2994593
– volume: 9
  start-page: 70639
  year: 2021
  ident: 10.1016/j.asoc.2025.112731_b30
  article-title: Complementary photoplethysmogram synthesis from electrocardiogram using generative adversarial network
  publication-title: IEEE Access
  doi: 10.1109/ACCESS.2021.3078534
– volume: 35
  start-page: 857
  issue: 1
  year: 2023
  ident: 10.1016/j.asoc.2025.112731_b8
  article-title: Self-supervised learning: Generative or contrastive
  publication-title: IEEE Trans. Knowl. Data Eng.
– volume: 12
  issue: 4
  year: 2015
  ident: 10.1016/j.asoc.2025.112731_b7
  article-title: Enhancing performances of SSVEP-based brain-computer interfaces via exploiting inter-subject information
  publication-title: J. Neural Eng.
  doi: 10.1088/1741-2560/12/4/046006
– ident: 10.1016/j.asoc.2025.112731_b24
  doi: 10.1007/978-3-031-16437-8_38
– volume: 28
  start-page: 2687
  issue: 5
  year: 2024
  ident: 10.1016/j.asoc.2025.112731_b32
  article-title: MaskCAE: Masked convolutional AutoEncoder via sensor data reconstruction for self-supervised human activity recognition
  publication-title: IEEE J. Biomed. Health Inf.
  doi: 10.1109/JBHI.2024.3373019
– year: 2024
  ident: 10.1016/j.asoc.2025.112731_b14
– volume: 8
  issue: 5
  year: 2019
  ident: 10.1016/j.asoc.2025.112731_b34
  article-title: EEG dataset and OpenBMI toolbox for three BCI paradigms: an investigation into BCI illiteracy
  publication-title: Gigascience
SSID ssj0016928
Score 2.4361985
Snippet Deep learning methods currently represent the state-of-the-art (SOTA) in electroencephalography (EEG) decoding, primarily focusing on the development of...
SourceID crossref
elsevier
SourceType Enrichment Source
Index Database
Publisher
StartPage 112731
SubjectTerms Fractional Fourier transform (frFT)
Masked autoencoder (MAE)
Motor imagery (MI)
Self-supervised learning (SSL)
Steady-state visual evoked potential (SSVEP)
Title EEG generalizable representations learning via masked fractional fourier domain modeling
URI https://dx.doi.org/10.1016/j.asoc.2025.112731
Volume 170
WOSCitedRecordID wos001417868600001&url=https%3A%2F%2Fcvtisr.summon.serialssolutions.com%2F%23%21%2Fsearch%3Fho%3Df%26include.ft.matches%3Dt%26l%3Dnull%26q%3D
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
journalDatabaseRights – providerCode: PRVESC
  databaseName: Elsevier SD Freedom Collection Journals 2021
  issn: 1568-4946
  databaseCode: AIEXJ
  dateStart: 20010601
  customDbUrl:
  isFulltext: true
  dateEnd: 99991231
  titleUrlDefault: https://www.sciencedirect.com
  omitProxy: false
  ssIdentifier: ssj0016928
  providerName: Elsevier
link http://cvtisr.summon.serialssolutions.com/2.0.0/link/0/eLvHCXMwtV1Lb9QwELZgy4ELlJcoLcgHbquskjiO4-OqWkpRVXEosJwiO3ZK-siuNpuqP59xbCfsAhU9cIkiazx5zKfxZDLjD6H3WkSRojoJGA-zIJFUB5lKo6AMlemLhJC9K_n_esJOT7P5nH92f_Cbjk6A1XV2e8uX_9XUMAbGNq2z9zB3rxQG4ByMDkcwOxz_yfCz2ZHhRTa5JlOxdWV4UZZDk1HdeKaI8_FNJcbXormEoLNc2Q6Hrp_R0tipxbWoasuV4xc4v1-ti10bcOJdVXq79hIbSegfrax69E2tg1sMgidV26Goqou2F_umuwKD72Jx3gon69ISMfWVzIMnTcH23OUXvau1JCHOWUKox-wS8JsftymFi4kAiE6M-skgvLlp9tZi1pcY-uq1i9zoyI2O3Op4iHZiRnk2QjvT49n8U__TKeUdFW9_567HypYDbt_Jn-OYX2KTs130xH1U4KkFwzP0QNfP0VNP2IGd_36B5oANvIENvIUN7LGBARvYYgMP2MAOG9hiA3tsvERfPszODj8GjlojKEgYroNSsyiGx4g51bEMS1EUktBUwfd3pJOSKC1EqiihJOQqYSyTUhEapakkBS8jSV6hUb2o9WuEiRRMFgllmpOkLEvOQiFiBuoyknKS7qHIv6i8cPvOG_qTq_zvJtpD437O0u66cqc09e8_d3GjjQdzgNMd897c6yr76PGA8wM0Wq9a_RY9Km7WVbN657D0E-Wyj6Q
linkProvider Elsevier
openUrl ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fsummon.serialssolutions.com&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Ajournal&rft.genre=article&rft.atitle=EEG+generalizable+representations+learning+via+masked+fractional+fourier+domain+modeling&rft.jtitle=Applied+soft+computing&rft.au=Zhang%2C+Shubin&rft.au=An%2C+Dong&rft.au=Liu%2C+Jincun&rft.au=Wei%2C+Yaoguang&rft.date=2025-02-01&rft.issn=1568-4946&rft.volume=170&rft.spage=112731&rft_id=info:doi/10.1016%2Fj.asoc.2025.112731&rft.externalDBID=n%2Fa&rft.externalDocID=10_1016_j_asoc_2025_112731
thumbnail_l http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/lc.gif&issn=1568-4946&client=summon
thumbnail_m http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/mc.gif&issn=1568-4946&client=summon
thumbnail_s http://covers-cdn.summon.serialssolutions.com/index.aspx?isbn=/sc.gif&issn=1568-4946&client=summon