EEG generalizable representations learning via masked fractional Fourier domain modeling
| Published in: | Applied Soft Computing, Vol. 170, p. 112731 |
|---|---|
| Main authors: | , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier B.V., 01.02.2025 |
| ISSN: | 1568-4946 |
| Online access: | Get full text |
| Summary: | Deep learning methods currently represent the state-of-the-art (SOTA) in electroencephalography (EEG) decoding, primarily focusing on the development of supervised models. However, most supervised methods are task-specific and lack the ability to generate generalizable latent features for use across different BCI paradigms. Additionally, as subjects engage in diverse brain–computer interaction tasks, the distribution of recorded EEG data varies according to the specific cognitive paradigms involved. The process of data collection and model training for each task is time-consuming. One potential solution is to construct a pre-trained model capable of transferring knowledge across various tasks. To improve the generalization ability of pre-trained models, we propose a novel masked autoencoder based on fractional Fourier domain reconstruction, denoted as Masked Fractional Fourier Domain Modeling (MFrFM), for learning generalizable time–frequency features. We systematically explore the effects of different degradation methods used within the denoising autoencoder to enhance the robustness of the pre-training model. Moreover, we examine the impact of various masking strategies on model performance. Our experiments demonstrate that the pre-trained MFrFM can effectively capture generalizable representations. Additionally, we conduct a comprehensive evaluation of fine-tuning performance through both cross-task and intra-task experiments. The experimental results show that MFrFM achieves a maximum accuracy of 98.09% in transferring from MI to SSVEP, and 79.76% in transferring from SSVEP to MI. The code is available at https://github.com/zshubin/MFrFM-for-cross-task-EEG-pre-training. |
|---|---|
| Highlights: | • An EEG generalizable representation learning model for cross-task transfer. • A pre-training model based on masked fractional Fourier domain modeling. • A specific masking strategy for EEG reconstruction-based pre-training. • A denoising mechanism based on various degradation methods. |
| DOI: | 10.1016/j.asoc.2025.112731 |
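The abstract describes the general recipe of masked modeling with a transform-domain reconstruction target: random time patches of an EEG segment are masked, and the model is trained to reconstruct the signal so that its transform-domain coefficients match those of the original. The minimal numpy sketch below illustrates only that generic idea, not the authors' implementation; the function names are hypothetical, and the ordinary FFT (`np.fft.rfft`) is used as a stand-in for the fractional Fourier transform, which numpy does not provide.

```python
import numpy as np

def mask_patches(x, patch_len, mask_ratio, rng):
    """Zero out a random subset of fixed-length time patches (all channels).

    x has shape (channels, time); time must be divisible by patch_len.
    Returns the masked copy and a boolean per-patch mask (True = masked).
    """
    n_ch, n_t = x.shape
    n_patch = n_t // patch_len
    n_masked = int(round(mask_ratio * n_patch))
    mask = np.zeros(n_patch, dtype=bool)
    mask[rng.choice(n_patch, size=n_masked, replace=False)] = True
    x_masked = x.copy()
    for p in np.flatnonzero(mask):
        x_masked[:, p * patch_len:(p + 1) * patch_len] = 0.0
    return x_masked, mask

def transform_domain_loss(pred, target):
    """MSE between transform-domain coefficients of prediction and target.

    np.fft.rfft stands in for the fractional Fourier transform used in the
    paper; a faithful MFrFM implementation would substitute an FrFT here.
    """
    p_coeff = np.fft.rfft(pred, axis=-1)
    t_coeff = np.fft.rfft(target, axis=-1)
    return float(np.mean(np.abs(p_coeff - t_coeff) ** 2))

# Toy usage: an 8-channel, 256-sample EEG segment, 8 patches of 32 samples,
# half of them masked. A real model would decode `masked` back toward `eeg`;
# here an identity "decoder" just shows the loss is driven by masked patches.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))
masked, mask = mask_patches(eeg, patch_len=32, mask_ratio=0.5, rng=rng)
loss = transform_domain_loss(masked, eeg)
```

Computing the loss in a transform domain rather than on raw samples is what pushes the encoder toward time–frequency structure; the fractional order of the FrFT would add a tunable rotation between the time and frequency axes.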