Gram: A Large-Scale General EEG Model for Raw Data Classification and Restoration Tasks


Detailed Bibliography
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (1998), pp. 1-5
Main authors: Li, Ziyi; Zheng, Wei-Long; Lu, Bao-Liang
Format: Conference paper
Language: English
Publication details: IEEE, 06.04.2025
ISSN: 2379-190X
Description
Summary: Drawing insights from Large Language Models, researchers have developed several large-scale Electroencephalogram (EEG) models (LEMs) to learn generalized representations adaptable to various tasks. However, such LEMs remain scarce and neglect the potential of reconstruction tasks. Meanwhile, how to efficiently integrate the temporal and spectral views of EEG data has long been a focal point. In this paper, we propose Gram, a large general EEG model for raw EEG data classification and reconstruction tasks. Gram consists of two stages: 1) the initial stage quantizes raw EEG patches into base classes rich in temporal information; 2) the second stage features a multi-view layer-fusion masked autoencoder that exploits the complex temporal and spectral views of EEG through dual training objectives: a spectral mimic target after the layer-fusion encoder for visible patches, and a base-class classification target after the decoder for masked patches. Pretrained on 7000 hours of EEG data, Gram achieves state-of-the-art performance on three cross-subject classification tasks: event, emotion, and sleep-stage classification. For EEG data restoration, repairing corrupted data with our model significantly improves classification performance compared to using the noisy data directly. The code and pretrained weights are available at https://github.com/iiieeeve/Gram.
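
Since this record reproduces only the abstract, the PyTorch sketch below illustrates one plausible reading of the second-stage dual objective: a layer-fusion encoder whose output feeds a spectral-mimic regression on visible patches, and a decoder whose output feeds a base-class prediction on masked patches. It is a minimal sketch under stated assumptions, not the authors' implementation (see the linked repository for that): the BEiT-style mask tokens, the softmax layer-fusion weights, and every module name and dimension here are hypothetical.

    # Illustrative sketch of the dual-objective pretraining described in the
    # abstract. NOT the authors' implementation: module names, dimensions,
    # the BEiT-style mask tokens, and the softmax layer-fusion scheme are
    # all assumptions made for this example.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LayerFusionEncoder(nn.Module):
        """Transformer encoder whose output is a learned weighted sum of
        every layer's hidden states (one plausible reading of 'layer-fusion')."""
        def __init__(self, dim=256, depth=6, heads=8):
            super().__init__()
            self.layers = nn.ModuleList([
                nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
                for _ in range(depth)
            ])
            self.fusion = nn.Parameter(torch.zeros(depth))  # per-layer fusion weights

        def forward(self, x):
            feats = []
            for layer in self.layers:
                x = layer(x)
                feats.append(x)
            w = torch.softmax(self.fusion, dim=0)
            # fused representation: softmax-weighted sum of all layer outputs
            return sum(wi * f for wi, f in zip(w, feats))

    class GramStage2(nn.Module):
        def __init__(self, dim=256, n_base_classes=8192, spec_dim=128):
            super().__init__()
            self.encoder = LayerFusionEncoder(dim)
            # shallow stand-in decoder; the real depth is unknown from the abstract
            self.decoder = nn.TransformerEncoderLayer(dim, 8, dim * 4, batch_first=True)
            self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.spec_head = nn.Linear(dim, spec_dim)       # spectral mimic head
            self.cls_head = nn.Linear(dim, n_base_classes)  # stage-1 base-class head

        def forward(self, patches, mask, spec_target, base_ids):
            # patches:     (B, N, dim)  embedded raw EEG patches
            # mask:        (B, N) bool, True where a patch is masked out
            # spec_target: (B, N, spec_dim) spectral features per patch (e.g. log power)
            # base_ids:    (B, N) long, stage-1 quantizer codes per patch
            B, N, D = patches.shape
            x = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, N, D), patches)
            enc = self.encoder(x)
            # objective 1: visible patches mimic their spectral features after the encoder
            spec_loss = F.mse_loss(self.spec_head(enc)[~mask], spec_target[~mask])
            # objective 2: masked patches recover their stage-1 base class after the decoder
            dec = self.decoder(enc)
            cls_loss = F.cross_entropy(self.cls_head(dec)[mask], base_ids[mask])
            return spec_loss + cls_loss

    # quick smoke test with random data
    model = GramStage2()
    mask = torch.rand(2, 16) < 0.5
    loss = model(torch.randn(2, 16, 256), mask,
                 torch.randn(2, 16, 128), torch.randint(0, 8192, (2, 16)))
    loss.backward()

Under this reading, the spectral view enters only as a regression target on visible patches, so the encoder is pushed to carry frequency content without a separate spectral input branch, while the masked-patch classification target ties the second stage back to the stage-1 quantizer codes.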
DOI: 10.1109/ICASSP49660.2025.10890831