Self-Selecting Semi-Supervised Transformer-Attention Convolutional Network for Four Class EEG-Based Motor Imagery Decoding

Bibliographic Details
Published in: Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4636 - 4642
Main Authors: Ng, Han Wei; Guan, Cuntai
Format: Conference paper
Language: English
Published: IEEE, 14.10.2024
ISSN: 2153-0866
Description
Summary: Brain-computer interfaces (BCI) serve as an important tool in areas such as neurorehabilitation and constructing prostheses. Electroencephalogram (EEG) motor imagery (MI) signals are a common means of communication between the human brain and the computer interface. However, differentiating between multiple motor imagery signals may be challenging due to the high noise-to-signal ratio and small dataset sizes. In this study, we propose a variational autoencoder and transformer-attention based convolutional neural network (SSTACNet) for multi-class EEG-based motor imagery classification. The SSTACNet model leverages the variational autoencoder's ability to measure the contrastive distance between two sets of inputs to perform data self-selection. The model further utilizes multi-head self-attention as well as spatial and temporal convolutional filters to achieve superior extraction of signal features. The model additionally utilizes the variational autoencoder's ability to augment the dataset with feature-informed pseudo-data, achieving stronger classification results. The proposed model outperforms the current state-of-the-art techniques on the BCI Competition IV-2a dataset with accuracies of 85.52% and 70.56% in the subject-dependent and subject-independent modes, respectively. Code may be found at: https://github.com/NgHanWei/SSTACNet
DOI: 10.1109/IROS58592.2024.10801654
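The summary above describes a backbone that pairs spatial and temporal convolutional filters with multi-head self-attention. The sketch below is a minimal, hypothetical illustration of that general design, not the authors' SSTACNet implementation (the official code is at the GitHub link above); the class name, layer sizes, and the assumed BCI Competition IV-2a trial layout of 22 channels and 1000 time samples are illustrative assumptions only.

```python
# Hypothetical sketch of a transformer-attention convolutional EEG classifier:
# temporal and spatial convolutions extract local features, then multi-head
# self-attention operates over the pooled temporal tokens. Not the SSTACNet code.
import torch
import torch.nn as nn


class ConvAttentionEEGNet(nn.Module):
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4,
                 n_filters=40, n_heads=4):
        super().__init__()
        # Temporal convolution along the time axis, then a spatial convolution
        # that collapses the electrode axis (shallow ConvNet-style front end).
        self.conv = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size=(1, 25), padding=(0, 12)),
            nn.Conv2d(n_filters, n_filters, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(n_filters),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
            nn.Dropout(0.5),
        )
        # Multi-head self-attention across the remaining time steps.
        self.attn = nn.MultiheadAttention(embed_dim=n_filters,
                                          num_heads=n_heads, batch_first=True)
        self.norm = nn.LayerNorm(n_filters)
        # Infer the number of temporal tokens from a dummy forward pass.
        with torch.no_grad():
            n_tokens = self.conv(torch.zeros(1, 1, n_channels, n_samples)).shape[-1]
        self.head = nn.Linear(n_filters * n_tokens, n_classes)

    def forward(self, x):                    # x: (batch, channels, samples)
        x = self.conv(x.unsqueeze(1))        # -> (batch, filters, 1, tokens)
        x = x.squeeze(2).transpose(1, 2)     # -> (batch, tokens, filters)
        attn_out, _ = self.attn(x, x, x)     # self-attention over time tokens
        x = self.norm(x + attn_out)          # residual connection + layer norm
        return self.head(x.flatten(1))       # -> (batch, n_classes)


if __name__ == "__main__":
    model = ConvAttentionEEGNet()
    logits = model(torch.randn(8, 22, 1000))  # 8 mock EEG trials
    print(logits.shape)                       # torch.Size([8, 4])
```

The variational-autoencoder-based self-selection and pseudo-data augmentation described in the summary are omitted here; refer to the linked repository for the full method.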