EEG-Transformer: Self-attention from Transformer Architecture for Decoding EEG of Imagined Speech

Transformers are groundbreaking architectures that have changed the course of deep learning, and many high-performance models have been developed on top of the transformer architecture. Transformers are implemented with attention alone, in an encoder-decoder structure that follows seq2seq without using RNNs, yet achieved better...
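For readers unfamiliar with the mechanism the title refers to, the following is a minimal sketch of scaled dot-product self-attention, the attention-only operation the abstract contrasts with RNNs. The NumPy implementation, the function name self_attention, and the shapes (128 EEG time steps, 64 features per step) are illustrative assumptions, not code from the paper.

    import numpy as np

    def self_attention(x, w_q, w_k, w_v):
        # x: (seq_len, d_model) input sequence, e.g. EEG time steps (assumed shapes).
        # w_q, w_k, w_v: (d_model, d_k) projection matrices.
        q, k, v = x @ w_q, x @ w_k, x @ w_v            # project input to queries/keys/values
        scores = q @ k.T / np.sqrt(k.shape[-1])        # similarity of every step with every other step
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
        return weights @ v                             # attention-weighted sum of values

    # Hypothetical sizes: 128 time steps, 64 input features, 32-dim projections.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((128, 64))
    w = [rng.standard_normal((64, 32)) / 8 for _ in range(3)]
    out = self_attention(x, *w)  # (128, 32): each step now mixes information from all steps

Unlike an RNN, which consumes the sequence step by step, every output row here depends on all input steps in a single matrix operation, which is the property the abstract highlights.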


Bibliographic Details
Published in: The ... International Winter Conference on Brain-Computer Interface, pp. 1-4
Main Authors: Lee, Young-Eun; Lee, Seo-Hyun
Format: Conference Proceeding
Language: English
Published: IEEE, 21.02.2022
ISSN: 2572-7672