EEG-Transformer: Self-attention from Transformer Architecture for Decoding EEG of Imagined Speech
| Published in: | The ... International Winter Conference on Brain-Computer Interface, pp. 1-4 |
|---|---|
| Main authors: | Lee, Young-Eun; Lee, Seo-Hyun |
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 21.02.2022 |
| Topics: | attention module; brain-computer interface; communication systems; deep learning; electroencephalography; hardware; imagined speech; speech recognition; statistical analysis; transformer |
| ISSN: | 2572-7672 |
| Online access: | Get full text |
| Abstract | Transformers are groundbreaking architectures that have changed the course of deep learning, and many high-performance models are being developed based on the transformer architecture. Transformers are implemented with attention mechanisms alone, in an encoder-decoder structure following seq2seq, without using RNNs, yet they achieve better performance than RNNs. Herein, we investigate a decoding technique for electroencephalography (EEG) recorded during imagined speech and overt speech, built on the self-attention module from the transformer architecture. We performed classification for nine subjects using a convolutional neural network based on EEGNet, which captures temporal-spectral-spatial features from EEG of imagined speech and overt speech. Furthermore, we applied the self-attention module to EEG decoding to improve performance and reduce the number of parameters. Our results demonstrate the feasibility of decoding brain activity during imagined speech and overt speech using attention modules. In addition, only single-channel EEG or ear-EEG can be used to decode imagined speech for practical BCIs. |
|---|---|
| Author | Lee, Young-Eun; Lee, Seo-Hyun |
| Author details | Lee, Young-Eun (ye_lee@korea.ac.kr), Korea University, Dept. Brain and Cognitive Engineering, Seoul, Republic of Korea; Lee, Seo-Hyun (seohyunlee@korea.ac.kr), Korea University, Dept. Brain and Cognitive Engineering, Seoul, Republic of Korea |
| ContentType | Conference Proceeding |
| DOI | 10.1109/BCI53720.2022.9735124 |
| Discipline | Anatomy & Physiology |
| EISBN | 9781665413374; 1665413379 |
| EISSN | 2572-7672 |
| EndPage | 4 |
| ExternalDocumentID | 9735124 |
| Genre | orig-research |
| Funding | Interface (funder ID: 10.13039/100004712) |
| ISICitedReferencesCount | 34 |
| Language | English |
| PageCount | 4 |
| PublicationDate | 2022-Feb.-21 |
| PublicationTitle | The ... International Winter Conference on Brain-Computer Interface |
| PublicationTitleAbbrev | BCI |
| PublicationYear | 2022 |
| Publisher | IEEE |
| StartPage | 1 |
| SubjectTerms | attention module; brain-computer interface; Communication systems; Deep learning; Electroencephalography; Hardware; imagined speech; Speech recognition; Statistical analysis; transformer; Transformers |
| Title | EEG-Transformer: Self-attention from Transformer Architecture for Decoding EEG of Imagined Speech |
| URI | https://ieeexplore.ieee.org/document/9735124 |
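
The abstract above describes the paper's architecture only at a high level: an EEGNet-based convolutional network that extracts temporal-spectral-spatial features, followed by a transformer-style self-attention module. The paper's actual code is not part of this record, so the following is a minimal, hypothetical PyTorch sketch of that idea; the class name, layer sizes, EEG channel count, sample length, and 13-way class count are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch, NOT the authors' released code: an EEGNet-style
# convolutional front end followed by one transformer-style self-attention
# block, as outlined in the abstract. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class EEGSelfAttentionNet(nn.Module):
    def __init__(self, n_channels=64, n_classes=13, d_model=16, n_heads=4):
        super().__init__()
        # Temporal convolution: learns frequency-selective (spectral) filters.
        self.temporal = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(d_model),
        )
        # Depthwise spatial convolution across electrodes (EEGNet-style).
        self.spatial = nn.Sequential(
            nn.Conv2d(d_model, d_model, kernel_size=(n_channels, 1),
                      groups=d_model, bias=False),
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        # Transformer-style self-attention over the pooled time axis.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.classify = nn.Linear(d_model, n_classes)

    def forward(self, x):                        # x: (batch, 1, channels, samples)
        h = self.spatial(self.temporal(x))       # (batch, d_model, 1, time)
        h = h.squeeze(2).transpose(1, 2)         # (batch, time, d_model)
        a, _ = self.attn(h, h, h)                # self-attention: Q = K = V = h
        h = self.norm(h + a)                     # residual connection + layer norm
        return self.classify(h.mean(dim=1))      # average over time, then classify

x = torch.randn(8, 1, 64, 500)                   # dummy batch: 64-channel EEG epochs
print(EEGSelfAttentionNet()(x).shape)            # torch.Size([8, 13])
```

The design point the abstract emphasizes is that attention replaces recurrence: every time step of the convolutional feature sequence attends to every other step in a single parallel operation, which is how such a module can improve decoding performance while keeping the parameter count low.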