DCAE: A dual conditional autoencoder framework for the reconstruction from EEG into image

Bibliographic Details
Published in: Biomedical Signal Processing and Control, Volume 81, p. 104440
Main Authors: Zeng, Hong; Xia, Nianzhang; Tao, Ming; Pan, Deng; Zheng, Haohao; Wang, Chu; Xu, Feifan; Zakaria, Wael; Dai, Guojun
Format: Journal Article
Language: English
Published: Elsevier Ltd, 1 March 2023
ISSN: 1746-8094, 1746-8108
Description
Summary: How to design a suitable model that extracts the semantic features contained in electroencephalography (EEG) signals and visualizes them as corresponding images, a task known as Reconstruction from EEG to Image (RE2I), plays an important role in promoting EEG-based brain–computer interface (BCI) applications. However, due to the low signal-to-noise ratio (SNR) and the significant individual differences of EEG, it is difficult to extract the semantic features contained in EEG signals effectively, so implementing RE2I remains a major challenge. In this study, we propose a dual conditional convolutional autoencoder (DCAE) framework to tackle this challenge. The DCAE framework consists of two parts: the first extracts and fuses features from both the EEG and its corresponding real images through a multimodal learning method, and the second generates images with the same semantics as the corresponding EEG via the EEG-UNet module. The experimental results show that DCAE outperforms most of the existing state-of-the-art models, which may offer a novel approach to implementing RE2I.
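
The abstract only outlines the two-part architecture, so the following is a minimal, hypothetical PyTorch sketch of what such a dual conditional autoencoder could look like: an EEG encoder and an image encoder whose features are fused during training, and an EEG-conditioned convolutional decoder standing in for the paper's EEG-UNet generation module. All module names, layer sizes, and shapes (e.g. 128 EEG channels, 64x64 RGB images, a 256-dimensional latent) are illustrative assumptions, not the authors' published configuration.

    # Hypothetical sketch of a dual conditional convolutional autoencoder,
    # inferred only from the abstract: one branch encodes EEG, one encodes
    # the paired real image, their features are fused, and an EEG-conditioned
    # decoder (a stand-in for the paper's EEG-UNet module) reconstructs the
    # image. All sizes and names below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class EEGEncoder(nn.Module):
        """1-D convolutional encoder for an EEG epoch (channels x time)."""
        def __init__(self, eeg_channels=128, latent_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(eeg_channels, 64, 7, stride=2, padding=3), nn.ReLU(),
                nn.Conv1d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.proj = nn.Linear(128, latent_dim)

        def forward(self, eeg):                    # eeg: (B, C, T)
            return self.proj(self.net(eeg).squeeze(-1))

    class ImageEncoder(nn.Module):
        """2-D convolutional encoder for the paired real image."""
        def __init__(self, latent_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
                nn.AdaptiveAvgPool2d(1),
            )
            self.proj = nn.Linear(128, latent_dim)

        def forward(self, img):                    # img: (B, 3, 64, 64)
            return self.proj(self.net(img).flatten(1))

    class EEGConditionedDecoder(nn.Module):
        """Maps the fused latent back to an image (EEG-UNet stand-in)."""
        def __init__(self, latent_dim=256):
            super().__init__()
            self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, z):                      # z: (B, latent_dim)
            return self.net(self.fc(z).view(-1, 128, 8, 8))

    class DualConditionalAutoencoder(nn.Module):
        """Fuses EEG and image features at training time; at inference the
        EEG branch alone conditions image generation."""
        def __init__(self, eeg_channels=128, latent_dim=256):
            super().__init__()
            self.eeg_enc = EEGEncoder(eeg_channels, latent_dim)
            self.img_enc = ImageEncoder(latent_dim)
            self.fuse = nn.Linear(2 * latent_dim, latent_dim)
            self.dec = EEGConditionedDecoder(latent_dim)

        def forward(self, eeg, img=None):
            z = self.eeg_enc(eeg)
            if img is not None:                    # training: multimodal fusion
                z = self.fuse(torch.cat([z, self.img_enc(img)], dim=1))
            return self.dec(z)

    # Shape check with random tensors (batch of 4 EEG epochs and paired images).
    model = DualConditionalAutoencoder(eeg_channels=128)
    recon = model(torch.randn(4, 128, 440), torch.randn(4, 3, 64, 64))
    print(recon.shape)                             # torch.Size([4, 3, 64, 64])

In this sketch the image branch acts only as an auxiliary condition during training; at test time the model would be called with the EEG tensor alone, which follows the spirit, though not necessarily the details, of the architecture described in the abstract.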
DOI: 10.1016/j.bspc.2022.104440