Stacked Autoencoders for the P300 Component Detection

Full Description

Bibliographic Details
Published in: Frontiers in Neuroscience, Vol. 11, p. 302
Main authors: Vařeka, Lukáš; Mautner, Pavel
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation, 30.05.2017
Frontiers Media S.A.
Subjects:
ISSN: 1662-4548, 1662-453X
Online Access: Full text
Description
Abstract: Novel neural network training methods (commonly referred to as deep learning) have emerged in recent years. Using a combination of unsupervised pre-training and subsequent fine-tuning, deep neural networks have become one of the most reliable classification methods. Since deep neural networks are especially powerful for high-dimensional and non-linear feature vectors, electroencephalography (EEG) and event-related potentials (ERPs) are among their promising applications. Furthermore, to the best of the authors' knowledge, very few papers have studied deep neural networks for EEG/ERP data. The aim of the experiments presented here was to verify whether deep learning-based models can also perform well for single-trial P300 classification, with possible application to P300-based brain-computer interfaces. The P300 data used were recorded in the EEG/ERP laboratory at the Department of Computer Science and Engineering, University of West Bohemia, and are publicly available. Stacked autoencoders (SAEs) were implemented and compared with some of the most reliable state-of-the-art methods, such as linear discriminant analysis (LDA) and the multi-layer perceptron (MLP). The parameters of the stacked autoencoders were optimized empirically. The layers were inserted one by one, and at the end the last layer was replaced by a supervised softmax classifier. Subsequently, fine-tuning using backpropagation was performed. The architecture of the neural network was 209-130-100-50-20-2. The classifiers were trained on a dataset merged from four subjects and subsequently tested on 11 different subjects without further training. The trained SAE achieved 69.2% accuracy, which was significantly higher (p < 0.01) than the accuracy of the MLP (64.9%) and LDA (65.9%). Its recall of 58.8% was slightly higher than that of the MLP (56.2%) and LDA (58.4%). Therefore, SAEs could be preferable to other state-of-the-art classifiers for high-dimensional event-related potential feature vectors.
Edited by: Patrick Ruther, University of Freiburg, Germany
This article was submitted to Neural Technology, a section of the journal Frontiers in Neuroscience
Reviewed by: Xiaoli Li, Beijing Normal University, China; Quentin Noirhomme, Maastricht University, Netherlands
DOI:10.3389/fnins.2017.00302