Fatigue driving detection method based on Time-Space-Frequency features of multimodal signals

Bibliographic Details
Published in: Biomedical Signal Processing and Control, Vol. 84, p. 104744
Main Authors: Shi, Jinxuan; Wang, Kun
Format: Journal Article
Language:English
Published: Elsevier Ltd, 01.07.2023
ISSN:1746-8094, 1746-8108
Description
Summary:
•Fatigue driving detection using EEG and EOG multimodality feature fusion.
•A fatigue detection model combining a convolutional autoencoder and recurrent neural networks.
•Making full use of Time-Space-Frequency multi-domain features to improve detection performance.
•The test results take the mean and standard deviation of the RMSE and COR over 23 individual subjects.
•Comparing the effects of different EEG and EOG features on model performance.
Fatigue detection for drivers in public transportation is crucial. To effectively detect a driver's fatigue state, we investigated deep learning-based fatigue detection and propose a multimodality signal fatigue detection method. In the proposed method, a convolutional autoencoder (CAE) is used to fuse electroencephalogram (EEG) and electrooculography (EOG) signal features, and a convolutional neural network (CNN) is used to preserve spatial locality. The fused features are then input into a recurrent neural network (RNN) for fatigue recognition. We tested the proposed algorithmic framework on the SEED-VIG dataset and evaluated it with two statistical indicators, root mean square error (RMSE) and correlation coefficient (COR), achieving mean RMSE/COR of 0.10/0.93 and 0.11/0.88 on single-modality EOG and EEG features, respectively, with performance improving to 0.08/0.96 on multimodality features. In addition, this paper analyzes the effect of different signal features on recognition results, and the comparison shows that the model performs better with multimodality features than with single-modality features. The experimental results show that the proposed algorithm framework outperforms other recognition algorithms, demonstrating its effectiveness for fatigue driving detection.
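The sketch below illustrates, under stated assumptions, the kind of pipeline the abstract describes: a convolutional autoencoder fusing EEG and EOG feature maps, a recurrent network regressing a continuous fatigue/vigilance score, and RMSE/COR as evaluation metrics. PyTorch, the layer sizes, the GRU cell, and all input shapes are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch (PyTorch assumed) of a CAE-fusion + RNN fatigue regressor.
# All layer sizes, the GRU choice, and the feature shapes are assumptions.
import torch
import torch.nn as nn

class CAEFusion(nn.Module):
    """Convolutional autoencoder that compresses concatenated EEG+EOG features."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, in_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)           # fused latent feature map
        recon = self.decoder(z)       # reconstruction for the autoencoder loss
        return z, recon

class FatigueRNN(nn.Module):
    """GRU regressor over the fused feature sequence (GRU is an assumption)."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, z_seq):
        out, _ = self.rnn(z_seq)      # (batch, time, hidden)
        return self.head(out[:, -1])  # score predicted from the last time step

def rmse_cor(pred: torch.Tensor, target: torch.Tensor):
    """Root mean square error and Pearson correlation, the two reported metrics."""
    rmse = torch.sqrt(torch.mean((pred - target) ** 2))
    p, t = pred - pred.mean(), target - target.mean()
    cor = (p * t).sum() / (p.norm() * t.norm() + 1e-8)
    return rmse.item(), cor.item()

# Toy usage with hypothetical shapes: 25 EEG/EOG feature channels, 16 time frames.
x = torch.randn(8, 25, 16)                      # (batch, channels, frames)
cae = CAEFusion(in_channels=25)
z, recon = cae(x)                               # z: (8, 16, 16)
rnn = FatigueRNN(feat_dim=z.shape[1])
score = rnn(z.permute(0, 2, 1))                 # treat frames as the time axis
print(rmse_cor(score.squeeze(1), torch.rand(8)))
```

In this sketch the autoencoder reconstruction loss would be trained jointly with the regression loss; the paper's actual training procedure and Time-Space-Frequency feature extraction are not reproduced here.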
DOI:10.1016/j.bspc.2023.104744