Sleep Classification With Artificial Synthetic Imaging Data Using Convolutional Neural Networks

Detailed bibliography
Published in: IEEE Journal of Biomedical and Health Informatics, Volume 27, Issue 1, pp. 421-432
Main authors: Shi, Lan; Wank, Marianthie; Chen, Yan; Wang, Yibo; Liu, Yachuan; Hector, Emily C.; Song, Peter X.K.
Medium: Journal Article
Language: English
Published: United States, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023
ISSN: 2168-2194, 2168-2208
Description
Summary:
Objective: We propose a new analytic framework, the "Artificial Synthetic Imaging Data (ASID) Workflow," for sleep classification from a wearable device, comprising: 1) the creation of ASID from data collected by a non-invasive wearable device that permits real-time multi-modal physiological monitoring of heart rate (HR), 3-axis accelerometer, electrodermal activity, and skin temperature, denoted "Temporal E4 Data" (TED), and 2) the use of an image classification supervised learning algorithm, the convolutional neural network (CNN), to classify periods of sleep.
Methods: We investigate the ASID Workflow under 6 settings (3 data resolutions × 2 HR scenarios). Competing machine/deep learning classification algorithms, including logistic regression, support vector machine, random forest, k-nearest neighbors, and Long Short-Term Memory, are applied to TED as comparisons, termed the "Competing Workflow."
Results: The ASID Workflow achieves excellent performance, with a mean weighted accuracy across settings of 94.7%, and is superior to the Competing Workflow with both high- and low-resolution data, regardless of the inclusion of the HR modality. This superiority is greatest for low-resolution data without HR. Additionally, the CNN has a relatively low subject-wise test computational cost compared with the competing algorithms.
Conclusion: We demonstrate the utility of creating ASID from multi-modal physiological data and applying a preexisting image classification algorithm to achieve better classification accuracy. We also shed light on the influence of data resolution and the HR modality on the Workflow's performance.
Significance: Applying a CNN to ASID captures both temporal and spatial dependency among physiological variables and modalities through the topological structure of 2D images, which the competing algorithms fail to utilize.
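The abstract does not spell out how the synthetic images are built, so the following Python/PyTorch sketch only illustrates the general idea under stated assumptions: each epoch of the wearable modalities (HR, three accelerometer axes, electrodermal activity, skin temperature) is rasterized into a small 2D image whose rows encode modality and whose columns encode time, and a compact CNN then classifies the image, letting convolutions see both temporal (column-wise) and cross-modality (row-wise) structure. The image layout, window length, and network sizes are illustrative assumptions, not the authors' actual ASID construction.

```python
# Minimal sketch of "multi-modal time series -> 2D image -> CNN" sleep classification.
# All shapes and the rasterization scheme are assumptions for illustration only.
import numpy as np
import torch
import torch.nn as nn

N_MODALITIES = 6   # HR, 3 accelerometer axes, EDA, skin temperature (assumed channel layout)
WINDOW_LEN = 60    # samples per epoch at the chosen data resolution (assumed)
IMAGE_ROWS = 24    # each modality is repeated over a band of rows to form a 2D texture

def epoch_to_image(window: np.ndarray) -> np.ndarray:
    """Turn a (N_MODALITIES, WINDOW_LEN) window into a (IMAGE_ROWS, WINDOW_LEN) image.

    Each modality is min-max scaled to [0, 1] and stamped onto a horizontal band,
    so the image preserves temporal order (columns) and modality identity (rows).
    """
    rows_per_mod = IMAGE_ROWS // N_MODALITIES
    img = np.zeros((IMAGE_ROWS, WINDOW_LEN), dtype=np.float32)
    for m in range(N_MODALITIES):
        x = window[m]
        scaled = (x - x.min()) / (x.max() - x.min() + 1e-8)
        img[m * rows_per_mod:(m + 1) * rows_per_mod, :] = scaled  # broadcast over the band
    return img

class SleepCNN(nn.Module):
    """A small 2D CNN that classifies the synthetic images as sleep vs. wake."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (IMAGE_ROWS // 4) * (WINDOW_LEN // 4), 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits for wake / sleep
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    # Fake batch of 8 epochs, purely to show the tensor shapes end to end.
    windows = np.random.randn(8, N_MODALITIES, WINDOW_LEN)
    images = np.stack([epoch_to_image(w) for w in windows])   # (8, 24, 60)
    batch = torch.from_numpy(images).unsqueeze(1)             # (8, 1, 24, 60)
    logits = SleepCNN()(batch)
    print(logits.shape)  # torch.Size([8, 2])
```

Stacking modalities as image rows is just one simple way to expose both kinds of dependency described in the abstract to a 2D convolution; the paper's actual ASID encoding may differ.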
DOI: 10.1109/JBHI.2022.3210485