Improving Sensor-Based Affect Detection with Multimodal Data Imputation

Bibliographic Details
Published in: International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII), pp. 669-675
Authors: Henderson, Nathan; Emerson, Andrew; Rowe, Jonathan; Lester, James
Format: Conference Paper
Language: English
Published: IEEE, 01.09.2019
Subjects:
ISSN: 2156-8111
DOI: 10.1109/ACII.2019.8925538
Online Access: Full Text
Description
Summary: Utilizing sensors for affect detection in adaptive learning technologies has been the subject of growing interest in recent years. This extends to the collection of multiple concurrent sensor-based input channels to enable multimodal affective modeling. However, sensors pose significant challenges to affect detection, including sensor connectivity issues, background noise, inconsistent data logging, and loss of data due to hardware failure. In this paper, we introduce a framework for multimodal data imputation to improve automated detection of student affect in adaptive learning technologies. Through the use of an autoencoder neural network trained on Microsoft Kinect-based posture data and electrodermal activity data with synthetic noise injection, we approximate missing values within the original dataset while still preserving the inter-related context between features when reconstructing the dataset. The reconstructed dataset can be used in conjunction with multimodal data fusion techniques to further boost affect detector accuracy. Results indicate that this framework improves the effectiveness of multimodal affect detectors when compared to unimodal baseline models, as well as models using baseline data imputation techniques such as mean imputation. Further, it maintains cross-modality information that influences the multimodal affect detectors' performance, as the approach also outperforms previous work using the latent representation of the imputed dataset as training data instead of a complete reconstruction of the original dataset's dimensionality.
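
The abstract does not include implementation details, but its core idea (a denoising autoencoder that learns to reconstruct clean multimodal feature vectors from inputs with synthetically injected missingness, then restores the original dimensionality rather than stopping at the latent representation) can be sketched as follows. This is a minimal illustration assuming PyTorch and concatenated posture + EDA feature vectors; the class name, layer sizes, latent dimension, and dropout rate are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class ImputationAutoencoder(nn.Module):
    """Denoising autoencoder over concatenated posture + EDA feature vectors.

    Hypothetical architecture: layer sizes and latent dimension are
    illustrative, not the authors' configuration.
    """
    def __init__(self, n_features: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, latent_dim), nn.ReLU(),
        )
        # The decoder restores the original dimensionality, so downstream
        # affect detectors can train on the full reconstructed feature set
        # instead of the latent representation.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, clean_batch, drop_prob=0.2):
    """One training step: inject synthetic missingness by zeroing random
    features, then reconstruct the clean batch under MSE loss."""
    mask = (torch.rand_like(clean_batch) > drop_prob).float()
    noisy = clean_batch * mask  # simulated sensor dropout / logging gaps
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

def impute(model, batch, missing_mask):
    """Fill missing entries (missing_mask == True) with the autoencoder's
    reconstruction while leaving observed values untouched."""
    with torch.no_grad():
        recon = model(torch.nan_to_num(batch))  # zero NaNs before encoding
    return torch.where(missing_mask, recon, batch)
```

At imputation time, only the masked entries take the autoencoder's reconstruction; observed values pass through unchanged. Because each reconstructed value is a function of all input features, this is what lets the approach preserve cross-modality context, in contrast to per-feature mean imputation, which fills each missing value independently of the other channels.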