QCRUFT: Quaternion Context Recognition under Uncertainty using Fusion and Temporal Learning
Saved in:
| Published in: | 2022 IEEE 16th International Conference on Semantic Computing (ICSC), pp. 41-50 |
|---|---|
| Main authors: | , |
| Format: | Conference paper |
| Language: | English |
| Published: | IEEE, 01.01.2022 |
| Subjects: | |
| Online access: | Full text |
| Abstract: | Human Context Recognition (HCR) and Context-Aware (CA) computing on smartphones have received increased research attention recently. HCR is a challenging multi-label classification task because smartphone sensor values for various contexts (defined as <Activity, Phone Placement>) vary across phone models and across the pockets in which users place their phones (proprioception). While realistic, HCR data collected in-the-wild frequently have missing or wrong user-provided context labels or timestamps. In this paper, we propose Quaternion Context Recognition under Uncertainty using Fusion and Temporal Learning (QCRUFT), an end-to-end deep learning HCR framework that integrates several mechanisms for mitigating multiple challenges in in-the-wild HCR data, including sensor signal variability, extreme data imbalance, and noisy context labels. QCRUFT has two main branches: branch one uses a Multi-Layer Perceptron (MLP) to analyze handcrafted features; branch two analyzes raw data using a Convolutional Neural Network (CNN) followed by a Bi-Directional Quaternion Long Short-Term Memory (Bi-QLSTM) model. Initially proposed for speech recognition tasks, Bi-QLSTMs are innovatively adapted by QCRUFT for HCR. The quaternion component captures relationships among strongly correlated spatial features, while the Bi-LSTM component learns temporal relationships in bursts of data. QCRUFT also uses quaternions to correct arbitrary user orientations by rotating the phone back to a universal reference frame. Finally, to mitigate errors in user-supplied context timestamps, QCRUFT incorporates two novel response features, Response Time and Thinking Time, which estimate the quality of user-provided context labels based on participants' delay and the time taken to complete context reports. In rigorous evaluation, QCRUFT achieved 76.4% and 70.6% overall Balanced Accuracy (BA) on two real-world in-the-wild HCR datasets, improving on the best-performing state-of-the-art baselines by 5.2% and 2.6%, respectively. |
|---|---|
| DOI: | 10.1109/ICSC52841.2022.00014 |
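The abstract's orientation-correction step (rotating sensor readings back to a universal reference frame via quaternions) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a unit quaternion in (w, x, y, z) order describing the device-to-world orientation, such as one derived from a phone's rotation-vector sensor, and rotates a 3-axis reading with the standard Hamilton product v' = q v q*.

```python
import numpy as np

def q_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def q_conj(q):
    """Conjugate of a quaternion (inverse, for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate_to_world(q, v):
    """Rotate a 3-axis sensor reading v by unit quaternion q: v' = q v q*."""
    p = np.concatenate(([0.0], v))          # embed vector as pure quaternion
    return q_mul(q_mul(q, p), q_conj(q))[1:]

# Example: a 90-degree rotation about the z-axis maps the x-axis onto the y-axis.
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
v_world = rotate_to_world(q, np.array([1.0, 0.0, 0.0]))
```

In practice a framework like QCRUFT would apply such a rotation per sample (or per burst) so that readings from phones held in arbitrary orientations become comparable in a shared world frame.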