A Resource Efficient System for On-Smartwatch Audio Processing

Bibliographic Details
Published in: Proceedings of the Annual International Conference on Mobile Computing and Networking, Vol. 2024, p. 1805
Main Authors: Ahmed, Md Sabbir, Rahman, Arafat, Wang, Zhiyuan, Rucker, Mark, Barnes, Laura E
Format: Journal Article
Language: English
Published: United States, 01.11.2024
Subjects:
ISSN: 1543-5679
Description
Summary: While audio data shows promise in addressing various health challenges, there is a lack of research on on-device audio processing for smartwatches. Privacy concerns make storing raw audio and performing post-hoc analysis undesirable for many users. Additionally, current on-device audio processing systems for smartwatches are limited in their feature extraction capabilities, restricting their potential for understanding user behavior and health. We developed a real-time system for on-device audio processing on smartwatches, which takes an average of 1.78 minutes (SD = 0.07 min) to extract 22 spectral and rhythmic features from a 1-minute audio sample, using a small window size of 25 milliseconds. Using these extracted audio features on a public dataset, we developed and incorporated models into a watch to classify foreground and background speech in real-time. Our Random Forest-based model classifies speech with a balanced accuracy of 80.3%.
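The abstract describes framing audio into 25 ms windows and computing spectral features per frame. A minimal illustrative sketch of that idea (not the authors' implementation) is below: it frames a 1-minute signal into non-overlapping 25 ms windows and computes one example feature, the spectral centroid, per frame. The 16 kHz sample rate, the hop size, and the choice of feature are assumptions for illustration only.

```python
import numpy as np

SR = 16_000                 # assumed sample rate (Hz), not stated in the abstract
FRAME = int(0.025 * SR)     # 25 ms window -> 400 samples, matching the paper's window size
HOP = FRAME                 # assumed non-overlapping windows

def spectral_centroid(frame, sr=SR):
    """Frequency-weighted mean of the magnitude spectrum of one frame."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

def extract(signal):
    """Return one centroid value per 25 ms frame of the input signal."""
    n_frames = (len(signal) - FRAME) // HOP + 1
    return np.array([
        spectral_centroid(signal[i * HOP : i * HOP + FRAME])
        for i in range(n_frames)
    ])

# 1-minute synthetic 1 kHz tone: 60 s at SR yields 2400 non-overlapping frames,
# and the centroid of a pure tone sits near its frequency.
t = np.arange(60 * SR) / SR
feats = extract(np.sin(2 * np.pi * 1000.0 * t))
```

The paper's system extracts 22 such spectral and rhythmic features per window on-watch; this sketch shows only the framing pattern and one feature to make the window arithmetic concrete.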
DOI: 10.1145/3636534.3698866