Wearable Saudi Sign Language Recognition Device Based on Neural Network

Detailed Bibliography
Published in: International Multi-Conference on Systems, Signals, and Devices, pp. 199-208
Main authors: Algahtani, Shouq; Zuhairy, Rana; Almarghalani, Olla; Aljohani, Mawadda; Elmanfaloty, Rania A.
Format: Conference paper
Language: English
Published: IEEE, 22.04.2024
ISSN:2474-0446
Description
Summary: In Saudi Arabia, over 750,000 individuals are deaf or mute, often facing communication barriers due to limited sign language familiarity among the general population. This research introduces a wearable device leveraging neural networks to recognize Saudi Sign Language, aiming to bridge this communication gap. The device processes data from both hands simultaneously, integrating inputs from the non-dominant hand with those from the dominant hand. Three primary phases define the system: data acquisition, machine learning model development, and real-time sign recognition. Data were gathered using electromyography (EMG) and inertial measurement unit (IMU) sensors, facilitated by the Arduino IDE, with two Arduino Nano 33 BLE microcontrollers managing collection and transmission. The dataset, comprising 25 signs from four subjects with each sign repeated 20 times, informed the machine learning process. Using Python, a model was built that combines dual convolutional neural networks for feature extraction from the sensor readings with dense neural network layers for classification. Real-time sign recognition was then realized by deploying the model on the Arduino via TensorFlow Lite. The system achieves a 92% accuracy rate, marking a significant stride toward inclusive communication solutions.
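The abstract describes a dual-branch design: one convolutional feature extractor per hand's sensor stream, with the two feature vectors fused and classified by dense layers over the 25 sign classes. The paper does not publish its exact architecture, so the sketch below is only illustrative: the window length, channel count, filter sizes, and random stand-in weights are all assumptions, and plain NumPy is used in place of the authors' TensorFlow model to show the forward pass.

```python
import numpy as np

WINDOW = 64     # samples per sliding window (assumed)
CHANNELS = 9    # sensor channels per hand, e.g. EMG + 3-axis accel + 3-axis gyro (assumed)
N_CLASSES = 25  # 25 Saudi Sign Language signs (from the abstract)

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels):
    """Valid-mode 1-D convolution over the time axis, followed by ReLU.
    x: (time, channels); kernels: (n_filters, k, channels)."""
    n_filters, k, _ = kernels.shape
    t_out = x.shape[0] - k + 1
    out = np.empty((t_out, n_filters))
    for f in range(n_filters):
        for t in range(t_out):
            out[t, f] = np.sum(x[t:t + k] * kernels[f])
    return np.maximum(out, 0.0)

def branch(x, kernels):
    """One CNN branch: convolution + global average pooling -> feature vector."""
    return conv1d_relu(x, kernels).mean(axis=0)

# Random weights stand in for trained parameters (illustrative only).
k_dom = rng.standard_normal((8, 5, CHANNELS)) * 0.1   # dominant-hand branch filters
k_non = rng.standard_normal((8, 5, CHANNELS)) * 0.1   # non-dominant-hand branch filters
W = rng.standard_normal((16, N_CLASSES)) * 0.1        # dense layer on fused features
b = np.zeros(N_CLASSES)

def predict(dominant, non_dominant):
    """Fuse both hands' CNN features, then classify with a dense softmax layer."""
    feats = np.concatenate([branch(dominant, k_dom), branch(non_dominant, k_non)])
    logits = feats @ W + b
    p = np.exp(logits - logits.max())
    return p / p.sum()

probs = predict(rng.standard_normal((WINDOW, CHANNELS)),
                rng.standard_normal((WINDOW, CHANNELS)))
```

In the deployed system this forward pass would run on-device via TensorFlow Lite, as the abstract notes; the two-branch fusion mirrors the stated integration of non-dominant-hand inputs with dominant-hand inputs before classification.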
DOI:10.1109/SSD61670.2024.10548435