CNN based Sign Language Recognition System with Multi-format Output

Bibliographic Details
Published in: 2023 5th International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), pp. 767-771
Authors: Pandey, Harshit; Ahmed, Amaan; Kumar, Tushar; Singh, Vaibhav Kumar; Dutta, Lipika; Yadav, Pinki
Format: Conference paper
Language: English
Published: IEEE, 15 December 2023
Subjects:
Online access: Full text
Description
Abstract: Despite being one of the oldest and most natural forms of communication, sign language is challenging to comprehend because very few people are conversant in it. In this article, a real-time method for recognizing American Sign Language fingerspelling using neural networks is presented. The purpose is to recognize human hand gestures from a camera image. Hand position and orientation are used to obtain the training and testing data for the CNN after several image processing techniques are applied. The hand is first passed through a filter and then through a classifier, which determines which class the hand movement belongs to. The calibrated images are then used to train the CNN. Further, various performance parameters such as accuracy, precision, recall, and F1 score are calculated to analyze the proposed model. Using the DenseNet-169 CNN model, the proposed system showed excellent performance with an accuracy of 99%. Moreover, the output is produced in both text and audio format for ease of communication.
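
The abstract describes a pipeline of hand filtering, DenseNet-169 classification, metric evaluation, and text/audio output. The sketch below shows how such a pipeline could be assembled; it is not the authors' implementation. The choice of TensorFlow/Keras, OpenCV, scikit-learn, and pyttsx3, the 224x224 input size, the YCrCb skin mask, and the 26 fingerspelling classes are all assumptions made for illustration only.

import string

import cv2
import numpy as np
import pyttsx3
import tensorflow as tf
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

CLASS_NAMES = list(string.ascii_uppercase)  # assumed: 26 ASL fingerspelling letters
IMG_SIZE = (224, 224)                       # assumed DenseNet-169 input resolution


def filter_hand(frame_bgr: np.ndarray) -> np.ndarray:
    """Isolate the hand region with a simple skin-color mask (assumed filter)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 135, 85], dtype=np.uint8)
    upper = np.array([255, 180, 135], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    hand = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
    hand = cv2.cvtColor(cv2.resize(hand, IMG_SIZE), cv2.COLOR_BGR2RGB)
    return tf.keras.applications.densenet.preprocess_input(hand.astype(np.float32))


def build_model(num_classes: int = len(CLASS_NAMES)) -> tf.keras.Model:
    """DenseNet-169 backbone with a small softmax classification head."""
    base = tf.keras.applications.DenseNet169(
        weights="imagenet", include_top=False, input_shape=(*IMG_SIZE, 3)
    )
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model


def report_metrics(y_true, y_pred) -> None:
    """Accuracy, precision, recall, and F1 score, as listed in the abstract."""
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0
    )
    print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f} f1={f1:.4f}")


def speak(text: str) -> None:
    """Audio output of the recognized letters via a text-to-speech engine."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

In use, each camera frame would be passed through filter_hand, classified by the model into one of CLASS_NAMES, and the resulting letter would be both displayed as text and spoken via speak, mirroring the multi-format output the paper describes.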
DOI: 10.1109/ICAC3N60023.2023.10541724