Enhancing Brazilian Sign Language Recognition Through Skeleton Image Representation

Bibliographic Details
Published in: Proceedings - Brazilian Symposium on Computer Graphics and Image Processing, pp. 1-6
Main Authors: Alves, Carlos Eduardo G. R.; De A. Boldt, Francisco; Paixao, Thiago M.
Format: Conference Proceeding
Language: English
Published: IEEE, 30.09.2024
ISSN: 2377-5416
Description
Summary: Effective communication is paramount for the inclusion of deaf individuals in society. However, persistent communication barriers due to limited Sign Language (SL) knowledge hinder their full participation. In this context, Sign Language Recognition (SLR) systems have been developed to improve communication between signing and non-signing individuals. In particular, the problem of recognizing isolated signs (Isolated Sign Language Recognition, ISLR) is of great relevance to the development of vision-based SL search engines, learning tools, and translation systems. This work proposes an ISLR approach in which body, hand, and facial landmarks are extracted over time and encoded as 2-D images. These images are processed by a convolutional neural network, which maps the visual-temporal information to a sign label. Experimental results demonstrate that our method surpasses the state of the art in terms of performance metrics on two widely recognized datasets in Brazilian Sign Language (LIBRAS), the primary focus of this study. In addition to being more accurate, our method is more time-efficient and easier to train, owing to its simpler network architecture and its reliance solely on RGB data as input. Source code and pre-trained models are publicly available at https://github.com/Dudu197/sign-language-recognition.
DOI: 10.1109/SIBGRAPI62404.2024.10716301
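
To make the encoding idea in the summary concrete, below is a minimal, illustrative sketch in Python (assuming NumPy and PyTorch) of how per-frame landmarks could be stacked into a 2-D "skeleton image" and classified by a small CNN. The function names, landmark counts, tensor shapes, and network layers are assumptions chosen for readability; they do not reproduce the authors' exact architecture or the code in the linked repository.

    # Illustrative sketch (not the authors' exact pipeline): landmarks captured per
    # frame are stacked into a 2-D "skeleton image" whose rows are frames, columns
    # are landmark indices, and channels hold the (x, y, z) coordinates. A small
    # CNN then maps that image to a sign label.
    import numpy as np
    import torch
    import torch.nn as nn

    NUM_FRAMES = 64      # temporal window (assumption)
    NUM_LANDMARKS = 75   # e.g., pose + both hands (assumption)
    NUM_CLASSES = 20     # size of the sign vocabulary (assumption)

    def encode_landmarks_to_image(landmarks: np.ndarray) -> torch.Tensor:
        """Encode a (frames, landmarks, 3) array of coordinates as a
        3-channel image tensor of shape (3, frames, landmarks)."""
        # Min-max normalize so the CNN sees a consistent value range
        # regardless of signer position or scale.
        mins = landmarks.min(axis=(0, 1), keepdims=True)
        maxs = landmarks.max(axis=(0, 1), keepdims=True)
        norm = (landmarks - mins) / (maxs - mins + 1e-8)
        return torch.from_numpy(norm).permute(2, 0, 1).float()

    class SignCNN(nn.Module):
        """Compact CNN that maps a skeleton image to sign-class logits."""

        def __init__(self, num_classes: int = NUM_CLASSES):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.classifier = nn.Linear(64 * 4 * 4, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)
            return self.classifier(x.flatten(1))

    if __name__ == "__main__":
        # Random values stand in for MediaPipe-style landmarks from a video clip.
        fake_landmarks = np.random.rand(NUM_FRAMES, NUM_LANDMARKS, 3)
        image = encode_landmarks_to_image(fake_landmarks).unsqueeze(0)  # add batch dim
        logits = SignCNN()(image)
        print(logits.shape)  # torch.Size([1, 20])

Encoding the temporal axis as image rows and the landmark index as image columns lets an ordinary 2-D CNN capture both spatial and temporal structure in a single pass, without recurrent layers and without processing raw video frames.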