Enhancing User Interaction: Gesture-Enabled Virtual Cursor with Voice Integration

Full Description

Saved in:
Bibliographic Details
Published in: 2024 4th International Conference on Pervasive Computing and Social Networking (ICPCSN), pp. 279-286
Main Authors: Devi, V. Anjana; E, Jahnavi; R, Kavipriya
Format: Conference Proceedings
Language: English
Published: IEEE, 03.05.2024
Subjects:
Online Access: Full text
Description
Summary: The research article proposes a novel approach combining voice control with gesture-enabled virtual mouse technology to enhance human-computer interaction. Using this technology, which applies computer vision techniques to identify hand landmarks, users can control cursor motion with gestures. However, several problems with current methods, such as snapshot algorithms, cloud-based voice assistants, and hand contour extraction, restrict their effectiveness. Cursor control accuracy may suffer when user gestures are hard to interpret precisely, due to the intricacy of gesture mapping and a restricted gesture vocabulary. Furthermore, the voice assistant component of the system is not robust enough to handle a variety of speech instructions and dynamic voice commands, and it also depends on an internet connection. Additionally, error handling capabilities are limited, which can degrade the user experience when commands are not executed precisely. Notwithstanding these difficulties, the proposed system incorporates voice assistant features beyond mouse control, enabling users to employ mouse gestures and natural language instructions to perform a variety of tasks, including sending emails, opening apps, executing mouse functions, searching the web, and managing files. To fully realize the potential of this integrated voice and gesture-controlled system in improving user experience and accessibility, it will be imperative to address these issues through an enhanced gesture vocabulary, expanded gesture recognition algorithms, and improved voice command processing.
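The abstract only summarizes the gesture-to-cursor idea; the paper's actual implementation is not reproduced in this record. As a rough illustration of one step such systems typically need, hand-landmark detectors (e.g. MediaPipe Hands) report landmark positions as normalized x, y values in [0, 1], which must be scaled to screen pixels and smoothed to reduce jitter. The sketch below is hypothetical: the function names, screen resolution, and smoothing factor are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: map a normalized hand-landmark position to screen
# pixels, with exponential smoothing to stabilize the virtual cursor.
# SCREEN_W/SCREEN_H and alpha are assumed values, not from the paper.

SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution


def to_screen(nx: float, ny: float) -> tuple[int, int]:
    """Scale a normalized (0-1) landmark coordinate to pixel coordinates."""
    x = min(max(nx, 0.0), 1.0) * (SCREEN_W - 1)   # clamp, then scale
    y = min(max(ny, 0.0), 1.0) * (SCREEN_H - 1)
    return round(x), round(y)


def smooth(prev: tuple[float, float], new: tuple[float, float],
           alpha: float = 0.3) -> tuple[float, float]:
    """Exponential moving average: smaller alpha = smoother but laggier."""
    return (prev[0] + alpha * (new[0] - prev[0]),
            prev[1] + alpha * (new[1] - prev[1]))
```

In a full pipeline, the smoothed pixel position would then be passed to an OS-level cursor API (for example, PyAutoGUI's `moveTo`) each frame; clamping keeps landmarks that drift slightly outside the camera frame from throwing the cursor off-screen.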
DOI:10.1109/ICPCSN62568.2024.00052