Development of a Fully Autonomous Offline Assistive System for Visually Impaired Individuals: A Privacy-First Approach.

Saved in:
Bibliographic Details
Title: Development of a Fully Autonomous Offline Assistive System for Visually Impaired Individuals: A Privacy-First Approach.
Authors: Mekonnen, Fitsum Yebeka, Al Bataineh, Mohammad F., Abu Abdoun, Dana, Serag, Ahmed, Tamiru, Kena Teshale, Abula, Winner, Darota, Simon
Source: Sensors (14248220); Oct 2025, Vol. 25 Issue 19, p6006, 22p
Subject Terms: PEOPLE with visual disabilities, ASSISTIVE technology, PRIVACY, INTERACTIVE computer systems, ELECTRONIC data processing, SINGLE-board computers, AUTONOMOUS robots
Abstract: Visual impairment affects millions worldwide, creating significant barriers to environmental interaction and independence. Existing assistive technologies often rely on cloud-based processing, raising privacy concerns and limiting accessibility in resource-constrained environments. This paper explores the integration and potential of open-source AI models in developing a fully offline assistive system that can be locally set up and operated to support visually impaired individuals. Built on a Raspberry Pi 5, the system combines real-time object detection (YOLOv8), optical character recognition (Tesseract), face recognition with voice-guided registration, and offline voice command control (VOSK), delivering hands-free multimodal interaction without dependence on cloud infrastructure. Audio feedback is generated using Piper for real-time environmental awareness. Designed to prioritize user privacy, low latency, and affordability, the platform demonstrates that effective assistive functionality can be achieved using only open-source tools on low-power edge hardware. Evaluation results in controlled conditions show 75–90% detection and recognition accuracies, with sub-second response times, confirming the feasibility of deploying such systems in privacy-sensitive or resource-constrained environments. [ABSTRACT FROM AUTHOR]
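The abstract describes hands-free multimodal interaction in which offline voice commands (recognized with VOSK) trigger the object-detection, OCR, and face-recognition modules. A minimal sketch of such command routing is shown below; all function names, command phrases, and return strings are hypothetical placeholders, not taken from the paper:

```python
# Hypothetical command router for an offline assistive pipeline.
# Each handler stands in for a real module call (YOLOv8 inference,
# Tesseract OCR, face-recognition lookup); outputs are illustrative.

def detect_objects() -> str:
    # Placeholder for a YOLOv8 inference call (e.g. via the ultralytics package).
    return "detected: chair, door"

def read_text() -> str:
    # Placeholder for a Tesseract OCR call (e.g. via pytesseract).
    return "read: EXIT"

def recognize_face() -> str:
    # Placeholder for a face-recognition lookup against registered faces.
    return "recognized: registered user"

# Map of spoken phrases (as a VOSK transcript might emit them) to handlers.
COMMANDS = {
    "what is around me": detect_objects,
    "read this": read_text,
    "who is this": recognize_face,
}

def dispatch(transcript: str) -> str:
    """Route a recognized transcript to the matching module, or report no match."""
    handler = COMMANDS.get(transcript.strip().lower())
    return handler() if handler else "command not recognized"
```

In such a design, the dispatcher's output string would be passed to the Piper text-to-speech engine for audio feedback, keeping the entire loop on-device.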
Copyright of Sensors (14248220) is the property of MDPI and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
ISSN:14248220
DOI:10.3390/s25196006