Abstract:
Navigating outdoor environments poses substantial challenges for blind and visually impaired people, limiting
their ability to move independently and safely. This thesis presents a novel AI-based system designed to enhance
the mobility of visually impaired users by providing real-time object detection and depth sensing. The system
employs deep learning, specifically the YOLOv8 object detection algorithm, runs on a Raspberry Pi 4 embedded
platform, and is integrated with a 3D camera to assess the spatial proximity of detected objects.
The custom WOTR (Walk on the Road) dataset developed for this project is tailored to the needs of visually
impaired individuals and supports high accuracy in object detection and depth estimation. The system delivers
real-time audio feedback, offering practical guidance for assistive navigation in uncontrolled outdoor environments.
Comprehensive testing across varied outdoor settings demonstrates the system's effectiveness in detecting objects,
estimating their depth, and providing timely feedback. The portability and low cost of the Raspberry
Pi 4 make this solution accessible to a wide audience, potentially improving the quality of life for visually
impaired individuals by enabling safer and more confident navigation. This work advances the field of assistive
technologies, offering a practical tool that empowers blind and visually impaired individuals to navigate outdoor
spaces with greater ease and independence.