Overview
VisionHat translates sight into sound. A camera embedded in a hat captures the wearer's field of view, while onboard AI detects surrounding objects and delivers instant audio descriptions, giving visually impaired users a new way to navigate the world.

Workflow
Step 1
Built a wearable embedded vision system using a Raspberry Pi 5 with a mounted USB camera for live video capture.
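A minimal sketch of the capture side, assuming OpenCV (`cv2`) for the USB camera. The device index, resolution, and helper names here are illustrative choices, not values from the project; `grab_frame` is kept duck-typed so it works with any object exposing a `read()` method.

```python
def open_camera(index=0, width=640, height=480):
    """Open a USB camera via OpenCV at a reduced resolution to keep
    inference fast on a Raspberry Pi 5 (index/resolution are assumptions)."""
    import cv2  # imported lazily so the pure helper below runs without OpenCV
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    return cap

def grab_frame(cap):
    """Return the latest frame from a capture object, or None on failure."""
    ok, frame = cap.read()
    return frame if ok else None
```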
Step 2
Implemented real-time object detection with the Ultralytics YOLOv8 model, identifying objects in each captured frame.
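The detection step might look like the sketch below. The label-extraction helper is pure Python so it can run without a model loaded; the class-ID mapping and the 0.5 confidence threshold are assumptions, not project values.

```python
def names_from_detections(detections, class_names, min_conf=0.5):
    """Map raw (class_id, confidence) pairs to unique human-readable
    labels, dropping low-confidence hits and duplicates while
    preserving first-seen order."""
    labels = []
    for class_id, conf in detections:
        if conf < min_conf:
            continue
        label = class_names[class_id]
        if label not in labels:
            labels.append(label)
    return labels
```

With the Ultralytics package, `model = YOLO("yolov8n.pt")` followed by `results = model(frame)` yields `results[0].boxes`, whose `.cls` and `.conf` tensors can be zipped into the `(class_id, confidence)` pairs this helper expects (model file name assumed here).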
Step 3
Developed an inference pipeline with Python to capture frames, run model inference, and extract detected object classes.
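One practical concern in a frame-by-frame pipeline is that the same object is detected dozens of times per second. A cooldown filter like the sketch below (the class name and 5-second window are my assumptions, not the project's) keeps the pipeline from announcing the same label every frame.

```python
import time

class AnnouncementFilter:
    """Suppress repeat announcements of the same object class within a
    cooldown window, so detections are spoken once rather than on
    every frame. The clock is injectable for testing."""
    def __init__(self, cooldown_s=5.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock
        self._last_spoken = {}

    def filter(self, labels):
        """Return only the labels whose cooldown has expired, and
        record the announcement time for each one returned."""
        now = self.clock()
        fresh = []
        for label in labels:
            last = self._last_spoken.get(label)
            if last is None or now - last >= self.cooldown_s:
                fresh.append(label)
                self._last_spoken[label] = now
        return fresh
```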
Step 4
Integrated a text-to-speech module to convert detected object labels into spoken audio feedback for the user.
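Before labels reach the speech engine they need to become a natural phrase. A small phrase builder, sketched below with wording of my own choosing, handles the empty, single-object, and multi-object cases.

```python
def describe(labels):
    """Turn a list of detected labels into a short spoken phrase,
    e.g. ["person", "chair"] -> "I see a person, and a chair"."""
    if not labels:
        return ""
    if len(labels) == 1:
        return f"I see a {labels[0]}"
    return "I see a " + ", a ".join(labels[:-1]) + f", and a {labels[-1]}"
```

An offline engine such as pyttsx3 (`engine = pyttsx3.init(); engine.say(text); engine.runAndWait()`) or an `espeak` subprocess call could then voice the phrase; which engine the project used is not stated in the source.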


Skills & Tools
Raspberry Pi 5 · YOLOv8 (Ultralytics) · Computer Vision · Embedded Linux · OpenCV · Python · Real-Time Object Detection · Text-to-Speech Feedback · Assistive Technology Design
