See++

A Dynamic Real-Time visual aid for blind people


The problem See++ solves

Individuals with visual impairments often struggle to make sense of their surroundings and to carry out everyday indoor tasks, such as switching off the lights. We propose a cost-efficient, lightweight wearable device that provides visual assistance. The system observes the user's surroundings in real time and describes the environment through voice output, giving the user a powerful tool for mobility. It is operated entirely through voice commands, making it hands-free, as sketched in the loop below.
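A minimal sketch of the capture-detect-speak loop described above. It assumes a quantized SSD-MobileNet-style TensorFlow Lite detection model (`detect.tflite`) and a plain-text label file (`labels.txt`); the file names, the output tensor order, and the 0.5 confidence threshold are illustrative assumptions, not the exact implementation.

```python
import cv2
import numpy as np
import pyttsx3
from tflite_runtime.interpreter import Interpreter

# Load the detection model and read its expected input size.
interpreter = Interpreter(model_path="detect.tflite")  # assumed file name
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

labels = [line.strip() for line in open("labels.txt")]  # assumed label file
tts = pyttsx3.init()
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break

    # Resize to the model's input size; a quantized (uint8) model is assumed,
    # a float model would additionally need normalization.
    resized = cv2.resize(frame, (width, height))
    interpreter.set_tensor(input_details[0]["index"],
                           np.expand_dims(resized, axis=0))
    interpreter.invoke()

    # Output tensor order (boxes, classes, scores, count) is model specific;
    # label indexing may also need an offset depending on the label file.
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]

    seen = {labels[int(c)] for c, s in zip(classes, scores) if s > 0.5}
    if seen:
        # Speak a short description of what is currently in front of the user.
        tts.say("I can see " + ", ".join(sorted(seen)))
        tts.runAndWait()
```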
We have also equipped the system with the Alexa Voice Service, which lets the user ask queries through voice commands and can later be extended to home automation.
Keeping in mind the fact that technology can never be an alternative to human care, we have also built in real-time sharing of the user's location with his/her caretakers.
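A minimal sketch of the location-sharing feature, assuming the caretaker side exposes an HTTP endpoint; the URL, payload shape, and sample coordinates are hypothetical and only illustrate the idea.

```python
import time
import requests

CARETAKER_ENDPOINT = "https://example.com/api/location"  # hypothetical URL


def share_location(latitude: float, longitude: float) -> None:
    """POST the user's current coordinates to the caretaker's endpoint."""
    requests.post(
        CARETAKER_ENDPOINT,
        json={"lat": latitude, "lon": longitude, "ts": time.time()},
        timeout=5,
    )


# Example call with placeholder coordinates; on the device the values would
# come from a GPS module or a paired phone.
share_location(28.6139, 77.2090)
```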

Challenges we ran into

Being limited to the modest compute power of a Raspberry Pi, it was quite challenging to run heavy machine learning models in parallel while still providing fast real-time service. Our hardware had only 1 GB of RAM, so we had to manage our running scripts carefully. To tackle this, we exported our machine learning models to TensorFlow Lite, compressing the model parameters and architecture into a single file optimized for running on embedded systems. This gave us a significant performance boost and helped us build our hack.
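A short sketch of the TensorFlow Lite export step, assuming a TF 2.x SavedModel directory named `saved_model/`; the path and the optimization flag are illustrative.

```python
import tensorflow as tf

# Convert the trained SavedModel into a single .tflite file.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")  # assumed path
# Default optimizations quantize the weights, shrinking the file and speeding
# up inference on the Pi's CPU.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the Raspberry Pi itself only the lightweight `tflite_runtime` package is needed to load `model.tflite`, so the full TensorFlow install never has to fit into the 1 GB of RAM.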

Discussion