Created on 6th September 2020
Nearly 40 million people in India alone are visually impaired (285 million worldwide). Everyday tasks are difficult for them, and they find themselves dependent on someone almost always. Devices that help visually impaired people by scanning the environment and guiding them accordingly do exist, but they are priced at several thousand dollars. That cost puts accessibility out of reach for most.
This inspired us to develop VC4U (We See For You) - an affordable device for the visually impaired.
VC4U has 2 components - an app and a spectacle. The spectacle is embedded with a camera and a touch sensor. Whenever the user wishes to use the device, they touch the sensor. This triggers the app to listen for the voice command the visually impaired person is about to give. With NLP the speech data is processed and converted into commands.
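To make the flow concrete, here is a minimal sketch of the touch-trigger-to-command path. It assumes the app side is Python and uses the speech_recognition package with Google's free speech-to-text endpoint; the function names and the simple keyword matching are illustrative, not the actual VC4U code.

```python
# Minimal sketch of the trigger -> listen -> command flow (illustrative only).
# Assumes the spectacle notifies the app when the touch sensor is pressed and
# that the app uses the `speech_recognition` package for speech-to-text.
import speech_recognition as sr

recognizer = sr.Recognizer()

def on_touch_sensor_pressed():
    """Called when the spectacle reports a touch event."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # reduce background noise
        audio = recognizer.listen(source)            # record the spoken command
    try:
        text = recognizer.recognize_google(audio).lower()  # speech -> text
    except sr.UnknownValueError:
        return "UNKNOWN"
    return dispatch(text)

def dispatch(text):
    """Map the recognised phrase to one of the supported commands."""
    if "detect" in text or "object" in text:
        return "DETECT_OBJECTS"
    if "where am i" in text:
        return "WHERE_AM_I"
    return "UNKNOWN"
```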
Here are a few commands (and what they do):
-> Detect objects : Captures an image from the onboard camera in the spectacle, detects the objects in it, and reads them out as voice feedback (sign boards, people, traffic signs, trucks and buses are a few classes to mention). A rough sketch of this pipeline is shown after the list.
-> Where am I? : Speaks the user's current location as voice output.
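Below is a sketch of how the "Detect objects" command could work end to end: one camera frame goes through a stock COCO-pretrained detector, and the detected class names are read aloud. The choice of torchvision's Faster R-CNN, the pyttsx3 text-to-speech engine, the score threshold, and the file name `frame_from_spectacle.jpg` are all assumptions for illustration; the actual VC4U model and classes may differ.

```python
# Sketch of the "Detect objects" command: image -> pretrained detector -> spoken labels.
# Uses a COCO-pretrained torchvision model and pyttsx3 for offline text-to-speech.
import torch
import pyttsx3
from PIL import Image
from torchvision.transforms.functional import to_tensor
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]          # COCO class names ("person", "bus", ...)

def detect_and_speak(image_path, score_threshold=0.6):
    """Run detection on one camera frame and read the detected classes aloud."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    found = {
        labels[label]
        for label, score in zip(output["labels"].tolist(), output["scores"].tolist())
        if score >= score_threshold
    }
    message = "I can see " + ", ".join(sorted(found)) if found else "Nothing detected"
    engine = pyttsx3.init()
    engine.say(message)                       # voice feedback to the user
    engine.runAndWait()

detect_and_speak("frame_from_spectacle.jpg")  # hypothetical frame saved by the spectacle
```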
The main objective was to provide a low-cost solution, and we achieved it - the build cost was under 1000 INR (about $14).
Many such problems kept popping up along the way, and the 'Programmer's Best Friend' (Google) was always there for us. XD