E.D.I.T.H (Inspired by Marvel)

Now, the visually impaired can see. (Who said they are abnormal or handicapped? We bring EDITH to their aid: the future of senses for the blind, deaf and mute to lead a normal, happy and dignified life.)

The problem E.D.I.T.H (Inspired by Marvel) solves

Problem Statement:
• Globally, 2.2 billion people suffer from vision impairment, of which India accounts for 12 million individuals. Travelling alone and interacting with their surroundings are extremely challenging tasks for them.
• Constantly depending on another person for basic necessities is not always convenient. Blind people are often scammed and misled, so they need assistance and a sense of security to be self-reliant.
• Visually impaired people face challenges in completing their daily chores and interacting with their surroundings, and so do deaf people when they are not able to hear.

Solution:
The proposed smart glass "EDITH" is designed to work in the 4 modes mentioned below.
● Walking Mode: Real-time obstacle detection, currency detection and image recognition, with feedback through voice assistance to help the user avoid the obstacles in front of them.
● Reading Mode: Reads texts/signboards and uses text-to-speech conversion to provide real-time feedback to the user. Currency detection is also included in this mode (a minimal sketch of this pipeline follows this list).
● SOS Mode: In case of an emergency, an alert will be sent to the guardian contact.
● Google Assistant Mode: In this mode the user can access a virtual assistant to perform tasks like Google Maps navigation, reminders, alarms, etc.
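
To make the Reading Mode pipeline concrete, here is a minimal sketch assuming OpenCV for frame capture, pytesseract (the OCR library mentioned under Challenges) for text extraction, and pyttsx3 for offline text-to-speech. The camera index, preprocessing steps and speech engine are our assumptions for illustration, not a record of the exact build.

```python
# Minimal Reading Mode sketch: grab a frame, OCR it, speak the result.
import cv2
import pytesseract
import pyttsx3

def read_aloud_from_camera(camera_index=0):
    cap = cv2.VideoCapture(camera_index)   # placeholder camera index
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return
    # Grayscale + Otsu threshold tends to help Tesseract on signboards.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary)
    if text.strip():
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

if __name__ == "__main__":
    read_aloud_from_camera()
```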

Uniqueness & Innovation
● The model proposed in this report has been designed using a modular system design approach to ensure user ergonomics and comfort.
● The proposed smart glass is designed to work in the 4 modes mentioned below.

  1. Walking Mode: Real-time obstacle detection and feedback through voice assistance to help the user avoid the obstacles in front of them (see the hardware sketch after this list).
  2. Reading Mode: Reads texts/signboards and uses text-to-speech conversion to provide real-time feedback to the user.
  3. Google Assistant Mode: Provides real-time voice assistance to the user through Google voice APIs.
  4. SOS Mode: In case of an emergency, an alert will be sent to the guardian contact.
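
The write-up does not specify which sensors the glass uses, so the following Walking Mode sketch assumes a Raspberry Pi with an HC-SR04 ultrasonic sensor and pyttsx3 for the voice warning. The GPIO pins and warning threshold are placeholders, not the actual hardware configuration.

```python
# Walking Mode sketch: warn the user when an obstacle is closer than a threshold.
import time
import RPi.GPIO as GPIO
import pyttsx3

TRIG, ECHO = 23, 24          # hypothetical GPIO pins for the ultrasonic sensor
WARN_DISTANCE_CM = 100       # warn when an obstacle is within one metre

def measure_distance_cm():
    GPIO.output(TRIG, True)
    time.sleep(0.00001)      # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    # Speed of sound is roughly 34300 cm/s; the pulse travels there and back.
    return (stop - start) * 34300 / 2

def walking_mode_loop():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)
    engine = pyttsx3.init()
    try:
        while True:
            distance = measure_distance_cm()
            if distance < WARN_DISTANCE_CM:
                engine.say(f"Obstacle ahead, about {int(distance)} centimetres")
                engine.runAndWait()
            time.sleep(0.5)
    finally:
        GPIO.cleanup()

if __name__ == "__main__":
    walking_mode_loop()
```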

Challenges we ran into

The major challenges we faced while building this project were:

Time Constraint: Projects like this normally need a lot of time, but we had to build it within 48 hours, in an online competition, and as a hardware solution. Training the model in such a short time and assembling the device properly within the hackathon itself was the biggest challenge.

Training the Model: Training the model quickly and with reasonable accuracy was a hard nut to crack. Sign language involves a large number of signs and combinations, and everyday objects form a very vast domain, so training demands a lot of time. For now we have trained the model on more than 1,000 combinations, and we will keep extending it until we reach our desired goal; a rough sketch of this kind of training setup follows.
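
The exact architecture and dataset are not documented in this write-up, so this is only an illustrative transfer-learning sketch with Keras/MobileNetV2 and a folder-per-class image dataset; the "dataset/" path and hyperparameters are placeholders.

```python
# Illustrative fine-tuning sketch: reuse pretrained MobileNetV2 features and
# train only a new classification head on our own sign/object images.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained backbone frozen

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```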

Accuracy: We also ran into challenges while tuning the model for the best accuracy, and while using pytesseract for image-to-text conversion and testing its accuracy (a rough evaluation sketch follows).
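
As a concrete example of the kind of accuracy check we mean, here is a small sketch that compares pytesseract output against hand-written ground truth. The sample file names and expected strings are placeholders, and difflib's ratio is used as a rough stand-in for a proper character-error-rate metric.

```python
# Compare OCR output with expected text for a few labelled test images.
import difflib
from PIL import Image
import pytesseract

test_cases = {
    "samples/sign_exit.png": "EXIT",        # placeholder test images
    "samples/note_100.png": "100 RUPEES",
}

def similarity(expected, actual):
    return difflib.SequenceMatcher(None, expected.lower(), actual.lower()).ratio()

for path, expected in test_cases.items():
    actual = pytesseract.image_to_string(Image.open(path)).strip()
    print(f"{path}: expected={expected!r} got={actual!r} "
          f"similarity={similarity(expected, actual):.2f}")
```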

Social Cause: Because this project was built for such an important cause, we had to handle people's sentiments with care, and we could not abandon it since it is a solution for a large audience and their problems. The social and emotional weight attached to the project was a big challenge, but it also acted as a great motivation.

Discussion