Created on 7th August 2022
Applications like Siri and Google Assistant are inaccessible to people with hearing and speech impairments. Beyond being frequently misunderstood, they often feel excluded from many everyday interactions.
Sign language is used by the hearing and speech impaired to communicate, and it should be regarded as a natural language much like Nepali, English, or French. Why, then, shouldn't software that can respond to queries typed in English or Nepali also answer queries posed in sign?
Our project aims not only to make sign language more widely understood but also to bridge this existing gap between sign language and other natural languages.
The biggest challenge our team faced was finding ASL datasets. Having no speech- or hearing-impaired member on our team, we turned to research to learn what constitutes American Sign Language. The datasets we found only included static hand signs; ASL, however, also depends heavily on parameters like facial expressions and hand movements.
We realized we needed to create our own dataset in order to build our prediction model. However, due to time constraints, we were only able to record 30 sequences of 30 frames for each ASL sign. Without a large dataset, the accuracy we aspired to was hard to achieve.
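To make the scale of this dataset concrete, here is a minimal sketch of how such a collection might be laid out as training arrays. This is an illustration, not our actual pipeline: the sign labels and the per-frame feature size (e.g. flattened keypoints from a tool like MediaPipe Holistic) are assumptions for the example.

```python
import numpy as np

# Collection setup as described above: 30 sequences per sign, 30 frames each.
SEQUENCES_PER_SIGN = 30
FRAMES_PER_SEQUENCE = 30
# Assumed per-frame feature size, e.g. flattened pose+face+hand keypoints.
KEYPOINTS_PER_FRAME = 1662

signs = ["hello", "thanks", "iloveyou"]  # placeholder sign labels

# Each training example is one (frames, features) sequence; stack all of them.
X = np.zeros(
    (len(signs) * SEQUENCES_PER_SIGN, FRAMES_PER_SEQUENCE, KEYPOINTS_PER_FRAME),
    dtype=np.float32,
)
# Integer label per sequence: sign 0 repeated 30 times, then sign 1, etc.
y = np.repeat(np.arange(len(signs)), SEQUENCES_PER_SIGN)

print(X.shape)  # (90, 30, 1662)
print(y.shape)  # (90,)
```

With only 90 sequences for three signs, it is easy to see why a model trained on this scale of data struggles to generalize.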
In addition, being beginners in the field, none of us had worked with ML beyond simple face detection and OCR. Each of us was therefore working with something new and learning as we built.