We are helping people with no prior knowledge of sign language get basic training through our deep learning model. With this project, a person can learn sign language using pre-trained models. The project also includes essential tools that help people with disabilities communicate with others; text-to-speech conversion is one of them. We also created a pose template for building sign models, which can be found in our GitHub repository.
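As a rough illustration of the text-to-speech tool, here is a minimal sketch assuming a browser-based app and the standard Web Speech API; the function name and the idea of speaking a recognized sign label are our own illustration, not code from the repository.

```ts
// Minimal text-to-speech sketch using the browser's Web Speech API.
// speakSign is a hypothetical helper; the label would come from the sign classifier.
function speakSign(label: string): void {
  if (!("speechSynthesis" in window)) {
    console.warn("Speech synthesis is not supported in this browser");
    return;
  }
  const utterance = new SpeechSynthesisUtterance(label);
  utterance.rate = 0.9; // slightly slower speech for clarity
  window.speechSynthesis.speak(utterance);
}

// Example: speak the letter predicted for the current hand sign.
speakSign("A");
```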
As beginners in ML, integrating TensorFlow was a tough job, and training the deep learning model was equally difficult.
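For context, the sketch below assumes a TensorFlow.js setup with the pre-trained handpose model, a common choice for in-browser hand tracking; the writeup does not say which TensorFlow API the project actually uses, so treat this as illustrative only.

```ts
// Assumed setup: TensorFlow.js with the pre-trained handpose model.
import "@tensorflow/tfjs-backend-webgl"; // registers the WebGL backend
import * as handpose from "@tensorflow-models/handpose";

// Detect hand landmarks from a <video> element (hypothetical helper).
async function detectHands(video: HTMLVideoElement) {
  const model = await handpose.load(); // downloads pre-trained weights
  const predictions = await model.estimateHands(video);
  // Each prediction contains 21 3D landmarks for one detected hand.
  return predictions;
}
```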
Defining the signs, such as the A and B hand signs, and training the user model took a lot of time, although some signs, such as D, were easier. We also ran into some OpenGL errors where a component was not rendering due to a bug, but it was fixed in the end.
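Sign definitions like the ones mentioned above are often expressed as pose templates over finger curls and directions. The sketch below assumes the fingerpose library on top of handpose landmarks; the repository's actual template format is not specified in this writeup, and the curl values for the letter "A" are our own approximation (a fist with the thumb extended).

```ts
// Assumed setup: the fingerpose library matching handpose landmarks
// against hand-written pose templates.
import * as fp from "fingerpose";

// Approximate the ASL letter "A": four fingers fully curled, thumb extended up.
const aSign = new fp.GestureDescription("A");
for (const finger of [fp.Finger.Index, fp.Finger.Middle, fp.Finger.Ring, fp.Finger.Pinky]) {
  aSign.addCurl(finger, fp.FingerCurl.FullCurl, 1.0);
}
aSign.addCurl(fp.Finger.Thumb, fp.FingerCurl.NoCurl, 1.0);
aSign.addDirection(fp.Finger.Thumb, fp.FingerDirection.VerticalUp, 0.7);

// Match detected landmarks (e.g. from handpose) against the templates.
const estimator = new fp.GestureEstimator([aSign]);
// const result = estimator.estimate(landmarks, 8.5); // 8.5 = minimum confidence
```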
Technologies used
TensorFlow, OpenGL, and text-to-speech synthesis.
Discussion