This project allows deaf people to communicate with others directly. Our model recognizes the user's sign language and forms sentences from it; when a text-to-speech API is attached, that text can be converted straight into spoken language. With further refinement, a smoother version of this model could let millions of deaf people interact directly through sign language and give them the experience of speaking with it.
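As an illustration of the text-to-speech step, here is a minimal sketch assuming the recognized signs have already been joined into a sentence string. The pyttsx3 library is an assumption on our part; any TTS API could be swapped in.

```python
# Minimal sketch: speak a sentence formed from recognized signs.
# pyttsx3 is an assumed choice of TTS library, not the project's
# confirmed dependency.
import pyttsx3

def speak(sentence: str) -> None:
    """Convert a recognized sentence to audible speech."""
    engine = pyttsx3.init()          # initialize the local TTS engine
    engine.setProperty("rate", 150)  # speaking rate in words per minute
    engine.say(sentence)             # queue the sentence for playback
    engine.runAndWait()              # block until playback finishes

speak("Hello, how are you?")  # example sentence formed from signs
```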
We went through several American Sign Language datasets, but our accuracy was not good enough, so we finally turned to transfer learning, which gave us an acceptable model. The model still struggles a little, but given the results it can produce we see it as a big win. We still need to finish the UI we are implementing.
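Below is a hedged sketch of what the transfer-learning setup could look like, assuming a Keras/TensorFlow stack, a MobileNetV2 base pretrained on ImageNet, and 29 classes (26 letters plus "space", "delete", "nothing", as in common ASL alphabet datasets); the base model and class count are assumptions, not the project's confirmed configuration.

```python
# Sketch of transfer learning for ASL classification.
# Assumptions: MobileNetV2 base, 224x224 RGB inputs, 29 classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 29  # assumed: ASL alphabet dataset class count

# Load a network pretrained on ImageNet and drop its classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze pretrained features; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # new ASL head
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Freezing the pretrained base is what lets a small ASL dataset reach acceptable accuracy: only the final classification layer has to be learned from scratch.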
Discussion