American Sign Language Capture Model

This project uses machine learning to recognize American Sign Language signs performed by the user, allowing the user to form sentences directly through this technique. It is useful for deaf people.

The problem American Sign Language Capture Model solves

Through this project we can allow deaf people to talk to others directly. Our model recognizes the user's sign language and assembles the recognized signs into sentences; people can then interact through this, i.e. when a text-to-speech API is used, the text can be converted directly into spoken language (a minimal sketch follows). With a more refined model, this could allow millions of deaf people to interact directly using sign language and give them the experience of talking through it.
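
The text-to-speech step could look like the minimal sketch below. This is an assumed example, not our exact code: it uses the pyttsx3 library (an offline TTS engine for Python), and predicted_sentence is a hypothetical stand-in for the output of the sign recognition model.

```python
# Minimal sketch of the text-to-speech step, assuming pyttsx3 is installed
# (pip install pyttsx3). `predicted_sentence` is a hypothetical placeholder
# for the sentence assembled from the recognized signs.
import pyttsx3

predicted_sentence = "HELLO HOW ARE YOU"  # hypothetical model output

engine = pyttsx3.init()          # initialize the offline TTS engine
engine.say(predicted_sentence)   # queue the sentence for speaking
engine.runAndWait()              # block until speech finishes
```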

Challenges we ran into

We had to go through a lot of American Sign Language datasets, and our accuracy still wasn't high enough, so we finally used transfer learning, which allowed us to get an acceptable model. Our model still struggles a little, but given the results it can produce we see it as a big win. We still need to finish the UI, which we are currently implementing. A sketch of the transfer-learning setup is shown below.
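
As a rough illustration, a transfer-learning setup along these lines could look like the following sketch. This is only an assumed example, not our exact pipeline: it uses TensorFlow/Keras with an ImageNet-pretrained MobileNetV2 as a frozen feature extractor plus a small trainable classification head, and NUM_CLASSES, train_ds, and val_ds are hypothetical placeholders.

```python
# Hedged sketch of transfer learning for ASL sign classification, assuming
# TensorFlow/Keras. NUM_CLASSES, train_ds, and val_ds are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # e.g. one class per ASL letter (assumption)

# Pretrained MobileNetV2 backbone, used as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # freeze pretrained weights; train only the head

# Small trainable classification head on top of the frozen backbone.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets are hypothetical
```

Freezing the backbone lets the pretrained ImageNet features carry most of the work, which is what typically lifts accuracy when the sign-language dataset alone is too small to train a deep network from scratch.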
