AI For Sign Language

The goal of this project is to create an AI-enabled system that converts a user's hand-gesture signs, made in American Sign Language (ASL), into text by training a deep-learning (DL) model.

The problem AI For Sign Language solves

In this day and age, communication is key. When it happens online, for example over video conferencing, the entire community of people who are unable to hear or speak faces difficulties. The main objective of this project is to translate sign language to text. To begin with, we are translating the ASL gestures for the letters of the alphabet.

The project provides a user-friendly environment by producing a text output for a sign-gesture input. When you use the hand signs for letters to spell out a word, you are finger-spelling. Finger-spelling is useful for conveying names or for asking someone the sign for a particular concept. ASL uses a one-handed sign for each letter of the alphabet. Many people find finger-spelling the most challenging hurdle when learning to sign, as accomplished signers finger-spell very quickly. A rough sketch of this letter-by-letter pipeline is shown below.
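The following is a minimal, illustrative sketch of how such a letter-by-letter translation loop could look. It assumes a trained Keras classifier saved as asl_alphabet_cnn.h5 that maps 64x64 RGB frames to the 26 ASL letters and a webcam as the video source; the file name, input size, and key-driven capture are assumptions for illustration, not the project's actual code.

```python
# Minimal sketch of letter-by-letter finger-spelling translation.
# Assumptions (illustrative only): a trained Keras model saved as
# "asl_alphabet_cnn.h5" that takes 64x64 RGB images and outputs 26
# letter probabilities, and a webcam as the video source.
import string

import cv2
import numpy as np
from tensorflow.keras.models import load_model

LETTERS = list(string.ascii_uppercase)        # class index -> letter
model = load_model("asl_alphabet_cnn.h5")     # hypothetical trained model


def predict_letter(frame_bgr):
    """Classify one video frame as a single ASL letter."""
    roi = cv2.resize(frame_bgr, (64, 64))             # match the model's input size
    roi = cv2.cvtColor(roi, cv2.COLOR_BGR2RGB)         # BGR -> RGB; the model rescales pixels itself
    probs = model.predict(roi[np.newaxis, ...].astype("float32"), verbose=0)[0]
    return LETTERS[int(np.argmax(probs))]


word = []
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.putText(frame, "".join(word), (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("ASL to text", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):                        # press s to capture the current sign
        word.append(predict_letter(frame))
    elif key == ord("q"):                      # press q to stop
        break
cap.release()
cv2.destroyAllWindows()
```

Here a key press stands in for sign segmentation; a fuller version would instead detect automatically when a sign is held steady before appending the letter.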

Team SudoCode has developed a framework that lends a helping hand to speech-impaired people, letting them communicate with the rest of the world using sign language. It also eliminates the middle person who usually acts as the medium of translation.

When AI is not in the picture, translation is done manually with the help of a middleman, which makes it difficult for the community of people with impaired hearing and speech to communicate effectively. AI adoption helps avoid this situation. Our approach ensures that everyone, irrespective of their mode of communication, can “get the floor” to express themselves. Especially in situations such as a lockdown, this enables effective remote working for deaf and hearing-impaired communities.

Challenges we ran into

We were unable to train the model using a GPU as we didn't have access to one. However, we were still able to train the model effectively for 10 epochs on the local CPU, although it took longer. A sketch of such a training run is given below.
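As a minimal sketch of what a CPU-only training run along these lines might look like, the snippet below trains a small CNN on ASL alphabet images for 10 epochs. The dataset path, image size, and layer sizes are illustrative assumptions rather than the project's actual configuration; the saved file name matches the hypothetical model used in the earlier sketch.

```python
# Minimal CPU-only training sketch for an ASL alphabet classifier.
# Assumptions (illustrative, not the project's actual setup): images stored as
# "data/asl_alphabet/<LETTER>/*.jpg", 64x64 inputs, 26 letter classes.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (64, 64)
NUM_CLASSES = 26

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/asl_alphabet",            # hypothetical dataset folder
    image_size=IMG_SIZE,
    batch_size=32,
)

# Small CNN so that training on a local CPU stays tolerable.
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(*IMG_SIZE, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 10 epochs on the CPU, as in the project write-up.
model.fit(train_ds, epochs=10)
model.save("asl_alphabet_cnn.h5")
```

Keeping the network this small is what makes 10 CPU epochs workable; a larger backbone would realistically need GPU time.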

We tried to develop a better GUI for the final product, but due to a shortage of time we had to keep it minimal.
