Deaf and mute people face significant challenges on traditional video calling platforms, as they rely heavily on visual cues and sign language to communicate.
Existing video calling solutions do not incorporate sign language recognition technology. As a result, deaf and mute individuals cannot fully express themselves through sign language, which limits their ability to communicate naturally.
Hand movements in sign language often involve one hand partially or fully occluding the other. This occlusion can make it difficult for computer vision algorithms to accurately track and recognize the gestures of both hands simultaneously.
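One simple way to flag such frames is to measure how much the two detected hand bounding boxes overlap and treat high-overlap frames as occluded. This is a minimal, illustrative sketch, not the project's actual pipeline: the `iou` helper, the box format `(x1, y1, x2, y2)`, and the `0.3` threshold are all assumptions, and the boxes would come from whatever hand detector the system uses.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def hands_occluded(box_a, box_b, threshold=0.3):
    """Flag frames where one hand substantially overlaps the other,
    so the recognizer can skip or smooth over them."""
    return iou(box_a, box_b) >= threshold
```

Flagged frames could then be dropped, or the last confident prediction could be held until both hands are clearly visible again.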
Many sign gestures closely resemble each other; for example, the gestures for "thank you" and "good" are almost identical, so the model often confused them: we would sign "good" and it would predict "thank you". We overcame this problem by improving the model's accuracy, increasing the amount of training data and making the dataset more diverse by collecting real-time images under different lighting conditions.
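The lighting-diversity idea can also be reproduced offline: in addition to capturing new frames, each training image can be augmented with random brightness and contrast shifts. A minimal NumPy sketch, assuming `uint8` HxWxC images; the function name and parameter ranges are illustrative, not the project's actual augmentation code:

```python
import numpy as np

def vary_lighting(image, rng, brightness=(-40, 40), contrast=(0.7, 1.3)):
    """Return a copy of a uint8 image with random brightness and contrast,
    simulating the different lighting conditions seen at capture time."""
    b = rng.uniform(*brightness)          # additive brightness shift
    c = rng.uniform(*contrast)            # multiplicative contrast factor
    out = image.astype(np.float32) * c + b
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = np.full((64, 64, 3), 128, dtype=np.uint8)  # stand-in for a camera frame
augmented = [vary_lighting(frame, rng) for _ in range(5)]
```

Training on such variants alongside the originals helps the model stay robust when video-call lighting differs from the capture setup.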