Deaf and hard-of-hearing individuals face communication barriers when trying to interact with those who do not understand sign language. This project aims to bridge the gap by providing a sign language translator that uses video input to generate recognized text and text-to-speech output, allowing for seamless communication between deaf/hard-of-hearing individuals and the wider community.
We had issues integrating our MediaPipe model with the web and mobile apps, so we had to try different approaches such as Teachable Machine.
Improving the accuracy of the model was a tiresome task; we had to build a custom dataset of our own.
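As a sketch of the kind of preprocessing such a custom dataset typically needs, the snippet below normalizes hand-landmark coordinates so the classifier is invariant to where the hand sits in the frame and how far it is from the camera. The 21-point (x, y) landmark layout is an assumption based on MediaPipe's hand tracker; the function name and exact scaling scheme are hypothetical, not the project's actual code.

```python
# Hypothetical preprocessing for hand-landmark features, assuming 21 (x, y)
# landmarks per frame in the layout produced by MediaPipe Hands (index 0 = wrist).

def normalize_landmarks(landmarks):
    """Translate landmarks so the wrist is at the origin, then scale
    so the largest absolute coordinate is 1."""
    wrist_x, wrist_y = landmarks[0]
    # Shift every point relative to the wrist (position invariance).
    shifted = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    # Divide by the largest coordinate magnitude (scale invariance);
    # fall back to 1.0 if all points coincide to avoid division by zero.
    scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]
```

Training the classifier on features like these, rather than raw pixel positions, is one common way to squeeze more accuracy out of a small self-collected dataset.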
We also had issues linking it with the Web3 approach, which is a potential future direction for our project.