Created on 7th April 2024
SilentSignal addresses the challenges faced by the global deaf and hard-of-hearing community, which numbers approximately 70 million people. This community often encounters substantial barriers to effective communication, which perpetuate isolation and limit access to education and employment. Notably, an estimated 98% of deaf people worldwide receive no formal education in sign language, hindering their ability to express themselves and engage meaningfully with society. This educational deficit is compounded by the fact that 72% of families do not sign with their deaf children, creating communication gaps within households. As a result, the deaf community faces a 70% underemployment rate, and one in four deaf individuals has quit a job due to discrimination.
A significant technical challenge in developing SilentSignal was building and training a machine learning model that could accurately recognize sign language gestures in video frames. To overcome this hurdle, we applied an OpenCV edge-detection transformation to simplify each frame, giving our Convolutional Neural Network (CNN) a more digestible input to extract information from. This preprocessing step played a crucial role in improving the model's ability to classify frames into distinct letters of the sign language alphabet. A sketch of this pipeline follows below.
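The write-up names the edge-detection step and the CNN classifier but not the specific operator, framework, or input shape, so the following is a minimal illustrative sketch rather than the project's actual code. It assumes OpenCV's Canny operator for the edge transform, a small Keras CNN, 64×64 single-channel inputs, and 26 output classes (one per alphabet letter); all of these choices are assumptions.

```python
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 64      # assumed input resolution; not stated in the write-up
NUM_CLASSES = 26   # one class per letter of the sign language alphabet

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Simplify a BGR video frame with an edge-detection transform.

    Canny is used here as a stand-in; the write-up only says
    "edge-detection transformation" without naming the operator.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)        # denoise before edge detection
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)
    edges = cv2.resize(edges, (IMG_SIZE, IMG_SIZE))
    # Scale to [0, 1] and add a channel axis -> shape (H, W, 1)
    return edges.astype("float32")[..., np.newaxis] / 255.0

def build_model() -> tf.keras.Model:
    """A small CNN that classifies preprocessed edge maps into letters."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Feeding the CNN edge maps rather than raw color frames reduces each input to the hand's outline, which reflects the rationale given above: a simpler, more digestible input from which the network can extract the gesture's shape.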
Technologies used