FLICK

A multi-lingual sign language interpreter platform that bridges the communication gap for the deaf and hard-of-hearing community, with real-time translations, community-driven sign updates.

Created on 23rd October 2024

The problem FLICK solves

The multi-lingual sign language interpretation platform makes communication easier and more accessible for the deaf and hard-of-hearing community. It enables real-time conversations, allowing users to interact smoothly with hearing individuals in everyday situations.

- Education: institutions can integrate the platform into classrooms to provide instant sign language translations during lectures, fostering a more inclusive learning environment.
- Healthcare: the platform ensures that communication between medical staff and deaf patients is accurate and safe, reducing the potential for misunderstandings.
- Workplace: it enhances remote work by providing sign language support during virtual meetings, allowing deaf employees to participate fully.
- On the go: the mobile app makes everyday interactions, from grocery shopping to parent-teacher meetings, more manageable.
- Emergencies: it can play a crucial role by enabling first responders to communicate quickly with deaf individuals, potentially saving lives.

With its community-driven updates, the platform stays current with new signs and regional variations, offering precise translations. This makes it a valuable tool across contexts, improving inclusivity, safety, and ease of communication.

Challenges I ran into

The biggest obstacle was the lack of an existing dataset for ISL (Indian Sign Language). Unable to find one, we built our own, which was a tedious and time-consuming task: different people posed each sign with different hands, under different lighting conditions, and against different backgrounds. All of this work went into improving the model's accuracy, and increasing the number of samples per sign raised the prediction confidence score from 0.45 to 0.85, which reflects our conviction in both the model and the dataset. To push accuracy further, we integrated modules such as GaussianFlow, which enhanced the existing model by isolating the sign and blurring the background. Since the Keras libraries are updated from time to time, getting familiar with the new concepts was also a challenge, but my team and I planned and executed the model to the best of our ability.
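The confidence-threshold idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the sign labels and the `predict_sign` helper are made up for the example, not taken from the actual FLICK codebase): the array stands in for the softmax output of the Keras classifier, and the 0.85 cutoff mirrors the confidence score reached after enlarging the dataset.

```python
import numpy as np

# Hypothetical labels for a small ISL sign vocabulary (illustrative only).
SIGNS = ["hello", "thanks", "yes", "no"]

def predict_sign(probs, threshold=0.85):
    """Return the predicted sign label, or None when the model's top
    softmax probability falls below the confidence threshold."""
    probs = np.asarray(probs, dtype=float)
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        # Too uncertain: better to skip the frame than to mistranslate.
        return None
    return SIGNS[top]

# A confident frame passes the threshold...
print(predict_sign([0.02, 0.9, 0.05, 0.03]))  # -> thanks
# ...while an ambiguous one is rejected rather than mistranslated.
print(predict_sign([0.3, 0.3, 0.25, 0.15]))   # -> None
```

Raising the threshold trades coverage for reliability: with the original 0.45 cutoff the ambiguous frame above would have produced a (likely wrong) label.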
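The background-blurring step can be sketched as well. The following is a minimal NumPy sketch, not the actual pipeline: it assumes a grayscale frame and a boolean mask over the signing hand (how the mask is produced, e.g. by skin-colour segmentation, is outside this example), and shows only the idea of keeping the hand sharp while blurring everything else.

```python
import numpy as np

def gaussian_kernel1d(ksize=9, sigma=2.0):
    """Normalized 1-D Gaussian kernel for separable blurring."""
    x = np.arange(ksize) - ksize // 2
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_background(frame, mask, ksize=9, sigma=2.0):
    """Blur everything outside the hand mask, leaving the sign sharp.

    frame: 2-D grayscale image; mask: boolean array, True over the hand.
    """
    k = gaussian_kernel1d(ksize, sigma)
    # Separable Gaussian blur: convolve each row, then each column.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, frame.astype(float))
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # Composite: original pixels inside the mask, blurred pixels outside.
    return np.where(mask, frame.astype(float), blurred)
```

Suppressing background detail this way reduces the variation the classifier has to cope with, which is the same goal the varied-background training photos served from the opposite direction.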
