LEXIGN
Bridge the gap between signers and non-signers with real-time AI-powered communication and learning
Created on 29th May 2025
The problem LEXIGN solves
Communication barriers between the hearing and hearing-impaired communities often result in social and educational exclusion. Our project provides a smart, accessible tool to help people *learn sign language interactively*, convert *spoken words into signs*, and *translate signs into text and speech* using AI. This ensures more inclusive and seamless communication in real-world situations.
Challenges we ran into
Gesture Recognition Accuracy: Training the model to differentiate between similar hand signs required large amounts of clean, labeled data and extensive fine-tuning.
Real-time Processing: Achieving low-latency gesture detection from webcam input while maintaining accuracy was technically challenging.
Integrating Multimodal Components: Coordinating between Flask (backend), TensorFlow models, OpenCV input, and JavaScript frontend took significant debugging and optimization.
Voice & Sign Syncing: Syncing speech-to-sign animations and sign-to-text output with gTTS for seamless user interaction was tricky.
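One way to tame the accuracy-vs-latency trade-off described above is to smooth noisy per-frame model predictions with a sliding-window majority vote, so that visually similar signs don't flicker in the output. The sketch below is illustrative only (the class name and thresholds are our own, not taken from the project code):

```python
from collections import Counter, deque

class PredictionSmoother:
    """Stabilize noisy per-frame gesture labels with a sliding
    majority vote, so similar signs don't flicker in the UI."""

    def __init__(self, window=15, min_agreement=0.6):
        self.window = deque(maxlen=window)   # last N frame-level labels
        self.min_agreement = min_agreement   # fraction needed to emit a label

    def update(self, label):
        """Feed one per-frame prediction; return a stable label or None."""
        self.window.append(label)
        top, count = Counter(self.window).most_common(1)[0]
        if count / self.window.maxlen >= self.min_agreement:
            return top
        return None
```

In a real pipeline, `update()` would be called once per webcam frame with the top class from the TensorFlow model, and only the smoothed label would be shown to the user.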
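For the speech-to-sign direction, one approach to the syncing problem is to turn the recognized transcript into an ordered playlist of sign-animation clips, falling back to letter-by-letter fingerspelling for words with no recorded sign. A minimal sketch, assuming a `sign_library` dict and a `letters/` clip directory (both hypothetical names, not the project's actual asset layout):

```python
def text_to_sign_clips(text, sign_library):
    """Map a spoken-word transcript to an ordered list of sign-animation
    clips; unknown words fall back to fingerspelling, one clip per letter."""
    clips = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in sign_library:
            clips.append(sign_library[word])          # known whole-word sign
        else:
            clips.extend(f"letters/{ch}.mp4"          # fingerspell fallback
                         for ch in word if ch.isalpha())
    return clips
```

The frontend can then play the clips in sequence while gTTS (or the original audio) plays alongside, keeping voice and sign roughly in step.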
