
InTouch

Speak with Signs, Connect with All.

Created on 15th June 2025


The problem InTouch solves

Our project is a real-time sign language translator that enables seamless communication between deaf/mute individuals and the hearing population. It converts sign language to text and text to sign using a combination of AI models, computer vision, and a modern web interface.

The system uses a Next.js frontend and a Python (FastAPI) backend integrated with MediaPipe, OpenCV, and a custom-trained CNN model (asl_model.h5). The model was trained on a self-created dataset built with Canva, in which hand landmarks were captured and labeled.
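The exact preprocessing between MediaPipe and the CNN is not described here; a minimal sketch, assuming the 21 (x, y) hand landmarks MediaPipe Hands returns are translated relative to the wrist and scaled before prediction (the function name and normalization scheme are illustrative, not the project's actual code):

```python
# Hypothetical landmark preprocessing for the CNN (asl_model.h5).
# Assumption: the model consumes a flat, wrist-relative, scale-normalized
# vector of landmark coordinates.

def normalize_landmarks(landmarks):
    """Flatten (x, y) hand landmarks, translated so the wrist
    (landmark 0) is the origin and scaled by the largest offset."""
    base_x, base_y = landmarks[0]
    rel = [(x - base_x, y - base_y) for x, y in landmarks]
    flat = [value for point in rel for value in point]
    max_abs = max((abs(v) for v in flat), default=0.0)
    if max_abs == 0.0:
        return flat  # degenerate case: all points coincide
    return [v / max_abs for v in flat]
```

Normalizing this way makes predictions insensitive to where the hand sits in the frame and how close it is to the camera.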

In Sign-to-Text mode, hand gestures are detected via webcam, processed using MediaPipe, and passed to the AI model to predict each character, eventually forming words and sentences.
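How per-frame character predictions become words is not spelled out above; one common approach, sketched here as an assumption, is to accept a character only after it is predicted in several consecutive frames, filtering out jitter (the threshold and the "space" token are illustrative):

```python
# Sketch: debounce a noisy stream of per-frame character predictions
# into stable text. A character is committed only after it appears in
# `hold_frames` consecutive frames.

class CharacterAssembler:
    def __init__(self, hold_frames=5):
        self.hold_frames = hold_frames
        self.candidate = None  # character currently being held
        self.count = 0         # consecutive frames it has been seen
        self.text = ""         # committed output so far

    def feed(self, char):
        """Feed one per-frame prediction; return the text so far."""
        if char == self.candidate:
            self.count += 1
        else:
            self.candidate, self.count = char, 1
        # Commit exactly once, when the hold threshold is first reached.
        if self.count == self.hold_frames:
            self.text += " " if char == "space" else char
        return self.text
```

Because the commit fires only when the count first reaches the threshold, holding a sign longer does not duplicate the character.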

In Text-to-Sign mode, users enter text, and the system returns corresponding Indian Sign Language (ISL) signs as images, allowing hearing individuals to communicate back effectively.
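The Text-to-Sign lookup can be sketched as a simple character-to-image mapping; the URL scheme and file layout below are assumptions for illustration, not the project's actual API:

```python
# Hypothetical Text-to-Sign lookup: each letter or digit maps to an ISL
# sign image served by the backend; spaces become a pause marker.

def text_to_sign_images(text, base_url="/signs"):
    """Return sign-image URLs for each letter/digit in the input."""
    images = []
    for ch in text.lower():
        if ch.isalnum():
            images.append(f"{base_url}/{ch}.png")
        elif ch == " ":
            images.append(f"{base_url}/pause.png")
        # other punctuation is skipped
    return images
```

The frontend can then render the returned list in sequence so a hearing user's typed reply plays back as ISL signs.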

Challenges we ran into

One exciting challenge was integrating real-time webcam input with our gesture recognition model. Capturing frames, converting them to a backend-compatible format, and obtaining accurate predictions all proved tricky. We overcame this by using MediaPipe for precise landmark detection and by building a smooth, responsive interface that feels intuitive and reliable for users.
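The "backend-compatible format" step can be sketched as follows, assuming the frontend exports each frame as JPEG bytes and ships them base64-encoded in JSON (the payload shape is an assumption, not the project's actual API contract):

```python
# Sketch: package a captured webcam frame for the FastAPI backend and
# recover it on the other side. Field names are illustrative.
import base64
import json

def frame_to_payload(jpeg_bytes, frame_id):
    """Wrap raw JPEG bytes as a JSON payload safe to send over HTTP."""
    encoded = base64.b64encode(jpeg_bytes).decode("ascii")
    return json.dumps({"frame_id": frame_id, "image": encoded})

def payload_to_frame(payload):
    """Backend side: recover the original JPEG bytes for decoding."""
    data = json.loads(payload)
    return base64.b64decode(data["image"])
```

Base64 keeps the binary frame intact inside JSON; the backend can then decode the bytes with OpenCV before running landmark detection.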

Tracks Applied (2)

GEN AI&ML

The system takes gesture inputs (visual signs) and generates textual output, which is a form of AI-based content generat…

Best use of GitHub

