SenseMesh

Communication that adapts — to everyone.

Created on 6th December 2025


The problem SenseMesh solves

People with different accessibility needs — Deaf, Blind, Mute, Elderly, or neurodiverse users — struggle to communicate smoothly across digital platforms. Existing tools usually solve one need at a time and don’t adapt to mixed or multi-user conversations.

Communication gaps appear when:

A Deaf user receives an audio message.

A Blind user receives text without context.

A Mute user needs to respond quickly without typing.

An Elderly user receives complex or cluttered messages.

People with different abilities try to chat in real time.

SenseMesh solves this by unifying communication.
It takes any message — text, audio, sign, gesture — and adapts it into the most accessible form for each receiver.
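As a rough sketch of that per-receiver adaptation step (the profile fields and modality names here are illustrative assumptions, not the actual SenseMesh data model), an accessibility profile can be mapped to the most accessible output form:

```typescript
// Illustrative sketch only — field and modality names are assumptions,
// not SenseMesh's real API.
type Modality = "text" | "audio" | "sign" | "simplified-text";

interface ReceiverProfile {
  deaf?: boolean;
  blind?: boolean;
  prefersSimplified?: boolean; // e.g. elderly or low-vision readers
}

// Pick the most accessible form of an incoming message for one receiver.
function selectModality(profile: ReceiverProfile): Modality {
  if (profile.deaf) return "sign";   // e.g. audio rendered as sign/captions
  if (profile.blind) return "audio"; // e.g. text read aloud with context
  if (profile.prefersSimplified) return "simplified-text";
  return "text";
}
```

In a real system this decision would be richer (combined needs, per-conversation overrides), but the core idea is a pure function from profile to modality.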

People can use SenseMesh to:

Have barrier-free conversations with anyone, regardless of ability.

Convert audio → sign → text → gestures automatically.

Simplify messages for elderly users or low-vision readers.

Translate gestures for mute users in real time.

Enhance safety by providing clear alerts in the user’s preferred modality.

Enable inclusive communication in classrooms, workplaces, emergencies, and online interactions.

How it makes tasks easier & safer:

Instant adaptive messaging removes misunderstanding.

Unified UI means no switching apps for accessibility features.

Automatic personalization ensures every user gets content in their ideal format.

AI assistance improves decision-making, context clarity, and accessibility accuracy.

SenseMesh turns communication from ability-dependent → ability-inclusive.

Challenges we ran into

Building SenseMesh wasn’t straightforward. A few major challenges included:

1. Multi-modal AI inputs

Handling text, audio, gestures, and signs in one engine required a unified structure.
I overcame this by designing a Multi-Input Engine that normalizes all inputs into a single semantic layer.
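A minimal sketch of that "single semantic layer" idea (names are hypothetical, not the project's actual code): every raw input, whatever its modality, is normalized into one shared message shape before anything downstream sees it.

```typescript
// Hypothetical sketch — types and names are illustrative.
type InputKind = "text" | "audio" | "gesture" | "sign";

interface RawInput {
  kind: InputKind;
  payload: string; // raw text, a transcript, or a recognized gesture/sign label
}

interface SemanticMessage {
  intent: string;       // normalized, modality-independent meaning
  sourceKind: InputKind;
}

// Normalize any input into the shared semantic form. Real recognition
// (speech-to-text, gesture/sign models) would run before this step.
function normalize(input: RawInput): SemanticMessage {
  return {
    intent: input.payload.trim().toLowerCase(),
    sourceKind: input.kind,
  };
}
```

The benefit of the shared shape is that output adaptation only ever has to handle `SemanticMessage`, not four input formats.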

2. Accessibility profiles syncing across the app

Ensuring every feature respected the user’s accessibility preferences caused unexpected conflicts.
This was solved by implementing a global Adaptive Output Layer that routes all messages through one accessibility filter.
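The single-filter idea can be sketched like this (a toy version under assumed preference fields, not the real implementation): every outgoing message passes through one function keyed by the receiver's preferences, so no feature can bypass accessibility settings.

```typescript
// Toy sketch of a global output filter — preference fields are assumptions.
interface Preferences {
  simplify: boolean;        // shorten complex messages
  uppercaseAlerts: boolean; // make safety alerts maximally visible
}

// Single choke point: all features send output through this one function.
function adaptiveOutput(
  message: string,
  prefs: Preferences,
  isAlert = false
): string {
  let out = message;
  if (prefs.simplify) out = out.split(". ")[0]; // keep only the first sentence
  if (isAlert && prefs.uppercaseAlerts) out = out.toUpperCase();
  return out;
}
```

Routing everything through one function is what resolves the "unexpected conflicts": there is exactly one place where preferences are applied.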

3. Next.js build issues during deployment

I ran into:

JSX inside .ts files

Windows environment variable errors

Locked .next build folders

Vulnerable Next.js version warnings

Vercel authentication blocking public access

Each issue was fixed step by step:

Renamed TypeScript files to .tsx

Used cross-env for Windows

Cleared locked build folders

Updated Next.js version

Disabled Vercel’s default deployment protection
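The cross-env fix, for example, looks roughly like this in `package.json` (an illustrative fragment — script names and variables are assumptions, not the project's actual config). `cross-env` makes `VAR=value` assignments work in npm scripts on Windows as well as Unix shells:

```json
{
  "scripts": {
    "dev": "cross-env NODE_ENV=development next dev",
    "build": "cross-env NODE_ENV=production next build"
  },
  "devDependencies": {
    "cross-env": "^7.0.3"
  }
}
```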

4. AI model integration

Making Gemini interpret mixed inputs (speech + gesture + tone) required careful prompt engineering and consistent responses.

By iterating quickly and modularizing the AI requests, I achieved stable, predictable results.
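One way that modularization can look (a hedged sketch — function and field names are invented for illustration, and the actual Gemini call is out of scope here): each modality contributes one labeled section, so the prompt sent to the model is always assembled the same way.

```typescript
// Illustrative prompt builder — names are assumptions, not SenseMesh's code.
interface MixedInput {
  speech?: string;  // speech-to-text transcript
  gesture?: string; // recognized gesture label
  tone?: string;    // detected tone, e.g. "urgent"
}

// Compose a consistent, labeled prompt from whichever modalities are present.
function buildPrompt(input: MixedInput): string {
  const parts: string[] = ["Interpret this multi-modal message:"];
  if (input.speech) parts.push(`Speech: ${input.speech}`);
  if (input.gesture) parts.push(`Gesture: ${input.gesture}`);
  if (input.tone) parts.push(`Tone: ${input.tone}`);
  return parts.join("\n");
}
```

Because the structure never varies, the model's responses become much easier to keep stable and parseable across mixed inputs.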
