At hackathons, conferences, and similar events, we’ve seen volunteers run around the venue passing microphones whenever the audience needs to interact with the speakers. This wastes crucial time for everybody.
We also noticed that people with accessibility challenges, especially speech and hearing impairments, have no way to meaningfully participate in such events.
So we set out to solve these problems with our hack, Mercury (named after the Roman god of communication).
We really wanted to solve the gesture-to-speech problem: a speech-impaired person conveys a message using gestures in front of their smartphone’s front camera, and it gets converted to voice in real time. We spent a lot of time trying to do this, but we need more time to crack it.
Many speech and sign language recognition libraries lack proper documentation or don’t publish the training datasets they used, which makes them difficult to reuse under a time constraint.
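To make the intended pipeline concrete, here is a minimal sketch of the gesture-to-speech flow we were aiming for. Everything in it is an assumption for illustration: the gesture labels, the `classify_gesture` stub (in a real app this would be a hand-landmark model such as MediaPipe running over camera frames), and the `speak` stub (which would call a platform text-to-speech API):

```python
# Hypothetical sketch of a gesture-to-speech pipeline (not our final implementation).

# Assumed mapping from recognized gesture labels to spoken phrases.
GESTURE_PHRASES = {
    "thumbs_up": "Yes",
    "thumbs_down": "No",
    "raised_hand": "I have a question",
    "wave": "Hello",
}

def classify_gesture(frame):
    """Placeholder for a real gesture classifier over a camera frame
    (e.g. a hand-landmark model); returns a label string."""
    raise NotImplementedError

def speak(text):
    """Placeholder for a platform text-to-speech call; here it just
    returns the text so the pipeline can be exercised offline."""
    return text

def gesture_to_speech(label):
    """Map a recognized gesture label to a spoken phrase, if known."""
    phrase = GESTURE_PHRASES.get(label)
    return speak(phrase) if phrase is not None else None
```

For example, `gesture_to_speech("raised_hand")` would produce the phrase "I have a question", while an unrecognized label yields nothing. The hard part, which we didn’t have time to crack, is the real `classify_gesture`: reliable real-time recognition from the front camera.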