About 70 million people in the world cannot communicate easily with those around them because they are unable to speak or hear. We sought to create an innovative, accessible product that would help them attend and participate in virtual meetings with ease. Our product is built around a virtual camera that runs locally on the user's machine and can be selected as the main video source on any platform they wish to use. It uses machine learning to recognize the user's hand gestures and converts them into subtitles for everyone to read.
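A minimal sketch of that pipeline is shown below, assuming a pre-trained Keras gesture classifier saved as `gesture_model.h5` with a 224x224 RGB input and a small label list; the model path, input size, and vocabulary are placeholders for illustration, not the project's actual artifacts. The sketch reads frames from the physical webcam, classifies the gesture, burns the result into the frame as a subtitle, and publishes the frame through `pyvirtualcam` so any meeting platform can pick it up as a regular camera.

```python
import cv2
import numpy as np
import pyvirtualcam
import tensorflow as tf

LABELS = ["hello", "thank you", "yes", "no"]  # hypothetical gesture vocabulary
model = tf.keras.models.load_model("gesture_model.h5")  # hypothetical model file

cap = cv2.VideoCapture(0)  # physical webcam is the raw input source

# Expose a virtual camera; meeting apps see this as a normal video device.
with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))

        # Classify the current frame; a real system would crop to the hand
        # region and smooth predictions over several frames.
        batch = cv2.resize(frame, (224, 224))[np.newaxis].astype("float32") / 255.0
        probs = model.predict(batch, verbose=0)[0]
        subtitle = LABELS[int(np.argmax(probs))]

        # Burn the recognized gesture into the frame as a subtitle.
        cv2.putText(frame, subtitle, (20, 460),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)

        # pyvirtualcam expects RGB frames; OpenCV delivers BGR.
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()
```

Rendering the subtitles directly onto the video frames keeps the whole system platform-agnostic: no meeting software needs a plugin, because the text travels inside the video stream itself.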
The major challenge we ran into was integrating TensorFlow with the virtual camera system and getting that pipeline working across different platforms. Beyond that, our team hit only a few minor hurdles, namely deciding on the project's name and working through some logic bugs in the code.
Discussion