*Our AI-based project generates sign language from video content, improving communication for hearing- and speech-impaired individuals on mobile devices and desktops.
*The project aims to identify regional dialects and generate the corresponding sign language interpretations, catering to the diverse linguistic needs within the hearing- and speech-impaired community.
*The technology bridges communication gaps and promotes inclusivity, positively impacting the lives of individuals with hearing and speech impairments. It can also be used as a translation tool.
*Traditional approaches to this idea rely on a human translator, are limited to a specific field, and lack focus on special educational content. Our generator, in contrast, offers 3D datasets, live video capture, local video input, customisable playback speed and screen mode, caption generation, and multilingual access (a minimal sketch of the two video-input modes follows this list). We have also sketched a methodology for using the product without an internet connection.
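The sketch below shows how the live-capture and local-file input modes could share one frame-reading loop. It assumes OpenCV for frame acquisition, which the write-up does not name, and the function names are illustrative rather than the project's actual API.

```python
# Minimal sketch, assuming OpenCV: cv2.VideoCapture accepts either a camera
# index (live capture) or a file path (local video), so both input modes can
# feed the same processing loop. Names here are illustrative only.
import cv2


def open_video_source(source=0):
    """Open a webcam (integer index) or a local video file (path string)."""
    cap = cv2.VideoCapture(source)
    if not cap.isOpened():
        raise RuntimeError(f"Could not open video source: {source!r}")
    return cap


def read_frames(cap):
    """Yield frames until the stream or file ends, then release the capture."""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame
    cap.release()


if __name__ == "__main__":
    # 0 = default webcam (live capture); pass e.g. "clip.mp4" for local input.
    for frame in read_frames(open_video_source(0)):
        cv2.imshow("input", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cv2.destroyAllWindows()
```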
We could not find an existing dataset to preprocess, so we created our own using inexpensive motion-capture and landmark-mapping technology, as sketched below. Initially the app only supported uploading video files from local storage; we have since added live capture as well. Making the app more viable and user friendly meant working through many glitches while adding these features. Our jury suggested that the product should also run in areas with no internet access, so we sketched a plan to reach rural communities through edge-computing technologies.
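As a sketch of the landmark-mapping step, the snippet below shows how per-frame pose and hand landmarks could be extracted from a recorded sign video to build such a dataset. It assumes MediaPipe Holistic and OpenCV, which the write-up does not specify, and the file names are hypothetical.

```python
# Hedged sketch of building a landmark dataset from recorded sign videos,
# assuming MediaPipe Holistic for pose/hand landmark mapping and OpenCV for
# decoding; the project's actual tooling and file layout are not specified.
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic


def landmarks_to_array(landmark_list, count):
    """Flatten a landmark list to (count*3,) of x, y, z; zeros if not detected."""
    if landmark_list is None:
        return np.zeros(count * 3, dtype=np.float32)
    return np.array(
        [[lm.x, lm.y, lm.z] for lm in landmark_list.landmark], dtype=np.float32
    ).flatten()


def extract_sequence(video_path):
    """Return a (frames, 225) array of pose + hand landmarks for one sign clip."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    with mp_holistic.Holistic(static_image_mode=False) as holistic:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frames.append(np.concatenate([
                landmarks_to_array(results.pose_landmarks, 33),       # body pose
                landmarks_to_array(results.left_hand_landmarks, 21),  # left hand
                landmarks_to_array(results.right_hand_landmarks, 21), # right hand
            ]))
    cap.release()
    return np.stack(frames) if frames else np.empty((0, (33 + 21 + 21) * 3))


if __name__ == "__main__":
    sequence = extract_sequence("hello_sign.mp4")   # hypothetical sample clip
    np.save("hello_sign_landmarks.npy", sequence)   # one landmark file per sign
```

Saving one landmark array per recorded sign keeps the capture step cheap (no specialised motion-capture hardware is required) while still producing data suitable for training or for driving a 3D avatar.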
Discussion