- The web-based application captures the live video stream from the webcam on the user's laptop or phone. Each frame of this stream is preprocessed to isolate the relevant portion of the live feed.
- The preprocessed frames are fed into a CNN-based machine learning model, which makes a prediction based on the symbols the user forms in the video stream.
- The predictions are displayed and appended to the chat for the user.
- This cycle of reading the data, preprocessing the video stream, and making predictions on it is repeated each time the user wants to send a message.
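The capture, preprocess, and classify loop described above can be sketched as follows. The `SignClassifier` stub, the 64×64 grayscale input size, and the function names are hypothetical placeholders standing in for the actual CNN and web front end:

```python
import numpy as np

IMG_SIZE = 64  # hypothetical input resolution assumed for the CNN


def preprocess(frame: np.ndarray) -> np.ndarray:
    """Convert an RGB webcam frame to a normalized grayscale tensor."""
    gray = frame.mean(axis=2)                   # crude RGB -> grayscale
    h, w = gray.shape
    ys = np.arange(IMG_SIZE) * h // IMG_SIZE    # nearest-neighbour resize
    xs = np.arange(IMG_SIZE) * w // IMG_SIZE
    small = gray[np.ix_(ys, xs)]
    return small.astype(np.float32) / 255.0     # scale pixels to [0, 1]


def predict_symbol(model, frame: np.ndarray) -> str:
    """Run one preprocessed frame through the classifier."""
    x = preprocess(frame)[None, :, :, None]     # add batch and channel dims
    return model.predict(x)


class SignClassifier:
    """Stub standing in for the trained CNN."""
    labels = ["A", "B", "C"]

    def predict(self, x):
        return self.labels[int(x.mean() * len(self.labels)) % len(self.labels)]
```

In the deployed application, `SignClassifier` would be replaced by the trained model, and `predict_symbol` would be called once per captured frame while the user composes a message.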
- Blind users place their fingers over buttons positioned at locations convenient for them.
- They type Braille messages, which the app interprets and then sends to the receiver in both speech and sign-language form.
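One way to interpret the button presses is to map each combination of pressed dots to a character. The dot numbering (1-6) and the letter patterns below follow standard six-dot Braille, while the button interface itself is a hypothetical sketch:

```python
# Standard Braille dot patterns for the letters a-j (dots numbered 1-6)
BRAILLE = {
    frozenset({1}): "a",          frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",       frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",       frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g", frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",       frozenset({2, 4, 5}): "j",
}


def decode_cell(pressed: set) -> str:
    """Translate one set of pressed dot buttons into a character."""
    return BRAILLE.get(frozenset(pressed), "?")


def decode_message(cells: list) -> str:
    """Decode a sequence of Braille cells typed by the user."""
    return "".join(decode_cell(c) for c in cells)
```

For example, `decode_message([{1, 2, 5}, {1, 5}])` yields `"he"`; the decoded text could then be passed to the speech and sign-language rendering steps.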
Extracting gestures from their respective backgrounds has been a major difficulty: lighting conditions and background disturbances vary widely, and the gesture must be isolated from them before it can be run through the machine learning classifier.
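A common mitigation for varying backgrounds is simple background subtraction: threshold the absolute difference between the current frame and a reference background frame to isolate the hand region. This is a minimal NumPy sketch under that assumption, not the segmentation step the classifier actually uses:

```python
import numpy as np


def gesture_mask(frame: np.ndarray, background: np.ndarray,
                 threshold: float = 30.0) -> np.ndarray:
    """Return a boolean mask of pixels that differ from the background."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return diff.mean(axis=2) > threshold    # average difference over RGB


# Example: a bright hand-sized patch on a dark reference background
background = np.zeros((120, 160, 3), dtype=np.uint8)
frame = background.copy()
frame[40:80, 60:100] = 200                  # simulated gesture region
mask = gesture_mask(frame, background)      # True inside the patch only
```

In practice the reference frame drifts with lighting changes, which is one reason the varying conditions above make extraction hard; an adaptive background model would be the next refinement.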
Discussion