Paired

Connecting the disabled

The problem Paired solves

  • Disabled people face difficulty communicating
    with each other at a distance.
  • Current chat apps do not focus on connecting
    disabled people.
  • Many apps do not support communication in
    sign language.
  • Braille input is likewise missing from most
    chat apps.
  • Existing messaging apps are also not usable by
    disabled people who cannot read or write.

Features of Paired

  • The app recognizes American Sign Language
    gestures using a machine-learning model,
    letting deaf and mute users compose
    messages by signing.
  • Incoming messages are converted from text
    to American Sign Language so that deaf and
    mute users can follow them even if they
    cannot read (see the first sketch after this
    list).
  • Blind users can compose messages in Braille.
  • Incoming messages are also converted to
    speech so that blind users can hear them
    (see the second sketch after this list).
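
The text-to-sign feature can be illustrated with a minimal sketch: each letter of an incoming message is mapped to a fingerspelling image that the UI then plays back in sequence. The `assets/asl` directory and the per-letter file naming are assumptions for illustration, not the app's actual asset layout.

```python
# Minimal sketch: render an incoming text message as a sequence of
# American Sign Language fingerspelling images.
# The assets/asl/ directory and file naming are assumptions, not the
# app's real asset layout.
from pathlib import Path

ASL_ASSET_DIR = Path("assets/asl")  # hypothetical folder of per-letter sign images


def text_to_asl_frames(message: str) -> list[Path]:
    """Map each letter of the message to its fingerspelling image.

    Characters without a sign image (digits, punctuation) are skipped;
    spaces could be rendered as a short pause by the UI layer.
    """
    frames = []
    for ch in message.lower():
        if ch.isalpha():
            frames.append(ASL_ASSET_DIR / f"{ch}.png")
    return frames


if __name__ == "__main__":
    # The UI would display these images one after another for the recipient.
    for frame in text_to_asl_frames("Hello"):
        print(frame)
```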
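
The text-to-speech feature can be sketched in a similar way. The example below uses the pyttsx3 offline library as a stand-in; the actual web app would more likely rely on the browser's built-in speech synthesis.

```python
# Minimal sketch: read an incoming message aloud for a blind user.
# pyttsx3 is used here only as an illustrative stand-in for the web
# app's speech-synthesis layer.
import pyttsx3


def speak_message(message: str) -> None:
    """Convert the incoming text message to audible speech."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # comfortable speaking rate (words per minute)
    engine.say(message)
    engine.runAndWait()


if __name__ == "__main__":
    speak_message("You have a new message.")
```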

Modus Operandi

  1. The web-based application captures the video
    stream from the webcam on the user's laptop or
    phone and preprocesses it to isolate the region
    of interest.
  2. Each preprocessed frame is fed into a CNN-based
    machine-learning model, which predicts the sign
    the user is making in the video stream (see the
    first sketch after this list).
  3. The predictions are displayed and inserted into
    the chat for the user.
  4. This cycle of reading, preprocessing, and
    classifying the video stream runs every time the
    user wants to send a message.
  5. Blind users place their fingers over buttons
    positioned at locations convenient for them.
  6. They type their message in Braille, which the
    app interprets and delivers to the receiver in
    both speech and sign-language form (see the
    second sketch after this list).
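
Steps 1-4 could look roughly like the following sketch, which grabs webcam frames with OpenCV and classifies a cropped hand region with a Keras CNN. The model file name, crop coordinates, input size, and A-Z label set are assumptions for illustration.

```python
# Sketch of steps 1-4: grab webcam frames, preprocess a region of interest,
# and classify the hand sign with a CNN. The model file (asl_cnn.h5), crop
# region, input size, and label list are assumptions for illustration.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed A-Z classes
model = load_model("asl_cnn.h5")  # hypothetical trained CNN


def preprocess(frame: np.ndarray) -> np.ndarray:
    """Crop a fixed region of interest, resize, and normalize for the CNN."""
    roi = frame[100:300, 100:300]               # assumed hand region
    roi = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    roi = cv2.resize(roi, (64, 64))             # assumed model input size
    return roi.astype("float32").reshape(1, 64, 64, 1) / 255.0


def predict_letter(frame: np.ndarray) -> str:
    """Return the most likely sign for the current frame."""
    probs = model.predict(preprocess(frame), verbose=0)[0]
    return LABELS[int(np.argmax(probs))]


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)            # webcam on the laptop/phone
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        letter = predict_letter(frame)   # fed into the chat box in the real app
        cv2.putText(frame, letter, (30, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("Paired - sign input", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```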
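
Steps 5-6 can be sketched as a chord decoder that maps buttons pressed together to Braille dots and then to characters. The key-to-dot mapping (a Perkins-style layout) and the letter subset shown are assumptions; the real app places the buttons wherever is convenient on screen.

```python
# Sketch of steps 5-6: decode six-dot Braille chords typed on on-screen
# buttons into text. The key-to-dot mapping and button layout are assumed;
# only a subset of the Braille alphabet is shown.

# Standard Braille dot patterns (dots numbered 1-6), letters a-j.
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",
    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",
    frozenset({2, 4, 5}): "j",
}

# Assumed mapping from on-screen buttons to dot numbers (Perkins-style).
KEY_TO_DOT = {"f": 1, "d": 2, "s": 3, "j": 4, "k": 5, "l": 6}


def decode_chord(pressed_keys: set[str]) -> str:
    """Translate one chord (keys pressed together) into a character."""
    dots = frozenset(KEY_TO_DOT[k] for k in pressed_keys if k in KEY_TO_DOT)
    return BRAILLE_TO_CHAR.get(dots, "?")


def decode_message(chords: list[set[str]]) -> str:
    """Decode a sequence of chords into the outgoing text message."""
    return "".join(decode_chord(chord) for chord in chords)


if __name__ == "__main__":
    # "hi" = h (dots 1,2,5 -> f,d,k) followed by i (dots 2,4 -> d,j)
    print(decode_message([{"f", "d", "k"}, {"d", "j"}]))  # -> "hi"
```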

Challenges we ran into

Extracting gestures from their backgrounds has been a major difficulty: lighting conditions and background disturbances vary widely, and the gesture has to be isolated from them before it can be run through the machine-learning classifier.
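
A common way to reduce this sensitivity, sketched below under assumed threshold values, is to segment skin-colored pixels in the YCrCb color space, which separates luminance from chrominance and is therefore less affected by lighting changes than raw RGB.

```python
# One possible mitigation for the lighting/background problem: a skin-color
# mask in YCrCb plus morphological cleanup. The threshold bounds are rough
# assumptions and would need tuning per camera and environment.
import cv2
import numpy as np

# Commonly used rough bounds for skin tones in YCrCb (assumed, tune as needed).
LOWER_SKIN = np.array([0, 135, 85], dtype=np.uint8)
UPPER_SKIN = np.array([255, 180, 135], dtype=np.uint8)


def extract_hand(frame: np.ndarray) -> np.ndarray:
    """Return the frame with non-skin pixels suppressed."""
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)              # reduce sensor noise
    ycrcb = cv2.cvtColor(blurred, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, LOWER_SKIN, UPPER_SKIN)         # skin-color mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=2)
    return cv2.bitwise_and(frame, frame, mask=mask)           # keep only the hand
```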
