In today's pandemic world, where everything from schools and colleges to offices operates online, staring into screens around the clock not only aggravates health issues such as back pain, body pain, and eye strain, but also confines social interaction to the workplace, i.e. the computer screen, contributing to mental health issues. Recent studies have shown that the bluish glow of our screens also reduces the body's melatonin levels, which we need in order to sleep naturally, causing people to stay up later and have more difficulty falling asleep when they do go to bed.
Doctors often recommend taking breaks from looking at your screen to prevent eye strain, but not everyone is good at sticking to schedules on their own. Cutting back on device use might be the best advice, but unfortunately most of us are bound to our screens. Our goal is to remedy the problems we all face during this increased screen time.
We propose CYSA, short for Correcting Your Screen Addiction.
We planned to collect the user's behavioral symptoms in real time: the application senses how far the user is actually sitting from the screen, and its screen-time monitor reminds you to look at something 20 feet away for 20 seconds after every 20 minutes of screen-on time, in line with the 20-20-20 rule. The screen-to-face distance is measured in real time to prevent you from sitting too close to the display. Before turning on your video, it is crucial that you sit at one arm's distance from the screen, since that posture is taken as the calibration reference.
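The distance check above can be sketched as a simple calibration-and-ratio computation. This is a minimal illustration, assuming Mediapipe FaceMesh landmarks are available as normalized {x, y} points; the eye-corner landmark indices (33, 263) and the one-arm reference distance are illustrative assumptions, not the exact values our app uses.

```javascript
// Assumed "one arm" calibration distance in centimetres.
const REFERENCE_CM = 60;

// Pixel distance between the two outer eye corners
// (FaceMesh indices 33 and 263, assumed here).
function eyeSpanPx(landmarks, frameWidth) {
  const left = landmarks[33];
  const right = landmarks[263];
  return Math.abs(right.x - left.x) * frameWidth;
}

// The apparent eye span shrinks roughly in inverse proportion to distance,
// so distance ≈ reference_distance * (span_at_reference / span_now).
function estimateDistanceCm(currentSpanPx, referenceSpanPx) {
  return REFERENCE_CM * (referenceSpanPx / currentSpanPx);
}
```

If the user moves closer, the eye span in pixels grows and the estimated distance drops, which is when the app can warn them to move back.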
Based on these measurements, the application generates alarms and reminders to nudge the user back to a safe distance so the eyes get some relief.
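The 20-20-20 reminder logic can be sketched independently of any UI. The timings are the rule's own (20 minutes of screen time, then a 20-second break); the state shape and function names are illustrative assumptions.

```javascript
const SCREEN_LIMIT_MS = 20 * 60 * 1000; // 20 minutes of screen-on time
const BREAK_MS = 20 * 1000;             // 20-second break

function createReminder(now = 0) {
  return { lastBreakEnd: now, onBreakUntil: null };
}

// Called periodically with the current time in ms; returns "remind" when a
// break is due, "break" while the break is running, and "ok" otherwise.
function tick(state, now) {
  if (state.onBreakUntil !== null) {
    if (now >= state.onBreakUntil) {
      state.onBreakUntil = null;
      state.lastBreakEnd = now;
      return "ok";
    }
    return "break";
  }
  if (now - state.lastBreakEnd >= SCREEN_LIMIT_MS) {
    state.onBreakUntil = now + BREAK_MS;
    return "remind";
  }
  return "ok";
}
```

In the real app, "remind" would trigger the alarm or notification, and the timer resets once the 20-second break has elapsed.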
Moreover, our project also addresses the following:
If you have the reckless habit of biting your nails, our app will nudge you whenever nail biting is observed in front of the camera: if it is detected, the app displays "detected"; otherwise it displays "not detected".
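A minimal sketch of the nail-biting check is a proximity test between the fingertips and the mouth. The Mediapipe Hands fingertip indices (4, 8, 12, 16, 20) and the distance threshold here are illustrative assumptions; the real application uses a trained model rather than this bare distance rule.

```javascript
// Mediapipe Hands fingertip landmark indices (assumed).
const FINGERTIPS = [4, 8, 12, 16, 20];
const NEAR_MOUTH = 0.05; // threshold in normalized coordinates (assumed)

// Returns the same strings the app displays: "detected" / "not detected".
function nailBitingStatus(handLandmarks, mouthCenter) {
  const near = FINGERTIPS.some((i) => {
    const tip = handLandmarks[i];
    const d = Math.hypot(tip.x - mouthCenter.x, tip.y - mouthCenter.y);
    return d < NEAR_MOUTH;
  });
  return near ? "detected" : "not detected";
}
```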
The problems we faced were mainly on the machine learning side: we tried different algorithms and worked through multiple errors before arriving at a working model. The Hands and FaceMesh models of Mediapipe, an open-source framework from Google and the backbone of our application, were also intricate to integrate with the rest of the app.
The nail-biting feature: it was difficult to make the application accurate and fast enough to notify the user the moment nail biting occurs. We struggled to increase the model's accuracy, although the application is much more accurate now.
Data retrieval and confidentiality: we had to feed the captured data to our ML model while keeping it fully confidential.
Connecting all the technologies used in our application, such as JavaScript, CSS, HTML, web development, TensorFlow, and the machine learning pipeline, was difficult.
Essentially, using OpenCV we extracted the fingertip landmark points and then trained our ML model on them.
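Before training, the landmark points have to be turned into a flat numeric feature vector. This is a hedged sketch of that preprocessing step under the assumption that landmarks arrive as an array of normalized {x, y} points; our actual pipeline's preprocessing may differ.

```javascript
// Flatten [{x, y}, ...] into [x0, y0, x1, y1, ...] so the points can be fed
// to a model as a single fixed-length feature vector.
function landmarksToFeatures(landmarks) {
  return landmarks.flatMap((p) => [p.x, p.y]);
}
```

For a 21-point hand this yields a 42-element vector, which is a common input shape for small gesture classifiers.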
Deploying the dark-mode feature: since we are all accustomed to dark modes these days, we also added a dark theme to our app to improve the user experience. Getting it to look and feel right took time and effort.
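A dark mode like the one above typically boils down to toggling a CSS class on the page root. This is a minimal sketch; the class name "dark-theme" and the function names are illustrative assumptions, not our app's actual identifiers.

```javascript
// Pure theme-state toggle, kept separate from the DOM so it is easy to test.
function toggleTheme(current) {
  return current === "dark" ? "light" : "dark";
}

// Applies the theme by toggling a class that the dark-theme CSS keys off.
function applyTheme(theme, root = document.documentElement) {
  root.classList.toggle("dark-theme", theme === "dark");
}
```

The CSS side would then scope dark colors under `.dark-theme`, e.g. via custom properties, so switching themes is a single class flip.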
Discussion