CogniSense is a project dedicated to supporting autistic children and their families. The platform aims to enhance the overall well-being of autistic children by providing a range of tools and features focused on their cognitive development, emotional well-being, and support for parents.
CogniSense integrates various features, including autism testing, daily comprehension questions, focus tracking, emotional monitoring through computer vision, and supportive chatbots powered by natural language processing (NLP).
Features
Autism Testing
CogniSense offers a dedicated page for testing autism, utilizing a random forest classifier trained on various parameters. This testing module provides valuable insights and aids in early detection, allowing for timely intervention and support.
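A classifier of this kind can be sketched with scikit-learn's random forest. The feature matrix below is a randomly generated placeholder standing in for questionnaire-style parameters; the real model would be trained on the project's actual screening data.

```python
# Illustrative sketch of the screening classifier: a scikit-learn random
# forest trained on binary questionnaire-style parameters.
# The data here is a random placeholder, NOT the project's training set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 10))  # rows = children, cols = screening questions
y = rng.integers(0, 2, size=100)        # placeholder labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

prediction = clf.predict(X[:1])  # screen one response vector
```

The forest's per-tree voting also gives class probabilities (`clf.predict_proba`), which is useful when the result is presented as a risk indicator rather than a hard yes/no.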
Grow
The Grow page generates daily comprehension questions to improve a child's focus and cognitive abilities. The platform encourages consistency with a streak feature, motivating children to engage in daily cognitive exercises.
Streak
The streak feature is designed to gamify the learning experience, encouraging children to maintain a consistent routine of daily comprehension questions. This helps in fostering a habit of regular engagement with cognitive exercises.
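The core streak logic reduces to a small date comparison. A minimal sketch, assuming the streak extends on consecutive days and resets after a missed day (function and parameter names are illustrative):

```python
# Minimal streak update rule: extend on consecutive days, reset otherwise.
from datetime import date, timedelta
from typing import Optional

def update_streak(last_active: Optional[date], streak: int, today: date) -> int:
    """Return the new streak count after the child completes today's questions."""
    if last_active == today:
        return streak                         # already counted today
    if last_active == today - timedelta(days=1):
        return streak + 1                     # consecutive day: extend the streak
    return 1                                  # first activity or a missed day: restart
```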
Focus and Emotion Tracking
During the comprehension quiz, CogniSense uses computer vision to track how frequently the child appears focused and which emotional expressions they show. This data helps gauge the child's engagement and emotional well-being during learning activities, and in turn is extremely useful for progress reporting and for professional caregivers.
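One way the per-frame model outputs might be turned into the frequencies used in a progress report is a simple aggregation step; the label names below are illustrative, not the model's actual classes:

```python
# Sketch: aggregate per-frame focus/emotion labels from the vision model
# into relative frequencies for a session report. Labels are illustrative.
from collections import Counter

def summarize_session(frame_labels: list[str]) -> dict[str, float]:
    """Map each observed label to its fraction of the session's frames."""
    counts = Counter(frame_labels)
    total = len(frame_labels)
    return {label: count / total for label, count in counts.items()}
```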
Help Bots
CogniSense incorporates two chatbots powered by NLP to support autistic children and their parents. One bot serves as a companion for the child, offering assistance whenever they feel low. The other bot is dedicated to assisting parents, offering guidance and resources to help them navigate the challenges of raising an autistic child.
We ran into challenges integrating computer vision with our portal. We had built the vision components as standalone apps, and adding interactivity driven by browser-based controls was difficult; we overcame this with websockets. Once we figured out websockets and established a connection, we faced problems with data format and transfer: the API would receive base64-encoded data that had to be converted into a Pillow image, so we stored it in a byte buffer and converted from there. The result then had to be re-encoded as base64 before it could be sent back, and converting the byte string into an encoded string was a task in itself. We also faced problems selecting a model that would accurately determine the parameters for focus and retention, and a lot of research went into getting as close as possible to formal clinical processes.
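The base64-to-Pillow round trip described above can be sketched as two small helpers, assuming the frame arrives over the websocket as a base64 string (function names here are illustrative):

```python
# Sketch of the websocket frame pipeline: base64 string -> byte buffer ->
# Pillow image, then back to a base64 string for the response.
import base64
import io
from PIL import Image

def decode_frame(b64_string: str) -> Image.Image:
    """Decode a base64-encoded frame into a Pillow image via a byte buffer."""
    raw = base64.b64decode(b64_string)
    return Image.open(io.BytesIO(raw))

def encode_frame(img: Image.Image, fmt: str = "PNG") -> str:
    """Re-encode a Pillow image as a base64 string for sending back."""
    buf = io.BytesIO()
    img.save(buf, format=fmt)
    # b64encode returns bytes; .decode turns the byte string into a plain str
    return base64.b64encode(buf.getvalue()).decode("ascii")
```

The `.decode("ascii")` at the end is the step that converts the byte string into a plain encoded string, which was the last conversion hurdle mentioned above.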
Discussion