According to a WHO study conducted for the National Care Of Medical Health (NCMH), at least 6.5 per cent of the Indian population suffers from some form of serious mental disorder. The same report notes an extreme shortage of mental health workers, as few as "one per 100,000 people" (as of 2014). The average suicide rate in India is 10.9 per lakh (100,000) people, and the majority of those who die by suicide are below 44 years of age.
Given this shortage, we intend to reduce the drastic effects of such disorders by bringing effective music therapy directly to consumers. There have been several studies on modulating mood and depression using auditory stimulation: participants were administered several modes of music therapy (binaural beats being the most widely used), and positive effects were documented.
Inspired by such research, and by the growing popularity of virtual reality headsets in India, we intend to take music therapy a step further by creating immersive visual environments that let users escape to another world while administering binaural beats that help modulate mood.
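For illustration, a binaural beat is simply two pure tones, one per ear, detuned by the desired beat frequency; the listener perceives a "beat" at the difference between the two. The sketch below generates one second of such stereo samples in plain Python (the carrier and beat frequencies are hypothetical choices for illustration, not values taken from the studies above):

```python
import math

SAMPLE_RATE = 44100   # samples per second (CD-quality audio)
BASE_FREQ = 220.0     # hypothetical left-ear carrier tone, Hz
BEAT_FREQ = 10.0      # hypothetical perceived beat frequency, Hz

def binaural_samples(duration_s, sample_rate=SAMPLE_RATE):
    """Yield (left, right) sample pairs in [-1, 1].

    The right-ear tone is detuned by BEAT_FREQ, so the brain
    perceives a beat at BEAT_FREQ Hz.
    """
    right_freq = BASE_FREQ + BEAT_FREQ
    for n in range(int(duration_s * sample_rate)):
        t = n / sample_rate
        left = math.sin(2 * math.pi * BASE_FREQ * t)
        right = math.sin(2 * math.pi * right_freq * t)
        yield left, right

# One second of audio yields SAMPLE_RATE stereo pairs.
pairs = list(binaural_samples(1.0))
```

In a real application these samples would be written to a stereo audio buffer; the point here is only that the left and right channels differ by a fixed, small frequency offset.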
Mental disorders are not widely talked about, and people hesitate to accept them and approach professionals for help, for fear of the social stigma surrounding them. Moved by the suffering of a few of our friends who couldn't access professionals because of that fear, we feel responsible for using technology to at least begin reducing the suffering to some extent; hence this idea.
Our preliminary idea is to develop high-quality virtual environments suitable for current mobile GPUs that let the user experience completely different worlds, with ambience specifically crafted to calm various centers of the brain and reduce depressive mood aggravations. The design of the environments is based on research-backed mood-modulation techniques.
We ran into a number of challenges while building this project.
Running the application on a Google Pixel exposed the main problem with deploying high-end graphics on Android devices: the number of requests the application makes to the GPU to render each frame. After some searching online, we identified the following causes. The scene we developed is, simply put, intensive: it has a lot of vertices (and polygons) for the GPU to render, and lighting and shadows take a great deal of computation. Imagine a highly detailed object (~1,000 vertices) in the scene. When the player views it from a considerable distance, they don't see the details, just the overall shape and some texture, yet Unity still renders the full mesh, textures, and lighting as if the object were being viewed up close. That wasted work consumes GPU resources.
Solutions we found to the problems faced in the scene:
Occlusion Culling: don't render what you don't see. The image below shows the occlusion map generated for the current scene. We designed our scene to maximize occlusion.
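Unity's occlusion system handles this internally, but the core idea can be sketched as a precomputed potentially-visible-set (PVS) lookup: offline, the scene is divided into cells and the set of objects visible from each cell is stored; at runtime only that set is submitted to the GPU. Everything in this toy sketch (cell names, object names) is hypothetical:

```python
# Hypothetical precomputed visibility data: for each camera cell,
# the set of objects that can possibly be seen from inside it.
PVS = {
    "corridor": {"door", "torch"},               # walls occlude the rest
    "courtyard": {"door", "tree", "fountain"},   # open area sees more
}

def objects_to_render(camera_cell):
    """Return only the objects potentially visible from this cell;
    everything else is culled before it ever reaches the GPU."""
    return PVS.get(camera_cell, set())
```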
LOD (Level of Detail): Unity supports a mechanism to divide a mesh into several progressively simpler meshes. The farther the renderer (the camera) is from the mesh, the simpler the mesh used (fewer vertices, hence less computation).
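The distance-based selection behind LOD can be sketched as follows; the mesh names and distance thresholds here are made up for illustration, not values from our scene:

```python
# Hypothetical LOD table: (maximum camera distance, mesh variant),
# ordered from most to least detailed.
LOD_LEVELS = [
    (10.0, "statue_high"),          # full detail, used up close
    (30.0, "statue_medium"),        # reduced vertex count
    (float("inf"), "statue_low"),   # rough silhouette, at a distance
]

def select_lod(camera_distance):
    """Pick the simplest mesh acceptable at this distance."""
    for max_distance, mesh in LOD_LEVELS:
        if camera_distance <= max_distance:
            return mesh
```

The engine performs this check per object per frame; the saving comes from the distant variants having far fewer vertices for the GPU to transform.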
Static/Dynamic Batching: similar objects are drawn on the screen together, ensuring their textures, materials, etc. are loaded only once.
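Conceptually, batching groups draw calls by shared material so the material state is bound once per group instead of once per object. A toy sketch (object and material names are hypothetical):

```python
from collections import defaultdict

# Hypothetical scene: (object name, material) pairs as the renderer sees them.
scene = [("rock1", "stone"), ("tree1", "bark"), ("rock2", "stone")]

def batch_by_material(objects):
    """Group objects sharing a material into one batch each,
    so each material is set up once per frame, not per object."""
    batches = defaultdict(list)
    for name, material in objects:
        batches[material].append(name)
    return dict(batches)
```

Three objects but only two materials means two state changes instead of three; in a real scene with many repeated props, the reduction is much larger.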
Baked lighting: Unity's system for pre-calculating the lighting in the scene and baking (merging) the lights and shadows into the very texture of the mesh under consideration.
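The trade-off is doing the lighting arithmetic once, offline, instead of every frame. A toy sketch with made-up texel and light values (real baking involves full global illumination, not a single multiply):

```python
# Hypothetical texture data: per-texel surface colour (albedo) and a
# static light intensity that never changes at runtime.
ALBEDO = [0.25, 0.5, 1.0]
LIGHT = 0.5

def bake_lightmap(albedo, light):
    """Pre-multiply the static lighting into the texture, once, offline."""
    return [texel * light for texel in albedo]

baked = bake_lightmap(ALBEDO, LIGHT)
# At runtime, each frame simply samples `baked`; no lighting math is done.
```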
Technologies used
Discussion