When we first installed and used Snapchat, we were amazed by the vast number of lenses made and consumed every day. There were all sorts of lenses, ranging from those using materials and textures to those using Connected Lens features. But we noticed a missing piece in the puzzle: there was no lens that harnessed the power of Machine Learning and integrated it with a game, so that the end user would not only break away from the infinite lens scrollbar but also pick up some knowledge while using a lens with a beautiful UI and UX. With these thoughts, we started making ML PAT: The OG Lens. Read the following points to see what it does:
1️⃣ Whenever a user opens the lens, an introductory landing UI pops up with two buttons: Next and Skip. Next takes the user to the instructions page, whereas Skip takes the user straight to the game.
2️⃣ The instructions page tells the user about the game and familiarizes them with the scoring pattern, which is as follows:
a. +3 points are awarded for each correct answer if the user has not lost any lives.
b. One point is deducted from that +3 for each life lost: a correct answer after losing one life earns +2, and after losing two lives, +1.
c. 1 point is deducted from the session score for each wrong answer.
3️⃣ If the Skip button is pressed, the instructions page is bypassed and the user enters the game directly.
4️⃣ On the initial load, the ML section pops up with a prompt such as 👀 Cat. The eyes indicate that the lens expects the user to point the camera at a particular class of object, out of the 7 classes (car, cup, cat, dog, plant, tv, bottle) that the tflite model can detect.
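The scoring pattern described above can be sketched as a small function. This is an illustrative sketch, not the actual lens code; the function name `computeScoreDelta` and its parameters are our own invention, and the rules only cover the cases the scoring pattern describes (up to two lives lost):

```javascript
// Hypothetical sketch of the scoring rules: +3 for a correct answer with
// all lives intact, one point less per life lost, and -1 for a wrong answer.
function computeScoreDelta(isCorrect, livesLost) {
    if (!isCorrect) {
        return -1; // each wrong answer deducts 1 from the session score
    }
    // +3 with no lives lost, +2 after one life lost, +1 after two
    return Math.max(3 - livesLost, 1);
}
```

For example, `computeScoreDelta(true, 1)` returns +2, matching rule (b) above.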
We guess everyone reading this paragraph will be startled. After we finished the PAT section, we thought of adding persistent cloud storage to the game, and on its own it worked fine. But when we tried to integrate the ML model alongside it, the lens threw an error: the ML model and persistent cloud storage wouldn't work together, as both are resource-intensive tasks and running them at the same time was not possible. (Through persistent storage, we made the game remember the PAT questions the user had already answered, and the user saw two scores while playing: a Session score and a Global score.) The session score covers the current game session, while the global score reflects the overall total accumulated across all sessions of using the lens. The fact that the game remembered the scores and the questions visited made it a real game-changer.
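The session/global score split described above can be sketched in plain JavaScript. In the actual lens this was backed by Lens Studio's persistent cloud storage; here a hypothetical `createScoreTracker` factory stands in, with the previously stored global total passed in as a plain number:

```javascript
// Sketch of the session vs. global score split (names are illustrative).
// storedGlobalScore would come from persistent cloud storage in the lens.
function createScoreTracker(storedGlobalScore) {
    var sessionScore = 0;
    return {
        // apply a scoring delta (e.g. +3 for a correct answer, -1 for a wrong one)
        addPoints: function (delta) {
            sessionScore += delta;
        },
        // score for the current game session only
        getSessionScore: function () {
            return sessionScore;
        },
        // overall score accumulated across sessions
        getGlobalScore: function () {
            return storedGlobalScore + sessionScore;
        }
    };
}
```

On session end, the lens would write `getGlobalScore()` back to persistent storage so the next session starts from the updated total.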
That storage-backed version works absolutely fine and adds a new dimension to the lens. The reason we made another lens is that the persistent cloud storage lens had the following shortcomings:
a. Each user has dedicated cloud storage per lens with a maximum capacity of 3072 bytes, which is why our game could only store 10 strings of data points, constraining us to a smaller number of questions.
b. Moreover, we couldn't use the ML model alongside persistent cloud storage.
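The 3072-byte budget in shortcoming (a) can be illustrated with a quick check. This is a rough sketch with made-up names (`fitsInStorage` is not part of the lens); it uses Node's `Buffer.byteLength` to count UTF-8 bytes, which is not available inside Lens Studio itself:

```javascript
// Per-user, per-lens persistent cloud storage budget, as described above.
const STORAGE_LIMIT_BYTES = 3072;

// Check whether a set of answered-question strings, serialized as JSON,
// would fit within the storage budget (hypothetical helper).
function fitsInStorage(strings) {
    const payload = JSON.stringify(strings);
    return Buffer.byteLength(payload, "utf8") <= STORAGE_LIMIT_BYTES;
}
```

A handful of short strings fits easily, but hundreds of data points quickly blow past the limit, which is what pushed the game down to just 10 stored strings.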
When our team saw the themes, we thought of making a project under an AI/ML theme, but none was available. So we decided to build it under the Miscellaneous theme, since that theme generally accommodates all ideas. In the checkpoint 1 meeting, we weren't told anything related to the theme, and we think we have done a great job pulling off an educative ML lens in a day's time.
Technologies used
Discussion