People are often forced to read through a large paragraph just to extract one small piece of information, wasting a lot of time and energy in the process. We wanted to solve this once and for all, so we built a project that not only shortens the user's text but also paraphrases it, giving them a more gentle-sounding version of what they wrote: a win-win for everyone.
The project was full of challenges, from the backend, where it took us nearly a whole day to get our machine learning model into the Docker container, to the UI, which was baffling at first and ended up taking most of our time.
We ran into several significant challenges while building the API that serves the ML model. We initially wanted a dockerized, self-hosted version of the model, but because of its large download and upload size and slower inference speed, we switched to the hosted version of the model on Huggingface.co. This made the API fast, but at the cost of less control.
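For reference, calling a hosted model through the Hugging Face Inference API only takes a few lines. The sketch below is a minimal example rather than our exact code: the model ID, the HF_API_TOKEN environment variable, and the shorten helper are placeholders we assume for illustration, and we assume a summarization-style model that returns a summary_text field.

```python
import os
import requests

# Placeholder model ID; the write-up does not name the exact model we used,
# so a generic summarization model stands in here.
MODEL_ID = "facebook/bart-large-cnn"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"
HEADERS = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}


def shorten(text: str) -> str:
    """Send text to the hosted model and return the generated summary."""
    response = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
    response.raise_for_status()
    # Summarization models on the Inference API return a list of outputs,
    # each with a "summary_text" field.
    return response.json()[0]["summary_text"]


if __name__ == "__main__":
    print(shorten("Paste a long paragraph here to get a shortened version back."))
```

Using the hosted endpoint keeps the service lightweight, since the model weights never have to be downloaded or pushed into a container, which is exactly the trade-off described above.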
Technologies used
Discussion