The reader may be wondering what kind of problem the model we created actually solves.
Let us explain where its solution applies. One of the biggest sources of trouble is misunderstanding: it is strong enough to ruin even a very happy atmosphere. People post their thoughts on social media with some intention, which may be positive or negative, and there is no issue with that in itself. The problem arises when a person takes that emotion or thought the wrong way and gets offended. Our model analyses such posts and predicts, with reasonable accuracy, the thought that might actually lie behind the post. With this, we take a step toward correcting the possibly mistaken impression we form of the person who made the post.
Data Cleaning: The input data contained a huge number of mistyped and misspelled words. It was a challenge for us to remove this kind of noise from the dataset and convert it into a form that our model could understand properly and from which it could produce meaningful results. A sketch of this cleaning step is shown below.
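The paper does not list the exact cleaning rules, so the following is a minimal illustrative sketch of the kind of normalization typically applied to noisy social-media text (lowercasing, stripping URLs and non-letter characters, collapsing whitespace); spell correction, if used, could be layered on top with a library such as TextBlob.

```python
import re

def clean_text(text: str) -> str:
    """Normalize a raw social-media post into plain lowercase words."""
    text = text.lower()
    text = re.sub(r"http\S+|www\.\S+", " ", text)   # drop URLs
    text = re.sub(r"[^a-z\s]", " ", text)           # keep letters only
    text = re.sub(r"\s+", " ", text).strip()        # collapse whitespace
    return text

print(clean_text("Ths movie is AWSOME!! see http://example.com :)"))
# -> "ths movie is awsome see"
```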
Large-sized Source files: The dataset we used to train our model is a large volume of real data. Once the model was trained, it generated a few source files of approximately 1.2 GB each. The challenge here was to reduce the size of these source files so that the model could be deployed on a free web server, as sketched below.
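The original text does not say how the artifacts were shrunk; one common approach, shown here only as an assumption, is to serialize the trained pipeline with joblib's built-in compression, which often reduces large pickled files considerably. The toy texts, labels, and file name are placeholders, not the real training data.

```python
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for the real training data.
texts = ["i love this", "this is awful", "what a great day", "i hate waiting"]
labels = ["positive", "negative", "positive", "negative"]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# compress=3 applies zlib compression to the pickled artifact,
# which can shrink large model files before deployment.
joblib.dump(pipeline, "model.joblib", compress=3)

loaded = joblib.load("model.joblib")
print(loaded.predict(["what a great movie"]))
```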
The next challenge we faced was to convert the input data into categorical form, so that the result could be interpreted easily and passed to the front end of the website.
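As an illustrative sketch of that step (the actual label names and encoding scheme used by the model are assumptions), scikit-learn's LabelEncoder can map text categories to integers for training and back to human-readable labels for the website's response:

```python
from sklearn.preprocessing import LabelEncoder

# Encode sentiment categories as integers for the model.
encoder = LabelEncoder()
y = encoder.fit_transform(["positive", "negative", "neutral", "positive"])
print(y)                          # [2 0 1 2]

# Map a numeric prediction back to its label before sending it
# to the front end as a JSON-ready payload.
predicted_class = 2               # hypothetical model output
label = encoder.inverse_transform([predicted_class])[0]
response = {"sentiment": label}
print(response)                   # {'sentiment': 'positive'}
```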
Discussion