Project Veracity

Confirm the veracity of what you are reading in real time.

The problem Project Veracity solves

We at ProjectVeracity intend to help users make informed decisions by exposing them to opinions from the other side. The project is aimed at researchers who perform a great deal of analysis and need the most accurate information possible at their disposal. It can also be an invaluable tool for casual readers who simply wish to be well informed about what they read online. True to our name, ProjectVeracity helps users gauge the veracity of online content instantly and efficiently, and push back against echo chambers.

It is important to add that ProjectVeracity doesn't highlight what is "right" in an article, simply because history has shown that information once deemed true or accurate has changed over time.

Challenges we ran into

Obstacle: By what metrics can you tell that an article is misleading?

Approach: We realized that relying on a single machine learning model might not be the best idea. An article can consist entirely of factual information yet be phrased to evoke the completely opposite emotion; such an article would score well for veracity while still achieving its ulterior motive.

To combat this, we decided to stack several machine learning models together and created a processing block that takes a weighted average of their outputs across different areas, including the following (see the sketch after the list):

Text Analytics.
a. Sentiment Analysis.
b. Misleading Feature Set.
c. Entity Highlighter.

Cross-reference Database.
a. Trusted Publications.
b. Retracted Research.

The resulting model will be more robust to edge cases.
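A minimal sketch of this weighted-average processing block, assuming each component model returns a score between 0 and 1 (higher meaning more likely misleading). The scorer functions, component names, and weights below are illustrative placeholders, not the models actually used by ProjectVeracity:

```python
from typing import Callable, Dict, Tuple

Scorer = Callable[[str], float]

def sentiment_score(text: str) -> float:
    # Placeholder for the sentiment-analysis model.
    return 0.0

def misleading_features_score(text: str) -> float:
    # Placeholder for the misleading-feature-set model.
    return 0.0

def entity_score(text: str) -> float:
    # Placeholder for the entity highlighter.
    return 0.0

def cross_reference_score(text: str) -> float:
    # Placeholder for lookups against trusted publications and retracted research.
    return 0.0

# Weights are arbitrary illustrative values; a real system would tune them.
COMPONENTS: Dict[str, Tuple[Scorer, float]] = {
    "sentiment": (sentiment_score, 0.25),
    "misleading_features": (misleading_features_score, 0.35),
    "entities": (entity_score, 0.15),
    "cross_reference": (cross_reference_score, 0.25),
}

def misleading_score(text: str) -> float:
    """Weighted average of all component scores for one article."""
    weighted_sum = sum(fn(text) * weight for fn, weight in COMPONENTS.values())
    total_weight = sum(weight for _, weight in COMPONENTS.values())
    return weighted_sum / total_weight
```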

Obstacle: How do you verify the information in an article, especially one that combines multiple disciplines?

Approach: We came up with the idea of modular branches. At an atomic level, each knowledge field may have its own processing block (as explained above).

An article fed to the system is analysed for meta-tags, categorized appropriately, and sent to the corresponding processing block. This has the added benefit of reducing the overall bandwidth used by the system, as only the required branches are utilized at any time.

In practice, this means an article could come from the field of technology, fitness, or perhaps a combination of both. Because the processing blocks are abstracted, misleading terms from both fields will be highlighted, and the article's misleading score will be affected accordingly.
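As a rough illustration of the routing idea, the sketch below assumes each knowledge field registers its own branch (a processing block like the one above) and that an article's meta-tags name the fields it belongs to. The field names, registry helpers, and lambda branches are hypothetical:

```python
from typing import Callable, Dict, List

# A branch takes the article text and returns that field's misleading score.
Branch = Callable[[str], float]

BRANCHES: Dict[str, Branch] = {}

def register_branch(field: str, branch: Branch) -> None:
    # Adding a new knowledge field later is just another register_branch call,
    # which is what lets the system start small and grow over time.
    BRANCHES[field] = branch

def route_article(text: str, meta_tags: List[str]) -> Dict[str, float]:
    # Only the branches named by the article's meta-tags are invoked, so
    # unused branches cost nothing for this article.
    return {tag: BRANCHES[tag](text) for tag in meta_tags if tag in BRANCHES}

# Example: placeholder branches for two fields; an article tagged with both
# is scored by both, and the per-field results can feed the overall score.
register_branch("technology", lambda text: 0.2)
register_branch("fitness", lambda text: 0.6)
print(route_article("Example article combining tech and fitness...", ["technology", "fitness"]))
```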

Because more processing blocks can be added in the future, the system can start at a very small scale and be expanded in functionality over time, requiring substantially little capital to launch and improve upon.

Technologies used
