Unveil

Unveil detects bias in news articles, highlights loaded words, and suggests neutral alternatives using ML and NLP. It supports transparent, fact-based reporting by analyzing sentiment and political framing.

Created on 16th March 2025


The problem Unveil solves

  1. Verify News Bias Instantly
    Readers can analyze any news article to check for political bias, making it easier to spot manipulated narratives.

  2. Compare Different Perspectives
    Unveil finds similar articles from various sources, helping users see all sides of a story instead of just one viewpoint.

  3. Rewrite Biased Articles into Neutral Ones
    AI-powered rewriting transforms biased content into neutral versions, ensuring fact-based reporting without loaded language.

  4. Quick & Transparent Bias Detection
    Unlike traditional fact-checkers, Unveil highlights the specific biased words and explains why an article may be biased, making analysis faster and clearer.

  5. Safer & Smarter Decision-Making
    For journalists, researchers, and policymakers, Unveil provides a trusted tool to evaluate media credibility before using news for reports or policies.

Challenges we ran into

Major Hurdle: Handling Subtle Bias in News Articles
One big challenge we faced was that bias isn't always obvious: it is often hidden in framing, word choice, and tone rather than in specific keywords. Our initial Logistic Regression model relied heavily on TF-IDF features, and it failed to detect subtle bias like:

"The government introduced a new policy." (neutral)
❌ "The corrupt government pushed another policy." (biased, but TF-IDF alone couldn't catch it)
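The gap is easy to see in a bag-of-words view of the two example sentences. A minimal sketch (plain Python, standard library only): TF-IDF only sees which tokens differ, so the loaded words are just rare tokens with no inherent "framing" signal unless the training data happens to cover them.

```python
from collections import Counter

# The two example sentences, lowercased as a TF-IDF tokenizer would see them.
neutral = "the government introduced a new policy"
biased = "the corrupt government pushed another policy"

neutral_toks = set(neutral.split())
biased_toks = set(biased.split())

# Most tokens are shared; the bias lives entirely in a few extra words.
shared = neutral_toks & biased_toks       # {'the', 'government', 'policy'}
only_biased = biased_toks - neutral_toks  # {'corrupt', 'pushed', 'another'}

# To a bag-of-words model, 'corrupt' is just another sparse feature with no
# connection to tone or framing, which is why context-aware features help.
print(shared, only_biased)
```

Word counts (`Counter`) would behave the same way: frequency and inverse document frequency carry no notion of emotional charge, which motivated the sentiment and embedding features below.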
How We Fixed It:
✅ Added Sentiment Analysis: Integrated VADER sentiment scores to catch emotionally charged language.
✅ Word Embeddings (Word2Vec): Instead of just TF-IDF, we trained Word2Vec to understand the context of words.
✅ Manual Dataset Labeling: We curated real biased articles (OpIndia, etc.) to train the model on real-world political bias patterns.
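The three fixes can be sketched as a single feature pipeline feeding Logistic Regression. This is a hedged illustration, not the project's actual code: the tiny word lists below stand in for VADER's full lexicon, the TF-IDF vectorizer stands in for the trained Word2Vec features, and the `SentimentFeature` class, word lists, and mini training set are all hypothetical.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

# Tiny stand-in lexicon; a real system would use VADER's scored lexicon.
NEG_WORDS = {"corrupt", "pushed", "disastrous", "regime"}
POS_WORDS = {"praised", "successful", "landmark"}

class SentimentFeature(BaseEstimator, TransformerMixin):
    """Emits one numeric column per document: positive hits minus negative hits."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        scores = []
        for doc in X:
            toks = doc.lower().split()
            scores.append(sum(t in POS_WORDS for t in toks)
                          - sum(t in NEG_WORDS for t in toks))
        return np.array(scores, dtype=float).reshape(-1, 1)

# Lexical features (TF-IDF) side by side with the sentiment signal.
model = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer()),
        ("sentiment", SentimentFeature()),
    ])),
    ("clf", LogisticRegression()),
])

# Hypothetical mini training set (0 = neutral, 1 = biased).
docs = [
    "The government introduced a new policy.",
    "The corrupt government pushed another policy.",
    "Officials announced the budget today.",
    "The disastrous regime pushed another landmark failure.",
]
labels = [0, 1, 0, 1]
model.fit(docs, labels)
```

The design point is that the classifier now sees an explicit emotional-charge column alongside the lexical features, so "corrupt" and "pushed" contribute signal even when those exact tokens are rare in training data; swapping the TF-IDF step for averaged Word2Vec vectors adds the contextual generalization described above.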

Result: the improved model detects subtle bias beyond simple keywords, making it far more accurate and reliable.
