DeceptiScan
Detect Deceptive Patterns with AI
The problem DeceptiScan solves
This project aims to detect and highlight deceptive design patterns (dark patterns) on websites. Dark patterns are UI/UX strategies used to manipulate users into making unintended decisions, such as forced subscriptions, hidden costs, misleading urgency, or FOMO tactics.
The Problem:
Many websites use psychological tricks to increase conversions at the cost of user trust and autonomy. Users often don’t realize they are being manipulated until it’s too late.
The Solution:
This project automatically scans web pages, detects potential dark patterns using a deep learning model, and visually highlights them. The AI classifies deceptive elements with confidence scores, using color-coded warnings (green for low certainty, red for high certainty) to help users identify manipulative tactics at a glance.
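The scan-classify-score loop described above can be sketched in a few lines. The real project uses a deep learning model; in this hypothetical simplification, a keyword heuristic for urgency/FOMO cues stands in for it, and the element texts are hard-coded rather than scraped from a live page.

```python
# Illustrative urgency/FOMO phrases (assumptions, not the project's real features).
URGENCY_CUES = ("only", "left in stock", "hurry", "ends soon", "last chance")

def classify_element(text: str) -> tuple[str, float]:
    """Return a (label, confidence) pair for one page element."""
    lowered = text.lower()
    hits = sum(cue in lowered for cue in URGENCY_CUES)
    if hits == 0:
        return ("benign", 0.0)
    # More matched cues -> higher confidence, capped at 1.0.
    return ("urgency", min(1.0, 0.5 + 0.25 * hits))

def scan_page(elements: list[str]) -> list[tuple[str, str, float]]:
    """Classify every element, keeping only suspected dark patterns."""
    results = []
    for text in elements:
        label, confidence = classify_element(text)
        if label != "benign":
            results.append((text, label, confidence))
    return results

flagged = scan_page(["Hurry! Only 2 left in stock", "Contact us"])
```

In the actual pipeline the heuristic would be replaced by the model's forward pass, but the shape of the result (element, label, confidence) is what drives the highlighting step.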
Impact:
Empowers users by making manipulative design tactics transparent.
Increases awareness of deceptive UI/UX practices.
Encourages ethical web design by exposing dark patterns.
In short, this tool helps fight against deceptive digital practices and promotes a more transparent, user-friendly internet.
Challenges we ran into
One of the biggest challenges was perfecting the visual styling of the detected dark patterns. We wanted the highlights to be clear, non-intrusive, and informative, ensuring that users could easily spot manipulative elements without disrupting their browsing experience.
Another challenge was displaying the confidence score for each detection in a way users could interpret at a glance. Simply showing a percentage wasn't enough; we needed a color gradient system that visually conveyed how certain the model was about each classification, from green (low confidence) to red (high confidence).
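A minimal sketch of that gradient: linearly interpolate from green at low confidence to red at high confidence. The endpoint colors and hex output format are assumptions for illustration; the extension's actual styling may differ.

```python
def confidence_to_color(confidence: float) -> str:
    """Map a confidence in [0, 1] to a hex color from green to red."""
    t = max(0.0, min(1.0, confidence))   # clamp to [0, 1]
    red = round(255 * t)                 # red channel grows with confidence
    green = round(255 * (1.0 - t))       # green channel shrinks
    return f"#{red:02x}{green:02x}00"

low_color = confidence_to_color(0.0)   # green for uncertain detections
high_color = confidence_to_color(1.0)  # red for near-certain detections
```

The resulting hex string can be applied directly as a CSS outline or background color on the flagged element.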
Finally, the most significant hurdle was improving the accuracy of our AI/ML model. Dark patterns are often subtle and context-dependent, making them hard to detect with traditional methods. However, through rigorous training on curated data, fine-tuning, and targeted optimizations, we pushed our model's accuracy to 97%, making it reliable in real-world scenarios.
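For reference, the accuracy figure above is the standard metric: the fraction of held-out examples the model labels correctly. A minimal sketch, using toy labels rather than the project's real evaluation set:

```python
def accuracy(predictions: list[str], truths: list[str]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == t for p, t in zip(predictions, truths))
    return correct / len(truths)

# Toy example: 3 of 4 predictions match the ground truth.
score = accuracy(["dark", "benign", "dark", "dark"],
                 ["dark", "benign", "benign", "dark"])
```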
Technologies used
