SlopScore.ai
YouTube with context, not just content.
Created on 17th October 2025
Description of your solution
SlopScore.ai will be a browser extension built with WXT, designed to provide users with fact, citation, and claim analysis of the YouTube content they are about to watch. Instead of asking "Is this AI-generated?"—a question current detectors answer unreliably, with high false-positive rates—we ask better questions: "What claims does this video make? Can reputable sources verify them? Are the citations legitimate? Does this source cite AI-generated content as credible?"
Our unique approach is rooted in lateral reading, the gold-standard technique professional fact-checkers use. This method is highly effective but manual and time-consuming, requiring expertise average users don't possess. We aim to automate it, delivering results in about 30 seconds.
The multi-agent architecture
SlopScore.ai deploys five specialized autonomous agents that work in parallel, each investigating a different dimension of content credibility:
1. Claim Extraction Agent
Extracts factual assertions (statistics, dates, causal relationships) from transcripts. Scans for AI linguistic patterns: em-dash overuse, parallel structures, buzzwords like "delve" and "innovative". OpenAI provides confidence scores, which are stored in Convex.
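The transcript scan described above can be sketched as a simple heuristic scorer. The buzzword list, thresholds, and weights below are illustrative assumptions, not the production scoring; the real agent would combine these signals with OpenAI confidence scores.

```typescript
// Minimal sketch of the transcript pattern scan. Heuristics and
// weights are illustrative placeholders, not tuned values.
const BUZZWORDS = ["delve", "innovative", "tapestry", "furthermore"];

interface PatternScore {
  emDashesPer1k: number; // em-dash frequency per 1,000 words
  buzzwordHits: number;
  suspicion: number;     // 0..1, higher = more AI-like
}

function scanTranscript(text: string): PatternScore {
  const words = text.split(/\s+/).filter(Boolean);
  const emDashes = (text.match(/\u2014/g) ?? []).length;
  const emDashesPer1k = words.length ? (emDashes / words.length) * 1000 : 0;
  const lower = text.toLowerCase();
  const buzzwordHits = BUZZWORDS.reduce(
    (n, w) => n + (lower.match(new RegExp(`\\b${w}\\b`, "g")) ?? []).length,
    0,
  );
  // Crude linear combination, clamped to [0, 1].
  const suspicion = Math.min(1, emDashesPer1k / 20 + buzzwordHits / 10);
  return { emDashesPer1k, buzzwordHits, suspicion };
}
```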
2. Source Check Agent
Automates lateral reading by querying Google's ClaimReview database (Reuters, AP, PolitiFact). Reports which claims were corroborated, contradicted, or lack coverage. Cached in Convex.
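The verdict step of this agent can be sketched as a pure function over ClaimReview-style results. The rating keywords below are assumptions; real ClaimReview `textualRating` values vary by publisher, so the production mapping would need a broader vocabulary.

```typescript
// Sketch: map ClaimReview-style ratings for one claim to a verdict.
type Verdict = "corroborated" | "contradicted" | "no-coverage";

interface ClaimReviewHit {
  textualRating: string; // e.g. "True", "False", "Pants on Fire"
}

function classifyClaim(hits: ClaimReviewHit[]): Verdict {
  if (hits.length === 0) return "no-coverage";
  const supporting = ["true", "correct", "accurate"];
  const refuting = ["false", "pants on fire", "misleading"];
  let support = 0;
  let refute = 0;
  for (const h of hits) {
    const r = h.textualRating.toLowerCase();
    if (supporting.some((s) => r.includes(s))) support++;
    if (refuting.some((s) => r.includes(s))) refute++;
  }
  if (refute > support) return "contradicted";
  if (support > 0) return "corroborated";
  return "no-coverage"; // rated, but with unrecognized wording
}
```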
3. Author & Outlet Reputation Agent
Checks domains against AI content farm databases, analyzes domain age (WHOIS), performs recursive AI text analysis on cited articles using OpenAI. Outputs credibility scores with evidence.
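One component of the credibility score, domain age from WHOIS, can be sketched as a step function. The thresholds here are assumptions chosen for illustration; content farms skew toward very recently registered domains, which is what the lowest band encodes.

```typescript
// Illustrative credibility component from WHOIS domain age.
// Thresholds are assumptions, not tuned values.
function domainAgeScore(createdAt: Date, now: Date = new Date()): number {
  const days = (now.getTime() - createdAt.getTime()) / 86_400_000;
  if (days < 90) return 0.1;       // very new: common for content farms
  if (days < 365) return 0.4;      // under a year old
  if (days < 5 * 365) return 0.7;  // established
  return 0.9;                      // long-lived domain
}
```

In the full agent this score would be one input alongside the content farm database lookup and the recursive AI-text analysis.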
4. Image Analysis Agent
Reverse image searches randomly selected frames from the video. Uses Hugging Face deepfake detection models to scan for anatomical impossibilities, inconsistent lighting, AI noise patterns. Flags context mismatches.
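The frame-selection step can be sketched as a seeded random sampler, so the same video always yields the same frames (useful for Convex caching). The LCG below is a stand-in for whatever RNG the real agent would use.

```typescript
// Sketch: pick distinct frame timestamps (in seconds) for reverse
// image search. Seeded LCG keeps selection reproducible per video.
function pickFrameTimestamps(
  durationSec: number,
  count: number,
  seed: number,
): number[] {
  let state = seed >>> 0;
  const next = () => {
    // Numerical Recipes LCG constants; any seeded RNG would do.
    state = (state * 1664525 + 1013904223) >>> 0;
    return state / 2 ** 32;
  };
  const stamps = new Set<number>();
  while (stamps.size < Math.min(count, Math.floor(durationSec))) {
    stamps.add(Math.floor(next() * durationSec));
  }
  return [...stamps].sort((a, b) => a - b);
}
```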
5. Synthesis & Supervisor Agent
Supervises parallel execution of the four specialized agents and compiles findings into a coherent, actionable Trust Report using LangChain's StateGraph.
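The supervisor's control flow can be shown without the LangGraph machinery: fan out to the four agents in parallel and merge whatever comes back. This is a simplified stand-in, not LangGraph's actual supervisor API; the key design choice it illustrates is that one failed agent degrades the report rather than failing it.

```typescript
// Simplified stand-in for the supervisor's fan-out/merge control flow.
interface Finding {
  agent: string;
  summary: string;
}
type Agent = (videoId: string) => Promise<Finding>;

async function runSupervisor(
  videoId: string,
  agents: Agent[],
): Promise<{ videoId: string; findings: Finding[] }> {
  // Parallel execution; a failed agent yields a placeholder finding
  // instead of sinking the whole Trust Report.
  const results = await Promise.allSettled(agents.map((a) => a(videoId)));
  const findings = results.map((r, i) =>
    r.status === "fulfilled"
      ? r.value
      : { agent: `agent-${i}`, summary: "analysis unavailable" },
  );
  return { videoId, findings };
}
```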
Citation Chain Verification (2-3 Levels Deep)
Our killer differentiator—no existing tool addresses the AI-citing-AI echo chamber.
Level 1: Node.js fetches cited URLs, OpenAI detects AI patterns (87%), cross-references domains against Convex content farm database.
Level 2: LangChain recursively analyzes citations in those articles, identifying circular reference networks. Convex stores citation graph relationships.
Level 3: Google Fact Check API verifies against authoritative sources for claims that pass Levels 1-2.
Smart Early Stopping: LangChain skips deeper investigation if Level 1 identifies trusted sources cached in Convex.
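The three-level walk with early stopping can be sketched as a bounded recursion over a citation graph. In the real pipeline the trusted-domain set lives in Convex and AI-likeness comes from OpenAI; here both are passed in as plain data, and the stopping rule is the simplified "any trusted Level 1 source ends the walk".

```typescript
// Sketch of the depth-limited citation walk with early stopping.
interface Citation {
  url: string;
  cites: Citation[];
}

function analyzeCitationChain(
  root: Citation,
  trusted: Set<string>,
  maxDepth = 3,
): { visited: string[]; stoppedEarly: boolean } {
  const visited: string[] = [];
  let stoppedEarly = false;
  const walk = (node: Citation, depth: number) => {
    if (depth > maxDepth || stoppedEarly) return;
    visited.push(node.url);
    const domain = new URL(node.url).hostname;
    if (trusted.has(domain)) {
      // Smart early stopping: a trusted source at Level 1 ends the walk.
      if (depth === 1) stoppedEarly = true;
      return; // trusted sources need no deeper investigation
    }
    for (const c of node.cites) walk(c, depth + 1);
  };
  walk(root, 0); // the video itself is depth 0; its citations are Level 1
  return { visited, stoppedEarly };
}
```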
How This Addresses Each Problem
Problem 1: Invisible Misinformation
We give the user context: OpenAI-detected patterns, Google Fact Check verification results, Convex credibility scores, Hugging Face visual analysis. Students get 30-second fact-checks before citing sources.
Problem 2: AI-Citing-AI Echo Chamber
We break the loop. Convex stores contaminated citation graphs. Next.js visualizes where human verification disappears.
Problem 3: Crisis Exploitation
Parallel agents on Vercel's edge network provide verification in 30 seconds. Google Fact Check API distinguishes verified authorities from content farms.
Technical Architecture
Frontend: WXT browser extension with Next.js for a responsive UI integrated into YouTube's interface. WXT handles the Chrome APIs (tab detection, storage, content injection).
Backend: Node.js on Vercel's edge network for global low-latency, serverless auto-scaling.
Agent Orchestration: LangChain coordinates five parallel agents using LangGraph's supervisor architecture.
Data Layer: Convex real-time database stores analysis results and verified video database. Reactive queries enable instant cache retrieval (30-day window, zero API cost, sub-second response).
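The 30-day cache window reduces to a small freshness check before any agent runs. Field names below are illustrative; the Convex schema may differ.

```typescript
// Sketch of the 30-day cache check for a stored analysis record.
const CACHE_WINDOW_MS = 30 * 24 * 60 * 60 * 1000;

interface CachedAnalysis {
  videoId: string;
  analyzedAt: number; // epoch ms when the Trust Report was produced
}

function isCacheFresh(record: CachedAnalysis | null, now: number): boolean {
  return record !== null && now - record.analyzedAt <= CACHE_WINDOW_MS;
}
```

A fresh hit means the extension serves the stored Trust Report directly (zero API cost); a miss or stale record triggers the full agent pipeline.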
External APIs: YouTube Data API (transcripts), OpenAI Moderation API (pattern detection), Google Fact Check API (corroboration), Hugging Face Transformers (image artifacts).
User Experience
Users see progressive results (5s: transcript score; 12s: source verification; 20s: image analysis; 30s: full Trust Report) and choose whether to watch, skip, or explore alternatives.
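The staged schedule above can be encoded as data, so the UI simply asks which stages are available at a given elapsed time. Stage labels and timings mirror the UX targets; the function itself is an assumption about how the extension would gate progressive rendering.

```typescript
// Sketch of the progressive-results schedule (seconds since start).
const STAGES = [
  { atSec: 5, label: "transcript score" },
  { atSec: 12, label: "source verification" },
  { atSec: 20, label: "image analysis" },
  { atSec: 30, label: "full Trust Report" },
] as const;

function availableStages(elapsedSec: number): string[] {
  return STAGES.filter((s) => elapsedSec >= s.atSec).map((s) => s.label);
}
```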
Why This Is Uniquely Defensible
Not Detection-Based: Focus on verification infrastructure (Google Fact Check + citation analysis), not pattern matching.
Augments Judgment: We leave decisions to users; our role is to help them make those decisions better informed.
Network Effects: Every analysis enriches Convex database. More users = faster results, better recommendations, richer content farm databases.
We're building verification infrastructure for the post-AI internet, not another unreliable AI detector.