HarmShield
AI-powered defense against cyber threats.
Created on 22nd March 2025
The problem HarmShield solves
HarmShield is an advanced AI-powered system designed to detect and categorize harmful content, such as hate speech, misinformation, and cyberbullying, on social media in real time. Using NLP models (BERT, GPT), computer vision (CNN, OCR), and speech-to-text processing, it analyzes text, images, and videos to ensure safer online interactions. Built for high-speed detection with Apache Kafka and Spark, it classifies content instantly and escalates uncertain cases for human review to prevent errors. Deployed on AWS and Google Cloud, HarmShield continuously learns from new data, ensuring ethical, bias-free moderation while reducing the spread of toxic content and protecting users from online harm.
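The "classify instantly, escalate uncertain cases" flow above can be sketched as a simple confidence gate. This is an illustrative sketch only, not HarmShield's actual code; the threshold value and the function and label names are assumptions.

```python
# Hypothetical sketch of HarmShield's escalation flow: a prediction whose
# confidence falls below a threshold is routed to a human moderator instead
# of being acted on automatically. All names here are illustrative.

REVIEW_THRESHOLD = 0.85  # assumed cutoff; the real system would tune this


def route(label: str, confidence: float) -> str:
    """Return the action to take for one model prediction."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto:{label}"   # high confidence: apply the label immediately
    return "human_review"        # uncertain: escalate to a moderator


print(route("hate_speech", 0.97))     # auto:hate_speech
print(route("misinformation", 0.61))  # human_review
```

In a deployment like the one described, the classifier's output (e.g. from a BERT-based model consuming a Kafka stream) would feed `route`, and escalated items would land in a review queue.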
Challenges we ran into
One major hurdle we faced while developing our AI-powered content moderation system was efficiently analyzing multimodal content, where text and images need to be processed together for accurate classification. Initially, analyzing them separately led to misclassifications, such as flagging an image as harmful while its accompanying text was harmless. To solve this, we implemented a multimodal AI model that processes text and images simultaneously, using BERT for NLP and ResNet for computer vision, with a cross-modal attention mechanism to improve contextual understanding. We also introduced an adaptive confidence scoring system that sends uncertain cases to human reviewers, which significantly reduced false positives and improved accuracy while maintaining real-time performance.
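The decision logic this paragraph describes, combining per-modality scores and deferring to humans when the modalities conflict, can be illustrated with a minimal sketch. This is not the project's implementation; the score fusion rule, thresholds, and names are assumptions standing in for the real BERT/ResNet outputs.

```python
# Illustrative sketch: combine a text harm score and an image harm score
# (each in [0, 1], e.g. from hypothetical BERT and ResNet heads) into one
# moderation decision, escalating when the two modalities disagree.

def moderate(text_score: float, image_score: float,
             harm_threshold: float = 0.7,
             disagreement: float = 0.5) -> str:
    """Decide an action from per-modality harm scores."""
    if abs(text_score - image_score) >= disagreement:
        # The case from the write-up: harmful-looking image, harmless text
        # (or vice versa). Don't trust either modality alone.
        return "human_review"
    combined = (text_score + image_score) / 2
    return "flag" if combined >= harm_threshold else "allow"


print(moderate(0.1, 0.9))  # human_review (modalities conflict)
print(moderate(0.8, 0.9))  # flag (both agree content is harmful)
print(moderate(0.1, 0.2))  # allow (both agree content is benign)
```

The actual system fuses representations earlier via cross-modal attention rather than averaging final scores, but the escalation behavior on conflicting signals is the same idea.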
Tracks Applied (1)
Ethereum Track
ETHIndia
Technologies used
