PragyanAI
personal, multilingual AI learning agent
Created on 21st June 2025
The problem PragyanAI solves
Challenges in Traditional Learning
- Language Barriers: Prevent access to quality education.
- One-Size-Fits-All Content: Doesn’t cater to individual learning styles.
- Passive Learning: Fails to engage students effectively.
- Limited Accessibility: Not adaptable to different learning preferences (visual, auditory, etc.).
- Difficulty with Complex Concepts: Lack of proper visualization.
- Lack of Personalization: No customized study materials based on progress.
What PragyanAI Does
PragyanAI is an AI-powered educational platform that transforms learning with personalized, interactive, and multilingual experiences. It adapts to each student’s learning style and simplifies complex topics.
How PragyanAI Makes Learning Better
For Students:
- Personalized Learning: Content adapts to your unique learning style and pace.
- Visual Learning: Complex ideas simplified through AI-generated videos and mind maps.
- Language Flexibility: Study in your preferred language with real-time translations.
- Active Engagement: Interactive quizzes and assessments to keep you motivated.
- Time Efficiency: AI quickly finds relevant study materials, saving you time.
For Educators:
- Automated Content Creation: Easily generate educational videos and materials.
- Progress Tracking: Monitor student engagement and understanding.
- Multilingual Teaching: Teach students in their native languages.
- Efficient Resource Management: Organize and distribute learning materials effortlessly.
For Organizations:
- Scalable Training: Deliver consistent training across multiple languages.
- Cost-Effective: Automate content creation and translation to reduce costs.
- Accessibility: Ensure learning is available to diverse populations.
- Quality Assurance: Maintain high standards for educational content.
- Comprehensive Analytics: Track outcomes and engagement metrics for continuous improvement.
Challenges we ran into
1. Centralized ChromaDB Implementation
Setting up a centralized vector database shared by all agents raised connection-pooling and data-consistency issues under concurrent access.
Agent 1 ──┐
Agent 2 ──┼──► ChromaDB Cluster
Agent N ──┘
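To make the pattern concrete, here is a minimal sketch of the shared-client approach, assuming a ChromaDB server behind a single process-wide handle; the host, port, and coarse write lock are illustrative rather than our exact production setup:

```python
# Minimal sketch: one ChromaDB HttpClient per process, shared by all agents.
# Host/port and the coarse write lock are illustrative assumptions.
import threading

import chromadb


class SharedChromaStore:
    """Process-wide ChromaDB handle; writes serialized to reduce consistency races."""

    _client = None
    _lock = threading.Lock()

    @classmethod
    def client(cls):
        with cls._lock:
            if cls._client is None:
                cls._client = chromadb.HttpClient(host="chroma.internal", port=8000)
            return cls._client

    @classmethod
    def add_documents(cls, collection, ids, documents, metadatas=None):
        coll = cls.client().get_or_create_collection(name=collection)
        with cls._lock:  # serialize concurrent agent writes
            coll.add(ids=ids, documents=documents, metadatas=metadatas)

    @classmethod
    def query(cls, collection, text, n_results=5):
        coll = cls.client().get_or_create_collection(name=collection)
        return coll.query(query_texts=[text], n_results=n_results)
```

Funneling every agent through one client and a single write lock is blunt, but it captures the kind of coordination the concurrent-access problem demanded.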
2. Information Chunking and Retrieval
Developing optimal chunking strategies that maintain semantic coherence while ensuring accurate retrieval across diverse content types.
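For illustration, here is a simplified version of the overlap-based chunking we iterated on; the chunk size and overlap are placeholder values, since in practice we tuned them per content type:

```python
# Sketch of sentence-boundary chunking with a small overlap window,
# so each chunk stays semantically coherent and retrieval keeps context.
import re


def chunk_text(text: str, max_chars: int = 800, overlap_sents: int = 2) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, length = [], [], 0
    for sent in sentences:
        if length + len(sent) > max_chars and current:
            chunks.append(" ".join(current))
            current = current[-overlap_sents:]  # carry overlap into next chunk
            length = sum(len(s) for s in current)
        current.append(sent)
        length += len(sent)
    if current:
        chunks.append(" ".join(current))
    return chunks
```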
3. Dynamic Agent Integration
Creating a flexible orchestration system for automatic agent registration, load balancing, and failure handling without hardcoded dependencies.
        ┌────────────────────┐
        │ Agent Orchestrator │
        └─────────┬──────────┘
  ┌───────┬───────┼───────┬───────┬───────┐
  ▼       ▼       ▼       ▼       ▼       ▼
Mindmap Video   Audio    AR   Fetcher  etc...
 Agent  Agent   Agent   Agent   Agent
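The heart of the registration mechanism fits in a few lines. A minimal sketch, where the "mindmap" capability and the placeholder agent body are illustrative:

```python
# Sketch of capability-based agent registration: the orchestrator holds no
# hardcoded imports of concrete agents; each agent registers itself.
from typing import Callable, Dict


class AgentOrchestrator:
    def __init__(self):
        self._registry: Dict[str, Callable] = {}

    def register(self, capability: str):
        def decorator(handler: Callable):
            self._registry[capability] = handler
            return handler
        return decorator

    def dispatch(self, capability: str, payload: dict):
        handler = self._registry.get(capability)
        if handler is None:
            raise LookupError(f"no agent registered for {capability!r}")
        try:
            return handler(payload)
        except Exception as exc:
            # hook for failure handling, e.g. retry on a fallback agent
            raise RuntimeError(f"{capability} agent failed") from exc


orchestrator = AgentOrchestrator()


@orchestrator.register("mindmap")
def mindmap_agent(payload: dict):
    return {"nodes": [payload["topic"]]}  # placeholder agent body


print(orchestrator.dispatch("mindmap", {"topic": "photosynthesis"}))
```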
4. GCP ChromaDB Deployment
Deploying production-ready ChromaDB on Google Cloud Platform with proper scaling, security configurations, and persistent storage management.
5. Multi-Source Data Integration
Aggregating information from diverse APIs while managing rate limits, format normalization, and content deduplication for mindmap generation.
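The deduplication half of this reduces to content fingerprinting, and the rate limiting to spacing out source calls. A minimal sketch, with the fetchers standing in for the real source-specific API clients:

```python
# Sketch of merge-with-dedup across sources, plus a crude fixed-interval
# rate limit between sources. Fetchers are illustrative placeholders.
import hashlib
import time


def fingerprint(text: str) -> str:
    """Hash over whitespace/case-normalized text so near-identical records collapse."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()


def aggregate(fetchers, min_interval_s: float = 1.0):
    seen, merged = set(), []
    for fetch in fetchers:
        for record in fetch():  # each fetcher yields dicts with a "text" field
            fp = fingerprint(record["text"])
            if fp not in seen:
                seen.add(fp)
                merged.append(record)
        time.sleep(min_interval_s)  # space out source calls to respect rate limits
    return merged
```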
6. Asynchronous Video Generation Polling
Implementing robust polling mechanisms to monitor video generation progress while managing timeouts and state tracking for multiple concurrent jobs.
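In essence, each job got a backoff-aware polling loop with an overall timeout, and multiple jobs were tracked concurrently. A sketch, where check_status is an assumed async wrapper around the provider's real status endpoint:

```python
# Sketch of backoff-aware polling with an overall timeout per job.
import asyncio


async def poll_job(job_id, check_status, interval_s=5.0, timeout_s=600.0):
    async def _loop():
        delay = interval_s
        while True:
            status = await check_status(job_id)
            if status["state"] in ("succeeded", "failed"):
                return status
            await asyncio.sleep(delay)
            delay = min(delay * 1.5, 30.0)  # back off gently between checks
    # wait_for enforces the overall timeout so a stuck job cannot hang the pipeline
    return await asyncio.wait_for(_loop(), timeout=timeout_s)


async def poll_many(job_ids, check_status):
    # track several concurrent generation jobs at once
    return await asyncio.gather(*(poll_job(j, check_status) for j in job_ids))
```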
7. REST API Timeout Management
Handling timeout issues across multiple API calls with exponential backoff, retry logic, and circuit breaker patterns to prevent cascade failures.
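A condensed sketch of the retry-plus-breaker wrapper; the thresholds and delays are illustrative, not our tuned values:

```python
# Sketch: exponential backoff with retries, guarded by a simple circuit breaker
# that opens after repeated failures to stop cascades.
import time

import requests


class CircuitBreaker:
    def __init__(self, failure_threshold=5, reset_after_s=60.0):
        self.failures, self.threshold = 0, failure_threshold
        self.reset_after_s, self.opened_at = reset_after_s, None

    def allow(self) -> bool:
        if self.opened_at and time.time() - self.opened_at > self.reset_after_s:
            self.opened_at, self.failures = None, 0  # half-open: allow a probe
        return self.opened_at is None

    def record(self, ok: bool):
        self.failures = 0 if ok else self.failures + 1
        if self.failures >= self.threshold:
            self.opened_at = time.time()


def get_with_retries(url, breaker, attempts=4, base_delay=1.0, timeout=10.0):
    for attempt in range(attempts):
        if not breaker.allow():
            raise RuntimeError("circuit open: skipping call to avoid cascade")
        try:
            resp = requests.get(url, timeout=timeout)
            resp.raise_for_status()
            breaker.record(ok=True)
            return resp
        except requests.RequestException:
            breaker.record(ok=False)
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise RuntimeError(f"gave up on {url} after {attempts} attempts")
```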
8. Manim Animation Code Generation
Generating syntactically correct Manim code programmatically while ensuring proper asset management and animation timing synchronization.
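One safeguard this problem suggests: parse the generated source before rendering, so syntax errors fail fast instead of mid-render. A minimal sketch, with a trivial template standing in for actual LLM output:

```python
# Sketch: validate generated Manim code with ast.parse before rendering.
# The template is a stand-in for LLM output; ast only catches syntax errors,
# so API misuse still needs a dry-run render.
import ast

SCENE_TEMPLATE = '''
from manim import Scene, Text, Write

class GeneratedScene(Scene):
    def construct(self):
        title = Text({title!r})
        self.play(Write(title), run_time={run_time})
'''


def build_scene_source(title: str, run_time: float = 2.0) -> str:
    source = SCENE_TEMPLATE.format(title=title, run_time=run_time)
    ast.parse(source)  # raises SyntaxError on malformed generated code
    return source


print(build_scene_source("Photosynthesis"))
```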
9. Reinforcement Learning Error Handling
Building a multi-layered error handling system that learns from failures and adapts recovery strategies through reinforcement learning.
Error ──► RL Agent ──► Action Selection ──► Recovery
             │                                 │
             └─────── Feedback Learning ──────┘
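At its simplest, the selection loop is a bandit-style epsilon-greedy update over recovery actions. A sketch with illustrative error types, actions, and rewards:

```python
# Sketch: epsilon-greedy selection of a recovery action per error type,
# updated from a recovered/not-recovered reward signal.
import random
from collections import defaultdict

ACTIONS = ["retry", "retry_with_backoff", "fallback_model", "skip_step"]


class RecoveryAgent:
    def __init__(self, epsilon=0.2, lr=0.5):
        self.q = defaultdict(float)  # (error_type, action) -> estimated value
        self.epsilon, self.lr = epsilon, lr

    def select(self, error_type: str) -> str:
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)  # explore
        return max(ACTIONS, key=lambda a: self.q[(error_type, a)])  # exploit

    def feedback(self, error_type: str, action: str, recovered: bool):
        reward = 1.0 if recovered else -1.0
        key = (error_type, action)
        self.q[key] += self.lr * (reward - self.q[key])  # incremental update
```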
10. Audio-Video Synchronization
Creating contextually appropriate audio scripts and implementing precise timing alignment with generated video content.
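The alignment step boils down to re-timing each animation segment against its narration clip. A minimal sketch, assuming per-segment durations come from the TTS output (the numbers below are illustrative):

```python
# Sketch: compute the run_time each video segment should be stretched to
# so narration never outruns the animation it describes.
from dataclasses import dataclass


@dataclass
class Segment:
    text: str
    audio_s: float  # narration duration from TTS
    video_s: float  # animation duration as generated


def align(segments: list[Segment], min_pad_s: float = 0.3) -> list[float]:
    run_times = []
    for seg in segments:
        # never cut the animation short; pad slightly past the narration
        target = max(seg.audio_s + min_pad_s, seg.video_s)
        run_times.append(round(target, 2))
    return run_times


print(align([Segment("Intro", 3.2, 2.0), Segment("Step 1", 5.0, 6.1)]))
```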
11. Custom Service Deployment
Building and deploying containerized services on GCP with Docker, implementing CI/CD pipelines, and configuring proper monitoring.
12. AR Feature Integration
Creating a seamless pipeline integrating mindmap generation, video processing, and AR data collection into a unified augmented reality experience.
AR Input ──► Pipeline Processor ──┬──► Mindmap Integration
                                  ├──► Video Integration
                                  └──► Audio Integration
                                             │
                                             ▼
                                  AR Experience Renderer
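The fan-out/merge shape of the pipeline can be sketched as below; the three stage functions are placeholders for the real mindmap, video, and audio processors:

```python
# Sketch: run the three integration stages in parallel, then merge their
# outputs for the AR experience renderer. Stage bodies are placeholders.
from concurrent.futures import ThreadPoolExecutor


def build_ar_experience(ar_input: dict) -> dict:
    stages = [
        lambda x: {"mindmap": f"graph for {x['topic']}"},
        lambda x: {"video": f"clip for {x['topic']}"},
        lambda x: {"audio": f"narration for {x['topic']}"},
    ]
    merged: dict = {}
    with ThreadPoolExecutor() as pool:  # fan out the three integrations
        for result in pool.map(lambda fn: fn(ar_input), stages):
            merged.update(result)
    return merged  # handed to the AR experience renderer


print(build_ar_experience({"topic": "solar system"}))
```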
Progress made before hackathon
Before diving into the problem we intended to solve during the hackathon, we spent some time testing key ideas to check their feasibility. Concepts like agent orchestration and AI-driven video generation were entirely new to us; we weren’t sure we could pull them off (now we know we can!).
When we became aware of the hackathon’s sponsors and organisers, we were curious to try out a few of their API services, so we began experimenting with them little by little.
To facilitate this testing phase, we created two repositories on June 19, each owned by a different team member:
- Frontend: https://github.com/Pranay50x/Warp_hack/commits/main/
- Backend: https://github.com/Kamalllx/Warp-Hack/commits/main/
Frontend (UI)
Minimal Early Progress
Minimal progress was made on the frontend before the hackathon. The UI code at that time was largely boilerplate—basic structure and styling to test potential themes.
Complete Redesign During Hackathon
As the commit history shows, the UI was entirely redesigned once the hackathon began and went through multiple complete revisions; almost nothing from the pre-hackathon version survived.
Backend (AI + Video Generation)
Focus on Feasibility Testing
Our main pre-hackathon focus was testing whether we could even generate AI-based educational videos that are engaging and well-synced with narration.
Early Prototypes
We created two early prototypes. Both were purely experimental, built to evaluate video quality and audio sync, and both drafts were eventually discarded and rewritten during the hackathon due to quality issues.
Key Feedback and Breakthrough
A Sarvam mentor explicitly pointed out that the early video quality was poor. This feedback led us to completely rework the video generation pipeline. The breakthrough came with the commit:
- tears of joy video working — the first successful high-quality output.
Agentic Orchestration (LLM Agents)
Initial Concerns and Testing
We were unsure if the LLMs we chose would integrate well with an agentic framework, so we also ran a basic test with a simple agent-based setup:
- fixes in learning, agent orchestrator and central chromadb
This early test involved a lightweight agent (under 200 lines of code). In contrast, our final orchestrator ballooned to 742+ lines of code, with supporting agents modularized across multiple files.
Summary
We did some basic feasibility testing 1–2 days before the hackathon to reduce risk—but none of it included final features or production-quality components. Once the 24-hour hackathon started, we rewrote nearly all pre-existing code.
- Total commits during the hackathon: 77
- Commits before the hackathon: ~15
Nearly all features and code were built from scratch during the hackathon.
We worked extremely hard during the actual event and hope the small amount of initial exploration isn’t misunderstood as early progress. It was there purely to validate ideas; everything meaningful was built during the hackathon itself.
Tracks Applied (2)
- Sarvam AI Track (Sarvam.ai)
- Google Cloud Platform Usage (Google Cloud Platform)