Created on 7th May 2025
Builder ecosystems are scaling rapidly, but the support infrastructure isn’t keeping up. Traditional support models, centered on 1:1 mentorship, manual grant reviews, and ad hoc advice, are limited by human bandwidth. This creates a bottleneck where high-potential builders are overlooked, community growth is stunted, and decision-makers are overwhelmed.
JesseXBT solves this by providing an AI-powered digital sidekick that delivers Jesse's knowledge, guidance, and funding decisions at scale. By fine-tuning a custom LLM on Jesse’s writing, social, and video content and integrating real-time data through RAG (retrieval-augmented generation), JesseXBT offers 24/7, high-context support to thousands of builders across platforms like X, Farcaster, and Telegram.
The result is instant access to funding, faster decision-making, and democratized support for builders globally, empowering the next wave of innovation with Jesse’s expertise, without the bottlenecks.
JesseXBT wallet: 0x3Cf18353270719544eB0A2094692826EE112ffD8
Tx of a microgrant given: https://basescan.org/tx/0xcc187686acca565a5bc2e03ba3b859b82df62809cd5246c8cb86e6e3ae2ee288
Microgrants given:
Our roadmap before, during, and after Based Batches:
🟠 Phase 0: MVP (before Apr 25)
✅ GPT model trained on Jesse’s voice and context (Farcaster + 160+ podcasts/videos)
✅ Knowledge base added via a no-code UI
✅ Puppeteer-based website scraping and summarization
✅ GitHub evaluation for grant selection
✅ Connection to Warpcast, X, and Telegram profiles
🔵 Phase 1: Base Batches LATAM (Apr 25 – May 16)
✅ Finalize Gemini 2.5 fine-tuned model (Farcaster + podcasts/videos)
✅ RAG implementation using Pinecone
✅ Set up vector database (Pinecone) with namespaces (jessexbt, builders, protocols)
✅ Puppeteer site scraping with automatic/manual refresh
✅ Integrate public data ingestion (Farcaster, Twitter, Telegram) with retraining
✅ Auto-enrich GitHub repos, demos (via Gemini 2.5), and X threads
✅ Implement natural language prompt flow for builder/project evaluation
🟢 Phase 2: Core Infrastructure and Alignment (Day 1–20)
✅ Sentiment analysis + ZEP-Pinecone feedback loop
✅ Privacy & moderation layer (detect/filter PII, abuse, toxic outputs)
✅ Latency optimization via caching
✅ Response reranking (Pinecone)
✅ Intent recognition
🔴 Phase 3: Builder Evaluation Engine (Day 21–45)
✅ Define builder input types (GitHub, demos, social traction, originality)
✅ Integrate Talent Protocol / Verified Builder Registry
✅ Connect scoring logic to UI/UX, uniqueness, technical depth, and traction
✅ Activate dynamic follow-ups based on weak/missing info
✅ Route builders with score > 0.9 to Jesse (DM triggers)
✅ Provide feedback for builders scoring < 0.9
🟡 Phase 4: Open Launch and Learning Loop (Day 46–90)
✅ Publicly launch builder-facing agent (Farcaster, X, Telegram)
✅ Monitor query volume, scoring outcomes, referral success
✅ Scaffold lightweight Knowledge Graph from repeated queries
✅ Internal dashboard: builder scores, referrals, feedback logs
✅ Prepare for scaling (multi-agent, Base ecosystem, protocol analytics)
🟣 Phase 5: Empowerment (Post-Day 90)
✅ JesseXBT becomes a 24/7 Base-native agent
✅ Discovery engine for talent, ideas, and protocols
✅ Transparent builder scoring and protocol intelligence
✅ Trusted filter between Jesse and the noise
✅ Programmable assistant with memory, personality, and Base-native cognition
Improving the agent’s response quality: We initially attempted to fine-tune OpenAI's GPT-4 model on carefully curated data sourced from public interviews with Jesse Pollak on YouTube, as well as his Twitter and Farcaster posts. However, since Gemini 2.5 does not yet support fine-tuning, we shifted to a RAG (Retrieval-Augmented Generation) architecture using Pinecone. Storing all static knowledge in the vector database resulted in significantly more coherent and on-brand responses from the agent.
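As a rough sketch of what storing static knowledge in the vector database can look like in practice: chunks of posts and transcripts are embedded and upserted into namespaced Pinecone indexes (the namespaces mirror the roadmap: jessexbt, builders, protocols). The index name, embedding model, and metadata shape below are assumptions, not the production pipeline.

```ts
// Sketch: chunk static knowledge and upsert it into Pinecone namespaces.
// Index name, embedding model, and metadata shape are illustrative assumptions.
import { Pinecone } from "@pinecone-database/pinecone";
import { GoogleGenerativeAI } from "@google/generative-ai";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const genai = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const embedder = genai.getGenerativeModel({ model: "text-embedding-004" });

async function upsertKnowledge(namespace: string, docs: { id: string; text: string }[]) {
  const vectors = [];
  for (const doc of docs) {
    // Embed each chunk of static knowledge (posts, transcripts, docs).
    const { embedding } = await embedder.embedContent(doc.text);
    vectors.push({ id: doc.id, values: embedding.values, metadata: { text: doc.text } });
  }
  // Keep each corpus in its own namespace so retrieval can stay scoped.
  await pc.index("jessexbt").namespace(namespace).upsert(vectors);
}
```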
Data extraction and analysis for grant evaluation: One of the core challenges was evaluating information from GitHub repositories, project websites, and product demos submitted by builders. On GitHub, the difficulty lay in selecting up to 20 files per repo that best represented the technical core of the project. For websites, we analyzed between 1 and 3 internal pages, including landing pages and documentation.
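One possible shape for the "up to 20 files" selection is a simple path heuristic over the repository tree, as in the sketch below. The hint patterns, ignore list, and cap are illustrative assumptions rather than the exact heuristic we run in production.

```ts
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Illustrative ranking hints; the real weights and patterns may differ.
const CORE_HINTS = [/\.sol$/, /\.tsx?$/, /contracts?\//, /src\//, /hardhat\.config/, /foundry\.toml/, /package\.json$/, /README/i];
const IGNORE = [/node_modules\//, /dist\//, /\.lock$/, /\.(png|jpg|svg|mp4)$/i];

async function selectCoreFiles(owner: string, repo: string): Promise<string[]> {
  const { data: info } = await octokit.rest.repos.get({ owner, repo });
  const { data: tree } = await octokit.rest.git.getTree({
    owner,
    repo,
    tree_sha: info.default_branch,
    recursive: "1",
  });
  return tree.tree
    .filter(e => e.type === "blob" && e.path && !IGNORE.some(re => re.test(e.path!)))
    .map(e => ({ path: e.path!, score: CORE_HINTS.filter(re => re.test(e.path!)).length }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 20) // keep at most 20 representative files per repo
    .map(f => f.path);
}
```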
Additionally, we assessed UI quality by automatically capturing a screenshot of the provided URL and evaluated UX through the submitted demo.
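For the UI screenshot step, a minimal Puppeteer capture could look like the sketch below; the viewport size, timeout, and the --no-sandbox flag (commonly needed in containerized environments) are assumptions about the setup.

```ts
import puppeteer from "puppeteer";

// Capture a full-page screenshot of a submitted URL so the UI can be scored.
async function captureUiScreenshot(url: string): Promise<Uint8Array> {
  const browser = await puppeteer.launch({ args: ["--no-sandbox"] });
  try {
    const page = await browser.newPage();
    await page.setViewport({ width: 1440, height: 900 });
    await page.goto(url, { waitUntil: "networkidle2", timeout: 30_000 });
    return await page.screenshot({ fullPage: true });
  } finally {
    await browser.close();
  }
}
```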
Each project was also scored using a custom system starting with a Web3Score, which detects the use of Web3-compatible libraries and whether the project is deployed on Base. This score, combined with technical and design factors, helped us estimate a recommended USDC grant amount.
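A simplified sketch of the kind of heuristic a Web3Score can apply is shown below: it checks dependencies for Web3-compatible libraries and looks for Base chain ids in config files. The library list, weights, and thresholds are illustrative assumptions, not the production rubric.

```ts
// Illustrative Web3Score heuristic; library list and weights are assumptions.
const WEB3_LIBS = ["ethers", "viem", "wagmi", "web3", "@coinbase/onchainkit", "hardhat"];
const BASE_CHAIN_IDS = ["8453", "84532"]; // Base mainnet and Base Sepolia

interface RepoSignals {
  dependencies: string[]; // from package.json
  configText: string;     // concatenated config files (hardhat.config, .env.example, etc.)
}

function web3Score(signals: RepoSignals): number {
  const libHits = WEB3_LIBS.filter(lib =>
    signals.dependencies.some(dep => dep === lib || dep.startsWith(`${lib}/`))
  ).length;
  const onBase = BASE_CHAIN_IDS.some(id => signals.configText.includes(id));
  // Weight library usage and evidence of a Base deployment into a 0..1 score.
  return Math.min(1, 0.15 * libHits + (onBase ? 0.4 : 0));
}

// Example: a viem + wagmi project with chain id 8453 in its config scores 0.7.
```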
Managing the agent’s knowledge context: We faced challenges in managing both static and dynamic knowledge within the conversational context. Initially, we sent all relevant data with each prompt, which led to high token usage without significantly improving the relevance of the agent’s replies. We later iterated toward a RAG-based solution, storing static knowledge in Pinecone. This allowed for cleaner prompts and better context-driven responses from the agent (jessexbt).
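The retrieval side of that iteration might look like the sketch below: embed the incoming question, query a Pinecone namespace, and send only the top matches to the model instead of the full knowledge base. The topK value, the Gemini model id, and the prompt wording are assumptions.

```ts
import { Pinecone } from "@pinecone-database/pinecone";
import { GoogleGenerativeAI } from "@google/generative-ai";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const genai = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

async function answerWithRag(question: string, namespace = "jessexbt"): Promise<string> {
  // Embed the question, then fetch only the most relevant chunks rather than
  // shipping all static knowledge with every prompt.
  const embedder = genai.getGenerativeModel({ model: "text-embedding-004" });
  const { embedding } = await embedder.embedContent(question);
  const results = await pc.index("jessexbt").namespace(namespace).query({
    vector: embedding.values,
    topK: 5,
    includeMetadata: true,
  });
  const context = results.matches
    .map(m => String(m.metadata?.text ?? ""))
    .join("\n---\n");

  const model = genai.getGenerativeModel({ model: "gemini-2.5-flash" });
  const res = await model.generateContent(
    `Answer in Jesse's voice using only this context:\n${context}\n\nQuestion: ${question}`
  );
  return res.response.text();
}
```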
Enabling multimodal input: At first, the agent only supported text-based interactions. We later integrated Langchain and migrated to Gemini 2.5, a multimodal LLM. This allowed the agent to understand and analyze video content (MP4 files or YouTube links) sent through any client (Twitter, Farcaster, Telegram, etc.), automatically extracting relevant insights from the video.
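As a minimal sketch of the video-analysis path, a small MP4 can be sent inline to a multimodal Gemini model as shown below. This uses the Google Generative AI SDK directly rather than Langchain for brevity; the model id and prompt are assumptions, and longer videos or YouTube links would go through the Files API or URL handling instead of inline upload.

```ts
import { readFile } from "node:fs/promises";
import { GoogleGenerativeAI } from "@google/generative-ai";

const genai = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// Analyze a short MP4 demo by sending it inline to a multimodal Gemini model.
// Inline uploads only suit small files; larger videos need the Files API.
async function summarizeDemoVideo(mp4Path: string): Promise<string> {
  const model = genai.getGenerativeModel({ model: "gemini-2.5-flash" });
  const video = await readFile(mp4Path);
  const result = await model.generateContent([
    { inlineData: { mimeType: "video/mp4", data: video.toString("base64") } },
    { text: "Summarize what this product demo shows: features, UX quality, and anything on-chain." },
  ]);
  return result.response.text();
}
```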
Creating video content for grant results and feedback: We developed an AI-generated character inspired by Jesse Pollak, capable of simulating podcast-style videos with a variety of emotions and facial expressions. These videos are synchronized with scripts generated by the agent, and we used Eleven Labs to produce high-quality voiceovers.
The biggest challenge was integrating this pipeline with FFmpeg in our GCP-based infrastructure. We had to get FFmpeg working inside a Dockerized environment, which involved configuring codecs and dependencies correctly.
We also replaced basic .srt subtitles with styled .ass (Advanced SubStation Alpha) subtitles, which support custom formatting, animations, positioning, and visual effects—resulting in a more polished and engaging viewing experience.
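Putting those pieces together, the render step can be sketched as a single FFmpeg invocation that muxes the character video with the ElevenLabs voiceover and burns in the styled .ass subtitles. The flags, filenames, and the renderFeedbackVideo wrapper below are illustrative assumptions; the container image needs an FFmpeg build with libass and libx264 for this to work.

```ts
import { spawn } from "node:child_process";

// Mux the rendered character video with the voiceover and burn in .ass subtitles.
function renderFeedbackVideo(video: string, voiceover: string, subs: string, out: string): Promise<void> {
  const args = [
    "-y",
    "-i", video,          // input 0: AI character video
    "-i", voiceover,      // input 1: ElevenLabs audio track
    "-map", "0:v", "-map", "1:a",
    "-vf", `ass=${subs}`, // burn in Advanced SubStation Alpha subtitles (needs libass)
    "-c:v", "libx264", "-c:a", "aac",
    "-shortest",
    out,
  ];
  return new Promise((resolve, reject) => {
    const ff = spawn("ffmpeg", args, { stdio: "inherit" });
    ff.on("error", reject);
    ff.on("close", code => (code === 0 ? resolve() : reject(new Error(`ffmpeg exited with ${code}`))));
  });
}
```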
Agent observability and analytics integration: We integrated observability logs from A0x agents into a dashboard for the agent's stakeholders. This allows us to track metrics, user interactions, and the agent’s reasoning across different platforms (X, Farcaster, Telegram), offering a comprehensive view of performance and engagement.
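For illustration, the per-interaction events such a dashboard could aggregate might have a shape like the sketch below; the field names are assumptions, not the A0x log schema.

```ts
// Illustrative shape of a per-interaction log event; fields are assumptions.
type Platform = "x" | "farcaster" | "telegram";

interface AgentInteractionLog {
  timestamp: string;             // ISO 8601
  platform: Platform;
  userId: string;                // platform-scoped handle or fid
  intent: string;                // e.g. "grant_application", "question"
  retrievedNamespaces: string[]; // Pinecone namespaces consulted
  builderScore?: number;         // present for evaluation flows
  latencyMs: number;
  escalatedToJesse: boolean;     // score > 0.9 routing
}

function logInteraction(event: AgentInteractionLog): void {
  // Structured JSON logs are straightforward to ship to GCP Cloud Logging and chart later.
  console.log(JSON.stringify(event));
}
```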