Pragya Singh
@Illuminate
@ UPenn M&T
Philadelphia, United States
Devfolio stats
2 projects
2 prizes
2 hackathons
0 hackathons organized
Top Projects
Kairos – Verifiable AI reasoning powered by decentralized agents and trust-scored logic for agent-to-agent communication protocols.
"Show your work." "How did you get that answer?" "Keep your work organized." We were all told this hundreds of times growing up in math class. Truly, if you don't know how you reached an answer, there's no proof that you know how to solve the problem. This is the problem we currently face with our AI systems and models. Despite AI systems generating fluent answers, we don't know why models reach their conclusions, which sources they rely on, or whether their logic holds up under scrutiny. This lack of reasoning transparency creates a trust gap—especially in high-stakes environments that AI models are heading towards. > How do you know when to trust your model? Quite simply, you don't. Currently, you can guess whether outputs are grounded in fact, aligned with your intents, or just plausible-sounding hallucinations. There are no mechanisms to validate reasoning quality, enforce accountability, or reuse valuable logic as structured, composable knowledge. Kairos solves this by rethinking how AI systems reason, validate, and act. First, it breaks user queries into modular tasks handled by specialized reasoning agents, each returning structured steps, claims, and hypotheses. These outputs are independently reviewed by distributed validators that assess logic, grounding, alignment, and novelty—coordinated through a trust-scored validation market. Only when reasoning is validated, Kairos recommends or executes actions. Thus, Kairos creates a verifiable chain of logic from input to impact. Thus, Kairos is transparent, traceable, and tamper-proof. Kairos creates reusable intellectual infrastructure: logic flows that can be registered, licensed, and improved over time. Kairos builds a foundation of reason you can trust.
Validata

Another day, another gen AI model. Development has accelerated so quickly over the past couple of years that we have watched a constant race between ChatGPT, Grok, DeepSeek, and Claude to progress and perform. These models have changed our lives, yet they were released only recently and can only get better with more data and better model design. Data is the true source behind AI's power.

However, AI agents in Web3 face a critical data bottleneck: developers struggle to access high-quality, real-time datasets. On the flip side, there are limited incentives for data providers to share data, whether due to a lack of direct rewards, expensive storage, or IP concerns. Existing data solutions are centralized, siloed, expensive, and often biased, limiting AI's potential in decentralized ecosystems. Many people, including artists, musicians, scientists, and other organizations, hold information that could improve AI models, but are hesitant to let their work train those models because the original creators see no reward. As a result, AI models face scrutiny when their output emulates human-created work, and the lack of diverse, high-quality data slows innovation and restricts development. Additionally, there is no trustless system to source, benchmark, and incentivize AI training data while ensuring security and fairness. Without one, AI innovation in Web3 will remain restricted, limiting the development of the on-chain AI future.

Validata tackles this by creating a decentralized data ecosystem where users contribute valuable datasets, developers train AI agents on those datasets, and a ZK-secured benchmarking system ensures fairness. Contributors and developers are incentivized to provide impactful data and to achieve stronger performance on AI benchmarks, respectively. By leveraging Web3 infrastructure, Validata democratizes AI development while ensuring trust, privacy, and scalability.
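To make the incentive loop concrete, here is a minimal sketch of the contribute-train-benchmark flow. The names, the fields, and the 40/60 reward split are illustrative assumptions, not Validata's actual contracts; `proof_ok` stands in for verifying a ZK proof of the benchmark run:

```python
# Minimal sketch of a contribute-train-benchmark reward loop.
# Names, fields, and the reward split are illustrative assumptions,
# not Validata's actual contracts.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Dataset:
    contributor: str
    commitment: str  # on-chain hash of the data; raw data stays off-chain

@dataclass
class BenchmarkResult:
    developer: str
    dataset: Dataset
    score: float     # agent's benchmark score after training on the dataset
    proof_ok: bool   # would be ZK proof verification in the real system

def distribute_rewards(results: list[BenchmarkResult], pool: float,
                       contributor_share: float = 0.4) -> dict[str, float]:
    """Split a reward pool across verified results in proportion to score,
    paying both the data contributor and the developer for each result."""
    verified = [r for r in results if r.proof_ok]
    total = sum(r.score for r in verified)
    if total == 0:
        return {}
    payouts = defaultdict(float)
    for r in verified:
        share = pool * r.score / total          # this result's slice of the pool
        payouts[r.dataset.contributor] += share * contributor_share
        payouts[r.developer] += share * (1 - contributor_share)
    return dict(payouts)

# Example: two developers train on the same contributed dataset.
ds = Dataset(contributor="alice", commitment="0xabc123")
results = [
    BenchmarkResult("bob", ds, score=0.9, proof_ok=True),
    BenchmarkResult("carol", ds, score=0.6, proof_ok=True),
]
print(distribute_rewards(results, pool=100.0))
# alice earns from both results; bob and carol earn by performance
```

Paying contributors in proportion to how well models trained on their data score on verified benchmarks ties rewards to data quality rather than data volume, which is the incentive alignment the project describes.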