AuthChain
Blockchain in the loop for safer AI assistance.
Created on 8th February 2026
The problem AuthChain solves
AuthChain
A system for enforcing human oversight over autonomous AI systems, built on an interruption-driven control architecture with permissioned, auditable execution enforced via blockchain-based governance.
Problem Statement
AI is prized primarily for its autonomous capabilities, powering companies around the world to build and deploy products rapidly.
However, gathered reports and studies attribute roughly $1.3 billion in losses to such unrestricted AI execution, ranging from arbitrary database deletions to poor-quality code and security issues.
The simple fix would be heavier human involvement in the code-generation process, but that sacrifices much of the autonomy that makes agentic execution valuable.
Hence our solution: AuthChain.
What we do
Any AI-generated code relies on tool calls, and every tool call is separated into a Read tier or a Write/Delete tier.
Tools in the latter category are routed to the policy service, which decides whether human or relevant-expert intervention is necessary.
If so, the request is passed via the blockchain to the UI, where the user approves or denies it.
This produces immutable logs and hashes of every action permitted by the relevant user, ensuring reliability and accountability.
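The tier split and approval hashing described above can be sketched as follows. All names here (the tool lists, `requires_approval`, `record_approval`) are illustrative assumptions, not AuthChain's actual API; the on-chain anchoring is stood in for by a SHA-256 digest over a canonical JSON payload.

```python
import hashlib
import json

# Hypothetical tier split: Read-tier tools execute freely, while
# Write/Delete-tier tools must be routed through the policy service.
READ_TIER = {"read_file", "list_dir", "grep"}
WRITE_DELETE_TIER = {"write_file", "delete_file", "run_sql"}

def requires_approval(tool_name: str) -> bool:
    """Write/Delete-tier tool calls need human sign-off."""
    return tool_name in WRITE_DELETE_TIER

def record_approval(tool_name: str, args: dict, approver: str) -> str:
    """Hash the approved action so it could be anchored on-chain as an
    immutable audit entry (sketch: SHA-256 over canonical JSON)."""
    payload = json.dumps(
        {"tool": tool_name, "args": args, "approver": approver},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

print(requires_approval("read_file"))    # Read tier: no approval needed
print(requires_approval("delete_file"))  # Write/Delete tier: escalate
```

Because the digest is computed over a sorted-key JSON encoding, the same approved action always yields the same hash, which is what makes it usable as a verifiable audit record.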
How it integrates with the relevant tools used in production today
- GitHub Copilot and equivalents
Such tools are simply an interaction between the IDE and a single user, restricting them to single-person development.
AuthChain takes this forward and turns it into an engineering tool, escalating issues accordingly before they are ever committed.
- CI/CD pipelines (GitHub Actions, Jenkins, and equivalents)
Such tools rely on predefined tests and have no way of gatekeeping against rogue SQL queries, irrelevant files (w.r.t. the tests and existing files), or security measures being bypassed via rogue LLM calls.
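As a minimal sketch of the kind of gate a pipeline step could add, the check below flags destructive SQL for human review rather than letting an agent-authored statement run unattended. This is illustrative only: a real policy check would parse the SQL properly instead of pattern-matching keywords.

```python
import re

# Naive keyword gate (assumption, not AuthChain's real policy logic):
# flag statements containing destructive verbs for human review.
DESTRUCTIVE = re.compile(r"\b(DROP|TRUNCATE|DELETE)\b", re.IGNORECASE)

def needs_review(sql: str) -> bool:
    """Return True if the statement should be escalated to a human."""
    return bool(DESTRUCTIVE.search(sql))

print(needs_review("SELECT * FROM users"))  # False
print(needs_review("DROP TABLE users"))     # True
```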
Taking it a step further
A major problem faced by state-of-the-art LLMs is the context window filling up with a multitude of tools and MCP servers.
We instead allow the LLM to create its own tools, which are shared, saved, and secured via the policy service and the blockchain service, and classified in a separate tier.
This allows the LLM to grow beyond its predefined capabilities while remaining secure.
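One way the separate tier for LLM-authored tools could look is a content-addressed registry: each generated tool is stored under the hash of its source so the policy service can vet exactly what was written before it is ever callable. The class and method names here are hypothetical.

```python
import hashlib

class ToolRegistry:
    """Sketch of a registry that places LLM-authored tools in their
    own 'generated' tier, keyed by a hash of their source code."""

    def __init__(self):
        self.tools = {}  # name -> (tier, source_hash)

    def register_generated(self, name: str, source: str) -> str:
        # Content-address the tool so the audited source cannot be
        # swapped out after approval without changing its hash.
        source_hash = hashlib.sha256(source.encode()).hexdigest()
        self.tools[name] = ("generated", source_hash)
        return source_hash

    def tier_of(self, name: str) -> str:
        return self.tools[name][0]

registry = ToolRegistry()
registry.register_generated("csv_summarize", "def run(path): ...")
print(registry.tier_of("csv_summarize"))  # generated
```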
Tracks Applied (3)
- Google Gemini (Major League Hacking)
- Solana (Major League Hacking)
- Vultr (Major League Hacking)

