ArchiveNET

Decentralized memory-sharing protocol for AI agents

Created on 21st June 2025

The problem ArchiveNET solves

Modern AI agents like ChatGPT, Claude, Cursor, VS Code extensions, and taskbots operate in complete isolation. Each one stores your conversations and context separately, often in corporate-controlled databases, meaning:

  • Every conversation starts from zero context
  • Your memory is fragmented across platforms
  • You don’t own or control your data
  • Important AI knowledge disappears when a platform changes or shuts down

This creates massive friction for users who rely on multiple AI tools in their workflows, whether for development, productivity, research, or creativity.

How ArchiveNET Helps

ArchiveNET is a decentralized memory-sharing protocol that connects all your AI agents with a unified memory layer, stored permanently on-chain via Arweave.

With ArchiveNET:

  • ✅ Claude can recall what you told ChatGPT
  • ✅ Your VS Code AI assistant knows the roadmap discussed in your chatbot
  • ✅ You don’t have to repeat context every time you switch tools
  • ✅ You get true ownership of your AI memory, stored in your own isolated smart contract

ArchiveNET also removes the complexity of setting up Model Context Protocol (MCP) servers, writing configs, managing wallets, and more. Just install our agent xeni via pip, pass your API key, and you’re ready to go.
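As a sketch of what that flow might look like in practice (the `XeniClient` class and its `remember`/`recall` methods are hypothetical stand-ins, not the actual xeni API; only the pip install step and the API key come from the description above):

```python
# Illustrative only: XeniClient, remember(), and recall() are hypothetical
# stand-ins for the xeni agent's API. Install step per the description:
#   pip install xeni
import os

class XeniClient:
    """Hypothetical client showing the intended zero-config flow:
    construct with an API key, then store and query shared memory."""

    def __init__(self, api_key: str):
        self.api_key = api_key
        self._memories: list[str] = []

    def remember(self, text: str) -> None:
        # In the real system this would be embedded and written on-chain.
        self._memories.append(text)

    def recall(self, query: str) -> list[str]:
        # Stand-in for vector search: naive substring match.
        return [m for m in self._memories if query.lower() in m.lower()]

client = XeniClient(api_key=os.environ.get("ARCHIVENET_API_KEY", "demo-key"))
client.remember("Roadmap discussed in chat: ship benchmarks in Q3")
print(client.recall("roadmap"))
```

The point of the sketch is the shape of the workflow: one credential, no MCP server setup, no wallet management on the user's side.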

To make AI memory fast, searchable, and permanent, we're building the world’s first decentralized vector engine on top of the Arweave blockchain.
We call it EizenDB: a high-performance, on-chain vector database that powers memory retrieval across AI agents.

Here’s a snapshot of our internal benchmark data:

  • Insert Performance: linear scaling up to 10K vectors (685s)
  • Search Latency: average < 200ms, P95 < 300ms
  • Search Accuracy: 99.2% recall@10
  • Storage Efficiency: 66% size reduction (ProtoBuf vs JSON)
  • Throughput Scaling: 3,200 QPS at 64 concurrent connections
  • Memory Stability: stable 2.5GB baseline usage
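For reference, recall@10 (the accuracy metric above) is the fraction of the true 10 nearest neighbors that the index actually returns in its top 10. A minimal way to compute it over a batch of queries (the helper below is our own illustration, not EizenDB code):

```python
import random

def recall_at_k(approx_ids, true_ids, k=10):
    """Fraction of true top-k neighbors found in the approximate top-k,
    averaged over all queries."""
    hits = sum(len(set(a[:k]) & set(t[:k])) for a, t in zip(approx_ids, true_ids))
    return hits / (len(true_ids) * k)

random.seed(0)
true = [random.sample(range(100), 10) for _ in range(5)]
# A perfect index returns exactly the true neighbors, so recall is 1.0.
print(recall_at_k(true, true))  # → 1.0
```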

We’re not just solving context loss; we’re giving users digital memory they own.
We believe that in a future where every user runs 10+ AI agents, memory continuity isn’t a luxury but a necessity, and providers like Anthropic and OpenAI are unlikely to share context with each other.

ArchiveNET makes that future possible with a private, persistent, and portable AI memory.

Challenges we ran into

One of the biggest challenges we faced was implementing the HNSW (Hierarchical Navigable Small World) algorithm for our on-chain vector engine Eizen.

Building something performant, fully on-chain, and without relying on existing vector DB libraries was a significant engineering effort, but it laid the foundation for truly decentralized, portable AI memory.
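To give a flavor of what implementing HNSW-style search involves, here is a heavily simplified, single-layer sketch: real HNSW maintains a hierarchy of layers with exponentially fewer nodes and reuses the search routine for insertion, and Eizen's actual on-chain implementation is not shown here.

```python
import heapq
import math
import random

class TinyNSW:
    """Simplified single-layer navigable-small-world index (illustrative).
    Keeps only the base-layer greedy beam search that HNSW builds on."""

    def __init__(self, m=5, ef=10):
        self.m = m        # max neighbors linked per new node
        self.ef = ef      # beam width during search
        self.points = []
        self.edges = []   # adjacency lists

    def add(self, p):
        idx = len(self.points)
        # Link the new node to its m nearest existing nodes (brute force here;
        # real HNSW finds these via the search routine itself).
        nbrs = sorted(range(idx), key=lambda i: math.dist(self.points[i], p))[: self.m]
        self.points.append(p)
        self.edges.append(list(nbrs))
        for n in nbrs:
            self.edges[n].append(idx)

    def search(self, q, k=3):
        if not self.points:
            return []
        entry = 0
        visited = {entry}
        d0 = math.dist(self.points[entry], q)
        candidates = [(d0, entry)]      # min-heap by distance
        best = [(-d0, entry)]           # max-heap (negated) of current results
        while candidates:
            d, node = heapq.heappop(candidates)
            if d > -best[0][0] and len(best) >= self.ef:
                break  # no remaining candidate can improve the beam
            for nb in self.edges[node]:
                if nb in visited:
                    continue
                visited.add(nb)
                dn = math.dist(self.points[nb], q)
                if len(best) < self.ef or dn < -best[0][0]:
                    heapq.heappush(candidates, (dn, nb))
                    heapq.heappush(best, (-dn, nb))
                    if len(best) > self.ef:
                        heapq.heappop(best)
        return [i for _, i in sorted((-nd, i) for nd, i in best)][:k]

random.seed(1)
index = TinyNSW()
for _ in range(200):
    index.add((random.random(), random.random()))
print(index.search((0.5, 0.5), k=3))
```

Even this toy version shows why the engineering is hard: the graph, the beam, and the visited set all have to live somewhere, and doing that on-chain rather than in RAM is the part that took the real effort.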

Another tricky part was getting Protocol Buffers to work seamlessly with Base64 encoding for storage on Arweave. Protobuf gave us a ~66% size reduction compared to JSON, which is critical for keeping blockchain costs low. But setting up the protobuf schemas, encoding them correctly into Base64, and ensuring they could be decoded reliably from the blockchain took significant experimentation.

After some iterations, we nailed the serialization process, allowing us to efficiently store and retrieve AI memory as vectors from Arweave.
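The round trip can be sketched as follows. Since real protobuf requires a compiled schema, this example packs the same fields with the stdlib `struct` module as a stand-in, just to show the binary-vs-JSON size gap and the Base64 encode/decode path:

```python
import base64
import json
import struct

# Stand-in for a protobuf-encoded vector record (id + float values).
# Real code would use generated protobuf classes; struct illustrates the
# same "compact binary vs verbose JSON" trade-off.
vector = [0.1, -0.25, 0.5, 0.75]
record = {"id": 42, "values": vector}

json_bytes = json.dumps(record).encode()
binary = struct.pack(f"<Ii{len(vector)}f", 42, len(vector), *vector)

# Base64 makes the binary payload text-safe for embedding in a transaction...
tx_payload = base64.b64encode(binary).decode()

# ...and the read path decodes it back to the exact same fields.
raw = base64.b64decode(tx_payload)
rid, n = struct.unpack_from("<Ii", raw)
values = struct.unpack_from(f"<{n}f", raw, 8)

# The binary form is roughly half the size of the JSON form here.
print(len(json_bytes), len(binary))
```

Note that Base64 itself adds ~33% overhead on top of the binary payload, which is part of why shrinking the pre-encoding size with protobuf matters so much for on-chain storage costs.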
