Onchain data might be public, but getting to it? That's a whole different beast.
Right now, working with blockchain data usually means juggling multiple platforms, decoding hex blobs, and dealing with gated infra. Most tools only show you what they choose to index, and if you’re stuck using raw RPCs? Good luck piecing together anything meaningful. It’s slow, low-level, and simply not built for analysis.
We felt that pain. So we built Sandworm, a fully open, collaborative IDE for querying blockchain data like it’s just another database.
At its core, Sandworm combines indexed chain data with live RPC access, giving you the full picture: from historical records to real-time activity, from raw data to decoded protocol-level insights — all in one place, no complex setup required.
If you know SQL, you already know how to use Sandworm. Our Worm Query Language (WQL) is a SQL-like syntax tailored for blockchain. We’ve got clear docs, starter templates, and an interface designed to get you from zero to insight fast.
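To make that concrete, here’s the flavor of query we mean. This is an illustrative sketch only: the real WQL syntax lives in our docs, and the table and column names below (transfers, block_time, amount) are invented for the example.

```ts
// Hypothetical example: a SQL-style chain question, written the way you’d
// write any analytics query. Actual WQL syntax and schema may differ.
const topSenders = `
  SELECT sender,
         COUNT(*)    AS transfer_count,
         SUM(amount) AS total_moved
  FROM transfers
  WHERE block_time > NOW() - INTERVAL '1 day'
  GROUP BY sender
  ORDER BY total_moved DESC
  LIMIT 10
`;
```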
Sandworm isn’t just a query engine.
It’s a full-on workspace where you can write and share queries, turn results into tables and visualizations, and fork and remix other people’s work in real time.
No vendor lock-in. No private APIs. The entire platform is built to be transparent and extensible — whether you’re poking around for fun or building something serious on top.
Challenges We Ran Into
Building with blockchain data is not for the weak. From parsing unpredictable return data to pushing the limits of what two devs can ship in time, this project tested everything we had.
Parsing messy output.
One of our biggest frontend challenges was dealing with the unpredictability of the engine’s return data. Since Sandworm supports both indexed and live RPC queries, responses often come in raw or wildly varying formats. To make them human-readable, we had to build a flexible parsing system that transforms results into clear tables, visualizations, and insights.
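Concretely, the approach looks something like this simplified sketch (not our actual code): whatever shape the engine hands back, collapse it into a columns-and-rows structure that the table and chart components can always render.

```ts
// Simplified illustration of result normalization. Two common shapes:
// uniform object arrays from indexed queries, and bare scalars or hex
// blobs from raw RPC calls. Everything becomes { columns, rows }.
type Cell = string | number | boolean | null;
type Table = { columns: string[]; rows: Cell[][] };

function toCell(v: unknown): Cell {
  if (v === null || v === undefined) return null;
  if (typeof v === "number" || typeof v === "boolean") return v;
  // Small hex quantities (e.g. an eth_blockNumber response) decode to
  // numbers; long hex blobs stay strings for downstream decoding.
  if (typeof v === "string" && /^0x[0-9a-fA-F]{1,12}$/.test(v)) {
    return parseInt(v, 16);
  }
  return typeof v === "string" ? v : JSON.stringify(v);
}

function normalize(result: unknown): Table {
  if (Array.isArray(result) && result.length > 0 &&
      typeof result[0] === "object" && result[0] !== null) {
    const columns = Object.keys(result[0]);
    const rows = result.map((row) => columns.map((c) => toCell(row[c])));
    return { columns, rows };
  }
  // Fallback: a single opaque value still renders as a one-cell table.
  return { columns: ["value"], rows: [[toCell(result)]] };
}
```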
Tech stack tradeoffs.
We faced hard decisions between speed, scalability, and stability. Some tools were fast but fragile; others scaled well but slowed down under certain loads. On top of that, we had to respect the limits of our current infra: how many requests could we handle at once? Where might things break? These choices shaped everything from query execution to data delivery.
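One concrete guardrail that came out of those questions: capping how many requests are in flight at once, so the system queues instead of falling over. A toy sketch of the idea follows; the cap of 8 is an arbitrary example, not a real Sandworm setting.

```ts
// Toy concurrency limiter: at most `maxInFlight` tasks run at once;
// the rest wait their turn. Illustrative only.
function limitConcurrency(maxInFlight: number) {
  let active = 0;
  const waiting: Array<() => void> = [];

  return async function run<T>(task: () => Promise<T>): Promise<T> {
    while (active >= maxInFlight) {
      // Park until a running task finishes, then re-check the cap.
      await new Promise<void>((resolve) => waiting.push(resolve));
    }
    active++;
    try {
      return await task();
    } finally {
      active--;
      waiting.shift()?.(); // wake the next queued task, if any
    }
  };
}

// Hypothetical usage:
// const run = limitConcurrency(8);
// const result = await run(() => fetchFromRpc(request));
```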
Infra strain.
Indexing over 10 million rows and integrating real-time RPC syncing on a budget wasn’t a walk in the park. Every optimization counted. We had to keep the system lean without sacrificing reliability, profiling performance constantly and finding creative ways to scale as we went.
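One example of the kind of optimization that counted: batching writes. Buffering decoded rows and flushing them in chunks turns millions of round trips into thousands. The sketch below is illustrative, not our indexer; `writeBatch` stands in for whatever bulk-insert call the datastore exposes.

```ts
// Illustrative write batcher: buffer rows, flush every `size` rows.
type Row = Record<string, unknown>;

function makeBatcher(writeBatch: (rows: Row[]) => Promise<void>, size = 1000) {
  let buffer: Row[] = [];

  async function flush() {
    if (buffer.length === 0) return;
    const batch = buffer;
    buffer = [];
    await writeBatch(batch); // one round trip per `size` rows
  }

  async function add(row: Row) {
    buffer.push(row);
    if (buffer.length >= size) await flush();
  }

  return { add, flush };
}
```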
Kernel complexity.
On the engine side, building a custom SQL kernel turned out to be way harder than expected. Combining SuiQL and EVM-compatible EQL into one unified query language is a major technical lift. It’s a full-blown compiler project in itself. Because we couldn’t finish the kernel in time, we temporarily fell back on PostgreSQL, which means some parts of the system aren’t fully optimized or native yet.
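In practice, the stopgap looks roughly like this: user queries run against PostgreSQL tables that the indexer populates with pre-decoded chain data. Below is a minimal sketch using the node-postgres client, with an invented `evm_transfers` table standing in for the real schema.

```ts
import { Pool } from "pg";

// Hypothetical fallback path: until the custom kernel ships, queries hit
// Postgres tables of pre-indexed, pre-decoded chain data directly.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function topRecipients(hours: number) {
  const { rows } = await pool.query(
    `SELECT recipient, SUM(amount) AS total
       FROM evm_transfers            -- invented table name
      WHERE block_time > NOW() - ($1 || ' hours')::interval
      GROUP BY recipient
      ORDER BY total DESC
      LIMIT 20`,
    [String(hours)]
  );
  return rows;
}
```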
Split focus & time constraints.
The backend effort required to support all this naturally slowed frontend progress. Balancing the dev load between building the engine, the collaborative IDE, and a usable UI meant every hour had to be deliberate. There were times we had to prioritize building infra over shipping features we loved — just to keep the MVP viable.
Design & UX stretch.
Design was another challenge. As a dev-heavy team with no dedicated designer, we had to strike a balance between functionality and a clean UX. We prioritized simplicity and used real-time feedback from early testers to guide our decisions, but we know there’s still room for polish, and a proper design revamp is on our roadmap.
Collaboration was hard, but worth it.
We didn’t want just another data dashboard. Making Sandworm collaborative, where people can explore, fork, remix, and learn from each other in real time, added another layer of technical and UX complexity. It’s still evolving, but it’s part of what makes Sandworm feel alive.
And through it all, testing never stops.
We’re constantly tuning performance, patching edge cases, and refining the experience. Our goal is simple: make it feel fast, smooth, and intuitive — even when the data isn’t.