Created on 1st March 2025
Ethereum is a state machine, and the EVM executes transactions sequentially, one by one. It does not begin executing the next transaction until the current transaction has finished and the resulting state has been computed. This avoids state/storage collisions and preserves the atomicity of transactions: each transaction is a self-contained, atomic unit of operation that changes the system's state without interfering with, or being interfered with by, any other concurrent transaction. Execution remains sequential even for a set of completely independent transactions, because the EVM is not designed to execute them at the same time.
Our solution to this challenge focuses on intelligently batching transactions from the mempool based on independent state accesses. Using an EigenLayer AVS, we have created a new way to parallelize the EVM: our implementation uses a state-access batching algorithm to determine which transactions can be processed simultaneously. The AVS works with a sequencer/proposer to help them form the maximally parallelizable block at any point in time.
Deployed AVS contracts:
AVS_GOVERNANCE_ADDRESS=0x874343CB2CaCf30CbbB60CF1C483D7E169230E68
ATTESTATION_CENTER_ADDRESS=0x8feb0306F8420436C0bc4054C6e670F131eAF573
Find AVS task submission transactions:
https://www.oklink.com/amoy/address/0x8feb0306f8420436c0bc4054c6e670f131eaf573
If you do not know what operators are, or are yet to deploy/register them with the project, please follow the steps listed here: https://docs.othentic.xyz/main/avs-framework/quick-start (up to step 8: Registering operator to AVS)
If, however, you know the operators and have deployed the AVS_Governance and Attestation_Center contracts, please proceed with populating the .env file in /Othentic-AVS:
$ cd AVS/Othentic-AVS
$ cp .env.example .env
Add the deployer, operator 1, operator 2, and operator 3 keys, along with AVS_GOVERNANCE_ADDRESS and ATTESTATION_CENTER_ADDRESS, to the .env file
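For reference, the populated .env file would look roughly like the sketch below. The exact variable names come from .env.example in the repo, so treat everything except the two contract addresses (taken from the "Deployed AVS contracts" section above) as illustrative placeholders:

```shell
# Illustrative .env layout -- key variable names are assumptions;
# follow .env.example in the repo for the authoritative names.
PRIVATE_KEY_DEPLOYER=<deployer-private-key>
PRIVATE_KEY_OPERATOR1=<operator-1-private-key>
PRIVATE_KEY_OPERATOR2=<operator-2-private-key>
PRIVATE_KEY_OPERATOR3=<operator-3-private-key>
AVS_GOVERNANCE_ADDRESS=0x874343CB2CaCf30CbbB60CF1C483D7E169230E68
ATTESTATION_CENTER_ADDRESS=0x8feb0306F8420436C0bc4054C6e670F131eAF573
```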
$ docker-compose up --build
This should spin up the AVS services along with the parallel execution helper that the AVS utilizes.
Navigate to the Attestation Center contract to see attestations being posted on-chain every 5 seconds while the AVS is running. To learn about the format of these attestations and how to decode them, please refer to the section [below](#Format-of-attestations-posted-on-chain).
$ cd ../../
$ npm run dev
The batching algorithm is the true intellectual property and "secret sauce" of our system.
The AVS looks at the current Ethereum mempool, determines the state/slot/storage accesses of each pending transaction, and efficiently creates maximally parallelizable batches, i.e., batches that fit the maximum number of transactions (transactions whose state accesses do not collide can be grouped together in a parallelizable batch). These batches (there can be multiple) are then assembled into a block and proposed for the sequencer/proposer to include.
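The grouping idea above can be sketched as a greedy pass over the mempool: each transaction joins the first batch whose accumulated state accesses it does not collide with. This is a minimal illustration only, not the actual AVS batching algorithm; the transaction structure and (address, slot) access representation are assumptions:

```python
# Hypothetical sketch: group transactions into parallelizable batches so that
# no two transactions in a batch touch the same (address, storage slot) pair.
def build_parallel_batches(txs):
    """Greedily place each tx into the first batch whose accumulated
    state accesses are disjoint from the tx's own accesses."""
    batches = []          # list of batches, each a list of txs
    batch_accesses = []   # accumulated (address, slot) set per batch
    for tx in txs:
        placed = False
        for i, accessed in enumerate(batch_accesses):
            if accessed.isdisjoint(tx["accesses"]):
                batches[i].append(tx)        # no collision: runs in parallel
                accessed |= tx["accesses"]   # record its accesses
                placed = True
                break
        if not placed:                       # collides with every batch
            batches.append([tx])
            batch_accesses.append(set(tx["accesses"]))
    return batches

txs = [
    {"hash": "0xa1", "accesses": {("0xToken", 1)}},
    {"hash": "0xb2", "accesses": {("0xToken", 2)}},  # different slot: parallel
    {"hash": "0xc3", "accesses": {("0xToken", 1)}},  # collides with 0xa1
]
print([[t["hash"] for t in b] for b in build_parallel_batches(txs)])
# → [['0xa1', '0xb2'], ['0xc3']]
```

The real algorithm has to do better than a simple greedy pass (and handle accesses discovered dynamically during execution), but the collision-disjointness invariant per batch is the same.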
The blocks formed right now depend on the degree of parallelizability of the constituent transactions. In the future, factors like transaction fees and potential MEV can be combined into an equation to find the most optimal reward-bearing blocks for the proposer/sequencer, but right now we are optimizing for maximal parallelizability.
a) AVS deployed using Othentic stack
b) Attestation smart contracts, which keep a store of block constituent hashes to hold operators accountable for valid ordering (invalid ordering is detected when the block is actually executed)
c) Parallel Execution Batcher Helper utilized by the AVS
d) UI that fetches and displays the most parallelizable block options
e) A minimal geth (execution engine) update spec and implementation change that utilizes our parallelizable batch creation AVS
An EigenLayer AVS has been utilized by the project. Its role is to bring trust and accountability to creating valid parallelizable batches of transactions within a block.
Overall, the penalties incurred by attempting to execute in parallel, detecting issues/conflicts, and having to queue transactions sequentially afterwards end up wasting time, which can actually make execution slower than a sequential approach (as shown in Saraph and Herlihy's paper 'An Empirical Study of Speculative Concurrency in Ethereum Smart Contracts' [https://arxiv.org/pdf/1901.01376], where the time penalties for detecting conflicts and re-executing proved counter-productive for parallelization approaches). In other words, parallel execution only works efficiently when batches of parallelizable transactions are pre-decided and remain valid (collision-free) during execution. This underlines the importance of being correct when creating parallelizable batches, or of trusting whoever does it for you. The AVS helps mitigate trust here by making at least one EigenLayer operator accountable for proposing these parallelizable blocks.
Our project intends to bring parallelization to existing Ethereum, L2s and current EVM chains.
Past approaches to this have mainly followed two lines of thought: a) speculative concurrency (with or without separation of nodes), and b) access lists.
Speculative-concurrency approaches tend to show little benefit over a sequential model in practical settings. Access-list approaches, on the other hand, work with prefetched/provided data to determine the most effective execution. However, access lists are hard for a sole sequencing/proposer node to determine accurately before a transaction is actually executed, and enforcing a strict access-list requirement is still difficult from the perspective of a user or a commonly used wallet. As a result, parallel execution approaches have been discussed, and the urgency of the problem highlighted, several times in the past, but no approach has truly solved it yet.
Overall,
a) This is a very urgent problem. While the L1 might not see immediate effects of parallel execution (since propagation time must still be accounted for), it will save execution time, potentially enabling reduced block times and faster client synchronization. The effect on L2s is more apparent: with alt-DA layers now supporting L1 txdata inclusion, the limits to scalability depend on the sequencer's execution limits.
b) This approach works on the current EVM, unlike approaches that require creating or transitioning to an alt-L1. The parallel execution clients assist those who use them, but are not a necessity.
c) Parallelization and its effects are deterministic in this project, which makes calculating the efficiency easier; furthermore, smart agents can be utilized to identify maximally parallelizable groups (even for transactions whose state accesses change dynamically during execution)
d) This is good for Ethereum!
We have initiated work on the client that makes use of our smart ordering (batching) AVS, by creating specs and some implementation changes. While this client simply follows the ordering dictated by the AVS, and is therefore relatively simple, more work needs to be done to develop it.
Our perspective:
The goal of parallelization is to increase execution speed, translating into higher throughput for blockchain networks. This benefits users through reduced transaction fees and benefits proposers by enabling them to process more transactions per second. Proposers can afford to accept smaller fees per transaction while earning higher total fees due to the increased transaction volume.
Our analysis indicates that the median sustainable number of parallel groups per block (containing 109 transactions) is 32. With this configuration, the degree of parallelization is approximately 109 / 32 ≈ 3.4. To quantify the throughput improvement, we model the total block time as the sum of block propagation time (p) and block computation time (c):
Total block time = p + c seconds
The throughput is given by:
Throughput = Block size / (p + c)
With computation reduced by a median factor of 3.4 through parallelization, the new throughput becomes:
New Throughput = Block size / (p + c/3.4)
Assuming a total block time of 12 seconds split evenly among p = c = 6 seconds, the new throughput improves from 9.08 to 14.04 transactions per second, representing a 1.54x increase. This substantial improvement demonstrates the real-world impact of our parallel transaction processing approach.
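The arithmetic above can be checked directly; the numbers (109 transactions per block, 32 parallel groups, p = c = 6 s) are taken from the analysis in this section:

```python
# Back-of-the-envelope throughput model from the section above.
block_size = 109           # transactions per block (median, from our analysis)
p, c = 6.0, 6.0            # propagation and computation time in seconds
speedup = block_size / 32  # ~3.4x, from a median of 32 parallel groups

baseline = block_size / (p + c)            # sequential: computation takes c
parallel = block_size / (p + c / speedup)  # parallel: computation takes c/3.4

print(round(baseline, 2))  # → 9.08 tx/s
print(round(parallel, 2))  # → 14.04 tx/s
```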
Tracks Applied (11): Flow, EigenLayer (×3), Optimism, EthStorage, Base, Coinbase Developer Platform, okto, BNB Chain