ZKLife: ZK-enhanced on-chain AI Gaming

A cutting-edge take on Conway's Game of Life featuring cost-efficient on-chain play, AI opponents, and NFT rewards, all safeguarded by zero-knowledge proofs (ZKPs)

Built at ETHDenver 2024
Third prize

The problem ZKLife: ZK-enhanced on-chain AI Gaming solves

Our solution addresses two critical pain points in current on-chain gaming:

  1. Cost Efficiency: On-chain gaming can be prohibitively expensive. To tackle this, we use zero-knowledge proofs (ZKPs) to prove the entire evolution of the game off-chain. This significantly reduces costs while keeping the game verifiable and fair (a minimal sketch of the computation being proved appears below).
  2. Enhanced Gameplay: Building on Conway's Game of Life, we introduce a two-player competitive mode and AI agents. This adds exciting gameplay dynamics, including player-versus-AI matches with NFT rewards upon victory.

By addressing these issues, our on-chain game becomes more accessible through reduced costs while offering a richer, more engaging gameplay experience.
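
For context, here is a minimal Python sketch of the computation the proof attests to: standard Conway's Game of Life evolution on a bounded board. This is an illustration, not the ZKLife codebase; the prover and circuit interfaces are not shown, the 16x16 default board size is an assumption matching the uint256 encoding described later, and all names are our own.

```python
from collections import Counter

Cell = tuple[int, int]

def step(alive: set[Cell], width: int, height: int) -> set[Cell]:
    """One Conway generation on a bounded width x height board."""
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {
        (x, y)
        for (x, y), n in counts.items()
        if 0 <= x < width and 0 <= y < height
        and (n == 3 or (n == 2 and (x, y) in alive))
    }

def evolve(alive: set[Cell], generations: int,
           width: int = 16, height: int = 16) -> set[Cell]:
    """The full evolution a prover would run off-chain; the contract
    then only verifies a succinct proof of this computation."""
    for _ in range(generations):
        alive = step(alive, width, height)
    return alive
```

The point of the ZKP is that the contract never re-runs `evolve`; it verifies one succinct proof regardless of how many generations were played.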

Challenges we ran into

Reducing On-Chain Costs

  • We explored various solutions to reduce the on-chain costs of the game.
  • Initially, we considered using circom to write circuits and generate SNARK proofs. However, this approach scaled poorly and could support only simple game logic.
  • Another idea was to write the entire game in Solidity and deploy it on a rollup. However, rendering each game frame on the frontend would have required emitting all state data in events, which incurred high costs.
  • Ultimately, we opted to use a ZK coprocessor to prove the computational process, publishing the ZKP on-chain for verification. Additionally, we encoded the game board as a single uint256, with the edge bits of the canvas left undisplayed, significantly reducing costs (a sketch of this encoding follows the list).
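
Here is a minimal sketch of the board encoding, assuming a 16x16 grid packed row-major into a single 256-bit integer (16 x 16 = 256 bits, so the whole board fits one uint256 storage slot). The exact bit layout in ZKLife, and which "end bits" are hidden, may differ; our reading is that the border cells are kept in state but not rendered.

```python
SIDE = 16  # assumed board side length: 16 * 16 = 256 bits = one uint256

def pack(alive: set[tuple[int, int]]) -> int:
    """Encode live cells into one uint256-sized integer, row-major."""
    board = 0
    for (x, y) in alive:
        board |= 1 << (y * SIDE + x)
    return board

def unpack(board: int) -> set[tuple[int, int]]:
    """Decode the integer back into a set of live cells."""
    return {
        (x, y)
        for y in range(SIDE)
        for x in range(SIDE)
        if (board >> (y * SIDE + x)) & 1
    }

def visible(board: int) -> set[tuple[int, int]]:
    """Drop the border bits, which are stored but not displayed
    (our reading of 'each end bit of the canvas not displayed')."""
    return {
        (x, y)
        for (x, y) in unpack(board)
        if 0 < x < SIDE - 1 and 0 < y < SIDE - 1
    }
```

Packing the whole board into one word means each committed frame touches a single 256-bit value rather than one storage slot per cell, which is where the cost reduction comes from.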

AI Model Selection

  • We conducted extensive trials in selecting the appropriate AI model for the game.
  • Initially, we explored reinforcement learning approaches such as Q-learning, but found them impractical: the game's complexity and our limited time made training costs too high, and such models also require non-trivial heuristic fine-tuning, which was not feasible within the hackathon.
  • We ultimately adopted the Minimax search algorithm with a limited search depth, which integrates seamlessly with our existing technology stack and provides relatively fast feedback during gameplay (a minimal sketch follows this list).
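
Below is a minimal sketch of depth-limited Minimax in Python. The callbacks `moves`, `apply_move`, and `score` are hypothetical stand-ins for ZKLife's actual move generation and board evaluation, which are not shown here.

```python
from typing import Callable, TypeVar

State = TypeVar("State")
Move = TypeVar("Move")

def minimax(
    state: State,
    depth: int,
    maximizing: bool,
    moves: Callable[[State], list[Move]],        # legal moves in a state
    apply_move: Callable[[State, Move], State],  # resulting state after a move
    score: Callable[[State], float],             # heuristic board evaluation
) -> tuple[float, Move | None]:
    """Return (best score, best move), searching `depth` plies ahead."""
    options = moves(state)
    if depth == 0 or not options:
        # Leaf: fall back to the heuristic evaluation.
        return score(state), None

    best_move = None
    best = float("-inf") if maximizing else float("inf")
    for move in options:
        value, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, moves, apply_move, score)
        if (maximizing and value > best) or (not maximizing and value < best):
            best, best_move = value, move
    return best, best_move
```

Keeping `depth` small bounds the exponential blow-up in the branching factor, which is what gives the relatively fast feedback noted above.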
