SkyNet

Decentralised ML across the internet, powered by ZK.

The problem SkyNet solves

As Forbes has noted, the future of AI will belong to whoever has access to the most powerful computing resources. More and more users will need to train increasingly advanced models.

What if you could train a machine learning model in a decentralised way, with the epochs and data split across different machines over the internet? SkyNet aims to solve exactly that, enhanced with the power of zero-knowledge (ZK) proofs.

Existing solutions like Hivemind have a steep learning curve and are restricted to a local network of machines. Federated learning is likewise restricted to a local network (unless private IPs are exposed) and is focused primarily on data privacy.

SkyNet follows the flow below, coupled with the power of zero-knowledge proofs:

  1. Choose or upload data.
  2. Select a model from the marketplace (for inference) or upload a model architecture for training.
  3. Specify the computing resource requirements and budget.
  4. A push notification is sent to all interested participants.
  5. When the requirements are met, the task is assigned to the accepting participants and logged in the smart contract.
  6. The workflow is managed over Waku's P2P stack, which provides reliable communication with end-to-end delivery confirmations and uniquely identifies nodes in the subnet. It is used to exchange weights during training without revealing them to anyone else.
  7. Every trained model gets a generated zkProof verifier contract. Proofs are generated for every worker running the model, and these proofs can be verified by the contract deployed on the Scroll zkEVM.
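Conceptually, the assignment and weight-exchange steps amount to sharding the training data across the accepted workers and aggregating the weights they return. A minimal stdlib-only sketch of that idea (all names are illustrative, not SkyNet's actual API):

```python
def split_dataset(data, n_workers):
    """Partition training examples into roughly equal shards, one per worker."""
    k, r = divmod(len(data), n_workers)
    shards, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < r else 0)  # first r shards get one extra item
        shards.append(data[start:end])
        start = end
    return shards

def aggregate_weights(worker_weights):
    """Element-wise average of the weight vectors returned by workers."""
    n = len(worker_weights)
    return [sum(ws) / n for ws in zip(*worker_weights)]
```

For example, `split_dataset(list(range(10)), 3)` yields three shards of sizes 4, 3, and 3; after each worker trains on its shard, `aggregate_weights` combines the resulting weight vectors into one.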

Challenges we ran into

During the course of our project development, we encountered several challenges that required thoughtful resolution:

Peer-to-Peer Connection with Waku:
Establishing a reliable peer-to-peer connection with Waku proved challenging due to intermittent failures in the relay server. We had to investigate the server's reliability issues and add safeguards, such as reconnection logic, to ensure consistent and dependable connections.
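One common safeguard for a flaky relay is retrying the connection with exponential backoff and jitter. A minimal sketch (the `connect` callable is a stand-in for a relay dial; this is not the Waku SDK API):

```python
import random
import time

def connect_with_backoff(connect, max_attempts=5, base_delay=0.5):
    """Retry a flaky connection attempt with exponential backoff and jitter.

    `connect` is any zero-argument callable that raises ConnectionError on
    failure, e.g. a hypothetical wrapper around dialing a Waku relay node.
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the failure
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Backoff spreads retries out so a briefly unavailable relay is not hammered, while jitter avoids many peers retrying in lockstep.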

Integration of Project Components:
Efficiently consolidating the various project components, including the smart contracts, Waku messaging, storage, and the training workflow, posed a significant challenge.

File Upload Using Lighthouse:
File uploads through Lighthouse proved difficult at first; with guidance from our mentors, we were able to resolve the issue.
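The upload step can be isolated behind a thin wrapper so the rest of the pipeline only deals with content IDs and ownership records. A toy sketch using an in-memory stand-in for a Lighthouse-style client (this is not the real Lighthouse SDK):

```python
import hashlib

class InMemoryStorageClient:
    """Toy stand-in for a Lighthouse-style client: stores bytes, returns an ID."""
    def __init__(self):
        self.store = {}

    def upload(self, payload: bytes) -> str:
        # Fake content ID derived from the payload hash (real Lighthouse
        # returns an IPFS CID; this is illustrative only).
        cid = hashlib.sha256(payload).hexdigest()[:16]
        self.store[cid] = payload
        return cid

def upload_and_register(client, payload: bytes, registry: dict, owner: str) -> str:
    """Upload a 'Data' or 'Model' blob and record its owner in a registry."""
    cid = client.upload(payload)
    registry[cid] = owner
    return cid
```

Keeping the client behind this seam also made it easier to debug the upload path independently of the rest of the workflow.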

Tracks Applied (7)

Arbitrum Track

Deployed the contract on Arbitrum Goerli: https://goerli.arbiscan.io/address/0xe778b31bce52f1f9dec9cd2fc036915aad03749b …

Filecoin Track

Used Lighthouse.storage to securely store 'Data' and 'Models' and transfer ownership rights. Applying for: Most Unique …

Waku Track

It is the primary backbone of the project and is used for the entire coordination of machines (peers) on a decentralized …

Alliance Track

Inspired by the Decentralised GPU idea from the list of ideas provided by Alliance, we truly believe that this could be a …

Push Protocol Track

As soon as a training job is requested, a notification is sent to all owners of compute resources to accept the job or d…

Lighthouse.storage Track

Used Lighthouse.storage to securely store 'Data' and 'Models' and transfer ownership rights. Whenever a new 'Data' object …

Scroll Track

We believe that decentralised training is feasible only if we have a correct proof of work. Every model trained has …
