FlSl

Scaling Fl

Created on 21st April 2024

The problem FlSl solves

FlSl is a way to train deep learning models on decentralised compute, using Zero-Knowledge proofs to make the process trustless and privacy-preserving.

Federated Learning is a privacy-preserving scheme for training deep learning models. Data stays in isolated pools: each client in the network trains a model with the base parameters on its own data and shares only the updated parameters with an aggregator, which computes the federated average of the set of models. The result becomes the new base model for the next training epoch.
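The round described above can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation: it assumes each client's model is a flat list of float parameters and that all clients are weighted equally (plain unweighted FedAvg).

```python
# Minimal sketch of one federated-averaging round.
# Assumption (not from the source): each client's update is a flat
# list of floats, and every client contributes equal weight.

def federated_average(client_params):
    """Average each parameter position across all client updates."""
    n_clients = len(client_params)
    return [sum(vals) / n_clients for vals in zip(*client_params)]

# Three clients return updated parameters after local training.
updates = [
    [1.0, 2.0],
    [3.0, 4.0],
    [2.0, 3.0],
]

# The averaged result becomes the base model for the next epoch.
new_base = federated_average(updates)
print(new_base)  # [2.0, 3.0]
```

In practice the average is usually weighted by each client's dataset size, but the equal-weight version above captures the aggregation step the text describes.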

To remove the dependency on a trusted server, we use ZK-proofs to make the aggregation trustless: the server publishes a proof alongside the aggregated model, so anyone can verify whether or not the computation was done correctly.
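The project's ZK circuit is not shown here, so the sketch below models only the property the proof provides: anyone can check the published aggregate without trusting the server. As a stand-in it uses hash commitments plus naive recomputation of the average; a real ZK-proof would replace step 2 with succinct verification that reveals nothing about the individual updates.

```python
# Hedged stand-in for trustless verification of the aggregate.
# Assumptions (not from the source): updates are flat float lists,
# and the "public transcript" is each update plus its hash commitment.
import hashlib
import json

def commit(params):
    """Hash commitment to a client's parameter list."""
    return hashlib.sha256(json.dumps(params).encode()).hexdigest()

def verify_aggregate(published_updates, published_commitments, claimed_avg):
    # 1. Each update must match the commitment posted before aggregation.
    if [commit(u) for u in published_updates] != published_commitments:
        return False
    # 2. Recompute the federated average and compare with the server's claim.
    #    (A real ZK-proof makes this check succinct and privacy-preserving.)
    n = len(published_updates)
    recomputed = [sum(vals) / n for vals in zip(*published_updates)]
    return recomputed == claimed_avg

updates = [[1.0, 2.0], [3.0, 4.0]]
commitments = [commit(u) for u in updates]

print(verify_aggregate(updates, commitments, [2.0, 3.0]))  # True
print(verify_aggregate(updates, commitments, [9.0, 9.0]))  # False
```

The point of the ZK construction is exactly that verifiers do not need the raw updates, as this naive version does; the commitments and the public verification step are what carry over.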

Challenges I ran into

Zero-Knowledge proofs were hard to understand.

Tracks Applied (1)

Morph Track

Contract deployed on the Morph L2 testnet: https://explorer-testnet.morphl2.io/address/0xe7f9a7D7945aCfa9180c60c2A4A8669566471faE

Technologies used
