
FlSl

Scaling Fl

Built at ETHDubai 2024 Hackathon

The problem FlSl solves

FlSl is a new way to train deep learning models: training is distributed across decentralised compute, and Zero-Knowledge proofs make the aggregation step trustless, so clients get stronger security and privacy without relying on a central server.

Federated Learning is a privacy-preserving scheme for training deep learning models. Data stays in isolated pools: each client in the network trains a model with the current base parameters on its own data and shares only the updated model parameters with an aggregator, which computes the federated average of this set of models. The result becomes the new base model for the next round of training.
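
A minimal sketch of the aggregation step, assuming parameters are exchanged as NumPy arrays keyed by layer name; the function and variable names here are illustrative and not taken from FlSl's code:

```python
import numpy as np

def federated_average(client_updates, client_sizes):
    """Weighted average of client parameters (FedAvg).

    client_updates: list of dicts mapping layer name -> np.ndarray
    client_sizes:   number of local training samples per client, used as weights
    """
    total = sum(client_sizes)
    return {
        name: sum((size / total) * update[name]
                  for update, size in zip(client_updates, client_sizes))
        for name in client_updates[0]
    }

# Toy round with two clients sharing a single weight matrix.
clients = [
    {"dense/w": np.array([[1.0, 2.0]])},
    {"dense/w": np.array([[3.0, 4.0]])},
]
new_base = federated_average(clients, client_sizes=[100, 300])
print(new_base["dense/w"])  # [[2.5 3.5]] -- pulled toward the larger client
```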

To remove the need to trust the aggregation server, we use ZK proofs: the server proves that it computed the federated average correctly, and the proofs are published so that anyone can verify the computation without re-running it or seeing the clients' updates.
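
To illustrate what "verifying the computation" means, here is a naive plaintext audit that simply recomputes the average and compares it to the aggregator's published result. In FlSl this check is replaced by verifying the published ZK proof, which avoids exposing individual client updates to the verifier; names such as `audit_round` are hypothetical:

```python
import numpy as np

def fedavg(updates, sizes):
    total = sum(sizes)
    return {name: sum((s / total) * u[name] for u, s in zip(updates, sizes))
            for name in updates[0]}

def audit_round(published_model, updates, sizes, tol=1e-6):
    """Recompute the federated average and compare it with what the
    aggregator published. In FlSl the verifier checks a ZK proof instead,
    so it never needs access to the raw client updates."""
    expected = fedavg(updates, sizes)
    return all(np.allclose(expected[name], published_model[name], atol=tol)
               for name in expected)

# A client auditing the aggregator's claimed result for one round.
updates = [{"dense/w": np.array([[1.0, 2.0]])},
           {"dense/w": np.array([[3.0, 4.0]])}]
claimed = {"dense/w": np.array([[2.5, 3.5]])}
print(audit_round(claimed, updates, sizes=[100, 300]))  # True
```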

Challenges I ran into

Zero-knowledge proofs were hard to understand.

Technologies used

