AI is rapidly growing and making its impact felt across the world. However, training AI models does not need to be costly or privacy-infringing. Federated Learning (FL) is a scheme in which a network of decentralised clients trains ML models in a privacy-preserving manner. A major problem with FL is the existence of malicious clients and free riders, which reduces the accuracy of the network.
We solve this problem by building on top of a proof-of-stake network. Clients that hold data and are responsible for training models must stake a set amount, which can be slashed upon detection of malicious behaviour. This provides economic security for the network.
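The stake-and-slash mechanism can be sketched as a simple ledger. This is a minimal illustration, not the on-chain implementation: the required stake, the slash fraction, and the `StakeLedger` class are all hypothetical values and names chosen for the example.

```python
# Illustrative staking ledger: clients deposit a required stake to join,
# and a fraction of it is burned when malicious behaviour is detected.
STAKE_AMOUNT = 1.0    # hypothetical required stake per training client
SLASH_FRACTION = 0.5  # hypothetical fraction of stake burned on slashing

class StakeLedger:
    def __init__(self):
        self.stakes = {}

    def register(self, client_id, deposit):
        # A client may only join the training network with sufficient stake.
        if deposit < STAKE_AMOUNT:
            raise ValueError("deposit below required stake")
        self.stakes[client_id] = deposit

    def slash(self, client_id):
        # Burn a fraction of the stake of a client flagged as malicious.
        self.stakes[client_id] *= (1 - SLASH_FRACTION)
        return self.stakes[client_id]

ledger = StakeLedger()
ledger.register("client-a", 1.0)
remaining = ledger.slash("client-a")  # half the stake is burned
```

Because misbehaviour has a direct financial cost, rational clients are economically incentivised to train honestly.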
We identify malicious clients by calculating the distances between a given client's parameter vector and the vectors submitted by the other clients. If the majority of clients are honest, their results will cluster together and malicious clients will appear as outliers. Free riders are clients that do not meaningfully contribute to training the model. We compute a correlation score between each client's previous-epoch and current-epoch updates and penalise clients with low correlation scores.
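Both checks above can be sketched in a few lines. This is a simplified illustration of the idea, assuming Euclidean distance for the outlier test and Pearson correlation for the free-rider score; the outlier threshold (`k` times the median of mean distances) is a hypothetical choice that the write-up does not fix.

```python
import numpy as np

def flag_outliers(updates, k=1.5):
    """Flag clients whose parameter update is far from everyone else's.

    updates: dict mapping client_id -> parameter vector (np.ndarray).
    A client is flagged when its mean distance to the other updates
    exceeds k times the median of those mean distances.
    """
    ids = list(updates)
    vecs = np.stack([updates[i] for i in ids])
    # Pairwise Euclidean distances between all submitted update vectors.
    dists = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=-1)
    mean_dist = dists.sum(axis=1) / (len(ids) - 1)
    cutoff = k * np.median(mean_dist)
    return [i for i, d in zip(ids, mean_dist) if d > cutoff]

def correlation_score(prev_update, curr_update):
    """Pearson correlation between a client's previous- and current-epoch
    updates; a near-zero score suggests a free rider submitting noise."""
    return float(np.corrcoef(prev_update, curr_update)[0, 1])
```

With honest clients clustered near one another, a single client submitting a far-away vector is the only one flagged, while a client whose successive updates are uncorrelated receives a low `correlation_score` and can be penalised.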
We use Stackr to build a micro-rollup that maintains the state of the model parameters trained by each client after each epoch. The rollup acts as a Model Parameter Sharing (MPS) chain, where verifiable off-chain computation for the slashing conditions also takes place.
At the end of the day, users get access to trained models with high accuracy without having to bootstrap the infrastructure necessary to train something similar themselves. The user pays for this service, and the payment is split between the clients. The protocol also takes a small fee for facilitating the network.
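The payment split is simple arithmetic, sketched below. The 5% protocol fee and the equal per-client split are illustrative assumptions; the write-up only says the payment is shared among clients and that the protocol takes "a small fee".

```python
def split_payment(amount, clients, protocol_fee=0.05):
    """Split a user's payment equally among contributing clients,
    after deducting the protocol fee. The 5% fee is illustrative."""
    fee = amount * protocol_fee
    per_client = (amount - fee) / len(clients)
    return fee, {c: per_client for c in clients}

fee, payouts = split_payment(100.0, ["client-a", "client-b"])
```

A weighted split (e.g. proportional to each client's measured contribution) would be a natural refinement, but equal shares keep the example minimal.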
Federated Learning can suffer from slight leakage of private datasets. To counteract this, we explored homomorphic encryption, but after researching it for a while we decided against implementing it, having realised it would be impractical and computationally expensive for our use case.
Apart from this, the team faced sudden issues with React in the morning due to unknown reasons, but figured them out swiftly and proceeded with development. We also faced difficulties in understanding and implementing Stackr within our architecture, but with quick guidance from the team we were able to debug quickly and integrate the rollup service on top of our clients.
We thoroughly researched our architecture, working through the problems and solutions behind our idea. We read several research papers and tried to implement the schemes of BGFLS and BPFL. We also faced issues establishing a remote connection between several clients and our server in order to demo the project as realistically as possible.
Tracks Applied (6)
Arbitrum
Polygon
The Graph
Alliance
MetaMask
Scroll