Privacy-Preserving Machine Learning with Zero-Knowledge Proofs
Guarding Privacy in Sensitive Data Environments
Created on 11th August 2024
The problem Privacy-Preserving Machine Learning with Zero-Knowledge Proofs solves
In privacy-sensitive industries such as healthcare and finance, collaboration is often hindered by the need to protect sensitive data: organizations are reluctant to share it because of risks like breaches or misuse, which limits the improved outcomes that cooperative efforts could deliver. Our solution tackles this challenge with Zero-Knowledge Proofs (ZKPs), enabling multiple parties to collaborate securely and privately on tasks like machine learning or data analysis without exposing their sensitive data. Each party's contribution can be verified without revealing the underlying data, making the collaboration trustless, removing barriers to cooperation, and improving both efficiency and security in these critical industries.
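To make the "verify without revealing" idea concrete, below is a minimal, self-contained sketch of that pattern as a non-interactive Schnorr proof of knowledge with a Fiat-Shamir challenge. It is only an analogue of the project's zkML flow, not the NovaNet stack, and the group parameters are deliberately toy-sized for readability rather than security.

    # Toy illustration of proving knowledge of a secret (standing in for
    # private data or model weights) without revealing it.
    import hashlib
    import secrets

    # Small safe-prime group, for readability only; real deployments use
    # large standardized groups or elliptic curves.
    p = 10007              # safe prime: p = 2q + 1
    q = 5003               # prime order of the quadratic-residue subgroup
    g = 4                  # generator of that subgroup

    def hash_to_challenge(*parts: int) -> int:
        # Fiat-Shamir: derive the challenge by hashing the public transcript.
        data = b"".join(x.to_bytes((x.bit_length() + 7) // 8 or 1, "big") for x in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def prove(x: int):
        # Prover: convince anyone it knows x with y = g^x mod p, without revealing x.
        y = pow(g, x, p)
        r = secrets.randbelow(q)        # fresh nonce
        t = pow(g, r, p)                # commitment
        c = hash_to_challenge(g, y, t)  # challenge
        s = (r + c * x) % q             # response
        return y, t, s

    def verify(y: int, t: int, s: int) -> bool:
        # Verifier: checks the proof using only public values.
        c = hash_to_challenge(g, y, t)
        return pow(g, s, p) == (t * pow(y, c, p)) % p

    secret = secrets.randbelow(q)       # the private value never leaves the prover
    assert verify(*prove(secret))

In the project, the same principle is applied to machine-learning workloads: a proof attests that a computation over private data was performed correctly, and verifiers check the proof rather than the data itself.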
Challenges I ran into
We aimed to use machine learning models beyond those available in the NovaNet machine learning library. We also ran into challenges with proof aggregation, which we had expected to be handled within NovaNet, so we decided to defer it to a future development phase. Folding schemes, not yet implemented, could facilitate seamless integration across different systems and pave the way for future interoperability.
Tracks Applied (3)
Privacy (NovaNet)
Local Verifiable Compute (NovaNet)
Grand prize (NovaNet)