Nowadays, with the massive adoption of generative AI models, it is hard to trust any video we see on the news, TikTok, YouTube, etc. Therefore, we have to be able to verify the authenticity of videos, at least for important subjects. There has been some effort on building classifier models that can distinguish fake videos or images from authentic ones. However, the accuracy of those models is generally not within an acceptable margin, and their availability and usability are usually limited.
Another approach is to accompany videos with certificates (e.g., based on digital signatures). However, in most scenarios we may only have access to a portion of a video (e.g., when watching news clips or YouTube Shorts) that is a trimmed/edited version of a much longer original. This project is, to the best of our knowledge, the first to tackle this challenge by generating zero-knowledge proofs of authenticity for a trimmed/edited video with respect to its original (trusted) source. The main advantage of our approach is that it can be used trustlessly: any community or organization (different news agencies or social media platforms) can define their own standards on this platform, while every user is able to verify the authenticity of published videos without any trust assumptions on the authorities.
LINK TO THE DEMO: https://drive.google.com/file/d/1ZmBq3w8etOuvAoCBCAOipt33lB7YP6sD/view?usp=drive_link
The main challenge was the limited time relative to the large amount of coding and circuit design this project required. We are definitely not done with ProvenView and will continue its development. We (two people) started working on this idea on May 5th (almost two weeks ago). After around 2,000 lines of code across different programming stacks (Rust, Python, Circom, Solidity, and some bash scripts), we still had to make many compromises to arrive at something presentable for the Hackathon.
Another ongoing challenge is the complexity of the system: from the commitment phase to proof generation and verification, it requires precise parameter selection to find the best possible middle ground between ideal security and practicality. Designing a dynamic commitment method for a video is challenging on its own, even before considering the ZK proof of knowing an untampered (trimmed) sequence of arbitrary length within that video.
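To illustrate why a dynamic commitment helps here, the following is a minimal, hypothetical sketch (not the actual ProvenView scheme) of a hash-chain commitment over fixed-size video segments. The names `commit` and `check_trim` and the use of SHA-256 are our illustrative choices; the point is that re-hashing a contiguous slice between two intermediate chain values proves the slice is untampered.

```python
# Hypothetical sketch of a chained commitment over video segments.
# In a real system the intermediate chain values would stay private
# and this check would run inside the ZK circuit.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(segments: list[bytes], seed: bytes = b"provenview") -> list[bytes]:
    """Build the full chain h_0..h_n; the last value h_n is the public commitment."""
    chain = [_h(seed)]
    for seg in segments:
        chain.append(_h(chain[-1] + seg))
    return chain

def check_trim(trimmed: list[bytes], h_before: bytes, h_after: bytes) -> bool:
    """Check that `trimmed` is an untampered contiguous slice: re-hashing
    from the chain value just before the slice must reproduce the chain
    value just after it."""
    acc = h_before
    for seg in trimmed:
        acc = _h(acc + seg)
    return acc == h_after

segments = [b"frame0", b"frame1", b"frame2", b"frame3"]
chain = commit(segments)
print(check_trim(segments[1:3], chain[1], chain[3]))    # genuine trim: True
print(check_trim([b"deepfake"], chain[1], chain[2]))    # edited segment: False
```

A plain chain like this forces the verifier to learn the intermediate hashes; hiding them while still proving the link from the trimmed slice back to the final commitment is exactly where the ZK proof comes in.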
To overcome the complexity on the prover side, we use Nova's folding-based zkSNARKs. The final proof in Nova is a compressed Spartan proof. Although it is fairly easy and very cheap to verify a Spartan SNARK on almost any commodity device, verifying it on-chain in Solidity is harder compared to first-generation zkSNARKs such as Groth16 or PLONK. To meet the Hackathon deadline, we had to write some mock functions in Solidity that verify Spartan zkSNARK proofs.
Tracks Applied (6)
Aleph Zero
Nethermind Research
Polygon
zkLighter