FakeFree

Real-Time Assurance Against Digital Manipulation

The problem FakeFree solves

Our project falls under the OPEN INNOVATION category. It addresses the escalating threat posed by deepfake technology in video content. As deepfake techniques grow more sophisticated, distinguishing authentic videos from manipulated ones has become increasingly difficult. This is a significant problem because deepfakes can be used to spread misinformation, manipulate public opinion, and even defame individuals.

Our project aims to solve this problem by developing advanced techniques for detecting facial manipulation in videos. By accurately identifying deepfakes, we help mitigate the harmful effects of misinformation and safeguard the integrity and trustworthiness of video content in contexts ranging from news media to entertainment and beyond.

Furthermore, FakeFree emphasizes user empowerment and engagement in combating misinformation. By giving users tools to actively participate in content verification and deepfake detection, it promotes a culture of critical thinking and digital literacy, both essential for navigating the complexities of the digital age.

Challenges we ran into

Cross-Browser Compatibility: Ensuring compatibility across different browsers and versions added complexity to the development process. We conducted thorough testing to identify and address any compatibility issues that arose.
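One common way to contain this complexity is to resolve the browser API namespace once, up front. The snippet below is a minimal sketch of that pattern, not FakeFree's actual code: Firefox exposes a promise-based `browser` namespace, while Chrome exposes `chrome`, and aliasing them keeps the rest of the extension uniform.

```typescript
// Minimal cross-browser shim (illustrative sketch).
// Firefox exposes a promise-based `browser` namespace; Chrome exposes
// `chrome`. Aliasing once keeps downstream extension code uniform.
declare const browser: typeof chrome | undefined;

const ext: typeof chrome =
  typeof browser !== "undefined" ? browser : chrome;

async function activeTabUrl(): Promise<string | undefined> {
  // With no callback argument, tabs.query returns a Promise in both
  // Chrome (Manifest V3) and Firefox.
  const [tab] = await ext.tabs.query({ active: true, currentWindow: true });
  return tab?.url;
}
```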

Preprocessing Real-Time Images: Preprocessing the real-time images captured from the current tab posed a significant challenge. Each captured frame had to be appropriately formatted and normalized before being fed into the deep learning models for inference. Through iterative testing and experimentation, we arrived at a preprocessing pipeline that handles this step reliably.
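The sketch below illustrates what such a capture-and-preprocess step could look like. The 224x224 input resolution and the [0, 1] normalization are assumptions chosen for illustration, not FakeFree's actual model parameters.

```typescript
const INPUT_SIZE = 224; // assumed model input resolution, for illustration

async function captureAndPreprocess(): Promise<Float32Array> {
  // Screenshot the visible area of the current tab as a PNG data URL.
  // Requires the appropriate host or activeTab permission.
  const dataUrl: string = await chrome.tabs.captureVisibleTab();

  // Decode the screenshot and resize it to the model's input resolution
  // using an offscreen canvas.
  const blob = await (await fetch(dataUrl)).blob();
  const bitmap = await createImageBitmap(blob);
  const canvas = new OffscreenCanvas(INPUT_SIZE, INPUT_SIZE);
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(bitmap, 0, 0, INPUT_SIZE, INPUT_SIZE);

  // Drop the alpha channel and scale pixel values from [0, 255] to [0, 1].
  const { data } = ctx.getImageData(0, 0, INPUT_SIZE, INPUT_SIZE);
  const tensor = new Float32Array(INPUT_SIZE * INPUT_SIZE * 3);
  for (let i = 0, j = 0; i < data.length; i += 4) {
    tensor[j++] = data[i] / 255;     // R
    tensor[j++] = data[i + 1] / 255; // G
    tensor[j++] = data[i + 2] / 255; // B
  }
  return tensor;
}
```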

Deployment and Model Fitting: Deploying the model and integrating it with the Chrome extension was another major challenge. We needed seamless integration between the model backend and the frontend extension, taking into account communication protocols, data exchange formats, and latency constraints. Additionally, adapting the model to work within the browser environment required careful tuning and optimization to achieve efficient inference.
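As a sketch of the extension-to-backend handoff, the following assumes a hypothetical HTTP inference endpoint (`INFERENCE_URL`) that accepts the flattened pixel tensor as JSON and returns a detection verdict. The endpoint URL, payload shape, and response fields (`isFake`, `confidence`) are all illustrative assumptions, not FakeFree's actual API.

```typescript
const INFERENCE_URL = "https://example.com/api/detect"; // hypothetical endpoint

interface DetectionResult {
  isFake: boolean;    // assumed response field
  confidence: number; // assumed response field, in [0, 1]
}

async function classifyFrame(tensor: Float32Array): Promise<DetectionResult> {
  const response = await fetch(INFERENCE_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // JSON keeps the exchange format simple to debug during development.
    body: JSON.stringify({ pixels: Array.from(tensor) }),
  });
  if (!response.ok) {
    throw new Error(`Inference request failed: ${response.status}`);
  }
  return (await response.json()) as DetectionResult;
}
```

A binary payload (for example, the raw Float32Array bytes) would cut request size and latency compared with JSON, which is one of the trade-offs such an integration has to weigh.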
