IncognitoGPT

Your Secrets are Safe in Conversation

The problem IncognitoGPT solves

Problem:

Privacy and data security are major concerns in the digital era. Large language models, when used without safeguards, put user privacy and corporate confidentiality at risk. This includes the exposure of personal information such as names, addresses, Aadhaar card details, and phone numbers, as well as company-confidential data such as project information, revenue figures, and client information.

Consequences:

The lack of protection in language models can have serious consequences. Data breaches can lead to identity theft and other harmful outcomes. Recent cases, such as Samsung restricting employee use of ChatGPT after internal data leaks, highlight the need for stronger security measures. Regulators have also stepped in: Italy's data protection authority temporarily banned ChatGPT over compliance concerns, underscoring the importance of robust privacy measures.

Solution: IncognitoGPT - Protecting Privacy and Confidentiality

IncognitoGPT ensures that user prompts are handled securely and that sensitive information is redacted and anonymized before being processed by online models.

IncognitoGPT implements a two-step process to protect user privacy and confidentiality:

  1. Local Processing: The user prompt is first sent to Vicuna, a model hosted locally on the user's device or on an enterprise's internal servers. Vicuna performs redaction and anonymization, removing all personally identifiable information (PII) and confidential data from the prompt.

  2. Online Model Interaction: The sanitized prompt is then securely sent to the online model, such as GPT-3.5, to generate the response. The response is received by Vicuna, which analyzes it to ensure its relevance and appropriateness for the user prompt, filtering out any unintended or potentially sensitive information.

By locally redacting and anonymizing data and verifying the response, IncognitoGPT lets users take advantage of online language models securely while safeguarding personal and confidential information.
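
A minimal sketch of this redact-and-restore flow is shown below. It is illustrative only: the PII detection here uses a few hand-written regular expressions and hypothetical function names (redact, restore, ask_online_model), whereas IncognitoGPT delegates detection, anonymization, and response verification to the locally hosted Vicuna model.

```python
import re
from typing import Dict, Tuple

# Illustrative patterns for a few PII types mentioned above; a real
# deployment relies on the local model rather than hand-written rules.
PII_PATTERNS = {
    "PHONE": re.compile(r"\b\d{10}\b"),
    "AADHAAR": re.compile(r"\b\d{4}[\s-]?\d{4}[\s-]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(prompt: str) -> Tuple[str, Dict[str, str]]:
    """Replace PII with numbered placeholders and remember the mapping."""
    mapping: Dict[str, str] = {}
    sanitized = prompt
    for label, pattern in PII_PATTERNS.items():
        for i, value in enumerate(pattern.findall(sanitized)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = value
            sanitized = sanitized.replace(value, placeholder, 1)
    return sanitized, mapping

def restore(response: str, mapping: Dict[str, str]) -> str:
    """Put the original values back into the online model's response."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response

def ask_online_model(sanitized_prompt: str) -> str:
    """Placeholder for the call to the online model (e.g. GPT-3.5)."""
    raise NotImplementedError

if __name__ == "__main__":
    sanitized, mapping = redact("Call me at 9876543210 about the ACME invoice.")
    print(sanitized)  # -> "Call me at <PHONE_0> about the ACME invoice."
    print(mapping)    # -> {"<PHONE_0>": "9876543210"}
    # answer = restore(ask_online_model(sanitized), mapping)
```

Because the placeholder-to-value mapping never leaves the local side, the online model only ever sees placeholders, while the user still receives a fully restored answer.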

Challenges we ran into

  1. Running the Vicuna language model locally was the toughest part, due to its heavy RAM requirements and the very limited documentation available for macOS.
  2. The interaction between the Vicuna model and the web UI: Vicuna runs in the local OS terminal, while the web UI is written in JavaScript, so passing prompts and responses between the terminal and the web interface was a challenge (one possible bridge is sketched after this list).
  3. Figuring out the logic for adding the secured information back into GPT-4's response was a challenge.
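
For the second challenge, one common pattern is to wrap the terminal-run model in a small local HTTP endpoint that the JavaScript UI can call. The sketch below is not the project's actual bridge: the vicuna-cli binary, its flags, and the /sanitize route are assumptions for illustration, using Flask since the backend already uses it.

```python
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/sanitize", methods=["POST"])
def sanitize():
    prompt = request.get_json(force=True).get("prompt", "")
    # Feed the prompt to the locally running model over stdin and read
    # the redacted prompt back from stdout.
    result = subprocess.run(
        ["./vicuna-cli", "--task", "redact"],  # hypothetical command
        input=prompt,
        capture_output=True,
        text=True,
        timeout=120,
    )
    return jsonify({"sanitized": result.stdout.strip()})

if __name__ == "__main__":
    app.run(port=5000)
```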

Tracks Applied (2)

Ethereum x AI Track

Users who don't have enough computing power can delegate the processing to a trusted person. But that person needs t...

ETHIndia

Google Infra

The project's backend server is written in Flask and hosted on Google Cloud Platform. Further application of Google cl...

Google
