2PM.Network

Privacy Computing, AI Model Ecosystem, Distributed Computing, AI Monetization, Modularity

The problem 2PM.Network solves

2PM.Network is a public application network for privacy computing models (2PM stands for Public, Privacy, and Models) that relies on any EVM blockchain for security and verification services.

2PM.Network fractionalizes AI model copyrights through the Story Protocol and distributes income as royalties to contributors of private data and computing power.

2PM.Network incentivizes AI model users to stake or restake Ethereum through an open, usage-based plan for its AI inference infrastructure. Staking strengthens ecosystem security and purchases active verification services that check the data, the computation process, and the results.
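
The staking flow above is described only at a high level. As a rough illustration (a toy model, not 2PM.Network's actual contracts; the quota rate and slashing rule here are assumptions), staked ETH could both grant inference usage and serve as slashable collateral backing verification:

```python
from dataclasses import dataclass, field

# Assumed rate: inference calls granted per staked ETH (illustrative only).
QUOTA_PER_ETH = 1_000

@dataclass
class StakeLedger:
    """Toy ledger: staked ETH buys inference quota and doubles as
    slashable collateral backing the verification service."""
    stakes: dict[str, float] = field(default_factory=dict)  # address -> ETH staked
    quota: dict[str, int] = field(default_factory=dict)     # address -> calls left

    def stake(self, addr: str, eth: float) -> None:
        self.stakes[addr] = self.stakes.get(addr, 0.0) + eth
        self.quota[addr] = self.quota.get(addr, 0) + int(eth * QUOTA_PER_ETH)

    def consume(self, addr: str) -> bool:
        """Spend one inference call if the user has quota left."""
        if self.quota.get(addr, 0) <= 0:
            return False
        self.quota[addr] -= 1
        return True

    def slash(self, addr: str, fraction: float) -> float:
        """Burn part of a stake whose verification is proven wrong."""
        penalty = self.stakes.get(addr, 0.0) * fraction
        self.stakes[addr] = self.stakes.get(addr, 0.0) - penalty
        return penalty

ledger = StakeLedger()
ledger.stake("0xabc", 2.0)          # 2 ETH -> 2,000 inference calls
assert ledger.consume("0xabc")
print(ledger.slash("0xabc", 0.10))  # a 10% slash burns 0.2 ETH
```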

2PM.Network is modular: anyone can use the framework to stand up a privacy computing node network that relies on the security of any EVM chain, train AI models on it, and expose APIs to downstream applications, thereby building an AI application ecosystem.

2PM.Network can also serve as an oracle network, delivering AI inference results to EVM blockchains and enabling on-chain applications such as prediction markets.
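
As a sketch of that oracle flow, the snippet below uses web3.py to publish an inference result to a hypothetical on-chain oracle exposing submitResult(bytes32,uint256). The ABI, contract address, and RPC endpoint are placeholders, not published 2PM.Network interfaces:

```python
from web3 import Web3

# Hypothetical oracle ABI: a single submitResult(bytes32,uint256) call.
ORACLE_ABI = [{
    "name": "submitResult",
    "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [{"name": "requestId", "type": "bytes32"},
               {"name": "result", "type": "uint256"}],
    "outputs": [],
}]

def push_inference_result(rpc_url: str, oracle_address: str,
                          request_id: bytes, prediction: int):
    """Broadcast an AI inference result so on-chain consumers
    (e.g. a prediction market) can read it. Assumes the node manages
    an unlocked account; production code would sign locally instead."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    oracle = w3.eth.contract(
        address=Web3.to_checksum_address(oracle_address), abi=ORACLE_ABI
    )
    return oracle.functions.submitResult(request_id, prediction).transact(
        {"from": w3.eth.accounts[0]}
    )
```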

Challenges we ran into

  1. Royalty Token Supply Management
    Problem: Story Protocol's Royalty Token implementation has a fixed supply. However, to keep pace with the network's growth and the ongoing contributions of its participants, we needed a mechanism to mint new tokens continuously.
    Solution: We adopted a model in which tokens are minted preemptively and then distributed according to contributions. This lets us grow the token supply in proportion to actual contributions, so the tokenomics reflect the evolving state of the network (the distribution step is sketched after this list).

  2. Quantifying Contributions of Network Participants
    Problem: It was challenging to quantify the contributions of network participants accurately, particularly the data they provide for training AI models: different types of data can have very different impacts on training and on the resulting model's performance.
    Solution: We implemented an algorithm that uses the infinity norm to evaluate how different data inputs affect model training. This gives a standardized measure of each contribution, making it feasible to distribute rewards fairly and transparently based on the value of the data provided (see the sketch after this list).
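
A minimal sketch of both solutions, under stated assumptions: the write-up does not specify which quantity the infinity norm is taken over, so this version scores each participant by the infinity norm of the model-parameter update their data induced, then splits a preemptively minted token batch pro rata. All names and numbers are illustrative:

```python
import numpy as np

def contribution_scores(participant_updates: dict[str, np.ndarray]) -> dict[str, float]:
    """Score each participant by the infinity norm (largest absolute
    component) of the model-parameter update their data induced."""
    return {pid: float(np.linalg.norm(delta, ord=np.inf))
            for pid, delta in participant_updates.items()}

def distribute_rewards(scores: dict[str, float], minted: float) -> dict[str, float]:
    """Split a preemptively minted batch of royalty tokens
    pro rata to contribution scores."""
    total = sum(scores.values())
    if total == 0:
        return {pid: 0.0 for pid in scores}
    return {pid: minted * s / total for pid, s in scores.items()}

# Illustrative parameter deltas from three data providers.
updates = {
    "alice": np.array([0.02, -0.15, 0.07]),
    "bob": np.array([0.30, 0.01, -0.04]),
    "carol": np.array([-0.09, 0.05, 0.11]),
}
scores = contribution_scores(updates)          # alice 0.15, bob 0.30, carol 0.11
print(distribute_rewards(scores, minted=1_000))
```

Under this reading, the infinity norm rewards the largest single parameter shift a contribution causes, which keeps scores comparable across datasets of different sizes; the exact quantity 2PM.Network measures may differ.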

Tracks Applied (1)

AI Track

In our ecosystem, AI models are treated as intellectual property (IP) assets. Network founders establish the training ne...
