IEF is a platform that enables RetroPGF Round 3 nominees to anonymously evaluate the impact of a random set of peer nominees' projects. Through this evaluation, they can attest to the accuracy of other projects' self-evaluations and help close the information gap faced by badgeholders who are not familiar with a specific category.
IEF proposes a 3-step process to address the challenge of measuring the impact of public goods projects applying for RetroPGF. This process is only possible with the creation and upkeep of an Impact Evaluation Framework that provides opt-in guidelines covering:
a) how KPIs for measuring impact are defined and measured (at a technical level),
b) a non-exhaustive set of KPIs commonly used by projects to evaluate their impact, and the logic behind them (useful as inspiration for creating new KPIs),
c) the importance of empowering operators to self-assess,
d) how to produce both quantitative and qualitative evaluations, and
e) the dangers of over-measuring.
This framework will reduce the coordination challenges currently faced by Optimism's badgeholders, specifically in categories where they are not experts but still need to make funding allocation decisions.
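To make the framework's guidelines more concrete, here is a minimal sketch of what a self-reported KPI entry and evaluation could look like. All field names are illustrative assumptions, not part of the IEF specification.

```typescript
// Hypothetical shape of a self-reported KPI entry under the framework.
// Field names are illustrative, not part of the IEF specification.
interface KpiEntry {
  name: string;                         // e.g. "Monthly active wallets"
  category: string;                     // RetroPGF category the KPI belongs to
  methodology: string;                  // how the number is produced, at a technical level
  value: number | string;               // quantitative value or qualitative summary
  period: { from: string; to: string }; // ISO-8601 dates the measurement covers
  dataSource: string;                   // where a reviewer can verify the raw data
}

interface SelfEvaluation {
  projectId: string;
  kpis: KpiEntry[];                     // non-exhaustive; projects opt in to what they report
  qualitativeNotes?: string;            // narrative context to avoid over-measuring
}

// Example self-reported evaluation a nominee might submit before reviewing peers.
const example: SelfEvaluation = {
  projectId: "ief-demo-project",
  kpis: [
    {
      name: "Monthly active wallets",
      category: "End User Experience & Adoption",
      methodology: "Distinct addresses interacting with the contract per calendar month",
      value: 1240,
      period: { from: "2023-08-01", to: "2023-08-31" },
      dataSource: "https://example.com/dashboard",
    },
  ],
  qualitativeNotes: "Growth driven by one integration; raw counts alone overstate impact.",
};
```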
Steps:
Leveraging UniRep, IEF ensures that only projects that have submitted a self-reported impact evaluation, and are therefore familiar with the Impact Evaluation Framework, can evaluate peers anonymously using zero-knowledge proofs. It also lets reputation carry over from one evaluation round to the next, per category, without requiring a member to forfeit their anonymity. Random, anonymous evaluation assignments help prevent collusion and retaliation by any party.
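Below is a minimal sketch of the random peer-assignment step only, assuming each participant is identified by an opaque id. In the actual flow, eligibility (has the reviewer submitted a self-evaluation?) and anonymity would be enforced by UniRep zero-knowledge proofs rather than plain ids; the function and type names here are hypothetical, not part of the UniRep SDK.

```typescript
type ProjectId = string;

// Assign each eligible reviewer a random subset of peer nominees to evaluate,
// excluding their own project. Uses a Fisher-Yates shuffle over the nominee pool.
function assignPeerReviews(
  eligibleReviewers: ProjectId[], // projects that proved they submitted a self-evaluation
  nominees: ProjectId[],          // all nominees in the same category
  reviewsPerReviewer: number
): Map<ProjectId, ProjectId[]> {
  const assignments = new Map<ProjectId, ProjectId[]>();
  for (const reviewer of eligibleReviewers) {
    const pool = nominees.filter((n) => n !== reviewer);
    for (let i = pool.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [pool[i], pool[j]] = [pool[j], pool[i]];
    }
    assignments.set(reviewer, pool.slice(0, reviewsPerReviewer));
  }
  return assignments;
}

// Example: four nominees, each reviewing two random peers.
const nominees = ["proj-A", "proj-B", "proj-C", "proj-D"];
console.log(assignPeerReviews(nominees, nominees, 2));
```

Note that Math.random() is only a placeholder; a production deployment would want a verifiable or at least unbiasable randomness source so that assignments cannot be gamed.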
I'm not a dev :D so forking the UniRep boilerplate was a challenge.