An Interview With Carl Cervone: On Open Source, Digital Public Goods Funding, and Impact Tracking

Written by terezabizkova | Published 2024/01/23
Tech Story Tags: open-source | web3 | impact-tracking | impact-verification | public-goods | blockchain-for-public-good | public-goods-funding | software-development

TL;DR: In the interview, Carl Cervone discusses his work on Open Source Observer, focusing on innovative ways to fund and measure the impact of digital public goods. He highlights the unique challenges in a decentralized ecosystem, emphasizing the importance of community-driven funding mechanisms and impact verification. Carl also touches on his involvement with the Hypercerts project, which aims to transform how public goods are funded and their impacts recognized. He advises newcomers in the field to start small and gradually increase their involvement, emphasizing community engagement and continuous learning.

Coincidentally, on the morning before the announcement of the eagerly awaited results of Optimism’s Retro PGF Round 3, I had the opportunity to sit down with a self-proclaimed “data nerd” and innovator in the space, Carl Cervone. Through projects like Open Source Observer, Carl works on programmatic mechanisms to fund public goods more sustainably. He is known for his expertise in impact verification, a key theme of our conversation, and his name is associated with projects like Hypercerts, Gitcoin, and Protocol Labs. Let’s dive in.

Carl, what are you working on these days?

Right now, my main project is Open Source Observer—it’s the first project we’re launching from a new company called Kariba Labs. We aim to find new ways to measure the impact of different open-source software contributions.

Today, we have a pretty well-established infrastructure for funding public goods in the real world, which includes everything from taxes to donating to nonprofits or companies engaging in social responsibility. But as more public goods are moving online and are influenced by digital technology, there’s a need for new structures to fund the essential public goods that support our digital ecosystems.

Naturally, one of the challenges is the decentralized nature of these public goods. They aren’t confined to any jurisdiction or municipality, and their contributors and users are global and diverse. So, how do we identify what impact looks like, how do we fund it, and how do we reward the people contributing the most to it? That’s the puzzle we’re trying to solve.

So, what does the digital public goods funding space look like today?

There’s a clear focus on public goods in the Ethereum ecosystem, covering everything from educational efforts to UX to more technically sophisticated parts of the core infrastructure stack. Interestingly, this desire spans across actors—individuals, DAOs with their own treasuries, even the leading Layer 2s like Optimism and Arbitrum, and of course Ethereum itself through the Ethereum Foundation. But even though they are extremely valuable, public goods don’t tend to have traditional revenue generation models. That’s why it’s necessary for the network to come up with different ways to fund these public goods and for the community to embrace these efforts by saying: “We value these public goods, and we need to fund them well.”

In practice, this means we have funding sources here and deserving projects there and a range of funding allocation mechanisms bridging the two.

On one end of the spectrum, there are more traditional structures like RFPs and grant-making cycles, where a foundation might say, “We have a certain amount of money to give out; you can apply.” Then, a team reviews each project, decides if they like it, and if so, awards funding.

On the other end of the spectrum are purely bottom-up, community-driven processes such as Gitcoin Grants. With Gitcoin, anyone can say, “We are a project; here’s our community, and we'd like to compete for some funding.” Every quarter, there are matching pools, and the amount of funding a project receives is determined by the number of votes it gets based on a specific formula. This approach has enabled hundreds, even thousands, of projects to receive funding every quarter. Many are small-scale, but some have grown from part-time projects to full-time employment for their teams. It's exciting because creating ways for people to work on public goods, especially those without traditional revenue models but that create immense value, addresses some of our biggest challenges. You essentially transform a role traditionally held by the government into a thriving, decentralized community.
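To make the matching-pool idea concrete, here is a minimal sketch of the classic quadratic funding rule often associated with Gitcoin-style rounds. Note this is the textbook formula, not Gitcoin’s exact production algorithm (which adds caps, sybil defenses, and other adjustments); the round names and amounts are made up for illustration.

```python
import math

def qf_match(contributions, matching_pool):
    """Toy quadratic-funding allocator.

    contributions: dict mapping project name -> list of individual
    donation amounts. Each project's matching weight is the square of
    the sum of the square roots of its donations, which rewards broad
    community support over a few large checks.
    """
    weights = {
        project: sum(math.sqrt(c) for c in donations) ** 2
        for project, donations in contributions.items()
    }
    total = sum(weights.values())
    return {p: matching_pool * w / total for p, w in weights.items()}

# Many small donors outweigh one whale donating the same total amount.
rounds = {
    "broad-support": [1.0] * 100,   # 100 donors giving $1 each
    "single-whale": [100.0],        # 1 donor giving $100
}
match = qf_match(rounds, matching_pool=10_000)
```

Under this rule, the broadly supported project captures almost the entire matching pool even though both projects raised the same $100 directly, which is exactly the community-signal property described above.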

All of this ties in closely with impact verification, right?

Exactly. And that’s why we need a three-sided marketplace. There are the funders, and then there are the projects. But for this to work, we need the third piece: the party that will measure which projects have the most impact. Ideally, funders should be able to show that their investment has been impactful—that they’re getting some kind of positive ROI on the funds they’re giving out. Otherwise, they might just give the money away without considering who it goes to.

Impact tracking has traditionally been dominated by experts and auditors who are really good at coming in and taking a proprietary approach to measuring a project’s success. While there are undeniable benefits to having subject matter experts, this approach doesn't completely align with our goal of decentralizing public goods funding. With digital public goods, we need a mix of grassroots, community-driven instruments complemented by expert insights to ensure impactful use of funds.

So, there’s this fascinating landscape opening up where we could democratize access to impact measurement. One of the strategies is to increase data availability on projects and their impact. We can also expand surface areas for those participating to include users, open-source contributors, downstream developers, and so on. And it’s essential to simplify the process by having experts translate their guidance into frameworks that can be replicated or models for people to work and experiment with. So, we need to build out this measurement and analysis piece to achieve a functioning impact market. Otherwise, we’ll have top-notch funding mechanisms and funding pools, but there won’t be any impact to show for it.

What are some ways to build up this measurement piece?

Imagine the distribution of funding as a curve. One approach might be to allocate the same amount of money to each of, say, 100 projects. Alternatively, you could fund only the single best project. Between these extremes, there are various possible configurations. The initial step is deciding the desired distribution shape—how flat or skewed it should be. After establishing this, the next step involves combining data analysis with community input to assess projects and determine their impact.

For open-source software, certain aspects are straightforward to identify using data. You can easily spot projects with little to no activity or those that only become active around funding announcements. Similarly, highly active and popular projects with diverse contributors can be quickly identified. These evaluations are important, though many systems don’t currently utilize this data effectively. The goal is to get there.

The tricky part is dealing with the middle category, with significant variation among projects. A single metric or analysis method probably won't do; what's needed is to clearly define the specific impacts and measurement methods, followed by a thorough analysis of the data to figure out which projects are doing the best in their respective areas. This stage might also require the community's involvement or insights about these projects from well-informed people for the final evaluations.
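The triage described above—easy calls at the extremes, a hard middle bucket—can be sketched with simple activity data. The field names and thresholds here are hypothetical, not Open Source Observer’s actual metrics:

```python
def triage(projects):
    """Split projects into obvious buckets using basic activity data.

    Each project dict carries hypothetical fields: `monthly_commits`
    (commit counts for the last 12 months) and `contributors` (count of
    distinct contributors). Clearly inactive and clearly strong projects
    are easy to flag; everything else lands in the hard middle bucket
    that needs deeper analysis and community input.
    """
    inactive, strong, middle = [], [], []
    for p in projects:
        if sum(p["monthly_commits"]) < 12:
            inactive.append(p["name"])        # little to no activity
        elif p["contributors"] >= 10 and min(p["monthly_commits"]) > 0:
            strong.append(p["name"])          # active, diverse team
        else:
            middle.append(p["name"])          # needs closer review
    return inactive, strong, middle

projects = [
    {"name": "ghost", "monthly_commits": [0] * 12, "contributors": 1},
    {"name": "anchor", "monthly_commits": [30] * 12, "contributors": 25},
    {"name": "niche", "monthly_commits": [5, 0, 3, 2, 0, 1, 4, 2, 0, 1, 3, 2],
     "contributors": 3},
]
inactive, strong, middle = triage(projects)
```

The point of the sketch is that no single metric resolves the `middle` bucket; that is where defined impact criteria and community insight have to take over.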

I imagine reviewing each project and its impact can overwhelm the everyday user. How can we make this better?

That's precisely the issue we're addressing with Open Source Observer. The data is out there—accessible through Block Explorers, GitHub, and other platforms. However, having access doesn't necessarily mean it's straightforward to compile or format the data for every type of analysis needed.

We focus on developing a simple, scalable system that facilitates this process—a platform where newcomers can easily navigate dashboards, search for projects, and make comparisons. For instance, before contributing to Gitcoin grants, you could quickly verify a project's activity and ongoing developments. We want users to see actual data: insights into how long a project has been active, its level of engagement, and the nature of its activities.

Then there's another step for the “data nerds”, a pretty small community, who want to go deep into analyzing this stuff. We're working on a way for these folks to get the data they need in a format that lets them focus entirely on the analysis, not cleaning the data, importing it, or figuring out how to store it all. We aim to take care of these initial steps so they can get straight to the detailed analysis they're passionate about.

Surely, data nerds produce invaluable insights for the broader community, too. Are there any incentives for this group?

Yes, this is a big challenge we need to address: figuring out who the most curious and interested individuals are and how to transition their hobby into something that's potentially compensated or even a full-time job or organization. I'd like to see more experiments like Optimism RetroPGF taking root within communities. Then, perhaps every quarter, we could identify those who have made significant contributions or designed something really useful for others and offer them some funding as a reward.

Speaking of compensating impact, Hypercerts comes to mind. You’re one of the key architects of the project; can you introduce it and talk about its journey?

Absolutely! Hypercerts is a data layer for tracking various types of impact claims. When someone claims they've made a positive impact, this can be broken down into several dimensions: who did the work, when it was done, a description of the work, and the resulting event. Hypercerts introduces a way to give funders ownership over these claims.
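The dimensions Carl lists map naturally onto a structured record. This is an illustrative data shape only—the field names are hypothetical and do not reproduce the hypercerts protocol’s actual on-chain schema:

```python
from dataclasses import dataclass

@dataclass
class ImpactClaim:
    """Illustrative shape of an impact claim: who did the work, when it
    was done, what the work was, and the resulting impact. Field names
    are made up for this sketch, not the hypercerts schema."""
    contributors: list   # who did the work
    work_start: str      # start of the work period (ISO date)
    work_end: str        # end of the work period (ISO date)
    description: str     # description of the work
    impact_scope: list   # the resulting impact

claim = ImpactClaim(
    contributors=["alice"],
    work_start="2023-01-01",
    work_end="2023-12-31",
    description="Maintained an open-source data indexer",
    impact_scope=["open-source software"],
)
```

In the actual protocol, ownership of a claim like this is denoted by an ERC-1155 token, as Carl explains next.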

This “ownership right” could mean simply being able to boast about funding on social media, or it could translate into financial returns, access to exclusive events, or membership privileges. The idea is to incentivize early funding by offering future benefits, which could be monetary, recognition-based, or other forms of social clout. Hypercerts captures this data and uses an ERC-1155 token to denote ownership of these impact claims.

Since launching a year ago, we've seen various interesting use cases emerge, from climate to journalism. It's still early, but we're eager to see how these initiatives develop. We anticipate that hypercerts will become a popular funding mechanism in certain communities, spurring a range of new experiments.

The ownership of the protocol now resides with the Hypercerts Foundation, which oversees the protocol and the infrastructure for creating hypercerts, and a growing community is starting to use it.

What’s been your favorite use case of hypercerts so far?

That’s a tough one! One unexpected and exciting use case has been recognizing the various contributions to events. Think about the impact of attending a crypto conference—the networking, the ideas shared. It's more than just attending; it's about the venue, sponsors, speakers, and even the small things like food and coffee. Hypercerts can track these contributions, from sponsors to speakers. This even extends to tracking ticket sales and, eventually, identifying highly impactful events. Ideally, this could lead to a secondary market to purchase that impact, helping us understand which events truly made a difference and funneling more funding to those who made them successful.

What would be your advice for builders who are relatively new to the public goods funding ecosystem and want to get more involved?

For those starting out, initiatives like RegenLearnings, which I recently co-launched with Kevin Owocki, can be a great entry point. The platform fosters a community passionate about regenerative economics, combining theoretical insights on mechanism design and public goods funding with practical, community-driven approaches. It's a place for learning, sharing, and applying these concepts.

Overall, navigating the transition from casual interest to full commitment in this ecosystem is a personal journey and varies for everyone. Some might be weaving this new interest into their busy work schedules, while others could be in between jobs, keen to dive deeper. The key is to ease into it, explore various facets, and find communities that resonate with you.

I wouldn’t exactly recommend flipping your professional life upside down and turning all your assets into Ethereum overnight. In web3, in particular, the narrative develops fast, so start with smaller contributions and see where that takes you. Often, these small contributions can pave the way to something bigger and more significant. It requires a unique kind of initiative different from the usual 9-to-5 job structure. Think of it as a marathon, not a sprint: Focus on building your presence and value in a specific community over time.

Lastly, how can people connect with you, and what events will you be attending this year?

You can check out some of my writing or contact me on Twitter. This year, I’m looking forward to EthDenver and Devcon in Thailand. Funding the Commons is another event I want to shill—there’ll be a few of them, but I’m most excited about the April one in San Francisco. I like that it seeks to bridge the gap between the blockchain and traditional software communities. After all, both share the same goal and have a lot of the same values, and I think there’s a lot of potential in bringing them together.


Written by terezabizkova | Tech writer/editor based in Colombia. Always curious. 💡
Published by HackerNoon on 2024/01/23