Recently the concept of token curated registries has seen a lot of buzz, from Laura Shin’s interview with Brian Kelley to Mike Goldin’s work, and for good reason. For those who are unfamiliar with the concept of TCRs, I highly recommend Mike Goldin’s piece on TCRs 1.0; it gives a great primer and gets a little into the weeds so you can understand the mechanism as well. Mike describes them briefly the following way:
A token-curated registry uses an intrinsic token to assign curation rights proportional to the relative token weight of entities holding the token. So long as there are parties which would desire to be curated into a given list, a market can exist in which the incentives of rational, self-interested token holders are aligned towards curating a list of high quality. Token-curated registries are decentrally-curated lists with intrinsic economic incentives for token holders to curate the list’s contents judiciously.
This type of registry applies far beyond what we traditionally think of as useful lists: the best colleges, the most wanted criminals, great restaurants. I see the future of token curated registries (TCRs) as the way to bring blockchain past quantitative questions and into qualitative questions. To elaborate, the most successful blockchains today are ones that ultimately only have to solve math problems. Bitcoin has to make sure that a user has the balance to send, Ethereum has to make sure that computation is executing correctly, and Sia has to make sure that you are storing your data properly. All of these problems are solvable via somewhat complex math operations and are verified by computers. This is nice, but it doesn’t quite satisfy the vision for blockchain as many people see it today. Computation alone can’t prevent mango farmers from spraying pesticides on your mangos. The difference between these blockchains and TCRs is that, as Ryan Selkis puts it, TCRs require proof-of-human-work. This means they can solve human problems that can’t be touched by computation.
At its core a TCR is just a model for binary decisions about anything. Ultimately the binary decision is left to the token holders through the voting process. In practice, this can lead to a wild variety of metrics that token holders use to make their decision. This is especially true of lists with vague or poorly defined entry characteristics. Imagine a TCR for the top colleges. What defines a top college? There are hundreds of metrics suitable for judging the quality of a college: quality of teachers, tuition price, notable research, student SAT score, etc. This doesn’t necessarily mean that the list will be improperly curated, but it does mean that the TCR will be highly conformist (due to rational token holders making the safest votes) and have obfuscated utility. This is the underlying thought process around framework-based TCRs.
Mike Goldin’s pieces on TCRs lay most of the groundwork for what they should look like. In essence: a list of entities who desire to signal a consumer benefit and are willing to risk or forfeit capital to do so, curated only by people who have a financial incentive to do it well. In his version the token holders are the active party in listings, responsible for removing active listees, challenging applicants, and setting network parameters (application length, minimum deposit, etc.). I propose a new method, framework-based TCRs (fTCRs), which takes a different approach.
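For concreteness, here is a rough sketch (in Python rather than an on-chain contract) of that classic apply/challenge/vote loop. The names (Registry, apply_listing, challenge), the parameter values, and the way vote totals are passed in are my own simplifications for illustration, not Goldin’s spec or any deployed implementation.

```python
# Illustrative sketch of a classic (challenge-based) TCR, off-chain and simplified.
# Parameter names and the outcome handling are assumptions, not Goldin's exact spec.

from dataclasses import dataclass, field

@dataclass
class Listing:
    owner: str
    deposit: int
    listed: bool = False

@dataclass
class Registry:
    min_deposit: int = 100          # a network parameter set by token holders
    listings: dict = field(default_factory=dict)

    def apply_listing(self, name: str, owner: str, deposit: int) -> None:
        """Applicant stakes at least min_deposit to enter the application phase."""
        if deposit < self.min_deposit:
            raise ValueError("deposit below minimum")
        self.listings[name] = Listing(owner=owner, deposit=deposit)

    def challenge(self, name: str, votes_for: int, votes_against: int) -> str:
        """A challenger disputes the listing and a token-weighted vote decides.
        Vote totals are passed in directly to keep the sketch short; in a real
        TCR the losing side's deposit would be distributed to the winners."""
        listing = self.listings[name]
        if votes_for > votes_against:
            listing.listed = True       # listing survives the challenge
            return "listing retained"
        del self.listings[name]         # listing is rejected and removed
        return "listing rejected"

registry = Registry()
registry.apply_listing("Example College", owner="alice", deposit=150)
print(registry.challenge("Example College", votes_for=600, votes_against=400))
```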
In this method, a high-level approval framework is developed by token holders, composed of various metrics and weightings that will ultimately decide the fate of applicants and listees alike. In order to gather accurate information on each of the candidates, a voting process is held for each metric in which token holders can vote either on a yes/no answer or on a value (which can be discretized to induce vote consolidation). These metrics can be finalized for a candidate by either a majority or a plurality, and put up for a renewal vote at any time in the event there is a significant change for the listee.
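Here is a minimal sketch of that per-metric vote, assuming token-weighted votes on a discretized value that are finalized by plurality. The data shapes and names are assumptions of mine, not part of any specification.

```python
# Sketch of per-metric voting in an fTCR: token holders vote on a discretized value
# (or a yes/no) for one metric of one candidate; the plurality value is finalized.

from collections import defaultdict

def finalize_metric(votes: list[tuple[int, int]]) -> int:
    """votes is a list of (token_weight, discretized_value) pairs.
    Returns the value with the largest total token weight behind it (plurality)."""
    weight_by_value = defaultdict(int)
    for weight, value in votes:
        weight_by_value[value] += weight
    return max(weight_by_value, key=weight_by_value.get)

# Example: three token holders rate 'faculty quality' on a 0-10 scale.
faculty_quality_votes = [(500, 8), (300, 7), (250, 8)]
print(finalize_metric(faculty_quality_votes))   # -> 8, backed by 750 of 1050 tokens
```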
A significant difference between fTCRs and TCRs is the method by which challenges to a listing are approached. In TCRs challenges must be instantiated by a party who risks capital; in fTCRs challenges are instantiated by changes in the framework. If token holders vote to include SAT score as a measure of the best colleges, data for this metric must be accumulated on all current listings, at which point all current listings are up for renewal; token holders must prove that current listings are still in compliance with the new framework. Fig. 1 outlines an example of such a system.
Fig. 1: An example of the decision process in a framework TCR
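To make that framework-driven challenge path concrete, here is a hypothetical sketch of how adding a metric could put every current listing up for renewal. The framework structure, metric names, and weights are invented for illustration.

```python
# Sketch of the fTCR challenge path: changing the framework, not staking capital,
# is what forces re-evaluation. Structure and names are illustrative assumptions.

framework = {"graduation_rate": 0.5, "faculty_quality": 0.5}   # metric -> weight

listings = {
    "Example College": {"graduation_rate": 0.93, "faculty_quality": 8},
}

def amend_framework(new_metric: str, weight: float) -> list[str]:
    """Token holders vote a new metric (e.g. SAT score) into the framework.
    Every listing without a finalized value for it goes up for renewal."""
    framework[new_metric] = weight
    return [name for name, values in listings.items() if new_metric not in values]

up_for_renewal = amend_framework("median_sat_score", weight=0.25)
print(up_for_renewal)   # -> ['Example College']: data must be gathered and voted on
```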
Objective accounting — In fTCRs we can see that the method of listing is applied fairly to all participants in ways that may not be true of TCRs. Imagine a registry for trusted data aggregators that identifies firms which both aggregate large amounts of customer data and handle that data responsibly and with respect for the consumer. It is possible that no matter how much progress on these fronts is made by companies like Equifax or Facebook, they will never be featured on this list. This is not to state an opinion on either company’s data scandals or to conflate them in any way; it is merely to show that human biases around emotional issues are a significant concern for well-regulated TCRs (similar to the meme attack outlined by Goldin). What the fTCR does to counteract these biases is to ask simpler questions of the token holders. Instead of asking, “Is this company a trustworthy data aggregator?” it would ask, “Does this company’s privacy policy include ‘X’ clause?” or “How many parties have access to forfeited information?” This type of question is less susceptible to biased judgement than an open-ended question because it is framed in such a way as to have a factual answer.
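As an illustration of what such factual questions could look like, here is a hypothetical framework for the trusted-data-aggregator registry, expressed as weighted yes/no metrics and a simple acceptance threshold. The clauses, weights, and threshold are all made up.

```python
# Hypothetical framework for a trusted-data-aggregator fTCR: each metric is a
# factual yes/no question rather than an open-ended judgement.

framework = {
    "privacy_policy_includes_deletion_clause": 0.4,
    "breach_disclosure_within_72_hours": 0.35,
    "third_party_access_limited_to_processors": 0.25,
}
LISTING_THRESHOLD = 0.7    # assumed acceptance threshold

def score(candidate_answers: dict[str, bool]) -> float:
    """Weighted sum of the finalized yes/no answers for one candidate."""
    return sum(weight for metric, weight in framework.items()
               if candidate_answers.get(metric, False))

candidate = {
    "privacy_policy_includes_deletion_clause": True,
    "breach_disclosure_within_72_hours": True,
    "third_party_access_limited_to_processors": False,
}
print(score(candidate), score(candidate) >= LISTING_THRESHOLD)   # -> 0.75 True
```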
Reduction in Expected Application Cost — Due to the objective nature of the application process with fTCRs, the cost/benefit analysis changes significantly for the candidates. Instead of being left in the dark as to whether or not they are going to be accepted into the registry, candidates can have a pretty good sense of how well they will meet the criteria before they apply. This means there is more incentive for good candidates to apply, and less incentive for poor candidates to apply (assuming a high-quality framework). It does, however, also reduce the value distributed to token holders from bad-apple applications. This means value may have to come from ancillary sources such as recurring listing fees for accepted candidates, which already has precedent with organizations like FINRA.
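A back-of-the-envelope illustration of that change in expected value, with entirely made-up numbers: in a classic TCR the candidate can only guess its acceptance probability, whereas with a published framework it can self-score before risking a deposit.

```python
# Toy expected-value comparison of applying, with invented numbers.

def expected_value(p_accept: float, listing_value: float, deposit: float) -> float:
    """EV of applying when rejection forfeits the deposit (simplified)."""
    return p_accept * listing_value - (1 - p_accept) * deposit

# Classic TCR: opaque criteria, the candidate can only guess p_accept.
print(expected_value(p_accept=0.5, listing_value=1000, deposit=400))   # 300.0

# fTCR: the candidate self-scores against the published framework beforehand,
# so a strong candidate applies with near-certainty and a weak one stays out.
print(expected_value(p_accept=0.95, listing_value=1000, deposit=400))  # 930.0
print(expected_value(p_accept=0.10, listing_value=1000, deposit=400))  # -260.0
```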
Large Attack Surface — In TCRs the honest voters are highly consolidated; there are only two answers, so more than likely most honest voters will converge on the same one. This means the system has a higher tolerance for Byzantine actors. In fTCRs, where the criteria are made up of discretized values, it is not unlikely for one token holder to vote 8 on ‘faculty quality’ while another votes 7, despite both token holders being honest. It is hard to theorize about the optimal solution, as it must be studied in practice, but a potential solution is to simply use a binary classification for each criterion. This reduces question difficulty and honest vote dispersion, thus increasing fault tolerance while preserving the fTCR-specific advantages mentioned above.
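A small numerical example of the dispersion problem, using the same plurality rule as the earlier sketch; the token amounts are invented.

```python
# Toy illustration of honest-vote dispersion on a discretized metric. Honest holders
# split between adjacent values, letting a smaller dishonest bloc win the plurality;
# collapsing the same question to a binary restores the honest majority.

from collections import defaultdict

def plurality(votes: list[tuple[int, object]]) -> object:
    tally = defaultdict(int)
    for weight, value in votes:
        tally[value] += weight
    return max(tally, key=tally.get)

# Discretized 0-10 scale: honest holders split between 8 and 7, attacker votes 2.
discretized = [(400, 8), (350, 7), (450, 2)]          # attacker holds 450 of 1200
print(plurality(discretized))                          # -> 2 (attack succeeds)

# Same electorate on a binary question ("is faculty quality adequate?").
binary = [(400, True), (350, True), (450, False)]
print(plurality(binary))                               # -> True (attack fails)
```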
Candidate-specific Judgements — There is also a possibility that some TCRs use the candidate to adjust which framework to apply. While broad categories of candidates will be supported by fTCRs, sliding scales will be near-impossible to implement securely. One example of this might be a credit scoring TCR. In credit scoring, clients have significantly different lengths of file, which is the main criterion for judging creditworthiness. In this type of TCR a token holder may want to judge clients with short lengths of file on an entirely different framework than clients with long lengths of file, and incorporate incremental information into the framework as it becomes available. While not impossible to implement via fTCRs, the voting process to approve a separate framework for each individual length of file would likely be too onerous to add net value. On the other hand, this is relatively simple in a basic TCR; you simply let the market decide based on its incentives.
Work — In fTCRs there is a significantly higher amount of work required of token holders in order to ensure compliance. Instead of simply casting a vote, token holders must do research to come up with the necessary data points. This may lead to low participation in the network, and thus a more easily attacked registry. To counteract this, the network must put a higher percentage of its token supply into curating the registry. And while participation may be lower, this also increases the rewards to any individual participant. Ultimately what matters most is the quality of the curators, which can be handled in any number of ways beyond simply selling tokens on an exchange.
All feedback on these ideas is welcome. Feel free to respond here or on Twitter: https://twitter.com/CryptoDiplo.