Blockchains, with the advent of trustless computing, have unlocked a new design space for computing. In a world of increasing centralization (à la the Big Four), increasing control and exploitation (à la Facebook), and increasing censorship (à la the State), the promise of a decentralized, self-sovereign, censorship-resistant protocol is exactly what the world wants and needs. You need not look any further than the ICO boom, which seeded 2,000+ new projects in the span of 18 months, to see there is a desire to build a new internet devoid of the systemic challenges we face today.
Enter Web 3.0: the intersection of the three pillars of the decentralized movement, and the foundation on which the next era of innovation will be built. Except, even with hundreds of billions of dollars of ascribed value – and some of the brightest minds dedicating their careers to building the stack – no one is using it, yet.
I would argue that the limiting factor to mainstream adoption is the very thing that unlocked this new paradigm: decentralization.
If the vision of a decentralized internet is ever to be realized, we need to recognize that decentralization is a spectrum and that the fundamental trade-off is usability. The more one is optimized for, the more the other is sacrificed; however, there is a way to have our cake and eat it too – because this spectrum is not static, it is dynamic.
This spectrum is the battleground on which protocols will compete. In the end, I believe one or two big winners will emerge, and they will ultimately converge on a common point on the decentralization–usability spectrum. Where that point is, I don’t know (and it will likely continue to move as layer 1 and layer 2 solutions improve), but what I do believe is that the best shot at getting there is to start by optimizing for usability and making trade-offs in decentralization.
Much of the development in the space has over-indexed on decentralization with a focus on minimum viable usability (MVU) – or, for those trying to obfuscate securities regulation, a different version of MVU (minimum viable utility).
The variable we should be optimizing for in the near term is minimum viable decentralization (MVD). That said, MVD can look very different depending on the use case: a microtransaction economy like WAX or BAT has a different threshold than a store-of-value use case like BTC.
The decentralization spectrum is also not a single axis; there are different vectors to consider, each of which impacts both the integrity of the network and the user experience. For example, a permissioned blockchain offers higher throughput than a permissionless blockchain, where throughput decreases with each incremental node even as the security of the network increases. And when interacting with a permissionless network, say Bitcoin, there are centralized and decentralized options for both the onramp/offramp experience and storage, i.e. centralized vs. decentralized exchanges and wallets.
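The node-count trade-off can be sketched with a deliberately simplified toy model. It assumes consensus messaging overhead grows roughly quadratically with the number of validators (as in classical BFT-style protocols) and that an attacker must control more than a third of the nodes to halt the network; the base throughput number is arbitrary, and real networks behave very differently:

```python
# Toy model: adding validators lowers throughput (more consensus messaging)
# but raises the number of nodes an attacker must compromise.
# Assumptions: ~n^2 message overhead (classical BFT), >1/3 fault threshold.

def toy_throughput(nodes: int, base_tps: float = 10_000.0) -> float:
    """Transactions/sec shrink as pairwise consensus messages grow ~n^2."""
    return base_tps / (nodes * nodes)

def attack_threshold(nodes: int) -> int:
    """Minimum nodes an attacker must control to halt a BFT-style network."""
    return nodes // 3 + 1

for n in (21, 100, 1000):
    print(f"{n:>5} nodes: {toy_throughput(n):>10.2f} tps, "
          f"attacker needs {attack_threshold(n)} nodes")
```

Under these assumptions, a 21-node network pushes orders of magnitude more transactions than a 1,000-node network, but an attacker only needs to compromise 8 machines to halt it.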
On each vector, the variable that should be optimized for is user adoption, given the compounding value of network effects. In crypto, these network effects are intensified because the underlying crypto-asset has a fixed supply. On a Web 2.0 platform like Twitter, Facebook, or Reddit, the value to the user of increased adoption is increased utility of the network (i.e. more content), but the financial value accrues to the platform. In a crypto-network, where the users are also stakeholders (meaning they own a piece of the scarce supply of the underlying crypto-asset), users realize both the increased network utility and the financial upside that come with increased adoption: as adoption of the network increases, demand for the underlying asset increases.
We are seeing these battles play out in real time across the crypto ecosystem – both in the underlying protocols and in the picks-and-shovels businesses. The former we’re seeing in the battle for platform dominance, i.e. Ethereum vs. EOS, and the latter in the battle between exchanges, i.e. Coinbase vs. Binance vs. DEXs.
Ethereum is the incumbent in the platform race. The first Turing-complete blockchain (industry jargon for a protocol capable of running arbitrary programs, i.e. programmable money), it enables dApps to be built on top of the Ethereum network, either using the native currency ETH or an app coin using the ERC20 (fungible) or ERC721 (non-fungible) standards. Notably, Ethereum runs a permissionless consensus protocol, meaning that anyone can run a node without asking for permission. Today there are 40k+ nodes on the Ethereum network. That brings higher confidence in the network’s integrity while trading off throughput and latency – and this has been a key limiting factor for mainstream user adoption.
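The fungible/non-fungible distinction between the two standards comes down to how ownership state is modeled. A minimal Python sketch (not Solidity, and omitting approvals, events, and the rest of each standard’s interface) captures the core difference: ERC20 tracks interchangeable balances per address, while ERC721 tracks a unique owner per token id:

```python
# Sketch of the core state difference between the two token standards.

class FungibleToken:
    """ERC20-style: balances are interchangeable amounts per address."""
    def __init__(self, supply: int, owner: str):
        self.balances = {owner: supply}  # address -> amount

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount


class NonFungibleToken:
    """ERC721-style: each token id is unique and has exactly one owner."""
    def __init__(self):
        self.owners = {}  # token_id -> address

    def mint(self, token_id: int, owner: str) -> None:
        if token_id in self.owners:
            raise ValueError("token id already exists")
        self.owners[token_id] = owner

    def transfer(self, sender: str, recipient: str, token_id: int) -> None:
        if self.owners.get(token_id) != sender:
            raise ValueError("sender does not own this token")
        self.owners[token_id] = recipient
```

Any 40 units of a fungible token are as good as any other 40 units; a non-fungible token id 7 is not interchangeable with token id 8, which is what makes ERC721 suitable for collectibles and other unique assets.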
By contrast, EOS runs a permissioned network of 21 block-producing nodes. This trades off decentralization but has a material positive impact on the throughput and latency of the network. According to blocktivity.info, the EOS network is processing 40x Ethereum’s number of transactions – this has allowed its dApp ecosystem to scale beyond Ethereum’s despite Ethereum’s incumbent status. According to stateofthedApps.com, EOS has 70k+ active dApp users to Ethereum’s 15k.
There are two vectors within the exchange battle worth noting: the first is centralized vs. decentralized exchanges (CEX vs. DEX), and the second is the breadth of tokens offered on the platform – both of which are strategic decisions to optimize for adoption and, subsequently, network effects.
On the CEX vs. DEX front, centralized exchanges have captured the early market share. Coinbase took a consumer-first approach in building its product – it made it easy for a user to enter the crypto ecosystem by acquiring the licensure needed to custody users’ funds. This indexes heavily on centralization, but it made it easy for even someone non-tech-savvy (like my mother) to sign up for a Coinbase account with a few clicks – much easier than the LocalBitcoins process of old. This decision to centralize operations and function like a bank (the antithesis of the cypherpunk movement synonymous with the Bitcoin whitepaper) enabled Coinbase to reach massive scale (20mm+ wallets) compared to leading DEXs, whose wallet counts are in the tens of thousands.
Another vector of competition that has arisen is the breadth of tokens offered. A recent report by Multicoin Capital on Binance illuminates how Binance’s strategy of listing a wide array of tokens (200+), compared to Coinbase’s 7 listed tokens (4 of which are recent additions), has enabled it to capture 80%+ of daily crypto trading volume. Notably, Binance is also a centralized exchange, although it did not offer a fiat onramp until recently.
Both Coinbase and Binance recognize the dynamism of decentralization and are moving to decentralize more components of the exchange experience: Coinbase has acquired Paradex, and Binance is launching the Binance DEX.
Binance appears to be more bullish on decentralizing the Binance experience, but it is also pragmatic enough to understand that optimizing for decentralization at the outset has limitations – see a recent tweet from CZ:
CZ shows a cogent bias toward pragmatism in user experience. It is a common thread in cyclical waves of innovation to over-optimize for the means over the end. We’ve seen it in crypto, where too many teams are wielding hammers in search of nails: first working to solve complex technical problems, then searching for a consumer problem to apply them to, rather than using technology to solve a discrete consumer problem.
What crypto can learn from the incumbents it seeks to disrupt
Centralization carries a sacrilegious undertone in the context of crypto; however, there are certain areas where degrees of centralization are advantageous. Bitcoin showed the power of a strong incentive structure – the coordination of unknown actors coalescing around a common protocol. However, we have also seen the weakness of a decentralized governance structure, where decision making is near impossible. Even in a logically centralized community like Ethereum’s, the rate of iteration continues to be an impediment to progress.
The tribalism and infighting within networks have been a blocker for permissionless protocols coordinating around fundamental upgrades – many of which are trivial decisions that could be made very quickly with a re-centralization of decision-making authority, and would likely be a net positive for even the staunchest opposition within the community.
The best software companies have carved out their leadership positions through agility within tight feedback loops. The software we use today looks very different than it did even six months ago. This is the product of short development cycles and constant iteration that are only possible through a structure of centralized decision-making – not necessarily a dictatorship, but a system of coordination among a small group of decision makers who can reach finality.
This is a race. The winners will be those who can react to emerging variables and maneuver with agility. In an open source environment where there is an open buffet of modules that can be added to forkable infrastructure, the winners will be those that maintain maximum flexibility.
Open/transparent is not contingent on decentralization
Centralizing components of both the user experience and governance does not preclude transparency across the broader community. Open network design is an important pillar for anyone building in the space, and a consumer can always self-select out if they do not agree with the direction of the network. This is the healthy tension that aligns the incentives of projects that make trade-offs on decentralization.
The winners will find the right balance. The Web 2.0 era we operate in today is too centralized and needs to be upended. This post is not an argument against decentralization, it is an argument for a pragmatic approach of incrementalism as we move across the spectrum. We need to find a way to bring consumers along the journey, not run out ahead and hope they catch up. Someone is going to reach escape velocity, and my hypothesis is that it will be those who index towards usability over decentralization.