Blockchains Are Hard to Scale

by Michael Flaxman, August 9th, 2017

Bitcoin blocks have been full for some time (see the average block size chart at https://blockchain.info/charts/avg-block-size?timespan=all).

This has caused a huge debate in the bitcoin community. You may have heard of BIP 148 (UASF), the segwit2x “compromise”, BIP 91 and most recently the creation of a new cryptocurrency called bcash (with a shared history).

The bitcoin network is currently capped at roughly 3 transactions per second, which doesn’t sound like much if you compare it to Visa/Mastercard. You may be wondering why we haven’t been able to scale the blockchain as easily as a traditional service. To answer that, we have to start with the basics.
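That "roughly 3 transactions per second" figure can be sanity-checked with back-of-the-envelope arithmetic. The numbers below are approximations I'm assuming (a 1 MB block size limit, an average transaction of about 500 bytes, and 10-minute block intervals), not figures from the post itself:

```python
# Rough throughput estimate for pre-SegWit Bitcoin (approximate assumptions).
BLOCK_SIZE_BYTES = 1_000_000      # 1 MB block size limit
AVG_TX_SIZE_BYTES = 500           # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600      # target: one block every 10 minutes

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES   # 2000 transactions
tps = txs_per_block / BLOCK_INTERVAL_SECONDS            # ~3.3 tx/s
print(f"~{tps:.1f} transactions per second")
```

With a larger average transaction size the estimate drops below 3, which is why quoted figures for Bitcoin's throughput vary between roughly 3 and 7.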

The main purpose of a blockchain is to order transactions (to prevent double-spends) without a trusted third party. In order for this to be truly trustless, everyone must be able to easily validate all transactions on the blockchain serially. If you’ve studied computer science, you can quickly see the problem: having every person store and validate every transaction from every other person (and in the same order) does not scale well.
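To make the serial-validation idea concrete, here is a minimal sketch. The transaction records are hypothetical stand-ins, not Bitcoin's actual data structures; the point is that transactions must be applied in order against a set of unspent outputs, so a reused input is caught as a double-spend:

```python
def validate_chain(transactions):
    """Apply transactions in order; reject any that reuse a spent input."""
    utxo_set = set()
    for tx in transactions:
        # Every input must reference a currently-unspent output.
        if any(inp not in utxo_set for inp in tx["inputs"]):
            return False  # double-spend (or nonexistent output) detected
        for inp in tx["inputs"]:
            utxo_set.discard(inp)   # spent outputs leave the set
        for out in tx["outputs"]:
            utxo_set.add(out)       # new outputs become spendable
    return True

# A coinbase-style transaction creates an output from nothing (no inputs).
coinbase = {"inputs": [], "outputs": ["utxo_a"]}
spend = {"inputs": ["utxo_a"], "outputs": ["utxo_b", "utxo_c"]}
double_spend = {"inputs": ["utxo_a"], "outputs": ["utxo_d"]}

assert validate_chain([coinbase, spend]) is True
assert validate_chain([coinbase, spend, double_spend]) is False
```

Because each step depends on the state left by the previous one, this loop cannot simply be split across machines, which is the scaling problem in miniature.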

Here is what the bitcoin blockchain size currently looks like:

130+ GB is not nothing

Bitcoin stores balances as unspent transaction outputs (UTXOs), which is a record of a previous transaction that is now available to be spent. The good news is that once a UTXO is spent, it doesn’t need to be stored any longer for trustless validation. The bad news is that spending UTXOs may create more UTXOs than it destroys. Here’s what the number of UTXOs looks like:
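A toy illustration of that growth (the output names here are hypothetical): a transaction that spends one UTXO but produces two, such as a payment plus change back to the sender, prunes its input yet still grows the set by one entry:

```python
# Start with a set containing a single unspent output.
utxo_set = {"old_output"}

tx_inputs = ["old_output"]          # spent: no longer needed for validation
tx_outputs = ["payment", "change"]  # two newly spendable outputs

utxo_set.difference_update(tx_inputs)  # prune the spent output
utxo_set.update(tx_outputs)            # add the new outputs

print(len(utxo_set))  # 2: a net growth of one UTXO
```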

UTXO growth appears to be tapering

Note that you also have to validate the ECDSA signatures, but since that can be done in parallel it’s not the primary bottleneck. I’ll explain it as part of a future post on hardware and software improvements.
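A sketch of why signature checks parallelize: each signature is verified independently of the others, so they can be fanned out across workers, unlike the ordering checks above, which must stay serial. The verification function here is a stand-in returning a precomputed result, not real ECDSA:

```python
from concurrent.futures import ThreadPoolExecutor

def verify_signature(tx):
    # Stand-in for a real ECDSA check (e.g. via libsecp256k1). Real CPU-bound
    # verification would use processes or a C library that releases the GIL.
    return tx["sig_valid"]

def verify_all(transactions, workers=4):
    # Each signature check is independent, so order doesn't matter here.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(verify_signature, transactions))

txs = [{"sig_valid": True}, {"sig_valid": True}, {"sig_valid": False}]
print(verify_all(txs))  # one bad signature invalidates the whole batch
```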

Fun fact: the hardware requirements for running Bitcoin and Ethereum nodes have changed considerably over time.

Unfortunately, decentralization is not a binary measure. The more computationally expensive it is to validate the blockchain, the fewer users will do it. One benchmark we have is the number of full nodes announcing themselves publicly on the network. It's impossible to count the full nodes that do not accept incoming connections, and it's an inherently imperfect measure (1 node != 1 wallet), but it can be a useful approximation:

While bitcoin usage has exploded, the number of bitcoin full nodes has had only modest growth.

If you’re going to delegate your trust to a centralized institution, you might as well use a bank. Banks can keep all their customer data in a simple SQL database and can scale to many orders of magnitude greater transaction throughput. Of course, you have to trust them to remain solvent, not to steal from you, not to freeze your funds, and so on.

All else being equal, more decentralization is always better. If that decentralization comes at a cost (transaction throughput), what is the right amount? How do we scale while maintaining decentralization?

In this series, I’ll explain the pros/cons of various scaling proposals: better hardware, better software, one way payment channels, the lightning network, extension blocks, sidechains, drivechains, off-chain transactions, and unrelated blockchains. Follow me for updates as I publish!

Thanks to Jimmy Song, Nemil Dalal, Aaron Caswell, and others for reviewing earlier versions of this post. As always, reviewing does not mean endorsement of the content.