Proof of Work (PoW) vs. Proof of Stake (PoS): Sharding Edition

by Vinod Manoharan, June 8th, 2020

Too Long; Didn't Read

The scalability problem is currently the main limitation for the mass adoption of blockchain technology. Sharding is considered to be the most promising, but there is no common vision on how to implement sharding to find the best acceptable compromise among the numerous parameters of the network. Projects such as Algorand, Cardano, Near, and Zilliqa have developed their own blockchain designs based on sharding. They all rely on a Proof-of-Stake (PoS) consensus algorithm and pseudo-random selection of validators.

The blockchain scalability problem is currently the main limitation for the mass adoption of blockchain technology. In the standard permissionless p2p blockchain design introduced by Satoshi Nakamoto, every node has to process all the data in the network.

However, nodes in the network often have different capabilities. In Nakamoto's standard design, the network's performance is therefore limited by the performance of its weakest full nodes.

One naïve approach to scaling a blockchain network is to restrict participation for weak nodes. In this case, the network relies only on so-called "tall nodes" with broad and fast network connections that can process large amounts of data.

However, such a network inevitably becomes more centralized, as the maintenance of tall nodes is often more expensive. In this context, scaling is therefore achieved at the cost of decentralization, which is known to be the most valuable feature of blockchain networks.  

Researchers around the world have come up with different proposals for solving the scalability problem. Sharding is considered to be the most promising. Yet, there is no common vision on how to implement sharding to find the best acceptable compromise among the numerous parameters of the network.

Projects such as Ethereum 2.0, Algorand, Cardano, Near, and Zilliqa have developed their own blockchain designs based on sharding. However, all of these projects have a similar pattern in their designs. They all rely on a Proof-of-Stake (PoS) consensus algorithm and pseudo-random selection of validators for shard committees.

In order to participate in the block validation process under the PoS sharding approach, every participant locks some amount of coins as a stake. For example, in Ethereum 2.0, each stake of at least 32 coins corresponds to one vote during a block validation round.

It is important to note that each participant can place multiple stakes and thereby gather multiple votes. A user who places several stakes can thus be elected, through the pseudo-random committee election mechanism, as a validator on up to that many different shards.
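A minimal Python sketch of this mechanism is shown below; the 32-coin stake size follows the Ethereum 2.0 example, while the 64-shard count, the uniform random assignment, and the function names are illustrative assumptions rather than the actual committee election algorithm.

```python
import random

STAKE_SIZE = 32   # coins per stake, as in the Ethereum 2.0 example
NUM_SHARDS = 64   # illustrative shard count (assumption)

def stakes_from_balance(balance: int) -> int:
    """One vote per full stake of STAKE_SIZE coins."""
    return balance // STAKE_SIZE

def elect_committees(num_stakes: int, seed: int) -> list:
    """Assign each stake to a shard committee uniformly at random,
    standing in for the protocol's pseudo-random election."""
    rng = random.Random(seed)
    return [rng.randrange(NUM_SHARDS) for _ in range(num_stakes)]

votes = stakes_from_balance(1408)      # 44 stakes -> 44 votes
assignments = elect_committees(votes, seed=7)
print(len(set(assignments)))           # distinct shards this participant validates
```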

Proponents of PoS sharding often conflate the concept of a "stake" with that of a "validator". Many readers have likely seen catchy headlines claiming that the testnet of coin X has attracted over 20 thousand "validators".

However, this estimate is not about the number of participants. It is about the number of stakes. It is impossible to know who placed those stakes. There could be a thousand or a hundred stakeholders. It is also possible that the majority of stakes are controlled by a single entity. In this case, it is clear that the network is centralized.

Therefore, labeling the stakes of such a single entity as separate validators is not only confusing and misleading, but also malicious.

Our approach is to distinguish participants from their stakes. To illustrate, let us perform some calculations. Assume that there are D shards in the network and that some participant places S stakes.

Then, the probability that this participant will be elected as a validator, with one or more votes, in a particular shard by an ideal pseudo-random function is

P = 1 - (1 - 1/D)^S

This probability is also the mathematical expectation of the indicator function that equals 1 if the participant is a validator of that shard and 0 otherwise. The sum of these indicator functions over all shards is the number of shards validated by the participant.

Thus, the mathematical expectation of the number of shards validated by a participant is given by the formula:

E = D × (1 - (1 - 1/D)^S)

For example, in the Ethereum 2.0 testnet, the shard count D is 64. According to the formula, a participant who locks 44 stakes (S = 44) will validate, on average, 32 shards.
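As a quick sanity check, here is a minimal Python sketch of the formula above; the function name is made up for illustration.

```python
def expected_validated_shards(D: int, S: int) -> float:
    """Expected number of distinct shards a participant validates
    when S stakes are assigned uniformly at random among D shards."""
    p_per_shard = 1 - (1 - 1 / D) ** S  # P(at least one stake lands in a given shard)
    return D * p_per_shard              # sum of these probabilities over all D shards

print(expected_validated_shards(64, 44))  # ~32.0, i.e. half of the 64 shards
```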

This means that, on average, the participant will manage 32 shards, or exactly half of the data in the network, downloading and processing all of it. One may argue that half is not the whole. Yet PoS sharding was advertised as a major breakthrough that would alleviate the load on weak nodes in the system.

In practice, this is not much of an improvement: such participants still have to process a large workload to maintain the system, so weak nodes will not notice the anticipated gain in performance.

One may argue that there is no need to lock 44 stakes: a participant with limited resources could lock one or two stakes and process one or two shards. Unfortunately, PoS sharding designs assume that shard committees are shuffled every epoch to prevent attacks by adaptive adversaries.

Adaptive adversaries corrupt targeted nodes, for instance through DDoS or eclipse attacks. Corrupted nodes lose their stakes to the underlying penalty and leave the committee, so in the end a malicious actor could gain control of the whole committee. In contrast, in a PoW system a node can simply resume its work once the attack is over.

Therefore, committee shuffling is an important part of PoS sharding. After such reshuffling, the committees are reelected and participants are assigned as validators to other shards. 

Unfortunately, in order to honestly execute their validation duties and verify transactions, even participants with a single stake have to download the state of each shard they are newly assigned to, which amounts to a rather large volume of traffic.

Participants must know either all of the unspent transaction outputs or all of the account balances of the shard in order to continue their work. The alternative is to lose the stake or to become a puppet of other nodes that possess the necessary data.

Let us carry out some calculations. Assume that every stake is locked for around 180 days and that every stake is elected as a validator once a day. The formula above applies here as well: one assignment per day over 180 days gives D = 64 and S = 180.

On average, this participant will end up downloading the state of 60 out of 64 shards, which is almost the entire network. Here is another example: assume the participant has locked 4 stakes. Then, after 11 days (44 assignments in total), they will have downloaded the state of nearly 32 shards, half of the network's state.
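To double-check both figures, here is a small Monte Carlo sketch under the same simplifying assumptions (every stake is reassigned to a uniformly random shard each day); the function and its parameters are illustrative only.

```python
import random

def average_downloaded_shards(num_shards: int, num_stakes: int,
                              days: int, trials: int = 10_000) -> float:
    """Average number of distinct shard states a participant downloads
    when each of its stakes is reassigned to a random shard every day."""
    rng = random.Random(0)
    total = 0
    for _ in range(trials):
        seen = set()                       # shards whose state was downloaded
        for _ in range(days):
            for _ in range(num_stakes):
                seen.add(rng.randrange(num_shards))
        total += len(seen)
    return total / trials

print(average_downloaded_shards(64, num_stakes=1, days=180))  # ~60 of 64 shards
print(average_downloaded_shards(64, num_stakes=4, days=11))   # ~32 of 64 shards
```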

So far, we have considered the load carried by small stakeholders. The other side of the coin is the wealthy stakeholder with many stakes. Imagine a server with 64 processing units validating all 64 shards, with each unit verifying its own shard. Managing such a server is a rather simple task.

Whenever committee reshuffling occurs, there is no need to download or update any shard states. It is only necessary to reshuffle the keys associated with the stakes between the processing units according to the committee election results.

Thus, an operation that is costly for small stakeholders is relatively cheap for a large stakeholder running these 64 processing units on one server. The attentive reader will notice that such a server is, in effect, a full node. Under this design, those who can afford to run such a full node will save much more money on network traffic.
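For illustration only, here is a hypothetical sketch of why reshuffling is cheap for such an operator: because the server already stores every shard's state, a reshuffle merely changes which staking key each local processing unit works with, and no shard state is downloaded. The names below are invented for this example.

```python
import random

def reassign_keys(stake_keys, num_shards, epoch_seed):
    """Map each staking key to its newly elected shard for the epoch.

    For an operator that already holds all shard states locally,
    updating this mapping is essentially the entire cost of a
    committee reshuffle: no shard state has to be downloaded.
    """
    rng = random.Random(epoch_seed)
    return {key: rng.randrange(num_shards) for key in stake_keys}

stake_keys = [f"stake-key-{i}" for i in range(64)]
assignment = reassign_keys(stake_keys, num_shards=64, epoch_seed=2020)
print(assignment["stake-key-0"])  # shard handled by this key in the new epoch
```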

One may argue that 60 is less than 64 and that half of the state is not the whole state. Nevertheless, this is hardly the long-awaited solution worthy of "a billion-dollar budget and 10 years of development".

Yet, small stakeholders with weak nodes have to manage a huge amount of data or a huge amount of network traffic. This requirement entirely defeats the purpose of scaling through sharding!

Different projects that set the goal of implementing Proof-of-Stake sharding may have different shard counts, committee reshuffling intervals, and time intervals for locking a stake.

However, one may observe "performance below expectations" for any practical set of parameters. Whenever such projects face "launch delays", core teams often present them as development issues; as I have just described, though, they are plain design flaws inherent to PoS sharding.

Interestingly enough, there is no need to base sharding on Proof-of-Stake in order to reduce the workload of small participants. Imagine instead a project that implements sharding on top of Proof-of-Work.

In contrast to PoS designs, such a system offers a setting convenient for weak nodes, allowing them to take on workloads they can manage. All participants are rewarded in proportion to their efforts in network maintenance, so running a weak node remains profitable.

Another benefit of PoW sharding is the absence of problems typical of PoS, such as nothing-at-stake and stake-grinding attacks. As a result, Proof-of-Work offers a better trade-off for scaling than Proof-of-Stake ever could.