Decentralization and scalability

When the concept of electronic money, and of solving the “double spend” problem without an intermediary, was introduced under the pseudonym Satoshi Nakamoto in the famous Bitcoin whitepaper in 2008, expectations were high. The promise was a secure system for transferring money electronically, well suited as an online means of payment.

The promise was largely kept: in 2011, Bitcoins could be transferred globally within a short time for a fee of 0.0005 BTC (< 0.01 USD at the time) per transaction, and in most cases no transaction fee was required at all. The high expectations were curbed, however, on December 21, 2017 at the latest, when a Bitcoin transaction cost 37.49 USD due to the very high trading volume at the time.

At this point, the number of transactions submitted to the Bitcoin blockchain exceeded the number that could be processed with the system’s limited bandwidth. Since the inclusion of transactions in new blocks on the Bitcoin blockchain follows an auction principle, users had to bid higher and more competitive fees to ensure that their transactions were verified by the miners in the overloaded system and included in the next block. Because of this auction principle and the limited transaction throughput, Bitcoin was no longer competitive as a means of payment at the end of 2017. After this event, at the latest, it was clear to every participant in the Bitcoin universe that the dominant scalability problem had to be solved if adoption of the technology was to continue.
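A minimal sketch of this auction principle, with invented transactions, sizes and fees: miners order waiting transactions by fee per byte and fill the limited block space from the top, so senders effectively bid against each other for inclusion. Real block assembly is more involved (it accounts for dependent transactions, for example), so this is only an illustration.

```python
# Sketch: greedy fee-per-byte block building ("auction principle").
# All transaction data below is invented for illustration.

MAX_BLOCK_BYTES = 1_000_000  # approx. 1 MB block size limit

mempool = [
    {"txid": "a", "size": 400, "fee": 8_000},   # fee in satoshis
    {"txid": "b", "size": 300, "fee": 1_200},
    {"txid": "c", "size": 500, "fee": 25_000},
    # ... many more waiting transactions
]

def build_block(mempool: list) -> list:
    """Greedily pick the highest fee-per-byte transactions until the block is full."""
    block, used = [], 0
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["size"], reverse=True):
        if used + tx["size"] <= MAX_BLOCK_BYTES:
            block.append(tx)
            used += tx["size"]
    return block

print([tx["txid"] for tx in build_block(mempool)])
# When the mempool holds far more bytes than fit into a block, transactions
# with a low fee per byte wait indefinitely unless their senders raise the fee.
```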

Scalability vs. decentralization

The limited number of transactions processed per second – currently around 7 – results from several parameters of the Bitcoin protocol, which is based on the proof-of-work algorithm. One is the size of a transaction measured in bytes (a transaction amounts to roughly 400 bytes on average), another is the maximum possible block size (approx. 1 megabyte). The third parameter is the so-called “blocktime”: the average time it takes for a new block to be “mined”, i.e. added to the blockchain, which is about ten minutes for Bitcoin. While the transaction size is dictated by the information that has to be stored per transaction, the two parameters block size and block time are deliberately chosen to keep the ecosystem around the consensus process stable and decentralized.
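These parameters translate directly into a throughput ceiling. A back-of-the-envelope calculation with the figures above (1 MB blocks, roughly 400-byte transactions, one block every ten minutes) gives a few transactions per second; smaller average transaction sizes push the estimate toward the often-quoted 7.

```python
# Back-of-the-envelope throughput estimate from the parameters named above.
# The figures are approximations; real averages vary.

BLOCK_SIZE_BYTES = 1_000_000    # approx. 1 megabyte per block
AVG_TX_SIZE_BYTES = 400         # average transaction size assumed in the text
BLOCK_TIME_SECONDS = 600        # one block every ~10 minutes

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES
txs_per_second = txs_per_block / BLOCK_TIME_SECONDS

print(f"{txs_per_block} transactions per block")
print(f"{txs_per_second:.1f} transactions per second")
# -> 2500 transactions per block, ~4.2 tx/s with these assumptions;
#    smaller average transactions push the figure toward ~7 tx/s.
```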

If one were to increase the block size while keeping the block time constant, or shorten the block time while keeping the block size constant, some nodes (i.e., independent servers or computers that contribute computing power and storage space to the system) would no longer be able to meet the increased system requirements and could no longer participate in mining. If this effect prevailed, the distribution of power through the consensus process within the Bitcoin community would be undermined and governance would become concentrated among a few contributors.
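A rough illustration of why this happens, with invented parameter values: the amount of data every full node must download and store each year grows in proportion to the ratio of block size to block time.

```python
# Sketch: annual chain growth for different (block size, block time) choices.
# The parameter combinations below are illustrative only.

SECONDS_PER_YEAR = 365 * 24 * 3600

def chain_growth_gb_per_year(block_size_mb: float, block_time_s: float) -> float:
    blocks_per_year = SECONDS_PER_YEAR / block_time_s
    return blocks_per_year * block_size_mb / 1024  # MB -> GB

for size_mb, time_s in [(1, 600), (8, 600), (1, 75)]:
    print(f"{size_mb} MB blocks every {time_s}s -> "
          f"{chain_growth_gb_per_year(size_mb, time_s):.0f} GB/year")
# Multiplying the block size by eight (or dividing the block time by eight)
# multiplies the storage and bandwidth burden on every full node by the same factor.
```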

Such a centralization of power would allow these few parties to willfully influence the system by forming a coalition. The value of the system would then rest on nothing more than the trust placed in these few parties. The “trustless” principle that fundamentally differentiates a blockchain database from traditional, centralized data structures such as an SQL database would be compromised. This is one manifestation of the well-known trade-off between scalability and decentralization in blockchain systems.

Possible solutions

The scaling problem is discussed on different levels: Does it make more sense to make existing systems more scalable by reparametrizing them, or can better scaling be achieved with a new consensus algorithm? Or can scaling even emerge outside of existing systems, in the so-called “off-chain” domain? Proposed solutions in these areas differ, on the one hand, in their technical feasibility at this point in time and, on the other hand, in how much decentralization they sacrifice. For example, reparametrizations of existing blockchain protocols are readily feasible today but erode decentralization, whereas off-chain solutions, which to date have often been used only in experimental settings, carry less risk of ultimately turning the blockchain into a centrally managed construct. In the following, the three levels mentioned are distinguished and explained with examples.

Reparametrization

Reparametrization is technically the simplest method to achieve scaling within blockchain systems. Basically, it involves choosing new values for the parameters block time, block size and, to a limited extent, transaction size in order to gain throughput. Although this type of scaling is already available and used in practice, systems characterized by it are often less decentralized than those in which the trade-off between decentralization and scalability has not been entered into. To take one example: In 2017, as transaction fees in the Bitcoin ecosystem climbed towards their peak, a “hard fork” (read: a split of the chain) emerged from the ideological disagreement within the community between “pro-decentrality” on the one hand and “pro-scalability” on the other. In this fork, modified parameters – most notably a significantly larger block size – were used, resulting in higher performance measured by the maximum number of transactions per second: Bitcoin Cash was born.

Consensus algorithm

With Bitcoin, the Proof-of-Work (PoW) consensus algorithm became popular, opening up the possibility for any person with access to the Internet and a computer to participate in the validation and writing of new blocks. This open form of network validation allows anyone to gain a say and influence by contributing hardware and energy to the network. The disadvantages of this approach are the high overall energy consumption and the limited bandwidth that must be maintained to keep the Bitcoin blockchain open to all network participants, which ultimately results in low transaction throughput.
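A minimal sketch of the proof-of-work search described above: a miner varies a nonce until the hash of the block data falls below a difficulty target. The difficulty value and block contents are invented, and Bitcoin’s actual scheme hashes a structured block header with double SHA-256, so this only illustrates the principle.

```python
# Sketch: brute-force nonce search against a difficulty target.
import hashlib

def mine(block_data: str, difficulty_bits: int):
    """Search for a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

nonce, digest = mine("block containing transactions", difficulty_bits=20)
print(f"found nonce {nonce}, hash {digest}")
# The higher the difficulty, the more hashing (and energy) the search costs,
# while verifying a found solution remains a single hash computation.
```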

The proof-of-stake (PoS) consensus algorithm, for example, offers a different approach to solving the double-spend problem. While the PoW approach uses the deployment of energy to secure the system, this function is fulfilled by so-called “stakes” in PoS-based networks. Stakes are basically normal tokens within a currency system (e.g., Bitcoins or Ether) that are given a temporary lock and cannot be traded for the duration of that lock – like a deposit of sorts. Locking these tokens entitles the owner to exercise voting and writing rights on the blockchain, i.e. to add new blocks to it. While in PoW all miners compete against each other to be the first to solve the computational problem posed to them, in Proof of Stake only a limited number of computers are used to generate a new block.

These computers are selected by a random algorithm that weights the selection by the number of tokens staked (read: the amount of the deposit). In a PoS system, new blocks are submitted by the selected “stakers” and subsequently validated by the entire network. Attempted fraud leads to the “burning”, i.e. destruction, of the staked tokens. Because the PoS algorithm does not rely on a race of computing power to determine who may submit the next block, the writing process on the blockchain is handled more efficiently, which opens up new approaches to scalability.
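A minimal sketch of such stake-weighted selection, with an invented validator set and a deliberately simplified slashing rule; real PoS protocols differ considerably in how the randomness is derived and how penalties are applied.

```python
# Sketch: pick a block proposer at random, weighted by locked stake, and
# "burn" the deposit of a validator caught cheating. All values are invented.
import random

stakes = {"validator_a": 3_000, "validator_b": 1_500, "validator_c": 500}

def select_proposer(stakes: dict, seed: int) -> str:
    """Pick one staker at random, weighted by the size of its locked deposit."""
    rng = random.Random(seed)  # in a real protocol the randomness comes from the chain itself
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

def slash(stakes: dict, validator: str) -> None:
    """Destroy ('burn') the deposit of a validator that submitted a fraudulent block."""
    stakes[validator] = 0

print(f"block proposer for this slot: {select_proposer(stakes, seed=42)}")
# validator_a holds 60% of the total stake and is therefore chosen ~60% of the time.
```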

Another popular consensus method is the so-called “Practical Byzantine Fault Tolerance” (PBFT) algorithm. It is commonly used for permissioned blockchains, in which a central authority decides which participants are admitted to the network. Because of this centralized aspect, the Practical Byzantine Fault Tolerance algorithm is usually employed only in situations where a certain level of trust already exists between the network participants.
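The classical fault-tolerance bound behind PBFT can be stated in a few lines: a network of n validators tolerates at most f = (n − 1) / 3 Byzantine (faulty or malicious) nodes and needs 2f + 1 matching responses to reach agreement. The small validator counts below are illustrative.

```python
# Sketch: PBFT fault-tolerance and quorum sizes for small validator sets.
def pbft_limits(n: int):
    """Return (max tolerable Byzantine nodes, quorum size) for n validators."""
    f = (n - 1) // 3
    quorum = 2 * f + 1
    return f, quorum

for n in (4, 7, 10):
    f, quorum = pbft_limits(n)
    print(f"{n} validators: tolerates {f} faulty, needs {quorum} matching replies")
# Even 4 permissioned validators can tolerate one compromised node; the small,
# admitted validator set is what keeps the message overhead of PBFT practical.
```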
