In mid-May 2018, a group of developers identifying themselves only as Team Rocket published a white paper called “Snowflake to Avalanche: A Novel Metastable Consensus Protocol Family for Cryptocurrencies.”
This article will explain how the protocol works and its potential implications for the cryptocurrency sector.
An Anonymous Release
The Snowflake to Avalanche white paper was released anonymously through the popular distributed file-sharing platform IPFS. The group’s name is a reference to the villainous organization in the Pokémon cartoon series whose motto is “Prepare for trouble and make it double.”
In a nod to this motto, the Avalanche white paper describes four protocols designed to work in a wide range of scenarios, which the developers characterize as an upgrade to existing consensus mechanisms.
The white paper was met with varying levels of interest and excitement. One notable supporter is the outspoken crypto personality and Cornell professor Emin Gün Sirer, who has referred to the Avalanche protocol as a simple yet incredibly powerful set of tools. He has further described the protocol as a breakthrough which “combines the best of Nakamoto consensus with the best of classical consensus.”
The History of Consensus
To understand the Avalanche protocol and how it differs from its predecessors in the consensus space, it helps to review the tools computer scientists have developed over the years to let computers working together in a distributed network reach a collective decision safely and securely.
Computers are powerful tools. These devices have become invaluable in almost every field due to their ability to handle a wide array of tasks, sometimes simultaneously and quickly. These advantages are compounded when a group of computers works together on the same function. This is the premise of distributed systems.
A distributed network is a configuration in which networked computers, sometimes located in far-flung geographical positions, together hold the components essential to a given task and to the effective functioning of the network. In a distributed network, machines coordinate their actions by continuously passing data to and through each other. To perform their tasks effectively, computers in a distributed network must be able to view the state of the underlying database in real time.
Distributed systems are essential in a number of scenarios. Take, for instance, a banking system which needs to serve a wide range of geographical locations, or an online shopping service open to a global customer base.
Both of these instances require a mechanism for maintaining a consistent view of the underlying database that connects all the machines on the network. In the case of the banking system, the database reflects account balances; in the e-commerce scenario, it might track available stock or other related variables.
Consensus is a state of agreement. In distributed systems, it is of utmost importance because the inability of the devices in a network to agree on a decision can cripple the entire configuration. Moreover, a consensus mechanism that cannot support a large number of devices works against the goals of a network and is therefore undesirable. Thus, creating effective consensus mechanisms has been a goal for computer scientists for as long as distributed systems have existed.
Over the last forty years, computer scientists have attempted to find viable solutions to the consensus problem. In the field of distributed systems, there are two major families of protocols: classical consensus protocols and Nakamoto consensus protocols.
Classical consensus protocols are the oldest type of consensus mechanism. This family grew out of the work of two computer scientists, Leslie Lamport and Barbara Liskov, each of whom was later awarded the Turing Award, which for computer scientists is the equivalent of the Nobel Prize. Lamport introduced the oft-referenced Byzantine generals analogy used to explain the problem of achieving consensus in distributed systems, and both are widely considered the creators of the classical consensus protocols.
Classical consensus protocols are exemplified by Practical Byzantine Fault Tolerance (PBFT). The advantages of this school of consensus protocol include quick finality: guarantees about committed transactions are reached promptly.
Disadvantages include a lack of scalability. Classical protocols require quadratic communication costs among the devices participating in the network, meaning every node must know and communicate with every other node. Beyond roughly a thousand nodes, these costs become too high to justify the network.
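The quadratic growth can be illustrated with a quick back-of-the-envelope calculation. This is a simplified, hypothetical all-to-all communication round, not the exact message pattern of any specific classical protocol:

```python
# Sketch: if every node messages every other node once per voting round,
# a single round costs n * (n - 1) messages, i.e. O(n^2) growth.
def messages_per_round(n: int) -> int:
    return n * (n - 1)

for n in [10, 100, 1000]:
    print(f"{n} nodes -> {messages_per_round(n)} messages per round")
```

At ten nodes the cost is trivial, but at a thousand nodes a single round already approaches a million messages, which is why classical protocols struggle past that threshold.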
Additionally, security in classical consensus protocols comes from a quorum of nodes that commit to a particular choice after witnessing the action in question. These nodes must trust each other, so classical consensus mechanisms are not well suited to permissionless databases such as those underlying digital currencies.
This brings us to the second class of tools, the Nakamoto consensus protocols. Following the release of the Bitcoin white paper, a new type of consensus mechanism came into view. The Nakamoto protocol differed in a number of ways from its predecessors. First, it was uniquely suited to supporting decentralized, trustless systems. Nodes on the network did not have to trust each other but were still able to reach agreement, because nodes do not need to know all the other devices participating in the network.
Secondly, the Nakamoto protocol allows any node to join or leave the network at any time. It is an open network in which all nodes can participate however they choose. Because of this, the Nakamoto protocol can scale to a large number of participants on a global scale, and it supports greater censorship resistance than the classical models.
While the Nakamoto protocol ushered in a new age of digital currencies and supports a cryptocurrency sector with a significant value, it is not without its disadvantages.
Speed, for instance, is still a significant issue. While recent upgrades to the Bitcoin network have reduced waiting periods for Bitcoin transactions, it remains slow compared to payment processors such as Visa or Mastercard. Moreover, throughput is low: the network can handle only between three and seven transactions per second. These numbers are nowhere near large enough to support a global currency effectively.
The Nakamoto protocol relies heavily on proof-of-work (PoW). As a result, this family of consensus mechanisms consumes an enormous amount of energy. With environmental concerns continuing to gain steam, it becomes increasingly difficult to justify the energy spent simply powering a network.
The Avalanche Protocol
As discussed above, both families of consensus mechanisms have their advantages and disadvantages. The new set of mechanisms proposed by the anonymous Team Rocket claims to be superior to both of its predecessors. Team Rocket defines the Avalanche protocols as a “new family of leaderless Byzantine fault tolerance protocols, built on a metastable mechanism.”
The Avalanche protocol is composed of four mechanisms which build upon each other and together make up the entire structural support of the greater consensus tool. The four mechanisms described in the proposal are Slush, Snowflake, Snowball, and Avalanche.
How Does It Work?
The white paper states, “Inspired by gossip algorithms, this new family gains its safety through a deliberately metastable mechanism. Specifically, the system operates by repeatedly sampling the network at random and steering the correct nodes towards the same outcome. Analysis shows that metastability is a powerful, albeit non-universal, technique: it can move a large network to an irreversible state quickly, though it is not always guaranteed to do so.”
Gossip algorithms are a type of communication seen in peer-to-peer networks, typically involving a random sample of connected nodes which then receive the information. The Avalanche protocol borrows heavily from gossip protocols, as it also uses subsampling of the nodes on the network to achieve consensus.
To understand how the Avalanche protocol works, consider this scenario. Imagine a network of trustless nodes that must choose between two colors, say blue or red. A node within the network picks a number of nodes at random and poses the question to them.
The nodes chosen for the sample group return answers with their chosen color to the questioning node. From these responses, the questioning node can see which color the network is leaning towards. Every node in the network goes through the same process, and in this way consensus is achieved within the network.
The protocol can be characterized as a repeated subsampled voting process. If the first round of voting within the sampled group ends in a tie, the second round exponentially decreases the probability of a tie happening again, and every subsequent round reduces the chances of a tied outcome even further.
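The exponential decay of ties can be illustrated with a toy calculation. The per-round tie probability used here is a hypothetical value for illustration, not a figure from the white paper:

```python
# Toy illustration: if each voting round independently leaves the network
# tied with probability p, the chance a tie survives r rounds is p ** r.
p = 0.5  # hypothetical per-round tie probability (illustrative only)
for r in [1, 5, 10, 20]:
    print(f"probability of still being tied after {r} rounds: {p ** r:.10f}")
```

Even with a generous per-round tie probability, the chance of a tie persisting collapses after a handful of rounds, which is the intuition behind metastability.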
This feature is what is referred to as metastability: the Avalanche protocol is designed to eventually land on one choice. The entire premise of a consensus mechanism is to ensure agreement between the nodes on a network and to avoid the eventuality of a tie, and Avalanche’s metastable protocols are designed to tip the network towards one of the available choices.
Returning to the color example, with every round of voting the network begins to see a pattern of which color the nodes are leaning towards, and each round reaches that conclusion faster than the one before. At a certain threshold, the network achieves its final state, where all the nodes have decided on a color.
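The repeated subsampled voting described above can be sketched as a toy simulation. This is a deliberately simplified, hypothetical model: it runs in synchronous rounds and uses illustrative parameter values, unlike the asynchronous, parameterized mechanism in the white paper:

```python
import random

def simulate(num_nodes=100, sample_size=10, max_rounds=1000, seed=7):
    """Toy model of repeated subsampled voting: each round, every node
    queries a random sample of peers and adopts the majority color."""
    rng = random.Random(seed)
    # Start from an even split, the hardest case for the network to resolve.
    colors = ["red"] * (num_nodes // 2) + ["blue"] * (num_nodes // 2)
    for round_no in range(1, max_rounds + 1):
        new_colors = []
        for i in range(num_nodes):
            peers = colors[:i] + colors[i + 1:]      # everyone except node i
            sample = rng.sample(peers, sample_size)  # random subsample of peers
            reds = sample.count("red")
            if reds * 2 > sample_size:
                new_colors.append("red")             # sample leans red
            elif reds * 2 < sample_size:
                new_colors.append("blue")            # sample leans blue
            else:
                new_colors.append(colors[i])         # tied sample: keep current color
        colors = new_colors
        if len(set(colors)) == 1:                    # all nodes agree: final state
            return colors[0], round_no
    return None, max_rounds

winner, rounds = simulate()
print(f"network settled on {winner} after {rounds} rounds")
```

Even from a perfect 50/50 split, random sampling noise nudges the network off balance, and the majority-adoption rule then amplifies that lean until every node holds the same color, which is the tipping behavior the text describes.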
The Pros and Cons
Avalanche’s features enable it to support incredibly fast speeds. Team Rocket claims it takes only two seconds to reach the final state, meaning transactions are processed and verified in just two seconds. The developers also claim very high throughput, with the protocol able to handle between 1,000 and 10,000 transactions per second.
Another important feature is its robustness. The Avalanche protocol works without the need to know or agree on the details of the nodes participating in the network. The network need not agree on the identities of participants to achieve undeniable consensus.
The Avalanche protocol is also energy efficient. Consensus is achieved through its specialized gossip-style protocol, negating the need for the large amounts of energy used in proof-of-work and similar mechanisms.
Furthermore, because all nodes are peers with equal power, there is no special class of nodes, like miners in the Bitcoin ecosystem. This reduces the influence that any class of nodes can exert on the network and increases its Byzantine fault tolerance. Simply put, even if 50 percent of the nodes on the network were dishonest or malicious, the network would remain secure.
Another important feature, which can be viewed as both an advantage and a disadvantage, is that there is no liveness guarantee for conflicting transactions. This means that if a dishonest node attempts a double-spend, the Avalanche protocol will not reach consensus on either of the conflicting operations.
Contrary to both classical and Nakamoto mechanisms, the Avalanche protocol does not guarantee a choice in this scenario; the lack of consensus simply results in lost money for the attacker. Punishments are an essential feature of any cryptosystem, and the Avalanche protocol approaches this in an interesting way: the lack of a liveness guarantee implicitly disincentivizes intentional malicious activity.
While some corners of the crypto world have shown support for the Avalanche protocol, the mechanism has been criticized by Vlad Zamfir, lead developer of Ethereum’s proposed consensus upgrade Casper, who argued that the protocol was not as good or as secure as it claimed to be. He stated:
“It’s not asynchronously safe and it’s probabilistic. More like the worst of both worlds.”