The Tendermint consensus algorithm guarantees its specification (agreement, validity, and termination, discussed below) for all heights, provided that the faulty validators hold less than 1/3 of the voting power in the current validator set. If this assumption does not hold, each of these properties may be violated.
The agreement property says that for a given height, any two correct validators that decide on a block for that height decide on the same block. That the block was indeed generated by the blockchain can be verified starting from a trusted (genesis) block and checking that all subsequent blocks are properly signed.
However, faulty nodes may forge blocks and try to convince users (light clients) that the blocks had been correctly generated. In addition, Tendermint agreement might be violated in the case where 1/3 or more of the voting power belongs to faulty validators: Two correct validators decide on different blocks. The latter case motivates the term “fork”: as Tendermint consensus also agrees on the next validator set, correct validators may have decided on disjoint next validator sets, and the chain branches into two or more partitions (possibly having faulty validators in common) and each branch continues to generate blocks independently of the other.
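To make the verification idea above concrete, here is a minimal sketch in Go, with simplified, hypothetical types (not the actual Tendermint data structures): it checks that every header in a chain extends its predecessor and carries signatures from more than 2/3 of a (fixed, for this sketch) validator set, starting from a trusted block. It is exactly this kind of check that forged signatures from a sufficiently large faulty coalition can subvert.

```go
package main

import "fmt"

// Simplified, hypothetical types; the real Tendermint structures
// carry many more fields (hashes, commits, validator-set changes, ...).
type Header struct {
	Height     int64
	PrevHash   string // hash of the previous header
	Hash       string // hash of this header
	Signatures []Signature
}

type Signature struct {
	ValidatorAddr string
	Valid         bool // stands in for real cryptographic verification
}

type ValidatorSet struct {
	Powers map[string]int64 // address -> voting power
	Total  int64
}

// verifyChain checks that each header links to its predecessor and is
// signed by more than 2/3 of the voting power of the validator set.
// Validator-set changes are omitted for brevity.
func verifyChain(trusted Header, chain []Header, vals ValidatorSet) error {
	prev := trusted
	for _, h := range chain {
		if h.Height != prev.Height+1 || h.PrevHash != prev.Hash {
			return fmt.Errorf("header at height %d does not extend the previous one", h.Height)
		}
		var signed int64
		for _, sig := range h.Signatures {
			if sig.Valid {
				signed += vals.Powers[sig.ValidatorAddr]
			}
		}
		if 3*signed <= 2*vals.Total {
			return fmt.Errorf("header at height %d lacks +2/3 signatures", h.Height)
		}
		prev = h
	}
	return nil
}

func main() {
	vals := ValidatorSet{Powers: map[string]int64{"v1": 1, "v2": 1, "v3": 1}, Total: 3}
	genesis := Header{Height: 1, Hash: "h1"}
	next := Header{Height: 2, PrevHash: "h1", Hash: "h2",
		Signatures: []Signature{{"v1", true}, {"v2", true}, {"v3", true}}}
	fmt.Println(verifyChain(genesis, []Header{next}, vals)) // <nil>
}
```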
We say that a fork is a case in which there are two commits for different blocks at the same height of the blockchain. The problem is to ensure that in those cases we are able to detect faulty validators (and not mistakenly accuse correct validators), and thereby incentivize validators to behave according to the protocol specification.
Conceptual Limit. In order to prove misbehavior of a node, we have to show that the behavior deviates from correct behavior with respect to a given algorithm. Thus, an algorithm that detects misbehavior of nodes executing some algorithm A must be defined with respect to algorithm A. In our case, A is Tendermint consensus (+ other protocols in the infrastructure; e.g., Cosmos full nodes and the Light Client). If the consensus algorithm is changed/updated/optimized in the future, we have to check whether changes to the accountability algorithm are also required. All the discussions in this document are thus inherently specific to Tendermint consensus and the Light Client specification.
Q: Should we distinguish between agreement for validators and agreement for full nodes? The case where all correct validators agree on a block, but a correct full node decides on a different block, seems to be slightly less severe than the case where two correct validators decide on different blocks. Still, if a contaminated full node becomes a validator, that may be problematic later on. Also, it is not clear how gossiping is impaired if a contaminated full node is on a different branch.
Remark. In the case where 1/3 or more of the voting power belongs to faulty validators, validity and termination can also be broken. Termination can be broken if faulty processes simply do not send the messages that are needed to make progress. Due to asynchrony, this is not punishable, because faulty validators can always claim they never received the messages that would have forced them to send messages.
Forks are the result of faulty validators deviating from the protocol. In principle several such deviations can be detected without a fork actually occurring:
1. Double proposal: A faulty proposer proposes two different values (blocks) for the same height and the same round in Tendermint consensus.

2. Double signing: Tendermint consensus forces correct validators to prevote and precommit for at most one value per round. If a faulty validator sends multiple prevote and/or precommit messages for different values for the same height/round, this is misbehavior (see the sketch after this list).

3. Lunatic validator: Tendermint consensus forces correct validators to prevote and precommit only for values v that satisfy valid(v). If faulty validators prevote and precommit for v although valid(v) = false, this is misbehavior.

   Remark. In isolation, Point 3 is an attack on validity (rather than agreement). However, the prevotes and precommits can then also be used to forge blocks.

4. Amnesia: Tendermint consensus has a locking mechanism. If a validator has some value v locked, then it can only prevote/precommit for v or nil. Sending a prevote/precommit message for a different value v' (that is not nil) while holding a lock on value v is misbehavior.

5. Spurious messages: In Tendermint consensus most of the message send instructions are guarded by threshold guards, e.g., one needs to receive 2f + 1 prevote messages to send a precommit. Faulty validators may send a precommit without having received the prevote messages.
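The double-signing case (item 2, referenced above) is the easiest to check mechanically: two votes by the same validator for the same height, round, and vote type, but for different values, are themselves the proof. Below is a minimal sketch with simplified, hypothetical types (the real votes also carry signatures, timestamps, etc.):

```go
package main

import "fmt"

// Hypothetical, simplified vote.
type Vote struct {
	ValidatorAddr string
	Height        int64
	Round         int32
	Type          string // "prevote" or "precommit"
	BlockID       string // "" means nil vote
}

// isDoubleSign reports whether the two votes constitute double signing:
// same signer, same height/round/type, but different values.
func isDoubleSign(a, b Vote) bool {
	return a.ValidatorAddr == b.ValidatorAddr &&
		a.Height == b.Height &&
		a.Round == b.Round &&
		a.Type == b.Type &&
		a.BlockID != b.BlockID
}

func main() {
	v1 := Vote{"val1", 10, 0, "precommit", "blockA"}
	v2 := Vote{"val1", 10, 0, "precommit", "blockB"}
	fmt.Println(isDoubleSign(v1, v2)) // true: the pair itself is evidence
}
```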
Independently of a fork happening, punishing this behavior might be important to prevent forks altogether. This should keep attackers from misbehaving: if less than 1/3 of the voting power is faulty, this misbehavior is detectable but will not lead to a safety violation. Thus, unless they have 1/3 or more (or in some cases more than 2/3) of the voting power, attackers have an incentive not to misbehave. If attackers control too much voting power, we have to deal with forks, as discussed in this document.
As in this case we have two different blocks (both appearing equally legitimate), a central system invariant (one block per height decided by correct validators) is violated. As full nodes are contaminated in this case, the contamination can spread also to light clients. However, even without breaking this system invariant, light clients can be subject to a fork:
There are several scenarios in which forks might happen. The first is double signing within a round.

F1. Equivocation: faulty validators sign multiple vote messages (prevote and/or precommit) for different values during the same round r at a given height h.

The remaining scenarios involve double signing across rounds.
Tendermint consensus implements a locking mechanism: If a correct validator p receives a proposal for value v and 2f + 1 prevotes for id(v) in round r, it locks v and remembers r. In this case, p also sends a precommit message for id(v), which later may serve as proof that p locked v. In subsequent rounds, p only sends prevote messages for a value it had previously locked. However, it is possible to change the locked value if, in a future round r' > r, the process receives a proposal and 2f + 1 prevotes for a different value v'. In this case, p could send a prevote/precommit for id(v'). This algorithmic feature can be exploited in two ways (a sketch of the underlying prevote rule follows the two cases below):
F2. Faulty Flip-flopping (Amnesia): faulty validators precommit some value id(v) in round r (value v is locked in round r) and then prevote for a different value id(v') in a higher round r' > r without previously correctly unlocking value v. In this case faulty processes "forget" that they have locked value v and prevote some other value in the following rounds. Some correct validators might have decided on v in r, and other correct validators decide on v' in r'. Here we can have branching on the main chain (Fork-Full).
F3. Correct Flip-flopping (Back to the past): There are some precommit messages signed by (correct) validators for value id(v) in round r. Still, v is not decided upon, and all processes move on to the next round. Then correct validators (correctly) lock and decide a different value v' in some round r' > r, and the correct validators continue; there is no branching on the main chain. However, faulty validators may use the correct precommit messages from round r, together with a posteriori generated faulty precommit messages for round r, to forge a block for a value that was not decided on the main chain (Fork-Light).
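To make the exploited rule concrete, here is a minimal sketch in Go of how a correct validator decides what to prevote. The state and names are hypothetical, loosely following the lockedValue/lockedRound variables of the Tendermint consensus algorithm, and validity checks and other details are omitted. F2 amounts to violating this rule, while F3 abuses the precommits that correct validators legitimately produced before switching.

```go
package main

import "fmt"

// Simplified consensus state of one validator.
type State struct {
	LockedValue string // "" means no value is locked
	LockedRound int32  // -1 means no lock
}

// prevoteFor returns the value a correct validator prevotes in round r when
// the proposer proposed `proposed`, optionally backed by a proof-of-lock
// (2f + 1 prevotes) from round validRound (-1 if none). It returns ""
// (a nil prevote) otherwise.
func (s *State) prevoteFor(r int32, proposed string, validRound int32) string {
	switch {
	case s.LockedRound == -1 || s.LockedValue == proposed:
		// No lock, or the proposal matches the locked value: prevote it.
		return proposed
	case validRound >= s.LockedRound && validRound < r:
		// A newer proof-of-lock justifies switching to the proposed value.
		return proposed
	default:
		// Locked on a different value and no newer proof-of-lock: prevote nil.
		return ""
	}
}

func main() {
	s := &State{LockedValue: "v", LockedRound: 3}
	fmt.Printf("%q\n", s.prevoteFor(5, "v'", -1)) // "" — prevoting v' here would be amnesia (F2)
	fmt.Printf("%q\n", s.prevoteFor(5, "v'", 4))  // "v'" — a correct flip backed by a newer 2f+1 prevotes
}
```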
F1-F3 may contaminate the state of full nodes (and even validators). Contaminated (but otherwise correct) full nodes may thus communicate faulty blocks to light clients. Similarly, without actually interfering with the main chain, we can have the following:
F4. Phantom validators: faulty validators vote (sign prevote and precommit messages) in heights in which they are not part of the validator set (on the main chain).
F5. Lunatic validators: faulty validators sign vote messages to support an (arbitrary) application state that is different from the application state that resulted from valid state transitions.
We consider three types of potential attack victims: full nodes (FN), light clients with sequential header verification (LCS), and light clients with bisection-based header verification (LCB).
F1 and F2 can be used by faulty validators to actually create multiple branches on the blockchain. That means that correctly operating full nodes decide on different blocks for the same height. Until a fork is detected locally by a full node (by receiving evidence from others or by some other local check that fails), the full node can spread corrupted blocks to light clients.
Remark. If full nodes take a branch different from the one taken by the validators, the liveness of the gossip protocol may be affected. We should eventually look at this more closely. However, as it does not influence safety, it is not a primary concern.
F3 is similar to F1, except that no two correct validators decide on different blocks. It may still be the case that full nodes become affected.
In addition, without creating a fork on the main chain, light clients can be contaminated by more than a third of the validators being faulty and signing a forged header. F4 cannot fool correct full nodes, as they know the current validator set. Similarly, LCS know who the validators are. Hence, F4 is an attack against LCB, which do not necessarily know the complete prefix of headers (Fork-Light), as they trust a header that is signed by at least one correct validator (trusting-period method).
The following table gives an overview of how the different attacks may affect different nodes. F1-F3 are on-chain attacks so they can corrupt the state of full nodes. Then if a light client (LCS or LCB) contacts a full node to obtain headers (or blocks), the corrupted state may propagate to the light client.
F4 and F5 are off-chain, that is, these attacks cannot be used to corrupt the state of full nodes (which have sufficient knowledge on the state of the chain to not be fooled).
| Attack | FN     | LCS | LCB    |
|:------:|:------:|:---:|:------:|
| F1     | direct | FN  | FN     |
| F2     | direct | FN  | FN     |
| F3     | direct | FN  | FN     |
| F4     |        |     | direct |
| F5     |        |     | direct |

Here, "direct" means the attack corrupts the node directly, while "FN" means the corruption reaches the node indirectly, via a contaminated full node.
Q: Light clients are more vulnerable than full nodes, because the former only verify headers but do not execute transactions. What kind of certainty is gained by a full node that executes transactions?
As a full node verifies all transactions, it can only be contaminated by an attack if the blockchain itself violates its invariant (one block per height), that is, in case of a fork that leads to branching.
In equivocation-based attacks, faulty validators sign multiple votes (prevote and/or precommit) in the same round of some height. This attack can be executed against both full nodes and light clients. It requires 1/3 or more of the voting power to be executed.
Validators:
Observe that this setting violates the Cosmos failure model.
Execution:
Consequences:
Creating evidence of misbehavior is simple in this case as we have multiple messages signed by the same faulty processes for different values in the same round.
We have to ensure that these different messages reach a correct process (full node, monitor?), which can submit evidence.
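Since, as noted above, this attack only works if the equivocating validators together hold 1/3 or more of the voting power, a natural companion check to collecting the duplicate votes is summing the power of the accused validators. A minimal sketch with a hypothetical validator-set representation:

```go
package main

import "fmt"

// Hypothetical validator-set representation; addresses map to voting power.
type ValidatorSet struct {
	Powers map[string]int64
	Total  int64
}

// faultyPowerAtLeastOneThird reports whether the validators identified as
// equivocating (e.g., by duplicate-vote evidence collected for one height
// and round) together hold at least 1/3 of the total voting power — the
// amount needed to actually fork the main chain with this attack.
func faultyPowerAtLeastOneThird(accused []string, vals ValidatorSet) bool {
	var power int64
	for _, addr := range accused {
		power += vals.Powers[addr]
	}
	return 3*power >= vals.Total
}

func main() {
	vals := ValidatorSet{
		Powers: map[string]int64{"v1": 10, "v2": 10, "v3": 10, "v4": 10},
		Total:  40,
	}
	fmt.Println(faultyPowerAtLeastOneThird([]string{"v1"}, vals))       // false
	fmt.Println(faultyPowerAtLeastOneThird([]string{"v1", "v2"}, vals)) // true
}
```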
Validators:
Execution:
Consequences:
Once equivocation is used to attack a light client, it opens the door for different kinds of attacks, as the application state can be diverged in any direction. For example, the attacker can modify the validator set such that it contains only validators that do not have any stake bonded. Note that after a light client is fooled by a fork, the attacker can change the application state and the validator set arbitrarily.
In order to detect such an (equivocation-based) attack, the light client would need to cross-check its state with some correct validator (or to obtain a hash of the state from the main chain using out-of-band channels).
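A minimal sketch of such a cross-check, assuming a hypothetical interface for querying other sources: the light client compares the header hash it has accepted for a height with the hash reported by each witness, and any mismatch is treated as a sign of a possible fork that must be investigated and reported.

```go
package main

import (
	"errors"
	"fmt"
)

// HeaderSource is a hypothetical interface for anything that can report the
// header hash it has for a given height (a full node, a validator, or an
// out-of-band channel publishing state hashes).
type HeaderSource interface {
	HeaderHash(height int64) (string, error)
}

// crossCheck compares the hash the light client trusts at `height` against
// what each witness reports. A mismatch does not tell us who is faulty, but
// it signals a possible fork.
func crossCheck(trustedHash string, height int64, witnesses []HeaderSource) error {
	for i, w := range witnesses {
		h, err := w.HeaderHash(height)
		if err != nil {
			continue // unreachable witness; a real client should retry or drop it
		}
		if h != trustedHash {
			return fmt.Errorf("witness %d reports conflicting header at height %d: possible fork", i, height)
		}
	}
	return nil
}

// fixedSource is a toy witness returning a constant hash.
type fixedSource struct{ hash string }

func (f fixedSource) HeaderHash(int64) (string, error) {
	if f.hash == "" {
		return "", errors.New("no header")
	}
	return f.hash, nil
}

func main() {
	witnesses := []HeaderSource{fixedSource{"abc"}, fixedSource{"xyz"}}
	fmt.Println(crossCheck("abc", 100, witnesses)) // conflict reported by witness 1
}
```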
Remark. The light client would be able to create evidence of misbehavior, but this would require pulling potentially a lot of data from correct full nodes. Maybe we need to figure out a different architecture, where a light client that is attacked pushes all its data for the current unbonding period to a correct node that will inspect this data and submit the corresponding evidence. There are also architectures that assume a special role (sometimes called a fisherman) whose goal is to collect as much useful data as possible from the network, to analyze it and create evidence transactions. That functionality is outside the scope of this document.
Remark. The difference between LCS and LCB might only be in the amount of voting power needed to convince the light client of an arbitrary state. In the case of LCB, where the security threshold is at its minimum, an attacker can arbitrarily modify the application state with 1/3 or more of the voting power, while in the case of LCS it requires more than 2/3 of the voting power.
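The difference can be made concrete as two acceptance predicates, sketched below with hypothetical parameters: LCS requires more than 2/3 of the (exactly known) validator set of the new height to have signed, while LCB with the minimal security threshold accepts a header once validators holding at least 1/3 of the voting power of a trusted validator set have signed it. The example numbers show why an attacker needs correspondingly more voting power to fool LCS.

```go
package main

import "fmt"

// signedPower is the voting power of the validators whose signatures on the
// new header verified correctly; the types and thresholds are a sketch, not
// the actual light-client API.

// acceptLCS models sequential verification: more than 2/3 of the total voting
// power of the new height's validator set must have signed the header.
func acceptLCS(signedPower, totalPower int64) bool {
	return 3*signedPower > 2*totalPower
}

// acceptLCB models the bisection/trusting-period method at its minimal
// security threshold: validators holding at least 1/3 of the voting power of
// the *trusted* validator set must have signed the header.
func acceptLCB(signedTrustedPower, trustedTotalPower int64) bool {
	return 3*signedTrustedPower >= trustedTotalPower
}

func main() {
	// With a total of 90 voting power, 30 colluding validators (1/3) already
	// satisfy LCB's check, but LCS needs strictly more than 60.
	fmt.Println(acceptLCB(30, 90)) // true  — why 1/3 or more suffices against LCB
	fmt.Println(acceptLCS(30, 90)) // false
	fmt.Println(acceptLCS(61, 90)) // true
}
```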
In the case of amnesia, faulty validators lock some value v in some round r, and then vote for a different value v' in higher rounds without correctly unlocking value v. This attack can be used against both full nodes and light clients.
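A minimal sketch of the pattern described above, using the same hypothetical vote representation as in the earlier sketches: a precommit for v in round r followed by a vote for a different non-nil value in a higher round of the same height is the suspicious case. Unlike equivocation, this pattern alone is not conclusive: a proof-of-lock (2f + 1 prevotes) for the new value in an intermediate round would make the switch legitimate, which is what makes amnesia hard to prove.

```go
package main

import "fmt"

// Hypothetical, simplified vote (same shape as in the earlier sketches).
type Vote struct {
	ValidatorAddr string
	Height        int64
	Round         int32
	Type          string // "prevote" or "precommit"
	BlockID       string // "" means nil vote
}

// looksLikeAmnesia flags the suspicious pattern: the same validator
// precommitted a value at round r and later voted for a different non-nil
// value at a higher round of the same height. Whether this is actually
// misbehavior depends on whether a valid proof-of-lock for the new value
// existed in between, which this sketch does not (and cannot locally) check.
func looksLikeAmnesia(precommit, later Vote) bool {
	return precommit.ValidatorAddr == later.ValidatorAddr &&
		precommit.Height == later.Height &&
		precommit.Type == "precommit" &&
		precommit.BlockID != "" &&
		later.Round > precommit.Round &&
		later.BlockID != "" &&
		later.BlockID != precommit.BlockID
}

func main() {
	pc := Vote{"val1", 10, 2, "precommit", "v"}
	pv := Vote{"val1", 10, 4, "prevote", "v'"}
	fmt.Println(looksLikeAmnesia(pc, pv)) // true: suspicious, needs further justification
}
```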
Validators:
Execution:
Remark. In this case, the (more than 1/3 of) faulty validators do not need to commit equivocation (F1), as they only vote once per round in this execution.
If a light client is attacked using this attack with 1/3 or more of the voting power (and less than 2/3), the attacker cannot change the application state arbitrarily. Rather, the attacker is limited to a state a correct validator finds acceptable: in the execution above, correct validators still find the value acceptable; however, the block the light client trusts deviates from the one on the main chain.
In case there is an attack with more than 2/3 of the voting power, an attacker can arbitrarily change application state.
Validators:
Execution:
Consequences:
Q: Do we need to define a special kind of attack for the case where a validator signs an arbitrary state? It seems that detecting such an attack requires a different mechanism, which would require as evidence a sequence of blocks that led to that state. This might be very tricky to implement.
In this kind of attack, faulty validators take advantage of the fact that they did not sign messages in some of the past rounds. Due to the asynchronous network in which Tendermint operates, we cannot easily differentiate between such an attack and a delayed message. This kind of attack can be used against both full nodes and light clients.
Validators:
Execution:
Consequences:
Q: Should we keep this as a separate kind of attack? It seems that equivocation, amnesia and phantom validators are the only kinds of attack we need to support, and this gives us security also in other cases. This would not be surprising, as equivocation and amnesia are attacks that follow from the protocol, and the phantom attack is not really an attack on Tendermint but rather on the Cosmos proof-of-stake module.
In the case of phantom validators, processes that are not part of the current validator set but are still bonded (as the attack happens during their unbonding period) can take part in the attack by signing vote messages. This attack can be executed against both full nodes and light clients.
Validators:
Execution:
Consequences:
Remark. We can have phantom-validator-based attacks as a follow-up of an equivocation- or amnesia-based attack, where the forked state contains validators that are not part of the validator set on the main chain. In this case, they keep signing messages that contribute to a forked chain (the wrong branch) although they are not part of the validator set on the main chain. This attack can also be used against a full node during the period of time it is eclipsed.
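The basic check a full node (or an evidence module) would perform against such a vote is a membership test of the signer in the validator set of the vote's height, sketched below with hypothetical types; as the following remark explains, this evidence type was ultimately dropped from the implementation.

```go
package main

import "fmt"

// Hypothetical lookup of the validator set for a given height on the main
// chain; a real node would read this from its state store.
type ValidatorSets map[int64]map[string]bool // height -> set of validator addresses

// isPhantomVote reports whether a vote for `height` was signed by an address
// that is not in the validator set of that height on the main chain.
func isPhantomVote(sets ValidatorSets, height int64, signer string) bool {
	vals, ok := sets[height]
	if !ok {
		return false // unknown height: cannot judge locally
	}
	return !vals[signer]
}

func main() {
	sets := ValidatorSets{
		100: {"v1": true, "v2": true, "v3": true},
	}
	fmt.Println(isPhantomVote(sets, 100, "v4")) // true: v4 is not a validator at height 100
	fmt.Println(isPhantomVote(sets, 100, "v2")) // false
}
```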
Remark. Phantom validator evidence has been removed from the implementation, as it was deemed, although possibly a plausible form of evidence, not relevant. Any attack on the light client involving a phantom validator will need to have been initiated by 1/3+ lunatic validators that can forge a new validator set that includes the phantom validator. Only in that case will the light client accept the phantom validator's vote. We need only worry about punishing the 1/3+ lunatic cabal, which is the root cause of the attack.
A lunatic validator agrees to sign commit messages for an arbitrary application state. This is used to attack light clients. Note that detecting this behavior requires application knowledge. It can probably be done by referring to the block before the one in which the misbehavior happened.
Q: Can we say that in this case a validator declines to check whether a proposed value is valid before voting for it?
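One way to make the detection idea above concrete: a conflicting header produced by lunatic validators will generally disagree with the trusted header of the same height in application-derived fields (for example, the application state hash or the hash of the next validator set), since those fields were not produced by valid state transitions. Below is a minimal sketch with hypothetical header fields; it presupposes access to the correct chain's header for that height.

```go
package main

import "fmt"

// Hypothetical header with only the application-derived fields relevant here.
type Header struct {
	Height             int64
	AppHash            string // hash of the application state
	ValidatorsHash     string
	NextValidatorsHash string
	LastResultsHash    string
}

// isLunaticHeader compares a conflicting header against the header the node
// (or light client) trusts for the same height: a mismatch in any field that
// is determined by executing the block's transactions indicates that the
// signers vouched for an application state that was never computed, i.e.,
// lunatic behavior.
func isLunaticHeader(trusted, conflicting Header) bool {
	if trusted.Height != conflicting.Height {
		return false // not comparable
	}
	return trusted.AppHash != conflicting.AppHash ||
		trusted.ValidatorsHash != conflicting.ValidatorsHash ||
		trusted.NextValidatorsHash != conflicting.NextValidatorsHash ||
		trusted.LastResultsHash != conflicting.LastResultsHash
}

func main() {
	trusted := Header{100, "app-abc", "vals-1", "vals-2", "res-1"}
	forged := Header{100, "app-zzz", "vals-1", "vals-2", "res-1"}
	fmt.Println(isLunaticHeader(trusted, forged)) // true
}
```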