Once we fix the communication model (synchrony, asynchrony, or partial synchrony; see here) and a threshold adversary, we still need to make important modeling decisions about the adversary's power.

Here we will use the simplest model of a threshold adversary: one that can control up to $f$ nodes out of a static group of $n$ nodes. We will later consider dynamic, permissionless, and bounded-resource models.

In addition to the size of the threshold ($n>f$, $n>2f$, or $n>3f$), there are four more important parameters:

  1. the type of corruption.
  2. the computational power of the adversary.
  3. the visibility of the adversary.
  4. the adaptivity of the adversary.

1. Type of corruption

The first critical aspect is what type of corruption the adversary can inflict on the $f$ nodes it can corrupt. There are three classic adversaries: Crash, Omission, and Byzantine.

Crash : once the node is corrupted, it stops sending and receiving all messages.

Omission : once corrupted, the adversary can decide, for each message sent or received, to either drop or allow it to continue.

Byzantine : this gives the adversary full power to control the node and take any (arbitrary) action on the corrupted node.

Note that each corruption type subsumes the previous. There are other types of corruption (most notably variants of Covert adversaries) that we will cover later.
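The three corruption types can be viewed as increasingly powerful per-message policies. Here is a minimal Python sketch of that hierarchy; the function and parameter names are illustrative, not from any standard library.

```python
from enum import Enum

class Corruption(Enum):
    HONEST = 0
    CRASH = 1      # stops sending and receiving
    OMISSION = 2   # adversary drops messages selectively
    BYZANTINE = 3  # adversary fully controls the node

def deliver(msg, corruption, allow=True, forged=None):
    """What happens to one message sent by (or to) a node
    under each corruption type (illustrative sketch)."""
    if corruption == Corruption.HONEST:
        return msg               # delivered unchanged
    if corruption == Corruption.CRASH:
        return None              # a crashed node drops everything
    if corruption == Corruption.OMISSION:
        # the adversary chooses, per message, to drop or allow it
        return msg if allow else None
    # Byzantine: the adversary may send arbitrary content (or nothing)
    return forged
```

Note how each branch subsumes the previous: an omission adversary that sets `allow=False` on every message recovers crash behavior, and a Byzantine adversary choosing `forged=msg` or `forged=None` recovers omission behavior.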

2. Computational power

The computational power of the adversary is the next choice. There are two traditional variants and one newer one:

  1. Unbounded : the adversary has unbounded computational power. This model often leads to notions of perfect security.
  2. Computationally bounded : the adversary has at most a polynomial advantage in computational power over the honest parties. Typically this means that the adversary cannot (except with negligible probability) break the cryptographic primitives being used. For example, we typically assume the adversary cannot forge signatures of nodes not in its control (see Goldreich's chapter one for traditional formal definitions of polynomially bounded adversaries).
  3. Fine-grained computationally bounded : there is some concrete measure of computational power and the adversary is limited in concrete manner. This model is used in proof-of-work based protocols. For example, see Andrychowicz and Dziembowski for a way to model hashrate.
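A fine-grained bound can be made concrete by capping the number of hash queries a party may make per round (its "hashrate"). The following Python sketch is one way to model such a bound; the function name and parameters are assumptions for illustration, not part of any cited model.

```python
import hashlib

def mine(data: bytes, difficulty_bits: int, query_budget: int):
    """Try to find a proof of work using at most `query_budget`
    hash queries, modeling a concrete per-round hashrate bound."""
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder
    for nonce in range(query_budget):
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce        # success within the budget
    return None                 # budget exhausted this round
```

In this kind of model, an adversary controlling a fraction of the total hashrate simply gets a proportionally larger `query_budget` per round than each honest party.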

3. Visibility

Visibility is the power of the adversary to see the messages and the states of the non-corrupted parties. Again, there are two basic variants:

  1. Full information : here we assume the adversary sees the internal state of all parties and the content of all messages sent. This often limits the protocol designer. See for example: Feige's selection protocols, or Ben-Or et al.'s Byzantine agreement.
  2. Private channels : in this model we assume the adversary cannot see the internal state of honest parties and cannot see the content of messages between honest parties. The adversary does know when a message is being sent and, depending on the communication model, can delay it by any amount the communication model allows.
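The two visibility models differ in what the adversary observes about a message between honest parties. A small sketch, with illustrative names and a simplified message format:

```python
def adversary_view(sender, receiver, payload, model, corrupted):
    """What the adversary learns about one message under each
    visibility model (illustrative sketch)."""
    if model == "full_information":
        return (sender, receiver, payload)       # sees everything
    # private channels: content visible only if an endpoint is corrupted
    if sender in corrupted or receiver in corrupted:
        return (sender, receiver, payload)
    # the adversary still learns that *some* message is in flight,
    # and may delay it within the limits of the communication model
    return (sender, receiver, None)
```

Even in the private-channels model the adversary sees the metadata (who is sending to whom, and when), which is exactly the information it needs to exercise its scheduling power.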

4. Adaptivity

Adaptivity is the ability of the adversary to corrupt dynamically based on information the adversary learns during the execution. There are two basic variants: static and adaptive. The adaptive model has several sub-variants but we will cover here only the simplest one.

  1. Static : the adversary has to decide which $f$ nodes to corrupt in advance, before the execution of the protocol.

  2. Adaptive : the adversary can decide dynamically, as the protocol progresses, whom to corrupt based on what it learns over time. The main parameter that still needs to be decided is how long it takes between the adversary's decision to corrupt a node and the moment control of that node is passed to the adversary. One standard assumption is that this is instantaneous. We will later review several other options (for example, see here).
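The difference between the two variants comes down to *when* the corruption choice is made relative to the execution. The sketch below makes this concrete; the transcript format and the targeting rule are illustrative assumptions, not part of any formal definition.

```python
import random

def static_adversary(n, f, rng):
    """Static: commits to f of the n nodes before the run starts."""
    return set(rng.sample(range(n), f))

def adaptive_adversary(n, f, transcript):
    """Adaptive: corrupts nodes mid-run based on what it has seen.
    `transcript` is a list of per-round dicts {node: messages_sent};
    corruption here takes effect instantly (the simplest sub-variant).
    """
    corrupted = set()
    for round_counts in transcript:
        if len(corrupted) >= f:
            break
        # toy targeting rule: corrupt the chattiest uncorrupted node
        candidates = {v: c for v, c in round_counts.items()
                      if v not in corrupted}
        if candidates:
            corrupted.add(max(candidates, key=candidates.get))
    return corrupted
```

The static adversary's choice is a function only of $n$ and $f$; the adaptive adversary's choice is a function of the transcript, which is what makes it strictly stronger against protocols that, say, elect a small committee at random.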

Please leave comments on Twitter