
Evolutionary Instability of Zero-Determinant Strategies Demonstrates That Winning Is Not Everything



Christoph Adami et al. Nat Commun. 2013;4:2193.

Erratum in

  • Nat Commun. 2014;5:3764


Zero-determinant strategies are a new class of probabilistic and conditional strategies that are able to unilaterally set the expected payoff of an opponent in iterated plays of the Prisoner's Dilemma irrespective of the opponent's strategy (coercive strategies), or else to set the ratio between the player's and their opponent's expected payoff (extortionate strategies). Here we show that zero-determinant strategies are at most weakly dominant, are not evolutionarily stable, and will instead evolve into less coercive strategies. We show that zero-determinant strategies with an informational advantage over other players that allows them to recognize each other can be evolutionarily stable (and able to exploit other players). However, such an advantage is bound to be short-lived as opposing strategies evolve to counteract the recognition.
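The coercive property can be checked directly: when two memory-one strategies play the iterated game, the sequence of joint outcomes (CC, CD, DC, DD) is a Markov chain, and each player's long-run payoff is an average over the chain's stationary distribution. A minimal sketch (assuming the conventional payoff values R, S, T, P = 3, 0, 5, 1; the ZD strategy below is the one that seeds the population in Fig. 4, which pins the opponent's payoff at ((1−p1)P + p4R)/(1−p1+p4) = 2):

```python
import numpy as np

# Conventional Prisoner's Dilemma payoffs (an assumption for illustration).
R, S, T, P = 3, 0, 5, 1

def long_run_payoffs(p, q):
    """Expected per-round payoffs of two memory-one strategies.

    p, q: cooperation probabilities after outcomes (CC, CD, DC, DD),
    each seen from that player's own perspective.
    """
    # Player 2 sees CD as DC and vice versa, so reindex its probabilities.
    q = [q[0], q[2], q[1], q[3]]
    M = np.empty((4, 4))
    for s in range(4):
        a, b = p[s], q[s]  # cooperation probabilities in the current round
        M[s] = [a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)]
    # Stationary distribution: left eigenvector of M for eigenvalue 1.
    w, v = np.linalg.eig(M.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    return pi @ [R, S, T, P], pi @ [R, T, S, P]

# ZD strategy from Fig. 4: it should force any opponent's payoff to 2.
zd = (0.99, 0.97, 0.02, 0.01)
for opponent in [(1, 1, 1, 1), (0.5, 0.5, 0.5, 0.5), (1, 0, 0, 1)]:
    mine, theirs = long_run_payoffs(zd, opponent)
    print(f"vs {opponent}: ZD gets {mine:.3f}, opponent pinned at {theirs:.3f}")
```

Against all-cooperate, a random strategy and Pavlov alike, the opponent's payoff comes out at exactly 2 while the ZD player's own payoff varies with the opponent, illustrating why winning every pairwise encounter need not translate into evolutionary success.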


Figure 1
Figure 1. Mean expected payoff of arbitrary ZD strategies playing ‘Pavlov’.
The payoff E(Z,P) (red surface) over the allowed set (p1, p4) (shaded region) of ZD strategies playing against Pavlov, whose strategy is given by the probabilities q=(1, 0, 0, 1). As E(Z,P) is everywhere smaller than E(P,P) (except on the line p1=1), it is Pavlov that is the ESS for all allowed values (p1, p4), according to equation (6). For p1=1, ZD and Pavlov are equivalent as the entire payoff matrix (6) vanishes (even though the strategies are not the same).
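The surface of Figure 1 can be reproduced numerically. Under the Press–Dyson parameterization with the conventional payoffs (3, 0, 5, 1) (an assumption here), a payoff-fixing ZD strategy is determined by (p1, p4) via p2 = 2p1 − 1 − p4 and p3 = (1 − p1 + 3p4)/2, and a sketch confirms that its payoff against Pavlov stays below Pavlov's self-payoff E(P,P) = R = 3 on the sampled interior of the allowed region:

```python
import numpy as np

R, S, T, P = 3, 0, 5, 1          # conventional payoffs (assumption)
pavlov = (1, 0, 0, 1)

def payoff_against_pavlov(p):
    """Long-run payoff of memory-one strategy p against Pavlov."""
    q = [pavlov[0], pavlov[2], pavlov[1], pavlov[3]]  # Pavlov's view of outcomes
    M = np.empty((4, 4))
    for s in range(4):
        a, b = p[s], q[s]
        M[s] = [a * b, a * (1 - b), (1 - a) * b, (1 - a) * (1 - b)]
    w, v = np.linalg.eig(M.T)     # stationary distribution via left eigenvector
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    return pi @ [R, S, T, P]

def zd_strategy(p1, p4):
    """ZD strategy fixing the opponent's payoff, parameterized by (p1, p4)."""
    return (p1, 2 * p1 - 1 - p4, (1 - p1 + 3 * p4) / 2, p4)

# Scan (p1, p4); keep only points where all four probabilities are interior.
worst = max(
    payoff_against_pavlov(zd_strategy(p1, p4))
    for p1 in np.linspace(0.55, 0.95, 9)
    for p4 in np.linspace(0.01, 0.3, 9)
    if all(0 < x < 1 for x in zd_strategy(p1, p4))
)
print(f"largest E(Z,P) over the sampled region: {worst:.3f} < 3 = E(P,P)")
```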
Figure 2
Figure 2. Population fractions of ZD versus Pavlov over time.
Population fractions πZD (blue) and πPAV (green) as a function of time for initial ZD concentrations πZD(0) between 0.1 and 0.9.
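The trajectories of Figure 2 follow from the two-strategy replicator equation. As a sketch, take the pairwise long-run payoffs of the ZD strategy (0.99, 0.97, 0.02, 0.01) and Pavlov, computed from the stationary distribution of the iterated game under the conventional (3, 0, 5, 1) payoffs (an illustrative assumption, not numbers quoted from the paper): E(Z,Z)=2, E(Z,P)=27/11, E(P,Z)=2, E(P,P)=3. The strategies tie against ZD, but Pavlov does strictly better against Pavlov, so ZD is only weakly dominant and is driven out from any interior starting fraction:

```python
# Pairwise long-run payoffs (row player vs column player), assuming the
# conventional (3, 0, 5, 1) payoffs and the ZD strategy of Fig. 4.
E = {("Z", "Z"): 2.0, ("Z", "P"): 27 / 11,
     ("P", "Z"): 2.0, ("P", "P"): 3.0}

def replicator(x0, dt=0.01, steps=10_000):
    """Integrate dx/dt = x (f_Z - phi) for the ZD fraction x (Euler sketch)."""
    x = x0
    for _ in range(steps):
        f_z = x * E["Z", "Z"] + (1 - x) * E["Z", "P"]   # ZD fitness
        f_p = x * E["P", "Z"] + (1 - x) * E["P", "P"]   # Pavlov fitness
        phi = x * f_z + (1 - x) * f_p                   # mean population fitness
        x += dt * x * (f_z - phi)
    return x

for x0 in (0.1, 0.5, 0.9):
    print(f"initial ZD fraction {x0}: final fraction {replicator(x0):.4f}")
```

Every trajectory converges to pure Pavlov, mirroring the decay of πZD in the figure.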
Figure 3
Figure 3. Population fractions using agent-based simulations and replicator equations.
Population fractions πZD (blue tones) and πPAV (green tones) for two different initial concentrations. The solid lines show the population fraction averaged over 40 agent-based simulations as a function of evolutionary time measured in updates, while the dashed lines show the corresponding replicator equations. Because time is measured differently in the agent-based simulations than in the replicator equations, we applied an overall scale factor to the time variable of the Runge–Kutta integration of equation (7) to match the agent-based simulations.
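A minimal agent-based counterpart to the replicator curves can be sketched as a two-type Moran process (an illustration, not the paper's exact update rule; pairwise payoffs as assumed above): in each update a random agent dies and is replaced by the offspring of an agent chosen in proportion to fitness.

```python
import random

# Pairwise long-run payoffs, assuming payoffs (3, 0, 5, 1) and the
# ZD strategy (0.99, 0.97, 0.02, 0.01); rows = player, cols = opponent.
E = {"Z": {"Z": 2.0, "P": 27 / 11}, "P": {"Z": 2.0, "P": 3.0}}

def moran(n=256, k0=128, updates=200_000, rng=random.Random(1)):
    """Moran sketch with two types: random death, fitness-proportional birth."""
    k = k0                                     # number of ZD agents
    for _ in range(updates):
        if k in (0, n):
            break                              # one type has fixated
        x = k / n
        f_z = x * E["Z"]["Z"] + (1 - x) * E["Z"]["P"]
        f_p = x * E["P"]["Z"] + (1 - x) * E["P"]["P"]
        birth_is_zd = rng.random() < (k * f_z) / (k * f_z + (n - k) * f_p)
        death_is_zd = rng.random() < x         # uniformly chosen victim
        k += int(birth_is_zd) - int(death_is_zd)
    return k / n

finals = [moran(rng=random.Random(seed)) for seed in range(10)]
print("final ZD fractions over 10 runs:", finals)
```

Almost every run ends with ZD extinct, reproducing the qualitative agreement between stochastic simulations and the deterministic replicator dynamics shown in the figure.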
Figure 4
Figure 4. Evolution of probabilities on the evolutionary line of descent (LOD).
Evolution of probabilities p1 (blue), p2 (green), p3 (red) and p4 (teal) on the evolutionary LOD of a well-mixed population of 1,024 agents, seeded with the ZD strategy (p1, p2, p3, p4)=(0.99, 0.97, 0.02, 0.01). Lines of descent (see Methods) are averaged over 40 independent runs. Mutation rate per gene μ=1%, replacement rate r=1%.



