The Evolution of Cooperation Through Institutional Incentives and Optional Participation

Author: Tatsuya Sasaki

Published in: Dynamic Games and Applications, Issue 3/2014 (Open Access, 01-09-2014)

Abstract

Rewards and penalties are common practical tools that can be used to promote cooperation in social institutions. The evolution of cooperation under reward and punishment incentives in joint enterprises has been formalized and investigated, mostly by using compulsory public good games. Recently, Sasaki et al. (2012, Proc Natl Acad Sci USA 109:1165–1169) considered optional participation as well as institutional incentives and described how the interplay between these mechanisms affects the evolution of cooperation in public good games. Here, we present a full classification of these cases of evolutionary dynamics. Specifically, whenever penalties are large enough to cause the bi-stability of both cooperation and defection in cases in which participation in the public good game is compulsory, these penalties will ultimately result in cooperation if participation in the public good game is optional. The global stability of coercion-based cooperation in this optional case contrasts strikingly with the bi-stability that is observed in the compulsory case. We also argue that optional participation is not as effective under rewards as under punishment.

1 Introduction

Self-interest often leads to freeloading on the contributions of others in the dynamics associated with common goods and joint enterprises [22, 41]. As is well known, incentives such as rewards and punishment are a popular means of curbing selfish behavior and motivating individuals to behave cooperatively [5, 7, 8, 15, 17, 38, 39, 45, 47, 53, 54]. Experimental and theoretical studies of joint enterprises under various incentive schemes are growing in number [3, 20, 21, 28, 37, 50, 51, 59, 60].
Obviously, whether they take the form of rewards or penalties, sufficiently large incentives can transform freeloaders into full cooperators, whereas incentives with little impact leave the outcome unchanged [50]. However, incentivizing is costly, and such heavy incentives often impose serious costs on those who provide them, whether in a peer-to-peer or an institutional manner. Previous game-theoretic studies on the evolution of cooperation with incentives have focused on public good games with compulsory participation and revealed that intermediate degrees of punishment lead to two stable equilibria, full defection and full cooperation [7, 8, 42, 47, 50, 54]. In this bi-stable dynamics, establishing full cooperation requires a sufficiently large initial fraction of cooperators, or ex ante adjustment to overcome unfavorable initial conditions [8, 42]. This situation is a coordination game [57], a model of great interest for analyzing widespread coordination problems (e.g., choosing among competing technical standards).
In contrast to the traditional case with compulsory participation, another approach to the evolution of cooperation is to give individuals the option of opting out of joint enterprises [1, 6, 10, 18, 24, 25, 32, 35, 40, 49, 52, 62, 65]. The opting-out option can relax the freeloader problem: individuals can exit a joint venture when stuck in a state in which all freeload off one another (“economic stalemate”) and then pursue a stand-alone project; if a joint venture with mutual cooperation is more profitable than working in isolation, individuals who once exited will switch back to contributing to the venture. In this situation, however, defection again becomes attractive. Thus, joint enterprises with optional participation can give rise to a rock-paper-scissors cycle [24, 25, 35, 52].
Recently, Sasaki et al. [50] showed that considering optional participation as well as institutional incentives can lead to fully cooperative outcomes for intermediate ranges of incentives. They demonstrated that opting out combined with rewards is not very effective at establishing full cooperation, whereas opting out combined with punishment is very effective. Although there is a series of existing papers on the interplay of punishment and opting-out mechanisms [9, 13, 16, 26, 55, 56, 61], these earlier studies mainly aim at solving the puzzling issue of second-order freeloading: the exploitation of the efforts of others to uphold incentives for cooperation [7, 38, 41, 43, 63]. Sasaki et al. [50] consider incentives controlled exclusively by a centralized authority (such as an empire or state) [2, 4, 12, 31], and thus their model is already free of the second-order freeloader problem.
Here, we analytically provide a full classification of the replicator dynamics in a public good game with institutional incentives and optional participation. We clarify when and how cooperation can be selected over defection in the bi-stable situation associated with institutional punishment, without requiring any ability of individuals to communicate. In particular, assuming that the penalties are large enough to cause bi-stability of full cooperation and full defection under compulsory participation (no matter what the basins of attraction are), cooperation will necessarily be selected in the long term under optional participation, regardless of the initial conditions.
The paper is organized as follows. In Sect. 2, we formalize optional public good games with institutional incentives and determine the average payoffs for the three strategies: cooperation, defection, and non-participation. In Sect. 3, based on analytical results for compulsory games (Sect. 3.1), we explore the interior equilibrium (Sect. 3.2) and classify in detail the global dynamics for the three strategies (Sect. 3.3). Finally, in Sect. 4, we provide further discussion and concluding remarks.

2 Model

2.1 Social Dilemmas

To describe our institutional-incentive model, we start from public good games with group size n≥2. The n players in a group are given the opportunity to participate in a public good game. We assume that participation requires paying a fixed entrance fee σ>0 to the sanctioning institution, whereas non-participation yields nothing. We denote by m the number of players who are willing to participate (0≤m≤n) and assume that at least two participants are required for the game to take place [9, 13, 24, 26, 55]. If the game does take place, each of the m participants in the group can decide whether to invest a fixed amount c>0 into a common pool, knowing that each contribution will be multiplied by r>1 and then shared equally among the m−1 other participants in the group. Thus, participants have no direct gain from their own investments [13, 15, 55, 56, 63]. If all of the participants invest, each obtains a net payoff (r−1)c>0. The game is a social dilemma, independently of the value of r, because participants can always improve their payoffs by withholding their contribution.
Let us next assume that the total incentive stipulated by a sanctioning institution is proportional to the group size m and hence of the form \(m\delta\), where δ>0 is the (potential) per capita incentive. If rewards are employed to incentivize cooperation, these funds will be shared among the so-called “cooperators” who contribute (see [48] for a voluntary reward fund). Hence, each cooperator will obtain a bonus of \(m\delta/n_{\rm C}\), where \(n_{\rm C}\) denotes the number of cooperators in the group of m participants. If penalties are employed to incentivize cooperation, “defectors” who do not contribute will analogously have their payoffs reduced by \(m\delta/ n_{\rm D}\), where \(n_{\rm D}\) denotes the number of defectors in the group of m participants (\(m =n_{\rm C} +n_{\rm D}\)).
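
To make the incentive scheme concrete, the following minimal Python sketch (illustrative only; the function name, parameter values, and variable names are ours, not the paper's) computes the payoffs of a cooperator and a defector in a single participating group under either scheme.

```python
def group_payoffs(m, n_c, r, c, sigma, delta, scheme="punishment"):
    """Payoffs (cooperator, defector) in one participating group of m >= 2 players.

    m      : number of participants; n_c of them cooperate (0 <= n_c <= m)
    r, c   : multiplication factor and contribution cost
    sigma  : participation fee paid to the sanctioning institution
    delta  : per capita incentive, so the institution spends m * delta in total
    scheme : "punishment" or "reward"
    """
    n_d = m - n_c
    # Each contribution c is multiplied by r and shared among the m - 1 OTHER
    # participants, so nobody profits directly from her own contribution.
    coop = r * c * (n_c - 1) / (m - 1) - c - sigma
    defe = r * c * n_c / (m - 1) - sigma
    if scheme == "punishment" and n_d > 0:
        defe -= m * delta / n_d          # the penalty m*delta is split among defectors
    elif scheme == "reward" and n_c > 0:
        coop += m * delta / n_c          # the reward m*delta is split among cooperators
    return coop, defe

# Example: in a group of m = 5 with n_c = 3 cooperators (r = 3, c = 1, sigma = 0.5,
# delta = 0.4), punishment lowers each defector's payoff by m*delta/n_d = 1.
print(group_payoffs(5, 3, r=3.0, c=1.0, sigma=0.5, delta=0.4))
```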
We consider an infinitely large and well-mixed population of players, from which n samples are randomly selected to form a group for each game. Our analysis of the underlying evolutionary game is based especially on the replicator dynamics [30] for the three corresponding strategies of the cooperator, defector, and non-participant, with respective frequencies x, y, and z. The combination of all possible values of (x,y,z) with x,y,z≥0 and x+y+z=1 forms the triangular state space Δ. We denote by C, D, and N the three vertices of Δ that correspond to the three homogeneous states in which all cooperate (x=1), defect (y=1), or are non-participants (z=1), respectively. For Δ, the replicator dynamics is defined by
$$ \dot{x}=x\bigl(P_{\rm C}^s - \bar{P}^s\bigr),\qquad \quad\dot{y}=y\bigl(P_{\rm D}^s - \bar {P}^s\bigr),\qquad \quad\dot{z}=z\bigl(P_{\rm N}^s - \bar{P}^s\bigr), $$
(1)
where \(\bar{P}^{s}\) denotes the average payoff in the entire population; \(P_{\rm C}^{s}\), \(P_{\rm D}^{s}\), and \(P_{\rm N}^{s}\) denote the expected payoff values for cooperators, defectors, and non-participants, respectively; and \(s = {\rm o}, {\rm r}, {\rm p}\) is used to specify one of three different incentive schemes, namely, “without incentives,” “with rewards,” and “with punishment,” respectively. Because non-participants have a payoff of 0, \(P_{\rm N}^{s}=0\), and thus, \(\bar{P}^{s}=xP_{\rm C}^{s}+yP_{\rm D}^{s}\).
We note that if (r−1)c>σ, the three edges of the state space Δ form a heteroclinic cycle without incentives: N → C → D → N (Figs. 2a or 3a). Defectors dominate cooperators because of the cost of contribution c, and non-participants dominate defectors because of the cost of participation σ. Finally, cooperators dominate non-participants because of the net benefit from the public good game with (r−1)c>σ. In the interior of \(\rm \varDelta\), all of the trajectories originate from and converge to N, which is a non-hyperbolic equilibrium. Hence, cooperation can emerge only in brief bursts, sparked by random perturbations [13, 25].

2.2 Payoffs

Here, we calculate the average payoff for the whole population and the expected payoff values for cooperators and defectors. In a group with m−1 co-participants (m=2,…,n), a defector or a cooperator obtains from the public good game an average payoff of rcx/(1−z) [13]. Hence,
$$ P_{\rm D}^{\rm o} = \biggl(rc \frac{x}{1-z}-\sigma \biggr) \bigl(1-z^{n-1}\bigr). $$
(2)
Note that \(z^{n-1}\) is the probability of finding no co-players and, thus, of being reduced to non-participation. In addition, cooperators contribute c with probability \(1-z^{n-1}\), and thus, \(P_{\rm C}^{\rm o} - P_{\rm D}^{\rm o} = -c(1-z^{n-1})\). Hence, \(\bar{P}^{\rm o}=(1-z^{n-1})[(r-1)cx-\sigma(1-z)]\).
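
As a quick numerical aid, the following sketch (illustrative only; names and parameter values are ours) evaluates these closed-form expressions and checks the identity \(\bar{P}^{\rm o}=xP_{\rm C}^{\rm o}+yP_{\rm D}^{\rm o}\).

```python
def payoffs_no_incentive(x, y, z, n, r, c, sigma):
    """Expected payoffs (P_C, P_D, average) without incentives, for x + y + z = 1."""
    if z == 1.0:
        return 0.0, 0.0, 0.0                                      # nobody participates
    p_d = (r * c * x / (1.0 - z) - sigma) * (1.0 - z ** (n - 1))  # Eq. (2)
    p_c = p_d - c * (1.0 - z ** (n - 1))                          # P_C - P_D = -c (1 - z^{n-1})
    p_bar = (1.0 - z ** (n - 1)) * ((r - 1) * c * x - sigma * (1.0 - z))
    assert abs(p_bar - (x * p_c + y * p_d)) < 1e-12               # average-payoff identity
    return p_c, p_d, p_bar

print(payoffs_no_incentive(0.3, 0.5, 0.2, n=5, r=3.0, c=1.0, sigma=0.5))
```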
We now turn to the cases with institutional incentives. First, we consider penalties. Because cooperators never receive penalties, we have \(P_{\rm C}^{\rm p}=P_{\rm C}^{\rm o}\). In a group in which the m−1 co-participants include k cooperators (and thus, m−1−k defectors), switching from defecting to cooperating implies avoiding the penalty \(m\delta/(m-k)\). Hence,
$$\begin{aligned} P_{\rm C}^{\rm p} - P_{\rm D}^{\rm p} =& \bigl(P_{\rm C}^{\rm o} - P_{\rm D}^{\rm o}\bigr) + \sum _{m=2}^{n} {n-1 \choose m-1} (1-z)^{m-1} z^{n-m} \\ & \times \Biggl[ \sum_{k=0}^{m-1} {m-1 \choose k} \biggl( \frac{x}{1-z} \biggr)^k \biggl( \frac{y}{1-z} \biggr)^{m-1-k} \frac{m \delta}{m-k} \Biggr] \\ =& -(c-\delta) \bigl(1-z^{n-1}\bigr) + \delta\frac{x(1-(1-y)^{n-1})}{y}, \end{aligned}$$
(3)
and thus,
$$\begin{aligned} \bar{P}^{\rm p} =& \bar{P}^{\rm o} - \delta\bigl[y \bigl(1-z^{n-1}\bigr) + x\bigl(1-(1-y)^{n-1}\bigr)\bigr] \\ =&\bigl(1-z^{n-1}\bigr) \bigl((r-1)cx - \sigma(1-z) - \delta y\bigr) - \delta x\bigl(1-(1-y)^{n-1}\bigr). \end{aligned}$$
(4)
Next, we consider rewards. It is now the defectors who are unaffected, implying \(P_{\rm D}^{\rm r}=P_{\rm D}^{\rm o}\). In a group with m−1 co-participants, including k cooperators, switching from defecting to cooperating implies obtaining the reward \(m\delta/(k+1)\). Hence,
$$\begin{aligned} P_{\rm C}^{\rm r} - P_{\rm D}^{\rm r} =& \bigl(P_{\rm C}^{\rm o} - P_{\rm D}^{\rm o}\bigr) + \sum _{m=2}^{n} {n-1 \choose m-1} (1-z)^{m-1} z^{n-m} \\ & \times \Biggl[ \sum_{k=0}^{m-1} {m-1 \choose k} \biggl( \frac{x}{1-z} \biggr)^k \biggl( \frac{y}{1-z} \biggr)^{m-1-k} \frac{m \delta}{k+1} \Biggr] \\ =& -(c-\delta) \bigl(1-z^{n-1}\bigr) + \delta\frac{y(1-(1-x)^{n-1})}{x}, \end{aligned}$$
(5)
and thus,
$$\begin{aligned} \bar{P}^{\rm r} =& \bar{P}^{\rm o} + \delta\bigl[x \bigl(1-z^{n-1}\bigr) + y\bigl(1-(1-x)^{n-1}\bigr)\bigr] \\ =&\bigl(1-z^{n-1}\bigr) \bigl((r-1)cx - \sigma(1-z) + \delta x\bigr) - \delta y\bigl(1-(1-x)^{n-1}\bigr). \end{aligned}$$
(6)
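
The incentive terms in Eqs. (3)–(6) can be evaluated in the same spirit. The sketch below (again illustrative only; the helper `_ratio` is ours and continues the term \(x(1-(1-y)^{n-1})/y\) by its limit at y=0) returns the payoff difference \(P_{\rm C}^{s}-P_{\rm D}^{s}\) and the incentive contribution to the average payoff under either scheme.

```python
def _ratio(a, b, n):
    """a * (1 - (1 - b)^{n-1}) / b, continued by its limit (n - 1) * a at b = 0."""
    return a * (n - 1) if b == 0.0 else a * (1.0 - (1.0 - b) ** (n - 1)) / b

def incentive_terms(x, y, z, n, c, delta, scheme):
    """Return (P_C - P_D, shift of the average payoff) under one incentive scheme."""
    base = -(c - delta) * (1.0 - z ** (n - 1))
    if scheme == "punishment":                        # Eqs. (3) and (4)
        diff = base + delta * _ratio(x, y, n)
        shift = -delta * (y * (1.0 - z ** (n - 1)) + x * (1.0 - (1.0 - y) ** (n - 1)))
    else:                                             # rewards, Eqs. (5) and (6)
        diff = base + delta * _ratio(y, x, n)
        shift = delta * (x * (1.0 - z ** (n - 1)) + y * (1.0 - (1.0 - x) ** (n - 1)))
    return diff, shift                                # average payoff = P_bar^o + shift

print(incentive_terms(0.3, 0.5, 0.2, n=5, c=1.0, delta=0.4, scheme="punishment"))
```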

3 Results

3.1 Coordination and Coexistence for Compulsory Participation

We investigate the interplay of institutional incentives and optional participation. As a first step, we consider the replicator dynamics along the three edges of the state space Δ. On the DN-edge (x=0), the dynamics is always D → N because the payoff for non-participating exceeds that for defecting by at least the participation fee σ, regardless of whether penalties or rewards are in place. On the NC-edge (y=0), it is obvious that if the public good game is too expensive (i.e., if σ≥(r−1)c under penalties, or σ≥(r−1)c+δ under rewards), players will prefer non-participation to cooperation. Indeed, N then becomes a global attractor because \(\dot{z} > 0\) holds in Δ∖{z=0}. We do not consider these cases further but assume that the dynamics on the NC-edge is always N → C.
On the CD-edge (z=0), the dynamics corresponds to compulsory participation, and Eq. (1) reduces to \(\dot{x} = x(1-x)(P_{\rm C}^{s} - P_{\rm D}^{s})\). Clearly, both of the ends C (x=1) and D (x=0) are fixed points. Under penalties, the term for the payoff difference is
$$ P_{\rm C}^{\rm p} - P_{\rm D}^{\rm p} = -c + \delta \frac{1-x^n}{1-x} = -c + \delta\sum_{i=0}^{n-1} x^i . $$
(7)
Under rewards, it is
$$ P_{\rm C}^{\rm r} - P_{\rm D}^{\rm r} = -c + \delta \frac{1-(1-x)^n}{x} = -c + \delta\sum_{i=0}^{n-1} (1-x)^i . $$
(8)
Because δ>0, \(P_{\rm C}^{\rm p} - P_{\rm D}^{\rm p}\) strictly increases, and \(P_{\rm C}^{\rm r} - P_{\rm D}^{\rm r}\) strictly decreases, with x. The condition under which there exists an interior equilibrium R on the CD-edge is
$$ \delta_- < \delta< \delta_+ , \quad\mathrm{with} \ \delta_- = \frac{c}{n} \ \mathrm{and} \ \delta_+ = c . $$
(9)
Next, we summarize the game dynamics for compulsory public good games (Fig. 1). For small δ with \(\delta < \delta_-\), defection is the unique outcome; D is globally stable, and C is unstable. For large δ with \(\delta > \delta_+\), cooperation is the unique outcome; C is globally stable, and D is unstable. For intermediate values of δ, cooperation evolves in different ways under penalties and rewards, as follows. Under penalties (Fig. 1a), as δ crosses the threshold \(\delta_-\), C becomes stable, and an unstable interior equilibrium R splits off from C. The point R separates the basins of attraction of C and D. Penalties thus cause bi-stable competition between cooperators and defectors, as in a coordination game [57]; one or the other norm will become established, but there can be no coexistence. With increasing δ, the basin of attraction of D becomes smaller and smaller, until δ attains the value \(\delta_+\). There, R merges with the formerly stable D, which becomes unstable.
In contrast, under rewards (Fig. 1b), as δ crosses the threshold \(\delta_-\), D becomes unstable, and a stable interior equilibrium R splits off from D. The point R is a global attractor. Rewards give rise to the stable coexistence of cooperators and defectors, as is typical of a snowdrift game [58]. As δ increases, the fraction of cooperators within the stable coexistence becomes larger and larger. Finally, as δ reaches the other threshold \(\delta_+\), R merges with the formerly unstable C, which becomes stable. We note that neither \(\delta_-\) nor \(\delta_+\) depends on whether we consider rewards or penalties.
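
Because \(P_{\rm C}^{\rm p}-P_{\rm D}^{\rm p}\) is strictly increasing and \(P_{\rm C}^{\rm r}-P_{\rm D}^{\rm r}\) strictly decreasing in x, the edge equilibrium R can be located by simple bisection whenever \(\delta_-<\delta<\delta_+\). A minimal sketch (illustrative parameter values, not from the paper):

```python
def payoff_gap_edge(x, n, c, delta, scheme):
    """P_C - P_D on the CD-edge (z = 0): Eq. (7) for penalties, Eq. (8) for rewards."""
    powers = ((x ** i for i in range(n)) if scheme == "punishment"
              else ((1.0 - x) ** i for i in range(n)))
    return -c + delta * sum(powers)

def find_R(n, c, delta, scheme, tol=1e-10):
    """Bisection for the unique root of the payoff gap on (0, 1)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        same_sign = (payoff_gap_edge(mid, n, c, delta, scheme)
                     * payoff_gap_edge(lo, n, c, delta, scheme) > 0)
        lo, hi = (mid, hi) if same_sign else (lo, mid)
    return 0.5 * (lo + hi)

# Example: n = 5, c = 1, delta = 0.4 lies in (delta_-, delta_+) = (0.2, 1).
print(find_R(5, 1.0, 0.4, "punishment"), find_R(5, 1.0, 0.4, "reward"))
```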

3.2 The Interior Equilibrium Q for Optional Participation

Now, we consider the interior of the state space Δ. We start by exploring the fixed point in the interior. For this purpose, we introduce the coordinate system (f,z) in Δ∖{z=1}, with f=x/(x+y), and we rewrite Eq. (1) as
$$ \dot{f}=f(1-f) \bigl(P_{\rm C}^s - P_{\rm D}^s \bigr), \qquad\dot{z}=-z \bar{P}^s. $$
(10)
Dividing the right-hand side of Eq. (10) by 1−z n−1, which is positive in Δ∖{z=1}, corresponds to a change in velocity and does not affect the orbits in Δ [30]. Using Eqs. (3)–(6), this transforms Eq. (10) into the following. Under penalties, Eq. (10) becomes
$$ \begin{aligned} \dot{f} &= f(1-f)\bigl[-c + \delta+ \delta f H(f,z)\bigr], \\ \dot{z} &= z(1-z)\bigl[\sigma+ \delta- \bigl((r-1)c + \delta\bigr)f + \delta f(1-f)H(f,z)\bigr], \end{aligned} $$
(11)
whereas under rewards, it becomes
$$ \begin{aligned} \dot{f} &= f(1-f)\bigl[-c + \delta+ \delta(1-f) H(1-f,z)\bigr], \\ \dot{z} &= z(1-z)\bigl[\sigma- \bigl((r-1)c + \delta\bigr)f + \delta f(1-f)H(1-f,z)\bigr], \end{aligned} $$
(12)
where
$$ H(f,z) = \frac{1-[f+(1-f)z]^{n-1}}{(1-f)(1-z^{n-1})} = \frac {1+[f+(1-f)z]+ \cdots+[f+(1-f)z]^{n-2}}{1+z+ \cdots+z^{n-2}}. $$
(13)
Note that \(H(f,0)=\sum_{i=0}^{n-2} f^{i}\) and H(f,1)=1.
At an interior equilibrium Q \(= (f_{\rm Q}, z_{\rm Q})\), the three different strategies must have equal payoffs, which, in our model, means that they all must equal 0. The conditions \(P_{\rm C}^{\rm o}=P_{\rm C}^{\rm p} =0\) under penalties and \(P_{\rm D}^{\rm o}=P_{\rm D}^{\rm r}=0\) under rewards imply that \(f_{\rm Q}\) is given by
$$ f_{\rm Q(p)}=\frac{c+\sigma}{rc} \ {\rm under} \ {\rm penalties} \ { \rm and} \ f_{\rm Q(r)}=\frac{\sigma}{rc} \ {\rm under} \ {\rm rewards}, $$
(14)
respectively. Thus, if it exists, the interior equilibrium Q must be located on the line given by \(f = f_{\rm Q}\). From Eqs. (11) and (12), Q must satisfy
$$ H(f,z)=\frac{c-\delta}{\delta f} \ {\rm under} \ {\rm penalties} \ {\rm and} \ H(1-f,z)=\frac{c-\delta}{\delta(1-f)} \ {\rm under} \ {\rm rewards}. $$
(15)
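
Given the monotonicity of H in z (established in Appendix A.1), Eq. (15) can be solved for \(z_{\rm Q}\) by bisection. A minimal sketch (illustrative only; parameter values are arbitrary):

```python
def H(f, z, n):
    """H(f, z) of Eq. (13), in the polynomial form that is also valid at z = 1."""
    w = f + (1.0 - f) * z
    return sum(w ** i for i in range(n - 1)) / sum(z ** i for i in range(n - 1))

def interior_Q(n, r, c, sigma, delta, scheme, tol=1e-10):
    """Return (f_Q, z_Q) if the interior equilibrium Q exists, else None."""
    if scheme == "punishment":
        f_q = (c + sigma) / (r * c)                                        # Eq. (14)
        g = lambda z: H(f_q, z, n) - (c - delta) / (delta * f_q)           # Eq. (15)
    else:
        f_q = sigma / (r * c)
        g = lambda z: H(1.0 - f_q, z, n) - (c - delta) / (delta * (1.0 - f_q))
    if g(0.0) * g(1.0) > 0:          # no sign change in z: Q lies outside the state space
        return None
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) * g(lo) > 0 else (lo, mid)
    return f_q, 0.5 * (lo + hi)

# Example: n = 5, r = 3, c = 1, sigma = 0.5 gives f_Q = 0.5 under penalties;
# delta = 0.6 lies between the thresholds of Eq. (16), so Q exists.
print(interior_Q(5, 3.0, 1.0, 0.5, 0.6, "punishment"))
```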
When there are only two players (i.e., pairwise interactions with n=2), there are either no interior equilibria or else a line of interior equilibria that connects R and N (the latter situation can arise for only one choice of δ). A summary of the dynamics for n=2 is given in Sect. 3.4. Here we analyze the general case of a public good game with more than two players (i.e., n>2). Then, if Q exists, it is uniquely determined and a saddle point, whether incentives are penalties or rewards (see Appendices A.1 and A.2 for detailed proofs of the uniqueness and the saddle, respectively). As δ increases, Q splits off from R (with \(x_{\rm R} = f_{\rm Q}\)) and moves across the state space along the line given by Eq. (14) and finally exits this space through N. The function H decreases with increasing z, and the right-hand side of Eq. (15) decreases with increasing δ, which implies that \(z_{\rm Q}\) increases with δ. By substituting Eq. (13) into Eq. (15), we find that the threshold values of δ for Q’s entrance (z=0) and exit (z=1) into the state space are respectively given by
$$ \delta_s = \frac{c}{1+B+\cdots+B^{n-1}} \quad{\rm and} \quad \delta^s = \frac{c}{1+B}, $$
(16)
where \(B=f_{\rm Q(p)}\) (and \(s = \rm{p}\)) under penalties, and \(B=1-f_{\rm Q(r)}\) (and \(s = \rm{r}\)) under rewards. We note that \(\delta_- < \delta_s \le \delta^s < \delta_+\), where the middle inequality is an equality only for n=2.
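
A small numerical check of Eq. (16) and of this ordering (illustrative parameter values):

```python
def q_thresholds(n, r, c, sigma, scheme):
    """Entry and exit thresholds of Eq. (16) for the interior equilibrium Q."""
    f_q = (c + sigma) / (r * c) if scheme == "punishment" else sigma / (r * c)
    B = f_q if scheme == "punishment" else 1.0 - f_q
    delta_enter = c / sum(B ** i for i in range(n))   # delta_s: Q enters through R (z = 0)
    delta_exit = c / (1.0 + B)                        # delta^s: Q exits through N (z = 1)
    return delta_enter, delta_exit

n, c = 5, 1.0
enter, exit_ = q_thresholds(n, 3.0, c, 0.5, "punishment")
assert c / n < enter <= exit_ < c                     # the ordering stated above
print(enter, exit_)
```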

3.3 Classification of Global Dynamics

Here, we analyze in detail the global dynamics using Eqs. (11) and (12), which are well defined on the entire unit square U={(f,z):0≤f≤1,0≤z≤1}. The induced mapping, \(\mathit{cont}:U \to{\rm \varDelta}\), contracts the edge z=1 onto the vertex N. Note that \({\rm C} = (1,0)\) and \({\rm D} = (0,0)\), as well as both ends of the edge z=1, \({\rm N}_{0} = (0,1)\) and \({\rm N}_{1} = (1,1)\), are hyperbolic equilibria, except when they undergo bifurcations (as shown later). We note that, without incentives, the dynamics on the \({\rm N}_{1} {\rm N}_{0}\)-edge is unidirectional toward \({\rm N}_{0}\).
First, we examine penalties. From Eq. (11), the Jacobians at C and \({\rm N}_{1}\) are respectively given by
$$ J_{\rm C} = \begin{pmatrix} c-n\delta& 0 \\ 0 & -[(r-1)c-\sigma] \end{pmatrix} \quad\text{and} \quad J_{{\rm N}_1} = \begin{pmatrix} c-2\delta& 0 \\ 0 & (r-1)c-\sigma \end{pmatrix} . $$
(17)
From our assumption that (r−1)c>σ, it follows that if δ<c/n, then \(\operatorname{det} J_{\rm C} < 0\), and thus, C is a saddle point; otherwise, \(\operatorname{det} J_{\rm C} > 0\) and \(\operatorname{tr} J_{\rm C} < 0\), and thus, C is a sink. Regarding \({\rm N}_{1}\), if δ<c/2, \({\rm N}_{1}\) is a source (\(\operatorname{det} J_{{\rm N}_{1}} > 0\) and \(\operatorname{tr} J_{{\rm N}_{1}} > 0\)); otherwise, \({\rm N}_{1}\) is a saddle (\(\operatorname{det} J_{{\rm N}_{1}} < 0\)). Next, the Jacobians at D and \({\rm N}_{0}\) are respectively given by
$$ J_{\rm D} = \begin{pmatrix} -(c-\delta) & 0 \\ 0 & \sigma+\delta \end{pmatrix} \quad\text{and} \quad J_{{\rm N}_0} = \begin{pmatrix} -(c-\delta) & 0 \\ 0 & -(\sigma+\delta) \end{pmatrix} . $$
(18)
If δ<c, D is a saddle point (\(\operatorname{det}J_{\rm D} < 0\)), and \({\rm N}_{0}\) is a sink (\(\operatorname{det}J_{{\rm N}_{0}} > 0\) and \(\operatorname {tr}J_{{\rm N}_{0}} < 0\)); otherwise, D is a source (\(\operatorname{det}J_{\rm D} > 0\) and \(\operatorname{tr}J_{\rm D} > 0\)), and \({\rm N}_{0}\) is a saddle point (\(\operatorname{det}J_{{\rm N}_{0}} < 0\)).
We also analyze the stability of R. As δ increases from c/n to c, the boundary repellor \({\rm R} = (x_{\rm R},0)\) enters the CD-edge at C and then moves to D. The Jacobian at R is given by
$$ J_{\rm R} = \begin{pmatrix} \delta x_{\rm R} (1-x_{\rm R}) \frac{\partial }{\partial f} f H(f,z) \vert _{\rm R} & \ast\\ 0 & -rcx_{\rm R}+(c+\sigma) \end{pmatrix} . $$
(19)
Its upper diagonal component is positive because ∂H(f,z)/∂f≥0 and H>0, whereas the lower component vanishes at \(x_{\rm R}=f_{\rm Q(p)}=(c + \sigma)/(rc)\). Therefore, if \(f_{\rm Q(p)} < x_{\rm R} < 1\), R is a saddle point (\(\operatorname{det}J_{\rm R} < 0\)) and is stable with respect to z; otherwise, if \(0 < x_{\rm R} < f_{\rm Q(p)}\), R is a source (\(\operatorname{det}J_{\rm R} > 0\) and \(\operatorname{tr}J_{\rm R} > 0\)).
In addition, a new boundary equilibrium \({\rm S} = (x_{\rm S},1)\) can appear along the \({\rm N}_{1} {\rm N}_{0}\)-edge. Solving \(\dot{f}(x_{\rm S},1)=0\) in Eq. (11) yields \(x_{\rm S} = (c-\delta) / \delta\); thus, S is unique. S is a repellor along the edge (as is R). As δ increases, S enters the edge at \({\rm N}_{1}\) (for δ=c/2) and exits it at \({\rm N}_{0}\) (for δ=c). The Jacobian at S is given by
$$ J_{\rm S} = \begin{pmatrix} \delta x_{\rm S} (1-x_{\rm S}) \frac{\partial }{\partial f} f H(f,z) \vert _{\rm S} & \ast\\ 0 & \delta x_{\rm S}^2+(r-1)cx_{\rm S}-\sigma-\delta \end{pmatrix} . $$
(20)
Again, its upper diagonal component is positive. Using \(x_{\rm S} = (c-\delta)/ \delta\), we find that the sign of the lower component changes once, from positive to negative, as δ increases from c/2 to c. Therefore, S is initially a source (\(\operatorname{det}J_{\rm S} > 0\) and \(\operatorname{tr}J_{\rm S} > 0\)) but then turns into a saddle point (\(\operatorname{det}J_{\rm S} < 0\)), which is stable with respect to z.
Let us now turn to rewards. From Eq. (12), the Jacobians at D and \({\rm N}_{0}\) are
$$ J_{\rm D} = \begin{pmatrix} -\!(c-n\delta) & 0 \\ 0 & \sigma \end{pmatrix} \quad\text{and} \quad J_{{\rm N}_0} = \begin{pmatrix} -\!(c-2\delta) & 0 \\ 0 & -\sigma \end{pmatrix} . $$
(21)
If δ<c/n, D is a saddle point (\(\operatorname{det}J_{\rm D} < 0\)); otherwise, D is a source (\(\operatorname{det}J_{\rm D} > 0\) and \(\operatorname {tr}J_{\rm D} > 0\)). Regarding \({\rm N}_{0}\), if δ<c/2, \({\rm N}_{0}\) is a sink (\(\operatorname{det}J_{{\rm N}_{0}} > 0\) and \(\operatorname{tr}J_{{\rm N}_{0}} < 0\)); otherwise, \({\rm N}_{0}\) is a saddle point (\(\operatorname {det}J_{{\rm N}_{0}} < 0\)). Meanwhile, the Jacobians at C and \({\rm N}_{1}\) are
$$ J_{\rm C} = \begin{pmatrix} c-\delta& 0 \\ 0 & -[(r-1)c-\sigma+\delta] \end{pmatrix} \quad\text{and} \quad J_{{\rm N}_1} = \begin{pmatrix} c-\delta& 0 \\ 0 & (r-1)c-\sigma+\delta \end{pmatrix} . $$
(22)
From our assumption that (r−1)c>σ (and hence \((r-1)c>\sigma-\delta\)), it follows that if δ<c, C is a saddle point (\(\operatorname{det}J_{\rm C} < 0\)), and \({\rm N}_{1}\) is a source (\(\operatorname{det}J_{{\rm N}_{1}} > 0\) and \(\operatorname{tr}J_{{\rm N}_{1}} > 0\)); otherwise, C is a sink (\(\operatorname{det}J_{\rm C} > 0\) and \(\operatorname{tr}J_{\rm C} < 0\)), and \({\rm N}_{1}\) is a saddle point (\(\operatorname{det}J_{{\rm N}_{1}} < 0\)).
We also analyze the stability of R. As δ increases from c/n to c, the boundary attractor R enters the CD-edge at D and then moves toward C. The Jacobian at R is given by
$$ J_{\rm R} = \begin{pmatrix} -\delta x_{\rm R} (1-x_{\rm R}) \frac{\partial }{\partial f} (1-f) H(1-f,z) \vert _{\rm R} & \ast\\ 0 & -rcx_{\rm R}+\sigma \end{pmatrix} . $$
(23)
Its upper diagonal component is negative because ∂H(1−f,z)/∂f≤0 and H>0, and the lower component vanishes at \(x_{\rm R} = f_{\rm Q(r)} = \sigma/ (rc)\). Therefore, if \(0 < x_{\rm R} < f_{\rm Q(r)}\), R is a saddle point (\(\operatorname{det}J_{\rm R} < 0\)) and unstable with respect to z; otherwise, if \(f_{\rm Q(r)} < x_{\rm R} < 1\), R is a sink (\(\operatorname{det}J_{\rm R} > 0\) and \(\operatorname{tr}J_{\rm R} < 0\)).
Similarly, a boundary equilibrium S can appear along the \({\rm N}_{1} {\rm N}_{0}\)-edge. Solving \(\dot{f}(x_{\rm S},1)=0\) in Eq. (12) yields \(x_{\rm S} = 1 - (c - \delta) / \delta\), and thus, S is unique. S is an attractor along the edge (as is R). As δ increases, S enters the edge at \({\rm N}_{0}\) (for δ=c/2) and exits at \({\rm N}_{1}\) (for δ=c). The Jacobian at S is
$$ J_{\rm S} = \begin{pmatrix} -\delta x_{\rm S} (1-x_{\rm S}) \frac{\partial }{\partial f} (1-f) H(1-f,z) \vert _{\rm S} & \ast\\ 0 & -[\delta x_{\rm S}^2 - ((r-1)c+2\delta)x_{\rm S} + \sigma] \end{pmatrix} . $$
(24)
As at R, its upper diagonal component is negative. Using \(x_{\rm S} = 1 - (c - \delta) / \delta\), we find that the sign of the lower component changes once, from negative to positive, as δ increases from c/2 to c. Therefore, S is initially a sink (\(\operatorname{det}J_{\rm S} > 0\) and \(\operatorname{tr}J_{\rm S} < 0\)) and then becomes a saddle point (\(\operatorname {det}J_{\rm S} < 0\)), which is unstable with respect to z.
We give a full classification of the global dynamics, as follows (a minimal numerical sketch illustrating the two extreme regimes appears after the list).
1.
For 0≤δ<δ (Figs. 2a and 3a), C and D are saddle points, \({\rm N}_{1}\) is a source, and \({\rm N}_{0}\) is a sink. There is no other equilibrium, and \(\dot{f} < 0\) holds in the interior state space. All interior orbits originate from \({\rm N}_{1}\) and converge to \({\rm N}_{0}\). \({\rm N}_{0}\) is globally stable. After applying the contraction map, we find that the interior of Δ is filled with homoclinic orbits originating from and converging to N.
 
2.
As δ crosses δ (Figs. 2b and 3b), under penalties, C becomes a sink, and the saddle point R enters the CD-edge at C; under rewards, D turns into a source, and R enters the same edge through D.
Penalties. There exists an orbit originating from \({\rm N}_{1}\) and converging to R that separates the basins of attraction of C and \({\rm N}_{0}\). All of the orbits in the basin of \({\rm N}_{0}\) have their α-limits at \({\rm N}_{1}\). Hence, the corresponding region in Δ is filled with homoclinic orbits and is surrounded by a heteroclinic cycle N → R → D → N. However, if the population is in the vicinity of N, small and rare random perturbations will eventually send the population into the basin of attraction of C (as is the case for c/2<δ).
Rewards. There exists an orbit originating from R and converging to \({\rm N}_{0}\). In contrast to the case with penalties, \({\rm N}_{0}\) remains a global attractor. A region separated by the orbit R\({\rm N}_{0}\) encloses orbits with \({\rm N}_{1}\) as their α-limit. Therefore, in Δ, the corresponding region is filled with homoclinic orbits that are surrounded by a heteroclinic cycle N → C → R → N.
 
3.
As δ crosses c/2 (Figs. 2c and 3c), under penalties, \({\rm N}_{1}\) becomes a saddle point, and the source S enters the \({\rm N}_{1} {\rm N}_{0}\)-edge at \({\rm N}_{1}\); under rewards, \({\rm N}_{0}\) becomes a saddle point, and the sink S enters the same edge at \({\rm N}_{0}\). As δ increases, S moves toward \({\rm N}_{0}\) (penalties) or \({\rm N}_{1}\) (rewards).
Penalties. If \(c/2 < \delta_{\rm p}\) holds, then for \(c/2 < \delta< \delta_{\rm p}\), there is still an orbit originating from S and converging to R that separates the interior of Δ into the basins of attraction of C and \({\rm N}_{0}\). All of the orbits in the basin of \({\rm N}_{0}\) have their α-limits at \({\rm N}_{1}\), as before. In Δ, the separatrix NR and the NC-edge now intersect transversally at N, so that a minority of participants (both cooperators and defectors) can successfully enter a population consisting mostly of non-participants.
Rewards. If \(c/2 < \delta_{\rm r}\) holds, then for \(c/2 < \delta < \delta_{\rm r}\), there exists an orbit originating from R and converging to S that divides the interior of Δ into two regions: one of them consists of orbits originating from \({\rm N}_{1}\), corresponding in Δ to a region filled with homoclinic orbits; the other one consists of orbits originating from D.
 
4.
Penalties. As δ crosses \(\delta_{\rm p}\) (Fig. 2d), the saddle point Q enters the interior of Δ through R, which becomes a source. Based on the uniqueness of Q and the Poincaré–Bendixson theorem ([30], Appendix A.3), it follows that there is no homoclinic orbit originating from and converging to Q, and the unstable manifold of Q must consist of an orbit converging to C and an orbit converging to \({\rm N}_{0}\); the stable manifold of Q must consist of an orbit originating from D and an orbit originating from S (or, in the case that \(\delta_{\rm p} < c/2\), from \({\rm N}_{1}\) for \(\delta_{\rm p} < \delta< c/2\)). The stable manifold separates the basins of attraction of C and \({\rm N}_{0}\); the unstable manifold separates the basin for \({\rm N}_{0}\) into two regions. One of these regions is filled with orbits originating from S (or from \({\rm N}_{1}\) under the above conditions) and converging to \({\rm N}_{0}\). For Δ, this means that the corresponding region is filled with homoclinic orbits and is surrounded by a heteroclinic cycle N → Q → N (Fig. 2d). As δ further increases, Q moves across U, from the CD-edge to the \({\rm N}_{1} {\rm N}_{0}\)-edge along the line \(f = f_{\rm Q(p)}\).
Rewards. As δ crosses \(\delta_{\rm r}\) (Fig. 3d), Q enters the interior of Δ through R, which becomes a sink. As δ continues to increase, Q similarly moves to the \({\rm N}_{1} {\rm N}_{0}\)-edge, along the line \(f = f_{\rm Q(r)}\). As under penalties, there is no homoclinic loop for Q, and we now find that the stable manifold of Q must consist of two orbits originating from D and \({\rm N}_{1}\); the unstable manifold of Q must consist of an orbit converging to R and another converging to S (or, in the case that \(\delta_{\rm r} < c/2\), to \({\rm N}_{0}\) for \(\delta_{\rm r} < \delta< c/2\) (Fig. 3c)). The stable manifold separates the basins of attraction of R and S (or \({\rm N}_{0}\) under the above conditions); the unstable manifold separates the basin for S (or \({\rm N}_{0}\)) into two regions. One of these regions is filled with orbits issuing from \({\rm N}_{1}\) and converging to S (or \({\rm N}_{0}\)). The corresponding region in \(\rm \varDelta\) is filled with homoclinic orbits and is surrounded by a heteroclinic cycle N → Q → N (Figs. 3c and 3d). If the population is in the vicinity of N, small and rare random perturbations will eventually send the population into the basin of attraction of R (as is the case for \(\delta^{\rm r} < \delta\)).
 
5.
As δ crosses \(\delta^{\rm p}\) under penalties (Fig. 2e) or \(\delta^{\rm r}\) under rewards (Fig. 3e), Q exits the state space through S, which then becomes a saddle point. For larger values of δ, there is no longer an interior equilibrium.
 
6.
Finally, as δ crosses δ + (Figs. 2f and 3f), R and S simultaneously exit U, through D and \({\rm N}_{0}\) (penalties) or C and \({\rm N}_{1}\) (rewards), respectively. For δ +<δ, \({\rm N}_{1}\) and \({\rm N}_{0}\) are saddle points, D is a source, and C is a sink. \(\dot{f} > 0\) holds throughout the state space. All of the interior orbits originate from D and converge to C. Hence, C is globally stable.
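The two regimes with a single global attractor, items 1 and 6 above, are easy to confirm numerically. The following minimal sketch (illustrative only; forward-Euler integration and the parameter values are our choices, not the paper's method) integrates Eq. (11) under penalties from an arbitrary interior point.

```python
def H(f, z, n):
    w = f + (1.0 - f) * z
    return sum(w ** i for i in range(n - 1)) / sum(z ** i for i in range(n - 1))

def integrate_penalties(f, z, n, r, c, sigma, delta, dt=0.01, steps=150_000):
    """Forward-Euler integration of Eq. (11) on the unit square U."""
    A = (r - 1.0) * c + delta
    for _ in range(steps):
        h = H(f, z, n)
        df = f * (1.0 - f) * (-c + delta + delta * f * h)
        dz = z * (1.0 - z) * (sigma + delta - A * f + delta * f * (1.0 - f) * h)
        f = min(max(f + dt * df, 0.0), 1.0)   # clamp as a numerical safeguard
        z = min(max(z + dt * dz, 0.0), 1.0)
    return f, z

# n = 5, r = 3, c = 1, sigma = 0.5, so delta_- = c/n = 0.2 and delta_+ = c = 1.
print(integrate_penalties(0.3, 0.3, 5, 3.0, 1.0, 0.5, delta=0.1))  # item 1: ends near N0 = (0, 1)
print(integrate_penalties(0.3, 0.3, 5, 3.0, 1.0, 0.5, delta=1.2))  # item 6: ends near C = (1, 0)
```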
 

3.4 Degenerate Dynamics for Pairwise Interactions with n=2

In the specific case of pairwise interactions (n=2), solving Eqs. (14) and (15) with H(f,z)=1 shows that the dynamics has an interior equilibrium only when
$$ \delta=\frac{rc^2}{(r+1)c+\sigma} \ {\rm under} \ {\rm penalties} \ {\rm and} \ \delta= \frac{rc^2}{2rc-\sigma} \ {\rm under} \ {\rm rewards}. $$
(25)
At this specific value of δ, under both incentive schemes, R and S in U undergo a bifurcation simultaneously, and the line \(f = f_{\rm Q}\) given in Eq. (14), which consists of a continuum of equilibria, connects R and S (in Δ, R and N) (Fig. 4). When δ does not take the specific value in Eq. (25), there is no interior equilibrium, and the global dynamics is classified as in the general case n>2 (see items 1–3, 5, and 6 of Sect. 3.3). For pairwise interactions, therefore, the interior dynamics degenerates. This exceptional case was not described in Sasaki et al. [50].
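As a short verification of Eq. (25): for n=2 we have H≡1, so substituting into Eq. (15) and using Eq. (14) gives
$$ \begin{aligned} \text{penalties:}\quad & 1=\frac{c-\delta}{\delta f_{\rm Q(p)}} \;\Longrightarrow\; \delta=\frac{c}{1+f_{\rm Q(p)}}=\frac{c}{1+\frac{c+\sigma}{rc}}=\frac{rc^{2}}{(r+1)c+\sigma},\\ \text{rewards:}\quad & 1=\frac{c-\delta}{\delta (1-f_{\rm Q(r)})} \;\Longrightarrow\; \delta=\frac{c}{2-f_{\rm Q(r)}}=\frac{c}{2-\frac{\sigma}{rc}}=\frac{rc^{2}}{2rc-\sigma}. \end{aligned} $$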

4 Discussion

We considered a model for the evolution of cooperation through institutional incentives and analyzed the evolutionary game dynamics in detail. We employed public good games, which typically assume at least three players. Specifically, based on a public good game with optional participation, we fully analyzed how opting out affects the game dynamics; in particular, opting out can completely overcome a coordination problem associated with punishment for a considerably broader range of parameters than in the case of compulsory participation.
We start by assuming that there is a state-like institution that takes exclusive control of individual-level sanctions in the form of penalties and rewards. In our extended model, nobody is forced to enter a joint enterprise that is protected by the institutional sanctioning. However, whoever is willing to enter must pay the entrance fee. Further, if one proves unable or unwilling to pay, the sanctioning institution can ban that person from participating in the game. Indeed, joint ventures in real life are mostly protected by enforceable contracts in which members can freely choose to participate, but by which they are then bound under a higher authority. For example, anyone can opt not to take a wedding vow, but once it is taken, it is among the strongest enforceable contracts. As far as we know, higher authorities always demand penalties if contracts are broken.
Based on our mathematical analysis, we argue that institutional punishment, rather than institutional reward, is the more viable incentive scheme for cooperation when combined with optional participation. Despite the fact that the expected payoffs include nonlinear terms, the corresponding replicator dynamics can be analyzed completely; in particular, proving that the interior equilibrium for optional participation is unique and a saddle point plays a key role in resolving the global dynamics.
We show that combining optional participation with rewards can only marginally improve group welfare (to the same level as the non-participant’s fixed payoff) for a small range of the per capita incentive δ, with \(\delta_{-} < \delta< \delta_{\rm r}\) (Fig. 3b). Within this interval, compulsory participation can lead to partial cooperation; however, optional participation eliminates the cooperation and thus drives a population into a state in which all players exit. Hence, freedom of participation is not a particularly effective way of boosting cooperation under a rewards scenario.
Under penalties, the situation changes considerably. Indeed, as soon as \(\delta > \delta_-\) (Fig. 2b), the state in which all players cooperate abruptly turns into a global attractor under optional participation. When δ just exceeds \(\delta_-\), group welfare attains its maximum \((r-1)c-\sigma\). Meanwhile, for compulsory participation, most of the (boundary) state space between cooperation and defection still belongs to the basin of attraction of the state in which all players defect. Because \(\delta_-=c/n\), where n is the group size and c is the net contribution cost (a constant), the larger the group, the smaller the minimal per capita sanctioning cost \(\delta_-\) required to establish full cooperation.
Corroborating results for compulsory participation have recently been obtained in continuous public good games with institutional incentives by Cressman et al. [12], who considered the gradual evolution of a continuously varying contribution to a public good. The authors show that rewarding and punishing with probabilities that depend on the player's contribution and those of the co-players can destabilize full defection and stabilize full cooperation, respectively. This model also indicates that combining the best of both incentives would lead the population to full cooperation, irrespective of the initial condition. Looking back at our model, non-participation shares the characteristic of destabilizing full defection; thus, it would be fascinating to investigate how efficiently voluntary rewards [28, 48, 54], instead of voluntary participation, can establish coercion-based cooperation.
In the next two paragraphs, we consider only the penalty scenario and the corresponding coordination situation. There are various approaches to equilibrium selection in n-person coordination games with binary choices [19, 29, 34]. A strand of the literature uses stochastic evolution models [14, 33, 64], in which, typically, the risk-dominant equilibrium [23], which has the larger basin of attraction, is selected through random fluctuations in the long run. In contrast, with optional participation, our model typically selects the cooperation equilibrium, which provides the higher group welfare, even if, under compulsory participation, the cooperation equilibrium has a smaller basin of attraction than the defection equilibrium. In the sense of favoring the efficient equilibrium, our result is similar to that found in a decentralized partner-changing model proposed by Oechssler [36], in which players may occasionally change interaction groups.
Higher-order freeloaders are problematic for decentralized peer-to-peer sanctions [11, 41]. This is not the case, however, for centralized institutional sanctions. In addition, it is clear that sanctioning institutions will stipulate less of the antisocial punishment targeted at cooperators [27] that can prevent the evolution of pro-social behaviors ([44, 46]; see also [18]). Indeed, punishing cooperators essentially promotes defectors, who in turn reduce the number of participants willing to pay for social institutions. For self-sustainability, sanctioning institutions should therefore dismiss any antisocial scheme that may lead to a future reduction in the resources that fund the institution.
Thus, our model restricts the space of possible actions to a very narrow framework of alternative strategies, while still generating considerable complexity. In practice, truly chaotic situations that offer a very long list of possibilities are unfeasible and inconvenient, as described by Michael Ende in “The Prison of Freedom” (1992). Participants in economic experiments can usually make meaningful choices only from a short and regulated list of options, as is the case in real life. Our result indicates that a third party capable of controlling incentives and membership can play a key role in selecting a cooperation equilibrium without ex ante adjustment. The question of how such a social order can emerge out of a world of chaos is left entirely open.

Acknowledgements

We thank Åke Brännström, Ulf Dieckmann, and Karl Sigmund for their comments and suggestions on an earlier version of this paper. This study was enabled by financial support by the FWF (Austrian Science Fund) to Ulf Dieckmann at IIASA (TECT I-106 G11), and was also supported by grant RFP-12-21 from the Foundational Questions in Evolutionary Biology Fund.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Appendix


A.1 Uniqueness of the Interior Equilibrium Q

We show that \(z_{\rm Q}\) is uniquely determined in the general case n>2. Because \(f_{\rm Q}\) is independent of \(z_{\rm Q}\), it is sufficient to show that each of the two equations in Eq. (15) has at most one solution with respect to z; for this, it suffices that H(f,z) is strictly monotonic in z for every z∈(0,1). We first consider penalties. A straightforward computation yields
$$\begin{aligned} \frac{\partial}{\partial z} H(f,z) &= \frac{n-1}{(1-f)(1-z^{n-1})^2} \bigl[z^{n-2} - \bigl(f+(1-f)z\bigr)^{n-2} \bigl((1-f)+fz^{n-2}\bigr)\bigr] \\ &= \frac{(n-1) z^{n-2}}{(1-f)(1-z^{n-1})^2} \\ &\quad {} \times \biggl[ 1- \biggl\{ \biggl( \frac{f+(1-f)z}{z} \biggr) \bigl((1-f)+fz \bigr) \biggr\}^{n-2} \frac{(1-f)+fz^{n-2}}{((1-f)+fz)^{n-2}} \biggr]. \end{aligned}$$
(26)
We note that
$$ \biggl( \frac{f+(1-f)z}{z} \biggr) \bigl((1-f)+fz\bigr) = 1+f(1-f) \biggl( z-2+ \frac{1}{z} \biggr) = 1+f(1-f)\frac{(1-z)^2}{z} > 1, $$
(27)
and
$$ \frac{(1-f)+fz^{n-2}}{((1-f)+fz)^{n-2}} \ge1. $$
(28)
This inequality obviously holds for n=2, and it extends to every larger n by induction: if it holds for n, it must hold for n+1 because
$$ \frac{(1-f)+fz^{n+1}}{((1-f)+fz)^{n+1}} - \frac{(1-f)+fz^{n}}{((1-f)+fz)^{n}} = \frac{f(1-f)(1-z)(1-z^n)}{((1-f)+fz)^{n+1}} > 0. $$
(29)
Consequently, the square bracketed term in the last line of Eq. (26) is negative. Thus, ∂H(f,z)/∂z<0 for every z∈(0,1). We now consider rewards and use the same argument as above. This concludes our proof of the uniqueness of Q.
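
A simple numerical spot-check of this monotonicity (illustrative only; it complements, but does not replace, the proof above):

```python
def H(f, z, n):
    """H(f, z) of Eq. (13) in polynomial form."""
    w = f + (1.0 - f) * z
    return sum(w ** i for i in range(n - 1)) / sum(z ** i for i in range(n - 1))

n = 6
for f in (0.1, 0.5, 0.9):
    values = [H(f, k / 100.0, n) for k in range(1, 100)]
    # strictly decreasing in z, as proved above
    assert all(a > b for a, b in zip(values, values[1:]))
print("H(f, .) is decreasing on the grid for n =", n)
```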

A.2 The Saddle Point Q

We prove that for n>2, Q is a saddle point. We first consider penalties using Eq. (11). Because the square brackets in Eq. (11) vanish at Q, the Jacobian at Q is given by
$$ J_{\rm Q} = \left.\begin{pmatrix} \delta f(1-f) ( H + f \frac{\partial H}{\partial f} ) & \delta f^2 (1-f) \frac{\partial H}{\partial z}\\ z(1-z) [ -A+\delta ( (1-2f)H+f(1-f) \frac {\partial H}{\partial f} ) ] & \delta f(1-f)z(1-z) \frac{\partial H}{\partial z} \end{pmatrix} \right\rvert _{\rm Q}, $$
(30)
where H=H(f,z) and A=(r−1)c+δ. Using ∂H(f,z)/∂z<0, H>0, and A>0, we obtain
$$ \mathrm{det} \, J_{\rm Q} = \delta f^2 (1-f)z(1-z) \bigl[A + \delta f H(f,z)\bigr] \frac{\partial H(f,z)}{\partial z} < 0. $$
(31)
Therefore, Q is a saddle point.
We next consider rewards using Eq. (12). Similarly, we find that the Jacobian at Q is given by
$$ J_{\rm Q} = \left.\begin{pmatrix} \delta f(1-f) ( -H + (1-f) \frac{\partial H}{\partial f} ) & \delta f (1-f)^2 \frac{\partial H}{\partial z} \\ -z(1-z) [ A+\delta ( (1-2f)H+f(1-f) \frac {\partial H}{\partial f} ) ] & -\delta f(1-f)z(1-z) \frac{\partial H}{\partial z} \end{pmatrix} \right\rvert _{\rm Q}, $$
(32)
where H=H(1−f,z) and A is as in Eq. (30). Using ∂H(1−f,z)/∂z<0, H>0, and A>0, it follows again that \(\mathrm{det} \, J_{\rm Q} < 0\). Therefore, Q is a saddle point.
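
The sign of \(\operatorname{det} J_{\rm Q}\) can also be cross-checked numerically. The following sketch (illustrative only; parameter values are arbitrary) locates Q under penalties by bisection and builds the Jacobian of Eq. (11) by central finite differences.

```python
def H(f, z, n):
    w = f + (1.0 - f) * z
    return sum(w ** i for i in range(n - 1)) / sum(z ** i for i in range(n - 1))

def rhs(f, z, n, r, c, sigma, delta):
    """Right-hand side of Eq. (11) (penalties)."""
    h = H(f, z, n)
    A = (r - 1.0) * c + delta
    return (f * (1.0 - f) * (-c + delta + delta * f * h),
            z * (1.0 - z) * (sigma + delta - A * f + delta * f * (1.0 - f) * h))

n, r, c, sigma, delta = 5, 3.0, 1.0, 0.5, 0.6   # delta lies between the thresholds of Eq. (16)
f_q = (c + sigma) / (r * c)                      # Eq. (14)
lo, hi = 0.0, 1.0                                # bisection for z_Q via Eq. (15); H decreases in z
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if H(f_q, mid, n) > (c - delta) / (delta * f_q):
        lo = mid
    else:
        hi = mid
z_q, eps = 0.5 * (lo + hi), 1e-6

# central finite-difference Jacobian of Eq. (11) at Q
J = [[0.0, 0.0], [0.0, 0.0]]
for j, (df, dz) in enumerate(((eps, 0.0), (0.0, eps))):
    plus = rhs(f_q + df, z_q + dz, n, r, c, sigma, delta)
    minus = rhs(f_q - df, z_q - dz, n, r, c, sigma, delta)
    J[0][j] = (plus[0] - minus[0]) / (2 * eps)
    J[1][j] = (plus[1] - minus[1]) / (2 * eps)
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
print("z_Q ~", round(z_q, 4), "det J_Q ~", det)
assert det < 0                                   # Q is a saddle, as proved above
```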

A.3 No Homoclinic Orbit of Q

First, we prove that there is no homoclinic loop that originates from and converges to Q. By the Poincaré–Bendixson theorem [30] and the uniqueness of the interior equilibrium, if such a loop existed, there would be a point p inside the loop whose α- and ω-limit sets both include Q. This contradicts the fact that Q is a saddle point: although there may be a section through Q that the positive and negative orbits of p cross infinitely often, it is impossible for the sequence of crossing points to both originate from and converge to the saddle point Q. Hence, there is no homoclinic orbit of Q.
Next, we show that the two orbits that form the unstable manifold of Q do not converge to the same equilibrium (which would then be a sink). If they did, the closed region surrounded by these orbits would have to include a point q whose ω-limit set is Q. By the Poincaré–Bendixson theorem and the uniqueness of the interior equilibrium, the α-limit set of q would then also have to include Q; this is a contradiction. Similarly, we can prove that the two orbits that form the stable manifold of Q do not issue from the same equilibrium.
Literature
5. Balliet D, Mulder LB, Van Lange PAM (2011) Reward, punishment, and cooperation: a meta-analysis. Psychol Bull 137:594–615. doi:10.1037/a0
23. Harsanyi JC, Selten R (1988) A general theory of equilibrium selection in games. MIT Press, Cambridge
30. Hofbauer J, Sigmund K (1998) Evolutionary games and population dynamics. Cambridge University Press, Cambridge
37.
38. Oliver P (1980) Rewards and punishments as selective incentives for collective action: theoretical investigations. Am J Sociol 85:1356–1375. doi:10.1086/227168
39. Olson M (1965) The logic of collective action: public goods and the theory of groups. Harvard University Press, Cambridge
40.
41. Ostrom E (1990) Governing the commons: the evolution of institutions for collective action. Cambridge University Press, New York
50. Sasaki T, Brännström Å, Dieckmann U, Sigmund K (2012) The take-it-or-leave-it option allows small penalties to overcome social dilemmas. Proc Natl Acad Sci USA 109:1165–1169. doi:10.1073/pnas.1115219109
57. Skyrms B (2004) The stag hunt and the evolution of social structure. Cambridge University Press, Cambridge
58. Sugden R (1986) The economics of rights, co-operation and welfare. Blackwell, Oxford
Metadata
Title
The Evolution of Cooperation Through Institutional Incentives and Optional Participation
Author
Tatsuya Sasaki
Publication date
01-09-2014
Publisher
Springer US
Published in
Dynamic Games and Applications / Issue 3/2014
Print ISSN: 2153-0785
Electronic ISSN: 2153-0793
DOI
https://doi.org/10.1007/s13235-013-0094-7
