Abstract

We consider games of strategic substitutes and complements on networks and introduce two evolutionary dynamics in order to refine their multiplicity of equilibria. Within mean field, we find that for the best-shot game, taken as a representative example of strategic substitutes, replicator-like dynamics does not lead to Nash equilibria, whereas it leads to a unique equilibrium for complements, represented by a coordination game. On the other hand, when the dynamics becomes more cognitively demanding, predictions are always Nash equilibria: for the best-shot game we find a reduced set of equilibria with a definite value of the fraction of contributors, whereas, for the coordination game, symmetric equilibria arise only for low or high initial fractions of cooperators. We further extend our study by considering complex topologies through heterogeneous mean field and show that the nature of the selected equilibria does not change for the best-shot game. However, for coordination games, we reveal an important difference: on infinitely large scale-free networks, cooperative equilibria arise for any value of the incentive to cooperate. Our analytical results are confirmed by numerical simulations and open the question of whether there can be dynamics that consistently leads to stringent equilibria refinements for both classes of games.

1. Introduction

Strategic interactions among individuals located on a network, be it geographical, social, or of any other nature, are becoming increasingly relevant in many economic contexts. Decisions made by our neighbors on the network influence ours and are in turn influenced by their other neighbors, to whom we may or may not be connected. Such a framework makes finding the best strategy a very complex problem, almost always plagued by a very large multiplicity of equilibria. Researchers are devoting much effort to this problem, and an increasing body of knowledge is being consolidated [1–3]. In this work we consider games of strategic substitutes and strategic complements on networks, as discussed in [4]. In that paper, Galeotti et al. obtained an important reduction in the number of game equilibria by going from a complete information setting to an incomplete one. They introduced incomplete information by assuming that each player is only aware of the number of neighbors he/she has, but not of their identity nor of the number of neighbors they have in turn. Here we aim at providing an alternative equilibrium refinement by looking at network games from an evolutionary viewpoint. In particular, we look for the set of equilibria which can be accessed according to two different dynamics for players' strategies and discuss the implications of such a reduction. Furthermore, we go beyond the state-of-the-art mean field approach and consider the role of complex topologies through a heterogeneous mean field technique.

Our work belongs to the literature on strategic interactions in networks and its applications to economics [5–13]. In particular, one of the games we study is a discrete version of a public goods game proposed by Bramoullé and Kranton [14], who opened the way to the problem of equilibrium selection in this kind of games under complete information. Bramoullé further considered this problem [15] for the case of anticoordination games on networks, showing that network effects are much stronger than for coordination games. As already stated, our paper originates from Galeotti et al. [4], as they considered one-shot games with strategic complements and substitutes and modeled the equilibria resulting from incomplete information. Our approach is instead based on evolutionary selection of equilibria—pertaining to the large body of work emanating from the Nash programme [16–19]—and is thus complementary to theirs. In particular, we focus on the analysis of two evolutionary dynamics (see Roca et al. [20] for a review of the literature) in two representative games and on how these dynamics lead to a refinement of the Nash equilibria or to other final states. The dynamics we consider are Proportional Imitation [21, 22], which does not lead in general to Nash equilibria, and best response [23, 24], which instead allows for convergence to Nash equilibria—an issue about which there are a number of interesting results in the case of a well-mixed population [25–27]. As we are working on a network setup, our specific perspective is close to that of Boncinelli and Pin [28]. They elaborate on the literature on stochastic stability [19, 29] (see [24, 30] for an early example of related dynamics on lattices) as a device that selects the equilibria that are more likely to be observed in the long run, in the presence of small errors occurring with a vanishing probability. They work from the observation [31] that different equilibria can be selected depending on assumptions on the relative likelihood of different types of errors. Thus, Boncinelli and Pin work with a best response dynamics and, by means of a Markov chain analysis, find, counterintuitively, that when contributors are the most perturbed players, the selected equilibrium is the one with the highest contribution. The techniques we use here are based on differential equations and have a more dynamical character, and we do not incorporate the possibility of having special distributions of errors—although we do consider random mistakes. Particularly relevant to our work is the paper by López-Pintado [32] (see [33] for an extension to the case of directed networks), where a mean field dynamical approach involving a random subsample of players is proposed. Within this framework, the network is dynamic, as if at each period the network were generated randomly. Then a unique globally stable state of the dynamics is found, although the identities of free riders might change from one period to another. The difference with our work is that we do not deal with a time-dependent subsample of the population, but we use a global mean field approach (possibly depending on the connectivity of individuals) to describe the behavior of a static network.

In the remainder of this introduction we present the games we study and the dynamics we apply for equilibrium refinement in detail, discuss the implications of such a framework on the informational settings we are considering, and summarize our main contributions.

1.1. Framework
1.1.1. Games

We consider a finite set of agents of cardinality $N$, linked together in a fixed, undirected, exogenous network. The links between agents reflect social interactions, and connected agents are said to be “neighbors.” The network is defined through a symmetric matrix $G = \{g_{ij}\}$ with null diagonal, where $g_{ij} = 1$ means that agents $i$ and $j$ are neighbors, while $g_{ij} = 0$ means that they are not. We indicate with $N_i$ the set of $i$'s neighbors; that is, $N_i = \{j : g_{ij} = 1\}$, where the number of such neighbors $k_i = |N_i|$ is the degree of node $i$.

Each player $i$ can take one of two actions $x_i \in \{0, 1\}$, with $x_i$ denoting $i$'s action. Hence, only pure strategies are considered. In our context (particularly for the case of substitutes), action 1 may be interpreted as cooperating and action 0 as not doing so—or defecting. Thus, the two actions are labeled in the rest of the paper as $C$ and $D$, respectively. There is a cost $c$, where $0 < c < 1$, for choosing action $C$, while action $D$ bears no cost.

In what follows we concentrate on two games, the best-shot game and a coordination game, as representative instances of strategic substitutes and strategic complements, respectively. We choose specific examples for the sake of being able to study their dynamics analytically. To define the payoffs we introduce the following notation: $\bar y_i = \sum_{j \in N_i} x_j$ is the aggregate action in $N_i$, and $\bar x_i = x_i + \bar y_i$.

(a) Strategic Substitutes: Best-Shot Game. This game was first considered by Bramoullé and Kranton [14] as a model of the local provision of a public good. As stated above, we consider the discrete version, where there are only two actions available, as in [4, 28]. The corresponding payoff function takes the form

  $\pi_i = \theta(\bar x_i) - c\,x_i$,  (1)

where $\theta(\cdot)$ is the Heaviside step function: $\theta(x) = 1$ if $x > 0$ and $\theta(x) = 0$ otherwise.

(b) Strategic Complements: Coordination Game. For our second example, we follow Galeotti et al. [4] and consider again a discrete version of the game, but now let the payoffs of any particular agent be given by

  $\pi_i = \alpha\,\bar y_i\,x_i - c\,x_i$,  (2)

where the parameter $\alpha > 0$ represents the incentive to coordinate with each cooperating neighbor. Assuming that $\alpha > c$, we are faced with a coordination game where, as discussed in [4], depending on the underlying network and the information conditions, there can generally be multiple equilibria.

1.1.2. Dynamics

Within the two games we have presented above, we now consider evolutionary dynamics for players' strategies. Starting at $t = 0$ with a certain fraction $\rho_0$ of players randomly chosen to undertake action $C$, at each round of the game players collect their payoffs according to their neighbors' actions and the kind of game under consideration. Subsequently, a fraction of players update their strategies. We consider two different mechanisms for strategy updating.

(a) Proportional Imitation (PI) [21, 22]. It represents a rule of imitative nature in which player $i$ may copy the strategy of a selected counterpart $j$, which is chosen randomly among the neighbors of $i$. The probability that $i$ copies $j$'s strategy depends on the difference between the payoffs they obtained in the previous round of the game:

  $P_{ij} \equiv P\{x_i(t+1) = x_j(t)\} = \begin{cases} [\pi_j(t) - \pi_i(t)]/\Lambda & \text{if } \pi_j(t) > \pi_i(t), \\ \varepsilon & \text{otherwise,} \end{cases}$  (3)

where $\Lambda$ is a normalization constant that ensures $P_{ij} \leq 1$ and $\varepsilon$ allows for mistakes (i.e., copying an action that yielded less payoff in the previous round). Note that because of the imitation mechanism of PI, the configurations $x_i = 0\ \forall i$ and $x_i = 1\ \forall i$ are absorbing states: the system cannot escape from them, and not even mistakes can reintroduce strategies, as they always involve imitation. On the other hand, it can be shown that PI is equivalent to the well-known replicator dynamics in the limit of an infinitely large, well-mixed population (equivalently, on a complete graph) [34, 35]. As first put by Schlag [22], the assumption that agents play a random-matching game in a large population and learn the actual payoff of another randomly chosen agent, along with a rule of action that increases their expected payoff, leads to a probability of switching to the other agent's strategy that is proportional to the difference in payoffs. The corresponding aggregate dynamics is like the replicator dynamics. See also [36] for another interpretation of these dynamics in terms of learning.
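To make the rule concrete, here is a minimal sketch of a single PI revision step implementing (3). It is an illustration under our own conventions: the arrays `x` and `payoffs`, the adjacency list `neigh`, and the function name are ours, not code from the paper.

```python
import random

def pi_revision(i, x, payoffs, neigh, Lambda=1.0, eps=0.0, rng=random):
    """One Proportional Imitation revision of player i, following rule (3).

    x        : current actions (0 = D, 1 = C), one entry per player
    payoffs  : payoffs collected in the previous round
    neigh    : adjacency list, neigh[i] = list of i's neighbors
    Lambda   : normalization constant bounding payoff differences
    eps      : probability of imitating a worse- (or equally) performing neighbor
    """
    j = rng.choice(neigh[i])                 # counterpart chosen at random
    gain = payoffs[j] - payoffs[i]
    p = gain / Lambda if gain > 0 else eps   # proportional imitation, mistakes with eps
    if rng.random() < p:
        x[i] = x[j]                          # copy j's strategy
```

Note that the rule only ever copies an existing strategy, which is why the two homogeneous configurations are absorbing.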

(b) Best Response (BR). This rule was introduced in [23, 24] and has been widely used in the economics literature. BR describes players that are rational and choose their strategy (myopically) in order to maximize their payoff, assuming that their neighbors will again do what they did in the last round. This means that each player $i$, given the past actions $\{x_j(t)\}_{j \in N_i}$ of his/her partners, computes the payoffs $\pi_1$ and $\pi_0$ that he/she would obtain by choosing action 1 (cooperating) or 0 (defecting) at time $t+1$, respectively. Then actions are updated as follows:

  $x_i(t+1) = \begin{cases} 1 \text{ with probability } 1-\varepsilon,\ 0 \text{ with probability } \varepsilon & \text{if } \pi_1 > \pi_0, \\ 0 \text{ with probability } 1-\varepsilon,\ 1 \text{ with probability } \varepsilon & \text{if } \pi_1 < \pi_0, \end{cases}$  (4)

and $x_i(t+1) = x_i(t)$ if $\pi_1 = \pi_0$. Here again $\varepsilon$ represents the probability of making a mistake, with $\varepsilon = 0$ indicating fully rational players.
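Analogously, a sketch of one BR revision implementing rule (4) for the two payoff functions (1) and (2); again the bookkeeping (the `game` switch, argument names, and the parameter `alpha` of the coordination game) follows our own conventions.

```python
import random

def br_revision(i, x, neigh, game, c, alpha=1.0, eps=0.0, rng=random):
    """One myopic Best Response revision of player i, following rule (4)."""
    y = sum(x[j] for j in neigh[i])          # cooperating neighbors, last round
    if game == "best_shot":                  # payoffs from eq. (1)
        pi1, pi0 = 1.0 - c, (1.0 if y > 0 else 0.0)
    else:                                    # coordination game, eq. (2)
        pi1, pi0 = alpha * y - c, 0.0
    if pi1 == pi0:
        return                               # indifference: keep current action
    best = 1 if pi1 > pi0 else 0
    x[i] = best if rng.random() > eps else 1 - best   # mistake with probability eps
```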

The reason to study these two dynamics is that they may lead to different results, as they represent very different evolutions of the players' strategies. In this respect, it is important to mention that, in the case $\varepsilon = 0$, Nash equilibria are stable by definition under BR dynamics and, vice versa, any stationary state found by BR is necessarily a Nash equilibrium. On the contrary, with PI this is not always true: even in the absence of mistakes, players can change action by copying better-performing neighbors, even if such a change leads to a decrease of their payoffs in the next round. Another difference between the two dynamics is the amount of cognitive capability they assume for the players: whereas PI refers to agents with very limited rationality, who imitate a randomly chosen neighbor on the only condition that he/she does better, BR requires agents with a much more developed analytic ability.

1.1.3. Analytical and Informational Settings

We study how the system evolves under either of these two dynamics, starting from an initial random distribution of strategies. In particular, we are interested in the global fraction of cooperators $\rho$ and its possible stationary value $\rho^*$. We carry out our calculations in the framework of a homogeneous mean field (MF) approximation, which is most appropriate to study networks with homogeneous degree distribution like Erdös-Rényi random graphs [37]. The basic assumption underlying this approach is that every player interacts with an “average player” that represents the actions of his/her neighbors. More formally, the MF approximation consists in assuming that when a player interacts with a neighbor of theirs, the action of such a neighbor is $C$ with probability $\rho$ (and $D$ otherwise), independently of the particular pair of players considered [38]. Loosely speaking, this amounts to having a very incomplete information setting, in which all players know only how many other players they will engage with, and is reminiscent of that used by Galeotti et al. [4] for their refinement of equilibria. However, the analogy is not perfect and therefore, for the sake of accuracy, we do not dwell any further on the matter. In any case, MF represents our setup for most of the paper.

As an extension of the results obtained in the above context, we also study the case of highly heterogeneous networks, that is, networks with broad degree distribution $P(k)$, such as scale-free ones [39]. In these cases, in fact, there are a few players with many neighbors (“hubs”) and many players with only a few neighbors, and this heterogeneity may give rise to very different behaviors as compared to Erdös-Rényi systems. Analytically, this can be done by means of the heterogeneous mean field technique (HMF) [40], which generalizes, for the case of networks with arbitrary degree distribution, the equations describing the dynamical process by considering degree-block variables grouping nodes within the same degree class. More formally, now when a player interacts with a neighbor of theirs, the action of such a neighbor is $C$ with probability $\rho_k$ (and $D$ otherwise) if $k$ is the neighbor's degree ($\rho_k$ being the density of cooperators within the class of players of degree $k$). By resorting to this second perspective we are able to gain insights on the effects of heterogeneity on the evolutionary dynamics of our games.

1.2. Our Contribution

Within this framework, our main contribution can be summarized as follows. In our basic setup of homogeneous networks (described by the mean field approximation): for the best-shot game, PI leads to a stationary state in which all players play $D$, that is, to full defection, which is however non-Nash, as any player surrounded by defectors would obtain a higher payoff by choosing cooperation (at odds with the standard version of the public goods game). This is the result also in the presence of mistakes, unless the probability of errors becomes large (larger than the cost of cooperation), in which case the stationary state is the opposite, full cooperation, also non-Nash. Hence, PI does not lead to any refinement of the Nash equilibrium structure. On the contrary, BR leads to Nash equilibria characterized by a finite fraction of cooperators $\rho^*$; in the case when players are affected by errors, this fraction coincides with the probability of making an error as the mean degree of the network goes to infinity. The picture is different for the coordination game. In this case, PI does lead to Nash equilibria, selecting the coordination in 0 below a threshold value of the incentive to cooperate and the opposite state otherwise; this threshold is found to depend on the initial fraction of players choosing $C$. Mistakes lead to the appearance of a new possibility, an intermediate value of the fraction of players choosing 1, and as before the initial value of this fraction governs which equilibrium is selected. BR gives similar results, except that a finite fraction of 1 actions can also be found even without mistakes, and with mistakes the equilibria are never full 0 or full 1, as there is always a fraction of mistaken players. Finally, changing the analytical setting by proceeding to the heterogeneous mean field approach does not lead to any dramatic change in the structure of the equilibria for the best-shot game. Interestingly, things change significantly for coordination games—when played on infinitely large scale-free networks. In this case, which is the one where the heterogeneous mean field should make a difference, equilibria with nonvanishing cooperation are obtained for any value of the incentive to cooperate (represented by the parameter $\alpha$).

The paper is organized in seven sections including this introduction. Section 2 presents our analysis and results for the best-shot game. Section 3 deals with the coordination game. In both cases, the analytical framework is that of the mean field technique. After an overall analysis of global welfare performed in Section 4, Section 5 presents the extensions of the results for both games within the heterogeneous mean field approach, including some background on the formalism itself. Finally, Section 6 contains an assessment of the validity of all these analytical findings in light of the results of recent numerical simulations of the system described above, and Section 7 concludes the paper summarizing our most important findings concerning the refinement of equilibria in network games and pointing to relevant open questions.

2. Best-Shot Game

2.1. Proportional Imitation

We begin by considering the case of strategic substitutes when imitation of a neighbor is only possible if he/she has obtained a better payoff than the focal player; that is, $\varepsilon = 0$ in (3). In that case, the main result is the following.

Proposition 1. Within the mean field formalism, under PI dynamics, when $\varepsilon = 0$ the final state for the population is the absorbing state with a density of cooperators $\rho = 0$ (full defection), except if the initial state is full cooperation.

Proof. Working in a mean field context means that individuals are well-mixed; that is, every player interacts with average players. In this case the differential equation for the density of cooperators is

  $\dot\rho = (1-\rho)\,\rho\,P_{D\to C} - \rho\,(1-\rho)\,P_{C\to D}$.  (5)

The first term is the probability of picking a defector with a neighboring cooperator, times the probability of imitation $P_{D\to C}$. The second term is the probability of picking a cooperator with a neighboring defector, times the probability of imitation $P_{C\to D}$. In the best-shot game a defector cannot copy a neighboring cooperator (who has lower payoff by construction), whereas a cooperator eventually copies one of his/her neighboring defectors (who has higher payoff). Hence $P_{D\to C} = 0$, and $P_{C\to D}$ is equal to the payoff difference $\pi_D - \pi_C = c$ divided by $\Lambda$. Since the normalization constant is $\Lambda = 1$ for strategic substitutes, (5) becomes

  $\dot\rho = -c\,\rho\,(1-\rho)$.  (6)

The solution, for any initial condition $\rho_0$, is

  $\rho(t) = \dfrac{\rho_0}{\rho_0 + (1-\rho_0)\,e^{ct}}$;  (7)

hence $\rho \to 0$ for $t \to \infty$: the only stationary state is full defection unless $\rho_0 = 1$.
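As a quick cross-check of Proposition 1, the sketch below integrates (6) numerically and compares the result with the closed form (7); the parameter values are ours and purely illustrative.

```python
import math

c, rho0, dt, T = 0.3, 0.9, 1e-3, 30.0

# Euler integration of eq. (6): d(rho)/dt = -c rho (1 - rho)
rho = rho0
for _ in range(int(T / dt)):
    rho += dt * (-c * rho * (1.0 - rho))

# Closed form (7): rho(t) = rho0 / (rho0 + (1 - rho0) exp(c t))
exact = rho0 / (rho0 + (1.0 - rho0) * math.exp(c * T))
print(f"numerical {rho:.6f} vs exact {exact:.6f}")   # both decay towards 0
```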

Remark 2. As discussed above, PI does not necessarily lead to Nash equilibria as asymptotic, stationary states. This is clear in this case: for any $\rho_0 < 1$ the population ends up in full defection, even if every individual player would be better off by switching to cooperation. This phenomenon is likely to arise from the substitutes or anticoordination character of the game: in a context in which it is best to do the opposite of the other players, imitation does not seem the best way for players to decide on their actions.

Proposition 3. Within the mean field formalism, under PI dynamics, when $\varepsilon > 0$ the final state for the population is the absorbing state $\rho = 0$ (full defection) when $\varepsilon < c$, the initial state $\rho = \rho_0$ when $\varepsilon = c$, and $\rho = 1$ (full cooperation) when $\varepsilon > c$. When the initial state is $\rho_0 = 0$ or $\rho_0 = 1$, it remains unchanged.

Proof. Equation (5) is still valid, with $P_{C\to D} = c$ unchanged, whereas now $P_{D\to C} = \varepsilon$ (a defector can only copy a worse-performing cooperator by mistake). By introducing the effective cost $\tilde c = c - \varepsilon$ we can rewrite (7) as

  $\rho(t) = \dfrac{\rho_0}{\rho_0 + (1-\rho_0)\,e^{\tilde c t}}$;  (8)

hence, for $t \to \infty$, $\rho \to 0$ only for $\tilde c > 0$ ($\varepsilon < c$); instead for $\tilde c = 0$ ($\varepsilon = c$) then $\rho = \rho_0$ at all times, and for $\tilde c < 0$ ($\varepsilon > c$) then $\rho \to 1$ (cooperation is favored now).

Remark 4. As before, PI does not drive the population to a Nash equilibrium, independently of the probability of making a mistake. However, mistakes do introduce a bias towards cooperation and thus a new scenario: when their probability exceeds the cost of cooperating ($\varepsilon > c$), the whole population ends up cooperating.

2.2. Best Response

We now turn to the case of the best response dynamics, which (at least for $\varepsilon = 0$) is guaranteed to drive the system towards Nash equilibria. In this scenario, we have not been able to find a rigorous proof of our main result, but we can make some approximations in the equation that support it. As we will see, our main conclusion is that, within the mean field formalism under BR dynamics, when $\varepsilon = 0$ the final state for the population is a mixed state $\rho^*$, with $0 < \rho^* < 1$, for any initial condition.

Indeed, for BR dynamics without mistakes, the homogeneous mean field equation for $\rho$ is

  $\dot\rho = -\rho\,P(\pi_0 > \pi_1) + (1-\rho)\,P(\pi_1 > \pi_0)$,  (9)

where the first term is the probability of picking a cooperator who would do better by defecting, and the second term is the probability of picking a defector who would do better by cooperating. Thus far, no approximation has been made; however, these two probabilities cannot be exactly computed and we need to estimate them.

To evaluate the two probabilities, we can recall that $\pi_1 = 1 - c$ always, whereas $\pi_0 = 0$ when none of the neighbors cooperates and $\pi_0 = 1$ otherwise. Therefore, for an average player of degree $k$ we have that $P(\pi_1 > \pi_0) = P(\bar y = 0) = (1-\rho)^k$. Consistently with the mean field framework we are working in, as a rough approximation we can assume that every player has degree $\langle k\rangle$ (the average degree of the network), so that $P(\pi_1 > \pi_0) = (1-\rho)^{\langle k\rangle}$. Thus, we have

  $\dot\rho = -\rho\left[1 - (1-\rho)^{\langle k\rangle}\right] + (1-\rho)(1-\rho)^{\langle k\rangle} = (1-\rho)^{\langle k\rangle} - \rho$.  (10)

To go beyond this simple estimation, we can work out a better approximation by integrating over the probability distribution of players' degrees $P(k)$. For Erdös-Rényi random graphs, in the limit of large populations ($N \to \infty$), it is $P(k) = e^{-\langle k\rangle}\langle k\rangle^k/k!$. This leads to $\sum_k P(k)(1-\rho)^k = e^{-\langle k\rangle\rho}$ and, subsequently,

  $\dot\rho = e^{-\langle k\rangle\rho} - \rho$.  (11)
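The two stationarity conditions are transcendental but easy to solve numerically. A minimal sketch of our own (bisection works because both right-hand sides are decreasing in $\rho$):

```python
import math

def solve(f, lo=0.0, hi=1.0):
    """Bisection for the root of a decreasing function f on [lo, hi]."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for k in (2, 4, 8, 16):
    reg = solve(lambda r: (1 - r) ** k - r)       # stationarity of eq. (10)
    poi = solve(lambda r: math.exp(-k * r) - r)   # stationarity of eq. (11)
    print(f"<k>={k:2d}  rho*={reg:.4f} (regular)  rho*={poi:.4f} (Poisson)")
```

Both estimates decrease with $\langle k\rangle$, anticipating the behavior discussed in Remark 6.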

Remark 5. The precise asymptotic value $\rho^*$ of the density of cooperators depends on the approximation considered above. However, at least for networks that are not too inhomogeneous, the approximations turn out to be very good, and therefore the corresponding picture for the evolution of the population is quite accurate. It is interesting to note that, whatever its exact value, in both cases $\rho^*$ is such that the right-hand sides of (10) and (11) vanish and, furthermore, $\rho^*$ is an attractor of the dynamics, because $\dot\rho > 0$ for $\rho < \rho^*$ and $\dot\rho < 0$ for $\rho > \rho^*$.

How is the above result modified by mistakes? When $\varepsilon > 0$, (9) becomes

  $\dot\rho = -\rho\left[(1-\varepsilon)P(\pi_0 > \pi_1) + \varepsilon P(\pi_1 > \pi_0)\right] + (1-\rho)\left[(1-\varepsilon)P(\pi_1 > \pi_0) + \varepsilon P(\pi_0 > \pi_1)\right]$,  (12)

where the first term accounts for cooperators switching to defection (rightfully when defection is their best response, or by mistake when it is not), while the second term accounts for defectors switching to cooperation (again, correctly or by mistake). Proceeding as before, and again in the limit $N \to \infty$, we approximate $P(\pi_1 > \pi_0) \simeq e^{-\langle k\rangle\rho}$, thus arriving at

  $\dot\rho = \varepsilon - \rho + (1-2\varepsilon)\,e^{-\langle k\rangle\rho}$,  (13)

from which it is possible to find the attractor of the dynamics $\rho^*_\varepsilon$. Such an attractor in turn exists provided $\varepsilon$ does not exceed a threshold that is bounded below by 1/2, the value which would be tantamount to players choosing their action at random. Therefore, all reasonable values ($\varepsilon < 1/2$) of the probability of errors allow for equilibria.
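The attractor of (13) can be obtained in the same way; the following sketch (our own illustration) also shows the large-$\langle k\rangle$ behavior discussed in Remark 7 below.

```python
import math

def rho_star(k_avg, eps):
    """Attractor of eq. (13): rho = eps + (1 - 2 eps) exp(-<k> rho)."""
    f = lambda r: eps + (1 - 2 * eps) * math.exp(-k_avg * r) - r
    lo, hi = 0.0, 1.0
    for _ in range(100):                      # bisection, f is decreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

for eps in (0.0, 0.05, 0.1):
    print(eps, [round(rho_star(k, eps), 4) for k in (4, 16, 64, 256)])
# as <k> grows, rho* approaches eps: only mistaken players keep cooperating
```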

Remark 6. To gain some insight on the cooperation levels arising from BR dynamics in the Nash equilibria, we have numerically solved (13). The values for $\rho^*_\varepsilon$ are plotted in Figure 1 for different values of $\varepsilon$, as a function of $\langle k\rangle$. We observe that the larger $\langle k\rangle$, the lower the cooperation level. The intuition behind such a result is that the more connections every player has, the lower the need to play 1 to ensure obtaining a positive payoff. It could then be thought that this conclusion is reminiscent of the equilibria found for best-shot games in [4], which are nonincreasing in the degree. However, this is not the case, as in our work we are considering an iterated game that can perfectly lead to high degree nodes having to cooperate. Note also that this approach leads to a definite value for the density of cooperators in the Nash equilibrium, but there can be many action profiles for the players compatible with that value, so the multiplicity of equilibria is reduced but not suppressed.

Remark 7. From Figure 1 it is also apparent that as the likelihood of mistakes increases, the density of cooperators at equilibrium increases. Note that for very large values of the connectivity $\langle k\rangle$, (13) has solution $\rho^*_\varepsilon = \varepsilon$, and thus cooperation survives only through errors, in agreement with the fact that when a player has many neighbors he/she can assume that a fraction $\varepsilon$ of them will cooperate (by mistake), thus turning defection into his/her BR.

3. Coordination Game

We now turn to the case of strategic complements, exemplified by our coordination game. As above, we start from the case without mistakes, and we subsequently see how they affect the results.

3.1. Proportional Imitation

Proposition 8. Within the mean field formalism, under PI dynamics, when $\varepsilon = 0$ the final state for the population is the absorbing state with a density of cooperators $\rho = 0$ (full defection) when $\rho_0 < c/(\alpha\langle k\rangle)$, and the absorbing state with $\rho = 1$ (full cooperation) when $\rho_0 > c/(\alpha\langle k\rangle)$. In the case $\rho_0 = c/(\alpha\langle k\rangle)$ both outcomes are possible.

Proof. Still within our homogeneous mean field context, the differential equation for the density of cooperators is again (5). As we are in the case in which $\varepsilon = 0$, we have that

  $P_{D\to C} = \max\{(\pi_C - \pi_D)/\Lambda,\,0\}, \qquad P_{C\to D} = \max\{(\pi_D - \pi_C)/\Lambda,\,0\}$,  (14)

where for strategic complements the normalization constant $\Lambda$ has to be of order $\alpha\langle k\rangle$, the largest attainable payoff difference. Given that $\pi_C - \pi_D = \alpha\bar y - c$ and that, consistently with our MF framework, $\bar y \simeq \langle k\rangle\rho$, we find

  $\dot\rho = \rho(1-\rho)(a\rho - b)$,  (15)

where we have introduced the values $a = \alpha\langle k\rangle/\Lambda$ and $b = c/\Lambda$.
It is easy to see that $\rho^* = b/a = c/(\alpha\langle k\rangle)$ is an unstable equilibrium, as $\dot\rho < 0$ for $\rho < \rho^*$ and $\dot\rho > 0$ for $\rho > \rho^*$. Therefore, we have two different cases: when $\rho_0 > \rho^*$ then $\rho$ grows and the final state is full cooperation ($\rho = 1$), whereas when $\rho_0 < \rho^*$ then $\rho$ shrinks and the outcome is full defection ($\rho = 0$). When $\rho_0 = \rho^*$ then $\dot\rho = 0$ at all times, so both outcomes are in principle possible (any perturbation takes the system towards either absorbing state).
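A numerical illustration of the proof, with our own arbitrary parameter values: integrating (15) from the two sides of the separatrix $\rho^* = c/(\alpha\langle k\rangle)$ leads to the two opposite absorbing states.

```python
c, alpha, k_avg = 0.5, 1.0, 4.0              # illustrative values
Lam = alpha * k_avg                          # normalization of order alpha <k>
a, b = alpha * k_avg / Lam, c / Lam
rho_sep = b / a                              # separatrix c / (alpha <k>) = 0.125

def evolve(rho, dt=1e-3, T=200.0):
    """Euler integration of eq. (15): d(rho)/dt = rho (1 - rho)(a rho - b)."""
    for _ in range(int(T / dt)):
        rho += dt * rho * (1 - rho) * (a * rho - b)
    return rho

print(evolve(rho_sep - 0.01), evolve(rho_sep + 0.01))
# ~0 (full defection) below the separatrix, ~1 (full cooperation) above it
```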

Remark 9. The same (but opposite) intuition we discussed in Remark 2 about the outcome of PI on substitute games suggests that imitation is indeed a good procedure to choose actions in a coordination setup. In fact, contrary to the case of the best-shot game, in the coordination game PI does lead to Nash equilibria, and indeed it makes a very precise prediction: a unique equilibrium that depends on the initial density. Turning around the condition for the separatrix, coordination in 0 is selected whenever $\rho_0 < c/(\alpha\langle k\rangle)$; that is, when few people cooperate initially, evolution leads to everybody defecting, and vice versa. In any event, having a unique equilibrium (except exactly at the separatrix) is a remarkable achievement.

Remark 10. In a system where players may have different degrees, while full defection is always a Nash equilibrium for the coordination game, full cooperation becomes a Nash equilibrium only when $c < \alpha k_{\min}$, where $k_{\min}$ is the smallest degree in the network—which means that only networks with $k_{\min} > c/\alpha$ feature a fully cooperative Nash equilibrium.

When $\varepsilon > 0$, the problem becomes much more involved and we have not been able to prove rigorously our main result. In fact, now (3) gives $P_{D\to C} = a\rho - b$ and $P_{C\to D} = \varepsilon$ whenever $\rho > b/a$, and $P_{D\to C} = \varepsilon$ and $P_{C\to D} = b - a\rho$ whenever $\rho < b/a$. Equation (15) thus becomes

  $\dot\rho = \rho(1-\rho)\left[a\rho - b - \varepsilon\,\mathrm{sgn}(a\rho - b)\right]$,  (16)

where we have used the values $a$ and $b$ introduced above. We then have three different cases which we can treat approximately: (i) When $\varepsilon \ll |a\rho - b|$, the mistake term is negligible and (16) reduces to (15); that is, we would recover the result for the case with no mistakes. (ii) When $\rho > b/a$, (16) can be rewritten as

  $\dot\rho = \rho(1-\rho)(a\rho - b - \varepsilon)$,  (17)

with root $\rho^*_+ = (b+\varepsilon)/a$. This value leads to an unstable equilibrium; in particular, $\dot\rho > 0$ for $\rho > \rho^*_+$, so that $\rho \to 1$, whereas for $b/a < \rho < \rho^*_+$ the density decreases towards $b/a$. (iii) Finally, when $\rho < b/a$, (16) can be rewritten as

  $\dot\rho = \rho(1-\rho)(a\rho - b + \varepsilon)$,  (18)

with root $\rho^*_- = (b-\varepsilon)/a$. As before, $\rho^*_-$ gives an unstable equilibrium, because $\dot\rho < 0$ for $\rho < \rho^*_-$, so that $\rho \to 0$, while for $\rho^*_- < \rho < b/a$ the density increases back towards $b/a$.
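The three regimes can be visualized with a direct integration of (16); the values of $a$, $b$, and $\varepsilon$ below are ours and purely illustrative.

```python
def evolve_eps(rho, eps, a=1.0, b=0.125, dt=1e-3, T=400.0):
    """Euler integration of eq. (16): the better-performing neighbor is
    imitated proportionally, the worse-performing one with probability eps."""
    for _ in range(int(T / dt)):
        drive = a * rho - b
        up = drive if drive > 0 else eps     # P(D -> C)
        down = -drive if drive < 0 else eps  # P(C -> D)
        rho += dt * rho * (1 - rho) * (up - down)
    return rho

eps = 0.05                                   # rho*- = 0.075, rho*+ = 0.175
for rho0 in (0.02, 0.10, 0.30):
    print(rho0, round(evolve_eps(rho0, eps), 3))
# below rho*- -> 0; between rho*- and rho*+ -> b/a = 0.125; above rho*+ -> 1
```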

Remark 11. In summary, the region $\rho^*_- < \rho_0 < \rho^*_+$ becomes a finite basin of attraction for the dynamics, whose attractor is the intermediate state $\rho = b/a$. Note that when $\varepsilon > \max\{b, a-b\}$, neither $\rho^*_-$ nor $\rho^*_+$ falls inside the unit interval, and $\rho = b/a$ becomes the attractor in the whole space. Our analysis thus shows that, for a range of initial densities of cooperators, there is a dynamical equilibrium characterized by an intermediate value of $\rho$, which is neither full defection nor full cooperation. Instead, for small enough or large enough values of $\rho_0$, the system evolves towards the fully defective or fully cooperative Nash equilibrium, respectively.

Remark 12. The intuition behind the result above could be that mistakes can take a number of people away from the equilibrium, be it full defection or full cooperation, and that this takes place in a range of initial conditions that grows with the likelihood of mistakes.

3.2. Best Response

Considering now the case of BR dynamics, the coordination game is no different from the best-shot game in that we cannot find rigorous proofs for our results, although we believe that we can substantiate them on firm grounds. To proceed, for this case (9) becomes

  $\dot\rho = -\rho\,P(\bar y < c/\alpha) + (1-\rho)\,P(\bar y > c/\alpha)$,  (19)

where we have taken into account that $\pi_1 = \alpha\bar y - c$ and $\pi_0 = 0$. Assuming that every node has degree $\langle k\rangle$, that is, a regular random network, it is clear that there must be at least $c/\alpha$ neighboring cooperators in order to have $\pi_1 > \pi_0$. Thus

  $P(\bar y > c/\alpha) = \sum_{m > c/\alpha} \binom{\langle k\rangle}{m}\rho^m(1-\rho)^{\langle k\rangle - m}$.  (20)

Once again, the difficulty is to show that $\tilde\rho = c/(\alpha\langle k\rangle)$ is the unstable equilibrium. However, we can follow the same approach used with PI and write $P(\bar y > c/\alpha) \simeq \theta(\rho - \tilde\rho)$; that is, we approximate this probability as a Heaviside step function with threshold in $\tilde\rho$. We then again have three different cases as follows: (i) If $\rho_0 > \tilde\rho$, then $\theta(\rho_0 - \tilde\rho) = 1$: we have $\dot\rho = 1 - \rho$ and the attractor becomes $\rho = 1$. (ii) If $\rho_0 < \tilde\rho$, then $\theta(\rho_0 - \tilde\rho) = 0$: we have $\dot\rho = -\rho$ and a stable equilibrium at $\rho = 0$. (iii) Finally, if $\rho_0 = \tilde\rho$, then both switching probabilities vanish: we have $\dot\rho = 0$ and an equilibrium at the intermediate density $\tilde\rho$.

Remark 13. As one can see, even without mistakes, BR equilibria with intermediate values of the density of cooperators can be obtained in a range of initial densities (a range that the crude step-function approximation above shrinks to the single point $\tilde\rho$). Compared to the situation with PI, in which we only found the absorbing states as equilibria, this points to the fact that more rational players would eventually converge to equilibria with higher payoffs. It is interesting to note that such equilibria could be related to those found by Galeotti et al. [4] in the sense that not everybody in the network chooses the same action; however, we cannot make a more specific connection as we cannot detect which players choose which action—see, however, Section 5.2.2.

A similar approach allows some insight on the situation $\varepsilon > 0$. We start again from (12), which now reduces to

  $\dot\rho = \varepsilon - \rho + (1-2\varepsilon)\,P(\bar y > c/\alpha)$.  (21)

Approximating $P(\bar y > c/\alpha) \simeq \theta(\rho - \tilde\rho)$ as before, we again have the same three different cases (provided $\varepsilon < \tilde\rho < 1-\varepsilon$): (i) If $\rho_0 = \tilde\rho$, then the attractor $\tilde\rho$ is unaffected by the particular value of $\varepsilon$. (ii) If $\rho_0 > \tilde\rho$, then the stable equilibrium lies at $\rho = 1-\varepsilon$. (iii) If $\rho_0 < \tilde\rho$, then the stable equilibrium is at $\rho = \varepsilon$.

Remark 14. Adding mistakes to BR does not change the results dramatically, as instead occurred with PI. The only relevant change is that equilibria for low or high densities of cooperators are never homogeneous, as there is a percentage of the population that chooses the wrong action. Other than that, the situation is basically the same, with a range of densities converging to an intermediate amount of cooperators.

4. Analysis of Global Welfare

Having found the equilibria selected by different evolutionary dynamics, it is interesting to inspect their corresponding welfare (measured in terms of average payoffs). We can again resort to the mean field approximation to approach this problem.

Best-Shot Game. In this case the payoff of player $i$ is given by (1): $\pi_i = \theta(\bar x_i) - c x_i$. Within the mean field approximation, for a generic player with degree $k$, we can approximate the theta function as $\langle\theta(\bar x_i)\rangle \simeq \rho + (1-\rho)[1-(1-\rho)^k]$, where the first term is the contribution given by player $i$ cooperating ($x_i = 1$), whereas the second term is the contribution of player $i$ defecting ($x_i = 0$) and at least one of $i$'s neighbors cooperating ($x_j = 1$ for at least one $j \in N_i$). It follows easily that

  $\langle\pi\rangle = 1 - (1-\rho)\sum_k P(k)(1-\rho)^k - c\rho$.  (22)

If $P(k) = \delta(k - \langle k\rangle)$ (where $\delta$ stands for the Dirac delta function), then $\langle\pi\rangle = 1 - (1-\rho)^{\langle k\rangle+1} - c\rho$, whereas, if $P(k)$ is the Poisson distribution, then $\langle\pi\rangle = 1 - (1-\rho)e^{-\langle k\rangle\rho} - c\rho$. We recall that in the simple case where players do not make mistakes ($\varepsilon = 0$), PI leads to a stationary cooperation level $\rho = 0$, which corresponds to $\langle\pi\rangle = 0$. On the other hand, with BR the stationary value of $\rho$ is given by (10) or (11), both leading to $\langle\pi\rangle^* = 1 - (1-\rho^*)\rho^* - c\rho^*$. As long as $\rho^* < c$, it is $\langle\pi\rangle^* > 1 - c$ (the payoff of full cooperation). We thus see that under BR players are indeed able to self-organize into states with high values of welfare in a nontrivial manner: defectors are not too many and are placed on the network so that each of them is connected to at least one cooperator (and thus gets the payoff 1); this, together with cooperators getting $1-c$ by construction, results in a state of higher welfare than full cooperation.
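A short numerical check of the welfare comparison, with illustrative values of $c$ and $\langle k\rangle$ of our own choosing:

```python
import math

def rho_star(k_avg):
    """Stationary BR cooperation level from eq. (11): rho = exp(-<k> rho)."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if math.exp(-k_avg * mid) - mid > 0 else (lo, mid)
    return 0.5 * (lo + hi)

c, k_avg = 0.4, 8.0
r = rho_star(k_avg)                       # here rho* ~ 0.20 < c
welfare_br = 1 - (1 - r) * r - c * r      # Poisson case of (22), using exp(-<k> r) = r
print(r, welfare_br, 1 - c)               # BR welfare exceeds 1 - c (full cooperation)
```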

Coordination Game. Now player $i$'s payoff is given by (2): $\pi_i = \alpha\bar y_i x_i - c x_i$. Again within the mean field framework we approximate the term $\langle\bar y_i x_i\rangle$ as $\langle k\rangle\rho^2$, and we immediately obtain

  $\langle\pi\rangle = \alpha\langle k\rangle\rho^2 - c\rho$.  (23)

$\langle\pi\rangle$ is thus a convex function of $\rho$, which attains its maximum value at $\rho = 1$ when $c < \alpha\langle k\rangle$, and at $\rho = 0$ for $c > \alpha\langle k\rangle$. Recalling that, in the simple case $\varepsilon = 0$, with both PI and BR there are two different stationary regimes ($\rho = 0$ for $\rho_0 < \tilde\rho$ and $\rho = 1$ for $\rho_0 > \tilde\rho$, with $\tilde\rho = c/(\alpha\langle k\rangle)$), we immediately see that for $\rho_0 > \tilde\rho$ the stationary state maximizes welfare, and the same happens for $\rho_0 < \tilde\rho$ when $c > \alpha\langle k\rangle$ (in which case $\tilde\rho > 1$ and full defection is the only possible outcome). However, in the intermediate region $\rho_0 < \tilde\rho$ with $c < \alpha\langle k\rangle$, the stationary state is $\rho = 0$ but payoffs are not optimal.

5. Extension: Higher Heterogeneity of the Network

In the two previous sections we have confined ourselves to the case in which the only information about the network we use is the mean degree, that is, how many neighbors players interact with on average. However, in many cases we may consider information on details of the network, such as the degree distribution, and this is relevant as most networks of a given nature (e.g., social) are usually more complex and heterogeneous than Erdös-Rényi random graphs. The heterogeneous mean field (HMF) technique [40] is a very common theoretical tool [41] to deal with the intrinsic heterogeneity of networks. It is the natural generalization of the usual mean field (homogeneous mixing) approach to networks characterized by a broad distribution of the connectivity. The fundamental assumption underlying HMF is that the dynamical state of a vertex depends only on its degree $k$. In other words, all vertices having the same number of connections have exactly the same dynamical properties. HMF theory can also be interpreted as assuming that the dynamical process takes place on an annealed network [41], that is, a network where connections are completely reshuffled at each time step, with the sole constraints that both the degree distribution $P(k)$ and the conditional probability $P(k'|k)$ (i.e., the probability that a node of degree $k$ has a neighbor of degree $k'$, thus encoding topological correlations) remain constant.

Note that in the following HMF calculations we always assume that our network is uncorrelated; that is, $P(k'|k) = k'P(k')/\langle k\rangle$. This is consistent with our minimal informational setting, as it represents the most natural assumption we can make.

5.1. Best-Shot Game
5.1.1. Proportional Imitation

In this framework, considering more complex network topologies does not change the results we found before, and we again find a final state that is not a Nash equilibrium, namely, full defection.

Proposition 15. In the HMF setting, under PI dynamics, when $\varepsilon = 0$ the final state for the population is the absorbing state with a density of cooperators $\rho = 0$ (full defection), except if the initial state is full cooperation.

Proof. The HMF technique proceeds by building the $k$-block variables: we denote by $\rho_k$ the density of cooperators among players of degree $k$. The differential equation for the density of cooperators of degree $k$ is

  $\dot\rho_k = (1-\rho_k)\sum_{k'} P(k'|k)\,\rho_{k'}\,P_{D\to C}(k,k') - \rho_k\sum_{k'} P(k'|k)\,(1-\rho_{k'})\,P_{C\to D}(k,k')$.  (24)

The first term is the probability of picking a defector of degree $k$ with a neighboring cooperator of degree $k'$, times the probability of imitation $P_{D\to C}(k,k')$ (all summed over $k'$), whereas the second term is the probability of picking a cooperator of degree $k$ with a neighboring defector of degree $k'$, times the probability of imitation $P_{C\to D}(k,k')$ (again, all summed over $k'$). For the best-shot game when $\varepsilon = 0$, we have

  $P_{D\to C}(k,k') = 0, \qquad P_{C\to D}(k,k') = c$.  (25)

We now introduce these values in (24) and, using the uncorrelated network assumption, we arrive at

  $\dot\rho_k = -c\,\rho_k\,(1-Y)$,  (26)

where we have introduced the probability to find a cooperator following a randomly chosen link:

  $Y = \dfrac{\sum_k k\,P(k)\,\rho_k}{\langle k\rangle}$.  (27)

The corresponding differential equation for $Y$ reads

  $\dot Y = -c\,Y\,(1-Y)$,  (28)

and its solution has the same form of (7):

  $Y(t) = \dfrac{Y_0}{Y_0 + (1-Y_0)\,e^{ct}}$,  (29)

with $Y_0 \equiv Y(0)$. Hence $Y \to 0$ for $t \to \infty$, which implies $\rho_k \to 0$ for all $k$ and thus $\rho = \sum_k P(k)\rho_k \to 0$.

Remark 16. For the best-shot game with PI, the particular form of the degree distribution does not change anything. The outcome of evolution still is full defection, thus indicating that the failure to find a Nash equilibrium arises from the (bounded rational) dynamics and not from the underlying population structure. Again, this suggests that imitation is not a good procedure for the players to decide in this kind of games.

Proposition 17. In the HMF setting, under PI dynamics, when $\varepsilon > 0$ the final state for the population is the absorbing state $\rho = 0$ (full defection) when $\varepsilon < c$, a mixed state in which $Y$ keeps its initial value when $\varepsilon = c$, and $\rho = 1$ (full cooperation) when $\varepsilon > c$. When the initial state is $\rho_0 = 0$ or $\rho_0 = 1$, it remains unchanged.

Proof. Equation (24) is still valid, but now $P_{D\to C}(k,k') = \varepsilon$ and $P_{C\to D}(k,k') = c$. Again, using the uncorrelated network assumption, and introducing the effective cost $\tilde c = c - \varepsilon$, we arrive at

  $\dot\rho_k = \varepsilon\,(1-\rho_k)\,Y - c\,\rho_k\,(1-Y)$,  (30)

and at the end—averaging (30) over the degree classes as before—at a solution for $Y$ of the same form of (29):

  $Y(t) = \dfrac{Y_0}{Y_0 + (1-Y_0)\,e^{\tilde c t}}$,  (31)

with $\tilde c = c - \varepsilon$. Hence $Y \to 0$ for $t \to \infty$ (which implies $\rho \to 0$) only for $\tilde c > 0$ ($\varepsilon < c$); instead for $\tilde c = 0$ ($\varepsilon = c$) then $Y = Y_0$ at all times, and for $\tilde c < 0$ ($\varepsilon > c$) then $Y \to 1$ for $t \to \infty$ (which implies $\rho \to 1$: cooperation is favored now).

5.1.2. Best Response

Always within the deterministic scenario with $\varepsilon = 0$, for the case of best response dynamics the differential equation for each of the $k$-block variables $\rho_k$ has the same form as (9), where now to evaluate $P(\pi_1 > \pi_0)$ we have to consider the particular values of neighbors' degrees. As before, we consider the uncorrelated network case and introduce the variable $Y$ from (27). We thus have

  $\dot\rho_k = (1-Y)^k - \rho_k$,  (32)

since for a player of degree $k$ the probability that none of his/her neighbors cooperates is $(1-Y)^k$. The differential equation for $Y$ is thus

  $\dot Y = \sum_k \dfrac{k\,P(k)}{\langle k\rangle}(1-Y)^k - Y$,  (33)

whose solution depends on the form of the degree distribution $P(k)$. Nevertheless, the critical value $Y^*$ such that the right-hand side of (33) equals zero is also in this case the attractor of the dynamics.
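In the spirit of the comparison discussed in Remark 18 and Figure 2, the sketch below solves the stationarity condition of (33) for two illustrative truncated degree distributions with comparable mean degree; these distributions are our own stand-ins, not the exact ones used for the figure.

```python
import math

def y_star(pk):
    """Root of the RHS of eq. (33): sum_k k P(k) (1 - Y)^k / <k> = Y."""
    k_avg = sum(k * p for k, p in pk)
    f = lambda Y: sum(k * p * (1 - Y) ** k for k, p in pk) / k_avg - Y
    lo, hi = 0.0, 1.0
    for _ in range(100):                  # bisection, f is decreasing in Y
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

kavg = 4.0
er = [(k, math.exp(-kavg) * kavg ** k / math.factorial(k)) for k in range(61)]
z = sum(k ** -3.0 for k in range(2, 1001))
sf = [(k, k ** -3.0 / z) for k in range(2, 1001)]

for name, pk in (("Erdös-Rényi", er), ("scale-free", sf)):
    Y = y_star(pk)
    rho = sum(p * (1 - Y) ** k for k, p in pk)   # rho*_k = (1 - Y*)^k, averaged
    print(name, round(Y, 4), round(rho, 4))
```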

Remark 18. In order to assess the effect of degree heterogeneity, we have plotted in Figure 2 the numerical solution for two random graphs: an Erdös-Rényi graph with a homogeneous (Poisson) degree distribution and a scale-free graph with a much more heterogeneous, power-law distribution. In both cases the networks are uncorrelated, so our framework applies. As we can see from the plot, the results are not very different, and they become more similar as the average degree increases. This is related, on one hand, to the particular form of Nash equilibria for strategic substitutes, where cooperators are generally the nodes with low degree, and, on the other hand, to the fact that the main difference between a homogeneous and a scale-free distribution lies in the tail. In this sense, the nodes with the highest degrees (that could make a difference) do not contribute to $Y$, and thus their effects on the system are negligible.

If we allow for the possibility of mistakes, the starting point of the analysis is—for each of the $k$-block variables $\rho_k$—the differential equation given by (12). Recalling that $P(\pi_1 > \pi_0) = (1-Y)^k$, we easily arrive at

  $\dot Y = \varepsilon - Y + (1-2\varepsilon)\sum_k \dfrac{k\,P(k)}{\langle k\rangle}(1-Y)^k$.  (34)

A sufficient condition for the existence of a dynamical attractor is $\varepsilon < 1/2$: also in heterogeneous networks, all reasonable values for the probability of errors allow for the existence of stable equilibria.

5.2. Coordination Game

Unfortunately, for the coordination game, working in the HMF framework is much more complicated, and we have been able to gain only qualitative, yet important, insights on the system's features. For the sake of clarity, we illustrate only the deterministic case in which no mistakes are made ($\varepsilon = 0$).

5.2.1. Proportional Imitation

The average payoffs of cooperating and defecting for players with degree $k$ are

  $\pi_C(k) = \alpha k Y - c, \qquad \pi_D(k) = 0$,  (35)

where $Y$ is the same as defined in (27).

We then use our starting point for the HMF formalism, (24), where now the probabilities of imitation are

  $P_{D\to C}(k,k') = \max\left\{\dfrac{\alpha k' Y - c}{\Lambda},\,0\right\}, \qquad P_{C\to D}(k,k') = \max\left\{\dfrac{c - \alpha k Y}{\Lambda},\,0\right\}$.  (36)

Once again within the assumption of an uncorrelated network, we find

  $\dot\rho_k = (1-\rho_k)\sum_{k'}\dfrac{k'P(k')}{\langle k\rangle}\rho_{k'}\max\left\{\dfrac{\alpha k'Y - c}{\Lambda},0\right\} - \rho_k\max\left\{\dfrac{c - \alpha kY}{\Lambda},0\right\}\sum_{k'}\dfrac{k'P(k')}{\langle k\rangle}(1-\rho_{k'})$.  (37)

In the second term we can carry out the sum over $k'$, which yields $(1-Y)$. We are now ready to write the differential equation for $Y$:

  $\dot Y = \sum_k \dfrac{k\,P(k)}{\langle k\rangle}\dot\rho_k$.  (38)

Carrying out the summation over $k$ in the first term (which results again in $(1-Y)$), and relabeling $k'$ as $k$, we are left with

  $\dot Y = \dfrac{1-Y}{\Lambda}\sum_k \dfrac{k\,P(k)}{\langle k\rangle}\rho_k\,(\alpha k Y - c)$,  (39)

where we have used $\max\{z,0\} - \max\{-z,0\} = z$. Finally, introducing the new variable

  $\langle k^2\rangle_\rho = \dfrac{\sum_k k^2 P(k)\rho_k}{\sum_k k\,P(k)\rho_k}$,  (40)

we arrive at

  $\dot Y = \dfrac{1}{\Lambda}\,Y\,(1-Y)\left(\alpha\langle k^2\rangle_\rho\,Y - c\right)$.  (41)

Remark 19. While it is difficult to solve (41) self-consistently, qualitative insights can be gained by defining $F \equiv \alpha\langle k^2\rangle_\rho Y - c$, and by rewriting (41) as $\dot Y \propto F$ (the term $Y(1-Y)/\Lambda \geq 0$ always and thus can be discarded). Now, starting from the beginning of the game at $t = 0$, the initial conditions univocally determine the value of $\langle k^2\rangle_\rho$ and thus of $F$. For $F < 0$, $\dot Y < 0$ and $Y$ decreases. Because of this, in the next time step we have on average that the densities $\rho_k$ have decreased, meaning that $F$ has decreased too: again $\dot Y < 0$ and $Y$ keeps decreasing. By iterating such reasoning, we conclude that in this case the stable equilibrium is $Y = 0$. Symmetrically, for $F > 0$ the attractor becomes $Y = 1$, and the transition between the two regimes lies at $F = 0$, that is, at $\alpha\langle k^2\rangle_\rho Y = c$. Note that $\langle k^2\rangle_\rho$ is basically the second moment of the degree distribution, where each degree is weighted with the density $\rho_k$. Recalling that $\langle k^2\rangle$ may diverge for highly heterogeneous networks (e.g., it diverges for scale-free networks $P(k) \sim k^{-\gamma}$ with $\gamma \leq 3$), and that for the coordination game cooperation is more favorable for players with many neighbors (hence $\rho_k$ is larger for high $k$), we immediately see that in these cases $\langle k^2\rangle_\rho$ diverges as well (as the divergence is given by nodes with high degree). Thus, while at the transition point the product $\langle k^2\rangle_\rho\,Y$ remains finite (and equal to $c/\alpha$), $Y \to 0$ to compensate for the divergence of $\langle k^2\rangle_\rho$ (Figure 3). We can conclude that, in networks with broad $P(k)$ and in the limit $N \to \infty$, cooperation emerges also when the incentive to cooperate ($\alpha$) vanishes. This is likely to be related to the fact that, as the system size goes to infinity, so does the number of neighbors of the largest degree nodes. This drives hubs to cooperate, thus triggering a nonzero level of global cooperation. However, if the network is homogeneous, neither $\langle k^2\rangle$ nor $\langle k^2\rangle_\rho$ diverges, so that the transition point remains finite and the fully defective state appears also in the limit $N \to \infty$.

5.2.2. Best Response

For BR dynamics, we would have to begin again from the fact that the differential equation for each of the $k$-block variables $\rho_k$ has the same form of (19). We would then need to evaluate $P(\pi_1 > \pi_0)$ for a player of degree $k$. As in the homogeneous case, such expression is difficult to treat analytically. Alternatively, we can perform the approximation of setting $P(\pi_1 > \pi_0) \simeq \theta(Y - c/(\alpha k))$; that is, we approximate it with a Heaviside step function with threshold in $c/(\alpha k)$. This leads to

  $\dot\rho_k = \theta\left(Y - \dfrac{c}{\alpha k}\right) - \rho_k$  (42)

and to the following self-consistent equation for the equilibrium $Y^*$:

  $Y^* = \sum_{k \geq c/(\alpha Y^*)} \dfrac{k\,P(k)}{\langle k\rangle}$,  (43)

whose solution strongly depends on the form of the degree distribution $P(k)$. Indeed, if the network is highly heterogeneous (e.g., a scale-free network with $2 < \gamma < 3$), it can be shown that $Y^* > 0$ is a stable equilibrium whose dependence on $\alpha$ is of the form $Y^* \sim \alpha^{(\gamma-2)/(3-\gamma)}$; that is, there exists a nonvanishing cooperation level no matter how small the value of $\alpha$. However, if the network is more homogeneous (e.g., $\gamma > 3$), this nontrivial solution disappears for small $\alpha$ and the system always falls into the fully defective Nash equilibrium. Another important characterization of such system comes from considering (42) and (43): we have $\rho_k = 1$ when $k \geq c/(\alpha Y^*)$ and $\rho_k = 0$ for $k < c/(\alpha Y^*)$. In this sense, we find a qualitative agreement between the features of our equilibria and those found by Galeotti et al. [4], in which players' actions show a monotonic, nondecreasing dependence on their degrees.
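A sketch of the self-consistent solution of (43) on a truncated power-law network; the values of $\gamma$, $c$, and the cutoff are our own illustrative choices. For $\gamma = 2.5$ the predicted exponent $(\gamma-2)/(3-\gamma)$ equals 1, so halving $\alpha$ should roughly halve $Y^*$.

```python
import bisect

gamma, c, kmax = 2.5, 0.5, 10**5
ks = list(range(2, kmax + 1))
z = sum(k ** -gamma for k in ks)
kp = [k * (k ** -gamma) / z for k in ks]          # k P(k)
k_avg = sum(kp)
suffix = [0.0] * (len(ks) + 1)                    # suffix[i] = sum of kp[i:]
for i in range(len(ks) - 1, -1, -1):
    suffix[i] = suffix[i + 1] + kp[i]

def y_star(alpha, Y=0.5, steps=500):
    """Fixed-point iteration of eq. (43): Y = sum_{k >= c/(alpha Y)} k P(k) / <k>."""
    for _ in range(steps):
        if Y <= 0.0:
            return 0.0
        i = bisect.bisect_left(ks, c / (alpha * Y))   # lowest cooperating degree
        Y = suffix[i] / k_avg
    return Y

for alpha in (0.4, 0.2, 0.1, 0.05, 0.025):
    print(alpha, round(y_star(alpha), 5))
# Y* stays positive for arbitrarily small alpha, scaling roughly linearly here
```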

6. Comparison with Numerical Simulations

Before discussing and summarizing our results, one question that arises naturally is whether, given that mean field approaches are approximations in so far as they assume interactions with a typical individual (or classes of typical individuals), our results are accurate descriptions of the real dynamics of the system. Therefore, in this section we present a brief comparison of the analytical results obtained above with those arising from a complete program of numerical simulations of the system recently carried out by us, whose details can be found in [42] (along with many additional findings on issues that cannot be analytically studied). In this comparison, we focus on the scenario in which mistakes are not allowed ($\varepsilon = 0$), as it, being deterministic, allows for a meaningful comparison of theory and simulations without extra effects arising perhaps from poor sampling.

Concerning the best-shot game, numerical simulations fully confirm our analytical results. With PI, the dynamical evolution is in perfect agreement with that predicted by both MF and HMF theory—which indeed coincide when (as in our case) the initial density does not depend on $k$. Simulations and analytics agree well also when the dynamics is BR: the final state of the system is, for any initial condition, a Nash equilibrium with cooperator ratio $\rho^*$ (which decreases for increasing network connectivity). Yet, the solution for $\rho^*$ from (11) slightly underestimates the one found in simulations—probably because of the approximations made in computing the probabilities of (9). Notwithstanding this minor quantitative disagreement, we can safely confirm the validity of our analytical results.

On the other hand, the agreement between theory and simulations is also good for coordination games with PI dynamics. On homogeneous networks, numerical simulations show an abrupt transition from full defection to full cooperation as the incentive $\alpha$ crosses a critical value $\alpha_c$. The MF theory is thus able to qualitatively predict the behavior of the system; furthermore, while $\alpha_c$ is somewhat smaller than the value predicted by the theory, simulations also show that the difference vanishes in the infinite network size limit, which implies that for reasonably large systems our analytical predictions are accurately fulfilled. Finally, simulations cannot find Nash equilibria (with intermediate cooperation levels) other than full defection and full cooperation, again as predicted by the MF calculations. On heterogeneous networks instead, simulations show a smooth crossover between full defection and full cooperation, and the point at which the transition starts ($\alpha_c$) tends to zero as the system size grows. Therefore, the most important prediction of HMF theory, namely, that the fully defective state disappears in the large size limit (a phenomenon not captured by the simple MF approach), is fully confirmed by simulations. Finally, concerning BR dynamics for coordination games, we have a similar scenario: in homogeneous networks, simulations find a sharp transition at $\alpha_c$ from full defection to full cooperation, featuring many nontrivial Nash equilibria (all characterized by intermediate cooperation levels) in the transient region. This behavior, together with the convergence of $\alpha_c$ to its theoretical value in the infinite network size limit, agrees well with the approximate theoretical results. Heterogeneous networks instead feature a continuous transition, and it appears from numerical simulations that—in the infinite size limit—a Nash equilibrium with nonvanishing cooperation level exists no matter how small the value of $\alpha$, exactly as predicted by the HMF calculations.

We can conclude that the set of analytical results we are presenting in this paper provides, in spite of its approximate character, a very good description of the evolutionary equilibria of our two prototypical games, particularly so when considering the more accurate HMF approach.

7. Conclusion

In this paper, we have presented two evolutionary approaches to two paradigmatic games on networks, namely, the best-shot game and the coordination game as representatives, respectively, of the wider classes of strategic substitutes and complements. As we have seen, using the MF approximation we have been able to prove a number of rigorous results and otherwise to get good insights on the outcome of the evolution. Importantly, numerical simulations support all our conclusions and confirm the validity of our analytical approach to the problem.

Proceeding in order of increasing cognitive demand, we first summarize what we have learned about PI dynamics, equivalent to replicator dynamics in a well-mixed population. For the case of the best-shot game, this dynamics has proven unable to refine the set of Nash equilibria, as it always leads to outcomes that are not Nash. On the other hand, the asymptotic states obtained for the coordination game are Nash equilibria and constitute indeed a drastic refinement, selecting a restricted set of values for the average cooperation. We believe that the difference between these results arises from the fact that PI is an imitative dynamics, and in a context such as the best-shot game, in which equilibria are not symmetric, this leads to players imitating others who are playing “correctly” in their own context but whose action is not appropriate for the imitator. In the coordination game, where the equilibria should be symmetric, this is not a problem and we find equilibria characterized by a homogeneous action. Note that imitation is quite difficult to justify for rational players (as humans are supposed to act), because it assumes bounded rationality or lack of information, leaving players no choice but to copy others' strategies [22]. Indeed, imitation is much more apt to model contexts such as biological evolution, where payoffs are interpreted as reproductive success within natural selection [43]. Under this interpretation, in the best-shot game, for instance, it is clear that a cooperator surrounded by defectors would die out and be replaced by the offspring of one of its neighboring defectors.

When going to a more demanding evolutionary rule, BR does lead by construction to Nash equilibria—when players are fully rational and do not make mistakes. We are then able to obtain predictions on the average level of cooperation for the best-shot game but still many possible equilibria are compatible with that value. Predictions are less specific for the coordination game, due to the fact that—in an intermediate range of initial conditions—different equilibria with finite densities of cooperators are found. The general picture remains the same in terms of finding full defection or full cooperation for low or high initial cooperation, but the intermediate region is much more difficult to study.

Besides, we have probed into the issue of degree heterogeneity by considering more complex network topologies. Generally speaking, the results do not change much, at least qualitatively, for any of the dynamics applied to the best-shot game. The coordination game is more difficult to deal with in this context, but we were able to show that when the number of connections is very heterogeneous, cooperation may be obtained even if the incentive for cooperation vanishes. This vanishing of the transition point is reminiscent of what occurs for other processes on scale-free networks, such as percolation or epidemic spreading [41]. Interestingly, our results are in contrast with [15], in the sense that—for our dynamical approach—coordination games are more affected by the network (and are henceforth more difficult to tackle) than anticoordination ones.

Finally, a comment is in order about the generality of our results. We believe that the insight on how PI dynamics drives the two types of games studied here should be applicable in general; that is, PI should lead to dramatic reductions of the set of equilibria for strategic complements, but is likely to be off and produce spurious results for strategic substitutes, due to imitation of inappropriate choices of action. On the other hand, BR must produce Nash equilibria, as already stated, leading to significant refinements for strategic substitutes but to only moderate ones for strategic complements. This conclusion hints that different types of dynamics should be considered when refining the equilibria of the two types of games, and raises the question of whether a consistently better refinement could be found with only one dynamics. In addition, our findings also hint at the possibly limited relevance of the particular network considered for the ability of the dynamics to cut down the number of equilibria. In this respect, it is important to clarify that while our results should apply to a wide class of networks, going from homogeneous to extremely heterogeneous, networks with correlations, clustering, or other nontrivial structural properties might behave differently. These are relevant questions for network games that we hope will attract more research in the near future.

Abbreviations

PI:Proportional imitation
BR:Best response
MF:Homogeneous mean field
HMF:Heterogeneous mean field.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The author is thankful to Antonio Cabrales, Claudio Castellano, Sanjeev Goyal, Angel Sánchez, and Fernando Vega-Redondo for their feedback on early versions of the manuscript and advice on the presentation of the results. This work was supported by the Swiss Natural Science Foundation (Grant no. PBFRP2_145872) and the EU project CoeGSS (Grant no. 676547).