Under assumptions [A-1], we know that
\(\mathcal {N}(\omega ,v)\) is nonempty and
\(\rho _{w_{ca}^{*}}\)-compact and therefore we know that
\(\mathcal {P} (\omega ,v)\) is nonempty and
\(\rho _{Y}\)-compact. Moreover, applying optimal measurable selection results (e.g., Himmelberg et al. [
19]) and Berge’s maximum theorem (e.g., see Theorem 17.31 in Aliprantis and Border [
1]), we can show that the Nash correspondences,
\(\mathcal {N}(\cdot ,\cdot ) \) and
\(\mathcal {P}(\cdot ,\cdot )\), are upper Caratheodory (also, see Proposition 4.2 in Page [
28]). In particular, the Nash correspondence,
\( \mathcal {N}(\cdot ,\cdot )\), is jointly measurable in
\((\omega ,v)\) and
\( \mathcal {N}(\omega ,\cdot )\) is upper semicontinuous in
v for each
\( \omega \), and the Nash payoff correspondence,
\(\mathcal {P}(\cdot ,\cdot )\), is jointly measurable in
\((\omega ,v)\) and
\(\mathcal {P}(\omega ,\cdot )\) is upper semicontinuous in
v for each
\(\omega \).
For a formal proof we refer the reader to Fu and Page [
15]. Informally, the proof proceeds along the following lines: Let
\(\mathcal {S}^{\infty }( \mathcal {P}(\cdot ,v))\) be the set of all
\(\mu \)-equivalence classes of measurable functions,
\(u\in \mathcal {L}_{Y}^{\infty }\), such that
\(u(\omega )\in \mathcal {P}(\omega ,v)\) a.e.
\([\mu ]\). It follows from Blackwell’s Theorem [
9], extended to
DSGs, that our discounted stochastic game of club network formation (
10) will have stationary Markov perfect equilibria in network formation strategies if and only if there exists a value function profile,
\(v^{*}\in \mathcal {L} _{Y}^{\infty }\), such that
$$\begin{aligned} v^{*}(\omega )\in \mathcal {P}(\omega ,v^{*})\text { a.e. }[\mu ], \end{aligned}$$
or equivalently if and only if the Nash payoff selection correspondence,
$$\begin{aligned} v\longrightarrow \mathcal {S}^{\infty }(\mathcal {P}(\cdot ,v)):=\mathcal {S} ^{\infty }(\mathcal {P}_{v}), \end{aligned}$$
has fixed points, i.e., has at least one value function profile,
\(v^{*}\in \mathcal {L}_{Y}^{\infty }\), such that
\(v^{*}\in \mathcal {S}^{\infty }(\mathcal {P}_{v^{*}})\). This is a very difficult fixed point problem because the measurable selection valued correspondence,
\(\mathcal {S}^{\infty }(\mathcal {P}_{(\cdot )})\), is neither convex valued nor closed valued—nor of course is it upper semicontinuous. Until Fu and Page [
15] no results were available to establish the existence of a fixed point for Nash payoff selection correspondences. Essentially what Fu and Page [
15] show is that while the Nash payoff selection correspondence,
\(v\longrightarrow \mathcal {S} ^{\infty }(\mathcal {P}_{v})\), is badly behaved, under assumptions [A-1], its underlying upper Caratheodory Nash payoff correspondence,
\(\mathcal {P}(\cdot ,\cdot )\), always contains an upper Caratheodory sub-correspondence,
\( p(\cdot ,\cdot )\), that has
\(\varepsilon \)-approximate Caratheodory selections for all
\(\varepsilon >0\), implying that the Nash payoff selection sub-correspondence,
\(v\longrightarrow \mathcal {S}^{\infty }(p_{v})\), has fixed points. Thus, Fu and Page [
15] show that, in general, there exists a Nash payoff sub-correspondence,
\(\mathcal {S}^{\infty }(p_{(\cdot )})\), having fixed points, i.e., that there exists
\(v^{*}\in \mathcal {L} _{Y}^{\infty }\), such that
\(v^{*}\in \mathcal {S}^{\infty }(p_{v^{*}})\subset \mathcal {S}^{\infty }(\mathcal {P}_{v^{*}})\). More fundamentally, Fu and Page [
15] are able to show that the upper Caratheodory Nash payoff sub-correspondence,
\(p(\cdot ,\cdot )\), has
\( \varepsilon \)-approximate Caratheodory selections for all
\(\varepsilon >0\), because under assumptions [A-1] the underlying upper Caratheodory Nash correspondence,
\(\mathcal {N}(\cdot ,\cdot )\), always contains an upper Caratheodory Nash sub-correspondence,
\(\eta (\cdot ,\cdot )\), that takes closed connected values. Thus, because
$$\begin{aligned} \begin{array}{ll} p(\omega ,v)&{} =(p_{1}(\omega ,v),\ldots ,p_{n}(\omega ,v)) \\ \\ &{} :=(U_{1}(\omega ,v_{1},\eta (\omega ,v)),\ldots ,U_{n}(\omega ,v_{n},\eta (\omega ,v))), \end{array} \end{aligned}$$
the continuity of
\(U_{d}(\omega ,v_{d},\cdot )\) in behavioral actions
\( \sigma \) for each player
\(d=1,2,\ldots ,n\) together with the closed connectedness of
\(\eta (\omega ,v)\) implies that
$$\begin{aligned} (\omega ,v)\longrightarrow p_{d}(\omega ,v)=U_{d}(\omega ,v_{d},\eta (\omega ,v)) \end{aligned}$$
is upper Caratheodory and
interval valued for each player
\( d=1,2,\ldots ,n\). It then follows from Corollary 4.3 in Kucia and Nowak [
22] that each player’s Nash payoff sub-correspondence,
\(p_{d}(\cdot ,\cdot )\), is Caratheodory approximable and this together with the unusual properties of Komlos convergence [
21] of value functions allows us to show that there exists a value function profile,
\(v^{*}\in \mathcal {L} _{Y}^{\infty }\), such that
\(v^{*}(\omega )\in p(\omega ,v^{*})\) a.e.
\([\mu ]\) or equivalently such that
$$\begin{aligned} v^{*}\in \mathcal {S}^{\infty }(p_{v^{*}})\subset \mathcal {S}^{\infty }(\mathcal {P}_{v^{*}}). \end{aligned}$$
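To spell out the interval-valuedness invoked above (a standard observation added here for the reader; it is not itself part of the formal proof in Fu and Page [15]): because \(\eta (\omega ,v)\) is compact and connected and \(U_{d}(\omega ,v_{d},\cdot )\) is continuous and real valued, the set \(p_{d}(\omega ,v)\) is the continuous image of a compact connected set in the reals, and hence a closed bounded interval,
$$\begin{aligned} p_{d}(\omega ,v)=U_{d}(\omega ,v_{d},\eta (\omega ,v))=\left[ \min _{\sigma \in \eta (\omega ,v)}U_{d}(\omega ,v_{d},\sigma ),\ \max _{\sigma \in \eta (\omega ,v)}U_{d}(\omega ,v_{d},\sigma )\right] . \end{aligned}$$
It is precisely this interval structure that makes each \(p_{d}(\cdot ,\cdot )\) amenable to Caratheodory approximation.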
In order to complete our informal argument for existence we need only note that by implicit measurable selection (e.g., see Theorem 7.1 in Himmelberg [
18]), there exists a profile,
\(\sigma ^{*}(\cdot )=(\sigma _{1}^{*}(\cdot ),\ldots ,\sigma _{n}^{*}(\cdot ))\), of a.e. measurable selections of
\(\omega \longrightarrow \eta (\omega ,v^{*})\), such that for each player
\(d=1,2,\ldots ,n\),
$$\begin{aligned} v_{d}^{*}(\omega )=U_{d}(\omega ,v_{d}^{*},\sigma ^{*}(\omega ))\in U_{d}(\omega ,v_{d}^{*},\eta (\omega ,v^{*})):=p_{d}(\omega ,v^{*})\text { a.e. }[\mu ]. \end{aligned}$$
(19)
Thus, for each player d, the state-contingent prices given by the value function, \(v_{d}^{*}(\cdot )\in \mathcal {L}_{Y_{d}}^{\infty }\), incentivize player d's continued choice of the behavioral strategy, \(\sigma _{d}^{*}(\cdot )\), and we have for the value function-strategy profile pair, \((v^{*},\sigma ^{*}(\cdot ))\in \mathcal {S}^{\infty }(p_{v^{*}})\times \mathcal {S}^{\infty }(\eta _{v^{*}})\), that
$$\begin{aligned} v^{*}(\omega )=U(\omega ,v^{*},\sigma ^{*}(\omega ))\in p(\omega ,v^{*})\text { and }\sigma ^{*}(\omega )\in \eta (\omega ,v^{*}) \text { a.e. }[\mu ]\text {,} \end{aligned}$$
(20)
implying (since \(p(\omega ,v^{*})\subset \mathcal {P}(\omega ,v^{*})\) and \(\eta (\omega ,v^{*})\subset \mathcal {N}(\omega ,v^{*})\)) that
$$\begin{aligned} v^{*}(\omega )\in \mathcal {P}(\omega ,v^{*})\text { and }\sigma ^{*}(\omega )\in \mathcal {N}(\omega ,v^{*})\text { a.e. }[\mu ]\text {. } \end{aligned}$$
(21)
Thus, for the value function-behavioral strategy profile pair,
\((v^{*},\sigma ^{*}(\cdot ))\), we have for each player
\(d=1,2,\ldots ,n\) and for
\(\omega \) a.e.
\([\mu ]\), that
\((v^{*},\sigma ^{*}(\cdot ))\) satisfies the Bellman equation (1 below) and the Nash condition (2 below),
$$\begin{aligned} \left. \begin{array}{l} \text {(1) }v_{d}^{*}(\omega )=(1-\beta _{d})r_{d}(\omega ,\sigma _{d}^{*}(\omega ),\sigma _{-d}^{*}(\omega ))+\beta _{d}\int _{\Omega }v_{d}^{*}(\omega ^{\prime })q(\omega ^{\prime }|\omega ,\sigma _{d}^{*}(\omega ),\sigma _{-d}^{*}(\omega )), \\ \\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \text {and} \\ \\ \text {(2) }(1-\beta _{d})r_{d}(\omega ,\sigma _{d}^{*}(\omega ),\sigma _{-d}^{*}(\omega ))+\beta _{d}\int _{\Omega }v_{d}^{*}(\omega ^{\prime })q(\omega ^{\prime }|\omega ,\sigma _{d}^{*}(\omega ),\sigma _{-d}^{*}(\omega )) \\ \\ =\max _{\sigma _{d}\in \Delta (\Phi _{d}(\omega ))}\left[ (1-\beta _{d})r_{d}(\omega ,\sigma _{d},\sigma _{-d}^{*}(\omega ))+\beta _{d}\int _{\Omega }v_{d}^{*}(\omega ^{\prime })q(\omega ^{\prime }|\omega ,\sigma _{d},\sigma _{-d}^{*}(\omega ))\right] . \end{array} \right\} \end{aligned}$$
(22)
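As a purely illustrative aside (not part of the formal argument): in a finite toy DSG, the Bellman equation (1) and the one-shot-deviation form of the Nash condition (2) can be verified directly by computation. The Python sketch below is entirely hypothetical—the payoff table `r`, transition kernel `q`, discount factor, and the helper names `bellman_value` and `is_markov_perfect` are all invented for the example. The numbers are chosen so that action 0 is strictly dominant for each player and transitions do not depend on actions, making the candidate stationary Markov profile trivially an equilibrium.

```python
# Toy 2-player, 2-state discounted stochastic game (hypothetical numbers).
STATES = [0, 1]
ACTIONS = [0, 1]
BETA = 0.9  # common discount factor beta_d

def r(d, w, a1, a2):
    # Stage payoff r_d(w, a1, a2); action 0 strictly dominates for each
    # player, regardless of the state and the opponent's action.
    if d == 1:
        return 1.0 + 0.1 * w - 0.5 * a1 + 0.2 * a2
    return 1.0 + 0.1 * w - 0.5 * a2 + 0.2 * a1

def q(w_next, w, a1, a2):
    # Transition kernel q(w' | w, a); here it does not depend on actions,
    # so a deviation cannot improve a player's continuation value.
    p1 = 0.3 + 0.1 * w  # probability of moving to state 1
    return p1 if w_next == 1 else 1.0 - p1

def bellman_value(sigma, d, iters=2000):
    # Iterate equation (1): v_d(w) = (1-b) r_d + b * sum_w' v_d(w') q(w'|w, sigma(w)).
    v = {w: 0.0 for w in STATES}
    for _ in range(iters):
        v = {w: (1 - BETA) * r(d, w, *sigma[w])
                + BETA * sum(v[w2] * q(w2, w, *sigma[w]) for w2 in STATES)
             for w in STATES}
    return v

def is_markov_perfect(sigma, tol=1e-9):
    # Check the Nash condition (2) via one-shot deviations in every state.
    for d in (1, 2):
        v = bellman_value(sigma, d)
        for w in STATES:
            a1, a2 = sigma[w]
            def payoff(b1, b2):
                return ((1 - BETA) * r(d, w, b1, b2)
                        + BETA * sum(v[w2] * q(w2, w, b1, b2) for w2 in STATES))
            base = payoff(a1, a2)
            for a in ACTIONS:
                dev = payoff(a, a2) if d == 1 else payoff(a1, a)
                if dev > base + tol:
                    return False
    return True

sigma_star = {0: (0, 0), 1: (0, 0)}  # candidate stationary Markov profile
print(is_markov_perfect(sigma_star))              # True: no profitable deviation
print(is_markov_perfect({0: (1, 0), 1: (1, 0)}))  # False: player 1 gains by switching
```

The one-shot-deviation check is exactly the maximization on the right-hand side of condition (2), restricted here to pure actions since the toy game has a pure dominant-action equilibrium.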
Thus,
\(\sigma ^{*}(\cdot )\in \mathcal {S}^{\infty }(\mathcal {N}_{v^{*}})\) is a stationary Markov perfect equilibrium of a
DSG satisfying assumptions [A-1], incentivized by state-contingent prices,
\(v^{*}\in \mathcal {S}^{\infty }(\mathcal {P}_{v^{*}})\).
Thus here we have argued informally—and shown formally in Fu and Page [
15]—that, under the usual assumptions specifying a discounted stochastic game (in this case a club network formation
DSG), while the DSG's Nash payoff selection correspondence is badly behaved, it nonetheless naturally possesses (without additional assumptions) an underlying Nash payoff correspondence,
\(\mathcal {P}(\cdot ,\cdot )\), containing sub-correspondences,
\(p(\cdot ,\cdot )\), which are Caratheodory approximable implying that its induced selection correspondence,
\(v\longrightarrow \mathcal {S}^{\infty }(p_{v})\), has fixed points. He and Sun [
16], by making an
additional assumption (that the
DSG is
\(\mathcal {G}\)-nonatomic or has
a coarser transition kernel) guarantee that the DSG's Nash payoff selection correspondence has a convex valued sub-correspondence—and therefore an approximable sub-correspondence.
Moreover, He and Sun [
16] show that Duggan [
12] accomplishes the same thing by assuming that the
DSG has a noisy state. In the negative direction, Levy [
23] and Levy and McLennan [
24] construct counterexamples showing that not all uncountable-finite
DSGs have stationary Markov perfect equilibria. They accomplish this by constructing counterexamples in which the Nash correspondences are
not approximable—as follows from the absence of fixed points in their counterexamples. Because the club network formation
DSG we analyze here is
approximable, we avoid the Levy–McLennan counterexamples.