
Mean Field Games with a Dominating Player


Abstract

In this article, we consider mean field games between a dominating player and a group of representative agents, each of whom acts similarly and interacts with the others through a mean field term that is substantially influenced by the dominating player. We first develop the general theory and derive a necessary condition for the optimal controls and the equilibrium condition by adopting the adjoint equation approach. We then present a special case in the linear-quadratic framework, in which a necessary and sufficient condition can be established by the stochastic maximum principle; finally, we give a sufficient condition that guarantees the unique existence of the equilibrium control. The proof that the finite player game converges to its mean field counterpart is provided in the Appendix.




Acknowledgments

The first author, Alain Bensoussan, acknowledges the financial support of the Hong Kong RGC GRF 500113 and the National Science Foundation under grant DMS 1303775. The second author, Michael Chau, acknowledges the financial support of the Chinese University of Hong Kong; the present work constitutes a part of his postgraduate dissertation. The third author, Phillip Yam, acknowledges the financial support of the Hong Kong RGC GRF 404012 with the project title "Advanced Topics in Multivariate Risk Management in Finance and Insurance," and the Chinese University of Hong Kong Direct Grants 2010/2011 (Project ID 2060422) and 2011/2012 (Project ID 2060444). Phillip Yam also expresses his sincere gratitude for the hospitality of both the Hausdorff Center for Mathematics and the Hausdorff Research Institute for Mathematics of the University of Bonn during the preparation of the present work.


Appendix

1.1 \(\epsilon \)-Nash Equilibrium

We now establish that the solutions of Problems 2.1 and 2.2 constitute an \(\epsilon \)-Nash equilibrium. Suppose that there are N representative agents behaving in a similar manner, so that the states of the dominating player and the i-th agent satisfy the following SDEs respectively:

$$\begin{aligned}&\left\{ \begin{array}{rcl} dy_0&{}=&{}g_0\Big (y_0(t),\dfrac{1}{N}\displaystyle \sum _{j=1}^N \delta _{y_1^j(t)},u_0(t)\Big )dt+\sigma _0\Big (y_0(t)\Big )dW_0(t),\\ y_0(0)&{}=&{}\xi _0.\\ \end{array} \right. \nonumber \\&\left\{ \begin{array}{rcl} dy_1^i&{}=&{}g_1\Big (y_1^i(t),y_0(t),\dfrac{1}{N-1}\displaystyle \sum _{j=1,j\ne i}^N \delta _{y_1^j(t)},u_1^i(t)\Big )dt+\sigma _1\Big (y_1^i(t)\Big )dW_1^i(t),\\ y_1^i(0)&{}=&{}\xi _1^i.\\ \end{array}\right. \nonumber \\ \end{aligned}$$
(22)

where \(\delta _y\) is the Dirac measure with a unit mass at y. We call Equation (22) the empirical system. The corresponding objective functional for the i-th agent is:

$$\begin{aligned} \mathcal {J}^{N,i}(\mathbf{u})= & {} \mathbb {E}\bigg [\displaystyle \int _0^Tf_1\Big (y_1^i(t),y_0(t),\dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{y_1^j(t)},u^i(t)\Big )dt\\&+h_1\Big (y_1^i(T),y_0(T),\dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{y_1^j(T)}\Big )\bigg ], \end{aligned}$$

where \(\mathbf{u}=(u_1^1,u_1^2,\ldots ,u_1^N)\). We expect that, as \(N \rightarrow \infty \), the limit is described by the mean field approximation (1), that is:

$$\begin{aligned}&\left\{ \begin{array}{rcl} dx_0&{}=&{}g_0\Big (x_0(t),m_{x_0}(t),u_0(t)\Big )dt+\sigma _0\Big (x_0(t)\Big )dW_0(t),\\ x_0(0)&{}=&{}\xi _0.\\ \end{array}\right. \nonumber \\&\left\{ \begin{array}{rcl} dx_1^i&{}=&{}g_1\Big (x_1^i(t),x_0(t),m_{x_0}(t),u_1^i(t)\Big )dt+\sigma _1\Big (x_1^i(t)\Big )dW_1^i(t),\\ x_1^i(0)&{}=&{}\xi _1^i.\\ \end{array}\right. \end{aligned}$$
(23)

We call Eq. (23) the mean field system. The corresponding limiting objective functional for the i-th player is

$$\begin{aligned} \mathcal {J}^{i}(u_1^i)=\mathbb {E}\bigg [\displaystyle \int _0^Tf_1\Big (x_1^i(t),x_0(t),m_{x_0}(t),u_1^i(t)\Big )dt+h_1\Big (x_1^i(T),x_0(T),m_{x_0}(T)\Big )\bigg ].\nonumber \\ \end{aligned}$$
(24)

Using Corollary 2.5, the necessary condition for optimality is described by the coupled SHJB-FP equation (8). To proceed, we assume that the optimal control \(\hat{\mathbf{u}}=(\hat{u}_1^1,\hat{u}_1^2,\ldots ,\hat{u}_1^N)\) exists. To avoid ambiguity, denote by \(\hat{x}_1^i\) and \(\hat{y}_1^i\) the state dynamics of \(x_1^i\) and \(y_1^i\) corresponding to the optimal control \(\hat{u}_1^i\). The mean field term \(m_{x_0}(\cdot ,t)\) is the probability measure of the optimal trajectory \(\hat{x}_1^i\) at time t, conditional on \(\mathcal {F}^0_t\). Under this construction, conditional on \(\mathcal {F}^0_t\), the \(\{\hat{x}_1^i\}_i\) are independent and identically distributed processes, while the \(\{\hat{y}_1^i\}_i\) depend on each other through the empirical distribution. For simplicity, for two density functions m and \(m'\), we write \(W_2(m d \lambda ,m' d \lambda )=W_2(m,m')\).
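For intuition, the empirical system (22) can be simulated directly. The sketch below is a minimal Euler–Maruyama discretization with hypothetical linear coefficients `g0`, `g1` and constant diffusions standing in for the general coefficients of the paper (an assumption, not the paper's model); the empirical measure enters the drifts only through its mean, and all controls are set to zero.

```python
import numpy as np

# Hypothetical linear coefficients (our assumption, not the paper's):
# the measure argument enters only through its mean `mbar`.
def g0(x0, mbar, u0):
    return -x0 + 0.5 * mbar + u0

def g1(x, x0, mbar, u):
    return -x + 0.5 * x0 + 0.5 * mbar + u

def simulate(N, T=1.0, steps=200, seed=0):
    """Euler-Maruyama discretization of the empirical system (22)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    y0, y = 1.0, np.ones(N)          # initial states xi_0 = xi_1^i = 1
    for _ in range(steps):
        mbar = y.mean()              # empirical mean field (1/N) sum delta_{y_1^j}
        y0 = y0 + g0(y0, mbar, 0.0) * dt \
            + 0.2 * rng.normal(0.0, np.sqrt(dt))
        y = y + g1(y, y0, mbar, 0.0) * dt \
            + 0.2 * rng.normal(0.0, np.sqrt(dt), size=N)
    return y0, y
```

As N grows, the empirical mean `mbar` fluctuates less around its conditional expectation, which is the heuristic behind replacing the empirical measure by \(m_{x_0}(t)\) in the mean field system.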

Lemma 5.1

Suppose that the assumptions (A.1–A.3) hold. If \(m_{x_0}(\cdot ,t)\) is chosen to be the density function of \(\hat{x}_1^i\) conditional on \(\mathcal {F}_t^0\), then

$$\begin{aligned} \mathbb {E}\Big [\displaystyle \sup _{u \le T}|y_0(u)-x_0(u)|^2\Big ]+\mathbb {E}\Big [\displaystyle \sup _{u \le T}|\hat{y}_1^i(u)-\hat{x}_1^i(u)|^2\Big ]=o(1). \end{aligned}$$

Proof

Observe that for any \(t\in [0,T]\)

$$\begin{aligned} \mathbb {E}\displaystyle \sup _{u\le t}|y_0(u)-x_0(u)|^2\le & {} C \bigg \{t\mathbb {E}\displaystyle \int _0^t \bigg |g_0\Bigg (y_0(s),\dfrac{1}{N}\displaystyle \sum _{j=1}^N \delta _{\hat{y}_1^j(s)},u_0(s)\Bigg )\\&-g_0\Big (x_0(s),m_{x_0}(s),u_0(s)\Big )\bigg |^2ds\\&+\mathbb {E}\displaystyle \int _0^t\bigg |\sigma _0\Big (y_0(s)\Big )-\sigma _0\Big (x_0(s)\Big )\bigg |^2ds\bigg \}, \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}\displaystyle \sup _{u\le t}|\hat{y}_1^i(u)-\hat{x}_1^i(u)|^2\le & {} C \bigg \{t\mathbb {E}\displaystyle \int _0^t \bigg |g_1\Big (\hat{y}_1^i(s),y_0(s),\dfrac{1}{N-1}\displaystyle \sum _{j=1,j\ne i}^N \delta _{\hat{y}_1^j(s)},\hat{u}_1^i(s)\Big )\\&-g_1\Big (\hat{x}_1^i(s),x_0(s),m_{x_0}(s),\hat{u}_1^i(s)\Big )\bigg |^2ds\\&+\mathbb {E}\displaystyle \int _0^t\bigg |\sigma _1\Big (\hat{y}_1^i(s)\Big )-\sigma _1\Big (\hat{x}_1^i(s)\Big )\bigg |^2ds\bigg \}, \end{aligned}$$

By the Lipschitz assumptions, we have

$$\begin{aligned}&\mathbb {E}\displaystyle \sup _{u\le t}|y_0(u)-x_0(u)|^2+\mathbb {E}\displaystyle \sup _{u\le t}|\hat{y}_1^i(u)-\hat{x}_1^i(u)|^2\nonumber \\&\quad \le C\bigg \{t\mathbb {E}\displaystyle \int _0^t \Big |y_0(s)-x_0(s)\Big |^2+W_2^2\left( \dfrac{1}{N}\displaystyle \sum _{j=1}^N \delta _{\hat{y}_1^j(s)},m_{x_0}(s)\right) ds\nonumber \\&\qquad +\mathbb {E}\displaystyle \int _0^t\Big |y_0(s)-x_0(s)\Big |^2 ds\bigg \}\nonumber \\&\qquad +C\bigg \{t\mathbb {E}\displaystyle \int _0^t \Big |\hat{y}_1^i(s)-\hat{x}_1^i(s)\Big |^2+\Big |y_0(s)-x_0(s)\Big |^2\nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1,j\ne i}^N \delta _{\hat{y}_1^j(s)},m_{x_0}(t)\right) ds+\mathbb {E}\displaystyle \int _0^t\Big |y_1^i(s)-x_1^i(s)\Big |^2 ds\bigg \}\nonumber \\&\quad \le C\bigg \{\mathbb {E}\displaystyle \int _0^t \displaystyle \sup _{u\le s}\Big |y_0(u)-x_0(u)\Big |^2+\displaystyle \sup _{u\le s}\Big |\hat{y}_1^i(u)-\hat{x}_1^i(u)\Big |^2\nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N}\displaystyle \sum _{j=1}^N \delta _{\hat{y}_1^j(s)},\dfrac{1}{N}\displaystyle \sum _{j=1}^N \delta _{\hat{x}_1^j(s)}\right) +W_2^2\left( \dfrac{1}{N}\displaystyle \sum _{j=1}^N \delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1,j\ne i}^N \delta _{\hat{y}_1^j(s)},\dfrac{1}{N-1}\displaystyle \sum _{j=1,j\ne i}^N \delta _{\hat{x}_1^j(s)}\right) \nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1,j\ne i}^N \delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) ds\bigg \}, \end{aligned}$$
(25)

where \(C>0\) is a constant, changing from line to line, that depends only on T and K. By definition, for any Dirac measure \(\delta _y\) on \(\mathbb {R}^{n_1}\) and density function m, we have

$$\begin{aligned} W_2^2(\delta _y,m)=\int _{\mathbb {R}^{n_1}}|y-x|^2 dm(x). \end{aligned}$$

Also observe that the joint measure \(\frac{1}{N}\sum _{j=1}^N\delta _{(\hat{y}_1^j(s), \hat{x}_1^j(s))}\) on \({\mathbb {R}^{n_1}\times {\mathbb {R}^{n_1}}}\) has respective marginals \(\frac{1}{N}\sum _{j=1}^N\delta _{\hat{y}_1^j(s)}\) and \(\frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)}\) on \({\mathbb {R}^{n_1}}\). Using the definition of the Wasserstein metric, we evaluate

$$\begin{aligned}&\mathbb {E}\left[ W_2^2\left( \dfrac{1}{N}\displaystyle \sum _{j=1}^N \delta _{\hat{y}_1^j(s)},\dfrac{1}{N}\displaystyle \sum _{j=1}^N \delta _{\hat{x}_1^j(s)}\right) \right] \nonumber \\&\quad \le \mathbb {E}\left[ \displaystyle \int _{\mathbb {R}^{n_1}\times {\mathbb {R}^{n_1}}}|y-x|^2 d\left( \dfrac{1}{N}\sum _{j=1}^N\delta _{(\hat{y}_1^j(s), \hat{x}_1^j(s))}(y,x)\right) \right] \nonumber \\&\quad \le \dfrac{1}{N}\displaystyle \sum _{j=1}^N\mathbb {E}\Big |\hat{y}_1^j(s)-\hat{x}_1^j(s)\Big |^2\nonumber \\&\quad =\mathbb {E}\Big |\hat{y}_1^i(s)-\hat{x}_1^i(s)\Big |^2, \end{aligned}$$
(26)

where the last equality results from the fact that the \(\{\hat{y}_1^j - \hat{x}_1^j\}_{j=1}^N\) are symmetric. Similarly, we also have

$$\begin{aligned} \mathbb {E}\left[ W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1,j\ne i}^N \delta _{\hat{y}_1^j(s)},\dfrac{1}{N-1}\displaystyle \sum _{j=1,j\ne i}^N \delta _{\hat{x}_1^j(s)}\right) \right] \le \mathbb {E}\Big |\hat{y}_1^i(s)-\hat{x}_1^i(s)\Big |^2. \end{aligned}$$
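The coupling step behind (26) admits a quick numerical check in dimension one, where the squared \(W_2\) distance between two empirical measures with equally many atoms is computed exactly by matching sorted samples. The diagonal coupling \((\hat{y}_1^j,\hat{x}_1^j)\) then gives an upper bound, as the estimate asserts; the samples below are synthetic placeholders.

```python
import numpy as np

def w2sq_empirical(a, b):
    """Exact squared 2-Wasserstein distance between two empirical measures
    on the real line with equally many atoms: match sorted samples."""
    return float(np.mean((np.sort(a) - np.sort(b)) ** 2))

rng = np.random.default_rng(1)
N = 1000
y = rng.normal(0.0, 1.0, N)
x = y + rng.normal(0.0, 0.1, N)      # coupled perturbation, as in (y^j, x^j)

lhs = w2sq_empirical(y, x)           # optimal (monotone) coupling
rhs = float(np.mean((y - x) ** 2))   # cost of the diagonal coupling
assert lhs <= rhs + 1e-12            # W_2^2 <= (1/N) sum |y^j - x^j|^2
```

The monotone rearrangement is optimal for the squared cost on the real line, so `lhs` never exceeds the diagonal coupling cost `rhs`.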

Combining with (25) and applying Gronwall’s inequality, we have

$$\begin{aligned}&\mathbb {E}\displaystyle \sup _{u\le t}|y_0(u)-x_0(u)|^2+\mathbb {E}\displaystyle \sup _{u\le t}|\hat{y}_1^i(u)-\hat{x}_1^i(u)|^2\nonumber \\&\quad \le Ce^{Ct}\mathbb {E}\left[ \displaystyle \int _0^t W_2^2\left( \dfrac{1}{N}\displaystyle \sum _{j=1}^N \delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) +W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1,j\ne i}^N \delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) ds\right] .\nonumber \\ \end{aligned}$$
(27)

First observe that

$$\begin{aligned} \mathbb {E}\left[ W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] =\mathbb {E}\left[ \mathbb {E}^{\mathcal {F}_s^0}\left[ W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] \right] . \end{aligned}$$

Given \(\mathcal {F}^0_s\), the \(\{\hat{x}_1^j\}_j\) are i.i.d. with common conditional density function \(m_{x_0}(s)\), which is \(\mathcal {F}^0_s\)-measurable. First, the empirical measure \(\dfrac{1}{N}\displaystyle \sum _{j=1}^N\delta _{\hat{x}_1^j(s)}\) converges weakly to \( m_{x_0}(s)\); moreover, by the strong law of large numbers, the empirical second moment \(\dfrac{1}{N}\displaystyle \sum _{j=1}^N |\hat{x}_1^j(s)|^2\) converges almost surely to the theoretical second moment \(\mathbb {E}^{\mathcal {F}_s^0}\left| \hat{x}_1^1(s)\right| ^2\). Since convergence in the Wasserstein metric is equivalent to (1) weak* convergence in measure together with (2) convergence of second moments, we have

$$\begin{aligned} W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \rightarrow 0 \text { a.s. on } \mathcal {F}^0_s, s\in [0,T]. \end{aligned}$$
(28)
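The convergence (28) is the (conditional) law of large numbers for empirical measures in the Wasserstein metric. A one-dimensional illustration, using the quantile representation of \(W_2\) on the real line and standard normal samples as a stand-in for \(m_{x_0}(s)\) (an assumption made purely for illustration):

```python
import numpy as np
from statistics import NormalDist

def w2sq_to_normal(sample):
    """Approximate W_2^2 between the empirical measure of `sample` and the
    standard normal, via the 1-D quantile representation of W_2."""
    n = len(sample)
    u = (np.arange(n) + 0.5) / n                       # midpoint probabilities
    q = np.array([NormalDist().inv_cdf(p) for p in u]) # normal quantiles
    return float(np.mean((np.sort(sample) - q) ** 2))

rng = np.random.default_rng(2)
small = w2sq_to_normal(rng.normal(size=100))
large = w2sq_to_normal(rng.normal(size=10_000))
# The distance shrinks as N grows, illustrating (28).
```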

By the definition of the Wasserstein metric, we obtain

$$\begin{aligned} W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \le \left[ \frac{2}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2+2\mathbb {E}^{\mathcal {F}_s^0}|\hat{x}_1^1(s)|^2\right] . \end{aligned}$$

To check the uniform integrability with respect to the conditional expectation on \(\mathcal {F}_s^0\), we estimate

$$\begin{aligned}&\mathbb {E}^{\mathcal {F}_s^0}\left[ \frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2\mathbb {I}_{\{\frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2+\mathbb {E}^{\mathcal {F}_s^0}|\hat{x}_1^1(s)|^2>C\}}\right] \nonumber \\&\quad \le \mathbb {E}^{\mathcal {F}_s^0}\left[ \frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2\mathbb {I}_{\{\frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2>\frac{C}{2}\}}\right] +\mathbb {E}^{\mathcal {F}_s^0}\left[ \frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2\mathbb {I}_{\{\mathbb {E}^{\mathcal {F}_s^0}|\hat{x}_1^1(s)|^2>\frac{C}{2}\}}\right] \nonumber \\&\quad =\frac{1}{N}\sum _{j=1}^N \mathbb {E}^{\mathcal {F}_s^0}\Big [|\hat{x}_1^j(s)|^2\mathbb {I}_{\{\frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2>\frac{C}{2}\}}\Big ]+\frac{1}{N}\sum _{j=1}^N\mathbb {E}^{\mathcal {F}_s^0}\Big [|\hat{x}_1^j(s)|^2\Big ]\mathbb {I}_{\{\mathbb {E}^{\mathcal {F}_s^0}|\hat{x}_1^1(s)|^2>\frac{C}{2}\}}\nonumber \\&\quad =\mathbb {E}^{\mathcal {F}_s^0}\Big [|\hat{x}_1^1(s)|^2\mathbb {I}_{\{\frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2>\frac{C}{2}\}}\Big ]+\mathbb {E}^{\mathcal {F}_s^0}\Big [|\hat{x}_1^j(s)|^2\Big ]\mathbb {I}_{\{\mathbb {E}^{\mathcal {F}_s^0}|\hat{x}_1^1(s)|^2>\frac{C}{2}\}}, \end{aligned}$$
(29)

where the last equality results from the symmetry of \(\{|\hat{x}_1^j(s)|^2\}_j\). A simple application of Chebyshev's inequality gives

$$\begin{aligned} \mathbb {P}^{\mathcal {F}_s^0}\left[ \frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2>\frac{C}{2}\right] \le \frac{2\mathbb {E}^{\mathcal {F}_s^0}\Big [\frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2\Big ]}{C}\le \frac{2\mathbb {E}^{\mathcal {F}_s^0}|\hat{x}_1^1(s)|^2}{C}, \end{aligned}$$

which is independent of N and tends to 0 as \(C\rightarrow \infty \) on \(\mathcal {F}^0_s\); hence the first term converges to 0. On the other hand, the second term in (29) vanishes whenever \(\frac{C}{2}\ge \mathbb {E}^{\mathcal {F}_s^0} |\hat{x}_1^1(s)|^2\triangleq \int _{\mathbb {R}^{n_1}}|x|^2 dm_{x_0}(x,s)\), which is finite almost surely on \(\mathcal {F}^0_s\). We conclude that

$$\begin{aligned} \mathbb {E}^{\mathcal {F}_s^0}\left[ \frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2\mathbb {I}_{\{\frac{1}{N}\sum _{j=1}^N |\hat{x}_1^j(s)|^2+\mathbb {E}^{\mathcal {F}_s^0}|\hat{x}_1^1(s)|^2>C\}}\right] \rightarrow 0, \end{aligned}$$

and hence the family \(\{W_2^2(\frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s))\}_N\) is uniformly integrable with respect to the conditional expectation \(\mathbb {E}^{\mathcal {F}_s^0}\). Together with (28), we obtain

$$\begin{aligned} \mathbb {E}^{\mathcal {F}_s^0}\left[ W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] \rightarrow 0 \text { a.s.} \end{aligned}$$

Next we have

$$\begin{aligned} \mathbb {E}^{\mathcal {F}_s^0}\left[ W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] \le 4\mathbb {E}^{\mathcal {F}_s^0}|\hat{x}_1^1(s)|^2\le 4\mathbb {E}^{\mathcal {F}_s^0}\left[ \sup _{u\le T}|\hat{x}_1^1(u)|^2\right] . \end{aligned}$$
(30)

For the right-hand side of (30), we obtain the standard estimate

$$\begin{aligned} \mathbb {E}\left[ \mathbb {E}^{\mathcal {F}_s^0}\left[ \sup _{u\le T}|\hat{x}_1^1(u)|^2\right] \right] =\mathbb {E}\left[ \sup _{u\le T}|\hat{x}_1^1(u)|^2\right] <\infty \end{aligned}$$
(31)

by the linear growth assumptions on the functional coefficients in the evolutions of \(\hat{x}_1^i\) and \(x_0\). We conclude that the left-hand side of (30) is integrable, and hence by the Dominated Convergence Theorem we have

$$\begin{aligned}&\mathbb {E}\left[ W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] \\&\quad =\mathbb {E}\left[ \mathbb {E}^{\mathcal {F}_s^0}\left[ W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] \right] \rightarrow 0, \text { for all } s\in [0,t]. \end{aligned}$$

Finally, since the estimate (31) is uniform in s, another application of the Dominated Convergence Theorem gives

$$\begin{aligned}&\lim _{N\rightarrow \infty }\int _0^t \mathbb {E}\left[ W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] ds\\&\quad =\int _0^t \lim _{N\rightarrow \infty } \mathbb {E}\left[ W_2^2\left( \frac{1}{N}\sum _{j=1}^N\delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] ds=0. \end{aligned}$$

A similar estimate applies to the second term in Equation (27). Putting \(t=T\), we finally have

$$\begin{aligned} \mathbb {E}\displaystyle \sup _{u\le T}|y_0(u)-x_0(u)|^2+\mathbb {E}\displaystyle \sup _{u\le T}|\hat{y}_1^i(u)-\hat{x}_1^i(u)|^2\rightarrow 0, \text { as }N\rightarrow \infty . \end{aligned}$$

\(\square \)

We also have an approximation result for the cost functionals.

Lemma 5.2

$$\begin{aligned} \mathcal {J}^{N,i}(\hat{\mathbf{u}})-\mathcal {J}^{i}(\hat{u}_1^i)=o(1). \end{aligned}$$

Proof

Under the quadratic growth assumptions (2), we have

$$\begin{aligned}&|\mathcal {J}^{N,i}(\hat{\mathbf{u}})-\mathcal {J}^{i}(\hat{u}^i)|\le \mathbb {E}\left[ \displaystyle \int _0^Tf_1\left( \hat{y}_1^i(t),y_0(t),\dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{y}_1^j(t)},\hat{u}^i(t)\right) \right. \\&\qquad -f_1\left( \hat{x}_1^i(t),x_0(t),m_{x_0}(t),\hat{u}_1^i(t)\right) dt\\&\qquad \left. +h_1\left( \hat{y}_1^i(T),y_0(T),\dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{y}_1^j(T)}\right) -h_1\Big (\hat{x}_1^i(T),\hat{x}_0(T),m_{x_0}(T)\Big )\right] \\&\quad \le C\mathbb {E}\left[ \displaystyle \int _0^T \Big [1+|\hat{y}_1^i(t)|+|\hat{x}_1^i(t)|+|y_0(t)|+|x_0(t)|+\left( \dfrac{\sum _{j=1, j \ne i}^N |\hat{y}_1^j(t)|^2}{N-1}\right) ^{\frac{1}{2}}\right. \\&\qquad \qquad \left. +\Big (\mathbb {E}^{\mathcal {F}_t^0}|\hat{x}_1^i(t)|^2\Big )^{\frac{1}{2}}+2|\hat{u}^i(t)|\right] \\&\qquad \qquad \cdot \left[ |\hat{y}_1^i(t)-\hat{x}_1^i(t)|+|y_0(t)-x_0(t)|+W_2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{y}_1^j(t)},m_{x_0}(t)\right) \right] dt\\&\qquad +\left[ 1+|\hat{y}_1^i(T)|+|\hat{x}_1^i(T)|+|y_0(T)|+|x_0(T)|+\left( \dfrac{\sum _{j=1, j \ne i}^N |\hat{y}_1^j(T)|^2}{N-1}\right) ^{\frac{1}{2}}\right. \\&\qquad \qquad \left. +\Big (\mathbb {E}^{\mathcal {F}_t^0}|\hat{x}_1^i(T)|^2\Big )^{\frac{1}{2}}\right] \\&\qquad \qquad \cdot \left[ |\hat{y}_1^i(T)-\hat{x}_1^i(T)|+|y_0(T)-x_0(T)|\right. \\&\qquad \qquad \left. \left. +W_2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{y}_1^j(T)},m_{x_0}(T)\right) \right] \right] . \end{aligned}$$

An application of Hölder's inequality and the symmetry of \(\{\hat{x}_1^j\}_j\) gives

$$\begin{aligned}&|\mathcal {J}^{N,i}(\hat{\mathbf{u}})-\mathcal {J}^{i}(\hat{u}^i)| \le C\left\{ \left[ \mathbb {E}\displaystyle \int _0^T \left[ 1+|\hat{y}_1^i(t)|^2+|\hat{x}_1^i(t)|^2+|y_0(t)|^2+|x_0(t)|^2\right. \right. \right. \\&\quad \left. \left. +\,\dfrac{\sum _{j=1, j \ne i}^N |\hat{y}_1^j(t)|^2}{N-1}+\mathbb {E}^{\mathcal {F}_t^0}|\hat{x}_1^i(t)|^2+|\hat{u}^i(t)|^2\right] dt\right] ^{\frac{1}{2}}\\&\qquad \qquad \cdot \left[ \mathbb {E}\displaystyle \int _0^T\left[ |\hat{y}_1^i(t)-\hat{x}_1^i(t)|^2+|y_0(t)-x_0(t)|^2\right. \right. \\&\qquad \qquad \left. \left. +W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{y}_1^j(t)},m_{x_0}(t)\right) \right] dt\right] ^{\frac{1}{2}}\\&\qquad +\left[ \mathbb {E}\left[ 1+|\hat{y}_1^i(T)|^2+|\hat{x}_1^i(T)|^2+|y_0(T)|^2+|x_0(T)|^2\right. \right. \\&\qquad \left. \left. +\dfrac{\sum _{j=1, j \ne i}^N |\hat{y}_1^j(T)|^2}{N-1}+\mathbb {E}^{\mathcal {F}_t^0}|\hat{x}_1^i(T)|^2\right] \right] ^{\frac{1}{2}}\\&\qquad \qquad \qquad \cdot \left[ \mathbb {E}\left[ |\hat{y}_1^i(T)-\hat{x}_1^i(T)|^2+|y_0(T)-x_0(T)|^2\right. \right. \\&\qquad \qquad \left. \left. \left. +W^2_2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{y}_1^j(T)},m_{x_0}(T)\right) \right] \right] ^{\frac{1}{2}}\right\} \\= & {} C\left\{ \bigg [\mathbb {E}\displaystyle \int _0^T \Big [1+|\hat{y}_1^i(t)|^2+|\hat{x}_1^i(t)|^2+|y_0(t)|^2+|x_0(t)|^2+|\hat{u}^i(t)|^2\Big ]dt\bigg ]^{\frac{1}{2}}\right. \\&\qquad \qquad \qquad \cdot \left[ \mathbb {E}\displaystyle \int _0^T\left[ |\hat{y}_1^i(t)-\hat{x}_1^i(t)|^2+|y_0(t)-x_0(t)|^2\right. \right. \\&\qquad \qquad \left. \left. 
+W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{x}_1^j(t)},m_{x_0}(t)\right) \right] dt\right] ^{\frac{1}{2}}\\&\qquad +\bigg [\mathbb {E}\Big [1+|\hat{y}_1^i(T)|^2+|\hat{x}_1^i(T)|^2+|y_0(T)|^2+|x_0(T)|^2\Big ]\bigg ]^{\frac{1}{2}}\\&\qquad \qquad \qquad \cdot \left[ \mathbb {E}\Big [|\hat{y}_1^i(T)-\hat{x}_1^i(T)|^2+|y_0(T)-x_0(T)|^2\right. \\&\qquad \qquad \left. \left. \left. +W^2_2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{x}_1^j(T)},m_{x_0}(T)\right) \right] \right] ^{\frac{1}{2}}\right\} \end{aligned}$$

By the linear growth assumptions on \(g_0\), \(\sigma _0\), \(g_1\) and \(\sigma _1\), it is easy to show that

$$\begin{aligned} \mathbb {E}\displaystyle \int _0^T \Big [1+|\hat{y}_1^i(t)|^2+|\hat{x}_1^i(t)|^2+|y_0(t)|^2+|x_0(t)|^2+|\hat{u}^i(t)|^2\Big ]dt \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}\Big [1+|\hat{y}_1^i(T)|^2+|\hat{x}_1^i(T)|^2+|y_0(T)|^2+|x_0(T)|^2\Big ] \end{aligned}$$

are bounded independently of N. We finally arrive at the estimate

$$\begin{aligned}&|\mathcal {J}^{N,i}(\hat{\mathbf{u}})-\mathcal {J}^{i}(\hat{u}^i)|\\&\quad \le C\left\{ \left[ \mathbb {E}\displaystyle \int _0^T\left[ |\hat{y}_1^i(t)-\hat{x}_1^i(t)|^2+|y_0(t)-x_0(t)|^2\right. \right. \right. \\&\qquad \left. \left. +W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{x}_1^j(t)},m_{x_0}(t)\right) \right] dt\right] ^{\frac{1}{2}}\\&\qquad +\left[ \mathbb {E}\left[ |\hat{y}_1^i(T)-\hat{x}_1^i(T)|^2+|y_0(T)-x_0(T)|^2\right. \right. \\&\qquad \left. \left. \left. +W^2_2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=1, j \ne i}^N\delta _{\hat{x}_1^j(T)},m_{x_0}(T)\right) \right] \right] ^{\frac{1}{2}}\right\} , \end{aligned}$$

which goes to 0 as \(N\rightarrow \infty \), as shown in Lemma 5.1. Hence

$$\begin{aligned} |\mathcal {J}^{N,i}(\hat{\mathbf{u}})-\mathcal {J}^{i}(\hat{u}_1^i)|=o(1). \end{aligned}$$

\(\square \)

In the previous lemmas, we assumed that all players adopt their corresponding mean field optimal controls; by symmetry, the convergence of the state dynamics and of the cost functionals was then established. To show that the mean field optimal controls \(\hat{\mathbf{u}}\) indeed constitute an \(\epsilon \)-Nash equilibrium of the empirical system, without loss of generality we assume that the first player deviates from the mean field optimal control. In particular, let \(u_1^1\) be an arbitrary control in \(\mathcal {A}_1\), and define \(\mathbf{u}{:}=(u_1^1,\hat{u}_1^2,\dots ,\hat{u}_1^N)\). We then have the following empirical and mean field SDEs for the dominating player, the first player and the i-th player (\(i>1\)), respectively:

$$\begin{aligned}&\left\{ \begin{array}{rcl} dy_0&{}=&{}g_0\left( y_0(t),\dfrac{1}{N}\left( \delta _{ y_1^1(t)}+\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(t)}\right) ,u_0(t)\right) dt+\sigma _0\left( y_0(t)\right) dW_0(t),\\ y_0(0)&{}=&{}\xi _0.\\ dx_0&{}=&{}g_0\left( x_0(t),m_{x_0}(t),u_0(t)\right) dt+\sigma _0\left( x_0(t)\right) dW_0(t),\\ x_0(0)&{}=&{}\xi _0.\\ \end{array}\right. \\&\left\{ \begin{array}{rcl} dy_1^1&{}=&{}g_1\left( y_1^1(t),y_0(t),\dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(t)},u_1^1(t)\right) dt+\sigma _1\left( y_1^1(t)\right) dW_1^1(t),\\ y_1^1(0)&{}=&{}\xi _1^1.\\ dx_1^1&{}=&{}g_1\Big (x_1^1(t),x_0(t),m_{x_0}(t),u_1^1(t)\Big )dt+\sigma _1\Big (x_1^1(t)\Big )dW_1^1(t),\\ x_1^1(0)&{}=&{}\xi _1^1.\\ \end{array}\right. \\&\left\{ \begin{array}{rcl} d\hat{y}_1^i&{}=&{}g_1\left( \hat{y}_1^i(t),y_0(t),\dfrac{1}{N-1}\left( \delta _{ y_1^1(t)}+\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{y}_1^j(t)}\right) ,\hat{u}_1^i(t)\right) dt\\ &{}&{}+\sigma _1\left( \hat{y}_1^i(t)\right) dW_1^i(t),\\ \hat{y}_1^i(0)&{}=&{}\xi _1^i.\\ d\hat{x}_1^i&{}=&{}g_1\left( \hat{x}_1^i(t),x_0(t),m_{x_0}(t),\hat{u}_1^i(t)\right) dt+\sigma _1\left( \hat{x}_1^i(t)\right) dW_1^i(t),\\ \hat{x}_1^i(0)&{}=&{}\xi _1^i.\\ \end{array}\right. \end{aligned}$$

We claim that if \(m_{x_0}\) is the density function of \(\hat{x}_1^i\) conditional on \(\mathcal {F}^0\), then we have the convergences \(y_0\rightarrow x_0\), \(y_1^1\rightarrow x_1^1\) and \(\hat{y}_1^i \rightarrow \hat{x}_1^i\) in the sense of the following lemma.

Lemma 5.3

$$\begin{aligned}&\mathbb {E}\Big [\displaystyle \sup _{u \le T}|y_0(u)-x_0(u)|^2\Big ]+\mathbb {E}\Big [\displaystyle \sup _{u \le T}|y_1^1(u)-x_1^1(u)|^2\Big ]\\&\quad +\mathbb {E}\Big [\displaystyle \sup _{u \le T}|\hat{y}_1^i(u)-\hat{x}_1^i(u)|^2\Big ]\rightarrow 0, \quad \text { as } N \rightarrow \infty . \end{aligned}$$

Proof

We first show the convergence of the dominating player and the i-th player. Similar to the proof of Lemma 5.1, we first have

$$\begin{aligned}&\mathbb {E}\displaystyle \sup _{u\le t}|y_0(u)-x_0(u)|^2+\mathbb {E}\displaystyle \sup _{u\le t}|\hat{y}_1^i(u)-\hat{x}_1^i(u)|^2\nonumber \\&\quad \le C\left\{ t\mathbb {E}\displaystyle \int _0^t \Big |y_0(s)-x_0(s)\Big |^2+W_2^2\left( \dfrac{1}{N}\left( \delta _{ y_1^1(s)}+\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)}\right) ,m_{x_0}(s)\right) ds\right. \nonumber \\&\qquad +\mathbb {E}\displaystyle \int _0^t\Big |y_0(s)-x_0(s)\Big |^2\bigg \}ds\nonumber \\&\qquad +C\bigg \{t\mathbb {E}\displaystyle \int _0^t \Big |\hat{y}_1^i(s)-\hat{x}_1^i(s)\Big |^2+\Big |y_0(s)-x_0(s)\Big |^2\nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N-1}\left( \delta _{ y_1^1(s)}+\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{y}_1^j(s)}\right) ,m_{x_0}(s)\right) ds+\mathbb {E}\displaystyle \int _0^t\Big |\hat{y}_1^i(s)-\hat{x}_1^i(s)\Big |^2\bigg \}ds\nonumber \\&\quad \le C\mathbb {E}\displaystyle \int _0^t \left[ \displaystyle \sup _{u\le s}\Big |y_0(u)-x_0(u)\Big |^2+\displaystyle \sup _{u\le s}\Big |\hat{y}_1^i(u)-\hat{x}_1^i(u)\Big |^2\nonumber \right. \\&\qquad +W_2^2\left( \dfrac{1}{N}\left( \delta _{ y_1^1(s)}+\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)}\right) ,\dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)}\right) \nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)},\dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{x}_1^j(s)}\right) +W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N-1}\left( \delta _{ y_1^1(s)}+\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{y}_1^j(s)}\right) ,\dfrac{1}{N-2}\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{y}_1^j(s)}\right) \nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N-2}\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{y}_1^j(s)},\dfrac{1}{N-2}\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{x}_1^j(s)}\right) \nonumber \\&\qquad \left. 
+W_2^2\left( \dfrac{1}{N-2}\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] ds \end{aligned}$$
(32)

By the same argument as in Equation (26) in Lemma 5.1, we have

$$\begin{aligned}&\mathbb {E}\left[ W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)},\dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{x}_1^j(s)}\right) \right] \\&\qquad +\mathbb {E}\left[ W_2^2\left( \dfrac{1}{N-2}\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{y}_1^j(s)},\dfrac{1}{N-2}\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{x}_1^j(s)}\right) \right] \le 2\mathbb {E}|\hat{y}_1^i(s)-\hat{x}_1^i(s)|^2. \end{aligned}$$

Hence, applying Gronwall's inequality to Equation (32), we have

$$\begin{aligned}&\mathbb {E}\displaystyle \sup _{u\le t}|y_0(u)-x_0(u)|^2+\mathbb {E}\displaystyle \sup _{u\le t}|\hat{y}_1^i(u)-\hat{x}_1^i(u)|^2\nonumber \\&\quad \le Ce^{Ct}\mathbb {E}\displaystyle \int _0^t \left[ W_2^2\left( \dfrac{1}{N}\left( \delta _{ y_1^1(s)}+\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)}\right) ,\dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)}\right) \right. \nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \nonumber \\&\qquad +W_2^2\left( \dfrac{1}{N-1}\left( \delta _{ y_1^1(s)}+\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{y}_1^j(s)}\right) ,\dfrac{1}{N-2}\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{y}_1^j(s)}\right) \nonumber \\&\qquad \left. +W_2^2\left( \dfrac{1}{N-2}\displaystyle \sum _{j=2,j\ne i}^N \delta _{\hat{x}_1^j(s)},m_{x_0}(s)\right) \right] ds \end{aligned}$$
(33)

For the first term in (33), consider the following joint measure on \(\mathbb {R}^{n_1}\times \mathbb {R}^{n_1}\):

$$\begin{aligned} \mu (x,y)=\dfrac{1}{N}\displaystyle \sum _{j=2}^N\delta _{(\hat{y}_1^j(s),\hat{y}_1^j(s))}(x,y)+\dfrac{1}{N(N-1)}\displaystyle \sum _{j=2}^N\delta _{(y_1^1(s),\hat{y}_1^j(s))}(x,y), \end{aligned}$$

which has respective marginals

$$\begin{aligned} \dfrac{1}{N}\left( \delta _{ y_1^1(s)}+\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)}\right) \quad \text {and}\quad \dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)}. \end{aligned}$$

By the definition of the Wasserstein metric,

$$\begin{aligned}&\mathbb {E}\left[ W_2^2\left( \dfrac{1}{N}\left( \delta _{ y_1^1(s)}+\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)}\right) ,\dfrac{1}{N-1}\displaystyle \sum _{j=2}^N \delta _{\hat{y}_1^j(s)}\right) \right] \\&\quad \le \mathbb {E}\left[ \displaystyle \int _{\mathbb {R}^{n_1}\times \mathbb {R}^{n_1}}|x-y|^2d\mu (x,y)\right] \\&\quad =\mathbb {E}\left[ \dfrac{1}{N(N-1)}\displaystyle \sum _{j=2}^N|y_1^1(s)-\hat{y}_1^j(s)|^2\right] \\&\quad =\dfrac{1}{N}\mathbb {E}|y_1^1(s)-\hat{y}_1^2(s)|^2, \end{aligned}$$

where the last equality results from the symmetry of \(\{y_1^j\}_j\); the bound clearly goes to 0 as \(N\rightarrow \infty \). A similar argument applies to the third term in (33). For the convergence of the second and fourth terms, we refer to the argument in the last part of Lemma 5.1, and the results follow.
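The \(O(1/N)\) decay produced by this coupling can also be seen numerically. The snippet below uses standard normal placeholders (an assumption, not the paper's dynamics) for \(y_1^1(s)\) and the \(\hat{y}_1^j(s)\), and evaluates the transport cost of the coupling \(\mu \) defined above.

```python
import numpy as np

rng = np.random.default_rng(3)

def coupling_cost(N):
    """Cost of the coupling mu from the text, with standard normal
    placeholders (an assumption) for y_1^1(s) and the hat{y}_1^j(s)."""
    y1 = rng.normal()                # deviating first player
    yhat = rng.normal(size=N - 1)    # symmetric players j = 2..N
    # only the mass 1/(N(N-1)) on each pair (y_1^1, yhat^j) is transported:
    return float(np.sum((y1 - yhat) ** 2) / (N * (N - 1)))

c_small, c_large = coupling_cost(100), coupling_cost(10_000)
# W_2^2 between the two empirical measures is at most coupling_cost(N) = O(1/N)
```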

For the convergence of the first player, the procedure is similar and we omit the details. \(\square \)

By a similar procedure, the convergence of the cost functional follows. In particular, we have

$$\begin{aligned} |\mathcal {J}^{N,1}(\mathbf{u})-\mathcal {J}^{1}(u_1^1)|=o(1). \end{aligned}$$

Theorem 5.4

\(\hat{\mathbf{u}}\) is an \(\epsilon \)-Nash equilibrium.

Proof

Summarizing all the results obtained in this section, we conclude that

$$\begin{aligned}&|\mathcal {J}^{N,1}(\hat{\mathbf{u}})-\mathcal {J}^{1}(\hat{u}_1^1)|=o(1);\\&|\mathcal {J}^{N,1}(\mathbf{u})-\mathcal {J}^{1}( u_1^1)|=o(1). \end{aligned}$$

Since \(\hat{u}_1^1\) is the optimal control for the limiting problem, we have \(\mathcal {J}^{1}(\hat{u}_1^1) \le \mathcal {J}^{1}( u_1^1)\). We deduce that

$$\begin{aligned} \mathcal {J}^{N,1}(\hat{\mathbf{u}}) \le \mathcal {J}^{N,1}(\mathbf{u})+o(1). \end{aligned}$$

Hence, \(\hat{\mathbf{u}}\) is an \(\epsilon \)-Nash equilibrium. \(\square \)

Cite this article

Bensoussan, A., Chau, M.H.M. & Yam, S.C.P. Mean Field Games with a Dominating Player. Appl Math Optim 74, 91–128 (2016). https://doi.org/10.1007/s00245-015-9309-1
