Open Access | 10-12-2020
Dynamic Equilibrium of Market Making with Price Competition
Published in: Dynamic Games and Applications, Issue 3/2021
Abstract
In this paper, we discuss the dynamic equilibrium of market making with price competition and incomplete information. The arrival of market sell/buy orders follows a pure jump process whose intensity depends on the bid/ask spreads among market makers and has a looping counter-monotonic structure. We solve the problem with the nonzero-sum stochastic differential game approach and characterize the equilibrium value function with a coupled system of Hamilton–Jacobi nonlinear ordinary differential equations. We prove, rather than assume a priori, that the generalized Isaacs condition is satisfied, which ensures the existence and uniqueness of the Nash equilibrium. We also perform numerical tests showing that our model produces tighter bid/ask spreads than those derived from a benchmark model without price competition, which indicates that price competition among market makers enhances market liquidity.
1 Introduction
Market makers play an important role in providing liquidity for other market participants. They keep quoting bid and ask prices at which they stand ready to buy and sell a wide variety of assets simultaneously. One of the key challenges market makers face is managing inventory risk: the bid/ask prices they set influence both their profit margins and the accumulation of inventory. Many market makers compete for market order flows, as their profits come from the bid/ask spread of each transaction, while traders choose to buy/sell at the most competitive prices offered in the market. Hence, market makers face a complex optimization problem. In this paper, we model market making for a single asset with price competition as a nonzero-sum stochastic differential game.
There has been active research on optimal market making in the literature, with a focus on inventory risk management. Stochastic control and the Hamilton–Jacobi–Bellman (HJB) equation, a nonlinear partial differential equation (PDE), are used to derive the optimal bid/ask spread. Ho and Stoll [13] give the first prototype model for the market making problem. Avellaneda and Stoikov [2] propose a basic trading model in which the asset mid-price follows a Brownian motion, market buy/sell order arrivals follow a Poisson process with an intensity that decreases exponentially in the bid/ask spread, and market makers optimally set the bid/ask spread to maximize the expected utility of terminal wealth. Guéant et al. [11] discuss a quote-driven market and include an inventory penalty in the terminal utility maximization. Guéant [10] extends the model of Guéant et al. [11] to a general intensity function and reduces the dimensionality of the HJB equation for CARA utility. Cartea and Jaimungal [6] consider the market impact, capture the clustering effect of market order arrivals with a self-exciting process driven by informative market orders and news events, and solve the HJB equation by an asymptotic method. Cartea et al. [5] study model uncertainty in a setting similar to Avellaneda and Stoikov [2] and Guéant et al. [11], except for the self-exciting feature of market order arrivals. Fodra and Pham [8] classify market orders by size, with large orders able to move the mid-price, which follows a Markov renewal process. Abergel et al. [1] discuss a pure jump model for optimal market making on the limit order book with the Markov decision process technique conditioned on the jump time clock.
One common feature of the aforementioned papers is that market order arrivals follow a Poisson process with controlled intensity: the probability that a market maker buys/sells a security at the bid/ask price she quotes is a function of her own bid/ask spread only. This setting provides tractability but ignores the influence of prices offered by other market makers. Price competition between market makers is an important trading factor in practice and needs to be integrated into the model. Kyle adopts the game-theoretic approach in a number of papers [14–16] to study the price competition between market participants (informed traders, noise traders and market makers), finds the equilibrium explicitly, and shows its impact on price formation and market liquidity. To the best of the authors' knowledge, there are no known results in the literature on price competition between market makers who keep trading to profit from the bid/ask spread while minimizing inventory risk and improving market liquidity. The primary motivation of this paper is to fill this gap. Market making with price competition is the key difference between our model and that of Guéant et al. [11] and others in the literature. Standard optimal stochastic control is not applicable to our model due to the looping dependence structure, and equilibrium control is used instead to solve the problem.
The main contributions of this paper are the following. Firstly, we discuss price competition between market makers in a continuous-time setting with inventory constraints and incomplete market information about competitors’ inventory, and we extend the classical optimal market making framework of Avellaneda and Stoikov [2] with the game-theoretic approach. Secondly, we prove the existence and uniqueness of the Nash equilibrium for the game under a linear-quadratic payoff and prove that the generalized Isaacs condition is satisfied for a system of nonlinear ordinary differential equations (ODEs), rather than assuming it to hold a priori or solving it explicitly as in most of the literature; see [3, 4, 12, 17]. Thirdly, we perform numerical tests to compute the equilibrium value function and equilibrium controls (bid/ask spreads) and compare the results with those from a benchmark model without price competition; we find that our model reduces the bid/ask spread and improves the asset’s market liquidity considerably.
The rest of the paper is organized as follows. In Sect. 2, we introduce the model setup and notations. In Sect. 3, we state the main results on the existence and uniqueness of the Nash equilibrium, the generalized Isaacs condition and the verification theorem for the equilibrium value function. In Sect. 4, we perform numerical tests to show the impact of price competition and compare the results with a benchmark model without price competition. In Sect. 5, we prove the main results (Theorems 3.3 and 3.4). Section 6 concludes.
2 Model
Consider a market in a probability space \((\Omega , \mathcal{F}, P)\) with homogeneous market makers in a set \(\Omega _{\mathrm{mm}}\). Choose one of them as a reference market maker, whose states include the time variable \(t \in [0, T]\), the asset reference price \(S_{t}\), the cash position \(X_{t}\) and the inventory position \(q_{t}\). \(S_{t}\) is public information known to all market makers, whereas \(X_{t}\) and \(q_{t}\) are each market maker’s private information. The reference asset price \(S_t\) is assumed to follow a Gaussian process
$$\begin{aligned} dS_{t} = \sigma dW_{t}, \end{aligned}$$
where W is a standard Brownian motion adapted to the filtration \(\{ \mathcal {F}_{t} \}_{t \in \mathbb {R}_{+} }\), generated by W and augmented with all P-null sets, and \(\sigma \) is a constant representing the asset volatility. The terminal time T is small, normally a day, so the probability that \(S_t\) becomes negative is negligible, and we may assume \(S_t\) is always positive. Market makers do not buy/sell the asset at the reference price, but at bid and ask prices, and make a profit from the bid/ask spread. Denote by a a buying order and by b a selling order. The reference market maker’s bid price \(S_{t}^{b}\) and ask price \(S_{t}^{a}\) are given by
$$\begin{aligned} S_{t}^{b} = S_{t} - \delta ^{b}_{t}, \quad S_{t}^{a} = S_{t} + \delta ^{a}_{t}, \end{aligned}$$
where \(\delta ^{b}_{t}\) and \(\delta ^{a}_{t}\) are the bid and ask spreads controlled by the reference market maker.
At time t, other market makers also quote bid and ask prices simultaneously to compete with the reference market maker. Among their quotes, there exist a lowest ask price and a highest bid price, which are the most competitive prices other than the reference market maker’s prices. Denote by \({\mathbf {k}}_{{\mathbf {a}}}\) the market maker who provides the lowest ask price \(S_{{\mathbf {k}}_{{\mathbf {a}}}, t}^{a}\), and by \({\mathbf {k}}_{{\mathbf {b}}}\) the market maker who provides the highest bid price \(S_{{\mathbf {k}}_{{\mathbf {b}}}, t}^{b}\); in other words, \(\delta ^{b}_{{\mathbf {k}}_{{\mathbf {b}}}, t}\) and \(\delta ^{a}_{{\mathbf {k}}_{{\mathbf {a}}}, t}\) are the lowest bid and ask spreads among the reference market maker’s competitors.
Traders tend to sell/buy at the most competitive bid/ask price, but may accept less competitive prices due to other factors such as the liquidation of large quantities. From the reference market maker’s perspective, the arrival of buying/selling orders is unpredictable, but the intensities depend on both her bid/ask spreads and the most competitive ones: the lower her bid/ask spreads relative to the most competitive ones, the more likely her quotes are to be hit by traders. Hence, the arrival intensity is decreasing in her own spread and increasing in the most competitive spread. The arrivals of selling market orders \(N_{t}^{b}\) and buying market orders \(N_{t}^{a}\) are Poisson processes with controlled intensities \(\lambda _{t}^{b}\) and \(\lambda _{t}^{a}\), defined by
$$\begin{aligned} \lambda _{t}^{a} = f(\delta ^{a}_{ t},\delta ^{a}_{{\mathbf {k}}_{{\mathbf {a}}}, t}), \quad \lambda _{t}^{b} = f(\delta ^{b}_{ t},\delta ^{b}_{{\mathbf {k}}_{{\mathbf {b}}}, t}), \end{aligned}$$
where f is the intensity function. Denote by \(f'_{1}\) the first-order partial derivative of f with respect to its first variable, by \(f''_{11}\) the second-order partial derivative of f with respect to its first variable, and so on.
Assumption 2.1
Assume f is twice continuously differentiable and for all \(\delta , x, y \in \mathbb {R}\), \(f(\delta ,x) > 0\), \(f'_{1}(\delta , x) < 0\), \(f'_{2}(\delta , x) \ge 0\), \(\lim _{\delta \rightarrow +\infty } - \frac{f'_{1}(\delta , \delta )}{f(\delta , \delta )} > 0\), and
$$\begin{aligned} f(\delta , x) f''_{11}(\delta , y) - 2 f'_{1}(\delta , x) f'_{1}(\delta , y) + \left| f'_{1}(\delta , x) f'_{2}(\delta , y) - f''_{12}(\delta ,y) f(\delta ,x) \right| < 0. \end{aligned}$$
(2.1)
Furthermore, assume there exists a twice continuously differentiable function \(\lambda : \mathbb {R} \rightarrow \mathbb {R}\) such that \(f(\delta ,x) \le \lambda (\delta )\) for all \(x\in \mathbb {R}\), \(\lim _{\delta \rightarrow +\infty } \lambda (\delta ) \delta = 0\) and \( \lambda (\delta ) \lambda ''(\delta ) < 2 (\lambda '(\delta ))^2\).
Some conditions in Assumption 2.1 are technical and are needed in the proofs. Many functions satisfy these conditions, for example, \(f(\delta , x) = \lambda (\delta ) g(x)\), where \(\lambda \) is the function in Assumption 2.1 with negative first-order derivative and \(\lim _{\delta \rightarrow +\infty } - \frac{\lambda '(\delta )}{\lambda (\delta )} > 0\), and g is increasing, positive and bounded. Here is another example:
$$\begin{aligned} f(\delta , x) := \frac{\Lambda e^{-a \delta }}{\sqrt{1 + 3 e^{k (\delta - x)}}}, \end{aligned}$$
(2.2)
where \(\Lambda \) is the magnitude of the market order arrival rate, a the decay rate, and k the sensitivity to the difference between the reference market maker’s price and the most competitive price in the market, with \(a \ge \frac{\sqrt{2}}{2} k > 0\). It is easy to check that f satisfies all conditions in Assumption 2.1. Some simple functions do not satisfy Assumption 2.1. For example, a constant function is excluded; if it were allowed, it would imply that the size of the bid/ask spread does not affect the arrival rate for market makers, which is clearly unrealistic.
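As a quick numerical sanity check (not a proof), the sketch below evaluates the example intensity (2.2) on a grid and verifies the sign conditions of Assumption 2.1 that admit a direct spot check: positivity, strict decrease in the market maker's own spread \(\delta \), and increase in the competitor's spread x. The parameter values \(\Lambda = 60\), \(a = k = 2\) are illustrative choices (satisfying \(a \ge \frac{\sqrt{2}}{2} k\)).

```python
import numpy as np

# Grid spot-check of Assumption 2.1's first-order sign conditions for the
# example intensity (2.2). Parameter values are illustrative assumptions.
Lam, a, k = 60.0, 2.0, 2.0

def f(delta, x):
    return Lam * np.exp(-a * delta) / np.sqrt(1.0 + 3.0 * np.exp(k * (delta - x)))

deltas = np.linspace(-1.0, 4.0, 200)
xs = np.linspace(-1.0, 4.0, 200)
D, X = np.meshgrid(deltas, xs, indexing="ij")
F = f(D, X)

assert np.all(F > 0)                   # f(delta, x) > 0
assert np.all(np.diff(F, axis=0) < 0)  # decreasing in delta, i.e. f'_1 < 0
assert np.all(np.diff(F, axis=1) > 0)  # increasing in x, consistent with f'_2 >= 0
```

The check passes on this grid; the curvature condition (2.1) involves second derivatives and is not spot-checked here.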
We assume there is an inventory position constraint for all market makers. Let \(\mathbf {Q} = \{-Q, \ldots , Q\}\) be a finite set of integers, with \(Q\) and \(-Q\) the maximum and minimum positions a market maker may hold, and \(q_{t} \in \mathbf {Q}\). When \(q_t = Q\) (or \(-Q\)), the market maker cannot buy (or sell) any more. Denote by \(I^{b}\) and \(I^{a}\) the indicator functions of the market maker’s buying and selling capability:
$$\begin{aligned} I^{b}(q) := \mathbb {1}_{\{q + 1 \in \mathbf {Q}\}}, \quad I^{a}(q) := \mathbb {1}_{\{q - 1 \in \mathbf {Q}\}}, \end{aligned}$$
where \(\mathbb {1}_A\) is the indicator that equals 1 if A is true and 0 if A is false. When the market maker’s bid price is hit by a market order (\(N_{t}^{b}\) increases by 1), her inventory \(q_t\) increases by 1 and she pays \(S_{t}^{b}\) for buying the asset. Similarly, when her ask price is hit by a market order (\(N_{t}^{a}\) increases by 1), her inventory \(q_t\) decreases by 1 and she receives \(S_{t}^{a}\) for selling the asset. The dynamics of the cash \(X_{t}\) and inventory \(q_{t}\) are given by
$$\begin{aligned} \begin{aligned}&d X_{t} = S_{t}^{a} I^{a}(q_{t}) d N_{t}^{a} - S_{t}^{b} I^{b}(q_{t}) d N_{t}^{b}\\&d q_{t} = I^{b}(q_{t}) d N_{t}^{b} - I^{a}(q_{t}) d N_{t}^{a} \end{aligned} \end{aligned}$$
with the initial condition \((X_0,q_0)=(x,q) \in \mathbb {R} \times \mathbf {Q}\).
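The jump dynamics above can be simulated directly. The following minimal sketch uses constant arrival intensities and fixed spreads as simplifying assumptions (in the model the intensities are spread-dependent); it illustrates how the indicators \(I^{b}\), \(I^{a}\) keep the inventory inside \(\mathbf {Q}\):

```python
import numpy as np

# Toy simulation of the cash/inventory jump dynamics: Poisson fills at the
# bid/ask, with trading frozen at the inventory bounds by I^b, I^a.
# Constant intensities and spreads are assumptions for illustration only.
rng = np.random.default_rng(0)
T, n_steps = 1.0, 1000
dt = T / n_steps
S, sigma = 20.0, 0.01           # reference price and volatility
Q = 10                          # inventory bound: q in {-Q, ..., Q}
delta_b = delta_a = 0.05        # fixed bid/ask spreads (toy choice)
lam_b = lam_a = 5.0             # stand-in arrival intensities

X, q = 0.0, 0
for _ in range(n_steps):
    S += sigma * np.sqrt(dt) * rng.standard_normal()   # dS_t = sigma dW_t
    if rng.random() < lam_b * dt and q + 1 <= Q:       # bid hit: buy one unit
        X -= S - delta_b                               # pay S_t^b = S_t - delta^b
        q += 1
    if rng.random() < lam_a * dt and q - 1 >= -Q:      # ask hit: sell one unit
        X += S + delta_a                               # receive S_t^a = S_t + delta^a
        q -= 1

assert -Q <= q <= Q
```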
The reference market maker does not have complete information on the whole market. Denote by \((\mathbf {x}_{k_{b}},\mathbf {q}_{k_{b}})\) and \((\mathbf {x}_{k_{a}},\mathbf {q}_{k_{a}})\) the states of market makers \({\mathbf {k}}_{{\mathbf {b}}}\) and \({\mathbf {k}}_{{\mathbf {a}}}\), respectively. They are random variables from the reference market maker’s perspective, as her competitors’ states are not public information. The reference market maker can only deduce the probability distributions of \((\mathbf {x}_{k_{b}},\mathbf {q}_{k_{b}})\) and \((\mathbf {x}_{k_{a}},\mathbf {q}_{k_{a}})\) from available public information. We assume their probability distributions are known and time-invariant: \(P_{b}\) for \((\mathbf {x}_{k_{b}},\mathbf {q}_{k_{b}})\) and \(P_{a}\) for \((\mathbf {x}_{k_{a}},\mathbf {q}_{k_{a}})\). This incomplete information assumption is a reasonable approximation of the real market. We next use a heuristic example to illustrate the incomplete information setting and the distributions \(P_{a}\) and \(P_{b}\).
Example 2.2
Consider at time t three market makers quoting in the market, including the reference market maker. Their potential states, the corresponding probabilities and the bid/ask spreads are given in the following table.

| x | q | Probability | Bid spread (bps) | Ask spread (bps) |
|---|---|---|---|---|
| 0 | \(-1\) | \(\frac{1}{3}\) | 10 | 50 |
| 0 | 0 | \(\frac{1}{3}\) | 30 | 30 |
| 0 | 1 | \(\frac{1}{3}\) | 50 | 10 |
For simplicity, we assume they all have the same cash position \(x = 0\) and there are only three inventory possibilities \(q = -1, 0, 1\), with uniform probability on \(q = -1, 0, 1\). When \(q = -1\), a market maker prefers to buy rather than sell and hence quotes the lower bid spread of 10 bps and the higher ask spread of 50 bps. For \(q = 1\), it is the opposite. Denote the inventories of the reference market maker’s two competitors by \(q_{1}\) and \(q_{2}\). We can calculate \(P_{a}\) as
$$\begin{aligned} \begin{aligned}&P_{a}(0, -1) = P(q_{1} = -1) P(q_{2} = -1) = \frac{1}{9} \\&P_{a}(0, 0) = P(q_{1} = -1) P(q_{2} = 0) + P(q_{1} = 0) P(q_{2} = -1) + P(q_{1} = 0) P(q_{2} = 0) = \frac{1}{3} \\&P_{a}(0, 1) = 1 - (P_{a}(0, -1) + P_{a}(0, 0)) = \frac{5}{9}. \end{aligned} \end{aligned}$$
Take \(P_{a}(0, -1)\) as an example. It is the probability that the market maker who quotes the lowest ask spread among the two has inventory \(-1\), which implies both market makers have inventory \(q_{1} = q_{2} = -1\); otherwise, a lower ask spread of 30 bps or 10 bps would be quoted if one of them had inventory 0 or 1. Other values of \(P_{a}\) and \(P_{b}\) can be calculated similarly.
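The probabilities in Example 2.2 can be reproduced by enumerating the two competitors' joint inventory states; a minimal sketch in exact arithmetic:

```python
from fractions import Fraction
from itertools import product

# Enumerate the two competitors' i.i.d. uniform inventories in {-1, 0, 1}.
# The most competitive ask comes from the quoter with the lowest ask
# spread (50/30/10 bps for q = -1/0/1), i.e., the larger inventory.
ask_bps = {-1: 50, 0: 30, 1: 10}
p = Fraction(1, 3)
Pa = {-1: Fraction(0), 0: Fraction(0), 1: Fraction(0)}
for q1, q2 in product([-1, 0, 1], repeat=2):
    winner = q1 if ask_bps[q1] <= ask_bps[q2] else q2  # lowest-ask quoter
    Pa[winner] += p * p

assert Pa[-1] == Fraction(1, 9)   # only (q1, q2) = (-1, -1)
assert Pa[0] == Fraction(1, 3)
assert Pa[1] == Fraction(5, 9)
```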
We assume market makers take closed-loop feedback strategies that are deterministic functions of the state variables at time t; that is, there exist functions \(\delta ^{a}\) and \(\delta ^{b}\) such that the bid/ask spreads of a market maker are given by
$$\begin{aligned} \delta ^{a}_{t} = \delta ^{a}(t, S, x, q), \quad \delta ^{b}_{t} = \delta ^{b}(t, S, x, q). \end{aligned}$$
Denote by \(\mathbf {A}^{a}\) and \(\mathbf {A}^{b}\) the sets of all \(\delta ^{a}\) and \(\delta ^{b}\) that are lower-bounded, square-integrable measurable functions, by \(\varvec{\delta } := (\delta ^{b},\delta ^{a}) \in \mathbf {A}^{b} \times \mathbf {A}^{a}\) the reference market maker’s strategy, and by \(\mathbf {\varvec{\delta }}_{\Omega }:= \{\varvec{\delta }_{\textit{m}}, \textit{m} \in \Omega _{\mathrm{mm}}\}\) the collection of all market makers’ strategies, so the reference market maker’s strategy satisfies \(\varvec{\delta } \in \mathbf {\varvec{\delta }}_{\Omega }\). Using the game theory convention, we may label the reference market maker as 0 and denote by \(\mathbf {\varvec{\delta }}_{0}\) the set of strategies of all market makers in \(\Omega _{\mathrm{mm}}\) except the reference market maker, i.e., \(\mathbf {\varvec{\delta }}_{0} := \{\varvec{\delta }_{\textit{m}}, \textit{m} \ne 0, \textit{m} \in \Omega _{\mathrm{mm}}\}\).
As everyone else in \(\Omega _{\mathrm{mm}}\) can be the reference market maker’s competitor when a market order arrives, their strategies influence her expected market order arrival intensity. The reference market maker’s cash and inventory are determined by her own strategy \(\varvec{\delta }\) as well as by those in the set \(\mathbf {\varvec{\delta }}_{0}\). Starting at time \(t\in [0,T]\) with initial asset price S, cash x and inventory q, the reference market maker wants to maximize the following payoff function:
$$\begin{aligned} J(\varvec{\delta },\mathbf {\varvec{\delta }}_{0}, t, S, x, q) = \mathbb {E}_t\left[ X_{T}+ q_{T} S_{T} - l (q_{T}) - \frac{1}{2} \gamma \sigma ^{2} \int _{t}^{T} (q_{s})^{2} {\mathrm{d}}s \right] , \end{aligned}$$
(2.3)
where \(\mathbb {E}_t\) is the conditional expectation operator given \(S_t=S\), \(X_t=x\) and \(q_t=q\). The reference market maker maximizes the expected terminal wealth while penalizing the inventory held at the terminal time T and throughout the time interval [0, T], where \(\gamma \) is a positive constant representing the risk-aversion level and l is an increasing convex function on \(\mathbb {R}_+\) with \(l(0)=0\), denoting the liquidity penalty for holding inventory at T. Due to the circular dependence among market makers and their strategies, we use a game-theoretic approach to solve the problem. We next define the Nash equilibrium.
Definition 2.3
We say the Nash equilibrium exists for a game \(G_{\mathrm{mm}}\) if there exists an equilibrium control profile \( \mathbf {\varvec{\delta }}_{\Omega }^{*} = \{ \varvec{\delta }^{*}_{\textit{m}}, \textit{m} \in \Omega _{\mathrm{mm}} \}\) such that for every reference player 0 in \(\Omega _{\mathrm{mm}}\), given her strategy \(\varvec{\delta }^{*} \in \mathbf {\varvec{\delta }}_{\Omega }^{*}\) and the other players’ strategy set \(\mathbf {\varvec{\delta }}^{*}_{0}\), her payoff satisfies the following equilibrium condition:
$$\begin{aligned} J(\varvec{\delta }^{*},\mathbf {\varvec{\delta }}^{*}_{0}, t, S, x, q) = \max _{\varvec{\delta } \in \mathbf {A}^{b} \times \mathbf {A}^{a}} J(\varvec{\delta },\mathbf {\varvec{\delta }}^{*}_{0}, t, S, x, q). \end{aligned}$$
(2.4)
Moreover, the reference market maker’s equilibrium control is \(\varvec{\delta }^{*}\) and the equilibrium value function is
$$\begin{aligned} V(t, S, x, q) := J(\varvec{\delta }^{*},\mathbf {\varvec{\delta }}^{*}_{0}, t, S, x, q). \end{aligned}$$
(2.5)
3 Main Results
In this section, we prove the existence and uniqueness of the Nash equilibrium for \(G_{\mathrm{mm}}\) when price competition is in place. We first reduce the model’s dimension by an ansatz, then characterize the equilibrium value function by a system of nonlinear ODEs, prove the verification theorem and finally show the existence and uniqueness of the Nash equilibrium via an equivalent ODE system.
Writing \(X_{T}\) and \(q_{T}\) in the payoff function (2.3) in integral form with Itô’s lemma, we can simplify the equilibrium value function V as
$$\begin{aligned} V(t,S,x,q) = x + q S + \theta _{q}(t), \end{aligned}$$
(3.1)
where \(\theta _q: [0, T] \rightarrow \mathbb {R}\) is defined by
$$\begin{aligned} \theta _{q}(t) = \sup _{\varvec{\delta } \in \mathbf {A}^{b} \times \mathbf {A}^{a}} \mathbb {E}_t\left[ \int _{t}^{T} \left[ \delta ^{a}_{s} f(\delta ^{a}_{s},\delta ^{a}_{{\mathbf {k}}_{{\mathbf {a}}}, s}) + \delta ^{b}_{s} f(\delta ^{b}_{s},\delta ^{b}_{{\mathbf {k}}_{{\mathbf {b}}}, s}) - \frac{1}{2} \gamma \sigma ^{2} q_{s}^{2} \right] {\mathrm{d}} s - l (q_{T})\right] \end{aligned}$$
(3.2)
with \(\mathbb {E}_t\) being the conditional expectation operator given \(q_t=q\). Since the process \(q_t\) takes values in the finite set \(\mathbf {Q}\), it is a Markov chain with \(M = 2 Q + 1\) states. Hence, the game \(G_{\mathrm{mm}}\) is reduced to a continuous-time finite-state stochastic game. Define a function \(\theta : [0, T] \rightarrow \mathbb {R}^{M}\) as
$$\begin{aligned} \theta (t) = (\theta _{-Q}(t), \ldots , \theta _{Q}(t)). \end{aligned}$$
(3.3)
The equilibrium bid/ask spreads only depend on the state \(q_t\) at time t. As market makers are homogeneous, under equilibrium at time t, any two market makers with the same state q quote the same bid/ask spread, denoted by \(\pi ^{b}_{q}(t)\) and \(\pi ^{a}_{q}(t)\), respectively. Note that \(\pi ^{b}_{q}(t)\) exists for every \(q \in \mathbf {Q}\) except \(q = Q\), when the market maker reaches the maximum inventory and stops quoting a bid price; \(\pi ^{a}_{q}(t)\) is similarly defined. We collect the equilibrium spreads into vectors
$$\begin{aligned} \pi ^{a}(t) = (\pi ^{a}_{-Q+1}(t), \ldots , \pi ^{a}_{Q}(t)), \quad \pi ^{b}(t) = (\pi ^{b}_{-Q}(t), \ldots , \pi ^{b}_{Q-1}(t)). \end{aligned}$$
The market maker’s equilibrium control \(\varvec{\delta }^{*} = ((\delta ^{a})^{*}, (\delta ^{b})^{*})\) is given by
$$\begin{aligned} (\delta ^{a})^{*}(t, S, x, q) = \pi ^{a}_{q}(t), \quad (\delta ^{b})^{*}(t, S, x, q) = \pi ^{b}_{q}(t). \end{aligned}$$
(3.4)
When a market order arrives at time t, the reference market maker expects her most competitive market maker on the bid side to have inventory q with probability \(P^{b}_{q}\), and on the ask side with probability \(P^{a}_{q}\). As there are only a finite number of states, the most competitive market maker’s state probabilities are given by
$$\begin{aligned} P^{a} = (P^{a}_{-Q+1}, \ldots , P^{a}_{Q}), \quad P^{b} = (P^{b}_{-Q}, \ldots , P^{b}_{Q-1}). \end{aligned}$$
Market makers with inventory at the boundary values do not quote on the corresponding side of the market, so \(P^{a}_{-Q} = P^{b}_{Q} = 0\).
We next provide a characterization of the value function \(\theta \) and the equilibrium controls \(\pi ^{a}\), \(\pi ^{b}\). Applying the dynamic programming principle, we get the following Hamilton–Jacobi ODE system:
$$\begin{aligned} \begin{aligned}&\theta '_{q}(t) = \frac{1}{2} \gamma \sigma ^2 q^2 - \sup _{\delta } \eta ^{a}(\theta (t),\delta ,\pi ^{a}(t), q) I^{a}(q) - \sup _{\delta } \eta ^{b}(\theta (t),\delta ,\pi ^{b}(t), q) I^{b}(q)\\&\theta _{q}(T) = -l(q) \\&\pi ^{a}_{q}(t) \in {\mathop {{{\,\mathrm{\arg \!\sup }\,}}}\limits _{\delta }} \eta ^{a}(\theta (t),\delta ,\pi ^{a}(t), q), \quad \forall q \in \{-Q + 1, \ldots , Q \} \\&\pi ^{b}_{q}(t) \in {\mathop {{{\,\mathrm{\arg \!\sup }\,}}}\limits _{\delta }} \eta ^{b}(\theta (t),\delta ,\pi ^{b}(t), q), \quad \forall q \in \{-Q, \ldots , Q - 1\}, \end{aligned} \end{aligned}$$
(3.5)
where \(\eta ^{a}, \eta ^{b} : \mathbb {R}^{M} \times \mathbb {R} \times \mathbb {R}^{M-1} \times \mathbf {Q} \rightarrow \mathbb {R}\) are defined for vectors \(\mu = (\mu _{-Q}, \ldots , \mu _{Q}) \in \mathbb {R}^{M}\), \(\xi ^{a} = (\xi ^{a}_{-Q+1}, \ldots , \xi ^{a}_{Q})\) and \(\xi ^{b} = (\xi ^{b}_{-Q}, \ldots , \xi ^{b}_{Q-1})\) as
$$\begin{aligned} \begin{aligned}&\eta ^{a}(\mu ,\delta ,\xi ^{a},q) := \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta , \xi ^{a}_{j}) (\delta + \mu _{q-1}-\mu _{q})\\&\eta ^{b}(\mu ,\delta ,\xi ^{b},q) := \sum _{j = -Q}^{Q-1} P^{b}_{j} f(\delta , \xi ^{b}_{j}) (\delta + \mu _{q+1}-\mu _{q}). \end{aligned} \end{aligned}$$
(3.6)
Note that \(\sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta , \pi ^{a}_{j}(t))\) and \(\sum _{j = -Q}^{Q-1} P^{b}_{j} f(\delta , \pi ^{b}_{j}(t))\) are the reference market maker’s expected intensities of buying/selling market order arrivals when her spread is \(\delta \) and the other market makers take the equilibrium control. We can now characterize the Nash equilibrium.
Theorem 3.1
Assume the Nash equilibrium of the game \(G_{\mathrm{mm}}\) exists. Then, the equilibrium value function V can be decomposed as in (3.1) with the function \(\theta \), and the equilibrium control \(\varvec{\delta }^{*}\) can be written as in (3.4) with the two vectors \(\pi ^{a}(t)\) and \(\pi ^{b}(t)\). Moreover, \(\theta \), \(\pi ^{a}(t)\) and \(\pi ^{b}(t)\) satisfy the ODE system (3.5).
The equilibrium condition for \(\pi ^{a}(t)\) and \(\pi ^{b}(t)\) in (3.5) leads to the following generalized Isaacs condition, which is also defined in Cohen and Fedyashov [7] to ensure the existence of a Nash equilibrium for nonzero-sum stochastic differential games, and is a natural extension of the standard Isaacs condition for zero-sum games to the nonzero-sum setting.
Definition 3.2
We say the generalized Isaacs condition holds if there exist functions \(w^{a}, w^{b} : \mathbb {R}^M \rightarrow \mathbb {R}^{M-1}\) such that for any vector \(\mu \in \mathbb {R}^{M}\),
$$\begin{aligned} \begin{aligned}&\eta ^{a}(\mu ,w^{a}_{q}(\mu ),w^{a}(\mu ), q) = \sup _{\delta } \eta ^{a}(\mu , \delta , w^{a}(\mu ), q), \quad \forall q \in \{-Q + 1, \ldots , Q \} \\&\eta ^{b}(\mu ,w^{b}_{q}(\mu ),w^{b}(\mu ), q) = \sup _{\delta } \eta ^{b}(\mu , \delta , w^{b}(\mu ), q), \quad \forall q \in \{-Q, \ldots , Q - 1 \}, \end{aligned} \end{aligned}$$
(3.7)
where \(w^{a}_{q}, w^{b}_{q} : \mathbb {R}^{M} \rightarrow \mathbb {R}\) and \(w^a\), \(w^b\) are defined by
$$\begin{aligned} w^{a}(\mu ) := (w^{a}_{-Q+1}(\mu ), \ldots , w^{a}_{Q}(\mu )), \quad w^{b}(\mu ) := (w^{b}_{-Q}(\mu ), \ldots , w^{b}_{Q-1}(\mu )). \end{aligned}$$
If the generalized Isaacs condition is satisfied, we can substitute the functions \(w^{a}\), \(w^{b}\) into the operators \(\eta ^{a}\), \(\eta ^{b}\), and system (3.5) reduces to the following ODE system:
$$\begin{aligned} \begin{aligned}&\theta '_{q}(t) = \frac{1}{2} \gamma \sigma ^2 q^2 - \eta ^{a}(\theta (t),w^{a}_{q}(\theta (t)),w^{a}(\theta (t)), q) I^{a}(q) \\&\qquad \qquad - \eta ^{b}(\theta (t),w^{b}_{q}(\theta (t)),w^{b}(\theta (t)), q) I^{b}(q) \\&\theta _{q}(T) = -l(q). \end{aligned} \end{aligned}$$
(3.8)
We next state the verification theorem.
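To make the characterization concrete, here is a small numerical sketch (an assumed scheme, not the paper's algorithm) that marches (3.8) backward in time with an explicit Euler step, approximating the equilibrium spreads \(w^{a}(\theta )\), \(w^{b}(\theta )\) at each step by best-response iteration over a grid of candidate spreads. The intensity is the example (2.2); the small inventory bound, the uniform competitor-state distributions and the convergence of the best-response loop are all simplifying assumptions of this sketch.

```python
import numpy as np

# Backward-Euler sketch for the ODE system (3.8) with grid-search
# best-response iteration for the equilibrium spreads. Inputs below
# (Q, P^a, P^b, terminal penalty l(q) = 0.1 q^2) are illustrative.
Q = 3
gamma, sigma = 1.0, 0.01
Lam, a, k = 60.0, 2.0, 2.0
T, N = 1.0, 100
dt = T / N
qs = np.arange(-Q, Q + 1)            # inventory states -Q, ..., Q
M = len(qs)
grid = np.linspace(0.0, 3.0, 301)    # candidate spreads delta >= 0

def f(delta, x):                     # example intensity (2.2)
    return Lam * np.exp(-a * delta) / np.sqrt(1.0 + 3.0 * np.exp(k * (delta - x)))

# competitor-state distributions; boundary states cannot quote that side
Pa = np.ones(M); Pa[0] = 0.0; Pa /= Pa.sum()    # P^a_{-Q} = 0
Pb = np.ones(M); Pb[-1] = 0.0; Pb /= Pb.sum()   # P^b_{Q} = 0

def best_response(theta, pi, P, shift):
    """Grid argmax of eta^a (shift = -1) or eta^b (shift = +1) per state."""
    lam = (P[None, :] * f(grid[:, None], pi[None, :])).sum(axis=1)
    new = pi.copy()
    for i in range(M):
        j = i + shift                # index of theta_{q-1} or theta_{q+1}
        if 0 <= j < M:
            new[i] = grid[np.argmax(lam * (grid + theta[j] - theta[i]))]
    return new

def eta_eq(theta, pi, P, shift):
    """eta evaluated at the (approximate) equilibrium spreads."""
    lam = (P[None, :] * f(pi[:, None], pi[None, :])).sum(axis=1)
    val = np.zeros(M)
    for i in range(M):
        j = i + shift
        if 0 <= j < M:
            val[i] = lam[i] * (pi[i] + theta[j] - theta[i])
    return val

theta = -0.1 * qs.astype(float) ** 2      # terminal condition theta_q(T) = -l(q)
pia = np.full(M, 1.0)                     # initial guesses for the spreads
pib = np.full(M, 1.0)
for _ in range(N):                        # march from T back to 0
    for _ in range(50):                   # best-response (fixed-point) loop
        pia = best_response(theta, pia, Pa, -1)
        pib = best_response(theta, pib, Pb, +1)
    dtheta = (0.5 * gamma * sigma**2 * qs**2
              - eta_eq(theta, pia, Pa, -1) - eta_eq(theta, pib, Pb, +1))
    theta = theta - dt * dtheta           # theta(t - dt) = theta(t) - dt * theta'(t)
```

With these assumed inputs, the run reproduces the qualitative behavior reported in Sect. 4: ask spreads decrease in inventory and bid spreads increase.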
Theorem 3.3
Assume that f satisfies Assumption 2.1 and that there exist bounded strategies \(\pi ^{a}, \pi ^{b}\) and a function \(\theta \) on [0, T] satisfying system (3.5). Then, the Nash equilibrium of the game \(G_{\mathrm{mm}}\) exists; the equilibrium value function is given by (3.1) and the equilibrium control by (3.4).
From Theorems 3.1 and 3.3, the existence and uniqueness of the Nash equilibrium for the game \(G_{\mathrm{mm}}\) are equivalent to the existence and uniqueness of the equilibrium controls \(\pi ^{a}\), \(\pi ^{b}\) and the function \(\theta \) satisfying the ODE system (3.5). We now state the main result of the paper.
Theorem 3.4
Assume f satisfies Assumption 2.1. Then, there exists a unique Nash equilibrium for the game \(G_{\mathrm{mm}}\). Specifically, there exist unique locally Lipschitz continuous functions \(w^{a}, w^{b}\) that satisfy the generalized Isaacs condition in Definition 3.2, and there exists a unique classical solution \(\theta \) to the ODE system (3.8), such that the equilibrium value function is given by (3.1) and the equilibrium controls by
$$\begin{aligned} \begin{aligned}&\pi ^{a}(t) = w^{a}(\theta (t)), \; \pi ^{b}(t) = w^{b}(\theta (t)), \; t\in [0,T]. \end{aligned} \end{aligned}$$
(3.9)
4 Numerical Test
In this section, we numerically find the Nash equilibrium value function and bid/ask spreads when there is price competition, with the intensity f defined in (2.2), and compare the numerical results with those derived using a benchmark model in Guéant [10] without price competition, with the intensity \( \tilde{f}(\delta ) :=0.5 \Lambda e^{-a \delta }\) and the liquidity penalty \(l(q) := 0.1 q^2\). To make the two models comparable, we define the parameters of f and \(\tilde{f}\) such that when every market maker quotes the same bid/ask spread, the intensity of market order arrivals is the same in both cases, which gives the factor \(0.5 \Lambda \) in the definition of \(\tilde{f}\). The parameters of both models are set as follows:
| S | \(\sigma \) (daily) | \(\gamma \) | k | a | \(\Lambda \) | T (day) | N | Q |
|---|---|---|---|---|---|---|---|---|
| 20.0 | 0.01 | 1.0 | 2.0 | 2.0 | 60.0 | 1.0 | 100 | 10 |
Here, S is the initial asset value, N the number of time steps in the discretization, T the period of one day, \(\sigma \) the daily volatility, a and \(\Lambda \) the parameters of the intensity functions, \(\gamma \) the inventory penalty coefficient and Q the maximum inventory capacity. Furthermore, the probabilities \(P^{a}\) and \(P^{b}\) of the most competitive market makers’ states are assumed to be given by (see Example 2.2 for an explanation of \(P^{a}\) and \(P^{b}\))
$$\begin{aligned} \begin{aligned}&P^{a}_{-10} = P^{b}_{10} = 0 \\&P^{a}_{0} = P^{b}_{0} = 0.2 \\&P^{a}_{1} = P^{b}_{1} = 0.4 \\&P^{a}_{2} = P^{b}_{2} = 0.3 \\&P^{a}_{q} = 1/170, \quad q \ne -10, 0, 1, 2 \\&P^{b}_{q} = 1/170, \quad q \ne 10, 0, 1, 2. \end{aligned} \end{aligned}$$
Figures 1 and 2 plot the equilibrium bid/ask spreads of both models at time 0.5. We note that higher inventory leads to a lower ask spread but a higher bid spread, indicating the preference of market makers to sell rather than buy in order to remain inventory-neutral, and that the equilibrium bid/ask spreads of our model are tighter than those of the benchmark model, indicating improved market liquidity.
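Two quick consistency checks on this setup (our reading of the configuration, e.g. \(P^{a}_{-10} = 0\), is an assumption of this sketch): the distribution \(P^{a}\) sums to one, and the calibration \(f(\delta ,\delta ) = \tilde{f}(\delta )\) holds, i.e., the two models have equal intensity when all quotes coincide.

```python
import math
from fractions import Fraction

# Check that P^a sums to 1: states q = -10, ..., 10 with P^a_{-10} = 0,
# three specified interior masses, and the remaining 17 states at 1/170.
Q = 10
Pa = {q: Fraction(1, 170) for q in range(-Q, Q + 1)}
Pa[-Q] = Fraction(0)
Pa[0], Pa[1], Pa[2] = Fraction(1, 5), Fraction(2, 5), Fraction(3, 10)
assert sum(Pa.values()) == 1

# Check f(delta, delta) = 0.5 * Lam * e^{-a delta} = f_tilde(delta):
# with equal quotes, sqrt(1 + 3 e^0) = 2 halves the numerator.
Lam, a, k = 60.0, 2.0, 2.0
for delta in [0.0, 0.5, 1.0, 2.0]:
    f_val = Lam * math.exp(-a * delta) / math.sqrt(1.0 + 3.0 * math.exp(k * 0.0))
    assert abs(f_val - 0.5 * Lam * math.exp(-a * delta)) < 1e-12
```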
Figure 3 plots the equilibrium ask spreads for different inventory levels on [0, T]. Market makers with positive inventory are more willing to sell and clear their positions due to the liquidity penalty at the terminal time T, and this willingness increases as time nears T, as the equilibrium ask spread decreases when t tends to T. For market makers with negative inventory, it is the opposite. This is consistent with the empirical fact that trading volume increases at the end of the day.
Figure 4 plots the expected intensity functions in terms of the bid/ask spread at time 0.5, given by \(G_b(\delta ) = \tilde{f}(\delta )\) for the benchmark model and \(G(\delta ) = \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta , \pi ^{a}_{j}(t))\) for our model, respectively. The intensity in our model is derived endogenously from the equilibrium, while the one assumed by the benchmark model comes from Avellaneda and Stoikov [2], in which the distribution of market order sizes and statistics of the market impact are used. When price competition is in place, the market order arrival intensity decays faster, indicating that a market maker who faces price competition but assumes there is none would tend to overestimate the market order arrival intensity and quote higher bid/ask spreads.
Figures 5 and 6 plot the equilibrium value function \(\theta \) near the starting time 0 and the terminal time T, respectively. We note that \(\theta \) with price competition takes a lower value than the one without at time 0.1 but performs better at time 0.9, especially when there are still large inventories to be liquidated, as market makers in the benchmark model overestimate the arrival intensity, which results in higher spreads and worse performance.
In summary, when price competition between market makers is in place, market makers tend to quote tighter bid/ask spreads, so the market enjoys better liquidity and lower transaction costs. However, the profit of each market maker is reduced: the value function is lower when market makers compete.
5 Proofs of Theorems 3.3 and 3.4
5.1 Proof of Theorem 3.3
Proof
To verify that \((\mathbf {\varvec{\delta }}_{\Omega })^{*}\) is the equilibrium control profile and V is the equilibrium value function, it is sufficient to check that they satisfy the equilibrium condition in (2.4). For any market maker in \(\Omega _{\mathrm{mm}}\), given the other market makers' strategies in \((\mathbf {\varvec{\delta }}_{\Omega })^{*}\) and any admissible strategy \(\varvec{\delta }\), we should prove the inequality below.

Assume the reference market maker takes the arbitrary strategy \(\varvec{\delta }\), while every other market maker sets his/her bid/ask spread by \((\delta ^{a})^{*}(t, S_t, X_t, q_{t}) = \pi ^{a}_{q_{t}}(t)\) and \((\delta ^{b})^{*}(t, S_t, X_t, q_{t}) = \pi ^{b}_{q_{t}}(t)\). Denote the reference market maker's cash position at time t by \(X^{*, \varvec{\delta }}_{t}\) and the inventory by \(q^{*, \varvec{\delta }}_{t}\). Then, for any time \(t \in [0, T]\), by the ansatz (3.1) and Itô's lemma applied to the function \(\theta \), we obtain (5.1).

As \(q^{*, \varvec{\delta }}_{u}\) takes values in the finite set \(\mathbf {Q}\) and the solution of the ODE exists on the compact set [0, T], both \(\theta _{q}(u)\) and \(\theta '_{q}(u)\) are uniformly bounded on [0, T] for all \(q \in \mathbf {Q}\), and the square-integrability bounds below hold. Moreover, from the assumption that \(f(\delta , x) \le \lambda (\delta )\) for all x, every admissible control satisfies the integrability conditions that follow (see [10, page 16]).

Taking expectations on both sides of (5.1), we obtain the identity below, where \(\eta ^{a}\) and \(\eta ^{b}\) are defined in (3.6). Hence, we arrive at (5.2). As \(\theta \) satisfies the ODE system (3.5) for every \(u \in [0, T]\), we substitute it into the corresponding part of (5.2) and obtain the following.

On the other hand, if the reference market maker also takes the equilibrium control, her cash position and inventory are denoted by \(X^{*}_{t}\) and \(q^{*}_{t}\), respectively, and we have the following. Substituting the equilibrium control defined in (3.4) into (5.2) concludes the proof as follows. \(\square \)
$$\begin{aligned} \begin{aligned}&J(\varvec{\delta }, (\mathbf {\varvec{\delta }}_{0})^{*}, t, S, x, q) \le J((\varvec{\delta })^{*}, (\mathbf {\varvec{\delta }}_{0})^{*}, t, S, x, q) = V(t, S, x, q). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&V(T, S_{T}, X^{*, \varvec{\delta }}_{T}, q^{*, \varvec{\delta }}_{T}) = X^{*, \varvec{\delta }}_{T} + q^{*, \varvec{\delta }}_{T} S_{T} + \theta _{q^{*, \varvec{\delta }}_{T}}(T) = x + q S + \theta _q(t) \\&\quad + \int _{t}^{T} \delta ^{b}_{u} I^{b}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}N_{u}^{b} + \int _{t}^{T} \delta ^{a}_{u} I^{a}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}N_{u}^{a} + \int _{t}^{T} q^{*, \varvec{\delta }}_{u} {\mathrm{d}}S_u + \int _{t}^{T} \theta '_{q^{*, \varvec{\delta }}_{u}}(u){\mathrm{d}}u \\&\quad + \int _{t}^{T} (\theta _{q^{*, \varvec{\delta }}_{u} + 1}(u) - \theta _{q^{*, \varvec{\delta }}_{u}}(u) ) I^{b}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}N_{u}^{b} + \int _{t}^{T} (\theta _{q^{*, \varvec{\delta }}_{u} - 1}(u) - \theta _{q^{*, \varvec{\delta }}_{u}}(u) ) I^{a}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}N_{u}^{a}. \end{aligned} \end{aligned}$$
(5.1)
$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ \int _{t}^{T} (q^{*, \varvec{\delta }}_{u})^2 {\mathrm{d}}u\right]< + \infty , \quad \mathbb {E}\left[ \int _{t}^{T} (\theta '_{q^{*, \varvec{\delta }}_{u}}(u))^2 {\mathrm{d}}u\right] < + \infty . \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ \sum _{j = -Q+1}^{Q} P^{a}_{j} \int _{t}^{T} f(\delta ^{a}_{u}, \pi ^{a}_{j}(u)) I^{a}(q^{*, \varvec{\delta }}_{u}) \left( \delta ^{a}_{u} + \theta _{q^{*, \varvec{\delta }}_{u} - 1}(u) - \theta _{q^{*, \varvec{\delta }}_{u}}(u) \right) {\mathrm{d}}u\right]< + \infty \\&\mathbb {E}\left[ \sum _{j = -Q}^{Q-1} P^{b}_{j} \int _{t}^{T} f(\delta ^{b}_{u}, \pi ^{b}_{j}(u)) I^{b}(q^{*, \varvec{\delta }}_{u}) \left( \delta ^{b}_{u} + \theta _{q^{*, \varvec{\delta }}_{u} + 1}(u) - \theta _{q^{*, \varvec{\delta }}_{u}}(u) \right) {\mathrm{d}}u\right] < + \infty . \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ V(T, S_{T}, X^{*, \varvec{\delta }}_{T}, q^{*, \varvec{\delta }}_{T})\right] = V(t, S, x, q) + \mathbb {E}\left[ \int _{t}^{T} \theta '_{q^{*, \varvec{\delta }}_{u}}(u) {\mathrm{d}}u\right] \\&+ \mathbb {E}\left[ \int _{t}^{T} \eta ^{a}(\theta (u),\delta ^{a}_{u},\pi ^{a}(u), q^{*, \varvec{\delta }}_{u}) I^{a}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}u\right] \\&+ \mathbb {E}\left[ \int _{t}^{T} \eta ^{b}(\theta (u),\delta ^{b}_{u},\pi ^{b}(u), q^{*, \varvec{\delta }}_{u}) I^{b}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}u\right] . \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\mathbb {E}\left[ V(T, S_{T}, X^{*, \varvec{\delta }}_{T}, q^{*, \varvec{\delta }}_{T}) \right] \le V(t, S, x, q) + \mathbb {E}\left[ \int _{t}^{T} \theta '_{q^{*, \varvec{\delta }}_{u}}(u) {\mathrm{d}}u\right] \\&+ \mathbb {E}\left[ \int _{t}^{T} \sup _{\delta _{u}^{a}} \eta ^{a}(\theta (u),\delta ^{a}_{u},\pi ^{a}(u), q^{*, \varvec{\delta }}_{u}) I^{a}(q^{*, \varvec{\delta }}_{u}) {\mathrm{d}}u\right] + \mathbb {E}\left[ \int _{t}^{T} \sup _{\delta _{u}^{b}} \eta ^{b}(\theta (u),\delta ^{b}_{u},\pi ^{b}(u), q^{*, \varvec{\delta }}_{u}) I^{b}(q^{*, \varvec{\delta }}_{u}){\mathrm{d}}u\right] . \end{aligned} \end{aligned}$$
(5.2)
$$\begin{aligned} \begin{aligned}&J(\varvec{\delta }, (\mathbf {\varvec{\delta }}_{0})^{*}, t, S, x, q) = \mathbb {E}\left[ V(T, S_{T}, X^{*, \varvec{\delta }}_{T}, q^{*, \varvec{\delta }}_{T}) - \frac{1}{2} \gamma \sigma ^2 \int _{t}^{T} (q^{*, \varvec{\delta }}_{u})^2 {\mathrm{d}}u\right] \le V(t, S, x, q). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\eta ^{a}(\theta (t),\pi ^{a}_{q}(t),\pi ^{a}(t), q) = \sup _{\delta } \eta ^{a}(\theta (t), \delta , \pi ^{a}(t), q), \quad \\&\eta ^{b}(\theta (t),\pi ^{b}_{q}(t),\pi ^{b}(t), q) = \sup _{\delta } \eta ^{b}(\theta (t), \delta , \pi ^{b}(t), q). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned} J((\varvec{\delta })^{*}, (\mathbf {\varvec{\delta }}_{0})^{*}, t, S, x, q)&= \mathbb {E}\left[ V(T, S_{T}, X^{*}_{T}, q^{*}_{T}) - \frac{1}{2} \gamma \sigma ^2 \int _{t}^{T} (q^{*}_{u})^2 {\mathrm{d}}u\right] \\&= V(t, S, x, q) \ge J(\varvec{\delta }, (\mathbf {\varvec{\delta }}_{0})^{*}, t, S, x, q). \end{aligned} \end{aligned}$$
5.2 Proof of Theorem 3.4
The proof of Theorem 3.4 consists of three steps:

1. There exist functions \(w^{a}\), \(w^{b}\) such that for any vector \(\mu \in \mathbb {R}^{M}\), \(w^{a}(\mu )\) and \(w^{b}(\mu )\) satisfy Equation (3.7).

2. \(w^{a}\) and \(w^{b}\) are unique and locally Lipschitz continuous, which guarantees that the right-hand side of the ODE system (3.8) is also locally Lipschitz continuous.

3. There exists a unique classical solution \(\theta \) to the ODE system (3.8) on [0, T].

The key to proving Steps 1 and 2 is to characterize the vectors \(w^{a}(\mu )\) and \(w^{b}(\mu )\) by the first-order condition of the Hamiltonian: they are the solutions to an equation system, and Steps 1 and 2 follow by studying the zero points of that system. The key to proving Step 3 is an upper bound estimate for \(\theta \), obtained by showing that \(\theta \) also solves another ODE system that admits a comparison principle and hence an upper bound on its solution. Without confusion of notation, we write \(w^{a}(\mu )\) and \(w^{b}(\mu )\) as

$$\begin{aligned} \begin{aligned}&w^{a}(\mu ) = w^{a} = (w_{-Q+1}^{a}, \ldots , w_{Q}^{a}), \quad w^{b}(\mu ) = w^{b} = (w_{-Q}^{b}, \ldots , w_{Q-1}^{b}). \end{aligned} \end{aligned}$$
5.2.1 Proof of Step 1
We first show that \(w^{a}\) and \(w^{b}\) satisfy the equilibrium condition of the Hamiltonian system. We provide some preliminary results on the existence and uniqueness of the maximum point of the Hamiltonian \(G_{q}^{a}(\delta ) := \eta ^{a}(\mu ,\delta , w, q)\) for any given vectors \(\mu \in \mathbb {R}^{M}\) and \(w \in \mathbb {R}^{M-1}\). We can define \(G_{q}^{b}(\delta )\) and prove the same result similarly.
Lemma 5.1
Assume the intensity function f satisfies all the assumptions in Theorem 2.1. Then, given any vectors \(w = (w_{-Q+1}, \ldots , w_{Q}) \in \mathbb {R}^{M-1}\) and \(\mu \), the maximum point exists and is unique for the function \(G_{q}^{a}\) when \(q = -Q + 1, \ldots , Q\). Furthermore, the maximum point of \(G_{q}^{a}(\delta )\) satisfies the first-order condition:
$$\begin{aligned} \begin{aligned}&\frac{d G_{q}^{a}(\delta )}{d \delta } = 0. \end{aligned} \end{aligned}$$
Proof
Given any vectors \(\mu \) and w, the expected intensity function d is defined below. From Assumption 2.1, we know that (5.3) holds for any \(\delta \), x and y. A simple calculation then shows the inequality that follows, which implies that \(\delta + \mu _{q-1} - \mu _{q} + d(\delta )/d'(\delta )\) is a strictly increasing function of \(\delta \). Combining this with \(d'(\delta )<0\), we conclude that there exists a unique \(\delta ^*\) such that \(\frac{d G_{q}^{a}(\delta ^*)}{d \delta } = 0\), and \(G_{q}^{a}(\delta )\) is strictly increasing for \(\delta <\delta ^*\) and strictly decreasing for \(\delta >\delta ^*\); that is, \(\delta ^*\) is the unique global maximum point of \(G_{q}^{a}\). \(\square \)
$$\begin{aligned} \begin{aligned}&d(\delta ) := \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta ,w_{j}). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&f(\delta , x) f''_{1 1}(\delta , y) + f(\delta , y) f''_{1 1}(\delta , x) < 4 f'_{1}(\delta , x) f'_{1}(\delta , y). \end{aligned} \end{aligned}$$
(5.3)
$$\begin{aligned} \begin{aligned}&d(\delta ) d''(\delta ) < 2 (d'(\delta ))^2, \end{aligned} \end{aligned}$$
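For a concrete instance of Lemma 5.1, consider the hypothetical exponential case \(d(\delta ) = A e^{-k\delta }\), for which \(d/d' = -1/k\) is constant, so the first-order condition gives the unique maximizer \(\delta ^* = 1/k - (\mu _{q-1} - \mu _{q})\). The sketch below (with arbitrary parameter values) checks numerically that this critical point is indeed a maximum:

```python
import math

A, k = 1.4, 1.5          # hypothetical parameters of d(delta) = A*exp(-k*delta)
dmu = 0.1                # hypothetical value of mu_{q-1} - mu_q

def d(x):  return A * math.exp(-k * x)
def dp(x): return -k * A * math.exp(-k * x)   # d'(x) < 0

def Ga(x):
    # Hamiltonian integrand G_q^a(delta) = d(delta) * (delta + dmu)
    return d(x) * (x + dmu)

# first-order condition: delta + dmu + d/d' = 0  =>  delta* = 1/k - dmu
star = 1 / k - dmu
# the critical point dominates its neighbours, consistent with a global max
assert Ga(star) > Ga(star - 0.05) and Ga(star) > Ga(star + 0.05)
print(star)
```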
Step 1 is equivalent to the following theorem, which proves that the generalized Isaacs' condition in Definition 3.2 holds for any vector \(\mu \in \mathbb {R}^M\). We focus on \(w^{a}\) only, as the proof for \(w^{b}\) is similar.
Theorem 5.2
Assume the intensity function f satisfies Assumption 2.1. Then, for any fixed vector \(\mu = (\mu _{-Q}, \ldots , \mu _{Q}) \in \mathbb {R}^M\), there exists a vector \(w^{a} = (w_{-Q+1}^{a}, \ldots , w_{Q}^{a})\) such that for \(q = -Q + 1, \ldots , Q\),
$$\begin{aligned} \begin{aligned}&w_{q}^{a} = {\mathop {{{\,\mathrm{\arg \!\max }\,}}}\limits _{\delta }} \{\eta ^{a}(\mu , \delta , w^{a}, q) \}. \end{aligned} \end{aligned}$$
(5.4)
Define a mapping \(T: \mathbb {R}^{M-1} \rightarrow \mathbb {R}^{M-1}\) as in (5.5) below. Then (5.4) is equivalent to \(w^{a} = T(w^{a})\); namely, \(w^{a}\) is a fixed point of the mapping T. We need the following Schauder fixed-point theorem to prove the existence of \(w^a\).
$$\begin{aligned} \begin{aligned}&T_{q}(w) := {\mathop {{{\,\mathrm{\arg \!\max }\,}}}\limits _{\delta \in \mathbb {R}}} \{ \eta ^{a}(\mu , \delta , w, q) \}, \quad \forall q \in \{-Q + 1, \ldots , Q \} \\&T(w) := (T_{-Q+1}(w), \ldots , T_{Q}(w)), \end{aligned} \end{aligned}$$
(5.5)
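The fixed point \(w^{a} = T(w^{a})\) can be approximated by iterating the best-response map on a grid. The sketch below is only an illustration of the fixed-point machinery, not the paper's model: the intensity f (decreasing in the own spread, increasing in the competitor's spread), the weights and the values of \(\mu _{q-1} - \mu _{q}\) are all hypothetical.

```python
import math

# Toy fixed-point iteration for a best-response map like T in (5.5),
# with two inventory states and a hypothetical intensity function f.
P   = [0.5, 0.5]          # competitor-spread weights (hypothetical)
dmu = [0.05, -0.05]       # hypothetical values of mu_{q-1} - mu_q

def f(d, x):
    # decays in the own spread d, increases in the competitor spread x
    return math.exp(-2.0 * d * (1.0 - 0.3 * math.tanh(x)))

def eta(d, w, q):
    # expected profit rate, eta^a(mu, d, w, q), for the toy model
    return sum(p * f(d, wj) for p, wj in zip(P, w)) * (d + dmu[q])

def T(w):
    # best response: argmax over a fine grid of candidate spreads in [0, 3]
    grid = [i * 1e-3 for i in range(3001)]
    return [max(grid, key=lambda d: eta(d, w, q)) for q in range(len(w))]

w = [0.5, 0.5]
for _ in range(50):       # Picard iteration toward the fixed point
    w = T(w)
print(w)
```

After the iteration, w is (up to the grid resolution) a fixed point of T, and it stays inside a compact set, as Lemma 5.4 predicts.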
Theorem 5.3
(Schauder) If K is a nonempty convex closed subset of a Hausdorff topological vector space V and T is a continuous mapping of K into itself such that T(K) is contained in a compact subset of K, then T has a fixed point.
To apply Theorem 5.3, we need to show the existence of K and the continuity of T. The next lemma confirms the first requirement.
Lemma 5.4
Given any vector \(\mu = (\mu _{-Q}, \ldots , \mu _{Q}) \in \mathbb {R}^M\) and the mapping T defined in (5.5), there exists a nonempty convex compact set \(K \subset \mathbb {R}^{M-1}\) such that \(T(K) \subset K\).
Proof
Firstly, for any vector \(w \in \mathbb {R}^{M-1}\), define \(\mathbf {y} = (y_{-Q+1}, \ldots , y_{Q}) = T(w)\). There exists a uniform \(\delta _{\mathrm{min}} \in \mathbb {R}\) such that (5.6) below holds for every q. Defining \(G^{a}_{q}(\delta ) = \eta ^{a}(\mu ,\delta , w, q)\) for \(q = -Q+1, \ldots , Q\), we know the following. Denote the uniform upper and lower bounds of \(\mu _{q-1} - \mu _{q}\) over all \(q \in \mathbf {Q}\) by \(M_d\) and \(m_d\), respectively. We then have the bound below; otherwise \(G^{a}_{q}(y_{q}) \le 0\), contradicting the facts that \(G^{a}_{q}(\delta ) > 0\) for \(\delta > -m_d\) and \(y_{q} = {\mathop {{{\,\mathrm{\arg \!\max }\,}}}\limits _{\delta }} \{ G^{a}_{q}(\delta ) \}\). Hence, we conclude the following.

Secondly, for any vector \(w \in [\delta _{\mathrm{min}}, +\infty )^{M-1}\), define \(\mathbf {y} = (y_{-Q+1}, \ldots , y_{Q}) = T(w)\). There exists a uniform \(\delta _{\mathrm{max}} \in \mathbb {R}\) such that (5.7) below holds for every q. Define \(\delta _0 := -m_d + 1\). By the definition of \(m_d\), for every q we have the inequality below; hence, \(G^{a}_{q}(\delta _0) > 0\) for every \(q \in \mathbf {Q}\). Moreover, as f is increasing in its second argument, for any vector \(w \in [\delta _{\mathrm{min}}, +\infty )^{M-1}\) we have (5.8). By the assumption \(\lim _{\delta \rightarrow +\infty } \lambda (\delta ) \delta = 0\), there exists \(\delta _{\mathrm{max}} > \delta _0\) such that (5.9) holds. As \(f(\delta _{\mathrm{max}}, \cdot )\) is bounded by \(\lambda (\delta _{\mathrm{max}})\) uniformly, (5.8) and (5.9) imply the bound below for any vector \(w \in [\delta _{\mathrm{min}}, +\infty )^{M-1}\). Since \(\delta _{\mathrm{max}} > \delta _0\) and \(G^{a}_{q}(\delta _{\mathrm{max}}) < G^{a}_{q}(\delta _0)\), the maximum point \(\delta ^*\) of \(G^{a}_{q}\) cannot lie in the interval \((\delta _{\mathrm{max}}, \infty )\), as this would contradict \(G^{a}_{q}(\delta )\) being strictly increasing for \(\delta <\delta ^*\). 
Hence, for any \(q \in \mathbf {Q}\), the inclusion below holds, which shows \(T(K) \subset K\), where \(K = [\delta _{\mathrm{min}}, \delta _{\mathrm{max}}]^{M-1}\). \(\square \)
$$\begin{aligned} \begin{aligned}&y_{q} \ge \delta _{\mathrm{min}}. \end{aligned} \end{aligned}$$
(5.6)
$$\begin{aligned} \begin{aligned}&y_{q} = {\mathop {{{\,\mathrm{\arg \!\max }\,}}}\limits _{\delta }} \{ G^{a}_{q}(\delta ) \}. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&y_{q} > -M_d. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&y_{q} \ge \delta _{\mathrm{min}} := -M_d. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&y_{q} \le \delta _{\mathrm{max}}. \end{aligned} \end{aligned}$$
(5.7)
$$\begin{aligned} \begin{aligned}&\delta _0 + \mu _{q-1} - \mu _{q} \ge 1 > 0. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&G^{a}_{q}(\delta _0) \ge \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta _0,\delta _{\mathrm{min}}). \end{aligned} \end{aligned}$$
(5.8)
$$\begin{aligned} \begin{aligned}&\max _{q} \left\{ \sum _{j = -Q+1}^{Q} P^{a}_{j} \lambda (\delta _{\mathrm{max}}) (\delta _{\mathrm{max}} + \mu _{q-1} - \mu _{q}) \right\} < \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta _0,\delta _{\mathrm{min}}). \end{aligned} \end{aligned}$$
(5.9)
$$\begin{aligned} \begin{aligned}&\max _{q} G^{a}_{q}(\delta _{\mathrm{max}}) <G^{a}_{q}(\delta _0). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&y_q \in [\delta _{\mathrm{min}}, \delta _{\mathrm{max}}], \end{aligned} \end{aligned}$$
To prove T is a continuous mapping, we need the following Berge’s maximum theorem.
Theorem 5.5
(Berge) Let X and \(\Theta \) be metric spaces, \(f:X\times \Theta \rightarrow \mathbb {R}\) be a function jointly continuous in its two arguments, and \(C:\Theta \rightarrow X\) be a compact-valued correspondence. For x in X and \(\theta \) in \(\Theta \), let \(f^{*}\) and \(x^{*}\) be defined as below. If C is continuous at some \(\theta \), then \(f^{*}\) is continuous at \(\theta \) and \(x^*\) is nonempty, compact-valued and upper hemicontinuous at \(\theta \); that is, if \(\theta _{n}\rightarrow \theta \) and \(b_n\rightarrow b\) as \(n\rightarrow \infty \) with \(b_n\in x^*(\theta _{n})\), then \(b\in x^*(\theta )\).
$$\begin{aligned} f^{*}(\theta )=\max \{f(x,\theta )\mid x\in C(\theta )\}, \end{aligned}$$
$$\begin{aligned} x^{*}(\theta )={\mathrm {arg}} \max \{f(x,\theta )\mid x\in C(\theta )\} =\{x\in C(\theta )\mid f(x,\theta )=f^{*}(\theta )\}. \end{aligned}$$
The next lemma shows that any single-valued, bounded, upper hemicontinuous mapping is a continuous function.
Lemma 5.6
Let A and B be two Euclidean spaces and \(\Gamma : A \rightarrow B\) be a single-valued, bounded and upper hemicontinuous mapping. Then \(\Gamma \) is a continuous function.
Proof
For any sequence \(a_n\rightarrow a\) and \(b_n= \Gamma (a_n)\) (\(\Gamma \) is a singlevalued mapping), if \(b_n\) tends to a limit b, then we must have \(b=\Gamma (a)\) by the hemicontinuity of \(\Gamma \) and we are done. Assume the sequence \(b_n\) did not have a limit. Since \(b_n\) is a bounded sequence, there exist at least two subsequences \(b_{n_k}\) and \(b_{n_k'}\) that converge to two different values b and \(b'\). Since \(a_n\rightarrow a\), we must have both \(a_{n_k}\) and \(a_{n_k'}\) tend to a, the hemicontinuity of \(\Gamma \) would imply \(b=\Gamma (a)\) and \(b'=\Gamma (a)\), a contradiction to the assumption that \(b\ne b'\). Therefore, \(\Gamma \) is continuous. \(\square \)
Lemma 5.7

Given any vector \(\mu \), the mapping T defined in (5.5) is a continuous mapping from K to K.
Proof
We prove that, given the vector \(\mu \), each element \(T_q(w)\) of the mapping T is continuous with respect to each \(w_{q}\). As the maximum point of \(\eta ^{a}(\mu , \cdot , w, q)\) exists and is unique for every \(q \in \{-Q + 1, \ldots , Q\}\), \(T_q\) is a well-defined single-valued mapping. Moreover, \(\eta ^{a}(\mu , \delta , w, q)\) is jointly continuous with respect to \(\delta \) and w. By Berge's maximum theorem, \(T_q\) is an upper hemicontinuous function of w on the bounded set K. Therefore, by Lemma 5.6, \(T_q\) is also continuous with respect to every \(w_{q}\) for \(q \in \{-Q + 1, \ldots , Q\}\). We conclude that, given the vector \(\mu \), T is a continuous mapping from K to K. \(\square \)
Finally, we can prove Theorem 5.2, which concludes the proof of step 1.
Proof of Theorem 5.2
As the intensity function f satisfies Assumption 2.1, by Lemma 5.1 the maximum point of \(G^{a}_{q}(\delta )\) exists and is unique for every q. Fix a vector \(\mu \in \mathbb {R}^M\) and define the mapping \(T: \mathbb {R}^{M-1} \rightarrow \mathbb {R}^{M-1}\) as in (5.5), so that \(w^{a}\) is a fixed point of T. To show the existence of a fixed point of this mapping, we apply the Schauder fixed-point theorem to T in the following steps.
Firstly, by Lemma 5.4, there exists a bounded closed set \(K \subset \mathbb {R}^{M1}\) which is equivalently a compact set, such that \(T(K) \subset K\). From the proof of Lemma 5.4, the compact set K is convex.
Secondly, from Lemmas 5.1 and 5.7, T is a single-valued continuous mapping from K to K. By Theorem 5.3, T has a fixed point for every given \(\mu \), denoted by \(w^{a}\), which satisfies the relation below. This concludes the proof of Step 1. \(\square \)
$$\begin{aligned} \begin{aligned}&w^{a} = T(w^{a})\in K. \end{aligned} \end{aligned}$$
(5.10)
5.2.2 Proof of Step 2
We first state a global implicit function theorem in [9, Theorem 4], which is used in the proof.
Theorem 5.8
Assume \(F : \mathbb {R}^{n} \times \mathbb {R}^{m} \rightarrow \mathbb {R}^{n}\) is a locally Lipschitz mapping such that:

1. For every \(y \in \mathbb {R}^{m}\), the function \(\phi _{y} : \mathbb {R}^{n} \rightarrow \mathbb {R}\), defined by \( \phi _{y}(x) = \frac{1}{2} \Vert F(x,y)\Vert ^2\), is coercive, i.e., \(\lim _{\Vert x \Vert \rightarrow \infty } \phi _{y}(x) = +\infty \).

2. The set \(\partial _{x}F(x, y)\) is of maximal rank for all \((x, y) \in \mathbb {R}^{n} \times \mathbb {R}^{m}\).

Then, there exists a unique locally Lipschitz function \(f: \mathbb {R}^{m} \rightarrow \mathbb {R}^{n}\) such that the equations \(F(x, y) = 0\) and \(x = f(y)\) are equivalent in the set \(\mathbb {R}^{n} \times \mathbb {R}^{m}\).
With the help of Theorem 5.8, we can show the local Lipschitz continuity of functions \(w^{a}\) and \(w^{b}\).
Theorem 5.9
Assume the intensity function f satisfies Assumption 2.1. Then, there are single-valued and locally Lipschitz continuous functions \(w^{a}, w^{b} : \mathbb {R}^{M} \rightarrow \mathbb {R}^{M-1}\) that satisfy the generalized Isaacs' condition (3.7) in Definition 3.2 for any given vector \(\mu \in \mathbb {R}^{M}\).
Proof
We provide the proof for \(w^{a}\) only. The proof for \(w^{b}\) is similar.
To begin with, from Assumption 2.1, (5.3) holds for all \(\delta \), x and y, and from Lemma 5.1, the maximum point of \(G_{q}^{a}(\delta ) = \eta ^{a}(\mu ,\delta , w^{a}, q)\) is unique. From Remark 5.1, given any vector \(\mu \), a \(w^{a}\) satisfying the generalized Isaacs' condition in Definition 3.2 is also a solution to the first-order condition below for every q.

For any vectors \(\mu \) and \(\delta = (\delta _{-Q+1}, \ldots , \delta _{Q})\), define the function \(F_q : \mathbb {R}^{M-1} \times \mathbb {R}^{M} \rightarrow \mathbb {R}\) for every \(q \in \{-Q+1, \ldots , Q\}\) as below, and define the mapping \(F: \mathbb {R}^{M-1} \times \mathbb {R}^{M} \rightarrow \mathbb {R}^{M-1}\) accordingly. F is continuously differentiable, and \(w^{a}\) is determined implicitly by \( F(w^{a}, \mu ) = 0\). From the proof of Step 1, there exists a function \(w^{a}: \mathbb {R}^{M} \rightarrow \mathbb {R}^{M-1}\) such that \(F(w^{a}(\mu ),\mu ) = 0\) for any vector \(\mu \). If we can verify that the hypotheses of Theorem 5.8 hold in this case, the function \(w^{a}\) satisfying \(F(w^{a}(\mu ),\mu ) = 0\) must be unique and locally Lipschitz continuous, which concludes the proof. Hence, the next step is to verify the hypotheses of Theorem 5.8.
$$\begin{aligned} \begin{aligned}&\sum _{j = -Q+1}^{Q} P^{a}_{j} [ f(w_{q}^{a},w_{j}^{a}) + f'_{1}(w_{q}^{a},w_{j}^{a}) (w_{q}^{a} + \mu _{q-1} - \mu _{q}) ] = 0. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&F_q(\delta , \mu ) := - \frac{\sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta _{q},\delta _{j})}{\sum _{j = -Q+1}^{Q} P^{a}_{j} f'_{1}(\delta _{q},\delta _{j}) } - \delta _{q} - (\mu _{q-1} - \mu _{q}). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&F(\delta , \mu ) := (F_{-Q+1}(\delta , \mu ), \ldots , F_{Q}(\delta , \mu )). \end{aligned} \end{aligned}$$
Firstly, we prove that the Jacobian matrix of F is nonsingular. Denote the Jacobian matrix of F with respect to \(\delta \) by \(\partial _{\delta } F\), a \(2Q\times 2Q\) matrix whose component at (i, m) is \(\frac{\partial F_i}{\partial \delta _{m}}(\delta , \mu )\) for \(i,m=-Q+1,\ldots , Q\). For \(i \in \{ -Q+1, \ldots , Q \}\), denote \(D_{i}\), \(A_{i}\) and \(I_{i m}\) as below. For \(m = i\), the diagonal element of \(\partial _{\delta } F\) is given next. From Assumption 2.1, we have (5.3), and a simple calculation shows \(-1 + A_{i} < 0\); hence, the lower bound on \(\left| \frac{\partial F_i}{\partial \delta _{i}}(\delta , \mu )\right| \) stated below holds. For \(i \ne m\), the off-diagonal element of the Jacobian matrix \(\partial _{\delta } F\) is given by \(I_{i m}\). To compare the diagonal element with the sum of the off-diagonal elements, we have (5.11). From the definitions of \(A_{i}\) and \(I_{i m}\), we obtain (5.12). By the assumption on f in (2.1), we have (5.13). Therefore, as \(D_i > 0\), from (5.11), (5.12) and (5.13) we conclude the strict inequality below: the Jacobian matrix \(\partial _{\delta } F(\delta , \mu )\) is strictly diagonally dominant and is therefore nonsingular.
$$\begin{aligned} \begin{aligned}&D_{i} := \left( \sum _{m = -Q+1}^{Q} P^{a}_{m} f'_{1}(\delta _{i},\delta _{m}) \right) ^2 > 0 \\&A_{i} := \frac{1}{D_{i}} \sum _{m = -Q+1}^{Q} \sum _{j = -Q+1}^{Q} P^{a}_{m} P^{a}_{j} [ f''_{1 1}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j}) - f'_{1}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j})] \\&I_{i m} := \frac{1}{D_{i}} P^{a}_{m} \sum _{j = -Q+1}^{Q} P^{a}_{j} [ f(\delta _{i},\delta _{j}) f''_{1 2}(\delta _{i},\delta _{m}) - f'_{1}(\delta _{i},\delta _{j}) f'_{2}(\delta _{i},\delta _{m}) ]. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\frac{\partial F_i}{\partial \delta _{i}}(\delta , \mu ) = -1 + A_{i} + I_{i i}. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&-1 + A_{i} = \frac{1}{D_{i}} \sum _{m = -Q+1}^{Q} \sum _{j = -Q+1}^{Q} P^{a}_{m} P^{a}_{j} [ f''_{1 1}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j}) - 2 f'_{1}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j})] < 0. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\left| \frac{\partial F_i}{\partial \delta _{i}}(\delta , \mu )\right| \ge 1 - A_{i} - \left| I_{i i}\right| . \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\frac{\partial F_i}{\partial \delta _{m}}(\delta , \mu ) = I_{i m}. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\left| \frac{\partial F_i}{\partial \delta _{i}}(\delta , \mu )\right| - \sum _{m \ne i}\left| \frac{\partial F_i}{\partial \delta _{m}}(\delta , \mu )\right| \ge 1 - A_{i} - \sum _{m = -Q+1}^{Q} \left| I_{i m}\right| . \end{aligned} \end{aligned}$$
(5.11)
$$\begin{aligned} \begin{aligned}&1 - A_{i} - \sum _{m = -Q+1}^{Q} \left| I_{i m}\right| \\&= \frac{1}{D_{i}} \sum _{m = -Q+1}^{Q} P^{a}_{m} \left\{ \sum _{j = -Q+1}^{Q} P^{a}_{j} [ 2 f'_{1}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j}) - f''_{1 1}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j})] \right. \\&\quad \left. - \left| \sum _{j = -Q+1}^{Q} P^{a}_{j} [f(\delta _{i},\delta _{j}) f''_{1 2}(\delta _{i},\delta _{m}) - f'_{1}(\delta _{i},\delta _{j}) f'_{2}(\delta _{i},\delta _{m})] \right| \right\} . \end{aligned} \end{aligned}$$
(5.12)
$$\begin{aligned} \begin{aligned}&\sum _{j = -Q+1}^{Q} P^{a}_{j} [ 2 f'_{1}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j}) - f''_{1 1}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j}) ] \\&\qquad \pm \left[ \sum _{j = -Q+1}^{Q} P^{a}_{j}\left[ - f'_{2}(\delta _{i},\delta _{m}) f'_{1}(\delta _{i},\delta _{j}) + f''_{1 2}(\delta _{i},\delta _{m}) f(\delta _{i},\delta _{j}) \right] \right] > 0. \end{aligned} \end{aligned}$$
(5.13)
$$\begin{aligned} \begin{aligned}&\left| \frac{\partial F_i}{\partial \delta _{i}}(\delta , \mu )\right| - \sum _{m \ne i}\left| \frac{\partial F_i}{\partial \delta _{m}}(\delta , \mu )\right| > 0. \end{aligned} \end{aligned}$$
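The nonsingularity conclusion rests on the Levy–Desplanques theorem: a strictly diagonally dominant matrix is invertible. A minimal numeric illustration (with an arbitrary \(3\times 3\) matrix, not the actual Jacobian):

```python
# Levy-Desplanques: a strictly diagonally dominant matrix is nonsingular.
# A is an arbitrary example, not the Jacobian of F.
A = [[ 4.0, -1.0,  1.0],
     [ 0.5,  3.0, -1.0],
     [-1.0,  1.0,  5.0]]

for i, row in enumerate(A):
    off = sum(abs(v) for j, v in enumerate(row) if j != i)
    assert abs(row[i]) > off, "row %d is not strictly dominant" % i

def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

print(det3(A))  # nonzero, so A is invertible
```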
Secondly, we show that, for any fixed vector \(\mu \), \(\Vert F(\delta , \mu ) \Vert \rightarrow \infty \) whenever \(\Vert \delta \Vert \rightarrow \infty \). Take any vector sequence \({{\mathbf {\delta }^{k}}}, k = 1, 2, \ldots \), with \(\Vert {{\mathbf {\delta }^{k}}} \Vert \rightarrow \infty \). Then, there exists a sequence \(n_k \in \{-Q+1, \ldots , Q\}, k = 1, 2, \ldots \), such that \(| \delta ^{k}_{n_k} | \rightarrow \infty \), where \(\delta ^{k}_{n_k}\) is the \(n_k\)th element of the vector \({{\mathbf {\delta }^{k}}}\). In the case that \(\delta ^{k}_{n_k} \rightarrow -\infty \), we have the sign condition below on \(L_{n_k}({{\mathbf {\delta }^{k}}})\) and, hence, the divergence that follows as \(k \rightarrow +\infty \). It means that when \(\delta ^{k}_{n_k} \rightarrow -\infty \), \(\Vert F({{{\mathbf {\delta }}^{k}}}, \mu ) \Vert \rightarrow \infty \).
$$\begin{aligned} \begin{aligned}&L_{n_k}({{\mathbf {\delta }^{k}}}) := \frac{\sum _{m = -Q+1}^{Q} P^{a}_{m} f(\delta ^{k}_{n_{k}},\delta ^{k}_{m})}{\sum _{m = -Q+1}^{Q} P^{a}_{m} f'_{1}(\delta ^{k}_{n_{k}},\delta ^{k}_{m}) } < 0. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&F_{n_k}({{\mathbf {\delta }^{k}}}, \mu ) = - L_{n_k}({{\mathbf {\delta }^{k}}}) - \delta ^{k}_{n_k} - (\mu _{n_k-1} - \mu _{n_k}) > - \delta ^{k}_{n_k} - (\mu _{n_k-1} - \mu _{n_k}) \rightarrow + \infty . \end{aligned} \end{aligned}$$
On the other hand, in the case that \(\delta ^{k}_{n_k} \rightarrow +\infty \), we can always assume \(\delta ^{k}_{n_k} = \max \{\delta ^{k}_{i} \}_{i \in \mathbf {Q}, i>-Q}\). As \(f'_{1} < 0\), \(f>0\) and f is an increasing function of its second variable, we have the following estimation on \(F_{n_k}({{\mathbf {\delta }^{k}}}, \mu )\). From the assumption that \( \lim _{\delta \rightarrow +\infty } - \frac{f'_{1}(\delta , \delta )}{f(\delta , \delta )} > 0\), the subsequent limit is finite and positive. Then, taking \(\delta ^{k}_{n_k} \rightarrow +\infty \), we finally obtain the limit below. Hence, for fixed \(\mu \) and \(\delta ^{k}_{n_k} \rightarrow +\infty \), we also get \(\Vert F({{\mathbf {\delta }^{k}}}, \mu ) \Vert \rightarrow \infty \). Moreover, if \(\delta ^{k}_{n_k}\) consists of two subsequences, one converging to \(+\infty \) and the other to \(-\infty \), then by combining the above we still get \(\Vert F({{\mathbf {\delta }^{k}}}, \mu ) \Vert \rightarrow \infty \). We conclude that \(\Vert F(\delta , \mu ) \Vert \rightarrow \infty \) whenever \(\Vert \delta \Vert \rightarrow \infty \).
$$\begin{aligned} \begin{aligned}&F_{n_k}({{\mathbf {\delta }^{k}}}, \mu ) = - \frac{\sum _{m = -Q+1}^{Q} P^{a}_{m} f(\delta ^{k}_{n_{k}},\delta ^{k}_{m})}{\sum _{m = -Q+1}^{Q} P^{a}_{m} f'_{1}(\delta ^{k}_{n_{k}},\delta ^{k}_{m}) } - \delta ^{k}_{n_k} - (\mu _{n_k-1} - \mu _{n_k}) \\&\quad \le - \frac{\sum _{m = -Q+1}^{Q} P^{a}_{m} f(\delta ^{k}_{n_{k}},\delta ^{k}_{n_{k}})}{P^{a}_{n_{k}} f'_{1}(\delta ^{k}_{n_{k}},\delta ^{k}_{n_{k}})} - \delta ^{k}_{n_k} - (\mu _{n_k-1} - \mu _{n_k}). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&0< - \lim _{\delta ^{k}_{n_{k}} \rightarrow +\infty } \frac{\sum _{m = -Q+1}^{Q} P^{a}_{m} f(\delta ^{k}_{n_{k}},\delta ^{k}_{n_{k}})}{P^{a}_{n_{k}} f'_{1}(\delta ^{k}_{n_{k}},\delta ^{k}_{n_{k}})} < +\infty . \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\lim _{\delta ^{k}_{n_k} \rightarrow +\infty } F_{n_k}({{\mathbf {\delta }^{k}}}, \mu ) = - \infty . \end{aligned} \end{aligned}$$
Theorem 5.8 implies that there exists a function \(w^{a}: \mathbb {R}^{M} \rightarrow \mathbb {R}^{M1}\) such that \(F(w^{a}(\mu ), \mu ) = 0\) and \(w^{a}\) is unique and locally Lipschitz continuous, which concludes the proof of Step 2. \(\square \)
5.2.3 Proof of Step 3
We next prove that there exists a unique classical solution \(\theta \) to the ODE system (3.8) on [0, T]. The proof is divided into two parts. Firstly, we show the solution to the ODE system (3.8) is bounded if it exists. Secondly, we prove the existence and uniqueness of the classical solution to the ODE system (3.8).
Lemma 5.10
Assume the intensity function f satisfies Assumption 2.1. If \(\theta : [0, T] \rightarrow \mathbb {R}^M\) is a solution to the ODE system (3.8), then for all \(q \in \mathbf {Q}\) we have
$$\begin{aligned} - \frac{1}{2} \gamma \sigma ^2 Q^2 T - l(Q) \le \theta _{q}(t) \le 2 \sup _{\delta } \lambda (\delta ) \delta T. \end{aligned}$$
Proof
We first prove the upper bound. From the assumption on f and the proofs of Steps 1 and 2, the ODE system (3.8) is well defined. Since \(\theta \) is assumed to be a solution, define twice continuously differentiable functions \(d^{0}\) and \(d^{1}\) as below. From Assumption 2.1, (5.3) holds for all \(\delta \), x and y, and a simple calculation shows that \(d^{0}\) and \(d^{1}\) satisfy the bounds that follow. On the other hand, \(\theta \) also solves the ODE system (5.14) for all \(q \in \mathbf {Q}\). The comparison principle for the ODE system (5.14) can be proved by an argument similar to the proof of the comparison principle in Guéant [10]. Define the operator \(H^{\zeta } : [0, T] \times \mathbb {R} \rightarrow \mathbb {R}\) for \(\zeta = 0, 1\) as below. Then, from Guéant [10], we know \(H^{\zeta }\) is an increasing and nonnegative function of \(\Delta \mu \), and the bound on \(H^{\zeta }(t, 0)\) below holds. Define \(\bar{\theta } : [0, T] \rightarrow \mathbb {R}^M\) as below. Substituting \(\bar{\theta }\) into the ODE system (5.14), we obtain the inequalities that follow. Then, by the comparison principle from Guéant [10], the upper bound holds for every \(q \in \mathbf {Q}\).

We next prove the lower bound. Let \(\tilde{\theta }: [0, T] \rightarrow \mathbb {R}^M\) satisfy the ODE system (5.15) for all \(q \in \mathbf {Q}\), whose closed-form solution is given below. Note that we have the estimate below for every vector \(\mu \in \mathbb {R}^M\) and every \(q \in \mathbf {Q}\). Since \(\tilde{\theta }_{q}(T) \le \theta _{q}(T)\) and \(\tilde{\theta }^{'}_{q}(t) \ge \theta ^{'}_{q}(t)\), it can be proved, similarly to the upper bound, that the lower bound below holds for every \(q \in \mathbf {Q}\). \(\square \)
$$\begin{aligned} \begin{aligned}&d^{0}(t, \delta ) := \sum _{j = -Q}^{Q-1} P^{b}_{j} f(\delta , w^{b}_{j}(\theta (t))) \le \lambda (\delta ) \\&d^{1}(t, \delta ) := \sum _{j = -Q+1}^{Q} P^{a}_{j} f(\delta , w^{a}_{j}(\theta (t))) \le \lambda (\delta ). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&d^{\zeta }(t, \delta ) \le \lambda (\delta ), \quad \frac{\partial ^2 d^{\zeta }}{\partial \delta ^2}(t, \delta ) d^{\zeta }(t, \delta ) < 2 (\frac{\partial d^{\zeta }}{\partial \delta }(t, \delta ))^2, \quad \zeta = 0, 1. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\theta '_{q}(t) = \frac{1}{2} \gamma \sigma ^2 q^2 - \sup _{\delta } \{ d^{0}(t, \delta ) (\delta + \theta _{q + 1}(t)-\theta _{q}(t)) \} I^{b}(q) \\&\quad - \sup _{\delta } \{ d^{1}(t, \delta ) (\delta + \theta _{q - 1}(t)-\theta _{q}(t)) \} I^{a}(q), \qquad \theta _{q}(T) = -l(q). \end{aligned} \end{aligned}$$
(5.14)
$$\begin{aligned} \begin{aligned}&H^{\zeta }(t, \Delta \mu ) := \sup _{\delta } \{ d^{\zeta }(t, \delta ) (\delta + \Delta \mu ) \}. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\max _{t \in [0, T], \zeta = 0, 1} H^{\zeta }(t, 0) \le \sup _{\delta } \{ \lambda (\delta ) \delta \}. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\bar{\theta }_{q}(t) = 2 \sup _{\delta } \lambda (\delta ) \delta (T  t). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}& \bar{\theta }'_{q}(t) + \frac{1}{2} \gamma \sigma ^2 q^2  H^{0}(t, \bar{\theta }_{q+1}(t)\bar{\theta }_{q}(t))I^{b}(q)  H^{1}(t, \bar{\theta }_{q1}(t)\bar{\theta }_{q}(t))I^{a}(q) \\&= \sum _{\zeta =0}^{1} (\sup _{\delta } \lambda (\delta )  H^{\zeta }(t, 0)) + \frac{1}{2} \gamma \sigma ^2 q^2 \ge 0 \\&\bar{\theta }_{q}(T) = 0 \ge \theta _{q}(T) =  l(q). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\theta _{q}(t) \le \bar{\theta }_{q}(t) \le 2 \sup _{\delta } \lambda (\delta ) \delta T. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\tilde{\theta }'_{q}(t)  \frac{1}{2} \gamma \sigma ^2 q^2 = 0 \\&\tilde{\theta }_{q}(T) = l(q). \end{aligned} \end{aligned}$$
(5.15)
$$\begin{aligned} \begin{aligned}&\tilde{\theta }_{q}(t) = \frac{1}{2} \gamma \sigma ^2 q^2 (t  T) l(q). \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\eta ^{a}(\mu , w^{a}_{q}(\mu ), w^{a}(\mu ), q) \ge 0, \quad \eta ^{b}(\mu , w^{b}_{q}(\mu ), w^{b}(\mu ), q) \ge 0. \end{aligned} \end{aligned}$$
$$\begin{aligned} \begin{aligned}&\theta _{q}(t)\ge \tilde{\theta }_{q}(t) \ge  \frac{1}{2} \gamma \sigma ^2 Q^2 T l(Q). \end{aligned} \end{aligned}$$
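As a quick numerical sanity check on the closed-form lower bound, the sketch below verifies that \(\tilde{\theta }_{q}(t) = \frac{1}{2} \gamma \sigma ^2 q^2 (t - T) - l(q)\) satisfies the terminal-value ODE (5.15); the values of \(\gamma \), \(\sigma \), T and the quadratic penalty l are illustrative choices, not quantities from the paper.

```python
# Sanity check of the closed-form solution to the terminal-value ODE (5.15):
# theta'_q(t) = 0.5*gamma*sigma^2*q^2 with theta_q(T) = -l(q).
# gamma, sigma, T and the penalty l are illustrative, not from the paper.
gamma, sigma, T = 0.1, 1.0, 1.0
l = lambda q: 0.5 * q * q               # assumed terminal inventory penalty

def theta_tilde(t, q):
    """Closed form: 0.5*gamma*sigma^2*q^2*(t - T) - l(q)."""
    return 0.5 * gamma * sigma**2 * q**2 * (t - T) - l(q)

# terminal condition theta_q(T) = -l(q)
assert abs(theta_tilde(T, 3) + l(3)) < 1e-12

# derivative matches 0.5*gamma*sigma^2*q^2 (central difference; exact here
# since theta_tilde is affine in t)
h, t, q = 1e-6, 0.4, 3
deriv = (theta_tilde(t + h, q) - theta_tilde(t - h, q)) / (2 * h)
assert abs(deriv - 0.5 * gamma * sigma**2 * q**2) < 1e-6
```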
To prove the existence of a classical solution to the coupled ODE system (3.8), we cite the Picard–Lindelöf theorem from ODE theory, which provides local existence and uniqueness of solutions.
Theorem 5.11
(Picard–Lindelöf theorem) Consider the initial value problem in \(\mathbb {R}^M\):
$$\begin{aligned} y'(t) = F(t, y(t)),\; y(t_0) = y_0, \end{aligned}$$
where \(F : \mathbb {R} \times \mathbb {R}^M \rightarrow \mathbb {R}^M\) is uniformly Lipschitz continuous in y with Lipschitz constant L (independent of t) and continuous in t. Then, for some \(\varepsilon > 0\), there exists a unique solution y(t) to the initial value problem on the interval \([t_{0}-\varepsilon ,t_{0}+\varepsilon ]\).
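The contraction mechanism behind the Picard–Lindelöf theorem can be illustrated with a short Picard iteration. The example below is an illustration only, not part of the paper's argument: it applies the integral operator for the toy problem \(y' = y\), \(y(0) = 1\) on a grid, whose fixed point is \(e^t\).

```python
# Picard iteration for the IVP y'(t) = y(t), y(0) = 1 on [0, 1]: iterate the
# integral operator (Phi y)(t) = 1 + int_0^t y(s) ds on a grid (trapezoid
# rule). The fixed point is y(t) = exp(t).
import math

N = 2000
ts = [i / (N - 1) for i in range(N)]
y = [1.0] * N                           # initial guess y_0(t) = 1

for _ in range(25):                     # each iteration adds one Taylor term
    integral, new = 0.0, [1.0]
    for i in range(1, N):
        integral += 0.5 * (y[i] + y[i - 1]) * (ts[i] - ts[i - 1])
        new.append(1.0 + integral)
    y = new

assert abs(y[-1] - math.e) < 1e-3       # y(1) converges to e
```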
The next lemma is a direct consequence of the proof of Theorem 5.11; see Teschl [18]. It allows us to extend the local existence and uniqueness of the solution to global existence and uniqueness.
Lemma 5.12
Let \(C_{{a,b}}=[t_{0}-a,t_{0}+a]\times B_{b}(y_{0})\), where \(B_{b}(y_{0})\) is a closed ball in \(\mathbb {R}^M\) with center \(y_0\) and radius b. Define
$$\begin{aligned} {\begin{aligned}&M= \sup _{{(t, y) \in C_{{a,b}}}} \Vert F(t,y)\Vert . \end{aligned}} \end{aligned}$$
Then, the solution to the initial value problem in Theorem 5.11 exists and is unique on the interval \([t_0 - \epsilon , t_0 + \epsilon ]\) if \(\epsilon \) satisfies
$$\begin{aligned} {\begin{aligned}&\epsilon < \min \left\{ \frac{b}{M},\frac{1}{L}, a\right\} . \end{aligned}} \end{aligned}$$
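For concreteness, the \(\epsilon \) bound of Lemma 5.12 can be evaluated on a toy scalar problem. The sketch below uses the illustrative choices \(F(t,y) = y^2\), \(y_0 = 1\), \(a = 1\), \(b = 2\) (none of these are from the paper) and computes M, L and the resulting bound.

```python
# Concrete evaluation of the epsilon bound in Lemma 5.12 for the toy scalar
# IVP y' = y^2, y(t0) = 1, with a = 1 and b = 2 (illustrative choices, not
# from the paper), so that C_{a,b} = [t0 - 1, t0 + 1] x [-1, 3].
a, b, y0 = 1.0, 2.0, 1.0
lo, hi = y0 - b, y0 + b                     # the ball B_b(y0) is [-1, 3]

F = lambda t, y: y * y
M = max(abs(F(0.0, lo)), abs(F(0.0, hi)))   # sup |y^2| on [-1, 3] = 9 (at y = 3)
L = max(abs(2 * lo), abs(2 * hi))           # Lipschitz constant: sup |2y| = 6

eps_bound = min(b / M, 1.0 / L, a)          # min(2/9, 1/6, 1) = 1/6
assert abs(eps_bound - 1 / 6) < 1e-12
```

Any \(\epsilon < 1/6\) then guarantees existence and uniqueness on \([t_0 - \epsilon , t_0 + \epsilon ]\) for this toy problem.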
Theorem 5.13
Consider the terminal-value ODE problem on [0, T]:
$$\begin{aligned} \begin{aligned}&\theta '(t) = F(t,\theta (t)), \; \theta (T) = \theta _0, \end{aligned} \end{aligned}$$
(5.16)
where \(F : [0, T] \times \mathbb {R}^{M} \rightarrow \mathbb {R}^{M}\) is a jointly locally Lipschitz continuous function. Assume that there exists a constant K such that if a solution \(\theta \) exists on any subinterval of [0, T], then \(\theta (t) \in [-K, K]^M\). Then, there exists a unique solution to (5.16) on [0, T].
Proof
Define \(A_{T, 2 \sqrt{M} K} := [0, T] \times [-2 \sqrt{M} K, 2 \sqrt{M} K]^M\). Since F is continuous and \(A_{T, 2 \sqrt{M} K}\) is compact, the constant
$$\begin{aligned} {\begin{aligned}&C := \sup _{(t,y) \in A_{T, 2 \sqrt{M} K}} \Vert F(t,y)\Vert \end{aligned}} \end{aligned}$$
(5.17)
is finite. Since F is jointly locally Lipschitz continuous, there exists a family of open sets \(A_{i}\) such that F is Lipschitz continuous on each \(A_{i}\) with Lipschitz coefficient \(L_i\) and \(A_{T, 2 \sqrt{M} K} \subset \cup _{i} A_{i}\). By the Heine–Borel theorem, there is a finite index set I such that \(A_{T, 2 \sqrt{M} K} \subset \cup _{i \in I} A_{i}\). Defining \(L := \max _{i \in I} L_i\), we know F is Lipschitz continuous on the compact set \(A_{T, 2 \sqrt{M} K}\) with uniform Lipschitz coefficient L.
As the terminal value \(\theta _{0} \in [-K, K]^M\), we define \(C^{0}_{T, \sqrt{M} K} := [0, T] \times B_{\sqrt{M} K}(\theta _{0})\). Then, \(C^{0}_{T, \sqrt{M} K} \subset A_{T, 2 \sqrt{M} K}\). For \(\epsilon := \min \{\frac{\sqrt{M} K}{C}, \frac{1}{L}, T \}\), the solution \(\theta \) to the ODE system (5.16) exists and is unique on \([T - \epsilon , T]\). If \(\epsilon = T\), we are done; otherwise, set the new terminal time \(\tilde{T} := T - \epsilon \). Since \(\theta (\tilde{T}) \in [-K, K]^M\) by assumption, we can take the new terminal value \(\theta _{0} := \theta (\tilde{T})\) and define \(C^{1}_{\tilde{T}, \sqrt{M} K} := [0, \tilde{T}] \times B_{\sqrt{M} K}(\theta (\tilde{T})) \subset A_{T, 2 \sqrt{M} K}\). For \(\epsilon := \min \{\frac{\sqrt{M} K}{C}, \frac{1}{L}, \tilde{T} \}\), the solution \(\theta \) to the ODE system (5.16) exists and is unique on \([\tilde{T} - \epsilon , \tilde{T}]\), and hence also on \([\tilde{T} - \epsilon , T]\). Repeating this process, we reach \(\epsilon = \tilde{T}\) after a finite number of steps, because every step except possibly the last has the fixed length \(\min \{\frac{\sqrt{M} K}{C}, \frac{1}{L}\}\). This proves the existence and uniqueness of the solution \(\theta \) to the ODE system (5.16) on the whole interval [0, T]. \(\square \)
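The continuation argument above can be mimicked numerically: integrate backward from T in windows of length at most \(\epsilon \), restarting from the updated terminal value until [0, T] is covered. The sketch below uses assumed toy dynamics \(\theta ' = \cos \theta \) and an assumed window length \(\epsilon = 0.3\) as stand-ins for the bounded right-hand side and the Lemma 5.12 bound; neither comes from the paper.

```python
import math

# Mimic the continuation argument of Theorem 5.13: integrate backward from T
# in windows of length at most eps, restarting from the updated terminal value.
T, eps = 1.0, 0.3
F = lambda t, th: math.cos(th)          # bounded right-hand side, |F| <= 1

def solve_backward(t_hi, t_lo, th_end, n=1000):
    """Explicit Euler backward in time: theta(t - dt) ~ theta(t) - dt*F."""
    th, dt = th_end, (t_hi - t_lo) / n
    for i in range(n):
        th -= dt * F(t_hi - i * dt, th)
    return th

t_hi, th, steps = T, 0.0, 0             # terminal condition theta(T) = 0
while t_hi > 0:
    t_lo = max(0.0, t_hi - eps)         # last window is shortened, as in the proof
    th = solve_backward(t_hi, t_lo, th)
    t_hi = t_lo
    steps += 1

assert steps == 4                       # ceil(T / eps) windows cover [0, T]
assert abs(th) <= 1.0                   # |theta(0)| <= |theta(T)| + T * sup|F|
```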
Combining Lemma 5.10, Theorem 5.9, and Theorem 5.13, we can finally proceed to show that the ODE system (3.8) has a unique classical solution.
Theorem 5.14
There exists a unique classical solution \(\theta \) to the ODE system (3.8) on [0, T].
Proof
According to Lemma 5.10, we know that if the solution \(\theta \) exists on any subinterval of [0, T], there exists a constant \(K \ge 0\) such that
$$\begin{aligned} \begin{aligned}&-K \le \theta _{q}(t) \le K. \end{aligned} \end{aligned}$$
Define \(F: [0, T] \times \mathbb {R}^{M} \rightarrow \mathbb {R}^{M}\) as
$$\begin{aligned} \begin{aligned}&F_{q}(t,\theta (t)) := \frac{1}{2} \gamma \sigma ^2 q^2 - \eta ^{a}(\theta (t),w^{a}_{q}(\theta (t)),w^{a}(\theta (t)), q) I^{a}(q) \\&\qquad \qquad \qquad - \eta ^{b}(\theta (t),w^{b}_{q}(\theta (t)),w^{b}(\theta (t)), q) I^{b}(q) \\&F(t,\theta (t)) := (F_{-Q}(t,\theta (t)), \ldots , F_{Q}(t,\theta (t))). \end{aligned} \end{aligned}$$
As q takes finitely many values, the original ODE system (3.8) can be rewritten in vector form with F as in (5.16). Then, F is a jointly locally Lipschitz continuous function, and if the solution \(\theta \) exists on any subinterval of [0, T], \(\theta (t) \in [-K, K]^M\). By Theorem 5.13, the ODE system has a unique solution on [0, T]. This concludes the proof of Step 3. \(\square \)
5.2.4 Completion of Proof of Theorem 3.4
From Steps 1, 2 and 3, we know there exist unique locally Lipschitz continuous functions \(w^{a}, w^{b}\) that satisfy the generalized Isaacs condition in Definition 3.2, and that the ODE system (3.8) is well defined, equivalent to the ODE system (3.5), and admits a unique classical solution. Define the equilibrium value function for \(G_{\mathrm{mm}}\) by (3.1) and the equilibrium controls by (3.9). As \(\theta \) is the classical solution to the ODE system (3.8), it is continuous on [0, T] and hence bounded. Then, both \(\pi ^{a}(t) = w^{a}(\theta (t))\) and \(\pi ^{b}(t) = w^{b}(\theta (t))\) are bounded on [0, T], and \(\theta \), \(\pi ^{a}(t)\) and \(\pi ^{b}(t)\) satisfy the ODE system (3.5). Hence, from the verification Theorem 3.3, the equilibrium for the game \(G_{\mathrm{mm}}\) exists. On the other hand, as the solution to the ODE system (3.5) is unique, by Theorem 3.1 the equilibrium point is also unique.
6 Conclusions
In this paper, we have modeled the price competition between market makers, proved the generalized Isaacs condition that ensures the existence and uniqueness of the Nash equilibrium for market making with price competition, and derived the equilibrium strategies and the equilibrium value function. We have also performed numerical tests to compare our model with a benchmark model from the existing literature without price competition, and found that the introduction of price competition reduces bid/ask spreads and improves market liquidity. Many open questions remain: for example, the jump processes \(N^a\) and \(N^b\) may be more general than Poisson processes (Hawkes processes, or more general Markov jump processes), and the set of inventory position constraints may be infinite rather than finite (possibly uncountable if one considers a whole interval). We leave these and other open questions to future research.
Acknowledgements
The authors are very grateful to the anonymous reviewers and the editors for their constructive comments and suggestions, which have helped to improve the previous version of the paper.