
Open Access 11.08.2016

Steady-state analysis of shortest expected delay routing

Authors: Jori Selen, Ivo Adan, Stella Kapodistria, Johan van Leeuwaarden

Published in: Queueing Systems | Issue 3-4/2016


Abstract

We consider a queueing system consisting of two nonidentical exponential servers, where each server has its own dedicated queue and serves the customers in that queue FCFS. Customers arrive according to a Poisson process and join the queue promising the shortest expected delay, which is a natural and near-optimal policy for systems with nonidentical servers. This system can be modeled as an inhomogeneous random walk in the quadrant. By stretching the boundaries of the compensation approach we prove that the equilibrium distribution of this random walk can be expressed as a series of product forms that can be determined recursively. The resulting series expression is directly amenable to numerical calculations and it also provides insight into the asymptotic behavior of the equilibrium probabilities as one of the state coordinates tends to infinity.

1 Introduction

In this paper we analyze the performance of a system with two servers under the shortest expected delay (SED) routing policy. This routing policy assigns an arriving customer to the queue that has the shortest expected delay (sojourn time), where delay refers to the waiting time plus the service time. This policy arises naturally in various application areas, and poses considerable mathematical challenges.
In particular, we focus on a queueing system with two servers, where each server has its own queue with unlimited buffer capacity. All service times are independent and exponentially distributed, but the two servers have different service rates, namely 1 and s. In both queues customers are served in FCFS order. Customers arrive according to a Poisson process with rate \(\lambda \) and upon arrival join one of the two queues according to the following mechanism: Let \(q_1\) and \(q_2\) be the number of customers in queue 1 and 2, respectively, including a possible customer in service. For an arriving customer, the expected delay in queue 1 is \(q_1 + 1\) and in queue 2 is \((q_2 + 1)/s\). The SED routing policy assigns an arriving customer to queue 1 if \(q_1 + 1 < (q_2 + 1)/s\) and to queue 2 if \(q_1 + 1 > (q_2 + 1)/s\). In case the expected delays in both queues are equal, i.e., \(q_1 + 1 = (q_2 + 1)/s\), the arriving customer joins queue 1 with probability q and queue 2 with probability \(1 - q\). Once a customer has joined a queue, no switching or jockeying is allowed. The service times are assumed independent of the arrival process and the customer decisions. This system is stable if and only if, see, for example, [13, Theorem 1],
$$\begin{aligned} \rho :=\lambda / (1 + s) < 1. \end{aligned}$$
(1.1)
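To make the routing rule concrete, here is a minimal Python sketch of the decision an arriving customer makes; the function name and interface are ours, purely for illustration.

```python
import random

def sed_route(q1, q2, s, q):
    """Return the queue (1 or 2) an arriving customer joins under SED routing.

    q1, q2 -- current number of customers in queue 1 and 2 (incl. in service)
    s      -- service rate of server 2 (server 1 has rate 1)
    q      -- probability of joining queue 1 when the expected delays tie
    """
    d1 = q1 + 1          # expected delay at queue 1
    d2 = (q2 + 1) / s    # expected delay at queue 2
    if d1 < d2:
        return 1
    if d1 > d2:
        return 2
    return 1 if random.random() < q else 2
```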
We refer to this specific queueing system as the SED system. The SED system can be modeled as a two-dimensional Markov process on the states \((q_1,q_2)\) with the transition rate diagram as in Fig. 1. From the transition rate diagram, it is evident that this state description leads to an inhomogeneous random walk in the quadrant, making an exact analysis extremely difficult. The inhomogeneous behavior occurs along the line \(s(q_1 + 1) = q_2 + 1\) and it can be expected that the solution structure of the stationary probabilities above and below this line will be different. Moreover, for \(s \ne 1\), this line divides the state space into parts of unequal size, further increasing the complexity of an exact analysis.
Under the assumption of identical service rates, SED routing becomes join the shortest queue (JSQ) routing, which is known to minimize the mean expected delay [29]. If the service rates of the two servers are different, then JSQ is not optimal. As Whitt [29] points out, if the system is fully observable and the service times of waiting customers are known a priori, for example, by looking at the shopping baskets in a supermarket, then the natural choice is to join the queue promising the smallest sojourn time in the system, instead of the shorter of the queues. However, this type of information is too detailed and might not always be available. In particular, there are many situations in which we have limited knowledge of the service times of the waiting customers. Such systems include teller waiting lines, production facilities, communication networks, etc. If the only available information is the number of waiting customers at each queue, then it is quite natural, although not necessarily optimal, to choose shortest queue routing [4, 5, 9, 11, 16, 17, 20, 23, 25, 26, 30, 31]. However, if on top of the number of waiting customers we can estimate the expected service times at the two queues, then the natural choice is to route the customers according to SED, rather than JSQ.
SED routing seems to be a natural choice, but in practice this choice is not always optimal, since it does not minimize the mean stationary delay [14, 29]. For the two nonidentical server setting, Hajek [22] solves a Markov decision problem and proves that the optimal routing policy is of the threshold type. However, SED routing exhibits relatively good performance at both ends of the range of system utilizations, but performs slightly worse than other policies at medium utilizations; see [7]. Furthermore, Foschini [18] has shown that SED routing is asymptotically optimal and results in complete resource pooling in the heavy traffic limit.
State-dependent routing policies such as JSQ and SED are typically difficult to analyze. For the stationary behavior of queueing systems with SED routing very little is known. The difficulty in analyzing this type of model is evident from the analysis of the JSQ policy for two identical parallel exponential servers [21]. The first major steps towards its analysis were made in [16, 25], using a uniformization approach that established that the generating function of the equilibrium distribution is meromorphic. Thus, a partial fraction expansion of the generating function of the joint stationary queue length distribution would in principle lead to a representation of the equilibrium distribution as an infinite linear combination of product forms. For an extensive treatment of the JSQ system using a generating function approach, the interested reader is referred to [12, 15]. An alternative approach that is not based on generating functions is the compensation approach [4]. This approach directly solves the equilibrium equations and leads to an explicit solution. The essence of the approach is to first characterize the product forms satisfying the equilibrium equations for states in the inner region of the two-dimensional state space and then to use the product forms in this set to construct a linear combination of product forms which also satisfies the boundary conditions. The construction is based on a compensation argument: after introducing the starting product form, new product forms are added so as to alternately compensate for the errors on the two boundaries.
The compensation approach has been developed in a series of papers [1, 4–6] and aims at a direct solution for a class of two-dimensional random walks on the lattice of the first quadrant that obey the following conditions:
(i)
Step size: only transitions to neighboring states.
 
(ii)
Forbidden steps: no transitions from interior states to the North, North-East, and East.
 
(iii)
Homogeneity: the same transitions occur according to the same rates for all interior points, and similarly for all points on the horizontal boundary, and for all points on the vertical boundary.
 
The approach exploits the fact that the balance equations in the interior of the quarter plane are satisfied by linear (finite or infinite) combinations of product forms that need to be chosen such that the equilibrium equations on the boundaries are satisfied as well. As it turns out, this can be done by alternatingly compensating for the errors on the two boundaries, which eventually leads to an infinite series of product forms. The SED queueing system in this paper is more complicated than the JSQ and other classical queueing systems, see, for example, [1, 3–6], since the two-dimensional random walk that describes the SED system exhibits inhomogeneous behavior in the interior of the quadrant. In this paper, we show that the compensation approach can nevertheless be further developed to overcome the obstacles caused by the inhomogeneous behavior of the random walk. This leads to a solution for the stationary distribution in the form of a tree of product forms.
The only other work in this direction is [3], which considers SED routing for two identical parallel servers with Poisson arrivals and Erlang distributed service times. The crucial difference with our setting is that we do not focus on generalizing service times, but instead consider servers with different service rates.
The remainder of the paper is organized as follows. In Sect. 2 we introduce the model in detail and describe the equilibrium equations. We discuss the compensation approach and its methodological extensions together with our contribution in Sect. 3. Some numerical results are presented in Sect. 4. Section 5 applies the compensation approach to determine the equilibrium distribution of the SED system as a series of product-form solutions. Finally, we present some conclusions in Sect. 6.

2 Equilibrium equations

The Markov process associated with the SED system has an inhomogeneous behavior in the interior of the quadrant, specifically along the line \(s(q_1 + 1) = q_2 + 1\); see Fig. 1. In this section we transform the two-dimensional state space \((q_1,q_2)\) to a half-plane with a finite third dimension. For this state description, we show that the theoretical framework of the compensation approach can be extended and in this way we determine the equilibrium distribution of the SED system.
We will henceforth assume that s is a positive integer. The service rate s could also be chosen rational and the analysis would be similar, but notationally more involved. We further elaborate on this point in Remark 1.
In queue 2 we count the number of groups of size s and denote it by j, i.e., \(j = \lfloor q_2/s \rfloor \), and we denote the number of remaining customers by r, i.e., \(r = {\text {mod}}(q_2,s)\). Clearly, a single group in queue 2 requires the same expected amount of work as a single customer in queue 1. The total number of customers in queue 2 is thus \(js + r\) and for an arriving customer the expected delay in queue 2 is \(j + (r + 1)/s\). In terms of these variables, SED routing works as follows: if \(q_1 + 1 < j + (r + 1)/s\), the arriving customer joins queue 1, and if \(q_1 + 1 > j + (r + 1)/s\), the arriving customer joins queue 2. If the expected delays in both queues are equal, i.e., \(q_1 + 1 = j + (r + 1)/s\), the arriving customer joins queue 1 with probability q and queue 2 with probability \(1 - q\).
For convenience we introduce the length of the shortest queue, \(m = \min (q_1,j)\), and the difference between the number of groups in queue 2 and queue 1, \(n = j - q_1\). Using this notation, the SED system is formulated as a three-dimensional Markov process with state space \(\{(m,n,r) \mid m \in \mathbb {N}_0, ~ n \in \mathbb {Z}, ~ r = 0,1,\ldots ,s-1 \}\). Under the stability condition (1.1) the equilibrium distribution exists. Let \(p(m,n,r)\) denote the equilibrium probability of being in state \((m,n,r)\) and let
$$\begin{aligned} \mathbf {p}(m,n) = (p(m,n,0),p(m,n,1),\ldots ,p(m,n,s-1))^T \end{aligned}$$
(2.1)
with \(\mathbf {x}^T\) the transpose of a vector \(\mathbf {x}\). Throughout the paper, we use bold lowercase letters or numbers for vectors and uppercase Latin letters for matrices. For convenience, we have listed all state variables and their interpretation in Table 1.
Table 1
Interpretation of the state variables of the Markov process

| Variable | Expression | Interpretation |
| --- | --- | --- |
| \(q_1\) |  | Number of customers (groups of size 1) in queue 1 |
| \(q_2\) |  | Number of customers in queue 2 |
| j | \(\lfloor q_2/s \rfloor \) | Number of groups of size s in queue 2 |
| m | \(\min (q_1,\lfloor q_2/s \rfloor )\) | Minimum number of groups in queue 1 and 2 |
| n | \(\lfloor q_2/s \rfloor - q_1\) | Difference between number of groups in queue 2 and 1 |
| r | \({\text {mod}}(q_2,s)\) | Number of customers in queue 2 that are not in a group |
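In code, the transformation from \((q_1,q_2)\) to \((m,n,r)\) reads as follows; a small Python sketch of Table 1, purely illustrative.

```python
def transform_state(q1, q2, s):
    """Map the queue lengths (q1, q2) to the coordinates (m, n, r) of Sect. 2."""
    j = q2 // s        # number of groups of size s in queue 2
    r = q2 % s         # customers in queue 2 that are not in a group
    m = min(q1, j)     # minimum number of groups in queue 1 and 2
    n = j - q1         # difference between number of groups in queue 2 and 1
    return m, n, r
```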
The transition rates are given by
$$\begin{aligned} (m,n,r)&\xrightarrow {(1+s)\rho } {\left\{ \begin{array}{ll} (m+1,n-1,r), &{} m \ge 0, ~ n \ge 1, ~ r = 0,1,\ldots ,s-1, \\ (m,n,r+1), &{} m \ge 0, ~ n \le 0, ~ r = 0,1,\ldots ,s-2, \\ (m+1,n+1,0), &{} m \ge 0, ~ n \le -1, ~ r = s-1, \end{array}\right. } \\ (m,0,s-1)&\xrightarrow {(1+s)\rho (1-q)} (m,1,0), \quad m \ge 0, \\ (m,0,s-1)&\xrightarrow {(1+s)\rho q} (m,-1,s-1), \quad m \ge 0, \end{aligned}$$
corresponding to arrivals, and
$$\begin{aligned} (m,n,r)&\xrightarrow {1} {\left\{ \begin{array}{ll} (m-1,n+1,r), &{} m> 0, ~ n \ge 0, ~ r = 0,1,\ldots ,s-1, \\ (m,n+1,r), &{} m \ge 0, ~ n < 0, ~ r = 0,1,\ldots ,s-1, \end{array}\right. } \\ (m,n,r)&\xrightarrow {s} {\left\{ \begin{array}{ll} (m,n,r-1), &{} m \ge 0, ~ n \in \mathbb {Z}, ~ r = 1,2,\ldots ,s-1, \\ (m,n-1,s-1), &{} m \ge 0, ~ n> 0, ~ r = 0, \\ (m-1,n-1,s-1), &{} m > 0, ~ n \le 0, ~ r = 0, \end{array}\right. } \end{aligned}$$
corresponding to service completions. Figure 2a displays the transition rate diagram for the three-dimensional state space. The transition rates are described by the matrices \(A_{x,y}\) in the positive quadrant and \(B_{x,y}\) in the negative quadrant, where the pair \((x,y)\) indicates the step size in the \((m,n)\)-direction. Let \(I\) be the \(s \times s\) identity matrix, \(M^{(x,y)}\) be an \(s \times s\) binary matrix with element (xy) equal to one and zeros elsewhere, and \(L\) be an \(s \times s\) subdiagonal matrix with elements \((x,x-1), ~ x = 1,2,\ldots ,s-1\) equal to one and zeros elsewhere. For consistency with the indexing of the vector \(\mathbf {p}(m,n)\), indexing of a matrix starts at 0. The transition rate matrices take the form
$$\begin{aligned} A_{1,-1}&= (1+s)\rho I,&A_{0,1}&= (1+s)\rho (1-q) M^{(0,s-1)}, \\ A_{-1,1}&= B_{0,1} = I,&B_{1,1}&= (1+s)\rho M^{(0,s-1)}, \\ A_{0,-1}&= B_{-1,-1} = sM^{(s-1,0)},&B_{0,-1}&= (1+s)\rho q M^{(s-1,s-1)}, \\ A_{0,0}&= -(1+s)(\rho + 1)I+ sL^T,&B_{0,0}&= A_{0,0} + (1+s)\rho L. \end{aligned}$$
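For numerical experiments, the ten matrices above can be transcribed directly. The following NumPy sketch (ours, not part of the paper) builds them as dictionaries keyed by the step \((x,y)\), with the same 0-based indexing.

```python
import numpy as np

def rate_matrices(s, rho, q):
    """Sketch: the s x s transition rate matrices A_{x,y} and B_{x,y}."""
    I = np.eye(s)
    L = np.diag(np.ones(s - 1), k=-1)   # subdiagonal: entries (x, x-1) = 1

    def M(x, y):                        # binary matrix with a single 1 at (x, y)
        out = np.zeros((s, s))
        out[x, y] = 1.0
        return out

    lam = (1 + s) * rho                 # arrival rate lambda
    A = {(1, -1): lam * I,
         (0, 1):  lam * (1 - q) * M(0, s - 1),
         (-1, 1): I,
         (0, -1): s * M(s - 1, 0),
         (0, 0):  -(1 + s) * (rho + 1) * I + s * L.T}
    B = {(0, 1):  I,
         (1, 1):  lam * M(0, s - 1),
         (-1, -1): s * M(s - 1, 0),
         (0, -1): lam * q * M(s - 1, s - 1),
         (0, 0):  A[(0, 0)] + lam * L}
    return A, B
```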
The equilibrium equations can be written in matrix-vector form. We partition the state space as illustrated in Fig. 2b. For the interior of the positive and negative quadrant we have the following inner equations:
$$\begin{aligned}&A_{0,0} \mathbf {p}(m,n) + A_{1,-1} \mathbf {p}(m-1,n+1) \nonumber \\&\quad + A_{0,-1} \mathbf {p}(m,n+1) + A_{-1,1} \mathbf {p}(m+1,n-1) = \mathbf {0}, \quad m \ge 1, ~ n \ge 2,\end{aligned}$$
(2.2)
$$\begin{aligned}&B_{0,0} \mathbf {p}(m,n) + B_{1,1} \mathbf {p}(m-1,n-1) \nonumber \\&\quad + B_{0,1} \mathbf {p}(m,n-1) + B_{-1,-1} \mathbf {p}(m+1,n+1) = \mathbf {0}, \quad m \ge 1, ~ n \le -2. \end{aligned}$$
(2.3)
The equilibrium equations corresponding to the states on the horizontal axis, or directly adjacent to the horizontal axis, are referred to as the horizontal boundary equations and are given by
$$\begin{aligned}&A_{0,0} \mathbf {p}(m,1) + A_{1,-1} \mathbf {p}(m-1,2)+ A_{0,-1} \mathbf {p}(m,2) \nonumber \\&\quad + A_{-1,1} \mathbf {p}(m+1,0) + A_{0,1} \mathbf {p}(m,0) = \mathbf {0}, \quad m \ge 1, ~ n = 1, \end{aligned}$$
(2.4)
$$\begin{aligned}&B_{0,0} \mathbf {p}(m,-1) + B_{1,1} \mathbf {p}(m-1,-2) + B_{0,1} \mathbf {p}(m,-2) \nonumber \\&\quad + B_{-1,-1} \mathbf {p}(m+1,0) + B_{0,-1} \mathbf {p}(m,0) = \mathbf {0}, \quad m \ge 1, ~ n = -1,\end{aligned}$$
(2.5)
$$\begin{aligned}&B_{0,0} \mathbf {p}(m,0) + A_{1,-1} \mathbf {p}(m-1,1)+ B_{1,1} \mathbf {p}(m-1,-1) \nonumber \\&\quad + A_{0,-1} \mathbf {p}(m,1) + B_{0,1} \mathbf {p}(m,-1) = \mathbf {0}, \quad m \ge 1, ~ n = 0. \end{aligned}$$
(2.6)
The vertical boundary equations are
$$\begin{aligned} (A_{0,0} + I)\mathbf {p}(0,n) + A_{0,-1} \mathbf {p}(0,n+1) \nonumber \\ + \, A_{-1,1} \mathbf {p}(1,n-1)&= \mathbf {0}, \quad m = 0, ~ n \ge 2, \end{aligned}$$
(2.7)
$$\begin{aligned} (B_{0,0} + sM^{(0,0)}) \mathbf {p}(0,n) + B_{0,1} \mathbf {p}(0,n-1) \nonumber \\ + \, B_{-1,-1} \mathbf {p}(1,n+1)&= \mathbf {0}, \quad m = 0, ~ n \le -2. \end{aligned}$$
(2.8)
Finally, for the three remaining boundary states near the origin, we have
$$\begin{aligned} (A_{0,0} + I)\mathbf {p}(0,1) + A_{0,-1}\mathbf {p}(0,2) + A_{-1,1} \mathbf {p}(1,0) + A_{0,1} \mathbf {p}(0,0)&= \mathbf {0}, \end{aligned}$$
(2.9)
$$\begin{aligned} (B_{0,0} + sM^{(0,0)}) \mathbf {p}(0,-1) + B_{0,1} \mathbf {p}(0,-2) \nonumber \\ + B_{-1,-1} \mathbf {p}(1,0) + B_{0,-1} \mathbf {p}(0,0)&= \mathbf {0}, \end{aligned}$$
(2.10)
$$\begin{aligned} (B_{0,0} + I+ sM^{(0,0)})\mathbf {p}(0,0) + A_{0,-1} \mathbf {p}(0,1) + B_{0,1} \mathbf {p}(0,-1)&= \mathbf {0}, \end{aligned}$$
(2.11)
corresponding to the states \((0,1)\), \((0,-1)\), and \((0,0)\), respectively.
Remark 1
(Rational s) A system with a rational service rate \(s = \frac{s_2}{s_1}\) can also be analyzed. In that case, one needs to consider a system with two servers and service rates \(s_1\) and \(s_2\). Similar to our analysis at the start of Sect. 2, one denotes the number of groups of size \(s_1\) in queue 1 as i and the number of groups of size \(s_2\) in queue 2 as j. Then, let \(r_n \in \{0,1,\ldots ,s_n - 1\}\) denote the number of remaining customers in queue \(n = 1,2\). Based on the aforementioned construction, a single group in either queue 1 or 2 requires the same expected amount of work. Lastly, set \(m = \min (i,j)\), \(n = j - i\) and the third finite dimension is a lexicographical ordering of the states \((r_1,r_2) \in \{0,1,\ldots ,s_1 - 1\} \times \{0,1,\ldots ,s_2 - 1\}\). This state space description leads to a transition rate diagram that has a similar structure as the one seen in Fig. 2a. In this sense, a system with a rational service rate can be analyzed using the approach described in this paper.
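A small Python sketch of this construction (ours; the lexicographic index \(r_1 s_2 + r_2\) is one possible ordering of the pairs \((r_1,r_2)\)):

```python
def transform_state_rational(q1, q2, s1, s2):
    """State transformation of Remark 1 for a rational service rate s = s2/s1."""
    i, r1 = divmod(q1, s1)        # groups of size s1 in queue 1
    j, r2 = divmod(q2, s2)        # groups of size s2 in queue 2
    m, n = min(i, j), j - i
    return m, n, r1 * s2 + r2     # lexicographic index of (r1, r2)
```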

3 Evolution of the compensation approach

In this section we use the following abbreviations: vertical boundary (VB); vertical compensation step (VCS); horizontal boundary (HB); and horizontal compensation step (HCS).
The compensation approach is used for the direct determination of the equilibrium distribution of Markov processes that satisfy the three conditions mentioned in Sect. 1. The key idea is a compensation procedure: the equilibrium distribution can be represented as a series of product-form solutions, which is generated term by term starting from an initial solution, such that each term compensates for the error introduced by its preceding term on one of the boundaries of the state space. In this section, we motivate why the SED system requires a fundamental extension of the compensation approach. We do so by first describing the evolution of the compensation approach through a series of models and present for each model the corresponding methodological contribution. Finally, we describe the extension required for the SED system.
The compensation approach was pioneered by Adan et al. [4], for a queueing system with two identical exponential servers, both with rate 1, and JSQ routing. Such a queueing system can be modeled as a Markov process with states \((q_1,q_2) \in \mathbb {N}_0^2\), where \(q_i\) is the number of customers at queue i, including a customer possibly in service. By defining \(m = \min (q_1,q_2)\) and \(n = q_2 - q_1\), one transforms the state space from an inhomogeneous random walk in the quadrant to a random walk in the half-plane that is homogeneous in each quadrant. Since the two quadrants are mirror images of each other, it is not needed to determine the equilibrium probabilities in both quadrants; it suffices to do so in the positive quadrant. The transition rate diagram of the Markov process is shown in Fig. 3a.
For the symmetric JSQ model in [4], the initial solution is of the form \(\eta _0 \alpha _0^m \beta _0^n\), where \(\eta _0\) is a coefficient and \(\alpha _0\) and \(\beta _0\) are the compensation parameters that satisfy the kernel equation
$$\begin{aligned} \alpha \beta \, 2(\rho + 1) = \beta ^2 \, 2\rho + \alpha \beta ^2 + \alpha ^2, \end{aligned}$$
(3.1)
which is obtained by substituting \(\alpha ^m \beta ^n\) in the equilibrium equations of the interior of the positive quadrant and dividing by common powers. This initial solution satisfies the equilibrium equations in the interior and on the HB (there is only one such solution). In order to compensate for the error on the VB, one adds the compensation term \(\nu _0 \alpha _1^m \beta _0^n\) such that \(\eta _0 \alpha _0^m \beta _0^n + \nu _0 \alpha _1^m \beta _0^n\) satisfies the equilibrium equations in the interior and on the VB. The compensation parameter \(\alpha _1\) with \(\alpha _1 < \beta _0\) is generated from (3.1) for a fixed \(\beta = \beta _0\). The coefficient \(\nu _0\) satisfies a linear equation and is a function of \(\eta _0\), \(\alpha _0\), \(\beta _0\), and \(\alpha _1\). The resulting solution violates the equilibrium equations on the HB. Hence, one adds another compensation term \(\eta _1 \alpha _1^m \beta _1^n\) such that \(\nu _0 \alpha _1^m \beta _0^n + \eta _1 \alpha _1^m \beta _1^n\) satisfies the equilibrium equations in the interior and on the HB, where \(\beta _1\) and \(\eta _1\) are determined in a similar way as for the VCS. Repeating the compensation steps leads to a series expression for the equilibrium probabilities that satisfies all equilibrium equations:
$$\begin{aligned} p(m,n) = C \sum _{l = 0}^\infty \eta _l \alpha _l^m \beta _l^n + C \sum _{l = 0}^\infty \nu _{l + 1} \alpha _{l + 1}^m \beta _l^n, \quad m \ge 0, ~ n \ge 1, \end{aligned}$$
(3.2)
where C is the normalization constant. Figure 4a displays the way in which the compensation parameters are generated.
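To make the procedure concrete, the following Python sketch generates the compensation parameters of Fig. 4a, assuming (this is our illustration, not taken from the paper) the starting pair \(\alpha _0 = \rho ^2\), \(\beta _0 = \rho ^2/(2 + \rho )\), which one can check satisfies (3.1); each step selects the root of (3.1) that is smaller in modulus.

```python
import numpy as np

def jsq_compensation_parameters(rho, levels):
    """Sketch: the pairs (alpha_l, beta_l) of Fig. 4a for symmetric JSQ."""

    def beta_roots(alpha):
        # (3.1) as a quadratic in beta:
        # (2*rho + alpha)*beta^2 - 2*(rho + 1)*alpha*beta + alpha^2 = 0
        return np.roots([2 * rho + alpha, -2 * (rho + 1) * alpha, alpha ** 2])

    def alpha_roots(beta):
        # (3.1) as a quadratic in alpha:
        # alpha^2 + (beta^2 - 2*(rho + 1)*beta)*alpha + 2*rho*beta^2 = 0
        return np.roots([1, beta ** 2 - 2 * (rho + 1) * beta, 2 * rho * beta ** 2])

    alpha, beta = rho ** 2, rho ** 2 / (2 + rho)
    params = [(alpha, beta)]
    for _ in range(levels):
        alpha = min(alpha_roots(beta), key=abs)   # vertical compensation step
        params.append((alpha, beta))
        beta = min(beta_roots(alpha), key=abs)    # horizontal compensation step
        params.append((alpha, beta))
    return params
```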
The first extension is presented in [5] for the asymmetric JSQ model, i.e., the servers are now assumed to be nonidentical with speeds 1 and s. The symmetry argument used earlier does not hold anymore and one needs to consider the complete half-plane; see Fig. 3b for the transition rate diagram. Note that the half-plane consists of two quadrants with different transition rates that are coupled on the horizontal axis. The approach in this case is an extension of the approach introduced in [4]. In a VCS, one compensates solutions that satisfy the positive inner equations on the positive VB as well as solutions that satisfy the negative inner equations on the negative VB. Two kernel equations (one for each quadrant) are used to generate the \(\alpha \)s, and the coefficients satisfy different linear equations. For a HCS, each product-form solution that satisfies the positive inner equations generates a single \(\beta \) for the positive quadrant and a single \(\beta \) for the negative quadrant. Accordingly, a product-form solution that satisfies the negative inner equations is compensated on the HB. Thus, in this case the generation of compensation parameters has a binary tree structure; see Fig. 4b.
A further extension of the compensation approach is presented in [2] for a model with two identical servers, Erlang-s arrivals and JSQ routing. The state description is enhanced by adding a finite third dimension that keeps track of the number of completed arrival phases. The random walks in the positive and negative quadrant are mirror images, which permits us to perform the analysis only on the positive quadrant; see Fig. 3c for the transition rate diagram. In [2] the authors extend the compensation approach to a three-dimensional setting. Due to the three-dimensional state space, each compensation term takes the form \(\alpha ^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\), where \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) is a vector of coefficients of dimension s (equal to the number of arrival phases). In each HCS, s different parameters \(\beta \) are generated instead of just one. A graphical representation of the generation of compensation terms, which has an s-fold tree structure, is depicted in Fig. 4c.

3.1 Our contribution

The model at hand is defined on an s-layered half-plane, thus requiring that we further extend the compensation approach. Similarly to [5], we need to account for the two quadrants by considering two kernel functions (one for each quadrant). Furthermore, in accordance with [2], in every HCS, a total of s different parameters \(\beta \) are generated for the positive quadrant and a single \(\beta \) for the negative quadrant. This leads to an \((s + 1)\)-fold tree structure for the compensation parameters as depicted in Fig. 5. Additionally, the product-form solutions take the form \(\alpha ^m\beta ^n\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) or \(\alpha ^m\beta ^{-n}\mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta )\) depending on whether they are defined in the positive or negative quadrant, respectively. The resulting solution for the equilibrium distribution is, for \(m \ge 0, ~ n \ge 1\),
$$\begin{aligned} \mathbf {p}(m,n)&= C \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \sum _{j = 1}^s \eta _{l,d(i) + j} \alpha _{l,i}^m \beta _{l,d(i) + j}^n \mathbf {i}_{\scriptscriptstyle +}(\alpha _{l,i},\beta _{l,d(i) + j}) \nonumber \\&\quad + C \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \sum _{j = 1}^s \nu _{l + 1,d(i) + j} \alpha _{l + 1,d(i) + j}^m \beta _{l,d(i) + j}^n \mathbf {i}_{\scriptscriptstyle +}(\alpha _{l + 1,d(i) + j},\beta _{l,d(i) + j}). \end{aligned}$$
(3.3a)
For \(m \ge 0, ~ n = 0\),
$$\begin{aligned} \mathbf {p}(m,n)&= C \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \alpha _{l,i}^m \mathbf {h}_{l,i}. \end{aligned}$$
(3.3b)
For \(m \ge 0, ~ n \le -1\),
$$\begin{aligned} \mathbf {p}(m,n)&= C \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \eta _{l,i(s + 1)} \alpha _{l,i}^m \beta _{l,i(s + 1)}^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha _{l,i},\beta _{l,i(s + 1)}) \nonumber \\&\quad + C \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \nu _{l + 1,i(s + 1)} \alpha _{l + 1,i(s + 1)}^m \beta _{l,i(s + 1)}^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha _{l + 1,i(s + 1)},\beta _{l,i(s + 1)}), \end{aligned}$$
(3.3c)
where C is the normalization constant and \(d(i) :=(i - 1)(s + 1)\). The first subscript l is the level at which a parameter resides, starting at level \(l = 0\) (the initial solution). Within a level, the parameters are differentiated by using an additional index i. A horizontal compensation step and the initial solution have coefficients \(\eta \) and a vertical compensation step has coefficients \(\nu \). Additionally, a vector \(\mathbf {h}\) is generated in each horizontal compensation step. The initial solution is described in Lemma 6, the horizontal compensation step is described in Lemma 8, and the vertical compensation step is described in Lemma 7; see Fig. 5.

4 Numerical results

Expression (3.3) is amenable to numerical calculation after applying truncation. For \(m \ge 0, ~ n \ge 1\),
$$\begin{aligned} \mathbf {p}_L(m,n)&= C \sum _{l = 0}^{\lfloor \frac{L}{2} \rfloor } \sum _{i = 1}^{(s + 1)^l} \sum _{j = 1}^s \eta _{l,d(i) + j} \alpha _{l,i}^m \beta _{l,d(i) + j}^n \mathbf {i}_{\scriptscriptstyle +}(\alpha _{l,i},\beta _{l,d(i) + j}) \nonumber \\&\quad + C \sum _{l = 0}^{\lfloor \frac{L-1}{2} \rfloor } \sum _{i = 1}^{(s + 1)^l} \sum _{j = 1}^s \nu _{l + 1,d(i) + j} \alpha _{l + 1,d(i) + j}^m \beta _{l,d(i) + j}^n \mathbf {i}_{\scriptscriptstyle +}(\alpha _{l + 1,d(i) + j},\beta _{l,d(i) + j}). \end{aligned}$$
(4.1a)
For \(m \ge 0, ~ n = 0\),
$$\begin{aligned} \mathbf {p}_L(m,n)&= C \sum _{l = 0}^{\lfloor \frac{L}{2} \rfloor } \sum _{i = 1}^{(s + 1)^l} \alpha _{l,i}^m \mathbf {h}_{l,i}. \end{aligned}$$
(4.1b)
For \(m \ge 0, ~ n \le -1\),
$$\begin{aligned} \mathbf {p}_L(m,n)&= C \sum _{l = 0}^{\lfloor \frac{L}{2} \rfloor } \sum _{i = 1}^{(s + 1)^l} \eta _{l,i(s + 1)} \alpha _{l,i}^m \beta _{l,i(s + 1)}^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha _{l,i},\beta _{l,i(s + 1)}) \nonumber \\&\quad + C \sum _{l = 0}^{\lfloor \frac{L-1}{2} \rfloor } \sum _{i = 1}^{(s + 1)^l} \nu _{l + 1,i(s + 1)} \alpha _{l + 1,i(s + 1)}^m \beta _{l,i(s + 1)}^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha _{l + 1,i(s + 1)},\beta _{l,i(s + 1)}), \end{aligned}$$
(4.1c)
where the empty sum \(\sum _{l = 0}^{-1}\) is 0. Here, \(L = 0\) indicates only the initial solution and for instance \(L = 3\) indicates an initial solution, a vertical, horizontal, and another vertical compensation. Naturally, as L increases, the approximation becomes more accurate.
Using the equilibrium distribution, we perform several numerical experiments verifying that under SED routing the joint queue length process concentrates between the line where the expected delays in the two queues are equal, \(q_1 + 1 = (q_2 + 1)/s\), and the line where the expected waiting times are equal, \(q_1 = q_2/s\). To this end, we consider a system with service rates 1 and \(s = 3\), \(q = 0.4\) and set \(L = 16\) in (4.1). The equilibrium distribution for this model and varying \(\rho \) is given in Fig. 6 in the form of a heat plot, supporting our claim.
Next, to demonstrate the rate of convergence of the series in (3.3), we derive the number of compensation steps L for which the equilibrium probabilities \(\mathbf {p}(m,n)\) are considered sufficiently accurate: As a measure of accuracy we compute for each state (mn) the minimum number of compensation steps L such that
$$\begin{aligned} \max _{r = 0,1,\ldots ,s - 1} \frac{| p_L(m,n,r) - p_{L - 1}(m,n,r)|}{p_{L - 1}(m,n,r)} < 10^{-4}. \end{aligned}$$
(4.2)
Figure 7 shows that away from the origin, the convergence of the series is very fast, but the convergence is also quite fast for states close to the origin. Note that the distance of a state \((m,n)\) to the origin is directly related to the rate of convergence of the series expression (3.3). In particular, it seems to be a function of \(m + |n|\): faster convergence further away from the origin. This property is formally proven in Sect. 5.7 and can be exploited for numerical computations.
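A sketch of this accuracy check in Python, where `p_trunc(L, m, n, r)` is an assumed interface returning the truncated probability \(p_L(m,n,r)\) of (4.1):

```python
def min_truncation_level(p_trunc, m, n, s, tol=1e-4, max_L=50):
    """Smallest L meeting criterion (4.2) at state (m, n), or None."""
    for L in range(1, max_L + 1):
        err = max(abs(p_trunc(L, m, n, r) - p_trunc(L - 1, m, n, r))
                  / p_trunc(L - 1, m, n, r)
                  for r in range(s))
        if err < tol:
            return L
    return None
```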
Remark 2
(No curse of dimensionality) Take a triangular set of states \(\mathcal {T}_{M} = \{ (m,n) \mid m \in \mathbb {N}_0, ~ n \in \mathbb {Z}, ~ m + |n| \le M \}\), where M is some nonnegative integer. Technically, M needs to be strictly larger than some nonnegative integer N, but we do not go into the details here; the lower bound N is described in Sect. 5.7. For states outside \(\mathcal {T}_{M}\), we can use (4.1) to compute \(\mathbf {p}(m,n)\). Since the number of compensation steps L required to achieve accurate results according to (4.2) decreases with \(m + |n|\), the number of compensation steps L for each state \((m,n) \notin \mathcal {T}_{M}\) is relatively small. For states \((m,n) \in \mathcal {T}_{M}\), a linear system of equilibrium equations needs to be solved to determine the equilibrium probabilities \(\mathbf {p}(m,n)\), where one uses that \(\mathbf {p}(m,n), ~ (m,n) \notin \mathcal {T}_{M}\), are known.
As an example, for \(s = 4\), \(\rho = 0.8\), and \(q = 0.4\), the choice \(M = 3\) implies that \(L = 1\), which gives accurate results for \(\mathbf {p}(m,n), ~ (m,n) \notin \mathcal {T}_{M}\); see Fig. 7. So it is evident that the compensation approach does not suffer from the curse of dimensionality.

5 Applying the compensation approach

5.1 Outline

In the following subsections we describe the main steps in constructing the equilibrium distribution of the SED system. First, we develop some preliminary results in Sect. 5.2, showing that the inner equations have a product-form solution, and we determine one of the two parameters of the product-form solution explicitly. Using these preliminary results, we determine the unique initial solution in Sect. 5.3. The vertical compensation step is outlined in Sect. 5.4. Section 5.5 describes the horizontal compensation procedure. We formalize the resulting solution in terms of a sequence of product forms in Sect. 5.6. This sequence of compensation terms grows as an \((s+1)\)-fold tree and the main difficulty lies in showing that the sequence converges. Section 5.7 is devoted to this issue of convergence.

5.2 Preliminary results

We conjecture that the inner equations have a product-form solution. To this end, we examine a related model that has the same behavior in the interior as the original model. We then show that the equilibrium distribution of this related model can be expressed as a product-form solution. Moreover, modeling this process as a quasi-birth-and-death (QBD) process, we obtain a closed-form expression for one of the parameters of the product-form solution. This procedure is closely related to the one in [2, Sect. 4].
The related model is constructed as follows. We start from the state space of the original model in Fig. 2a and bend the vertical axis as shown in Fig. 8. Note that for this modified model, \(m \in \mathbb {Z}\). We add an additional state \((-1,0,s-1)\) with a transition rate \(\hat{A}_{0,1} = s \mathbf {e}_{0}\) to state \((-1,1)\) and a transition rate \(\hat{B}_{0,-1} = (1 + s)\rho q \mathbf {e}_{s-1}\) to state \((-1,-1)\), where \(\mathbf {e}_{i}\) is a column vector of zeros of length s with a one at position i. The transitions from the states on the diagonal \(m + n = 0, ~ n \ge 1\) are kept consistent with the transitions from a state in the positive interior of the SED model, where the downward transitions are redirected to the state \((-1,0,s-1)\), specifically, \(\hat{A}_{0,-1} = s \mathbf {e}_{0}^T\). Similarly, the transitions from the states on the diagonal \(m - n = 0, ~ n \le -1\) are kept consistent with the ones in the negative interior of the SED model, where the upward transitions are redirected to the state \((-1,0,s-1)\), specifically, \(\hat{B}_{0,1} = \mathbf {1}^T\), where \(\mathbf {1}\) is a column vector of ones of size s.
The characteristic feature of the modified model is that its equilibrium equations for \(m + |n| = 0, ~ |n| \ge 2\) are exactly the same as the ones in the interior and the equilibrium equations for \(n \in \{-1,0,1\}\) are exactly the same as the ones on the horizontal boundary of the original model. In this sense, the modified model has no “vertical boundary” equations.
We next present two lemmas. The first lemma states that a product-form solution exists for the modified model, while in the second lemma we identify the geometric term of the product-form expression. Let \(\hat{\mathbf {p}}(m,n) = (\hat{p}(m,n,0),\hat{p}(m,n,1),\ldots ,\hat{p}(m,n,s-1))^\mathrm{{T}}\) denote the equilibrium distribution of the modified model.
Lemma 1
For \(\rho < 1\), the equilibrium distribution of the modified model exists and is of the form
$$\begin{aligned} \hat{\mathbf {p}}(m,n) = \alpha ^m \hat{\mathbf {q}}(n), \quad m + |n| \ge 0, ~ n \in \mathbb {Z}, \end{aligned}$$
(5.1)
with \(\alpha \in (0,1)\) and \(\hat{\mathbf {q}}(n) = (\hat{q}(n,0),\hat{q}(n,1),\ldots ,\hat{q}(n,s-1))^T\) such that
$$\begin{aligned} \sum _{n = -\infty }^\infty \alpha ^{-|n|} \hat{q}(n,r) < \infty , \quad r = 0,1,\ldots ,s-1. \end{aligned}$$
(5.2)
Proof
First notice that the modified model is stable whenever the original model is stable and that \(\hat{\mathbf {p}}(m,n)\) satisfies the inner and horizontal boundary equations (2.2)–(2.6) for all \(m + |n| \ge 0\), except for state (0, 0). Observe that the modified model restricted to the area \(\{ (m,n) \mid m \in \mathbb {Z}, ~ n \in \mathbb {Z}, ~ m + |n| \ge n_0 \} \cup \{(n_0 - 1,0,s - 1)\}, ~ n_0 = 1,2,\ldots \), bounded by two lines parallel to the diagonal axes, yields exactly the same process. Hence, we can conclude that
$$\begin{aligned} \hat{\mathbf {p}}(m+1,n) = \alpha \hat{\mathbf {p}}(m,n), \quad m + |n| \ge 0, ~ n \in \mathbb {Z}, \end{aligned}$$
(5.3)
and therefore
$$\begin{aligned} \hat{p}(m,n,r) = \alpha ^m \hat{q}(n,r), \quad m + |n| \ge 0, ~ n \in \mathbb {Z}, ~ r = 0,1,\ldots ,s-1. \end{aligned}$$
(5.4)
Finally, we observe that
$$\begin{aligned} \sum _{n = -\infty }^\infty \alpha ^{-|n|} \hat{q}(n,r) = \sum _{n = -\infty }^\infty \hat{p}(-|n|,n,r) < 1, \end{aligned}$$
(5.5)
which concludes the proof. \(\square \)
In Lemma 1 we have shown that the equilibrium distribution of the modified model has a product form which is unique up to a positive multiplicative constant. In the next lemma we determine the unique \(\alpha \) in (5.1). More concretely, the unique \(\alpha \) is equal to \(\rho ^{1 + s}\).
Lemma 2
For \(\rho < 1\), the equilibrium distribution of the modified model is of the form
$$\begin{aligned} \hat{\mathbf {p}}(m,n) = \rho ^{(1+s)m} \hat{\mathbf {q}}(n), \quad m + |n| \ge 0, ~ n \in \mathbb {Z}. \end{aligned}$$
(5.6)
Proof
Let k denote the total number of customers in the system, i.e.,
$$\begin{aligned} k = {\left\{ \begin{array}{ll}(1+s)m + sn + r, &{} n \ge 0, \\ (1+s)m - n + r, &{} n < 0. \end{array}\right. } \end{aligned}$$
(5.7)
Then, the number of customers in the system forms a Markov process and for all states \(k > s\), the transitions are given by (i) from state k to state \(k + 1\) with rate \((1 + s)\rho \); and (ii) from state \(k + 1\) to state k with rate \(1 + s\). Let \(\hat{p}_k\) denote the probability of having k customers in the system. From the balance principle between states k and \(k + 1\) we obtain
$$\begin{aligned} \hat{p}_{k+1} = \rho \hat{p}_k, \quad k > s. \end{aligned}$$
(5.8)
Furthermore,
$$\begin{aligned} \hat{p}_{k+s+1}&= \sum _{\begin{array}{c} (m,n,r) \\ (1+s)m + sn + r = k + s + 1 \end{array}} \hat{p}(m,n,r) + \sum _{\begin{array}{c} (m,n,r) \\ (1+s)m - n + r = k + s + 1 \end{array}} \hat{p}(m,n,r) \nonumber \\&= \sum _{\begin{array}{c} (m,n,r) \\ (1+s)(m-1) + sn + r = k \end{array}} \alpha ^m \hat{q}(n,r) + \sum _{\begin{array}{c} (m,n,r) \\ (1+s)(m-1) - n + r = k \end{array}} \alpha ^m \hat{q}(n,r) \nonumber \\&= \alpha \sum _{\begin{array}{c} (l,n,r) \\ (1+s)l + sn + r = k \end{array}} \alpha ^l \hat{q}(n,r) + \alpha \sum _{\begin{array}{c} (l,n,r) \\ (1+s)l - n + r = k \end{array}} \alpha ^l \hat{q}(n,r) \nonumber \\&= \alpha \hat{p}_k. \end{aligned}$$
(5.9)
Applying (5.8) repeatedly gives \(\hat{p}_{k+s+1} = \rho ^{s+1} \hat{p}_k\); combining this with (5.9) immediately yields \(\alpha = \rho ^{1+s}\). \(\square \)
Combining Lemmas 1 and 2 yields the following result for the original model.
Proposition 1
For \(\rho < 1\), the inner and horizontal boundary equations (2.2)–(2.6) have a unique solution of the form
$$\begin{aligned} \mathbf {p}(m,n) = \rho ^{(1+s)m} \mathbf {q}(n), \quad m \ge 0, ~ n \in \mathbb {Z}, \end{aligned}$$
(5.10)
with \(\mathbf {q}(n) = (q(n,0),q(n,1),\ldots ,q(n,s-1))^T\) nonzero and
$$\begin{aligned} \sum _{n = -\infty }^\infty \rho ^{-(1+s)|n|} q(n,r) < \infty , ~ r = 0,1,\ldots ,s-1. \end{aligned}$$
(5.11)
The solution obtained in Proposition 1 satisfies the inner and horizontal boundary equations. However, we still need to specify the form of the vector \(\mathbf {q}(n)\). It will become apparent, from Lemmas 3, 4 and 5, that the form of the vector \(\mathbf {q}(n)\) is entirely different in the positive quadrant, on the horizontal axis, and in the negative quadrant. Correctly identifying \(\mathbf {q}(n)\) will result in the initial solution that satisfies the equilibrium equations of the interior and the horizontal axis. In the following lemmas we describe the form of a solution satisfying the inner equations.
Lemma 3
(i)
The product form \(\mathbf {p}(m,n) = \alpha ^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ), ~ m \ge 0, ~ n \ge 1\) is a solution of the inner equations of the positive quadrant (2.2) if
$$\begin{aligned} D_{\scriptscriptstyle +}(\alpha ,\beta ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) = \mathbf {0}, \end{aligned}$$
(5.12)
with
$$\begin{aligned} D_{\scriptscriptstyle +}(\alpha ,\beta ) = \alpha \beta A_{0,0} + \alpha \beta ^2 A_{0,-1} + \alpha ^2 A_{-1,1} + \beta ^2 A_{1,-1} \end{aligned}$$
(5.13)
and the eigenvector \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) = (i_{\scriptscriptstyle +}(\alpha ,\beta ,0),i_{\scriptscriptstyle +}(\alpha ,\beta ,1),\ldots ,i_{\scriptscriptstyle +}(\alpha ,\beta ,s-1))^\mathrm{{T}}\) satisfies, for \(r = 0,1,\ldots ,s - 1,\)
$$\begin{aligned} \frac{i_{\scriptscriptstyle +}(\alpha ,\beta ,r)}{i_{\scriptscriptstyle +}(\alpha ,\beta ,0)} = \Bigl ( \frac{\alpha \beta (1 + s)(\rho + 1) - \beta ^2 (1 + s) \rho - \alpha ^2}{\alpha \beta s} \Bigr )^r. \end{aligned}$$
(5.14)
 
(ii)
The product form \(\mathbf {p}(m,n) = \alpha ^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ), ~ m \ge 0, ~ n \le -1\) is a solution of the inner equations of the negative quadrant (2.3) if
$$\begin{aligned} D_{\scriptscriptstyle -}(\alpha ,\beta ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ) = \mathbf {0}, \end{aligned}$$
(5.15)
with
$$\begin{aligned} D_{\scriptscriptstyle -}(\alpha ,\beta ) = \alpha \beta B_{0,0} + \alpha \beta ^2 B_{0,1} + \alpha ^2 B_{-1,-1} + \beta ^2 B_{1,1} \end{aligned}$$
(5.16)
and the eigenvector \(\mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ) = (i_{\scriptscriptstyle -}(\alpha ,\beta ,0),i_{\scriptscriptstyle -}(\alpha ,\beta ,1),\ldots ,i_{\scriptscriptstyle -}(\alpha ,\beta ,s-1))^T\) satisfies, for \(r = 0,1,\ldots ,s - 1,\)
$$\begin{aligned} \frac{i_{\scriptscriptstyle -}(\alpha ,\beta ,r)}{i_{\scriptscriptstyle -}(\alpha ,\beta ,0)} = \frac{\varPsi (\alpha ,\beta ,\psi _{\scriptscriptstyle -}(\beta )) \psi _{\scriptscriptstyle +}(\beta )^r - \varPsi (\alpha ,\beta ,\psi _{\scriptscriptstyle +}(\beta )) \psi _{\scriptscriptstyle -}(\beta )^r}{\varPsi (\alpha ,\beta ,\psi _{\scriptscriptstyle -}(\beta )) - \varPsi (\alpha ,\beta ,\psi _{\scriptscriptstyle +}(\beta ))}, \end{aligned}$$
(5.17)
with
$$\begin{aligned} \psi _{\scriptscriptstyle \pm }(\beta ) = \frac{(1 + s)(\rho + 1) - \beta \pm \sqrt{(\beta - (1 + s)(\rho + 1))^2 - 4s(1 + s)\rho }}{2s} \end{aligned}$$
(5.18)
and
$$\begin{aligned} \varPsi (\alpha ,\beta ,\psi )&= \beta - (1 + s)(\rho + 1) + s \psi + \frac{\beta }{\alpha }(1 + s)\rho \psi ^{s-1} \nonumber \\&= (1 + s) \rho \psi ^{-1} \left( \frac{\beta }{\alpha } \psi ^s - 1 \right) . \end{aligned}$$
(5.19)
 
Proof
(i)
Inserting the product form \(\mathbf {p}(m,n) = \alpha ^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) into (2.2) and dividing by \(\alpha ^{m-1}\beta ^{n-1}\) results in (5.12). For \(\det (D_{\scriptscriptstyle +}(\alpha ,\beta )) = 0\), the rank of \(D_{\scriptscriptstyle +}(\alpha ,\beta )\) is \(s-1\) and thus we have one free variable, which allows us to express \(i_{\scriptscriptstyle +}(\alpha ,\beta ,r)\) in terms of \(i_{\scriptscriptstyle +}(\alpha ,\beta ,0)\). The eigenvector \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) follows from the system of linear equations (5.12), which reads, for \(r = 0,1,\ldots ,s-2\),
$$\begin{aligned} \bigl ( \alpha ^2 + \beta ^2(1 + s)\rho - \alpha \beta (1 + s)(\rho + 1) \bigr ) i_{\scriptscriptstyle +}(\alpha ,\beta ,r) + \alpha \beta s i_{\scriptscriptstyle +}(\alpha ,\beta ,r+1) = 0, \end{aligned}$$
(5.20)
and gives (5.14).
 
(ii)
Inserting the product form \(\mathbf {p}(m,n) = \alpha ^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta )\) into (2.3) and dividing by \(\alpha ^{m-1}\beta ^{-n+1}\) results in (5.15). For \(\det (D_{\scriptscriptstyle -}(\alpha ,\beta )) = 0\), the rank of \(D_{\scriptscriptstyle -}(\alpha ,\beta )\) is \(s-1\) and thus we have one free variable, which allows us to express \(i_{\scriptscriptstyle -}(\alpha ,\beta ,r)\) in terms of \(i_{\scriptscriptstyle -}(\alpha ,\beta ,0)\). The system of linear equations (5.15) reads
$$\begin{aligned}&\bigl ( \alpha \beta ^2 - \alpha \beta (1 + s)(\rho + 1)\bigr ) i_{\scriptscriptstyle -}(\alpha ,\beta ,0) + \alpha \beta s i_{\scriptscriptstyle -}(\alpha ,\beta ,1) \nonumber \\&\quad + \beta ^2 (1 + s)\rho i_{\scriptscriptstyle -}(\alpha ,\beta ,s-1) = 0, \end{aligned}$$
(5.21)
$$\begin{aligned}&\alpha \beta (1 + s)\rho i_{\scriptscriptstyle -}(\alpha ,\beta ,r-1) + \bigl ( \alpha \beta ^2 - \alpha \beta (1 + s)(\rho + 1)\bigr ) i_{\scriptscriptstyle -}(\alpha ,\beta ,r) \nonumber \\&\quad + \alpha \beta s i_{\scriptscriptstyle -}(\alpha ,\beta ,r+1) = 0, \quad r = 1,2,\ldots ,s-2. \end{aligned}$$
(5.22)
 
We construct a solution \(i_{\scriptscriptstyle -}(\alpha ,\beta ,r) = (c_+ \psi _{\scriptscriptstyle +}(\beta )^r + c_- \psi _{\scriptscriptstyle -}(\beta )^r)i_{\scriptscriptstyle -}(\alpha ,\beta ,0)\) and thus require \(c_+ + c_- = 1\). The two roots \(\psi _{\scriptscriptstyle \pm }(\beta )\) follow from substituting \(i_{\scriptscriptstyle -}(\alpha ,\beta ,r) = \psi ^r i_{\scriptscriptstyle -}(\alpha ,\beta ,0)\) in (5.22) and dividing by \(\psi ^{r-1}i_{\scriptscriptstyle -}(\alpha ,\beta ,0)\), yielding
$$\begin{aligned} s \psi ^2 + (\beta - (1 + s)(\rho + 1))\psi + (1 + s)\rho = 0. \end{aligned}$$
(5.23)
The constants \(c_+\) and \(c_-\) follow from (5.21) and the simplification follows from the fact that \(i_{\scriptscriptstyle -}(\alpha ,\beta ,r) = \psi ^r i_{\scriptscriptstyle -}(\alpha ,\beta ,0)\) satisfies (5.23). \(\square \)
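The two eigenvectors can be evaluated numerically by transcribing (5.14) and (5.17)–(5.19) directly; a NumPy sketch (ours), using the normalization of Remark 3:

```python
import numpy as np

def eigvec_plus(alpha, beta, s, rho):
    """i_+(alpha, beta) from (5.14), first element normalized to 1."""
    ratio = (alpha * beta * (1 + s) * (rho + 1)
             - beta ** 2 * (1 + s) * rho - alpha ** 2) / (alpha * beta * s)
    return ratio ** np.arange(s)

def eigvec_minus(alpha, beta, s, rho):
    """i_-(alpha, beta) from (5.17)-(5.19), first element normalized to 1."""
    a = beta - (1 + s) * (rho + 1)
    disc = np.sqrt(a ** 2 - 4 * s * (1 + s) * rho + 0j)          # (5.18)
    psi_p, psi_m = (-a + disc) / (2 * s), (-a - disc) / (2 * s)

    def Psi(psi):                                                # (5.19)
        return (1 + s) * rho / psi * (beta / alpha * psi ** s - 1)

    r = np.arange(s)
    return ((Psi(psi_m) * psi_p ** r - Psi(psi_p) * psi_m ** r)
            / (Psi(psi_m) - Psi(psi_p)))
```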
Remark 3
(Normalized eigenvectors) Without loss of generality we henceforth set the first elements \(i_{\scriptscriptstyle +}(\alpha ,\beta ,0) = i_{\scriptscriptstyle -}(\alpha ,\beta ,0) = 1\) for all \(\alpha \) and \(\beta \). Since normalization of the equilibrium distribution follows afterwards, one can arbitrarily select the value of the first element of the eigenvectors \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) and \(\mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta )\).
We wish to determine the \(\alpha \)s and \(\beta \)s with \(0< |\alpha |,|\beta | < 1\), for which (5.12) and (5.15) have nonzero solutions \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) and \(\mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta )\), respectively. Equivalently, we determine \(\alpha \)s and \(\beta \)s that satisfy \(\det (D_{\scriptscriptstyle +}(\alpha ,\beta )) = 0\) or \(\det (D_{\scriptscriptstyle -}(\alpha ,\beta )) = 0\). We first investigate the location and the number of zeros of \(\det (D_{\scriptscriptstyle +}(\alpha ,\beta )) = 0\).
Lemma 4
The equation \(\det (D_{\scriptscriptstyle +}(\alpha ,\beta )) = 0\) assumes the form
$$\begin{aligned} \bigl ( \alpha \beta (1 + s)(\rho + 1) - \beta ^2(1 + s)\rho - \alpha ^2 \bigr )^s - \beta (\alpha \beta s)^s = 0 \end{aligned}$$
(5.24)
and can be rewritten as
$$\begin{aligned} \frac{\alpha \beta (1 + s)(\rho + 1) - \beta ^2(1 + s)\rho - \alpha ^2}{\alpha \beta s} = u_i \beta ^{1/s}, \quad i = 1,2,\ldots ,s, \end{aligned}$$
(5.25)
where \(u_i\) is the i-th root of unity of \(u^s = 1\) and \(\beta ^{1/s}\) is the principal root.
(i)
For every \(\alpha \) with \(|\alpha | \in (0,1)\), equation (5.25) has exactly one root \(\beta _i\) inside the open circle of radius \(|\alpha |\) for each i. Furthermore, all s roots \(\beta _i\) are distinct.
 
(ii)
For every \(\beta \) with \(|\beta | \in (0,1)\), equation (5.25) has exactly one root \(\alpha _i\) inside the open circle of radius \(|\beta |\) for each i. Furthermore, all s roots \(\alpha _i\) are distinct.
 
Proof
Dividing (5.24) by \((\alpha \beta s)^s\) and taking the s-th root reduces (5.24) to (5.25).
(i) The proof consists of three main steps.
(a)
For every fixed \(\alpha \) with \(|\alpha | \in (0,1)\), equation (5.24) has exactly s roots \(\beta \) inside the open circle of radius \(|\alpha |\).
 
(b)
These s roots \(\beta \) are distinct.
 
(c)
For every fixed \(\alpha \) with \(|\alpha | \in (0,1)\), equation (5.25) has at least one root \(\beta _i\) inside the open circle of radius \(|\alpha |\) for each i.
 
Combining these three steps proves that for every fixed \(\alpha \) with \(|\alpha | \in (0,1)\), equation (5.25) has exactly one root \(\beta _i\) inside the open circle of radius \(|\alpha |\) for each i and the roots \(\beta _i\) are distinct.
(i)(a) Equation (5.24) is a polynomial of degree 2s in \(\beta \) and we will show that exactly s roots are inside the open circle of radius \(|\alpha |\). Other possible roots inside the open unit circle appear not to be useful, since these will produce a divergent solution, as follows from (5.11). Divide both sides of (5.24) by \(\alpha ^{2s}\) and set \(z = \beta /\alpha \) to obtain
$$\begin{aligned} f(z)^s - \alpha s^s z^{s+1} = 0, \end{aligned}$$
(5.26)
with \(f(z) = (1 + s)(\rho + 1)z - (1 + s)\rho z^2 - 1\). Then f(z) has the two roots
$$\begin{aligned} v_{\scriptscriptstyle \pm } = \frac{\rho + 1 \pm \sqrt{(\rho + 1)^2 - 4\rho /(1+s)}}{2\rho }. \end{aligned}$$
(5.27)
Observe that, for \(0< \rho < 1\), \(v_{\scriptscriptstyle +} > 1\) and \(0< v_{\scriptscriptstyle -} < 1\). Furthermore, for \(|z| = 1\) one establishes
$$\begin{aligned} |f(z)|^s&= | (1 + s)(\rho + 1)z - (1 + s)\rho z^2 - 1 |^s \nonumber \\&\ge | (1 + s)(\rho + 1)|z| - (1 + s)\rho |z|^2 - 1 |^s = s^s > |\alpha |s^s. \end{aligned}$$
(5.28)
Hence, by Rouché’s theorem (see, for example, [24, Theorem 9.3.2]), equation (5.26) has exactly s roots inside the open unit circle. Thus, (5.24) has for each fixed \(|\alpha | \in (0,1)\) exactly s roots \(\beta \) inside the open circle of radius \(|\alpha |\).
(i)(b) Label the left-hand side of (5.26) as g(z). For a root to be of (at least) multiplicity two, it must hold that \(g(z) = g'(z) = 0\), which gives
$$\begin{aligned} (\rho + 1)z + (s - 1)\rho z^2 - 1 = 0, \end{aligned}$$
(5.29)
with solutions
$$\begin{aligned} \omega _{\scriptscriptstyle \pm } = \frac{-(\rho + 1) \pm \sqrt{(\rho + 1)^2 + 4(s - 1)\rho }}{2(s - 1)\rho }. \end{aligned}$$
(5.30)
Note that for \(0< \rho < 1\), \(\omega _{\scriptscriptstyle -} < 0\), \(0< \omega _{\scriptscriptstyle +} < 1\) and \(\omega _{\scriptscriptstyle +}\) is a monotonically decreasing function of \(\rho \) with \(\omega _{\scriptscriptstyle +} = 1\) for \(\rho = 0\) and \(\omega _{\scriptscriptstyle +} = (\sqrt{s}-1)/(s-1)\) for \(\rho = 1\). For \(z = \omega _{\scriptscriptstyle +}\) equation (5.26) reveals the contradiction
$$\begin{aligned} \alpha = \frac{f(\omega _{\scriptscriptstyle +})^s}{s^s \omega _{\scriptscriptstyle +}^{s+1}} = \frac{\bigl ( 1 + \rho (1 - 2 \omega _{\scriptscriptstyle +}) \bigr )^s}{\omega _{\scriptscriptstyle +}} \ge 1. \end{aligned}$$
(5.31)
Thus, the s roots \(\beta \) inside the open circle of radius \(|\alpha |\) are distinct.
(i)(c) This step is presented in Appendix 1. The proof was communicated to us by A. J. E. M. Janssen.
(ii) Set \(z = \alpha /\beta \), multiply (5.25) by \(sz\) and rearrange to obtain
$$\begin{aligned} z \bigl ( (1 + s)(\rho + 1) - u_i \beta ^{1/s} s \bigr ) = z^2 + (1 + s)\rho . \end{aligned}$$
(5.32)
Label the left-hand side as f(z) and the right-hand side as g(z). Note that f(z) has a single root within the unit circle. For \(|z| = 1\) one establishes
$$\begin{aligned} |f(z)|&= \bigl | z \bigl ( (1 + s)(\rho + 1) - u_i \beta ^{1/s} s \bigr )\bigr | = \bigl | (1 + s)(\rho + 1) - u_i \beta ^{1/s}s \bigr | \nonumber \\&\ge (1 + s)(\rho + 1) - |u_i| \bigl |\beta ^{1/s}\bigr | s = (1 + s)\rho + 1 + s\bigl (1 - \bigl |\beta ^{1/s}\bigr |\bigr ) \nonumber \\&> (1 + s)\rho + 1 \ge | z^2 + (1 + s)\rho | = |g(z)|. \end{aligned}$$
(5.33)
Hence, by Rouché’s theorem we establish that (5.25) has exactly one root \(\alpha _i\) inside the open circle of radius \(|\beta |\) for each i. These roots \(\alpha _i\) are distinct, since the \(u_i\) are distinct. \(\square \)
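Lemma 4(i) can also be checked numerically by expanding (5.24) as a polynomial of degree 2s in \(\beta \) and keeping the roots inside the open circle of radius \(|\alpha |\); a NumPy sketch (ours), with coefficients in ascending order as NumPy expects. For the initial solution of Sect. 5.3 one would call it with \(\alpha = \rho ^{1 + s}\).

```python
import numpy as np
from numpy.polynomial import polynomial as P

def betas_inside(alpha, s, rho):
    """Sketch: the s roots beta of (5.24) with |beta| < |alpha| (Lemma 4(i))."""
    # inner = alpha*beta*(1+s)*(rho+1) - beta^2*(1+s)*rho - alpha^2
    inner = np.array([-alpha ** 2, alpha * (1 + s) * (rho + 1), -(1 + s) * rho])
    lhs = P.polypow(inner, s, maxpower=s)     # first term of (5.24)
    rhs = np.zeros(s + 2)
    rhs[s + 1] = (alpha * s) ** s             # beta*(alpha*beta*s)^s term
    n = max(len(lhs), len(rhs))
    coeffs = np.pad(lhs, (0, n - len(lhs))) - np.pad(rhs, (0, n - len(rhs)))
    roots = P.polyroots(coeffs)
    return roots[np.abs(roots) < abs(alpha)]
```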
Remark 4
(Identical eigenvectors) We note that for fixed \(\beta \) with \(|\beta | \in (0,1)\) and fixed i, (5.25) is a quadratic equation in \(\alpha \) and has two solutions, say \(\alpha _{\scriptscriptstyle +}\) and \(\alpha _{\scriptscriptstyle -}\), satisfying \(|\alpha _{\scriptscriptstyle -}|< |\beta | < |\alpha _{\scriptscriptstyle +}|\). Then, combining (5.14) and (5.25) we have the property that \(\mathbf {i}_{\scriptscriptstyle +}(\alpha _{\scriptscriptstyle +},\beta ) = \mathbf {i}_{\scriptscriptstyle +}(\alpha _{\scriptscriptstyle -},\beta )\), which we will use later.
We next provide information on the location and the number of zeros of \(\det (D_{\scriptscriptstyle -}(\alpha ,\beta )) = 0\). The polynomial form of \(\det (D_{\scriptscriptstyle -}(\alpha ,\beta )) = 0\) is too complicated for application of Rouché’s theorem. Therefore we apply the following matrix variant of Rouché’s theorem.
Theorem 1
(de Smit [28]) Let \(A(z) = (a_{i,j}(z))\) and \(B(z) = (b_{i,j}(z))\) be complex \(n \times n\) matrices, where B(z) is diagonal. The elements \(a_{i,j}(z)\) and \(b_{i,j}(z)\), \(0 \le i,j \le n - 1\) are meromorphic functions in a simply connected region \(\mathcal {S}\) in which \(\mathcal {T}\) is the set of all poles of these functions. \(\mathcal {C}\) is a rectifiable closed Jordan curve in \(\mathcal {S} - \mathcal {T}\). Let \(x_B\) and \(x_{A+B}\) be the number of zeros inside \(\mathcal {C}\) of \(\det (B(z))\) and \(\det (A(z) + B(z))\), respectively, and \(y_B\) and \(y_{A+B}\) the number of poles inside \(\mathcal {C}\) of \(\det (B(z))\) and \(\det (A(z)+B(z))\) (zeros and poles of higher order are counted according to this order). If \(|b_{i,i}(z)| > \sum _{j = 0}^{n-1} |a_{i,j}(z)|\) on \(\mathcal {C}\) for all \(i = 0,1,\ldots ,n - 1\), then
$$\begin{aligned} x_{A+B} - y_{A+B} = x_B - y_B. \end{aligned}$$
(5.34)
Lemma 5
(i)
For every \(\alpha \) with \(|\alpha | \in (0,1)\), the equation \(\det (D_{\scriptscriptstyle -}(\alpha ,\beta )) = 0\) can be rewritten as
$$\begin{aligned} \alpha ^2 s^s + \beta ^2 ((1 + s)\rho )^s - \alpha \beta s^s ( \psi _{\scriptscriptstyle +}(\beta )^s + \psi _{\scriptscriptstyle -}(\beta )^s ) = 0, \end{aligned}$$
(5.35)
where \(\psi _{\scriptscriptstyle \pm }(\beta )\) is given in (5.18), and has exactly one root \(\beta \) in the open circle of radius \(|\alpha |\).
 
(ii)
For every \(\beta \) with \(|\beta | \in (0,1)\), equation (5.35) has exactly one root \(\alpha \) in the open circle of radius \(|\beta |\).
 
Proof
(i) Setting \(z = \beta /\alpha \), we observe that
$$\begin{aligned} D_{\scriptscriptstyle -}(\alpha ,\beta ) = \alpha ^2 (A(z) + B(z)), \end{aligned}$$
(5.36)
with
$$\begin{aligned} A(z)&= (1 + s) \rho z L+ s z L^T + (1 + s)\rho z^2 M^{(0,s-1)} + s M^{(s-1,0)}, \end{aligned}$$
(5.37)
$$\begin{aligned} B(z)&= z ( \alpha z - (1 + s)(\rho + 1)) I. \end{aligned}$$
(5.38)
Hence, \(\det (D_{\scriptscriptstyle -}(\alpha ,\beta )) = \alpha ^{2s} \det (A(z) + B(z))\). Let us take \(\mathcal {C}\) to be the unit circle. Furthermore, we can verify that, for \(|\alpha | \in (0,1)\), \(|z| = 1\), and \(i = 0,1,\ldots ,s-1,\)
$$\begin{aligned} |b_{i,i}(z)| = |(1 + s)(\rho + 1) - \alpha z| > (1 + s)\rho + s = \sum _{j = 0}^{s-1} |a_{i,j}(z)| \end{aligned}$$
(5.39)
and the number of zeros and poles inside \(\mathcal {C}\) of \(\det (B(z))\) are \(x_B = s\) and \(y_B = 0\), respectively. The number of poles of \(\det (A(z) + B(z))\) inside \(\mathcal {C}\) is \(y_{A+B} = 0\). By Theorem 1 we derive that \(\det (A(z) + B(z))\) has exactly s roots inside \(\mathcal {C}\). Moreover,
$$\begin{aligned} \det (A(z) + B(z))&= (-z)^{s-1} \Bigl ( s^s + z^2 ((1 + s)\rho )^s \nonumber \\&\quad - z \sum _{i = 0}^{\lfloor s/2 \rfloor } (-1)^i \frac{s}{s - i} \left( {\begin{array}{c}s - i\\ i\end{array}}\right) \nonumber \\&\quad \times (s(1 + s)\rho )^i ((1 + s)(\rho + 1) - \alpha z)^{s - 2i}\Bigr ). \end{aligned}$$
(5.40)
So, \(\det (A(z) + B(z))\) has the root 0 of multiplicity \(s - 1\) and exactly one nonzero root inside \(\mathcal {C}\). The summation in (5.40) is known as the Waring formula. By using the following identity:
$$\begin{aligned} x^s + y^s = \sum _{i = 0}^{\lfloor s/2 \rfloor } (-1)^i \frac{s}{s - i} \left( {\begin{array}{c}s - i\\ i\end{array}}\right) (xy)^i (x + y)^{s - 2i}, \end{aligned}$$
(5.41)
we can simplify the Waring formula; see, for example, [19, Eq. (1)]. So, we set \(xy = s(1 + s)\rho \) and \(x + y = (1 + s)(\rho + 1) - \alpha z\). Recall from (5.23) that \(\psi _{\scriptscriptstyle +}(\alpha z)\psi _{\scriptscriptstyle -}(\alpha z) = (1 + s)\rho /s\). This allows us to establish that \(x = s \psi _{\scriptscriptstyle +}(\alpha z)\) and \(y = s \psi _{\scriptscriptstyle -}(\alpha z)\). Thus, (5.40) simplifies to
$$\begin{aligned}&\det (A(z) + B(z)) \nonumber \\&\quad = (-z)^{s-1} \Bigl ( s^s + z^2 ((1 + s)\rho )^s - z s^s (\psi _{\scriptscriptstyle +}(\alpha z)^s + \psi _{\scriptscriptstyle -}(\alpha z)^s) \Bigr ). \end{aligned}$$
(5.42)
Since \(\det (A(z) + B(z))\) has exactly s roots in the unit circle and 0 is a root of multiplicity \(s - 1\), we are led to the conclusion that there exists exactly one nonzero root in the unit circle. Thus, (5.35) has exactly one nonzero root \(\beta \) in the open circle of radius \(|\alpha |\).
(ii) The proof is identical to the proof of (i). \(\square \)
Remark 5
(Notation) For a fixed \(\alpha \) with \(|\alpha | \in (0,1)\) the roots of (5.24) are denoted by \(\beta _1,\beta _2,\ldots ,\beta _s\) with \(|\beta _i| < |\alpha |, ~ i = 1,2,\ldots ,s\) and the single root of (5.35) is denoted by \(\beta _{s + 1}\) with \(|\beta _{s + 1}| < |\alpha |\). Similar indexing exists for the roots \(\alpha \) for a fixed \(\beta \) with \(|\beta | \in (0,1)\).

5.3 Initial solution

Lemmas 3, 4, and 5 characterize basic solutions satisfying the inner equations (2.2)–(2.3). In Lemma 6 we specify the form of \(\mathbf {q}(n)\).
Lemma 6
(Initial solution) For \(|\alpha | \in (0,1)\), let \(\beta _1,\beta _2,\ldots ,\beta _s\) be the roots of (5.24) with \(|\beta _i| < |\alpha |, ~ i = 1,2,\ldots ,s\) and \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i)\) the corresponding nonzero vectors satisfying (5.14). Symmetrically, let \(\beta _{s + 1}\) be the root of (5.35) with \(|\beta _{s + 1}| < |\alpha |\) and \(\mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1})\) the corresponding nonzero vector satisfying (5.17). Then there exists exactly one \(\alpha \) with \(|\alpha | \in (0,1)\) for which there exists a nonzero vector \(\mathbf {h}= (h(0),h(1),\ldots ,h(s-1))^T\) and coefficients \(\eta _1,\eta _2,\ldots ,\eta _{s+1}\) such that
$$\begin{aligned} \mathbf {p}(m,n) = {\left\{ \begin{array}{ll}\alpha ^m \sum _{i = 1}^s \eta _i \beta _i^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i), &{} m \ge 0, ~ n \ge 1, \\ \alpha ^m \mathbf {h}, &{} m \ge 0, ~ n = 0, \\ \eta _{s + 1} \alpha ^m \beta _{s + 1}^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1}), &{} m \ge 0, ~ n \le -1, \end{array}\right. } \end{aligned}$$
(5.43)
satisfies (2.2)–(2.6). This unique \(\alpha \) is equal to \(\rho ^{1 + s}\).
The vector \(\mathbf {h}\) is given by
$$\begin{aligned} \mathbf {h}= \alpha \bigl ( A_{0,1} + \alpha A_{-1,1} \bigr )^{-1} \sum _{i = 1}^s \eta _i \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) \end{aligned}$$
(5.44)
and the coefficients \(\eta _1,\eta _2,\ldots ,\eta _s\) satisfy
$$\begin{aligned}&\sum _{i = 1}^s \eta _i \Bigl ( \beta _i \bigl ( A_{1,-1} + \alpha A_{0,-1} \bigr ) + \alpha ^2 B_{0,0} \bigl ( A_{0,1} + \alpha A_{-1,1} \bigr )^{-1} \Bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) \nonumber \\&\qquad = - \eta _{s + 1} \beta _{s + 1} \bigl ( B_{1,1} + \alpha B_{0,1} \bigr ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1}) \end{aligned}$$
(5.45)
and \(\eta _{s + 1} = 1\).
Proof
Substituting (5.43) into (2.4) yields
$$\begin{aligned} \mathbf {h}&= - \frac{1}{\alpha } \bigl ( A_{0,1} + \alpha A_{-1,1} \bigr )^{-1} \nonumber \\&\quad \times \sum _{i = 1}^s \eta _i \bigl ( \alpha \beta _i A_{0,0} + \beta _i^2 A_{1,-1} + \alpha \beta _i^2 A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i). \end{aligned}$$
(5.46)
Furthermore, by using (5.15) for the summands one obtains (5.44). Equation (5.45) follows from substituting (5.43) into (2.6) and using (5.44). One can arbitrarily choose \(\eta _{s + 1}\) due to the normalization that follows at the end. \(\square \)

5.4 Compensation on the vertical boundary

The initial solution (5.43) does not satisfy the vertical boundary equations of the positive and negative quadrants. We consider a term \(\eta \alpha ^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) that satisfies the positive inner balance equations and stems from a solution that satisfies both the inner and horizontal boundary equations. For now, we refer to this term as the original term. In the case of the initial solution (5.43), this would be one of the s product-form solutions of the positive quadrant. The idea behind the compensation approach is to add compensation terms \(\sum _{i = 1}^s \nu _i \alpha _i^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha _i,\beta )\) to the original term, such that the linear combination
$$\begin{aligned} \eta \alpha ^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) + \sum _{i = 1}^s \nu _i \alpha _i^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha _i,\beta ), \quad m \ge 0, ~ n \ge 1, \end{aligned}$$
(5.47)
satisfies both (2.7) and (2.2). Substituting (5.47) in (2.7) gives
$$\begin{aligned} \eta \beta ^{n-1} V_+(\alpha ,\beta ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) + \sum _{i = 1}^s \nu _i \beta ^{n-1} V_+(\alpha _i,\beta ) \mathbf {i}_{\scriptscriptstyle +}(\alpha _i,\beta ) = \mathbf {0}, \quad n \ge 2, \end{aligned}$$
(5.48)
where \(V_+(\alpha ,\beta ) = \beta (A_{0,0} + I) + \beta ^2 A_{0,-1} + \alpha A_{-1,1}\). Note that we indeed require that the original term and the compensation terms share the same \(\beta \) and therefore we obtain the s roots \(\alpha _1,\alpha _2,\ldots ,\alpha _s\) with \(|\alpha _i| < |\beta |\) from (5.24). Clearly, (5.47) now satisfies the positive inner equations (2.2). This leaves the s coefficients \(\nu _i\) to satisfy the s equations (5.48). However, from Remark 4 we deduce that there exists a specific i for which \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) = \mathbf {i}_{\scriptscriptstyle +}(\alpha _i,\beta )\) and thus there is only one coefficient that is required to be nonzero.
Let us now consider a term from the negative quadrant \(\eta \alpha ^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta )\) that satisfies the negative inner balance equations and stems from the same solution that satisfies both the inner and horizontal boundary equations. Let us now refer to this term as the original term. In the case of the initial solution (5.43), this would be the solution on the negative quadrant. As before, the compensation approach dictates adding a compensation term \(\nu _{s + 1} \alpha _{s + 1}^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha _{s + 1},\beta )\) to the original term, such that the linear combination
$$\begin{aligned} \eta \alpha ^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ) + \nu _{s + 1} \alpha _{s + 1}^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha _{s + 1},\beta ), \quad m \ge 0, ~ n \le -1, \end{aligned}$$
(5.49)
satisfies both (2.8) and (2.3). Substituting (5.49) in (2.8) gives
$$\begin{aligned}&\eta \beta ^{-n+1} V_-(\alpha ,\beta ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ) \nonumber \\&\quad + \nu _{s + 1} \beta ^{-n+1} V_-(\alpha _{s + 1},\beta ) \mathbf {i}_{\scriptscriptstyle -}(\alpha _{s + 1},\beta ) = \mathbf {0}, \quad n \le -2, \end{aligned}$$
(5.50)
where \(V_-(\alpha ,\beta ) = \beta (B_{0,0} + sM^{(0,0)}) + \beta ^2 B_{0,1} + \alpha B_{-1,-1}\). Note that we indeed require that the original term and the compensation term share the same \(\beta \) and therefore we obtain the root \(\alpha _{s + 1}\) with \(|\alpha _{s + 1}| < |\beta |\) from (5.35). This ensures that the linear combination (5.49) satisfies the negative inner equations (2.3). Even though there is only one coefficient \(\nu _{s + 1}\) for s equations, it turns out that this provides enough freedom to satisfy (5.50).
Lemma 7
(Vertical compensation)
(i)
Consider the product form \(\eta \alpha ^m\beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) that satisfies the positive inner equations (2.2) that stems from a solution that satisfies the inner and horizontal boundary equations. For this \(\alpha \) and fixed \(\beta \), let \(\alpha _1\) be the root that satisfies \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) = \mathbf {i}_{\scriptscriptstyle +}(\alpha _1,\beta )\) with \(|\alpha _1| < |\beta |\). Then there exists a coefficient \(\nu _1\) such that
$$\begin{aligned} \mathbf {p}(m,n) = \eta \alpha ^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) + \nu _1 \alpha _1^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha _1,\beta ), \quad m \ge 0, ~ n \ge 1, \end{aligned}$$
(5.51)
satisfies (2.2) and (2.7). The coefficient \(\nu _1\) satisfies
$$\begin{aligned} \nu _1 = - \eta \frac{1 - \frac{\beta }{\alpha } (1 + s)\rho }{1 - \frac{\beta }{\alpha _1} (1 + s)\rho }. \end{aligned}$$
(5.52)
 
(ii)
Consider a product form \(\eta \alpha ^m\beta ^n \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta )\) that satisfies the negative inner equations (2.3) that stems from a solution that satisfies the inner and horizontal boundary equations. For this \(\beta \), let \(\alpha _{s + 1}\) be the root of (5.35) with \(|\alpha _{s + 1}| < |\beta |\) and let \(\mathbf {i}_{\scriptscriptstyle -}(\alpha _{s + 1},\beta )\) be the corresponding nonzero vector satisfying (5.17). Then there exists a coefficient \(\nu _{s + 1}\) such that
$$\begin{aligned} \mathbf {p}(m,n)&= \eta \alpha ^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ) \nonumber \\&\quad + \nu _{s + 1} \alpha _{s + 1}^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha _{s + 1},\beta ), \quad m \ge 0, ~ n \le -1, \end{aligned}$$
(5.53)
satisfies (2.3) and (2.8). The coefficient \(\nu _{s + 1}\) is given by
$$\begin{aligned} \nu _{s + 1} = -\eta \frac{s - \frac{\beta }{\alpha } (1 + s)\rho i_{\scriptscriptstyle -}(\alpha ,\beta ,s-1)}{s - \frac{\beta }{\alpha _{s + 1}} (1 + s)\rho i_{\scriptscriptstyle -}(\alpha _{s + 1},\beta ,s-1)}. \end{aligned}$$
(5.54)
 
Proof
(i) Use (5.12) to establish that
$$\begin{aligned} V_+(\alpha ,\beta )\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) = \left( \beta I- \frac{\beta ^2}{\alpha } A_{1,-1} \right) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) = \beta \left( 1 - \frac{\beta }{\alpha } (1 + s)\rho \right) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ). \end{aligned}$$
(5.55)
Then using Remark 4 we can establish that there exists a single \(\nu _i\) that is nonzero. We label the single nonzero coefficient as \(\nu _1\) and find it from (5.48) with the help of (5.55).
(ii) Use (5.15) to establish that
$$\begin{aligned} V_-(\alpha ,\beta )\mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ) = \beta \left( s M^{(0,0)} - \frac{\beta }{\alpha } (1 + s)\rho M^{(0,s-1)} \right) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ). \end{aligned}$$
(5.56)
Thus, (5.50) reduces to a single equation and \(\nu _{s + 1}\) follows directly. \(\square \)
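The compensation coefficients (5.52) and (5.54) translate directly into code. Below is a minimal sketch (illustrative only), assuming the roots \(\alpha \), \(\alpha _1\), \(\alpha _{s+1}\), \(\beta \) and the eigenvector elements \(i_{\scriptscriptstyle -}(\cdot ,\beta ,s-1)\) have already been computed elsewhere:
```python
# Direct transcription of the vertical compensation coefficients.
# All arguments (roots, eigenvector elements) are assumed precomputed.
def nu_pos(eta, alpha, alpha_1, beta, rho, s):
    """Coefficient nu_1 of (5.52) for the positive quadrant."""
    lam = (1 + s) * rho
    return -eta * (1 - beta / alpha * lam) / (1 - beta / alpha_1 * lam)

def nu_neg(eta, alpha, alpha_s1, beta, i_minus_a, i_minus_a1, rho, s):
    """Coefficient nu_{s+1} of (5.54); i_minus_* are i_-(., beta, s-1)."""
    lam = (1 + s) * rho
    return -eta * (s - beta / alpha * lam * i_minus_a) \
                / (s - beta / alpha_s1 * lam * i_minus_a1)
```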

5.5 Compensation on the horizontal boundary

The solution obtained after compensation on the vertical boundary, as outlined in Lemma 7, does not satisfy the horizontal boundary equations. So, we need to compensate for the error on the horizontal boundary by adding new terms. The compensation procedure on the horizontal boundary differs in a few respects from the one described in the previous section; we outline these differences by informally treating the compensation of a product-form term of the positive quadrant. The key difference is that adding compensation terms only for the positive quadrant does not make the solution satisfy the horizontal boundary equations. Intuitively, this is because the horizontal boundary, i.e., \(n = 0\), is connected to both the positive and negative quadrants. Thus, for the horizontal compensation step, we need to add product-form terms for the complete positive half-plane. These product-form terms turn out to be nearly identical to the initial solution, which indeed satisfies both the inner and horizontal boundary equations. The same procedure holds for a product-form term of the negative quadrant.
The formal compensation on the horizontal boundary is outlined in the next lemma.
Lemma 8
(Horizontal compensation)
(i)
Consider the product form \(\nu \alpha ^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) that satisfies the positive inner equations (2.2) that stems from a solution that satisfies the inner and vertical boundary equations. For this \(\alpha \), let \(\beta _1,\beta _2,\ldots ,\beta _s\) be the roots of (5.24) with \(|\beta _i| < |\alpha |, ~ i = 1,2,\ldots ,s\) and let \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i)\) be the corresponding nonzero vectors satisfying (5.14). Symmetrically, for this \(\alpha \), let \(\beta _{s + 1}\) be the root of (5.35) with \(|\beta _{s + 1}| < |\alpha |\) and let \(\mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1})\) be the corresponding nonzero vector satisfying (5.17). Then there exists a nonzero vector \(\mathbf {h}\) and coefficients \(\eta _1,\eta _2,\ldots ,\eta _{s + 1}\) such that
$$\begin{aligned} \mathbf {p}(m,n) = {\left\{ \begin{array}{ll} \begin{aligned}&{}\nu \alpha ^m \beta ^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) \\ &{}+ \sum _{i = 1}^s \eta _i \alpha ^m \beta _i^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) \end{aligned}, &{} m \ge 0, ~ n \ge 1, \\ \alpha ^m \mathbf {h}, &{} m \ge 0, ~ n = 0, \\ \eta _{s + 1} \alpha ^m \beta _{s + 1}^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1}), &{} m \ge 0, ~ n \le -1, \end{array}\right. } \end{aligned}$$
(5.57)
satisfies (2.2)–(2.6). The vector \(\mathbf {h}\) and the coefficients \(\eta _1,\eta _2,\ldots ,\eta _{s + 1}\) satisfy
$$\begin{aligned}&\bigl (A_{0,1} + \alpha A_{-1,1} \bigr ) \mathbf {h}- \alpha \sum _{i = 1}^s \eta _i \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) = \nu \alpha \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ), \end{aligned}$$
(5.58)
$$\begin{aligned}&\alpha B_{0,0} \mathbf {h}+ \sum _{i = 1}^s \eta _i \beta _i \bigl ( A_{1,-1} + \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) \nonumber \\&\quad + \eta _{s + 1} \beta _{s + 1} \bigl ( B_{1,1} + \alpha B_{0,1} \bigr ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1}) = - \nu \beta \bigl ( A_{1,-1} + \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ), \end{aligned}$$
(5.59)
$$\begin{aligned}&\alpha s h(0) + (1 + s)\rho q h(s-1) - \eta _{s + 1} \alpha s = 0. \end{aligned}$$
(5.60)
 
(ii)
Consider the product form \(\nu \alpha ^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta )\) that satisfies the negative inner equations (2.3) that stems from a solution that satisfies the inner and vertical boundary equations. For this \(\alpha \), let \(\beta _1,\beta _2,\ldots ,\beta _s\) be the roots of (5.24) with \(|\beta _i| < |\alpha |, ~ i = 1,2,\ldots ,s\) and let \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i)\) be the corresponding nonzero vectors satisfying (5.14). Symmetrically, for this \(\alpha \), let \(\beta _{s + 1}\) be the root of (5.35) with \(|\beta _{s + 1}| < |\alpha |\) and let \(\mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1})\) be the corresponding nonzero vector satisfying (5.17). Then there exists a nonzero vector \(\mathbf {h}\) and coefficients \(\eta _1,\eta _2,\ldots ,\eta _{s + 1}\) such that
$$\begin{aligned} \mathbf {p}(m,n) = {\left\{ \begin{array}{ll} \sum _{i = 1}^s \eta _i \alpha ^m \beta _i^n \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i), &{} m \ge 0, ~ n \ge 1, \\ \alpha ^m \mathbf {h}, &{} m \ge 0, ~ n = 0, \\ \begin{aligned}{}[b]&{}\nu \alpha ^m \beta ^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ) \\ &{}+ \eta _{s + 1} \alpha ^m \beta _{s + 1}^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1}) \end{aligned}, &{} m \ge 0, ~ n \le -1, \end{array}\right. } \end{aligned}$$
(5.61)
satisfies (2.2)–(2.6).
The vector \(\mathbf {h}\) and the coefficients \(\eta _1,\eta _2,\ldots ,\eta _{s + 1}\) satisfy
$$\begin{aligned} \bigl (A_{0,1} + \alpha A_{-1,1} \bigr ) \mathbf {h}- \alpha \sum _{i = 1}^s \eta _i \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i)&= \mathbf {0}, \end{aligned}$$
(5.62)
$$\begin{aligned} \alpha B_{0,0} \mathbf {h}+ \sum _{i = 1}^s \eta _i \beta _i \bigl ( A_{1,-1}&+ \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) \nonumber \\ + \eta _{s + 1} \beta _{s + 1} \bigl ( B_{1,1} + \alpha B_{0,1} \bigr ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1})&= - \nu \beta \bigl ( B_{1,1} + \alpha B_{0,1} \bigr ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta ), \end{aligned}$$
(5.63)
$$\begin{aligned} \alpha s h(0) + (1 + s)\rho q h(s-1) - \eta _{s + 1} \alpha s&= \nu \alpha s. \end{aligned}$$
(5.64)
 
Proof
(i) Equations (5.58) and (5.59) follow from exactly the same reasoning as outlined in the proof of Lemma 6. Equation (5.60) follows from substituting (5.57) in the negative horizontal boundary equations (2.5) and using (5.15). Note that this indeed reduces to a single equation.
(ii) The proof is identical to the proof of (i). \(\square \)

5.6 Constructing the equilibrium distribution

The compensation approach ultimately leads to the following expressions for the equilibrium distribution of the SED system, where the symbol \(\propto \) indicates proportionality.
(i)
For \(m \ge 0, ~ n \ge 1\),
$$\begin{aligned} \mathbf {p}(m,n)&\propto \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \sum _{j = 1}^s \eta _{l,d(i) + j} \alpha _{l,i}^m \beta _{l,d(i) + j}^n \mathbf {i}_{\scriptscriptstyle +}(\alpha _{l,i},\beta _{l,d(i) + j}) \nonumber \\&+ \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \sum _{j = 1}^s \nu _{l + 1,d(i) + j} \alpha _{l + 1,d(i) + j}^m \beta _{l,d(i) + j}^n \mathbf {i}_{\scriptscriptstyle +}(\alpha _{l + 1,d(i) + j},\beta _{l,d(i) + j}). \end{aligned}$$
(5.65a)
 
(ii)
For \(m \ge 0\),
$$\begin{aligned} \mathbf {p}(m,0)&\propto \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \alpha _{l,i}^m \mathbf {h}_{l,i}. \end{aligned}$$
(5.65b)
 
(iii)
For \(m \ge 0, ~ n \le -1\),
$$\begin{aligned} \mathbf {p}(m,n)&\propto \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \eta _{l,i(s + 1)} \alpha _{l,i}^m \beta _{l,i(s + 1)}^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha _{l,i},\beta _{l,i(s + 1)}) \nonumber \\&+ \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^l} \nu _{l + 1,i(s + 1)} \alpha _{l+1,i(s + 1)}^m \beta _{l,i(s + 1)}^{-n} \mathbf {i}_{\scriptscriptstyle -}(\alpha _{l+1,i(s + 1)},\beta _{l,i(s + 1)}). \end{aligned}$$
(5.65c)
 
We briefly describe the indexing of the compensation terms, which are organized in a tree. We indicate the level at which a parameter resides with l, starting at level \(l = 0\). Within a level, we differentiate between parameters by using an additional index i. The procedure for generating terms is as follows:
(I)
The initial solution is determined from Lemma 6.
 
(V)
For a vertical compensation step with fixed \(\beta _{l,i}\):
(i)
If the index i is not a multiple of \(s + 1\), then the compensation terms are determined according to Lemma 7(i).
 
(ii)
Otherwise, the terms are determined according to Lemma 7(ii).
 
 
(H)
For a horizontal compensation step with fixed \(\alpha _{l,i}\):
(i)
If the index i is not a multiple of \(s + 1\), then the compensation terms are determined according to Lemma 8(i).
 
(ii)
Otherwise, the terms are determined according to Lemma 8(ii).
 
 
Recall the definition \(d(i) :=(i - 1)(s + 1)\) and note that \(d(i) + s + 1 = i(s + 1)\). Informally, the indexing of subsequent terms is visualized in Fig. 9; the sketch below illustrates the same bookkeeping. Note that the first parameter in the compensation approach is \(\alpha _{0,1} = \rho ^{1 + s}\), cf. Lemma 6.
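The following sketch (Python, illustrative only; no roots are computed) enumerates the indices of the parameter tree for a few levels, marking which children stem from (5.24) and which from (5.35):
```python
# Bookkeeping sketch for the parameter tree of Sect. 5.6. Level l holds
# (s+1)**l parameters alpha_{l,i}; each alpha_{l,i} generates s betas
# beta_{l, d(i)+1}, ..., beta_{l, d(i)+s} via (5.24) and one beta_{l, i(s+1)}
# via (5.35), where d(i) = (i - 1)(s + 1).
def d(i: int, s: int) -> int:
    return (i - 1) * (s + 1)

def beta_indices(i: int, s: int):
    """Indices and generating equations of the betas spawned by alpha_{l,i}."""
    return [(d(i, s) + j, "(5.24)") for j in range(1, s + 1)] \
         + [(i * (s + 1), "(5.35)")]

s = 2
for l in range(2):
    for i in range(1, (s + 1)**l + 1):
        print(f"alpha_({l},{i}) ->",
              [f"beta_({l},{k}) via {eq}" for k, eq in beta_indices(i, s)])
```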

5.7 Absolute convergence

In this subsection we prove that the series for the equilibrium probabilities, as formulated in (5.65), are absolutely convergent. Before we are able to do that, we need some preliminary results. Specifically, we establish that the sequences \(\{\alpha _{l,i}\}_{l \in \mathbb {N}_0, ~ i = 1,2,\ldots ,(s + 1)^l}\) and \(\{\beta _{l,i}\}_{l \in \mathbb {N}_0, ~ i = 1,2,\ldots ,(s + 1)^{l + 1}}\) decrease exponentially fast and uniformly in the levels of the parameter tree. Furthermore, we investigate the asymptotic behavior of (ratios of) the parameters \(\alpha _{l,i}\), \(\beta _{l,i}\), the coefficients, and the elements of the (eigen)vectors.
Corollary 1
Let \(\bar{\alpha }_l :=\max _{i = 1,2,\ldots ,(s + 1)^l} |\alpha _{l,i}|\), \(\bar{\beta }_l :=\max _{i = 1,2,\ldots ,(s + 1)^{l + 1}} |\beta _{l,i}|\). Then,
(i)
\(\displaystyle \rho ^{1 + s} = |\alpha _{0,1}|> \bar{\beta }_0> \bar{\alpha }_1> \bar{\beta }_1> \bar{\alpha }_2> \bar{\beta }_2 > \cdots \)
 
(ii)
There exists \(c \in (0,1)\), such that \(\displaystyle 0< \bar{\alpha }_l,\bar{\beta }_l < c^l\).
 
Proof
(i) Follows immediately from Lemmas 4 and 5.
(ii) For a fixed \(\alpha \), let \(\beta _1,\beta _2,\ldots ,\beta _s\) be the roots of (5.24) with \(|\beta _i| < |\alpha |, ~ i = 1,2,\ldots ,s\) and let \(\beta _{s + 1}\) be the root of (5.35) with \(|\beta _{s + 1}| < |\alpha |\). We define
$$\begin{aligned} t(\alpha ) :=\max _{i = 1,2,\ldots ,s + 1} |\beta _i/\alpha | \end{aligned}$$
(5.66)
and let \(\mathcal {C} = \{ \alpha \in \mathbb {C}\mid |\alpha | \le \alpha _{0,1} = \rho ^{1 + s} \}\). Using Lemmas 4 and 5 we have \(t(\alpha ) < 1\) and in particular, since \(\rho \in (0,1)\), we have that \(\mathcal {C}\) is a closed proper subset of the unit disk and thus \(c_1:=\max _{\alpha \in \mathcal {C}} t(\alpha ) < 1\). One can reason along the same lines and obtain a bound \(c_2\) for a fixed \(\beta \). The exponential decrease in terms of the level l follows from the fact that each \(\beta \) that is generated from an \(\alpha \) satisfies \(|\beta | < |\alpha | c_1\) and each \(\alpha \) that is generated from a \(\beta \) satisfies \(|\alpha | < |\beta |c_2\). Thus, we define \(c :=c_1c_2\). \(\square \)
In Corollary 1 we established that both sequences \(\{\alpha _{l,i}\}_{l \in \mathbb {N}_0, ~ i = 1,2,\ldots ,(s + 1)^l}\) and \(\{\beta _{l,i}\}_{l \in \mathbb {N}_0, ~ i = 1,2,\ldots ,(s + 1)^{l + 1}}\) tend to zero as \(l \rightarrow \infty \). Thus, letting \(\alpha \downarrow 0\) or \(\beta \downarrow 0\) is equivalent to letting \(l \rightarrow \infty \). In what follows, whenever the specific dependence on the level l is not needed, we opt to use the simpler notation of Sects. 5.2–5.5. The next lemma presents the limiting behavior of the compensation parameters and their associated eigenvectors.
Lemma 9
(i)
For a fixed \(\alpha \), let \(\beta _1,\beta _2,\ldots ,\beta _s\) be the roots of (5.24) with \(|\beta _i| < |\alpha |, ~ i = 1,2,\ldots ,s\) and let \(\beta _{s + 1}\) be the root of (5.35) with \(|\beta _{s + 1}| < |\alpha |\). Then, as \(\alpha \downarrow 0\),
(a)
the ratio \(\beta _i/\alpha \rightarrow v_{\scriptscriptstyle -}, ~ i = 1,2,\ldots ,s\) with \(v_{\scriptscriptstyle -}< 1\), where \(v_{\scriptscriptstyle -}\) is the root of
$$\begin{aligned} v^2(1 + s)\rho - v(1 + s)(\rho + 1) + 1 = 0. \end{aligned}$$
(5.67)
 
(b)
the eigenvector \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) \rightarrow \mathbf {e}_{0}, ~ i = 1,2,\ldots ,s\).
 
(c)
the ratio \(\beta _{s + 1}/\alpha \rightarrow w_{\scriptscriptstyle -}\) with \(w_{\scriptscriptstyle -}< 1\), where \(w_{\scriptscriptstyle -}\) is the root of
$$\begin{aligned} w^2((1 + s)\rho )^s - ws^s (\psi _{\scriptscriptstyle +}(0)^s + \psi _{\scriptscriptstyle -}(0)^s) + s^s = 0, \end{aligned}$$
(5.68)
where \(\psi _{\scriptscriptstyle \pm }(0)\) is defined in (5.18).
 
(d)
for \(r = 0,1,\ldots ,s-1\), the elements of the eigenvector \(i_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1},r) \rightarrow \psi _{\scriptscriptstyle +}(0)^r\).
 
 
(ii)
For a fixed \(\beta \), let \(\alpha _1,\alpha _2,\ldots ,\alpha _s\) be the roots of (5.24) with \(|\alpha _i| < |\beta |, ~ i = 1,2,\ldots ,s\) and let \(\alpha _{s + 1}\) be the root of (5.35) with \(|\alpha _{s + 1}| < |\beta |\). Then, as \(\beta \downarrow 0\),
(a)
the ratio \(\alpha _i/\beta \rightarrow 1/v_{\scriptscriptstyle +}, ~ i = 1,2\ldots ,s\) with \(v_{\scriptscriptstyle +}> 1\), where \(v_{\scriptscriptstyle +}\) is the root of (5.67).
 
(b)
the eigenvector \(\mathbf {i}_{\scriptscriptstyle +}(\alpha _i,\beta ) \rightarrow \mathbf {e}_{0}, ~ i = 1,2\ldots ,s\).
 
(c)
the ratio \(\alpha _{s + 1}/\beta \rightarrow 1/w_{\scriptscriptstyle +}\) with \(w_{\scriptscriptstyle +}> 1\), where \(w_{\scriptscriptstyle +}\) is the root of (5.68).
 
(d)
for \(r = 0,1,\ldots ,s-1\), the elements of the eigenvector \(i_{\scriptscriptstyle -}(\alpha _{s + 1},\beta ,r) \rightarrow \psi _{\scriptscriptstyle -}(0)^r\).
 
 
Proof
(i)(a) Set \(v= \beta _i/\alpha \) and let \(\alpha \downarrow 0\) in (5.26) to establish (5.67). The roots of (5.67) are defined in (5.27) and satisfy \(0< v_{\scriptscriptstyle -}< 1\) and \(v_{\scriptscriptstyle +}> 1\).
(i)(b) We have that \(i_{\scriptscriptstyle +}(\alpha ,\beta ,0) = 1\) for all \(\alpha \) and for \(\alpha \downarrow 0\) the other elements in (5.14) go to \(((1+s)(\rho + 1) - v_{\scriptscriptstyle -}(1 + s)\rho - 1/v_{\scriptscriptstyle -})/s\), which is equal to zero by (i)(a).
(i)(c) Set \(w= \beta _{s + 1}/\alpha \), divide (5.35) by \(\alpha ^2\) and let \(\alpha \downarrow 0\) to establish (5.68). We note that \(\psi _{\scriptscriptstyle \pm }(\cdot )\) is a continuous function, so that indeed for \(\alpha \downarrow 0\), \(\psi _{\scriptscriptstyle \pm }(\alpha w) \rightarrow \psi _{\scriptscriptstyle \pm }(0)\). Furthermore, (5.68) is a quadratic equation in \(w\) with roots satisfying \(|w_{\scriptscriptstyle -}| < 1\) and \(|w_{\scriptscriptstyle +}| > 1\), where the inequalities follow from Lemma 5; moreover, \(w_{\scriptscriptstyle -},w_{\scriptscriptstyle +}\in \mathbb {R}_+\).
(i)(d) Again, set \(w= \beta _{s + 1}/\alpha \). Recall the definition of the eigenvector in (5.17). We aim at showing that, for \(\alpha \downarrow 0\), \(\varPsi (\alpha ,\beta _{s + 1},\psi _{\scriptscriptstyle +}(\alpha w)) \rightarrow 0\) and \(\varPsi (\alpha ,\beta _{s + 1},\psi _{\scriptscriptstyle -}(\alpha w))\) tends to some nonzero constant. As \(\alpha \downarrow 0\), we have that
$$\begin{aligned} \varPsi (\alpha ,\beta _{s + 1},\psi ) \rightarrow (1 + s)\rho \psi ^{-1} \bigl ( w_{\scriptscriptstyle -}\psi ^s - 1 \bigr ). \end{aligned}$$
(5.69)
Note that \(\psi _{\scriptscriptstyle -}(0) \in (0,1)\) and \(\psi _{\scriptscriptstyle +}(0) > 1\). Since \(w_{\scriptscriptstyle -}< 1\), the limiting value in (5.69) is nonzero for \(\psi = \psi _{\scriptscriptstyle -}(0)\); it can only equal zero for \(\psi = \psi _{\scriptscriptstyle +}(0)\), namely when \(w_{\scriptscriptstyle -}= 1/\psi _{\scriptscriptstyle +}(0)^s\). Thus, we have established that \(\varPsi (\alpha ,\beta _{s + 1},\psi _{\scriptscriptstyle -}(\alpha w))\) tends to a nonzero constant as \(\alpha \downarrow 0\). We next verify that \(w_{\scriptscriptstyle -}= 1/\psi _{\scriptscriptstyle +}(0)^s\) is indeed a solution to (5.68), which proves that \(\varPsi (\alpha ,\beta _{s + 1},\psi _{\scriptscriptstyle +}(\alpha w)) \rightarrow 0\) as \(\alpha \downarrow 0\). Substituting \(w= 1/\psi _{\scriptscriptstyle +}(0)^s\) in the left-hand side of (5.68) yields
$$\begin{aligned}&\frac{1}{\psi _{\scriptscriptstyle +}(0)^{2s}} ((1 + s)\rho )^s - \frac{1}{\psi _{\scriptscriptstyle +}(0)^s} s^s ( \psi _{\scriptscriptstyle +}(0)^s + \psi _{\scriptscriptstyle -}(0)^s ) + s^s, \nonumber \\&\quad = \frac{1}{\psi _{\scriptscriptstyle +}(0)^{2s}} \Bigl ( ((1 + s)\rho )^s - s^s \psi _{\scriptscriptstyle +}(0)^s ( \psi _{\scriptscriptstyle +}(0)^s + \psi _{\scriptscriptstyle -}(0)^s ) + s^s \psi _{\scriptscriptstyle +}(0)^{2s} \Bigr ), \nonumber \\&\quad = \frac{1}{\psi _{\scriptscriptstyle +}(0)^{2s}} \Bigl ( ((1 + s)\rho )^s - s^s ( \psi _{\scriptscriptstyle +}(0) \psi _{\scriptscriptstyle -}(0) )^s \Bigr ) = 0, \end{aligned}$$
(5.70)
where we used \(\psi _{\scriptscriptstyle +}(0)\psi _{\scriptscriptstyle -}(0) = (1 + s)\rho /s\), as found in (5.23).
(ii) The proof is identical to the proof of (i). \(\square \)
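As a small numerical illustration of (5.67), the following sketch (illustrative only, not part of the proof) verifies that for every \(\rho \in (0,1)\) the quadratic has real roots with \(0< v_{\scriptscriptstyle -}< 1 < v_{\scriptscriptstyle +}\):
```python
# Check of (5.67): v^2 (1+s) rho - v (1+s)(rho+1) + 1 = 0 has real roots
# with 0 < v_- < 1 < v_+ for every rho in (0, 1); the quadratic is
# negative at v = 1, so the roots straddle 1.
from math import sqrt

def v_roots(rho: float, s: int):
    a = (1 + s) * rho
    b = (1 + s) * (rho + 1)
    disc = sqrt(b * b - 4 * a)          # discriminant; c = 1 in (5.67)
    return (b - disc) / (2 * a), (b + disc) / (2 * a)

for s in (1, 2, 5):
    for rho in (0.1, 0.5, 0.9):
        v_minus, v_plus = v_roots(rho, s)
        assert 0 < v_minus < 1 < v_plus
```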
Finally, we describe the limiting behavior of the coefficients. In the following lemma we introduce the variable \(\gamma \) associated with a product-form solution that satisfies the positive inner equations, and the variable \(\theta \) associated with a product-form solution that satisfies the negative inner equations.
Lemma 10
(i)
Consider the setting of Lemma 7(i). Then, as \(\beta \downarrow 0\),
$$\begin{aligned} \frac{\nu _1}{\eta } \rightarrow - \frac{1 - v_{\scriptscriptstyle -}(1 + s)\rho }{1 - v_{\scriptscriptstyle +}(1 + s)\rho } =:\gamma _{\nu }. \end{aligned}$$
(5.71)
 
(ii)
Consider the setting of Lemma 7(ii). Then, as \(\beta \downarrow 0\),
$$\begin{aligned} \frac{\nu _{s + 1}}{\eta } \rightarrow - \frac{s - w_{\scriptscriptstyle -}(1 + s) \rho \psi _{\scriptscriptstyle +}(0)^{s-1}}{s - w_{\scriptscriptstyle +}(1 + s)\rho \psi _{\scriptscriptstyle -}(0)^{s-1}} =:\theta _{\nu _{s + 1}}. \end{aligned}$$
(5.72)
 
(iii)
Consider the setting of Lemma 8(i). Then, as \(\alpha \downarrow 0\),
(a)
$$\begin{aligned} \frac{\mathbf {h}}{\nu } \rightarrow \mathbf {0}. \end{aligned}$$
(5.73)
 
(b)
$$\begin{aligned} \frac{\eta _{s + 1}}{\nu } \rightarrow \frac{v_{\scriptscriptstyle -}- v_{\scriptscriptstyle +}}{v_{\scriptscriptstyle -}s \frac{1-q}{q} + w_{\scriptscriptstyle -}\psi _{\scriptscriptstyle +}(0)^{s-1}} =:\gamma _{\eta _{s + 1}}. \end{aligned}$$
(5.74)
 
(c)
$$\begin{aligned} \lim _{\alpha \downarrow 0} \Bigl | \frac{\eta _i}{\nu } \Bigr | \le \max _{1 \le j,r+1 \le s} |a_j(r)| =:\gamma _{\eta }, \end{aligned}$$
(5.75)
where \(\mathbf {a}_j = (a_j(0),a_j(1),\ldots ,a_j(s-1))^T\) is the solution to the following linear system of equations for a specific j, with \(\mathbf {b}_j\) a column vector of size s with unknowns,
$$\begin{aligned}&v_{\scriptscriptstyle -}W \mathbf {a}_j + L \mathbf {b}_j = - v_{\scriptscriptstyle +}\mathbf {w}_j - \gamma _{\eta _{s + 1}}w_{\scriptscriptstyle -}\psi _{\scriptscriptstyle +}(0)^{s - 1}\mathbf {e}_{0}, \end{aligned}$$
(5.76)
$$\begin{aligned}&- W \mathbf {a}_j + (1 + s)\rho (1 - q) M^{(0,s-1)} \mathbf {b}_j = \mathbf {w}_j \end{aligned}$$
(5.77)
with
$$\begin{aligned} W&:=\begin{pmatrix} 1 &{}\quad 1 &{}\quad \cdots &{}\quad 1 \\ v_{\scriptscriptstyle -}^{1/s} u_1 &{}\quad v_{\scriptscriptstyle -}^{1/s} u_2 &{}\quad \cdots &{}\quad v_{\scriptscriptstyle -}^{1/s} u_s \\ \vdots &{}\quad \vdots &{}\quad &{}\quad \vdots \\ v_{\scriptscriptstyle -}^{(s-1)/s} u_1^{s-1} &{}\quad v_{\scriptscriptstyle -}^{(s-1)/s} u_2^{s-1} &{}\quad \cdots &{}\quad v_{\scriptscriptstyle -}^{(s-1)/s} u_s^{s-1} \end{pmatrix}, \end{aligned}$$
(5.78)
$$\begin{aligned} \mathbf {w}_j&:=\begin{pmatrix} 1&v_{\scriptscriptstyle +}^{1/s} u_j&\cdots&v_{\scriptscriptstyle +}^{(s-1)/s} u_j^{s-1} \end{pmatrix}^T. \end{aligned}$$
(5.79)
 
 
(iv)
Consider the setting of Lemma 8(ii). Then, as \(\alpha \downarrow 0\),
(a)
$$\begin{aligned} \frac{\mathbf {h}}{\nu } \rightarrow \mathbf {0}. \end{aligned}$$
(5.80)
 
(b)
$$\begin{aligned} \frac{\eta _{s + 1}}{\nu } \rightarrow - \frac{v_{\scriptscriptstyle -}s \frac{1-q}{q} + w_{\scriptscriptstyle +}\psi _{\scriptscriptstyle -}(0)^{s-1}}{v_{\scriptscriptstyle -}s \frac{1-q}{q} + w_{\scriptscriptstyle -}\psi _{\scriptscriptstyle +}(0)^{s-1}} =:\theta _{\eta _{s + 1}}. \end{aligned}$$
(5.81)
 
(c)
$$\begin{aligned} \lim _{\alpha \downarrow 0} \Bigl | \frac{\eta _i}{\nu } \Bigr | \le \max _{0 \le r \le s - 1} |c(r)| =:\theta _{\eta }, \end{aligned}$$
(5.82)
where \(\mathbf {c} = (c(0),c(1),\ldots ,c(s-1))^T\) is the solution to, with \(\mathbf {d}\) a column vector of size s with unknowns,
$$\begin{aligned}&v_{\scriptscriptstyle -}W \mathbf {c} + L\mathbf {d} = -\bigl ( w_{\scriptscriptstyle +}\psi _{\scriptscriptstyle -}(0)^{s-1} + \theta _{\eta _{s + 1}}w_{\scriptscriptstyle -}\psi _{\scriptscriptstyle +}(0)^{s-1} \bigr ) \mathbf {e}_{0},\end{aligned}$$
(5.83)
$$\begin{aligned}&- W \mathbf {c}+ (1 + s) \rho (1 - q) M^{(0,s-1)} \mathbf {d} = \mathbf {0}. \end{aligned}$$
(5.84)
 
 
Proof
(i)
Divide both sides of (5.52) by \(\eta \) and let \(\alpha \downarrow 0\).
 
(ii)
The proof is identical to the proof of (i).
 
(iii)
Due to the length of this proof, the proof has been relegated to Appendix 1.
 
(iv)
The proof is identical to the proof of (iii). \(\square \)
 
We have established the limiting behavior of ratios of compensation parameters, coefficients, eigenvectors, and the vector on the horizontal axis. As we have seen in Lemma 9, the limiting values of the eigenvectors \(\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )\) and \(\mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta )\) are finite, so their absolute values can be bounded by a constant that depends on neither \(\alpha \) nor \(\beta \). Consequently, the eigenvectors need not be taken into account when establishing absolute convergence. Based on this observation and the asymptotic results derived above, we now show that the series appearing in (5.65) are absolutely convergent.
Theorem 2
There exists a positive integer N such that
(i)
the series
$$\begin{aligned} \sum _{l = 0}^\infty \sum _{i = 1}^{(s+1)^l} \big |\alpha _{l,i}^m\big | \sum _{j = 1}^{s+1} |\eta _{l,d(i)+j}| \Big |\beta _{l,d(i) + j}^{|n|}\Big | \end{aligned}$$
(5.85)
converges for all \(m \ge 0, ~ |n| \ge 1\) with \(m + |n| > N\).
 
(ii)
the series
$$\begin{aligned} \sum _{l = 0}^\infty \sum _{i = 1}^{(s+1)^{l + 1}} |\nu _{l+1,i}| \big |\alpha _{l+1,i}^m\big | \Big |\beta _{l,i}^{|n|}\Big | \end{aligned}$$
(5.86)
converges for all \(m \ge 0, ~ |n| \ge 1\) with \(m + |n| > N\).
 
(iii)
the series
$$\begin{aligned} \sum _{l = 0}^\infty \sum _{i = 1}^{(s+1)^l} \big |\alpha _{l,i}^m\big | |\mathbf {h}_{l,i}| \end{aligned}$$
(5.87)
converges for all \(m \ge N\).
 
(iv)
the series
$$\begin{aligned} \sum _{m + |n| > N} p(m,n,r), \quad r = 0,1,\ldots ,s-1 \end{aligned}$$
(5.88)
converges absolutely, where p(mnr) is given in (5.65).
 
Proof
For this proof, we introduce a partitioning of the elements in a matrix of size \(s + 1\), namely \(\mathcal {I} = \{ (j,k) \mid j = 1,2,\ldots ,s, ~ k = 1,2,\ldots ,s \}\), \(\mathcal {H} = \{ (j,k) \mid j = s + 1, ~ k = 1,2,\ldots ,s \}\), \(\mathcal {V} = \{ (j,k) \mid j = 1,2,\ldots ,s, ~ k = s + 1 \}\), and \(\mathcal {E} = \{ (j,k) \mid j = s + 1, ~ k = s + 1 \}\).
(i) We can view the series as an infinite tree with \(s + 1\) roots and each term has \(s + 1\) children. We define the ratios of a term with each of its descendants as
$$\begin{aligned} R^{(1)}_{l,i,j \rightarrow k}(m,n) :=\frac{|\eta _{l+1,d(d(i)+j)+k}| \big |\alpha _{l+1,d(i) + j}^m\big | \Big |\beta _{l+1,d(d(i)+j) + k}^{|n|}\Big |}{|\eta _{l,d(i)+j}| \big |\alpha _{l,i}^m\big | \Big |\beta _{l,d(i)+j}^{|n|}\Big |}. \end{aligned}$$
(5.89)
We multiply the ratio \(R^{(1)}_{l,i,j \rightarrow k}(m,n)\) by \(\Bigl | \frac{\nu _{l,d(i) + j}}{\nu _{l,d(i) + j}} \frac{\alpha _{l+1,d(i) + j}^{|n|}}{\alpha _{l+1,d(i) + j}^{|n|}} \frac{\alpha _{l,i}^{|n|}}{\alpha _{l,i}^{|n|}} \frac{\beta _{l,d(i) + j}^{m + |n|}}{\beta _{l,d(i) + j}^{m + |n|}} \Bigr |\), yielding
$$\begin{aligned} R^{(1)}_{l,i,j \rightarrow k}(m,n) = \Biggl |&\frac{\eta _{l+1,d(d(i)+j)+k}}{\nu _{l,d(i) + j}} \frac{\nu _{l,d(i) + j}}{\eta _{l,d(i)+j}} \frac{\alpha _{l+1,d(i) + j}^{m+|n|}}{\beta _{l,d(i) + j}^{m+|n|}} \nonumber \\&\times \frac{\beta _{l,d(i) + j}^{m + |n|}}{\alpha _{l,i}^{m + |n|}} \frac{\beta _{l+1,d(d(i) + j) + k}^{|n|}}{\alpha _{l+1,d(i) + j}^{|n|}} \frac{\alpha _{l,i}^{|n|}}{\beta _{l,d(i) + j}^{|n|}} \Biggr |. \end{aligned}$$
(5.90)
As \(l \rightarrow \infty \) we obtain by Lemmas 9 and 10 that \(\lim _{l \rightarrow \infty } R^{(1)}_{l,i,j \rightarrow k}(m,n) \le R^{(1)}_{j \rightarrow k}(m,n)\), where the inequality is element-wise, and
$$\begin{aligned} R^{(1)}_{j \rightarrow k}(m,n) = {\left\{ \begin{array}{ll}|\gamma _{\eta }| |\gamma _{\nu }| |v_{\scriptscriptstyle -}/ v_{\scriptscriptstyle +}|^{m + |n|}, &{} (j,k) \in \mathcal {I}, \\ |\theta _{\eta }| |\theta _{\nu _{s + 1}}| |v_{\scriptscriptstyle -}|^{|n|} |w_{\scriptscriptstyle -}/w_{\scriptscriptstyle +}|^m |1/w_{\scriptscriptstyle +}|^{|n|}, &{} (j,k) \in \mathcal {H}, \\ |\gamma _{\eta _{s + 1}}| |\gamma _{\nu }| |v_{\scriptscriptstyle -}/v_{\scriptscriptstyle +}|^m |1/v_{\scriptscriptstyle +}|^{|n|} |w_{\scriptscriptstyle -}|^{|n|}, &{} (j,k) \in \mathcal {V}, \\ |\theta _{\eta _{s + 1}}| |\theta _{\nu _{s + 1}}| |w_{\scriptscriptstyle -}/ w_{\scriptscriptstyle +}|^{m + |n|}, &{} (j,k) \in \mathcal {E}. \end{array}\right. } \end{aligned}$$
(5.91)
The convergence of the series is determined by the spectral radius of the corresponding matrix of ratios \(R^{(1)}(m,n) :=(R^{(1)}_{j \rightarrow k}(m,n))_{j,k = 1,2,\ldots ,s+1}\). If the spectral radius of the matrix \(R^{(1)}(m,n)\) is less than one, the series converges. For more details, see, for example, [5, Sect. 9] or [2, proof of Theorem 5.2]. Observe that \(|v_{\scriptscriptstyle -}|,|1/v_{\scriptscriptstyle +}|,|w_{\scriptscriptstyle -}|,|1/w_{\scriptscriptstyle +}| < 1\), but the values of the limiting ratios \(|\gamma _{\nu }|,|\theta _{\nu _{s + 1}}|,|\gamma _{\eta }|,|\theta _{\eta }|,|\gamma _{\eta _{s + 1}}|,|\theta _{\eta _{s + 1}}|\) can be larger than one. The spectral radius of the matrix \(R^{(1)}(m,n)\) depends on \(m + |n|\) and thus we can find an integer N for which the spectral radius is less than one for all \(m + |n| > N\), ensuring that the series converges for \(m + |n| > N\).
(ii) Using a similar analysis as in (i), we define the ratio as
$$\begin{aligned} R^{(2)}_{l,i,j \rightarrow k} (m,n) :=\frac{|\nu _{l+2,d(d(i) + j) + k}| \big |\alpha _{l+2,d(d(i) + j) + k}^m\big | \big |\beta _{l+1,d(d(i) + j) + k}^{|n|}\big |}{|\nu _{l+1,d(i) + j}| \big |\alpha _{l+1,d(i) + j}^m\big | \big |\beta _{l,d(i) + j}^{|n|}\big |}. \end{aligned}$$
(5.92)
For \(l \rightarrow \infty \) we have that \(\lim _{l \rightarrow \infty } R^{(2)}_{l,i,j \rightarrow k}(m,n) \le R^{(2)}_{j \rightarrow k}(m,n)\) and
$$\begin{aligned} R^{(2)}_{j \rightarrow k}(m,n) = {\left\{ \begin{array}{ll} |\gamma _{\eta }| |\gamma _{\nu }| |v_{\scriptscriptstyle -}/v_{\scriptscriptstyle +}|^{m + |n|}, &{} (j,k) \in \mathcal {I}, \\ |\theta _{\eta }| |\gamma _{\nu }| |v_{\scriptscriptstyle -}|^{|n|} |v_{\scriptscriptstyle -}/v_{\scriptscriptstyle +}|^m |1/w_{\scriptscriptstyle +}|^{|n|}, &{} (j,k) \in \mathcal {H}, \\ |\gamma _{\eta _{s + 1}}| |\theta _{\nu _{s + 1}}| |1/v_{\scriptscriptstyle +}|^{|n|} |w_{\scriptscriptstyle -}|^{|n|} |w_{\scriptscriptstyle -}/w_{\scriptscriptstyle +}|^m, &{} (j,k) \in \mathcal {V}, \\ |\theta _{\eta _{s + 1}}| |\theta _{\nu _{s + 1}}| |w_{\scriptscriptstyle -}/w_{\scriptscriptstyle +}|^{m + |n|}, &{} (j,k) \in \mathcal {E}. \end{array}\right. } \end{aligned}$$
(5.93)
The spectral radius of the matrix \(R^{(2)}(m,n) :=(R^{(2)}_{j \rightarrow k}(m,n))_{j,k = 1,2,\ldots ,s+1}\) depends on \(m + |n|\) and thus we can find an integer N for which the spectral radius is less than one for all \(m + |n| > N\).
(iii) We can rewrite (5.87) as
$$\begin{aligned} \sum _{l = 0}^\infty \sum _{i = 1}^{(s+1)^l} \big |\alpha _{l,i}^m\big | |\mathbf {h}_{l,i}| = \big |\alpha _{0,1}^m\big | |\mathbf {h}_{0,1}| + \sum _{l = 1}^\infty \sum _{i = 1}^{(s+1)^l} |\nu _{l,i}| \big |\alpha _{l,i}^m\big | \frac{|\mathbf {h}_{l,i}|}{|\nu _{l,i}|}. \end{aligned}$$
(5.94)
Lemmas 10(iii)(a) and (iv)(a) show that \(|\mathbf {h}_{l,i}|/|\nu _{l,i}| \rightarrow \mathbf {0}\); thus this ratio can be bounded from above and need not be considered when proving convergence of the series. Proving convergence of
$$\begin{aligned} \sum _{l = 0}^\infty \sum _{i = 1}^{(s + 1)^{l + 1}} |\nu _{l + 1,i}| \big |\alpha _{l + 1,i}^m\big | \end{aligned}$$
(5.95)
establishes convergence of (5.87). Exploiting the similarity with (5.86), we define the ratio \(R^{(3)}_{l,i,j \rightarrow k} (m) :=R^{(2)}_{l,i,j \rightarrow k}(m,0)\). Hence, the limiting ratios can be bounded from above: \(\lim _{l \rightarrow \infty } R^{(3)}_{l,i,j \rightarrow k} (m) \le R^{(3)}_{j \rightarrow k} (m) = R^{(2)}_{j \rightarrow k} (m,0)\). The spectral radius of the matrix \(R^{(3)}(m) :=(R^{(3)}_{j \rightarrow k}(m))_{j,k = 1,2,\ldots ,s+1}\) depends on m and thus we can find an integer N for which the spectral radius is less than one for all \(m \ge N\).
(iv) This follows straightforwardly, along the same lines as in [5, Sect. 9]. \(\square \)
Remark 6
We note that the series in Theorem 2(i) (without the absolute values) corresponds to the sum of the first series in (5.65a) and the first series in (5.65c). The series in Theorem 2(ii) (without the absolute values) corresponds to the sum of the second series in (5.65a) and the second series in (5.65c). So, proving convergence of the series in Theorem 2 establishes convergence of the series in (5.65).
Remark 7
The index N is the minimal nonnegative integer for which the spectral radii of the matrices \(R^{(1)}(m,n)\) and \(R^{(2)}(m,n)\) are both less than one for \(m + |n| > N\) and the spectral radius of \(R^{(3)}(m)\) is less than one for \(m \ge N\). So, for all states (m, n) with \(m + |n| > N\), as well as (N, 0), the series in (5.65) converge. In general, the index N is small. In Table 2 we list the index N for fixed q, while varying the values of s and \(\rho \). Note that the actual region of convergence may be larger than the one based on N, since N is derived from \(R^{(1)}(m,n)\), \(R^{(2)}(m,n)\), and \(R^{(3)}(m)\), which are upper bounds on the rate of convergence.
Table 2
The index N for fixed \(q = 0.4\) and varying s and \(\rho \)

s           2                              5
\(\rho \)   0.1   0.3   0.5   0.7   0.9    0.1   0.3   0.5   0.7   0.9
N           1     1     1     1     1      1     1     1     1     1
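The search for N described in Remark 7 is mechanical once the limiting constants of Lemmas 9 and 10 are available. The sketch below builds the limit-ratio matrix (5.91) and scans for the smallest admissible N; only the structure of (5.91) is taken from the text, and the numerical values of the constants are hypothetical placeholders.
```python
# Sketch of the search for N (Remark 7) based on the limit-ratio matrix (5.91).
# The constants below are HYPOTHETICAL placeholders; in practice v_-, v_+,
# w_-, w_+ and the gamma/theta ratios come from Lemmas 9 and 10.
import numpy as np

s = 2
vm, vp, wm, wp = 0.4, 2.5, 0.3, 3.0      # |v_-|, |w_-| < 1 < v_+, w_+
g_eta, g_nu, g_eta1 = 1.2, 1.1, 0.8      # gamma_eta, gamma_nu, gamma_{eta_{s+1}}
t_eta, t_nu1, t_eta1 = 1.3, 0.9, 1.1     # theta counterparts

def R1(m: int, n: int) -> np.ndarray:
    """Matrix (5.91); entries by index class I, H, V, E (n stands for |n|)."""
    R = np.empty((s + 1, s + 1))
    R[:s, :s] = g_eta * g_nu * (vm / vp)**(m + n)                   # class I
    R[s, :s] = t_eta * t_nu1 * vm**n * (wm / wp)**m * (1 / wp)**n   # class H
    R[:s, s] = g_eta1 * g_nu * (vm / vp)**m * (1 / vp)**n * wm**n   # class V
    R[s, s] = t_eta1 * t_nu1 * (wm / wp)**(m + n)                   # class E
    return R

def rho_spec(M: np.ndarray) -> float:
    return float(np.abs(np.linalg.eigvals(M)).max())

# The spectral radius shrinks as m + |n| grows, so a finite scan suffices.
N = next(T for T in range(50)
         if all(rho_spec(R1(m, t - m)) < 1
                for t in range(T + 1, 50) for m in range(t)))  # |n| = t - m >= 1
print("N =", N)
```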
We are now in the position to formulate the main result of the paper.
Theorem 3
For all states \((m,n,r), ~ m \in \mathbb {N}_0, ~ n \in \mathbb {Z}, ~ r = 0,1,\ldots ,s - 1\) with \(m + |n| > N\), including \(m = N, ~ n = 0\), the equilibrium probabilities \(\mathbf {p}(m,n)\) are equal, up to a multiplicative constant C, to the expressions in (5.65), see (3.3), where C is the normalization constant of the equilibrium distribution. The remaining \(\mathbf {p}(m,n), ~ m + |n| \le N\) are determined by the finite system of equilibrium equations for the states (m, n) with \(m + |n| \le N\), where N is the minimal nonnegative integer for which the spectral radii of the matrices \(R^{(1)}(m,n)\) and \(R^{(2)}(m,n)\) are both less than one for \(m + |n| > N\) and the spectral radius of \(R^{(3)}(m)\) is less than one for \(m \ge N\).
Proof
This proof is similar to the proof in [2, proof of Theorem 5.3] and [5, Sect. 11], but we include it here for completeness. Define \(\mathcal {L}_{N} = \{ (m,n) \mid m \in \mathbb {N}_0, ~ n \in \mathbb {Z}, m + |n| > N \} \cup \{(N,0,s - 1)\}\) and note the similarity with the set of states defined in the proof of Lemma 1 and the associated transition rate diagram in Fig. 8. Then \(\mathcal {L}_{N}\) is a set of states for which the series in (5.65) converge absolutely. The restricted stochastic process on the set \(\mathcal {L}_{N}\) is an irreducible Markov process, whose associated equilibrium equations are identical to the equations of the original unrestricted process on the set \(\mathcal {L}_{N}\), except for the equilibrium equation of state \((N,0,s - 1)\). Hence, the process restricted to \(\mathcal {L}_{N}\) is ergodic so that the series in (5.65) can be normalized to produce the equilibrium distribution of the restricted process on \(\mathcal {L}_{N}\). Since the set \(\mathbb {N}_0 \times \mathbb {Z}\times \{0,1,\ldots ,s - 1\} \setminus \mathcal {L}_{N}\) is finite, it follows that the original process is ergodic and relating appropriately the equilibrium probabilities of the unrestricted and restricted process completes the proof. \(\square \)
The following remark is in line with Remark 2.
Remark 8
(An efficient numerical scheme) The following scheme exploits the fact that the rate of convergence of the series in (5.65) increases as \(m + |n|\) increases. So, further away from the origin, fewer compensation steps L are needed to achieve the same accuracy according to (4.1) and (4.2). This property was seen in Sect. 4 and in particular Fig. 7. Denote a triangular set of states as \(\mathcal {T}_x :=\{ (m,n) \mid m \in \mathbb {N}_0, ~ n \in \mathbb {Z}, ~ m + |n| \le x \}\). Figure 10 serves as a visual aid.
(i)
Determine the minimal nonnegative integer N for which the spectral radii of the matrices \(R^{(1)}(m,n)\) and \(R^{(2)}(m,n)\) are both less than one for \(m + |n| > N\) and the spectral radius of \(R^{(3)}(m)\) is less than one for \(m \ge N\).
 
(ii)
Select integers M and K such that \(N< M < K\).
 
(iii)
Determine \(\mathbf {p}(m,n), ~ (m,n) \in \mathcal {T}_K \setminus \mathcal {T}_M\) according to (4.1) with \(C = 1\) and L the minimal integer such that (4.2) holds.
 
(iv)
Determine \(\mathbf {p}(m,n), ~ (m,n) \in \mathcal {T}_M\) from the equilibrium equations for the states \((m,n) \in \mathcal {T}_M\).
 
(v)
Normalize the equilibrium distribution by dividing each equilibrium probability by the sum \(\sum _{(m,n) \in \mathcal {T}_K} \mathbf {p}(m,n) \mathbf {1}\).
 
The integer L in step (iii) depends on M. As M increases, L decreases or stays constant, but the size of the system of equilibrium equations in step (iv) increases. This tradeoff is clearly in favor of selecting a larger M: decreasing L decreases the number of computation steps exponentially, whereas the size of the system of equilibrium equations increases polynomially with M. As K increases, the equilibrium probabilities become more accurate. K can be chosen arbitrarily large; it has little to no impact on the performance.
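A possible control-flow skeleton of this scheme is sketched below; the callables term (partial sums of (4.1) with \(C = 1\)), accurate (criterion (4.2)), and solve_boundary (step (iv)) are hypothetical placeholders for the problem-specific ingredients.
```python
# Control-flow skeleton of the numerical scheme in Remark 8. The callables
# term, accurate, and solve_boundary are hypothetical and problem specific;
# each p[(m, n)] is the vector over the finite third coordinate r.
def triangle(x):
    """T_x = {(m, n) : m in N_0, n in Z, m + |n| <= x}."""
    return {(m, n) for m in range(x + 1)
                   for n in range(-(x - m), x - m + 1)}

def equilibrium(N, M, K, term, accurate, solve_boundary):
    assert N < M < K                                  # step (ii)
    p = {}
    for (m, n) in triangle(K) - triangle(M):          # step (iii)
        L = 0
        while not accurate(m, n, L):                  # minimal L satisfying (4.2)
            L += 1
        p[(m, n)] = term(m, n, L)                     # truncated series (4.1)
    p.update(solve_boundary(triangle(M), p))          # step (iv): linear system
    total = sum(sum(vec) for vec in p.values())       # step (v): normalize
    return {state: [x / total for x in vec] for state, vec in p.items()}
```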

6 Conclusion

We have studied a queueing system with two nonidentical servers with service rates 1 and \(s \in \mathbb {N}\), respectively, Poisson arrivals, and the SED routing policy. This policy assigns an arriving customer to the queue with the smallest expected delay, i.e., waiting time plus service time. The SED routing policy is a natural and simple routing policy that balances the load between the two nonidentical servers. Although not always optimal, SED performs well at both ends of the system utilization range. Moreover, SED routing is asymptotically optimal in the heavy-traffic regime.
The SED system can be modeled as an inhomogeneous random walk in the quadrant. By appropriately transforming the state space, we mapped the two-dimensional state space into a half-plane with a finite third dimension. The random walks on the two quadrants are different, yet homogeneous inside each quadrant. Extending the compensation approach to this three-dimensional setting, we showed in this paper that the equilibrium distribution of the joint queue length can be represented as a series of product-form solutions. These product-form solutions are generated iteratively, each compensating for the error introduced by its predecessor.
The analysis presented in this paper proves that the compensation approach can be applied in the context of a three-dimensional state space. We believe that a similar analysis can be used to investigate general conditions for applicability of the compensation approach for three-dimensional Markov processes. These conditions will be comparable to the three conditions in Sect. 1. Furthermore, the compensation approach can possibly be extended to the case where all three dimensions are infinite, paving the way for performance analysis of higher-dimensional Markov processes.
The insights gained for the SED system with two servers, specifically the series expressions for the equilibrium probabilities, can be used to develop approximations of the performance of heterogeneous multiserver systems with a SED routing protocol. These approximations can be derived along the same lines as in [20]. An approximate performance analysis for a system with two servers can be found in [27].
Another interesting direction for future research is to study rare events or tail probabilities in the SED system, in a similar way as done for JSQ systems in [26]. Since the compensation approach determines the complete expansion of each equilibrium probability, one can approximate rare events with arbitrary precision.

Acknowledgments

The authors would like to thank A. J. E. M. Janssen for proving step (c) of Lemma 4(i). This work was supported by an NWO free competition grant, an ERC starting grant, and the NWO Gravitation Project NETWORKS.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendices

Proof of step (c) of Lemma 4(i)

The following argument was communicated to us by A. J. E. M. Janssen. Our goal is to show that for every \(\alpha \) with \(|\alpha | \in (0,1)\) the equation
$$\begin{aligned} \frac{\alpha \beta (1+s)(\rho + 1) - \beta ^2(1 + s)\rho - \alpha ^2}{\alpha \beta s} = u_i \beta ^{1/s}, \quad i = 1,2,\ldots ,s, \end{aligned}$$
(7.1)
has at least one root \(\beta _i\) with \(|\beta _i| < |\alpha |\) for each i. To that end, we consider the more general formulation
$$\begin{aligned} \sigma = f(z) = \frac{E}{z^t} (v_{\scriptscriptstyle +}- z) (z - v_{\scriptscriptstyle -}), \end{aligned}$$
(7.2)
with \(z = \beta /\alpha \), \(E = \frac{(1 + s)\rho }{s} > 0\), \(t = \frac{s + 1}{s} \in (1,2]\) and \(v_{\scriptscriptstyle \pm }\) defined in (5.27) with the properties
$$\begin{aligned} 0< v_{\scriptscriptstyle -}< 1, \quad v_{\scriptscriptstyle +}> 1, \quad v_{\scriptscriptstyle +}+ v_{\scriptscriptstyle -}= \frac{1 + \rho }{\rho }, \quad v_{\scriptscriptstyle +}v_{\scriptscriptstyle -}= \frac{1}{(1 + s)\rho }. \end{aligned}$$
(7.3)
One retrieves the original form (7.1) by setting \(\sigma = u_i \alpha ^{1/s}\).
The proof consists of three main steps.
(a)
We derive the inverse function of \(\sigma = f(z)\) on a neighborhood of \(z = v_{\scriptscriptstyle -}\) using the Lagrange inversion theorem [10, Sect. 2.2]. We establish that \(z = g(\sigma )\), where \(g(\sigma )\) is a power series that is analytic on a neighborhood of \(\sigma = 0\) and has positive power series coefficients.
 
(b)
Employing Pringsheim’s theorem [8, Theorem 1], we are able to extend the radius of convergence for the power series \(g(\sigma )\) to \(\sigma _{\mathrm {max}} > 1\).
 
(c)
Finally, we establish that for \(\sigma = u_i \alpha ^{1/s}, ~ i = 1,2,\ldots ,s\) satisfying (7.2), the corresponding z has \(|z| < 1\).
 
(a) We note that f(z) is analytic near \(z = v_{\scriptscriptstyle -}\). So, by the Lagrange inversion theorem, we have, in a neighborhood of \(\sigma = f(v_{\scriptscriptstyle -}) = 0\),
$$\begin{aligned} z = g(\sigma )&= v_{\scriptscriptstyle -}+ \sum _{n = 1}^\infty \frac{\sigma ^n}{n!} \frac{ d ^{n-1}}{ d z^{n-1}} \Bigl ( \frac{z - v_{\scriptscriptstyle -}}{\frac{E}{z^t}(v_{\scriptscriptstyle +}- z) (z - v_{\scriptscriptstyle -})} \Bigr )^n \bigg \vert _{z = v_{\scriptscriptstyle -}} \nonumber \\&= v_{\scriptscriptstyle -}+ \sum _{n = 1}^\infty \frac{(\sigma /E)^n}{n!} \frac{ d ^{n-1}}{ d z^{n-1}} \frac{z^{n t}}{(v_{\scriptscriptstyle +}- z)^n} \bigg \vert _{z = v_{\scriptscriptstyle -}}. \end{aligned}$$
(7.4)
Note that \(g(\cdot )\) is analytic in a neighborhood of \(\sigma = f(v_{\scriptscriptstyle -}) = 0\). Let \(g_1(z) = z^{nt}\) and \(g_2(z) = (v_{\scriptscriptstyle +}- z)^{-n}\) so that we can write
$$\begin{aligned} \frac{ d ^{n-1}}{ d z^{n-1}} g_1(z) g_2(z) = \sum _{k = 0}^{n - 1} \left( {\begin{array}{c}n - 1\\ k\end{array}}\right) \frac{ d ^{n - k - 1}}{ d z^{n - k - 1}} g_1(z) \frac{ d ^k}{ d z^k} g_2(z), \end{aligned}$$
(7.5)
where
$$\begin{aligned} \frac{ d ^{n - k - 1}}{ d z^{n - k - 1}} g_1(z)&= \Bigl ( \prod _{i = 0}^{n - k - 2} (nt - i) \Bigr ) z^{nt - (n - k - 1)}, \end{aligned}$$
(7.6)
$$\begin{aligned} \frac{ d ^k}{ d z^k} g_2(z)&= \frac{(n + k - 1)!}{(n - 1)!} (v_{\scriptscriptstyle +}- z)^{-(n + k)}. \end{aligned}$$
(7.7)
Thus, we can conclude that
$$\begin{aligned} \frac{ d ^{n-1}}{ d z^{n-1}} \frac{z^{n t}}{(v_{\scriptscriptstyle +}- z)^n} \bigg \vert _{z = v_{\scriptscriptstyle -}} = \sum _{k = 0}^{n - 1} \frac{(n + k - 1)!}{k! (n - 1 - k)!} \left( \prod _{i = 0}^{n - k - 2} (nt - i) \right) \frac{v_{\scriptscriptstyle -}^{nt - (n - k - 1)}}{(v_{\scriptscriptstyle +}- v_{\scriptscriptstyle -})^{n + k}} > 0. \end{aligned}$$
(7.8)
Hence, the power series \(g(\sigma )\) has positive power series coefficients.
(b) We now extend the convergence range of \(g(\sigma )\) from a neighborhood of 0 to the entire unit disk. This is achieved by using the Pringsheim theorem. We are allowed to do so because the coefficients of the power series of \(g(\sigma )\) are positive. According to the Pringsheim theorem we can limit attention to the positive real line. With reference to Fig. 11, we let \(\sigma _{\mathrm {max}} = \max \{ f(z) \mid z \ge v_{\scriptscriptstyle -}\}\). Note that for \(z \in [v_{\scriptscriptstyle -},g(\sigma _{\mathrm {max}}))\) the function f(z) is strictly increasing and is analytic by (7.2), and thus its inverse, \(g(\sigma )\), is analytic for \(\sigma \in [f(v_{\scriptscriptstyle -}),f(g(\sigma _{\mathrm {max}}))) = [0,\sigma _{\mathrm {max}})\). Clearly, \(\sigma _{\mathrm {max}} > 1\), hence by the Pringsheim theorem \(g(\sigma )\) is analytic for \(|\sigma | < \sigma _{\mathrm {max}}\) and thus inside the unit disk.
(c) It is important to see where \(g(\sigma _{\mathrm {max}})\) lies. We find \(z = g(\sigma _{\mathrm {max}})\) from
$$\begin{aligned} 0&= \frac{ d }{ d z} \left( \frac{1}{z^t}(v_{\scriptscriptstyle +}- z)(z - v_{\scriptscriptstyle -}) \right) \nonumber \\&= \frac{1}{z^{t + 1}} \Bigl ( v_{\scriptscriptstyle +}v_{\scriptscriptstyle -}t - (t - 1)(v_{\scriptscriptstyle +}+ v_{\scriptscriptstyle -}) z - (2 - t)z^2 \Bigr ) \nonumber \\&= \frac{1}{z^{t + 1}} \Bigl ( \frac{t}{(1 + s)\rho } - (t - 1) \frac{1 + \rho }{\rho } z - (2 - t) z^2 \Bigr ). \end{aligned}$$
(7.9)
For \(t \in (1,2)\) we take the positive solution of the quadratic equation in (7.9) and obtain after some rewriting
$$\begin{aligned} z = g(\sigma _{\mathrm {max}}) = \frac{2}{1 + \rho + \sqrt{(1 + \rho )^2 + 4\rho (s - 1)}} < 1, \end{aligned}$$
(7.10)
which also holds for \(t = 2\). So, for a \(\sigma \) with \(|\sigma | < \sigma _{\mathrm {max}}\), we have from the positivity of the power series coefficients that
$$\begin{aligned} |g(\sigma )| \le g(|\sigma |) \le g(\sigma _{\mathrm {max}}) < 1. \end{aligned}$$
(7.11)
By taking \(\sigma _i = u_i \alpha ^{1/s}, ~ i = 1,2,\ldots ,s\) with \(|\sigma _i| < 1\) we see that the power series \(g(\sigma _i)\) converges and \(|z| = |g(\sigma _i)| < 1\). Finally, we can conclude that for every \(\alpha \) with \(|\alpha | \in (0,1)\), (7.1) has at least one root \(\beta _i\) with \(|\beta _i| < |\alpha |\) for each i.
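The closed form (7.10) lends itself to a quick numerical check; the sketch below (illustrative only) compares it against a brute-force maximization of f on \([v_{\scriptscriptstyle -},v_{\scriptscriptstyle +}]\).
```python
# Numerical check of (7.10): the maximizer of f(z) = (E/z^t)(v_+ - z)(z - v_-)
# over [v_-, v_+] equals 2 / (1 + rho + sqrt((1+rho)^2 + 4 rho (s-1))) < 1.
import numpy as np

def check(rho: float, s: int) -> None:
    E, t = (1 + s) * rho / s, (s + 1) / s
    disc = np.sqrt(((1 + s) * (rho + 1))**2 - 4 * (1 + s) * rho)
    v_minus = ((1 + s) * (rho + 1) - disc) / (2 * (1 + s) * rho)
    v_plus = ((1 + s) * (rho + 1) + disc) / (2 * (1 + s) * rho)
    z = np.linspace(v_minus, v_plus, 200001)[1:-1]    # grid on open interval
    f = E / z**t * (v_plus - z) * (z - v_minus)
    z_star = 2 / (1 + rho + np.sqrt((1 + rho)**2 + 4 * rho * (s - 1)))  # (7.10)
    assert z_star < 1
    assert abs(z[np.argmax(f)] - z_star) < 1e-3

for rho in (0.2, 0.6, 0.95):
    for s in (1, 2, 5):
        check(rho, s)
```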

Proof of Lemma 10(iii)

In this appendix we prove the convergence of the ratio of coefficients used in the horizontal compensation step for a solution that satisfies the positive inner equations. Specifically, we consider the setting of Lemma 7(i). We wish to establish the limiting values, or an upper bound on the absolute value of the following ratios, as \(\alpha \downarrow 0\):
(a)
\(\mathbf {h}/\nu \),
 
(b)
\(\eta _{s + 1}/\nu \),
 
(c)
\(\eta _i/\nu , ~ i = 1,2,\ldots ,s\).
 
We reiterate below the equations that determine these ratios, which can be found in the main body of the paper in (5.58)–(5.60),
$$\begin{aligned}&\bigl (A_{0,1} + \alpha A_{-1,1} \bigr ) \mathbf {h}- \alpha \sum _{i = 1}^s \eta _i \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) = \nu \alpha \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ),\end{aligned}$$
(8.1)
$$\begin{aligned}&\alpha B_{0,0} \mathbf {h}+ \sum _{i = 1}^s \eta _i \beta _i \bigl ( A_{1,-1} + \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) \nonumber \\&\quad + \eta _{s + 1} \beta _{s + 1} \bigl ( B_{1,1} + \alpha B_{0,1} \bigr ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1}) = - \nu \beta \bigl ( A_{1,-1} + \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ), \end{aligned}$$
(8.2)
$$\begin{aligned}&- \eta _{s + 1} \alpha s + \alpha s h(0) + (1 + s)\rho q h(s-1) = 0. \end{aligned}$$
(8.3)
As it turns out, letting \(\alpha \downarrow 0\) in the system of linear equations (8.1)–(8.3) yields a degenerate system. By properly scaling the equations, one obtains a nondegenerate system.
In this appendix we adopt the following notation to indicate the limiting value of a ratio:
$$\begin{aligned} x^\mathrm{lim} :=\lim _{\alpha \downarrow 0} \frac{x}{\nu }. \end{aligned}$$
(8.4)
(a) Divide (8.3) by \(\nu \) and rearrange to obtain
$$\begin{aligned} \frac{h(s-1)}{\nu } = \alpha \frac{s}{q} \frac{1}{(1 + s)\rho } \Bigl ( \frac{\eta _{s + 1}}{\nu } - \frac{h(0)}{\nu } \Bigr ). \end{aligned}$$
(8.5)
By letting \(\alpha \downarrow 0\) in (8.5) we get \(h^\mathrm{lim}(s-1) = 0\). Next, divide (8.2) by \(\alpha \) and \(\nu \), which yields
$$\begin{aligned}&B_{0,0} \frac{\mathbf {h}}{\nu } + \sum _{i = 1}^s \frac{\eta _i}{\nu } \frac{\beta _i}{\alpha } \bigl ( A_{1,-1} + \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) \nonumber \\&\quad + \frac{\eta _{s + 1}}{\nu } \frac{\beta _{s + 1}}{\alpha } \bigl ( B_{1,1} + \alpha B_{0,1} \bigr ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1}) = - \frac{\beta }{\alpha } \bigl ( A_{1,-1} + \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ). \end{aligned}$$
(8.6)
We let \(\alpha \downarrow 0\) and use that \(A_{1,-1} = (1 + s)\rho I\) and \(B_{1,1} = (1 + s)\rho M^{(0,s-1)}\). This gives
$$\begin{aligned} B_{0,0} \mathbf {h}^\mathrm{lim} + v_{\scriptscriptstyle -}(1 + s)\rho \sum _{i = 1}^s \eta _i^\mathrm{lim} \mathbf {e}_{0} + \eta _{s + 1}^\mathrm{lim} w_{\scriptscriptstyle -}(1 + s)\rho \psi _{\scriptscriptstyle +}(0)^{s - 1} \mathbf {e}_{0} = - v_{\scriptscriptstyle +}(1 + s)\rho \mathbf {e}_{0}. \end{aligned}$$
(8.7)
All remaining terms in (8.7) are multiples of \(\mathbf {e}_{0}\), so the components \(r = 1,2,\ldots ,s-1\) of (8.7) read \((B_{0,0} \mathbf {h}^\mathrm{lim})(r) = 0\). Since \(B_{0,0}\) is a tridiagonal matrix, these equations can be solved recursively, starting from \(h^\mathrm{lim}(s-1) = 0\), which yields \(\mathbf {h}^\mathrm{lim} = \mathbf {0}\).
(b) With \(\mathbf {h}^\mathrm{lim} = \mathbf {0}\), only the first component of (8.7) remains nontrivial; dividing it by \((1 + s)\rho \) reduces (8.7) to the single equation
$$\begin{aligned} v_{\scriptscriptstyle -}\sum _{i = 1}^s \eta _i^\mathrm{lim} + \eta _{s + 1}^\mathrm{lim} w_{\scriptscriptstyle -}\psi _{\scriptscriptstyle +}(0)^{s - 1} = - v_{\scriptscriptstyle +}. \end{aligned}$$
(8.8)
Now, divide (8.1) by \(\alpha \) and \(\nu \), yielding
$$\begin{aligned} A_{0,1} \frac{\mathbf {h}}{\alpha \nu } + A_{-1,1} \frac{\mathbf {h}}{\nu } - \sum _{i = 1}^s \eta _i \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) = \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ). \end{aligned}$$
(8.9)
Note that \(A_{0,1} \mathbf {h}/(\alpha \nu ) = (1 + s)\rho (1 - q) h(s-1)/(\alpha \nu ) \mathbf {e}_{0}\), so that, together with (8.5) and \(A_{-1,1} = I\), (8.9) reduces to
$$\begin{aligned} s \frac{1 - q}{q} \Bigl ( \frac{\eta _{s + 1}}{\nu } - \frac{h(0)}{\nu } \Bigr )\mathbf {e}_{0} + \frac{\mathbf {h}}{\nu } - \sum _{i = 1}^s \eta _i \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) = \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ). \end{aligned}$$
(8.10)
Moreover, letting \(\alpha \downarrow 0\) again gives us a single equation, namely
$$\begin{aligned} - \sum _{i = 1}^s \eta _i^\mathrm{lim} + s \frac{1 - q}{q} \eta _{s + 1}^\mathrm{lim} = 1. \end{aligned}$$
(8.11)
One can solve the linear system of equations (8.8) and (8.11) for the two unknowns \(\sum _{i = 1}^s \eta _i^\mathrm{lim}\) and \(\eta _{s + 1}^\mathrm{lim}\), where the solution of \(\eta _{s + 1}^\mathrm{lim}\) is given in Lemma 10(iii)(b).
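For concreteness, (8.8) and (8.11) can be solved symbolically in the two unknowns; the sketch below uses sympy, with plain symbol names vm, vp, wm and psi0 standing in for \(v_{\scriptscriptstyle -}\), \(v_{\scriptscriptstyle +}\), \(w_{\scriptscriptstyle -}\) and \(\psi _{\scriptscriptstyle +}(0)^{s-1}\) (a notational assumption only):

```python
import sympy as sp

# Solve the linear system (8.8) and (8.11) for
#   S = sum_{i=1}^s eta_i^lim  and  E = eta_{s+1}^lim.
# vm, vp, wm stand for v_-, v_+, w_-; psi0 stands for psi_+(0)^(s-1).
S, E = sp.symbols('S E')
vm, vp, wm, psi0, s, q = sp.symbols('vm vp wm psi0 s q', positive=True)

eq_88 = sp.Eq(vm * S + E * wm * psi0, -vp)       # (8.8)
eq_811 = sp.Eq(-S + s * (1 - q) / q * E, 1)      # (8.11)

sol = sp.solve([eq_88, eq_811], [S, E], dict=True)[0]
print(sp.simplify(sol[E]))  # closed form for eta_{s+1}^lim
```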
(c) We wish to establish a scaling of each equation in (8.1)–(8.3) so that, in the limit \(\alpha \downarrow 0\), we obtain a nondegenerate system of equations. From (5.14) and (5.25) we know that \(i_{\scriptscriptstyle +}(\alpha ,\beta _i,r) = u_i^r \beta _i^{r/s}, ~ r = 0,1,\ldots ,s-1\), and from Remark 4 it is clear that there exists a j for which \(i_{\scriptscriptstyle +}(\alpha ,\beta ,r) = u_j^r \beta ^{r/s}, ~ r = 0,1,\ldots ,s-1\). Thus, the correct scaling to establish a nondegenerate limit of a term \(i_{\scriptscriptstyle +}(\alpha ,\beta ,r)\) is \(\alpha ^{-r/s}\).
We are going to multiply both (8.6) and (8.9) element-wise by the column vector
$$\begin{aligned} \varvec{\alpha } :=\begin{pmatrix} 1&\alpha ^{-1/s}&\alpha ^{-2/s}&\cdots&\alpha ^{-(s-1)/s} \end{pmatrix}^\mathrm{{T}}. \end{aligned}$$
(8.12)
To be able to do that, we need some new notation. Let us introduce the column vectors
$$\begin{aligned} \varvec{\eta } :=\begin{pmatrix} \eta _1&\eta _2&\cdots&\eta _s \end{pmatrix}^\mathrm{{T}}, \quad \text {and} \quad \tilde{\mathbf {h}} :=\begin{pmatrix} \frac{h(0)}{\alpha ^{1/s}}&\frac{h(1)}{\alpha ^{2/s}}&\cdots&\frac{h(s-1)}{\alpha } \end{pmatrix}^\mathrm{{T}}, \end{aligned}$$
(8.13)
and the matrices
$$\begin{aligned} W(\alpha )&:=\begin{pmatrix}\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _1)&\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _2)&\cdots&\mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _s) \end{pmatrix}, \end{aligned}$$
(8.14)
$$\begin{aligned} \tilde{W}(\alpha )&:=\begin{pmatrix}\frac{\beta _1}{\alpha } \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _1)&\frac{\beta _2}{\alpha } \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _2)&\cdots&\frac{\beta _s}{\alpha } \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _s) \end{pmatrix}. \end{aligned}$$
(8.15)
Using the introduced vectors and matrices we can write
$$\begin{aligned} \sum _{i = 1}^s \frac{\eta _i}{\nu } \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) = W(\alpha ) \frac{\varvec{\eta }}{\nu }, \quad \text {and} \quad \sum _{i = 1}^s \frac{\eta _i}{\nu } \frac{\beta _i}{\alpha } \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) = \tilde{W}(\alpha ) \frac{\varvec{\eta }}{\nu }. \end{aligned}$$
(8.16)
We introduce the symbol \(\circ \) for element-wise (Hadamard) multiplication: for column vectors \(\mathbf {x}\) and \(\mathbf {y}\), \(\mathbf {z} = \mathbf {x} \circ \mathbf {y}\) has elements \(z(r) = x(r) y(r)\). Below we list some element-wise multiplications that will be used:
$$\begin{aligned} \varvec{\alpha } \circ W(\alpha )&= \begin{pmatrix}1 &{} 1 &{} \cdots &{} 1 \\ \bigl ( \frac{\beta _1}{\alpha } \bigr )^{1/s} u_1 &{} \bigl ( \frac{\beta _2}{\alpha } \bigr )^{1/s} u_2 &{} \cdots &{} \bigl ( \frac{\beta _s}{\alpha } \bigr )^{1/s} u_s \\ \vdots &{} \vdots &{} &{} \vdots \\ \bigl ( \frac{\beta _1}{\alpha } \bigr )^{(s-1)/s} u_1^{s-1} &{} \bigl ( \frac{\beta _2}{\alpha } \bigr )^{(s-1)/s} u_2^{s-1} &{} \cdots &{} \bigl ( \frac{\beta _s}{\alpha } \bigr )^{(s-1)/s} u_s^{s-1} \end{pmatrix}, \end{aligned}$$
(8.17)
$$\begin{aligned} \exists j : \varvec{\alpha } \circ \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta )&= \begin{pmatrix}1&\bigl ( \frac{\beta }{\alpha } \bigr )^{1/s} u_j&\cdots&\bigl ( \frac{\beta }{\alpha } \bigr )^{(s-1)/s} u_j^{s-1} \end{pmatrix}^\mathrm{{T}}, \end{aligned}$$
(8.18)
$$\begin{aligned} \varvec{\alpha } \circ \mathbf {h}&= \alpha ^{1/s} \tilde{\mathbf {h}}. \end{aligned}$$
(8.19)
Moreover, as \(\alpha \downarrow 0\), we find that \(\varvec{\alpha } \circ W(\alpha ) \rightarrow W\), \(\varvec{\alpha } \circ \tilde{W}(\alpha ) \rightarrow v_{\scriptscriptstyle -}W\), and \(\varvec{\alpha } \circ \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ) \rightarrow \mathbf {w}_j\), where W and \(\mathbf {w}_j\) are given in (5.78)–(5.79).
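As a brief numerical aside, the column-wise scaling by \(\varvec{\alpha }\) in (8.12) and (8.17) is a one-line broadcasting operation in numpy; the sketch below uses a random placeholder for \(W(\alpha )\), since the true entries require the roots \(\beta _i\) and the quantities \(u_i\) from the main text:

```python
import numpy as np

# Hadamard scaling of each column of a matrix by the vector (8.12):
#   alpha_vec = (1, alpha^{-1/s}, ..., alpha^{-(s-1)/s})^T.
# Wa is a random placeholder for W(alpha); its true columns are the
# vectors i_+(alpha, beta_i) of the paper.
s, alpha = 4, 1e-3
alpha_vec = alpha ** (-np.arange(s) / s)   # entries alpha^{-r/s}, r = 0..s-1
Wa = np.random.rand(s, s)                  # placeholder for W(alpha)
scaled = alpha_vec[:, None] * Wa           # row r of Wa multiplied by alpha^{-r/s}
```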
We are now in a position to multiply both (8.6) and (8.9) element-wise by \(\varvec{\alpha }\). For the element-wise multiplication of (8.6) we get
$$\begin{aligned}&\varvec{\alpha } \circ B_{0,0} \frac{\mathbf {h}}{\nu } + \varvec{\alpha } \circ \sum _{i = 1}^s \frac{\eta _i}{\nu } \frac{\beta _i}{\alpha } \bigl ( A_{1,-1} + \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) \nonumber \\&+ \frac{\eta _{s + 1}}{\nu } \frac{\beta _{s + 1}}{\alpha } \varvec{\alpha } \circ \bigl ( B_{1,1} + \alpha B_{0,1} \bigr ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1}) = - \frac{\beta }{\alpha } \varvec{\alpha } \circ \bigl ( A_{1,-1} + \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ). \end{aligned}$$
(8.20)
Combining (8.20) with the definitions of the transition matrices \(B_{0,0} = (1 + s)\rho L - (1 + s)(\rho + 1) I + s L^\mathrm{{T}}\), \(A_{1,-1} = (1 + s)\rho I\) and \(B_{1,1} = (1 + s)\rho M^{(0,s-1)}\), and the matrix-vector notation described above, we obtain
$$\begin{aligned}&\Bigl ( (1 + s)\rho L- \alpha ^{1/s} (1 + s)(\rho + 1) I+ \alpha ^{2/s} s L^T \Bigr ) \frac{\tilde{\mathbf {h}}}{\nu } + (1 + s)\rho \varvec{\alpha } \circ \tilde{W}(\alpha ) \frac{\varvec{\eta }}{\nu } \nonumber \\&\qquad + \varvec{\alpha } \circ \sum _{i = 1}^s \frac{\eta _i}{\nu } \frac{\beta _i}{\alpha } \alpha A_{0,-1} \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta _i) + \frac{\eta _{s + 1}}{\nu } \frac{\beta _{s + 1}}{\alpha } \bigl ( (1 + s)\rho M^{(0,s-1)} + \alpha \varvec{\alpha } \circ B_{0,1} \bigr ) \mathbf {i}_{\scriptscriptstyle -}(\alpha ,\beta _{s + 1}) \nonumber \\&\quad = - \frac{\beta }{\alpha } \varvec{\alpha } \circ \bigl ( (1 + s)\rho I+ \alpha A_{0,-1} \bigr ) \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ). \end{aligned}$$
(8.21)
Multiplying (8.9) element-wise by \(\varvec{\alpha }\) gives
$$\begin{aligned} \Bigl ( (1 + s) \rho (1 - q) M^{(0,s-1)} + \alpha ^{1/s} I\Bigr ) \frac{\tilde{\mathbf {h}}}{\nu } - \varvec{\alpha } \circ W(\alpha ) \frac{\varvec{\eta }}{\nu } = \varvec{\alpha } \circ \mathbf {i}_{\scriptscriptstyle +}(\alpha ,\beta ). \end{aligned}$$
(8.22)
Finally, if we let \(\alpha \downarrow 0\) in (8.21) and (8.22) we obtain the limiting system of linear equations
$$\begin{aligned}&L \tilde{\mathbf {h}}^\mathrm{lim} + v_{\scriptscriptstyle -}W \varvec{\eta }^\mathrm{lim} = - v_{\scriptscriptstyle +}\mathbf {w}_j - \gamma _{\eta _{s + 1}}w_{\scriptscriptstyle -}\psi _{\scriptscriptstyle +}(0)^{s - 1}\mathbf {e}_{0}, \end{aligned}$$
(8.23)
$$\begin{aligned}&(1 + s)\rho (1 - q) M^{(0,s-1)} \tilde{\mathbf {h}}^\mathrm{lim} - W \varvec{\eta }^\mathrm{lim} = \mathbf {w}_j. \end{aligned}$$
(8.24)
We have constructed 2s equations, (8.23)–(8.24), for the 2s unknowns \(\tilde{\mathbf {h}}^\mathrm{lim}\) and \(\varvec{\eta }^\mathrm{lim}\). The solution to this system of equations depends on j. So, let \(\tilde{\mathbf {h}}^\mathrm{lim}_j\) and \(\varvec{\eta }^\mathrm{lim}_j\) be the solution to (8.23)–(8.24) for a specific j. Then,
$$\begin{aligned} \lim _{\alpha \downarrow 0} \Bigl | \frac{\eta _i}{\nu } \Bigr | \le \max _{1 \le j,r+1 \le s} |\eta ^\mathrm{lim}_j(r)|. \end{aligned}$$
(8.25)
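Numerically, \(\tilde{\mathbf {h}}^\mathrm{lim}_j\) and \(\varvec{\eta }^\mathrm{lim}_j\) can be obtained by assembling (8.23)–(8.24) into a single \(2s \times 2s\) block system and solving it. The sketch below assumes all inputs (the matrices L, W and \(M^{(0,s-1)}\), and the right-hand sides of (8.23)–(8.24)) have already been computed from the definitions in the main text; every argument is a placeholder:

```python
import numpy as np

def solve_limit_system(L, W, M, v_minus, rho, s, q, rhs_top, w_j):
    """Solve the 2s x 2s block system (8.23)-(8.24):

        [ L                     v_minus * W ] [h_lim  ]   [rhs_top]
        [ (1+s)*rho*(1-q)*M        -W       ] [eta_lim] = [w_j    ]

    All arguments are precomputed numpy arrays/scalars; rhs_top is the
    right-hand side of (8.23).
    """
    A = np.block([[L, v_minus * W],
                  [(1 + s) * rho * (1 - q) * M, -W]])
    sol = np.linalg.solve(A, np.concatenate([rhs_top, w_j]))
    return sol[:s], sol[s:]  # (h_lim, eta_lim)
```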
This concludes the proof of part (c).