
Open Access | Published: 05 July 2020

Markovian bulk-arrival and bulk-service queues with general state-dependent control

Authors: Anyue Chen, Xiaohan Wu, Jing Zhang

Published in: Queueing Systems | Issue 3-4/2020


Abstract

We study a modified Markovian bulk-arrival and bulk-service queue incorporating general state-dependent control. The stopped bulk-arrival and bulk-service queue is first investigated, and the relationship between this stopped queue and the full queueing model is examined and exploited. Using this relationship, the equilibrium behaviour for the full queueing process is studied and the probability generating function of the equilibrium distribution is obtained. Queue length behaviour is also examined, and the Laplace transform of the queue length distribution is presented. The important questions regarding hitting times and busy period distributions are answered in detail, and the Laplace transforms of these distributions are presented. Further properties regarding the busy period distributions including expectation and conditional expectation of busy periods are also explored.

1 Introduction

Markovian queues occupy a significant niche in applied probability. Indeed, they play a very important role both in the development of general queueing models and in the theory and applications of continuous-time Markov chains. Good references, among many others, for the former are Asmussen [4], Gross and Harris [18], Kleinrock [23] and Medhi [27], and, for the latter, Anderson [1], Chung [12], Freedman [15] and Syski [36].
Within the framework of queueing theory, there are two particularly interesting topics which have attracted much attention. The first is bulk queues (including bulk arrivals and/or bulk service), which have wide and very important applications; their theory has attracted extensive attention and is well developed. Note that bulk-arrival (sometimes called batch-arrival) and bulk-service queues are commonplace in scenarios such as industrial assembly lines, road traffic flow and the movement of aircraft passengers, and thus the related models have extensive and important applications. An important reference on this topic is the monograph by Chaudhry and Templeton [7]. For advances in this topic, see Armero and Conesa [2], Arumuganathan and Ramaswami [3], Chang, Choi and Kim [6], Fakinos [14], Srinivasan, Renganathan and Kalyanaraman [33], Sumita and Masuda [35], Ushakumari and Krishnamoorthy [37], Stadje [34], V. Ramaswami [31], Lucantoni [25] and many others. The second topic is state-dependent input and output mechanisms, usually called state-dependent controls, which have also attracted considerable attention. For example, Chen and Renshaw [10, 11] considered models which allow the possibility of clearing the entire workload.
It seems that Chen et al. [8] were the first to combine these two modifications, interweaving the bulk-arrival and bulk-service queue with state-dependent control exerted either when the system is idle or when the waiting line is empty, thus generalising the Chen–Renshaw models [10, 11] and making them more relevant and applicable. See also Chen et al. [9].
The model discussed in Chen et al. [8] has close links with so-called negative arrivals. It seems to us that Gelenbe [16] and Gelenbe, Glynn and Sigman [17] first introduced the particularly useful concept of negative arrivals, and this was followed up by many other authors, including Bayer and Boxma [5], Harrison and Pitel [19], Henderson [20] and Jain and Sigman [22]. The model also has close theoretical links with the versatile Markovian arrival processes introduced by Neuts [28], which includes several kinds of batch-arrival processes. Additionally, Neuts [29] described a number of interesting batch-arrival models together with useful methods for analysing them. For further developments, see, for example, Lucantoni and Neuts [26], Nishimura and Sato [30] and Dudin and Nishimura [13].
However, the model discussed in Chen et al. [8] has the serious limitation that the control effect takes place only when the queue is empty or has a single customer, which prevents the model from having extensive applications. The main aim of this paper is to overcome this limitation. That is, the current paper combines the bulk-arrival and bulk-service mechanism with general state-dependent control. More specifically, based on the bulk-arrival and bulk-service structure, the control effect may take place at arbitrarily (but finitely) many states. This gives the model a much wider range of applications.
We now give a formal definition of our model. Our model is a Markovian one and is therefore specified by an infinitesimal q-matrix; see Asmussen [4] or Anderson [1]. The model discussed in this paper has an infinitesimal q-matrix \(Q=\{q_{ij};\, i,j \in Z_+\}\), where \(Z_+=\{0,1,2,\ldots \}\), which takes the following form: there exists a positive integer \(N\ge 1\) such that
$$\begin{aligned} q_{ij} =\left\{ \begin{array}{ll} h_{ij} &{}\quad \hbox {if} \ 0\le i\le N-1, \ j\ge 0 \\ b_{j-i+N} &{}\quad \hbox {if}\ i\ge N,\ j\ge i-N\\ 0 &{}\quad \text{ otherwise }, \end{array} \right. \end{aligned}$$
(1.1)
where
$$\begin{aligned}&h_{ij}\ge 0~(i\ne j),~0\le -h_{ii}=\sum _{j\ne i}h_{ij}<+\infty ~(0\le i\le N-1), \end{aligned}$$
(1.2)
$$\begin{aligned}&b_0>0, ~b_j\ge 0 ~(j\ne N), \sum _{j= N+1}^{\infty }b_j>0 \;\; \text{ and } \;\; 0<-b_N=\sum _{\begin{array}{c} j\ne N \end{array}}b_j<+\infty .\nonumber \\ \end{aligned}$$
(1.3)
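To fix ideas, the following is a minimal numerical sketch of how a (truncated) version of the q-matrix (1.1)–(1.3) can be assembled; the choices \(N=2\), the rates \(b_j\) and \(h_{ij}\), and the truncation level are purely illustrative assumptions, not data taken from the paper.

```python
import numpy as np

# Illustrative data (not from the paper): N = 2 controlled states, a finite
# truncation of the state space, and arbitrarily chosen rates.
N, size = 2, 30
b = {0: 2.0, 1: 1.0, 3: 1.0, 4: 0.5}          # b_j for j != N, as in (1.3)
b[N] = -sum(b.values())                       # b_N = -sum_{j != N} b_j, so B(1) = 0
h = {(0, 2): 1.5, (1, 3): 2.0}                # off-diagonal control rates h_{ij}, cf. (1.2)

Q = np.zeros((size, size))
for (i, j), rate in h.items():                # rows 0 <= i <= N-1: state-dependent control
    Q[i, j] = rate
for i in range(N):
    Q[i, i] = -Q[i].sum()                     # h_{ii} = -sum_{j != i} h_{ij}
for i in range(N, size):                      # rows i >= N: the bulk-arrival/bulk-service part
    for m, rate in b.items():
        if i - N + m < size:                  # rates beyond the truncation level are dropped
            Q[i, i - N + m] = rate            # q_{ij} = b_{j-i+N} for j >= i - N
print(Q[:6, :8])
```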
By the above definition, we see that the underlying structure of our model is essentially that of an \(M^X/M^Y/1\) queue. However, mainly with applications in mind, state-dependent input and output mechanisms have been incorporated into this underlying structure. Intuitively speaking, this extra structure indicates that when the queue length is less than some specific level, N say, the manager may wish, randomly, to move some “customers” or “workload”, stored somewhere else, into the system in order to speed up the work and thus make it more effective. However, again because of applications, we shall not only consider increasing the workload but also other kinds of intervention, and we therefore allow the extra structure to be arbitrary. This is exactly why we name this extra structure “state-dependent control”, even though this may not be the most appropriate terminology. It should be noted that, owing to this arbitrary effect, the underlying structure is seriously affected; in particular, the original “arrival process” and “service process” become closely interwoven and correlated with each other. This makes some traditional methods and techniques, including the powerful matrix-analytic method, less effective in analysing our current model. It should also be noted that our model is a Markovian one and thus possesses deeper properties than, for example, the M/G/1-type Markov chains. Being Markovian, it also admits more powerful methods and techniques, such as the Kolmogorov backward and forward equations and Itô’s excursion law, which enable us to obtain many more fruitful results than for, say, the M/G/1-type Markov chains.
Another reason for using the current approach is that the results obtained in this paper open the door and pave the way to the study of another advanced and extremely important topic, namely quasi-limiting distributions, including determining the decay parameter and the invariant measures and functions, which reveal deep properties regarding the transient behaviour of our queueing models. It is remarkable that the quasi-limiting behaviour is quite different for continuous-time and discrete-time processes. Therefore, as far as quasi-limiting distributions are concerned, the behaviour of our current model is also remarkably different from that of, say, the M/G/1-type Markov chains. We shall discuss this topic in a couple of subsequent papers.
The paper is organised as follows: In Sect. 2, we present a fundamental construction theorem together with some key lemmas. All later developments depend heavily on these results. Section 3 concentrates on the so-called stopped queue with both bulk arrivals and bulk service. A delicate structure is revealed. The results obtained in this section regarding stopped queues are not only of independent interest but also crucial for our later analysis. Section 4 concerns questions of recurrence and ergodicity, as well as equilibrium distributions. The generating function of the equilibrium distribution is derived, and the queue length distribution is also obtained in this section. In Sect. 5, the bulk-arrival and bulk-service queue stopped at the idle state is analysed in detail, which paves the way to studying the busy period distributions. In Sect. 6, the important hitting time distributions and busy period distributions are discussed; many important properties of these distributions are revealed and many deep results regarding them are presented. In the final Sect. 7, an example is provided to illustrate the conclusions obtained in the previous sections.

2 Preliminaries

We first pack our known data specified in (1.1)–(1.3) into a few generating functions. For any \(0 \le i\le N-1\), define
$$\begin{aligned} H_i(s)=\sum _{j=0}^{\infty }h_{ij}s^j~~(0\le i\le N-1) \end{aligned}$$
(2.1)
and
$$\begin{aligned} B(s)= \sum _{j=0}^{\infty }b_{j}s^j. \end{aligned}$$
(2.2)
Here we view B(s) and \(H_i(s)~(0\le i\le N-1)\) as complex functions. Note that B(s) and \(H_i(s)~(0\le i\le N-1)\) may have different radii of convergence. However, by conditions (1.2) and (1.3), they are all well defined at least on the closed unit disc \(\{s;|s|\le 1\}\) and analytic on the open unit disc \(\{s;|s|<1\}\). Considering that the q-matrix Q given in (1.1)–(1.3) is conservative and bounded, the corresponding Q-process is unique and is just the Feller minimal Q-process. It follows that the Feller minimal Q-resolvent \(R(\lambda )\) satisfies both the Kolmogorov backward and forward equations. Using the Kolmogorov forward equations, we immediately obtain the following construction theorem, which will be the starting point for our further analysis.
Theorem 2.1
For any \(i\ge 0\), the Feller minimal Q-resolvent \(R(\lambda )=\{r_{ij}(\lambda );\, i,j\in Z_+\}\) satisfies the equation
$$\begin{aligned} \sum _{j=0}^{\infty }r_{ij}(\lambda )s^j = \frac{\sum _{k=0}^{N-1}r_{ik}(\lambda )(B(s)s^k-H_k(s)s^{N})-s^{i+N}}{B(s)-\lambda s^N}, \end{aligned}$$
(2.3)
where B(s) and \(H_k(s)~(0\le k\le N-1)\) are defined in (2.2) and (2.1), respectively.
Proof
By the Kolmogorov forward equation \(\lambda R(\lambda )-I=R(\lambda )Q\), together with noting the form of Q given in (1.1), we immediately obtain that, for any \(i,j \in Z_+\),
$$\begin{aligned} \lambda r_{ij}(\lambda )-\delta _{ij}=\sum _{k=0}^{N-1}r_{ik}(\lambda )h_{kj} +\sum _{k=N}^{j+N}r_{ik}(\lambda )b_{j-k+N}. \end{aligned}$$
(2.4)
Multiplying by \(s^j\), where \(|s|<1\), on both sides of (2.4) and summing over j from 0 to \(\infty \) yields
$$\begin{aligned} \lambda \sum _{j=0}^{\infty }r_{ij}(\lambda )s^j-s^i=\sum _{j=0}^{\infty } \left( \sum _{k=0}^{N-1}r_{ik}(\lambda )h_{kj}\right) s^j+\sum _{j=0}^{\infty } \left( \sum _{k=N}^{j+N}r_{ik}(\lambda )b_{j-k+N}\right) s^j. \end{aligned}$$
By noting the definitions given in (2.1) and (2.2), we immediately obtain
$$\begin{aligned} \lambda \sum _{j=0}^{\infty }r_{ij}(\lambda )s^j-s^i=\sum _{k=0}^{N-1}r_{ik} (\lambda )H_k(s)+\frac{B(s)}{s^N}\sum _{k=N}^{\infty }r_{ik}(\lambda )s^k. \end{aligned}$$
(2.5)
Now (2.3) easily follows from (2.5), which ends the proof. \(\square \)
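As an informal check of Theorem 2.1, the sketch below compares both sides of (2.3) numerically. It assumes the same illustrative rates as in the sketch after (1.3) and approximates the Feller minimal Q-resolvent by the resolvent of a large finite truncation of Q, so the agreement is approximate (though the truncation error is negligible here).

```python
import numpy as np

# Same illustrative rates as before (not from the paper); finite truncation of Q.
N, size = 2, 400
b = {0: 2.0, 1: 1.0, 3: 1.0, 4: 0.5}; b[N] = -sum(b.values())
h = {(0, 2): 1.5, (1, 3): 2.0}

Q = np.zeros((size, size))
for (i, j), rate in h.items():
    Q[i, j] = rate
for i in range(N):
    Q[i, i] = -Q[i].sum()
for i in range(N, size):
    for m, rate in b.items():
        if i - N + m < size:
            Q[i, i - N + m] = rate

lam, s, i = 1.3, 0.4, 5                                    # arbitrary test values, |s| < 1
R = np.linalg.inv(lam * np.eye(size) - Q)                  # truncated approximation of R(lambda)

B = lambda z: sum(rate * z**m for m, rate in b.items())    # B(s) of (2.2)
H = lambda k, z: sum(Q[k, j] * z**j for j in range(size))  # H_k(s) of (2.1), read from row k

lhs = sum(R[i, j] * s**j for j in range(size))
rhs = (sum(R[i, k] * (B(s) * s**k - H(k, s) * s**N) for k in range(N)) - s**(i + N)) \
      / (B(s) - lam * s**N)
print(lhs, rhs)                                            # the two values nearly coincide
```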
By Theorem 2.1, particularly by (2.3), it is clear we need to define, for each \(\lambda >0\),
$$\begin{aligned} B_{\lambda }(s)=B(s)-\lambda s^N \end{aligned}$$
(2.6)
which is \(C^{\infty }\) at least on \((-1,1)\). Similarly to B(s), we view \(B_{\lambda }(s)\) as a complex function of s and note that \(B_\lambda (s)\) is well defined, at least on the closed unit disc \(\{s; |s|\le 1\}\), and is analytic on the open unit disc \(\{s; |s|< 1\}\).
We now provide a couple of fundamental lemmas which will be our stepping stones for further analysis. To give these lemmas, as well as the conclusions obtained thereafter, a probabilistic meaning, let
$$\begin{aligned} m_b=\sum _{j=N+1}^{\infty }(j-N)b_j \quad \text{ and } \quad m_d=\sum _{j=0}^{N-1}(N-j)b_j \end{aligned}$$
(2.7)
denote the mean “arrival” and “service” rates, respectively. Note that \(0< m_d< +\infty \) and \(0<m_b\le +\infty \). Clearly,
$$\begin{aligned} B'(1)=m_b-m_d~~\text {with}~~-\infty <B'(1)\le +\infty , \end{aligned}$$
(2.8)
which explains the probabilistic meaning of \(B'(1)\).
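For the illustrative coefficients used in the earlier sketches (an assumption, not data from the paper), the identities \(B(1)=0\) and \(B'(1)=m_b-m_d\) of (2.8) can be checked directly:

```python
import numpy as np

# Illustrative coefficients (not from the paper): N = 2 and b_0, ..., b_4 as above.
N = 2
b = np.array([2.0, 1.0, -4.5, 1.0, 0.5])
B = np.polynomial.Polynomial(b)                           # B(s) = sum_j b_j s^j, cf. (2.2)

m_b = sum((j - N) * b[j] for j in range(N + 1, len(b)))   # mean "arrival" rate, (2.7)
m_d = sum((N - j) * b[j] for j in range(N))               # mean "service" rate, (2.7)
print(B(1.0), B.deriv()(1.0), m_b - m_d)                  # B(1) = 0 and B'(1) = m_b - m_d
```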
The following conclusions are just corollaries of results obtained in Li and Chen [24].
Lemma 2.1
The equation \(B(s)=0\) has either N or \(N+1\) roots in the closed disc \(\{s;|s|\le 1\}\). Moreover, it has N roots if and only if \(B'(1)\le 0\). More specifically,
(i)
If \(B'(1)<0\)  (i.e. \(m_b<m_d\)), then 1 is the only real root on the interval [0, 1], it is a simple root, and, for any \(s\in [0,1)\), \(B(s)>0\).
 
(ii)
If \(B'(1)=0\) (i.e. \(m_b=m_d\)), then 1 is the only real root but is a “double” root (with multiplicity 2) on [0, 1] and, again, for any \(s\in [0,1)\), \(B(s)>0\).
 
(iii)
If \(B'(1)>0\) (i.e. \(m_d<m_b\le +\infty \)), then, in addition to the root 1, we have another positive root, denoted by u, such that \(B(s)>0\) for all \(s\in [0,u)\) and \(B(s)<0\) for all \(s\in (u,1)\).
 
(iv)
All the other \((N-1)\) roots of \(B(s)=0\) in \(\{s;|s|\le 1\}\) are either negative or complex conjugate roots. Also, their moduli are strictly less than the smallest positive root of \(B(s)=0\). That is, if \(B'(1)\le 0\), then for any such root, the modulus is strictly less than 1, while if \(B'(1)>0\), then all the moduli of these roots are less than u.
 
Lemma 2.2
For each fixed \(\lambda >0\), the equation \(B_{\lambda }(s)=0\) has exactly N roots on the open unit disc \(\{s;|s|<1\}\) and has no roots on the unit circle \(\{s;|s|=1\}\). Denote these N roots as \(u_0(\lambda ), u_1(\lambda ),\ldots , u_{N-1}(\lambda )\). Then
(i)
\(u_0(\lambda )\) is the only positive root of \(B_{\lambda }(s)=0\) on [0, 1] satisfying \(0<u_0(\lambda )<1\).
 
(ii)
All the other \(N-1\) roots are either negative or complex conjugate roots. Moreover, \(|u_i(\lambda )|<u_0(\lambda )\) \((1\le i\le N-1)\).
 
(iii)
All these N roots are continuous and \(C^{\infty }(0,\infty )\) functions of \(\lambda >0\). Also, \(\lim _{\lambda \rightarrow 0}u_i(\lambda )= u_i(0)\) \((0\le i\le N-1)\), where \(u_i(0)\) are the roots of \(B(s)=0\) as stated in the above Lemma 2.1. In particular, if \(B'(1)\le 0\), then \(\lim _{\lambda \rightarrow 0}u_0(\lambda )= u_0(0)=1\), while if \(B'(1)>0\), then \(\lim _{\lambda \rightarrow 0}u_0(\lambda )= u_0(0)=u<1\).
 
For the full proofs of these important conclusions, readers may consult Li and Chen [24]. Clearly, the key point in Lemma 2.2 is the fact that there exist exactly N roots in the open unit disc \(\{s;|s|<1\}\). This important fact can easily be proved by using Rouché's theorem: indeed, by Rouché's theorem we can show that \(B_{\lambda }(s)\) and \(f(s)=s^N\) have the same number of zeros within the open disc \(\{s;|s|<1\}\). For details, see Li and Chen [24].
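When B(s) is a polynomial, the root structure asserted in Lemma 2.2 is easy to inspect numerically. The sketch below uses the illustrative coefficients from the earlier examples (an assumption, not data from the paper) and counts the roots of \(B_\lambda (s)=0\) lying inside the unit disc.

```python
import numpy as np

# Illustrative coefficients (not from the paper): B(s) = 2 + s - 4.5s^2 + s^3 + 0.5s^4, N = 2.
N, lam = 2, 0.7
b = np.array([2.0, 1.0, -4.5, 1.0, 0.5])
c = b.copy(); c[N] -= lam                     # coefficients of B_lambda(s) = B(s) - lam * s^N

roots = np.roots(c[::-1])                     # np.roots expects the highest degree first
inside = roots[np.abs(roots) < 1]             # Lemma 2.2: exactly N of the roots satisfy |s| < 1
print(len(inside), inside)
u0 = max(r.real for r in inside if abs(r.imag) < 1e-9 and r.real > 0)
print("u_0(lambda) =", u0)                    # the unique positive root on (0, 1)
```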
The most important root is the positive root \(u_0(\lambda )\) of the equation \(B_{\lambda }(s)=0\) on [0, 1). Hence, in the following lemma, we concentrate on discussing important properties of \(u_0(\lambda )\). First note that \(u_0(\lambda )\) is defined only for \(\lambda >0\) at the current stage.
Lemma 2.3
The root \(u_0(\lambda )\) given in Lemma 2.2 has the following properties:
(i)
\(u_0(\lambda ) \in C^{\infty } (0,\infty )\);
 
(ii)
\(u_0(\lambda )\) is a decreasing function of \(\lambda >0\) and as \(\lambda \rightarrow \infty \) we have \(u_0(\lambda ) \downarrow 0\) and \(\lambda (u_0(\lambda ))^N\rightarrow b_0\).
 
(iii)
Let u denote the smallest real root of \(B(s)=0\) on [0, 1]. Then, as \(\lambda \rightarrow 0\),
$$\begin{aligned} u_0(\lambda ) \uparrow u \ \left\{ \begin{array}{ll} =1 &{} \quad \mathrm{{if}}\ m_{b} \le m_d \\ <1 &{} \quad \mathrm{{if}}\ m_{b}>m_d . \end{array} \right. \end{aligned}$$
(2.9)
 
(iv)
For any positive integer k
$$\begin{aligned} \lim _{\lambda \rightarrow 0} \frac{1-(u_0(\lambda ))^k}{\lambda } = \left\{ \begin{array}{ll} \infty &{}\quad {\mathrm{if}} \ m_{b} \ge m_d \\ k/(m_d-m_{b}) &{} \quad {\mathrm{if}} \ m_{b}<m_d . \end{array} \right. \end{aligned}$$
(2.10)
 
(v)
If \(m_{d}<m_b\le +\infty \) (and hence \(u<1\)), then, for any positive integer k,
$$\begin{aligned} \lim _{\lambda \rightarrow 0} \frac{1-(u_0(\lambda )/u)^{k}}{\lambda } = \frac{-ku^{N-1}}{B^{\prime }(u)}. \end{aligned}$$
(2.11)
 
(vi)
If \(m_{b}= m_d\) (and hence \(u=1\)), then, for any positive integer k,
$$\begin{aligned} \lim _{\lambda \rightarrow 0} \frac{1-(u_0(\lambda ))^{k}}{\sqrt{\lambda }} = \left\{ \begin{array}{ll} k\sqrt{\frac{2}{B^{{\prime \prime }}(1)}} &{} \quad {\mathrm{if}} \ B^{{\prime \prime }}(1) <\infty \\ 0 &{} \quad {\mathrm{if}} \ B^{{\prime \prime }}(1)=\infty . \end{array} \right. \end{aligned}$$
(2.12)
 
Remark 2.1
Note that B(s) and \(B^{\,\prime }(s)\) are both finite when \(-1< s <1\), and \(u_0(\lambda )\) is defined only for \(\lambda >0\). However, once Lemma 2.3 is proven, in particular if the conclusions (i)–(iii) are proved, then by defining \(u_0(0)=u_0=u\), we then obtain a continuous function on \([0,\infty )\). Similarly, for the other \(u_i(\lambda )~(1\le i\le N-1)\) we may get nearly the same results as \(u_0(\lambda )\) (see the remark below). Then by defining \(u_i(0)=u_i~(1\le i\le N-1)\), we may obtain the other \((N-1)\) continuous functions on \([0,\infty )\).
Proof
First, since \(u_0(\lambda )\) is the unique root of \(B_{\lambda }(s)=0\) on [0, 1], it can be viewed as the positive x-coordinate of the intersection point of the two curves \(y=B(x)\) and \(y=\lambda x^N\). Parts (ii) and (iii) then follow immediately because the latter curve is an increasing function of \(x \ge 0\). Part (i) is a direct consequence of (iii) of Lemma 2.2. Indeed, \(u_0(\lambda )\) is the root of the equation \(\lambda =s^{-N}B(s)\), and so \(\lambda \), as a function of \(s \in (0,1]\), is \(C^{\infty }\). Hence the inverse function \(u_0(\lambda )\) belongs to \(C^{\infty }(0,\infty )\). We have thus proved (i)–(iii).
To prove the other parts, first note that, by the already proven (2.9) and on writing \(u_0=u_0(0)\), we have
$$\begin{aligned} u_0(\lambda )-u_0 = \lambda u_0^{\,\prime }(\xi ) \qquad (0<\xi <\lambda ), \end{aligned}$$
(2.13)
since \(u_0(\lambda )\) is differentiable for \(\lambda >0\). Also, since \(u_0(\lambda )\) is a decreasing function of \(\lambda >0\), we have \(u_0'(\xi )<0\) for any \(\xi \in (0,\lambda )\). Now, since \(m_{b}> m_d\) implies \(u<1\) (refer to (2.9)), and hence \(1-(u_0(\lambda ))^k\rightarrow 1-u^k>0\) as \(\lambda \rightarrow 0\), we see that (2.10) holds true if \(m_b > m_d\). Hence we need only consider the case \(m_{b}\le m_d\), in which case (2.13) can be written as
$$\begin{aligned} (1-u_0(\lambda ))/\lambda = -u_0^{\prime }(\xi ) \qquad (0<\xi <\lambda ). \end{aligned}$$
(2.14)
Whence, on considering \(u_0(\lambda )\) as the root of \(B_{\lambda }(s)=0\), we get \(B(u_0(\lambda ))=\lambda (u_0(\lambda ))^N\). Differentiating with respect to \(\lambda >0\) then yields
$$\begin{aligned} B^{\,\prime }(u_0(\lambda ))u_0^{\prime } (\lambda )-N\lambda u_0^{N-1}(\lambda )u_0^{\prime }(\lambda )=(u_0(\lambda ))^N. \end{aligned}$$
(2.15)
Letting \(\lambda \rightarrow 0\), and noting that \(u_0(\lambda )\) and \(B^{\prime }(u_0(\lambda ))\) are continuous functions of \(\lambda \) on \([0,\infty )\) (see Remark 2.1 and noting that we have already proved (i)–(iii)) and that \(\lambda u_0^N(\lambda )\) tends to 0 as \(\lambda \rightarrow 0\) leads us to conclude that
$$\begin{aligned} \lim _{ \lambda \rightarrow 0}~ [B^{\prime }(u_0(\lambda ))u_0^{\prime }(\lambda ) ]= u^N, \end{aligned}$$
(2.16)
which is true for all cases.
Note that the right-hand side of the above (2.16) is a finite positive value and thus so is the left-hand side of (2.16). However, \(B'(s)\) is a continuous function of s, at least for \(0\le s< 1\), and \(\lim \limits _{\lambda \rightarrow 0}B'(u_0(\lambda ))=B'(u)\), which is zero if and only if \(u=1\) and \(B'(1)=0\), i.e. if and only if \(m_d=m_b\). Hence, by (2.16) we get the conclusion that if \(m_d>m_b\), then
$$\begin{aligned} u'(0)~\hat{=}~\lim \limits _{\lambda \rightarrow 0}u_0'(\lambda )=\frac{1}{B'(1)}=\frac{1}{m_b-m_d}<0, \end{aligned}$$
(2.17)
while if \(m_d<m_b\le +\infty \), then, again by (2.16),
$$\begin{aligned} u'(0)~\hat{=}~\lim _{\lambda \rightarrow 0}u_0'(\lambda )=\frac{u^N}{B'(u)}<0, \end{aligned}$$
(2.18)
In (2.18), \(0<u<1\) is the smallest positive root of \(B(s)=0\) as defined before. Finally, if \(m_d=m_b\), then by (2.16)
$$\begin{aligned} \lim _{\lambda \rightarrow 0}~[B'(u_0(\lambda ))\cdot u_0'(\lambda ) ]=1 \end{aligned}$$
(2.19)
and, thus since \(B'(u_0(\lambda ))\) tends to \(B'(1)=0\), we know that if \(m_d=m_b\) then
$$\begin{aligned} u'(0)~\hat{=}~\lim _{\lambda \rightarrow 0}u_0'(\lambda )=-\infty , \end{aligned}$$
(2.20)
which is a consequence of (2.19) together with the fact that \(B'(u_0(\lambda ))\) is negative when \(\lambda \downarrow 0.\) Now, \(m_b\le m_d\) implies \(u_0=1\), and combining this fact with (2.14) and noting that \(B^{\,\prime }(1)=m_b-m_d\) then yields
$$\begin{aligned} \lim _{\lambda \rightarrow 0} \frac{1}{\lambda }\left( 1-u_0(\lambda )\right) = \left\{ \begin{array}{ll} 1/(m_d-m_{b}) &{}\quad \text{ if } \ m_{b}< m_d \\ \infty &{}\quad \text{ if } \ m_{b}=m_d\, , \end{array} \right. \end{aligned}$$
(2.21)
and so (2.10) holds true for \(k=1\). Now, for \(m_{b}<m_d\), we may rewrite (2.21) as \(u_0(\lambda ) = 1 - \lambda /(m_d-m_{b}) + o(\lambda )\). Hence, for any positive integer k, \((u_0(\lambda ))^{k} = 1 - k\lambda /(m_d-m_{b}) + o(\lambda )\) and so (2.10) follows. This completes the proof of (iv).
Turning to (v), since \(m_d<m_b\le +\infty \) and hence \(u_0<1\), both \(B^{\,\prime }(u_0)\) and \(B^{\,{\prime \prime }}(u_0)\) are finite. Now by (2.18) we see that the finiteness of \(B'(u)\) implies \(u'(0)\) is a nonzero and finite value and thus we have
$$\begin{aligned} u_0(\lambda )=u_0+u'(0)\lambda +o(\lambda ) \end{aligned}$$
(2.22)
and, by using (2.22) together with (2.18), we get that, for any positive integer k,
$$\begin{aligned} (u_0(\lambda ))^{k} = u_0^{k} + \frac{k\lambda u_0^{N+k-1}}{B^{\,\prime }(u_0(\lambda ))} + o(\lambda ) , \end{aligned}$$
from which (2.11) follows by also noting \(u_0=u\), which ends the proof of (v).
We now proceed to the subtle case where \(m_d=m_b\). Recall if \(m_d=m_b\) then B(1) and \(B'(1)\) are both zero, and thus, if we assume that \(B''(1)\) is finite, then we have, for \(0<s<1\),
$$\begin{aligned} B(s)=\frac{B''(1)}{2}(s-1)^2+o((s-1)^2). \end{aligned}$$
Letting \(s=u_0(\lambda )\) for \(\lambda >0\) in the above and noting that \(0<u_0(\lambda )<1\), we get
$$\begin{aligned} B(u_0(\lambda ))=\frac{1}{2}(1-u_0(\lambda ))^2B^{{\prime \prime }}(1)+ o((1-u_0(\lambda ))^2). \end{aligned}$$
It then follows that
$$\begin{aligned} \frac{1-u_0(\lambda )}{\sqrt{\lambda }}=\left( \frac{2}{B''(1)} \cdot \frac{B(u_0(\lambda ))}{\lambda }+\frac{o((1-u_0(\lambda ))^2)}{\lambda } \right) ^\frac{1}{2} \end{aligned}$$
(2.23)
by noting that \(0<B''(1)<+\infty \). Now letting \(\lambda \rightarrow 0\) in the above (2.23) we see that, firstly,
$$\begin{aligned} \lim _{\lambda \rightarrow 0}~\frac{o((1-u_0(\lambda ))^2)}{\lambda }=0 \end{aligned}$$
and, secondly, by noting that \(\lim \limits _{\lambda \rightarrow 0}B(u_0(\lambda ))=B(1)=0\) and using L'Hôpital's rule, we obtain
$$\begin{aligned} \lim \limits _{\lambda \rightarrow 0}~\frac{B(u_0(\lambda ))}{\lambda }=\lim _{\lambda \rightarrow 0}~B'(u_0(\lambda ))\cdot u_0'(\lambda ), \end{aligned}$$
which is just one; see (2.19).
Hence, we obtain
$$\begin{aligned} \lim _{\lambda \rightarrow 0}~\frac{1-u_0(\lambda )}{\sqrt{\lambda }}=\sqrt{\frac{2}{B^{{\prime \prime }}(1)}}, \end{aligned}$$
(2.24)
and we have thus proved (2.12) for the case \(k=1\). For the general case \(k>1\), we may use (2.23) together with some easy algebra to show that (2.12) holds true for any \(k\ge 1\), which finishes the proof of (vi) and thus of the whole of Lemma 2.3. \(\square \)
Remark 2.2
Note that in proving properties of \(u_0(\lambda )\) in Lemma 2.3, the only condition we have used is that \(u_0(\lambda )\) is a zero of \(B_\lambda (s)\). Hence, all the conclusions, particularly statements analogous to (ii)–(iii), hold true for all the other zeros of \(B_\lambda (s)\), i.e. \(u_i(\lambda )~(1\le i\le N-1)\) given in Lemma 2.2.
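The asymptotics of \(u_0(\lambda )\) in Lemma 2.3 can likewise be checked numerically. The sketch below assumes the illustrative coefficients used earlier (for which \(m_d-m_b=3\)) and uses SciPy's brentq root finder; the last printed column should approach \(1/(m_d-m_b)=1/3\), in line with (2.10) for \(k=1\).

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative coefficients (not from the paper): N = 2, m_d = 5, m_b = 2, so m_d - m_b = 3.
N = 2
b = np.array([2.0, 1.0, -4.5, 1.0, 0.5])
B = np.polynomial.Polynomial(b)

def u0(lam):
    # Unique root of B_lambda(s) on (0, 1): B_lambda(0) = b_0 > 0 and B_lambda(1) = -lam < 0.
    return brentq(lambda s: B(s) - lam * s**N, 0.0, 1.0)

for lam in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(lam, u0(lam), (1 - u0(lam)) / lam)  # last column -> 1/(m_d - m_b), cf. (2.10)
```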

3 The stopped bulk-arrival and bulk-service queue

In this section, we assume that all of the first N states are absorbing. That is, we assume \(h_{ij}\equiv 0\) \((0\le i\le N-1,~j\ge 0)\) and thus \(H_i(s)\equiv 0\) \((0\le i\le N-1)\). The corresponding q-matrix is denoted by \(Q^*\). There are two main reasons for studying the \(Q^*\)-process first. On the one hand, the properties of the \(Q^*\)-process will serve as a tool in investigating the main model, which will be discussed in detail in the following sections; on the other hand, the \(Q^*\)-process is of interest in its own right, since it can be viewed as a generalised Markov branching process rather than a queueing model. Precisely because of this latter reason, we shall use some notation and terminology, such as extinction probabilities, within this section. For both reasons, we are interested in obtaining closed forms of the Feller minimal \(Q^{*}\)-resolvent \(\{\phi _{ik}^{*}(\lambda )\}\).
By Lemma 2.2 the equation \(B_\lambda (s)=0\) has exactly N roots on the open unit disc \(\{s; |s|< 1\}\), denoted by \(\{u_0(\lambda ), u_1(\lambda ),\ldots ,u_{N-1}(\lambda )\}\), with \(u_0(\lambda )\) as the unique positive root on (0, 1). We now use the N-dimensional column vector \(\mathbf{U} (\lambda )\) to denote these N roots, i.e.,
$$\begin{aligned} \mathbf{U} (\lambda )=(u_0(\lambda ), u_1(\lambda ), \ldots , u_{N-1}(\lambda ))^\mathrm{T}, \end{aligned}$$
(3.1)
where T denotes the transpose. Also, for any non-negative integer \(k\ge 0\), denote
$$\begin{aligned} \mathbf{U} ^k(\lambda )=((u_0(\lambda ))^k, (u_1(\lambda ))^k, \ldots , (u_{N-1}(\lambda ))^k)^\mathrm{T}. \end{aligned}$$
(3.2)
Of course \(\mathbf{U} ^{1}(\lambda )=\mathbf{U} (\lambda )\) and \(\mathbf{U} ^0(\lambda )=\mathbf{1 }\); here \(\mathbf{1 }\) denotes the column vector whose components are all one. Similarly, we let
$$\begin{aligned} \mathbf{U}(0) =(u_0(0), u_1(0), \ldots , u_{N-1}(0))^\mathrm{T} \end{aligned}$$
(3.3)
denote the N roots of the equation \(B(s)=0\) on the unit closed disc \(\{s; |s|\le 1\}\) as Lemma 2.1 shows. In fact, we have \(\lim _{\lambda \rightarrow 0}{} \mathbf{U} (\lambda )=\mathbf{U} (0)\). In many cases, we shall just denote \(\mathbf{U }=\mathbf{U} (0)\), or in component form,
$$\begin{aligned} \mathbf{U }=(u_0, u_1,\ldots , u_{N-1})^\mathrm{T} . \end{aligned}$$
(3.4)
Note, however, that if \(m_d<m_b\le +\infty \), then \(u_0(0)=u<1\) and the trivial root 1 is not included in (3.3). As a result, all the components of \(\mathbf{U} (0)\equiv \mathbf{U }\) are distinct. It is also convenient to denote
$$\begin{aligned} \hat{\mathbf{U }}(\lambda )=(u_1(\lambda ),\ldots , u_{N-1}(\lambda ))^\mathrm{T} . \end{aligned}$$
(3.5)
That is, \({\hat{\mathbf{U }}}(\lambda )\) is just \(\mathbf{U} (\lambda )\) with its first component removed. Similarly, \(\hat{\mathbf{U }}(0)~\hat{=}~\hat{\mathbf{U }}\) is the column vector obtained by removing the first component of \(\mathbf{U }\). Finally, the determinant of the \(N\times N\) matrix \((\mathbf{1 },\mathbf{U} (\lambda )\), \(\mathbf{U} \,^2(\lambda ),\ldots ,\mathbf{U} \,^{N-1}(\lambda )\)) will be denoted by
$$\begin{aligned} \Delta (\lambda )=|\mathbf{1 },\mathbf{U} (\lambda ), \mathbf{U} \,^2(\lambda ),\ldots ,\mathbf{U} \,^{N-1}(\lambda ) |\equiv |\mathbf{U} \,^0(\lambda ),\mathbf{U} (\lambda ), \mathbf{U} \,^2(\lambda ),\ldots ,\mathbf{U} \,^{N-1}(\lambda )|.\nonumber \\ \end{aligned}$$
(3.6)
By applying properties regarding determinants, it is easy to see that \(\Delta (\lambda )\) defined in the above (3.6) can be rewritten in the following forms:
$$\begin{aligned} \Delta (\lambda )&=|\mathbf{1 }-\mathbf{U} (\lambda ), \mathbf{U} (\lambda )-\mathbf{U} \,^2(\lambda ),\mathbf{U} \,^2(\lambda ) -\mathbf{U} \,^3(\lambda ),\ldots , \mathbf{U} \,^{N-1}(\lambda )-\mathbf{U} \,^{N-2}(\lambda ), \mathbf{U} \,^{N-1}(\lambda ) | \nonumber \\&=|\mathbf{1 }-\mathbf{U} (\lambda ),\mathbf{1 }-\mathbf{U} \,^2(\lambda ), \mathbf{1 }-\mathbf{U} \,^3(\lambda ),\ldots , \mathbf{1 }-\mathbf{U} \,^{N-1}(\lambda ),\mathbf{1 }|. \end{aligned}$$
(3.7)
Also, for any \(i\ge N\) and \(0\le k\le N-1\), denote
$$\begin{aligned} \Delta ^{(i)}(\lambda )=|\mathbf{U} \,^0(\lambda ),\mathbf{U} \,^1(\lambda ), \cdots ,\mathbf{U} \,^{k-1}(\lambda ),\mathbf{U} \,^{i}(\lambda ), \mathbf{U} \,^{k+1}(\lambda ),\cdots ,\mathbf{U} \,^{N-1}(\lambda )|. \end{aligned}$$
(3.8)
By comparing the forms of \(\Delta (\lambda )\) and \(\Delta ^{(i)}(\lambda )\) given in (3.6) and (3.8), we see that they are the same except that the column \(\mathbf{U} \,^{k}(\lambda )\) of the former is replaced by \(\mathbf{U} \,^{i}(\lambda )\) in the latter.
Keeping the above notation in mind, we may claim the following simple yet interesting result.
Theorem 3.1
Let \(Q^{*}\) be the q-matrix given in (1.1)–(1.3) together with the conditions that \(h_{ij}\equiv 0\) \((0\le i\le N-1, j\in Z_+)\) and let \(\Phi ^{*}(\lambda )=\{\phi _{ij}^{*}(\lambda ); i,j\in Z_+\}\) be the (unique) Feller minimal \(Q^{*}\)-resolvent. Then, for \(0\le i \le N-1\) and \(\lambda >0\),
$$\begin{aligned} \phi _{ij}^{*}(\lambda )=\frac{1}{\lambda }\delta _{ij} \qquad (0\le i\le N-1,~j\in Z_+) \end{aligned}$$
(3.9)
and, for \(i\ge N\) and \(\lambda >0\),
$$\begin{aligned} \sum _{j=0}^{\infty }\phi _{ij}^{*}(\lambda )s^j=\frac{B(s) \sum _{k=0}^{N-1}\phi _{ik}^{*}(\lambda )s^k-s^{i+N}}{B(s)-\lambda s^N}, \end{aligned}$$
(3.10)
where \(\{\phi _{ik}^{*}(\lambda );~i\ge N,~0\le k \le N-1\}\) on the right-hand side of the above (3.10) are given by
$$\begin{aligned} \lambda \phi _{ik}^{*}(\lambda )=\frac{\begin{vmatrix} \mathbf{1 }&\mathbf{U} (\lambda )&\ldots&\mathbf{U} \,^{k-1}(\lambda )&\mathbf{U} \,^{i}(\lambda )&\mathbf{U} \,^{k+1}(\lambda )&\ldots&\mathbf{U} \,^{N-1}(\lambda ) \end{vmatrix}}{\begin{vmatrix} \mathbf{1 }&\mathbf{U} (\lambda )&\ldots&\mathbf{U} \,^{k-1}(\lambda )&\mathbf{U} \,^{k}(\lambda )&\mathbf{U} \,^{k+1}(\lambda )&\ldots&\mathbf{U} \,^{N-1}(\lambda ) \end{vmatrix}}=\frac{\Delta ^{(i)}(\lambda )}{\Delta (\lambda )}.\nonumber \\ \end{aligned}$$
(3.11)
Proof
(3.9) follows from the fact that each state of \(\{0,1,\ldots ,N-1\}\) is absorbing. (3.10) is simply a consequence of (2.3) in Theorem 2.1 together with the fact that all \(H_k(s)\equiv 0\) \((0\le k\le N-1)\) due to the assumptions \(h_{ij}\equiv 0\) \((0\le i\le N-1,~j \in Z_+)\). Hence we only need to show that (3.11) is true. However, this is easy. Indeed, by Lemma 2.2, \(B_{\lambda }(s)=0\) has a root \(u_0(\lambda )\) in (0, 1). Since \(0<u_0(\lambda )<1\), it is clear that \(\sum _{j=0}^{\infty }\phi ^*_{ij}(\lambda )s^j\) is well defined at \(s=u_0(\lambda )\). It follows from (3.10) that the right-hand side of (3.10) is also well defined for \(s=u_0(\lambda )\). But \(u_0(\lambda )\) is a zero of \(B_{\lambda }(s)\) and thus the denominator of the right-hand side of (3.10) is zero. It follows that \(u_0(\lambda )\) must also be a zero of the numerator of the right-hand side of (3.10). Therefore, we have
$$\begin{aligned} B(u_0(\lambda ))\left( \sum _{k=0}^{N-1}\phi _{ik}^{*}(\lambda )(u_0(\lambda ))^k\right) - (u_0(\lambda ))^{i+N} = 0, \end{aligned}$$
and hence
$$\begin{aligned} \sum _{k=0}^{N-1}\phi _{ik}^{*}(\lambda )(u_0(\lambda ))^k =\frac{(u_0(\lambda ))^{i+N}}{B(u_0(\lambda ))}. \end{aligned}$$
But, \(u_0(\lambda )\) is a root of \(B_{\lambda }(s)=0\) and thus \(B(u_0(\lambda ))=\lambda (u_0(\lambda ))^N\), and then, by noting we use \(u_0^i(\lambda )\) to denote \( (u_0(\lambda ))^i\), we obtain
$$\begin{aligned} \sum _{k=0}^{N-1}\phi _{ik}^{*}(\lambda )u_0^k(\lambda ) = \frac{1}{\lambda } u_0^{i}(\lambda ). \end{aligned}$$
(3.12)
Similarly, since each \(u_{l}(\lambda )\)  \((l=1,2,\ldots ,N-1)\) is a root of \(B_{\lambda }(s)=0\) on the open unit disc \(\{s;~|s|<1\}\), the same argument yields
$$\begin{aligned} \sum _{k=0}^{N-1}\phi _{ik}^{*}(\lambda )u_l^k(\lambda ) =\frac{1}{\lambda } u_l^{i}(\lambda ) \qquad (1 \le l\le N-1). \end{aligned}$$
(3.13)
By (3.12) and (3.13), we immediately obtain the first equality in (3.11). The second equality follows from the notation in (3.6) and (3.8). This ends the proof. \(\square \)
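Formula (3.11) can be tested against a direct computation with a truncated \(Q^*\). The sketch below assumes the subcritical illustrative rates used earlier and a finite truncation of the state space approximating the Feller minimal \(Q^*\)-resolvent; the determinant ratio and \(\lambda \phi ^*_{nk}(\lambda )\) should nearly coincide.

```python
import numpy as np

# Illustrative stopped queue (not from the paper): N = 2, h_{ij} = 0, finite truncation.
N, size, lam, n = 2, 400, 1.0, 7
b = np.array([2.0, 1.0, -4.5, 1.0, 0.5])

Q = np.zeros((size, size))                    # rows 0..N-1 stay zero (absorbing states)
for i in range(N, size):
    for m, rate in enumerate(b):
        if i - N + m < size:
            Q[i, i - N + m] = rate
R = np.linalg.inv(lam * np.eye(size) - Q)     # truncated Q*-resolvent

c = b.copy(); c[N] -= lam
U = np.roots(c[::-1]); U = U[np.abs(U) < 1 - 1e-6]   # the N roots of B_lambda(s) = 0 in |s| < 1

V = np.vander(U, N, increasing=True)          # columns U^0, U^1, ..., U^{N-1}
for k in range(N):
    Vk = V.copy(); Vk[:, k] = U**n            # replace the kth column by U^n, as in (3.11)
    print((np.linalg.det(Vk) / np.linalg.det(V)).real, lam * R[n, k])
```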
Remark 3.1
Note that the coefficient determinant \(\Delta (\lambda )\) of the linear system (3.12)–(3.13) is just a Vandermonde determinant, and thus \(\Delta (\lambda )=\prod _{0\le i< j\le N-1}(u_j(\lambda )-u_i(\lambda ))\). Now, the solution of the linear system (3.12)–(3.13) is just \(\{ \lambda \phi _{ik}^{*}(\lambda );~0\le k\le N-1\}\), which is, of course, well defined. This implies that \(\Delta (\lambda )\ne 0\). It follows that the N roots of the equation \(B_\lambda (s)=0\) are all distinct.
It is interesting to note that (3.10) and (3.11) provide a perfect solution to our process, since the whole resolvent can be simply expressed by the N roots of the known equation \(B(s)-\lambda s^N=0\). Therefore the expressions (3.10) and (3.11) will be extremely helpful in analysing the properties of the \(Q^{*}\)-process. In particular, let \(\{Z^{*}_{t},\, t\ge 0\}\) be the \(Q^{*}\)-process. Define the hitting time to state i \((i=0,1,\ldots , N-1)\) as \(\tau ^{*}_{i} = \inf \{ t>0: Z^{*}_{t}=i \}\) if \(Z^{*}_{t}=i\) for some \(t>0\), and \(\tau ^{*}_{i}=\infty \) if \(Z^{*}_{t}\ne i\) for all \(t>0\). Also, let \(\tau ^{*}:=\wedge _{k=0}^{N-1} \tau ^{*}_k\) denote the “overall” hitting time (the hitting time to the absorbing set \(\{0,1,\ldots , N-1\}\) ). We now set about obtaining these important quantities. We first claim the following simple lemma.
Lemma 3.1
For any \(n\ge N\) we have
$$\begin{aligned} \Delta (\lambda ) \cdot \lambda \phi _{n0}^{*}(\lambda )=\begin{vmatrix} \mathbf{U} ~^{n}(\lambda ),&\mathbf{U} (\lambda ),&\mathbf{U} \,^2(\lambda ),&\ldots ,&\mathbf{U} \,^{N-2}(\lambda ),&\mathbf{U} \,^{N-1}(\lambda ) \end{vmatrix}. \end{aligned}$$
(3.14)
Moreover, for any \(1\le k\le N-1\), we have
$$\begin{aligned}&\Delta (\lambda )\cdot \sum _{l=0}^k \lambda \phi _{nl}^{*}(\lambda )\nonumber \\&\quad =\begin{vmatrix} \mathbf{1 }-\mathbf{U} (\lambda ),&\mathbf{U} (\lambda )-\mathbf{U} \,^2(\lambda ),&\ldots ,&\mathbf{U} \,^{k-1}(\lambda )-\mathbf{U} \,^{k}(\lambda ),&\mathbf{U} \,^{n}(\lambda ),&\mathbf{U} \,^{k+1}(\lambda ),&\ldots ,&\mathbf{U} \,^{N-1}(\lambda ) \end{vmatrix} \end{aligned}$$
(3.15)
and, in particular,
$$\begin{aligned} \Delta (\lambda )\cdot \sum _{l=0}^{N-1} \lambda \phi _{nl}^{*}(\lambda )=\begin{vmatrix} \mathbf{1 }-\mathbf{U} (\lambda ),&\mathbf{U} (\lambda ) -\mathbf{U} \,^2(\lambda ),&\ldots ,&\mathbf{U} \,^{N-2}(\lambda ) -\mathbf{U} \,^{N-1}(\lambda ),&\mathbf{U} \,^{n}(\lambda ) \end{vmatrix}. \end{aligned}$$
(3.16)
Proof
First note that (3.14) is just a rewritten form of the proved (3.11) by letting \(i=n\) and \(k=0\). We now use mathematical induction to prove (3.15). By using (3.14) and (3.11), we obtain
$$\begin{aligned}&\Delta (\lambda )(\lambda \phi _{n0}^{*}(\lambda )+\lambda \phi _{n1}^{*}(\lambda )) \\&\quad =|\mathbf{U} \,^n(\lambda ),\mathbf{U} (\lambda ), \mathbf{U} \,^2(\lambda ),\ldots ,\mathbf{U} \,^{N-1}(\lambda )|+ |\mathbf{1 },\mathbf{U} \,^n(\lambda ),\mathbf{U} \,^2(\lambda ),\ldots , \mathbf{U} \,^{N-1}(\lambda )| \qquad \\&\quad =|-\mathbf{U} (\lambda ),\mathbf{U} \,^n(\lambda ), \mathbf{U} \,^2(\lambda ),\ldots ,\mathbf{U} \,^{N-1}(\lambda )|+ |\mathbf{1 },\mathbf{U} \,^n(\lambda ),\mathbf{U} \,^2(\lambda ),\ldots , \mathbf{U} \,^{N-1}(\lambda )|\\&\quad =|\mathbf{1 }-\mathbf{U} (\lambda ),\mathbf{U} \,^n(\lambda ), \mathbf{U} \,^2(\lambda ),\ldots ,\mathbf{U} \,^{N-1}(\lambda )|, \end{aligned}$$
and hence (3.15) is true for \(k=1\).
Now, suppose that (3.15) is true for some k, where \(1\le k \le N-2\). Then by using properties of determinants together with some easy algebra, it is easily seen that (3.15) is true for \(k+1\). Therefore (3.15) is true for any \(n\ge N\) and any \(1\le k\le N-1\). In particular, (3.16) is true since it is a consequence of (3.15). \(\square \)
As a consequence of Lemma 3.1, we have the following corollary which can be more easily applied to our later analysis.
Corollary 3.1
Let \(\Phi ^{*}(\lambda )=\{\phi _{ij}^{*}(\lambda ); i,j\in Z_+\}\) be the \(Q^{*}\text {-}\)resolvent. Then, for any \(n\ge N\), we have
$$\begin{aligned} \lambda \sum _{k=0}^{N-1}\phi _{nk}^{*}(\lambda )=\frac{|\mathbf{1 }-\mathbf{U}(\lambda ) ,\mathbf{1 }-\mathbf{U}\,^2(\lambda ) ,\ldots , \mathbf{1 }-\mathbf{U}\,^{N-1}(\lambda ) ,\mathbf{U}\,^{n}(\lambda ) |}{|\mathbf{1 }-\mathbf{U}(\lambda ) ,\mathbf{1 }-\mathbf{U}\,^2(\lambda ) ,\ldots , \mathbf{1 }-\mathbf{U}\,^{N-1}(\lambda ) ,\mathbf{1 }|}. \end{aligned}$$
(3.17)
Proof
By noting (3.16) and (3.6), we get that, for all \(n\ge N\),
$$\begin{aligned} \lambda \sum _{k=0}^{N-1}\phi _{nk}^{*}(\lambda )=\frac{|\mathbf{1 }-\mathbf{U} (\lambda ),\,\mathbf{U} (\lambda )-\mathbf{U} \,^2(\lambda ),\ldots , \mathbf{U} \,^{N-2}(\lambda )-\mathbf{U} \,^{N-1}(\lambda ),\,\mathbf{U} \,^{n}(\lambda )|}{|\mathbf{1 },\,\mathbf{U} (\lambda ),\ldots ,\mathbf{U} \,^{N-2}(\lambda ),\,\mathbf{U} \,^{N-1}(\lambda )|}.\nonumber \\ \end{aligned}$$
(3.18)
Now (3.17) follows by noting (3.7). \(\square \)
We are now ready to reveal properties of the \(Q^{*}\)-process. Let \(P^{*}(t)=\{p_{ij}^{*}(t);\,i,j\ge 0\}\) be the \(Q^{*}\)-function. Thus, for any \(n\ge N\), \(0\le k\le N-1\), \(p_{nk}^{*}(t)\) is just the cumulative distribution function of \(\tau _k^{*}\) given that the \(Q^{*}\text {-}\)process starts at state \(n\ge N\). That is,
$$\begin{aligned} \phi ^*_{nk}(\lambda )=\int _0^{\infty }e^{-\lambda t}{\mathbb {P}}\{\tau _k^{*}\le t|Z^{*}_0=n\}\hbox {d}t=\int _0^{\infty }e^{-\lambda t}{\mathbb {P}}_n\{\tau _k^{*}\le t\}\hbox {d}t. \end{aligned}$$
Here we have denoted \({\mathbb {P}}\{\tau _k^{*}\le t|Z_0^{*}=n\}\) as \({\mathbb {P}}_n\{\tau _k^{*}\le t\}\).
Let \(a_{nk}^{*}={\mathbb {P}}_n\{\tau _k^{*}< \infty \}\) \((n\ge N;~0\le k\le N-1)\); thus \(a_{nk}^{*}\) is the hitting probability of state \(k~(0\le k\le N-1)\), given that the process \(\{Z_t^{*}(\omega );~t\ge 0\}\) starts at the state \(n\ge N\). We shall use terms such as “extinction probabilities” as though the \(Q^{*}\text {-}\)process were some kind of branching process. Similarly, let \(a_{n}^{*}={\mathbb {P}}_n\{\tau ^{*}< \infty \}\) be the overall “extinction probability” to the “extinction set” \(\{0,1,2,\ldots ,N-1\}\) given that the \(Q^{*}\text {-}\)process starts at state \(n \ge N\). We may now claim the following important conclusion by noting that \(\sum _{j=0}^{\infty }\phi ^* _{ij}(\lambda )s^j\) is the Laplace transform of the generating function of the transition probabilities \(p_{ij}^{*}(t)\); i.e.,
$$\begin{aligned} \sum _{j=0}^{\infty }\phi ^* _{ij}(\lambda )s^j=\sum _{j=0}^{\infty } \left( \int _0^{\infty }e^{-\lambda t}p_{ij}^{*}(t)dt\right) s^{j}. \end{aligned}$$
Theorem 3.2
Let \(\{Z_t^{*}(\omega );t\ge 0\}\) be the \(Q^{*}\text {-}\)process. Then the overall extinction probabilities \(\{a_n^{*};n\ge N\}\) and the extinction probabilities \(\{a_{nk}^{*};n\ge N,0\le k\le N-1\}\) are given as follows:
$$\begin{aligned} a_{nk}^{*}=\frac{|\mathbf{1 },\mathbf{U },\mathbf{U} \,^2,\ldots , \mathbf{U} \,^{k-1},\mathbf{U} \,^{n}, \mathbf{U} \,^{k+1},\ldots ,\mathbf{U} \,^{N-1}|}{|\mathbf{1 }, \mathbf{U },\mathbf{U} \,^2,\ldots ,\mathbf{U} \,^{k-1},\mathbf{U} \,^{k}, \mathbf{U} \,^{k+1},\ldots ,\mathbf{U} \,^{N-1}|}, \end{aligned}$$
(3.19)
where, for example, \(\mathbf{U} \,^2=\mathbf{U} \,^2(0)=\lim _{\lambda \rightarrow 0}{} \mathbf{U} \,^2(\lambda )\), etc., as stated before.
Moreover, for the overall extinction probabilities \(a_n^{*}={\mathbb {P}}_n(\tau ^{*}<\infty )\)\( (n\ge N)\), we have that
(i)
If \(m_d\ge m_b\) (i.e. if \(B^{\prime }(1)\le 0 \)), then \(a_n^{*}=1\) for each \(n\ge N\).
 
(ii)
If \(m_d<m_b\le +\infty \) (i.e., if \(0<B^{\prime }(1)\le \infty \)), then, for any \(n\ge N\), \(a_n^{*}<1\) and the value of \(a_n^*\) is given by
$$\begin{aligned} a_n^{*}=\frac{\begin{vmatrix} 1-u_0&1-u_0^2&\ldots&1-u_0^{N-1}&u_0^n\\ \mathbf{1 }-{\hat{\mathbf{U }}}&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2&\ldots&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1}&{\hat{\mathbf{U }}}\,^{n} \end{vmatrix}}{\begin{vmatrix} 1-u_0&1-u_0^2&\ldots&1-u_0^{N-1}&1\\ \mathbf{1 }-{\hat{\mathbf{U }}}&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2&\ldots&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1}&\mathbf{1 } \end{vmatrix}}. \end{aligned}$$
(3.20)
 
Proof
Noting that \(a_{nk}^{*}={\mathbb {P}}\{\tau _k^{*}<\infty |Z_0^{*}=n\}=\lim _{\lambda \rightarrow 0}\lambda \phi _{nk}^{*}(\lambda )\), (3.19) immediately follows from (3.11) together with the fact that, for each \(k\ge 0\),
$$\begin{aligned} \lim _{\lambda \rightarrow 0}{} \mathbf{U} \,^k(\lambda )~=~\mathbf{U} \,^k(0)~\hat{=}~\mathbf{U} \,^k. \end{aligned}$$
We now show that (i) and (ii), including (3.20), are true. Indeed, since, for any \(n\ge N\), \(a_n^{*}=\lim _{\lambda \rightarrow 0}\lambda \sum _{k=0}^{N-1}\phi _{nk}^{*}(\lambda )\), by (3.17) in Corollary 3.1, we obtain
$$\begin{aligned} a_n^{*}=\frac{|\mathbf{1 }-\mathbf{U },\mathbf{1 } -\mathbf{U} \,^2,\ldots ,\mathbf{1 }-\mathbf{U} \,^{N-1},\mathbf{U} \,^n|}{|\mathbf{1 }-\mathbf{U },\mathbf{1 }-\mathbf{U} \,^2,\ldots ,\mathbf{1 }-\mathbf{U} \,^{N-1},\mathbf{1 }|}.\nonumber \\ \end{aligned}$$
(3.21)
By using the notation introduced in (3.5) we may write (3.21) as
$$\begin{aligned} a_n^{*}=\frac{\begin{vmatrix} 1-u_0&1-u_0^2&\ldots&1-u_0^{N-1}&u_0^n\\ \mathbf{1 }-{\hat{\mathbf{U }}}&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2&\ldots&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1}&{\hat{\mathbf{U }}}\,^{n} \end{vmatrix}}{\begin{vmatrix} 1-u_0&1-u_0^2&\ldots&1-u_0^{N-1}&1 \\ \mathbf{1 }-{\hat{\mathbf{U }}}&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2&\ldots&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1}&\mathbf{1 } \end{vmatrix}}. \end{aligned}$$
(3.22)
Now, if \(m_d\ge m_b\), then \(u_0=1\) and (3.22) becomes
$$\begin{aligned} a_n^{*}= & {} \frac{\begin{vmatrix} 0&0&\ldots&0&1\\ \mathbf{1 }-{\hat{\mathbf{U }}}&\mathbf{1 }-\hat{\mathbf{U }}\,^2&\ldots&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1}&{\hat{\mathbf{U }}}\,^{n} \end{vmatrix}}{\begin{vmatrix} 0&0&\ldots&0&1\\ \mathbf{1 }-{\hat{\mathbf{U }}}&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2&\ldots&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1}&\mathbf{1 } \end{vmatrix}} \nonumber \\= & {} \frac{\begin{vmatrix} \mathbf{1 }-{\hat{\mathbf{U }}}&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2&\ldots&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1} \end{vmatrix}}{\begin{vmatrix} \mathbf{1 }-{\hat{\mathbf{U }}}&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2&\ldots&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1}\end{vmatrix}}. \end{aligned}$$
(3.23)
We see that the numerator and the denominator of (3.23) are exactly the same and nonzero, and thus \(a_n^{*}=1\) for all \(n\ge N\). That they are the same is clear. That they are nonzero follows because, on factoring \(1-u_k\) out of the kth row and then applying elementary column operations,
$$\begin{aligned} | \mathbf{1 }-{\hat{\mathbf{U }}}, \mathbf{1 }-{\hat{\mathbf{U }}}\,^2,\ldots , \mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1}|=\prod _{k=1}^{N-1}(1-u_k)\cdot | \mathbf{1 },{\hat{\mathbf{U }}}, {\hat{\mathbf{U }}}\,^2,\ldots ,{\hat{\mathbf{U }}}\,^{N-2}|, \end{aligned}$$
which is nothing but \(\prod _{k=1}^{N-1}(1-u_k)\cdot \prod _{1\le i<j\le N-1}(u_j-u_i)\) and thus nonzero, since \(|u_k|<1~(1\le k\le N-1)\) and these roots are distinct. Thus (i) is proved.
On the other hand, if \(m_d<m_b\le +\infty \), then \(\lim _{\lambda \rightarrow 0}u_0(\lambda )=u_0<1\). Then, by (3.22), we get that
$$\begin{aligned} a_n^{*}=\frac{|\mathbf{1 }-\mathbf{U }, \mathbf{1 }-\mathbf{U} \,^2,\ldots , \mathbf{1 }-\mathbf{U} \,^{N-1},\mathbf{U} \,^{n}|}{|\mathbf{1 }-\mathbf{U }, \mathbf{1 }-\mathbf{U} \,^2,\ldots , \mathbf{1 }-\mathbf{U} \,^{N-1},~\mathbf{1 }~~|}. \end{aligned}$$
(3.24)
It is easy to see, by noting (3.24), that \(a_n^{*}<1\) \((n\ge N)\). Indeed, the difference between the denominator and the numerator of (3.24) is just
$$\begin{aligned} |\mathbf{1 }-\mathbf{U },\mathbf{1 }-\mathbf{U} \,^2,\ldots ,{ \mathbf{1} }-\mathbf{U} \,^{N-1},\mathbf{1 }-\mathbf{U} \,^n|, \end{aligned}$$
which is clearly nonzero and thus strictly greater than zero, and thus \(a_n^{*}<1\), with the value of \(a_n^{*}\) given by (3.24), i.e. by (3.20). This completes the proof. \(\square \)
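As a numerical illustration of Theorem 3.2(ii), the sketch below evaluates (3.19) for a supercritical set of illustrative rates (an assumption, not data from the paper) and compares the result with a Monte Carlo estimate based on the embedded jump chain; paths exceeding a large cap are treated as escaping to infinity, which introduces only a negligible bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Supercritical illustrative rates (not from the paper): N = 2, m_b = 4 > m_d = 2.5.
N, n = 2, 3
b = np.array([1.0, 0.5, -4.5, 2.0, 1.0])

# The N roots of B(s) = 0 strictly inside the unit disc (Lemma 2.1; the root 1 is excluded).
U = np.roots(b[::-1]); U = U[np.abs(U) < 1 - 1e-6]

V = np.vander(U, N, increasing=True)
a = np.empty(N)
for k in range(N):
    Vk = V.copy(); Vk[:, k] = U**n
    a[k] = (np.linalg.det(Vk) / np.linalg.det(V)).real   # extinction probabilities (3.19)
print("a*_{nk} from (3.19):", a, "  a*_n =", a.sum())

# Monte Carlo check on the embedded jump chain of the stopped queue.
jumps = [m - N for m in range(len(b)) if m != N]
probs = np.array([b[m] for m in range(len(b)) if m != N]); probs /= probs.sum()
hits, trials = np.zeros(N), 20000
for _ in range(trials):
    i = n
    while N <= i < 10_000:                    # paths rising above 10_000 are treated as escaped
        i += rng.choice(jumps, p=probs)
    if i < N:
        hits[i] += 1
print("Monte Carlo estimate:  ", hits / trials)
```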
By Theorem 3.2, we see that if \(m_d \ge m_b\), then the \(Q^*\)-process will definitely go to extinction. We are therefore interested in obtaining the mean extinction time, i.e., \({\mathbb {E}}(\tau ^{*}|Z_0^{*}(\omega )=n)\), where \(n\ge N\). We shall denote this important quantity as \({\mathbb {E}}_n(\tau ^{*})~(n\ge N).\)
Theorem 3.3
The overall mean extinction times \({\mathbb {E}}_n(\tau ^{*})~(n\ge N)\) of the \(Q^*\)-process are finite if and only if \(m_d>m_b\). More specifically, if \(m_d\le m_b\le +\infty \), then \({\mathbb {E}}_n(\tau ^{*})=+\infty ~(n\ge N)\), while if \(m_d>m_b\), then \({\mathbb {E}}_n(\tau ^{*})<+\infty ~(n\ge N)\) and these finite values are given by
$$\begin{aligned} {\mathbb {E}}_n(\tau ^{*})=\frac{1}{(m_d-m_b)\Delta } \begin{vmatrix} 1,&2,&3,&\ldots ,&N-1,&n\\ \mathbf{1 }-{\hat{\mathbf{U }}},&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2,&\mathbf{1 }-{\hat{\mathbf{U }}}\,^3,&\ldots ,&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1},&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{n} \end{vmatrix},\nonumber \\ \end{aligned}$$
(3.25)
where \(\Delta \) in (3.25) is given by
$$\begin{aligned} \Delta =\begin{vmatrix} \mathbf{1 },&\mathbf{U },&\mathbf{U} \,^2,&\ldots ,&\mathbf{U} \,^{N-2},&\mathbf{U} \,^{N-1} \end{vmatrix} \end{aligned}$$
and \({\mathbb {E}}_n(\cdot )\) denotes the mathematical expectation under \({\mathbb {P}}_n(\cdot )\), i.e. under the condition that the \(Q^*\)-process starts at state \(n\ge N\).
Proof
First if \(m_d<m_b\le +\infty \), then by Theorem 3.2, \(a_n^{*}<1\) and thus, trivially, \({\mathbb {E}}_n(\tau ^{*})=\infty ~(n\ge N)\). Hence we only need to consider the case that \(m_d\ge m_b\). For this case, applying the Tauberian Theorem yields
$$\begin{aligned} {\mathbb {E}}_n(\tau ^{*})=\int _0^{\infty }{\mathbb {P}}_n(\tau ^{*}>t)~dt=\lim _{\lambda \rightarrow 0}\frac{1-\lambda \sum _{k=0}^{N-1}\phi _{nk}^{*}(\lambda )}{\lambda }. \end{aligned}$$
(3.26)
Now, by (3.17) we have
$$\begin{aligned} \lambda \sum _{k=0}^{N-1}\phi _{nk}^{*}(\lambda )=\frac{\Delta _n(\lambda )}{\Delta (\lambda )}, \end{aligned}$$
(3.27)
where \(\Delta (\lambda )\) is given in (3.6) or (3.7) and
$$\begin{aligned} \Delta _n(\lambda )=\begin{vmatrix} \mathbf{1 }-\mathbf{U} (\lambda ),&\mathbf{1 } -\mathbf{U} \,^2(\lambda ),&\ldots ,&\mathbf{1 }-\mathbf{U} \,^{N-1}(\lambda ),&\mathbf{U} \,^n(\lambda ) \end{vmatrix}. \end{aligned}$$
(3.28)
Substituting (3.27)–(3.28) into (3.26) yields
$$\begin{aligned} {\mathbb {E}}_n(\tau ^{*})=\,\lim _{\lambda \rightarrow 0}\frac{1}{\lambda }\left( 1-\frac{\Delta _n(\lambda )}{\Delta (\lambda )}\right) =\,\lim _{\lambda \rightarrow 0}\frac{1}{\Delta (\lambda )}\frac{\Delta (\lambda )-\Delta _n(\lambda )}{\lambda }. \end{aligned}$$
(3.29)
By (3.6) and (3.7) and noting the notation \({\hat{\mathbf{U }}}(\lambda )\) introduced in (3.5), it is easy to get that
$$\begin{aligned} \frac{\Delta (\lambda )-\Delta _n(\lambda )}{\lambda }= \begin{vmatrix}\frac{1-u_0(\lambda )}{\lambda },&\frac{1-u_0^2(\lambda )}{\lambda },&\ldots ,&\frac{1-u_0^{N-1}(\lambda )}{\lambda },&\frac{1-u_0^n(\lambda )}{\lambda }\\ \mathbf{1 }-{\hat{\mathbf{U }}}(\lambda ),&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2(\lambda ),&\ldots ,&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1}(\lambda ),&\mathbf{1 } -{\hat{\mathbf{U }}}\,^n(\lambda ) \end{vmatrix},\qquad \end{aligned}$$
(3.30)
Letting \(\lambda \rightarrow 0\) in (3.30) and using (2.10) proved in Lemma 2.3 immediately yields the result that if \(m_d=m_b\) then
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{\Delta (\lambda )-\Delta _n(\lambda )}{\lambda }=+\infty , \end{aligned}$$
(3.31)
while if \(m_d>m_b\) then
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{\Delta (\lambda )-\Delta _n(\lambda )}{\lambda } {=}\frac{1}{m_d-m_b}\begin{vmatrix} 1,&2,&3,&\ldots ,&N-1,&n\\ \mathbf{1 }-{\hat{\mathbf{U }}},&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2,&\mathbf{1 }-{\hat{\mathbf{U }}}\,^3,&\ldots ,&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1},&\mathbf{1 }-{\hat{\mathbf{U }}}\,^n \end{vmatrix},\nonumber \\ \end{aligned}$$
(3.32)
where we recall that \(\mathbf{1 }-{\hat{\mathbf{U }}}\,^k\) (k is a positive integer) is nothing but
$$\begin{aligned} \mathbf{1 }-{\hat{\mathbf{U }}}\,^k=(1-u_1^k(0),1-u_2^k(0),\ldots ,1-u_{N-1}^k(0))^\mathrm{T}. \end{aligned}$$
On the other hand, it is trivial to see that \(\lim _{\lambda \rightarrow 0}\Delta (\lambda )=\Delta (0)\), which is finite and strictly positive. By (3.6) we see that, if we let \(\Delta =\Delta (0)\), then
$$\begin{aligned} \Delta =\begin{vmatrix} \mathbf{1 },&\mathbf{U },&\mathbf{U} \,^2,&\ldots ,&\mathbf{U} \,^{N-2},&\mathbf{U} \,^{N-1} \end{vmatrix}. \end{aligned}$$
(3.33)
Substituting (3.31)–(3.33) into (3.29) shows that we have proved our conclusion. \(\square \)
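Theorem 3.3 can be illustrated in the same spirit. For the subcritical illustrative rates used earlier (an assumption, not data from the paper), the sketch below evaluates (3.25) with \(N=2\) and compares it with a Monte Carlo estimate of the mean extinction time; the simulation uses the fact that every state \(i\ge N\) has holding rate \(-b_N\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Subcritical illustrative rates (not from the paper): N = 2, m_d = 5 > m_b = 2.
N, n = 2, 6
b = np.array([2.0, 1.0, -4.5, 1.0, 0.5])
m_d, m_b = 5.0, 2.0

# The single root u_1 of B(s) = 0 with |u_1| < 1 (the other root in the closed disc is 1).
u1 = [z.real for z in np.roots(b[::-1]) if abs(z) < 1 - 1e-6][0]

# Formula (3.25) for N = 2, with Delta = |1, U| and u_0 = 1.
Delta = np.linalg.det(np.array([[1.0, 1.0], [1.0, u1]]))
num = np.linalg.det(np.array([[1.0, float(n)], [1.0 - u1, 1.0 - u1**n]]))
print("E_n(tau*) from (3.25):", num / ((m_d - m_b) * Delta))

# Monte Carlo check: every state i >= N has holding rate -b_N = 4.5.
jumps = [m - N for m in range(len(b)) if m != N]
probs = np.array([b[m] for m in range(len(b)) if m != N]); probs /= probs.sum()
times = []
for _ in range(20000):
    i, t = n, 0.0
    while i >= N:
        t += rng.exponential(1.0 / 4.5)
        i += rng.choice(jumps, p=probs)
    times.append(t)
print("Monte Carlo estimate:    ", np.mean(times))
```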
Remark 3.2
Although somewhat similar, the notations \(\Delta ^{(i)}(\lambda )\) and \(\Delta _n(\lambda )\) defined in (3.8) and (3.28), respectively, are not the same. In particular, in the former the position \(k~(0\le k\le N-1)\) of the replaced column \(\mathbf{U} \,^{i}(\lambda )\) varies, while in the latter \(n\ge N\) and the column \(\mathbf{U} \,^{n}(\lambda )\) always occupies the last position.
By Theorem 3.3 we see that, if \(m_d < m_{b}\le \infty \), the mean time to overall extinction \({\mathbb {E}}_n(\tau ^{*})\) is infinite, which is not very informative. The main reason is that if \(m_d < m_{b}\le \infty \) then the overall extinction probability \(a^{*}_{n}\) is strictly less than 1. This prompts us to consider the expectation of \(\tau ^{*}\) conditional on \(\tau ^{*}<\infty \), which is much more informative.
In order to state our results more simply, we give the following non-standard definition: Suppose \(A=\{a_1, a_2, \ldots , a_n\}^\mathrm{T}\) and \(B=\{b_1, b_2, \ldots , b_n\}^\mathrm{T}\) are two n-dimensional column vectors. We define
$$\begin{aligned} A\otimes B= \{a_1\cdot b_1, a_2\cdot b_2, \ldots , a_n\cdot b_n\}^\mathrm{T} \end{aligned}$$
(3.34)
as a new n-dimensional column vector.
Theorem 3.4
If \(m_d<m_{b}\le \infty \), then the overall mean extinction time under the condition that \(\tau ^{*}< \infty \) and \(Z_0^{*}=n~(n\ge N)\) is given by
$$\begin{aligned} {\mathbb {E}}_n(\tau ^*|\tau ^* < \infty )=\frac{-1}{a_{n}^*}\left( \frac{d}{d\lambda } \left( \lambda \sum _{k=0}^{N-1}\phi _{nk}^*(\lambda )\right) \right) \Bigg |_{\lambda =0}, \end{aligned}$$
(3.35)
where \(a_n^*\) is given in (3.20) and \(\sum _{k=0}^{N-1}\lambda \phi _{nk}^*(\lambda )\) is given in (3.17). More specifically, for \(n\ge N\),
[Equation (3.36) is rendered as an image in the source (https://static-content.springer.com/image/art%3A10.1007%2Fs11134-020-09660-0/MediaObjects/11134_2020_9660_Equ63_HTML.png); it expresses \({\mathbb {E}}_n(\tau ^*|\tau ^* < \infty )\) explicitly in terms of \(\Delta \), \(\Delta _n\), \({\mathfrak {D}}_k\) and \({\mathfrak {F}}_k^{(n)}\) defined below.]
(3.36)
where \(\Delta \) is defined in (3.33) and \(\Delta _n\) is defined as below:
$$\begin{aligned} \Delta _n=\left| \mathbf{1 }-\mathbf{U },\mathbf{1 }-\mathbf{U} \,^2,\ldots , \mathbf{1 }-\mathbf{U} \,^{N-1},\mathbf{U} \,^{n}\right| \quad (n\ge N) \end{aligned}$$
(3.37)
and in fact, \(\Delta _n=\mathop {\lim }_{\lambda \rightarrow 0}\Delta _n(\lambda )\); here \(\Delta _n(\lambda )\) is defined in (3.28). Also, for \(1\le k\le N-1\),
$$\begin{aligned} {\mathfrak {D}}_k=\left| \mathbf{1 },~\mathbf{U },\ldots , ~\mathbf{U} \,^{k-1},~\mathbf{U} \,^{k-1}\otimes \mathbf{U} \,'(0), ~\mathbf{U} \,^{k+1},\ldots ,\mathbf{U} \,^{N-1}\right| , \end{aligned}$$
(3.38)
$$\begin{aligned} {\mathfrak {F}}_k^{(n)}= k\cdot \left| \mathbf{1 }-\mathbf{U} ,\ldots ,~\mathbf{1 }-\mathbf{U} \,^{k-1},~ -\mathbf{U} \,^{k-1}\otimes \mathbf{U} \,'(0),~\mathbf{1 }-\mathbf{U} \,^{k+1},\ldots ,~\mathbf{U} \,^n~\right| . \end{aligned}$$
(3.39)
Proof
First note that, for any \(n\ge N\),
$$\begin{aligned} \begin{aligned} {\mathbb {E}}_n(\tau ^*|\tau ^*<\infty )&=\int _0^{\infty } [1-{\mathbb {P}}\{\tau ^*\le t|\tau ^*<\infty ,Z_0^*=n\}]\mathrm{d}t. \end{aligned} \end{aligned}$$
(3.40)
But it is easy to see that
$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\{\tau ^*\le t|\tau ^*<\infty ,~Z_0^*=n\}=\frac{{\mathbb {P}}\{\tau ^*\le t|Z_0^*=n\}}{{\mathbb {P}}\{\tau ^*<\infty |Z_0^*=n\}} \end{aligned} \end{aligned}$$
and thus substituting the above into (3.40) yields
$$\begin{aligned} {\mathbb {E}}_n(\tau ^*|\tau ^*<\infty )=\int _0^{\infty }\left[ 1-\frac{{\mathbb {P}}\{\tau ^*\le t|Z_0^*=n\}}{{\mathbb {P}}\{\tau ^*<\infty |Z_0^*=n\}}\right] ~\mathrm{d}t. \end{aligned}$$
(3.41)
However, \({\mathbb {P}}\{\tau ^*\le t|Z_0^*=n\}\) is just \(\sum _{k=0}^{N-1}p_{nk}^*(t)\), where \(\{p_{nk}^*(t)\}\) is the \({Q^*}\)-function of the \(Q^*\)-process \(\{Z_t^*;~t\ge 0\}\), and its Laplace transform is \(\sum _{k=0}^{N-1}\phi _{nk}^*(\lambda )~(n\ge N)\), which is given in (3.17). It follows that the Laplace transform of \(\int _0^t[1-\frac{{\mathbb {P}}\{\tau ^*\le u|Z_0^*=n\}}{{\mathbb {P}}\{\tau ^*<\infty |Z_0^*=n\}}]~\mathrm{d}u\) is just
$$\begin{aligned} \frac{1}{\lambda }\left( \frac{1}{\lambda }-\frac{\sum _{k=0}^{N-1} \phi _{nk}^*(\lambda )}{{\mathbb {P}}\{\tau ^*<\infty |Z_0^*=n\}}\right) \end{aligned}$$
and thus by applying the Tauberian Theorem in (3.41) yields
$$\begin{aligned} \begin{aligned} {\mathbb {E}}_n&(\tau ^*|\tau ^*<\infty )=\lim _{\lambda \rightarrow 0}~\lambda \cdot \frac{1}{\lambda }\left( \frac{1}{\lambda }-\frac{\sum _{k=0}^{N-1} \phi _{nk}^*(\lambda )}{{\mathbb {P}}\{\tau ^*<\infty |Z_0^*=n\}}\right) \\&=\frac{1}{{\mathbb {P}}\{\tau ^*<\infty |Z_0^*=n\}}\lim _{\lambda \rightarrow 0}\frac{{\mathbb {P}}\{\tau ^*<\infty |Z_0^*=n\}-\lambda \sum _{k=0}^{N-1} \phi _{nk}^*(\lambda )}{\lambda }. \end{aligned} \end{aligned}$$
(3.42)
Noting that \({\mathbb {P}}\{\tau ^*<\infty |Z_0^*=n\}\) is nothing but \(a_{n}^* (n\ge N)\) which is given in (3.20) (since in our case \(m_d<m_b\)), the crucial thing is to calculate
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{a_{n}^*-\lambda \sum _{k=0}^{N-1} \phi _{nk}^*(\lambda )}{\lambda }. \end{aligned}$$
(3.43)
Applying L'Hôpital's rule in (3.43) shows that the limit in (3.43) is just
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\left[ -\frac{d}{d\lambda }\left( \lambda \sum _{k=0}^{N-1} \phi _{nk}^*(\lambda )\right) \right] . \end{aligned}$$
(3.44)
Hence (3.35) is proven. Moreover, substituting (3.17) into the above (3.44) yields that the limit in (3.44) is just
$$\begin{aligned} (-1)\cdot \lim _{\lambda \rightarrow 0}\left[ \frac{d}{d\lambda }\cdot \frac{|\mathbf{1 }-\mathbf{U} (\lambda ), \mathbf{1 }-\mathbf{U} \,^2(\lambda ),\ldots , \mathbf{1 }-\mathbf{U} \,^{N-1}(\lambda ),\mathbf{U} \,^{n}(\lambda )|}{|\mathbf{1 }-\mathbf{U} (\lambda ),\mathbf{1 }-\mathbf{U} \,^2(\lambda ),\ldots , \mathbf{1 }-\mathbf{U} \,^{N-1}(\lambda ),~~~~~\mathbf{1} |} \right] . \end{aligned}$$
(3.45)
Now, using (3.17), (3.33), (3.28), (3.37) and applying the rules for differentiating determinants, together with the notation introduced in (3.34), (3.38) and (3.39), we immediately obtain all the conclusions stated in the second part of this theorem. \(\square \)
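The conditional mean extinction time (3.35) can be evaluated numerically by combining (3.17) with a finite-difference approximation of the derivative at \(\lambda =0\). The sketch below assumes the supercritical illustrative rates used earlier and checks the result by Monte Carlo; the finite-difference step and the cap on escaping paths are additional assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Supercritical illustrative rates (not from the paper): N = 2, m_b = 4 > m_d = 2.5.
N, n = 2, 3
b = np.array([1.0, 0.5, -4.5, 2.0, 1.0])

def lam_sum_phi(lam):
    """lambda * sum_{k<N} phi*_{nk}(lambda), evaluated through (3.17) for N = 2."""
    c = b.copy(); c[N] -= lam
    U = np.roots(c[::-1]); U = U[np.abs(U) < 1 - 1e-6]
    num = np.linalg.det(np.array([[1 - U[0], U[0]**n], [1 - U[1], U[1]**n]]))
    den = np.linalg.det(np.array([[1 - U[0], 1.0], [1 - U[1], 1.0]]))
    return (num / den).real

a_n, eps = lam_sum_phi(0.0), 1e-5
cond_mean = -(lam_sum_phi(eps) - a_n) / eps / a_n    # (3.35) with a forward difference
print("a_n* =", a_n, "  E_n(tau*|tau*<inf) =", cond_mean)

# Monte Carlo check: average the absorption times of the paths that do get absorbed.
jumps = [m - N for m in range(len(b)) if m != N]
probs = np.array([b[m] for m in range(len(b)) if m != N]); probs /= probs.sum()
absorbed = []
for _ in range(20000):
    i, t = n, 0.0
    while N <= i < 10_000:                           # paths above 10_000 treated as escaped
        t += rng.exponential(1.0 / 4.5)              # holding rate -b_N = 4.5 at every i >= N
        i += rng.choice(jumps, p=probs)
    if i < N:
        absorbed.append(t)
print("Monte Carlo estimate:", np.mean(absorbed))
```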
Expressions (3.9)–(3.11) in Theorem 3.1 can also be used to obtain information about the mean function of the \(Q^*\)-process. Let \(m^*_i(t)={\mathbb {E}}_i(Z_t^{*}(\omega ))\) and denote its Laplace transform by \(\xi ^*_i(\lambda )\). Then, since \(m^*_i(t)=\sum _{k=1}^{\infty }kp^*_{ik}(t)\), we have
$$\begin{aligned} \xi ^*_i(\lambda )=\sum _{k=1}^{\infty }k\phi ^*_{ik}(\lambda ) =\left. \frac{\partial }{\partial s} \sum _{j=0}^{\infty }\phi ^*_{ij}(\lambda )s^j \, \right| _{s=1} \, , \end{aligned}$$
(3.46)
from which we can obtain the following result.
Theorem 3.5
Assume that \(m_b <\infty \). Then
$$\begin{aligned} \xi ^*_i(\lambda ) = \frac{i}{\lambda }+\frac{m_b-m_d}{\lambda ^2} \left( 1-(\Delta (\lambda ))^{-1}\cdot |\mathbf{1 }-\mathbf{U} (\lambda ), \mathbf{1 }-\mathbf{U} \,^2(\lambda ),\ldots ,\mathbf{1 } -\mathbf{U} \,^{N-1}(\lambda ), \mathbf{U} \,^i(\lambda )|\right) \nonumber \\ \end{aligned}$$
(3.47)
and
$$\begin{aligned} m^*_i(t)= i+(m_b-m_d)t-(m_b-m_d) \int _0^t\sum _{k=0}^{N-1}p_{ik}(\tau )\, d\tau , \end{aligned}$$
(3.48)
where \(\Delta (\lambda )\) in (3.47) is given by (3.6). Moreover, if \(m_b<m_d\) then, as \(t\rightarrow \infty \),
$$\begin{aligned} m^*_i(t)\downarrow i-\frac{1}{\Delta }\begin{vmatrix} 1,&2,&\ldots ,&N-1,\qquad ~~~ i\\ \mathbf{1 }-{\hat{\mathbf{U }}},&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2,&\ldots ,&\mathbf{1 }-{\hat{\mathbf{U }}}\,^{N-1},\quad \mathbf{1} \end{vmatrix}, \end{aligned}$$
(3.49)
where \(\Delta \) in (3.49) is again given in (3.33) or, equivalently,
$$\begin{aligned} \Delta =\begin{vmatrix} \mathbf{1 }-\mathbf{U },&\mathbf{1 }-\mathbf{U} \,^2,&\ldots ,&\mathbf{1 }-\mathbf{U} \,^{N-1},&\mathbf{1 } \end{vmatrix}, \end{aligned}$$
(3.50)
while if \(m_d<m_b<\infty \) then \(m^*_i(t)\uparrow \infty \). If \(m_b=m_d\) then \(m^*_i(t)\equiv i\). Finally, if \(m_b=\infty \) then \(m^*_i(t)=\infty \).
Proof
By recalling that \(B_{\lambda }(s)=B(s)-\lambda s^N\), we may rewrite (3.10) as
$$\begin{aligned} (B(s)-\lambda s^N)\sum _{j=0}^{\infty }\phi ^*_{ij}(\lambda )s^j =B(s)\sum _{k=0}^{N-1}\phi _{ik}^{*}(\lambda )s^k-s^{i+N}. \end{aligned}$$
(3.51)
Differentiating with respect to s, letting \(s=1\) and using (3.46) yields
$$\begin{aligned}&\xi ^*_i(\lambda )(B(1)-\lambda )+(B^{\,\prime }(1)-N\lambda ) \sum _{j=0}^{\infty }\phi ^*_{ij}(\lambda )\nonumber \\&\quad = B'(1)\sum _{k=0}^{N-1}\phi _{ik}^{*}(\lambda )+B(1)\sum _{k=1}^{N-1}k \phi _{ik}^{*}(\lambda )-(i+N). \end{aligned}$$
(3.52)
Since \(B(1)=0\), \(\sum _{j=0}^{\infty }\phi ^*_{ij}(\lambda )=1/\lambda \) and \(B^{\,\prime }(1)=m_b-m_d\), and using (3.17) together with the notation introduced in (3.6) we obtain (3.47) for the case \(m_b<\infty \). Taking the inverse Laplace transform of (3.47) gives (3.48). On writing (3.48) as
$$\begin{aligned} m^*_i(t)=i+(m_b-m_d)\int _0^t \left( 1-\sum _{k=0}^{N-1}p_{ik}(\tau )\right) \mathrm{d}\tau , \end{aligned}$$
(3.53)
we see that if \(m_d<m_b<\infty \) then \(m^*_i(t)\) is an increasing function of t, and if \(m_b<m_d\) then it is decreasing, while if \(m_b=m_d\) it remains at the constant value \(m^*_i(t)\equiv i\). Furthermore, if \(m_d<m_b<\infty \), then \(\lim _{t \rightarrow \infty }(1-\sum _{k=0}^{N-1}p_{ik}(t))>0\), and so \(\lim _{t \rightarrow \infty }\int _0^t (1-\sum _{k=0}^{N-1}p_{ik}(\tau ))\, d\tau =\infty \). The result for \(m_b<m_d\) is best derived from the Laplace transform (3.47):
$$\begin{aligned} \lim _{t \rightarrow \infty } m^*_i(t)=\lim _{\lambda \rightarrow 0}\lambda \xi ^*_i(\lambda )=i-(m_d-m_b)\cdot \lim _{\lambda \rightarrow 0} \frac{1-\lambda \sum _{k=0}^{N-1}\phi _{ik}^{*}(\lambda )}{\lambda }.\qquad \end{aligned}$$
(3.54)
However, we know that if \(m_d>m_b\), then \(\lim _{\lambda \rightarrow 0} (1-\lambda \sum _{k=0}^{N-1}\phi _{ik}^{*}(\lambda ))\lambda ^{-1}\) is just \({\mathbb {E}}_i(\tau ^{*})\) and thus is given by (see (3.25) in Theorem 3.3)
$$\begin{aligned} (m_d-m_b)^{-1}\cdot \Delta ^{-1}\cdot \begin{vmatrix} 1,&2,&3,&\ldots ,&N-1,&i\\ \mathbf{1 }-{\hat{\mathbf{U }}},&\mathbf{1 }-{\hat{\mathbf{U }}}\,^2,&\mathbf{1 }-{\hat{\mathbf{U }}}\,^3,&\ldots ,&\mathbf{1 } -{\hat{\mathbf{U }}}\,^{N-1},&\mathbf{1 }-{\hat{\mathbf{U }}}\,^i \end{vmatrix}. \end{aligned}$$
Substituting the above expression into (3.54) immediately yields the result for the case \(m_d>m_b\). Finally, letting \(m_b\rightarrow \infty \) in (3.53) establishes the fact that if \(m_b=\infty \) then \(m^*_i(t)=\infty \). \(\square \)
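Before moving on, we note that (3.48) also admits a direct simulation check. Reading the \(p_{ik}(\tau )\) in (3.48) as the transition functions of the \(Q^*\)-process, the integral term equals \({\mathbb {E}}_i[(t-\tau ^*)^+]\), so that (3.48) can be rewritten as \(m^*_i(t)=i+(m_b-m_d)\,{\mathbb {E}}_i[\min (t,\tau ^*)]\). The following minimal Monte Carlo sketch illustrates this identity, assuming the transition mechanism implicit in (3.51) (from a state \(i\ge N\) the stopped process jumps to \(i-N+j\) at rate \(b_j\), while the states \(0,\ldots ,N-1\) are absorbing); the value of N and the rates \(b_j\) used below are purely illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical rates: N = 2 and B(s) = 3 - 5 s^2 + 2 s^3, i.e. b_0 = 3, b_3 = 2,
# b_2 = -(b_0 + b_3), so B(1) = 0 and B'(1) = m_b - m_d = -4.
N = 2
b = {0: 3.0, 3: 2.0}
total_rate = sum(b.values())
sizes = np.array([j - N for j in b])                        # jump sizes i -> i + (j - N)
probs = np.array(list(b.values())) / total_rate
drift = sum(j * r for j, r in b.items()) - N * total_rate   # B'(1) = m_b - m_d

def path(i0, t_end):
    """One path of the stopped process: (Z_t at t_end, min(t_end, tau*))."""
    t, z = 0.0, i0
    while z >= N and t < t_end:
        t += rng.exponential(1.0 / total_rate)
        if t < t_end:
            z += rng.choice(sizes, p=probs)
    return z, min(t, t_end)

i0, t_end, n_paths = 6, 2.0, 20000
zs, taus = np.array([path(i0, t_end) for _ in range(n_paths)]).T
print("Monte Carlo estimate of m_i^*(t)        :", zs.mean())
print("i + (m_b - m_d) * E[min(t, tau*)] (3.48):", i0 + drift * taus.mean())
```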

4 The full bulk-arrival and bulk-service queues

After studying the properties of the \(Q^*\)-processes in the previous section, we are now ready to study the full queueing model with bulk arrivals, bulk service and general state-dependent control; this is the key section of this paper. Our q-matrix Q now takes the general form (1.1)–(1.3) with the feature that for all \(0\le k\le N-1\) we have \(h_{kk}\ne 0 \). It follows that \(H_k(s)\ne 0~(0\le k\le N-1)\). For convenience, we shall further assume that the q-matrix Q is irreducible. It follows that the (unique Feller minimal) Q-function and Q-resolvent are also irreducible. Denote the corresponding queueing process by \(\{X_t;~t\ge 0\}\).
This section consists of three sub-sections. In the first sub-section, we consider the structure of the Q-resolvent of the queueing process \(\{X_t;~t\ge 0\}\), which is the main tool for investigating properties of our full queueing model. Based on the conclusions obtained in the first sub-section, the ergodic property, and in particular the equilibrium distribution, always one of the most important topics for any queueing model, is fully discussed in the second sub-section. In the final sub-section, the queue length distribution, another important topic in queueing models, is investigated in detail. A further extremely important topic, the busy period distribution, will be investigated in the following Sects. 5 and 6.

4.1 Construction of the Q-resolvent

We now consider the structure of the Q-resolvent of the process \(\{X_t; t\ge 0\}\). By using Theorem 2.1 and similar methods to those used in the last section, we can immediately obtain the following important construction theorem.
Theorem 4.1
Suppose Q is given in (1.1)–(1.3) and let \(R(\lambda )=(r_{ij}(\lambda ); \, i,j \ge 0)\) be the Feller minimal Q-resolvent. Then
(i)
For any \(i\ge 0,~|s|\le 1,\)
$$\begin{aligned} \sum _{j=0}^{\infty }r_{ij}(\lambda )s^j=\frac{\sum _{k=0}^{N-1}r_{ik} (\lambda )\left( B(s)s^k-H_k(s)s^{N}\right) -s^{i+N}}{B(s)-\lambda s^N}. \end{aligned}$$
(4.1)
 
(ii)
The \(\{r_{ij}(\lambda );~i\ge 0,0\le j\le N-1\}\) in the numerator of the right-hand side of (4.1) take the form
$$\begin{aligned} r_{i0}(\lambda )=\frac{|\mathbf{U} \,^i(\lambda ), \mathbf {V} _1(\lambda ),\ldots ,\mathbf {V} _{N-1}(\lambda )|}{|\mathbf {V} _0(\lambda ),\mathbf {V} _1(\lambda ),\ldots , \mathbf {V} _{N-1}(\lambda )|} \end{aligned}$$
(4.2)
and, for \(1\le k\le N-2\),
$$\begin{aligned} r_{ik}(\lambda )=\frac{|\mathbf {V} _0(\lambda ), \mathbf {V} _1(\lambda ),\ldots ,\mathbf {V} _{k-1}(\lambda ), \mathbf{U} \,^i(\lambda ),\mathbf {V} _{k+1}(\lambda ),\ldots , \mathbf {V} _{N-1}(\lambda )|}{|\mathbf {V} _0(\lambda ),\mathbf {V} _1(\lambda ),\ldots , \mathbf {V} _{k-1}(\lambda ),\mathbf {V} _k(\lambda ), \mathbf {V} _{k+1}(\lambda ),\ldots ,\mathbf {V} _{N-1}(\lambda )|} \end{aligned}$$
(4.3)
and
$$\begin{aligned} r_{i\,N-1}(\lambda )=\frac{|\mathbf {V} _0(\lambda ), \mathbf {V} _1(\lambda ),\ldots ,\mathbf {V} _{N-2}(\lambda ), ~~\mathbf{U} \,^i(\lambda )|}{~|\mathbf {V} _0(\lambda ),\mathbf {V} _1(\lambda ), \ldots ,\mathbf {V} _{N-2}(\lambda ),\mathbf {V} _{N-1}(\lambda )|}, \end{aligned}$$
(4.4)
where, in (4.2)–(4.4), the N-dimensional column vector \(\mathbf {V} _{k}(\lambda )~(0\le k\le N-1)\) is defined as
$$\begin{aligned} \mathbf {V} _{k}(\lambda )=\lambda \mathbf{U} \,^k(\lambda ) -\mathbf {H} _k({\mathbf{U }}(\lambda ))~(0\le k\le N-1) \end{aligned}$$
(4.5)
and \(\mathbf{U} \,^k(\lambda )\) is given in (3.2), and, for \(0\le k\le N-1\),
$$\begin{aligned} \mathbf {H} _k(\mathbf{U} (\lambda ))=(H_k(u_0(\lambda )), H_k(u_1(\lambda )),\ldots ,H_k(u_{N-1}(\lambda )))^\mathrm{T}. \end{aligned}$$
(4.6)
Of course, \(\mathbf{V }_0(\lambda )=\lambda \mathbf{1 }-\mathbf{H }_0({\mathbf{U }}(\lambda ))\) since \(\mathbf{U }^{\,0}(\lambda )={\mathbf{1 }}\).
 
Proof
(4.1) is a rewriting of (2.24) in Theorem 2.1. For the other parts, just note that again, since \(B(s)-\lambda s^N=0\) has N roots \(\{u_0(\lambda ), u_1(\lambda ),\ldots ,u_{N-1}(\lambda )\}\) in the open unit disc \(\{s;~|s|<1\}\) and the left-hand side of (4.1) is finite at these roots, the numerator of the right-hand side of (4.1) must vanish at these N roots. It follows that, for \(0\le l\le N-1\) and \(i\ge 0\),
$$\begin{aligned} \sum _{k=0}^{N-1}r_{ik}(\lambda )[B(u_l(\lambda ))(u_l(\lambda ))^k-H_k(u_l(\lambda )) (u_l(\lambda ))^N]=(u_l(\lambda ))^{i+N}. \end{aligned}$$
Now, using the fact that \(B(u_l(\lambda ))=\lambda (u_l(\lambda ))^N~(0\le l\le N-1),\) we obtain that, for all \(i\ge 0\),
$$\begin{aligned} \sum _{k=0}^{N-1}r_{ik}(\lambda )[\lambda u_l^k(\lambda )-H_k(u_l(\lambda ))]=(u_l(\lambda ))^i~(0\le l\le N-1). \end{aligned}$$
(4.7)
By solving the linear equations (4.7) and noting the notation (4.5), we immediately obtain (4.2)–(4.4). This completes the proof of Theorem 4.1. \(\square \)
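For numerical work it is often easier to obtain the boundary quantities \(\{r_{ik}(\lambda );~0\le k\le N-1\}\) by solving the linear system (4.7) directly, rather than by expanding the determinants (4.2)–(4.4). The following NumPy sketch does this for a small hypothetical example, assuming for simplicity that B(s) and the \(H_k(s)\) are polynomials with \(B(1)=H_k(1)=0\); the chosen coefficients and \(N=2\) are illustrative only and not taken from the paper.

```python
import numpy as np

# Hypothetical example: N = 2, B(s) = 3 - 5 s^2 + 2 s^3 (so B(1) = 0, B'(1) = -4),
# H_0(s) = -2 + s + s^3 and H_1(s) = 1 - 3 s + 2 s^2 (both vanish at s = 1).
N = 2
B = np.polynomial.Polynomial([3.0, 0.0, -5.0, 2.0])
H = [np.polynomial.Polynomial([-2.0, 1.0, 0.0, 1.0]),
     np.polynomial.Polynomial([1.0, -3.0, 2.0])]

def roots_in_unit_disc(lam):
    """The N roots of B(s) - lam * s^N = 0 inside the open unit disc."""
    r = (B - np.polynomial.Polynomial([0.0] * N + [lam])).roots()
    r = r[np.abs(r) < 1.0]
    assert len(r) == N
    return r

def boundary_resolvent(i, lam):
    """Solve the linear system (4.7) for (r_{i0}(lam), ..., r_{i,N-1}(lam))."""
    u = roots_in_unit_disc(lam)
    M = np.array([[lam * ul ** k - H[k](ul) for k in range(N)] for ul in u])
    return np.real_if_close(np.linalg.solve(M, u ** i))

print("r_{3,k}(0.5), k = 0, ..., N-1:", boundary_resolvent(3, 0.5))
```

The N roots inside the unit disc are found by ordinary polynomial root-finding, and the resulting \(r_{ik}(\lambda )\) come out real, as they should.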

4.2 Recurrence, ergodic and equilibrium properties

We are now ready to consider the ergodic and equilibrium behaviour of the full queueing model. Recall that we have assumed that the q-matrix Q and thus the Feller minimal Q-process is irreducible. By applying the conclusions obtained in the previous Sect. 3, we immediately obtain the following important theorem.
Theorem 4.2
The irreducible queueing process determined by the Q given in (1.1)–(1.3) is recurrent if and only if \(m_d \ge m_b\). Moreover, it is positive-recurrent if and only if \(m_d> m_b\) and
$$\begin{aligned} H_k^{\prime }(1)<+\infty \quad ( 0\le k\le N-1), \end{aligned}$$
or, equivalently,
$$\begin{aligned} \sum _{j=k+1}^{\infty }j\cdot h_{kj}<+\infty \qquad ( 0\le k\le N-1). \end{aligned}$$
(4.8)
Proof
We use a probabilistic approach to prove this theorem. Just consider the relationship between the Q-process and the \(Q^*\)-process discussed in the previous Sect. 3. It is obvious that the Q-process is recurrent if and only if, for the \(Q^*\)-process, the overall extinction probability is 1, and the latter is equivalent to the fact that \(m_d \ge m_b\). Next, it is also easy to see that the Q-process is positive recurrent if and only if, for the \(Q^*\)-process, the mean extinction time is finite and the mean return time to the states \(\{N,N+1,\ldots \}\) from any state \(k~(0\le k\le N-1)\) is finite. However, it is easily seen that the former is equivalent to \(m_d > m_b\), and the latter is equivalent to (4.8) holding true. \(\square \)
Under the positive recurrence conditions, we know that there exists a unique equilibrium distribution. We are now interested in obtaining a closed form for this unique equilibrium distribution. Interestingly, the closed form can be easily obtained.
Theorem 4.3
Under the conditions that \(m_d>m_b\) and \(H'_k(1)<+\infty ~(0\le k \le N-1)\), the limiting distribution \(\{\pi _j;~j\ge 0\}\) of the Q-process \(\{X_t(\omega );~t \ge 0\}\) exists. Moreover, the generating function of this unique limiting distribution, \(\Pi (s)=\sum _{j=0}^{\infty }\pi _js^j\), is given by
$$\begin{aligned} \Pi (s)=\sum _{j=0}^{\infty }\pi _js^j=\sum _{k=0}^{N-1}\pi _k\left( s^k-\frac{H_k(s)}{B(s)}s^N\right) , \end{aligned}$$
(4.9)
where \(\pi _k~(0\le k\le N-1)\) on the right-hand side of (4.9) are given by
$$\begin{aligned} \pi _0=\frac{|\mathbf{H }_1({\hat{\mathbf{U }}}),\mathbf{H }_2({\hat{\mathbf{U }}}) ,\ldots ,\mathbf{H }_{N-1}({\hat{\mathbf{U }}})|}{\kappa } \end{aligned}$$
(4.10)
and, for \(1\le k\le N-2\),
$$\begin{aligned} \pi _k=(-1)^k\cdot \frac{|\mathbf{H }_0({\hat{\mathbf{U }}}),\ldots , \mathbf{H }_{k-1}({\hat{\mathbf{U }}}),\mathbf{H }_{k+1}({\hat{\mathbf{U }}}),\ldots , \mathbf{H }_{N-1}({\hat{\mathbf{U }}})|}{\kappa } \end{aligned}$$
(4.11)
and
$$\begin{aligned} \pi _{N-1}=(-1)^{N-1}\cdot \frac{|\mathbf{H }_0({\hat{\mathbf{U }}}), \mathbf{H }_{1}({\hat{\mathbf{U }}}),\ldots ,\mathbf{H }_{N-2}({\hat{\mathbf{U }}})|}{\kappa }, \end{aligned}$$
(4.12)
and the \(\kappa \) in (4.10)–(4.12) is given by
$$\begin{aligned} \kappa =\begin{vmatrix}1,&1,&\ldots ,&1\\ \mathbf{H }_0({\hat{\mathbf{U }}}),&\mathbf{H }_1({\hat{\mathbf{U }}}),&\ldots ,&\mathbf{H }_{N-1}({\hat{\mathbf{U }}})\end{vmatrix} +\frac{1}{m_d-m_b}\begin{vmatrix}H'_0(1),&H'_1(1),&\ldots ,&H'_{N-1}(1)\\ \mathbf{H }_0({\hat{\mathbf{U }}}),&\mathbf{H }_1({\hat{\mathbf{U }}}),&\ldots ,&\mathbf{H }_{N-1}({\hat{\mathbf{U }}})\end{vmatrix}. \end{aligned}$$
(4.13)
Proof
Letting \(i=0\) in (4.1) and multiplying by \(\lambda >0\) yields
$$\begin{aligned} \sum _{j=0}^{\infty }\lambda r_{0j}(\lambda )s^j=\frac{\sum _{k=0}^{N-1}\lambda r_{0k}(\lambda )(B(s)s^k-H_k(s)s^N)-\lambda s^N}{B(s)-\lambda s^N}. \end{aligned}$$
(4.14)
Letting \(\lambda \rightarrow 0\) in the above (4.14), noting that \(\lim \limits _{\lambda \rightarrow 0}\lambda r_{0j}(\lambda )=\pi _j\), and using the Dominated Convergence Theorem yields
$$\begin{aligned} \Pi (s)=\sum _{j=0}^{\infty }\pi _js^j=\frac{\sum _{k=0}^{N-1}\pi _k \left( B(s)s^k-H_k(s)s^N\right) }{B(s)}, \end{aligned}$$
which is just (4.9).
To determine the form of \(\pi _k\) for \(0\le k\le N-1\), we turn our attention to the proven expressions (4.2)–(4.4). Letting \(i=0\) and then multiplying both sides of (4.2) by \(\lambda >0\) yields
$$\begin{aligned} \lambda r_{00}(\lambda ) =\frac{\begin{vmatrix} 1,&\lambda u_0(\lambda )-H_1(u_0(\lambda )),&\ldots ,&\lambda u_0^{N-1}(\lambda )-H_{N-1}(u_0(\lambda ))\\ \mathbf{1 },&\lambda {\hat{\mathbf{U }}}(\lambda )-\mathbf{H }_1 ({\hat{\mathbf{U }}}(\lambda )),&\ldots ,&\lambda {\hat{\mathbf{U }}}^{N-1}(\lambda ) -\mathbf{H }_{N-1}({\hat{\mathbf{U }}}(\lambda ))\end{vmatrix}}{\begin{vmatrix} 1 - \frac{1}{\lambda }{H_0(u_0(\lambda ))},&u_0(\lambda ) -\frac{1}{\lambda }{H_1(u_0(\lambda ))},&\ldots ,&u_0^{N-1}(\lambda ) -\frac{1}{\lambda }{H_{N-1}(u_0(\lambda ))}\\ \\ \lambda \mathbf{1 }-\mathbf{H }_0({\hat{\mathbf{U }}}(\lambda )),&\lambda {\hat{\mathbf{U }}}(\lambda )-\mathbf{H }_1({\hat{\mathbf{U }}}(\lambda )),&\ldots ,&\lambda {\hat{\mathbf{U }}}^{N-1}(\lambda )-\mathbf{H }_{N-1} ({\hat{\mathbf{U }}}(\lambda ))\end{vmatrix}}. \end{aligned}$$
(4.15)
Letting \(\lambda \rightarrow 0\) in (4.15), noting that \(\lim \limits _{\lambda \rightarrow 0}\left( u_0^k(\lambda )-\frac{1}{\lambda } H_k(u_0(\lambda ))\right) =1+\frac{H_k'(1)}{m_d-m_b}~(0\le k \le N-1)\) and \(\lim \limits _{\lambda \rightarrow 0}\left( \lambda u_0^k(\lambda )- H_k(u_0(\lambda ))\right) =0-H_k(1)=0~ (1\le k\le N-1)\) then yields
$$\begin{aligned} \lim \limits _{\lambda \rightarrow 0}\lambda r_{00}(\lambda ) =&\frac{\begin{vmatrix} 1,&0,&\ldots ,&0\\ \mathbf{1 },&-\mathbf{H }_1({\hat{\mathbf{U }}}),&\ldots ,&-\mathbf{H }_{N-1}({\hat{\mathbf{U }}})\end{vmatrix}}{\begin{vmatrix} 1+\frac{H'_0(1)}{m_d-m_b},&1+\frac{H'_1(1)}{m_d-m_b},&\ldots ,&1 +\frac{H'_{N-1}(1)}{m_d-m_b}\\ -\mathbf{H }_0({\hat{\mathbf{U }}}),&-\mathbf{H }_1({\hat{\mathbf{U }}}),&\ldots ,&-\mathbf{H }_{N-1}({\hat{\mathbf{U }}})\end{vmatrix}}\\ =&\frac{\begin{vmatrix} 1,&0,&\ldots ,&0\\ \mathbf{1 }&\mathbf{H }_1({\hat{\mathbf{U }}}),&\ldots ,&\mathbf{H }_{N-1} ({\hat{\mathbf{U }}})\end{vmatrix}}{\kappa }, \end{aligned}$$
where \(\kappa \) is given in (4.13); this limit is just \(\pi _0\), which shows that (4.10) is true.
Similarly, for \(1\le k \le N-1\), letting \(i=0\) and multiplying both sides of (4.3) or (4.4) by \(\lambda >0\), together with some similar algebra, it is quite easy to show that, on letting \(\lambda \rightarrow 0\), (4.11) and (4.12) hold. This finishes the proof. \(\square \)
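To illustrate Theorem 4.3 numerically, the following sketch evaluates (4.10)–(4.13) for the same kind of hypothetical data as above (a polynomial B(s) with \(m_d>m_b\) and polynomials \(H_k(s)\) vanishing at \(s=1\)). As a check, letting \(s\rightarrow 1\) in (4.9) shows that the normalisation \(\Pi (1)=1\) amounts to \(\sum _{k=0}^{N-1}\pi _k\bigl (1+\frac{H_k'(1)}{m_d-m_b}\bigr )=1\), which the last line verifies; all numerical values are illustrative.

```python
import numpy as np

# Hypothetical data with m_d > m_b: N = 2, B(s) = 3 - 5 s^2 + 2 s^3,
# H_0(s) = -2 + s + s^3, H_1(s) = 1 - 3 s + 2 s^2 (all vanish at s = 1).
N = 2
B = np.polynomial.Polynomial([3.0, 0.0, -5.0, 2.0])
H = [np.polynomial.Polynomial([-2.0, 1.0, 0.0, 1.0]),
     np.polynomial.Polynomial([1.0, -3.0, 2.0])]
md_minus_mb = -B.deriv()(1.0)                        # m_d - m_b = -B'(1) = 4 > 0

# \hat{U}: the N - 1 roots of B(s) = 0 strictly inside the unit disc (u_0 = 1 excluded).
r = B.roots()
U_hat = r[np.abs(r) < 1.0 - 1e-9]
assert len(U_hat) == N - 1

HU = np.array([[H[k](u) for k in range(N)] for u in U_hat])  # row per \hat{u}_l, column per k
Hp1 = np.array([H[k].deriv()(1.0) for k in range(N)])        # H_k'(1)

kappa = (np.linalg.det(np.vstack([np.ones(N), HU]))
         + np.linalg.det(np.vstack([Hp1, HU])) / md_minus_mb)            # (4.13)
pi = np.real_if_close(np.array(
    [(-1.0) ** k * np.linalg.det(HU[:, [j for j in range(N) if j != k]])
     for k in range(N)]) / kappa)                                        # (4.10)-(4.12)

print("pi_0, ..., pi_{N-1}:", pi)
print("normalisation check:", pi @ (1.0 + Hp1 / md_minus_mb))            # should equal 1
```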
The particularly convenient forms presented in Theorem 4.3 above make it easy to obtain other important queueing quantities associated with the equilibrium distribution \(\Pi =\{\pi _{j};~j\ge 0\}\). For example, we may obtain the mean equilibrium queue size, as the following corollary shows.
Corollary 4.1
The expectation of the equilibrium queue size distribution \(\Pi \) is given by
$$\begin{aligned} {\mathbb {E}}(\Pi )=\sum _{k=1}^{N-1}k\pi _k+\frac{1}{m_d-m_b}\sum _{k=0}^{N-1} \pi _k\left[ \frac{H_k'(1)\cdot B''(1)}{2(m_d-m_b)}+NH_k'(1)+\frac{1}{2}H''_k(1)\right] \nonumber \\ \end{aligned}$$
(4.16)
which is finite if and only if (in addition to \(m_d>m_b\) ) \(B^{\,{\prime \prime }}(1)<+\infty \) and \(H_k^{\,{\prime \prime }}(1)<+\infty ~(0\le k\le N-1).\)
Note: \(H''_k(1)<+\infty \) implies that \(H'_k(1)<+\infty \).
Proof
By using (4.9), we can get that
$$\begin{aligned} \Pi '(s)=\sum _{k=1}^{N-1}k\pi _ks^{k-1}-\sum _{k=0}^{N-1} \pi _k\left[ \frac{H_k(s)}{B(s)}Ns^{N-1}+s^N\frac{B(s)H_k'(s)-H_k(s)B'(s)}{B^2(s)}\right] .\nonumber \\ \end{aligned}$$
(4.17)
In order to consider \({\mathbb {E}}(\Pi )\), we first consider the following two limits:
$$\begin{aligned} \lim \limits _{s\rightarrow 1}\frac{H_k(s)}{B(s)} ~~\text { and }~~ \lim \limits _{s\rightarrow 1}\frac{B(s)H'_k(s)-H_k(s)B'(s)}{B^2(s)}. \end{aligned}$$
Clearly, the former is just \(\frac{H'_k(1)}{B'(1)}\), whose finiteness is guaranteed by our conditions that \(m_d>m_b\) and \(H_k'(1)< \infty ~(0\le k \le N-1),\) while the latter, by applying L'Hospital's rule twice and after some easy algebra, is just
$$\begin{aligned} \lim \limits _{s\rightarrow 1}\frac{B(s)H''_k (s)-H_k(s)B''(s)}{2B(s)B'(s)}=\frac{H''_k(1)}{2B'(1)} -\frac{H'_k(1)B''(1)}{2(B'(1))^2} \end{aligned}$$
which is finite if and only if \(B''(1)<+\infty \) and \(H''_k(1)<+\infty \).
Now, letting \(s\uparrow 1\) in (4.17) immediately yields
$$\begin{aligned} {\mathbb {E}}(\Pi )=\sum _{k=1}^{N-1}k\pi _k-\sum _{k=0}^{N-1}\pi _k\left[ N\cdot \frac{H_k'(1)}{B'(1)}+\frac{H''_k(1)}{2B'(1)}-\frac{H_k'(1)B''(1)}{2(B'(1))^2}\right] , \end{aligned}$$
which can be rewritten as (4.16) by noting that \(B'(1)=m_b-m_d<0\). \(\square \)
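A short numerical illustration of (4.16), for the same hypothetical data as above (which satisfy \(B''(1)<\infty \) and \(H_k''(1)<\infty \), so \({\mathbb {E}}(\Pi )\) is finite):

```python
import numpy as np

# Same hypothetical data as above; compute E(Pi) from (4.16).
N = 2
B = np.polynomial.Polynomial([3.0, 0.0, -5.0, 2.0])
H = [np.polynomial.Polynomial([-2.0, 1.0, 0.0, 1.0]),
     np.polynomial.Polynomial([1.0, -3.0, 2.0])]
d = -B.deriv()(1.0)                                  # m_d - m_b
B2 = B.deriv(2)(1.0)                                 # B''(1)
H1p = np.array([h.deriv()(1.0) for h in H])          # H_k'(1)
H2p = np.array([h.deriv(2)(1.0) for h in H])         # H_k''(1)

# pi_0, ..., pi_{N-1} exactly as in the sketch following Theorem 4.3.
U_hat = [u for u in B.roots() if abs(u) < 1 - 1e-9]
HU = np.array([[H[k](u) for k in range(N)] for u in U_hat])
kappa = (np.linalg.det(np.vstack([np.ones(N), HU]))
         + np.linalg.det(np.vstack([H1p, HU])) / d)
pi = np.real_if_close(np.array(
    [(-1.0) ** k * np.linalg.det(HU[:, [j for j in range(N) if j != k]])
     for k in range(N)]) / kappa)

# (4.16): E(Pi) = sum_{k>=1} k pi_k
#         + (1/(m_d-m_b)) sum_k pi_k [H_k'(1)B''(1)/(2(m_d-m_b)) + N H_k'(1) + H_k''(1)/2].
mean_queue = np.arange(N) @ pi + (pi @ (H1p * B2 / (2 * d) + N * H1p + H2p / 2)) / d
print("E(Pi) =", mean_queue)
```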
The higher moments, including the variance of \(\Pi \), can be similarly obtained. But we shall omit the details here.

4.3 Queue length distributions

We now turn to another important topic, the queue length distribution, which is usually of considerable interest. The construction Theorem 4.1 presented in sub-section 4.1 is also particularly convenient for analysing the related properties. We again assume that, for all \(k~(0\le k\le N-1)\), \(h_{kk}\ne 0\) and thus \(H_k(s)\ne 0~(0\le k\le N-1).\)
Now let \(m_i(t)\) be the mean queue length of the queueing process \(\{X_t\}\) at time \(t>0\), starting from \(X_0=i\). Let \(\xi _i(\lambda )\) denote the Laplace transform of \(m_i(t)\).
Theorem 4.4
The Laplace transform of the mean queue length function, \(\xi _i(\lambda )\), starting from \(X_0=i\ge 0\), is given by
$$\begin{aligned} \xi _i(\lambda )=\frac{i}{\lambda }+(m_b-m_d)\frac{1-\sum _{k=0}^{N-1}\lambda r_{ik}(\lambda )}{\lambda ^2}+\frac{1}{\lambda }\sum _{k=0}^{N-1}r_{ik}(\lambda )H'_k(1), \end{aligned}$$
(4.18)
where the quantities \(\{r_{ik}(\lambda );~0\le k\le N-1\}\) are given in (4.2)–(4.4). Moreover, the mean queue length function at time \(t\ge 0\) is given by
$$\begin{aligned} m_i(t)=i+(m_b-m_d)\int _{0}^{t}\left( 1-\sum _{k=0}^{N-1}p_{ik}(\tau )\right) d\tau +\sum _{k=0}^{N-1}H_k'(1)\cdot \int _{0}^{t}p_{ik}(\tau )d\tau ,\nonumber \\ \end{aligned}$$
(4.19)
where \(\{p_{ik}(t);~0\le k\le N-1\}\) can be obtained by inverting the known Laplace transforms \(\{r_{ik}(\lambda );~0\le k \le N-1\}\) given in (4.2)–(4.4).
Proof
We first rewrite (4.1) as
$$\begin{aligned} \left( B(s)-\lambda s^N\right) \sum _{j=0}^{\infty }r_{ij}(\lambda )s^j =\sum _{k=0}^{N-1}r_{ik}(\lambda )s^k\left( B(s)-H_k(s)s^{N-k}\right) -s^{i+N}. \end{aligned}$$
Differentiating both sides of the above equation with respect to s yields
$$\begin{aligned} \begin{aligned}&\left( B'(s)-\lambda Ns^{N-1}\right) \sum _{j=0}^{\infty }r_{ij}(\lambda )s^j +\left( B(s)-\lambda s^N\right) \sum _{j=1}^{\infty }jr_{ij} (\lambda )s^{j-1}\\&\quad =\sum _{k=1}^{N-1}r_{ik}(\lambda )s^{k-1}\cdot k(B(s) -H_k(s)s^{N-k})\\&\quad \quad +\sum _{k=0}^{N-1}r_{ik}(\lambda )s^k(B'(s) -H'_k(s)s^{N-k}-H_k(s)(N-k)s^{N-k-1})-(i+N)s^{i+N-1}. \end{aligned} \end{aligned}$$
(4.20)
Letting \(s\uparrow 1\) in (4.20) and noting that \(B(1)=0\) and \(H_k(1)=0~(0\le k\le N-1)\), together with \(\sum _{j=0}^{\infty } r_{ij}(\lambda )=\frac{1}{\lambda }\), immediately yields
$$\begin{aligned} \left( \frac{B'(1)}{\lambda }-N\right) -\lambda \sum _{j=1}^{\infty }jr_{ij} (\lambda )=\sum _{k=0}^{N-1}r_{ik}(\lambda )[B'(1)-H_k'(1)]-(i+N), \end{aligned}$$
which, by denoting \(\xi _i(\lambda )=\sum _{j=1}^{\infty }jr_{ij}(\lambda )\), can be rewritten as
$$\begin{aligned} \lambda \xi _i(\lambda )=\frac{B'(1)}{\lambda }+i+\sum _{k=0}^{N-1}r_{ik}(\lambda ) \left( H'_k(1)-B'(1)\right) . \end{aligned}$$
(4.18) now follows by noting that \(B'(1)=m_b-m_d\). Taking the inverse Laplace transform of (4.18) immediately yields (4.19). \(\square \)
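Formula (4.18) is easy to evaluate once the \(r_{ik}(\lambda )\) have been computed from the linear system (4.7). The sketch below does so for the illustrative data used earlier; as a rough Abelian check, \(\lambda \xi _i(\lambda )\) should approach the equilibrium mean \({\mathbb {E}}(\Pi )\) of Corollary 4.1 as \(\lambda \rightarrow 0\), since that hypothetical example is positive recurrent with finite \({\mathbb {E}}(\Pi )\).

```python
import numpy as np

# Hypothetical data as before: N = 2, B(s) = 3 - 5 s^2 + 2 s^3,
# H_0(s) = -2 + s + s^3, H_1(s) = 1 - 3 s + 2 s^2.
N = 2
B = np.polynomial.Polynomial([3.0, 0.0, -5.0, 2.0])
H = [np.polynomial.Polynomial([-2.0, 1.0, 0.0, 1.0]),
     np.polynomial.Polynomial([1.0, -3.0, 2.0])]
mb_minus_md = B.deriv()(1.0)                         # B'(1) = m_b - m_d = -4
H1p = np.array([h.deriv()(1.0) for h in H])          # H_k'(1)

def r_boundary(i, lam):
    """(r_{i0}(lam), ..., r_{i,N-1}(lam)) from the linear system (4.7)."""
    u = (B - np.polynomial.Polynomial([0.0] * N + [lam])).roots()
    u = u[np.abs(u) < 1.0]
    M = np.array([[lam * ul ** k - H[k](ul) for k in range(N)] for ul in u])
    return np.real_if_close(np.linalg.solve(M, u ** i))

def xi(i, lam):
    """The Laplace transform (4.18) of the mean queue length, started from X_0 = i."""
    r = r_boundary(i, lam)
    return (i / lam
            + mb_minus_md * (1.0 - lam * r.sum()) / lam ** 2
            + (r @ H1p) / lam)

print("xi_3(0.5)                  =", xi(3, 0.5))
print("lam * xi_3(lam), lam = 1e-3:", 1e-3 * xi(3, 1e-3))   # roughly E(Pi) of Corollary 4.1
```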
Remark 4.1
By (4.19), it is easily seen that if \(m_b\ge m_d\) and \(H_k'(1)\ge 0~(\forall \, 1\le k\le N-1)\), then \(m_i(t)\) is increasing in \(t>0\).
The higher moments of the queueing length can be similarly given. As an example, we consider the second moment as follows.
Theorem 4.5
The Laplace transform of the second moment of the queue length function, denoted by \(\eta _i(\lambda )\hat{=}\sum _{j=2}^{\infty }j(j-1)r_{ij}(\lambda )\), starting from state \(X_0=i\ge 0\), is given by
$$\begin{aligned} \begin{aligned} \lambda \eta _i(\lambda )&=2(B'(1)-\lambda N)\xi _i(\lambda )+\frac{B^{{\prime \prime }}(1)}{\lambda }+2\sum _{k=1}^{N-1} kr_{ik}(\lambda )(H'_k(1)-B'(1))\\&\quad +\sum _{k=0}^{N-1}r_{ik}(\lambda )[H^{{\prime \prime }}_{k}(1)+2(N-k)H'_k(1) -B^{{\prime \prime }}(1)]+i(2N+i-1), \end{aligned}\nonumber \\ \end{aligned}$$
(4.21)
where \(\{\xi _i(\lambda )\}\) and \(\{r_{ik}(\lambda );0\le k\le N-1\}\) are given in (4.18) and (4.2)–(4.4), respectively.
Proof
Differentiating (4.20) once more with respect to s yields
$$\begin{aligned} \begin{aligned}&(B^{{\prime \prime }}(s)-\lambda N(N-1)s^{N-2})\sum _{j=0}^{\infty }r_{ij}(\lambda )s^j +2(B'(s)-\lambda Ns^{N-1})\sum _{j=1}^{\infty }jr_{ij}(\lambda )s^{j-1}\\&\qquad +(B(s)-\lambda s^N)\sum _{j=2}^{\infty }j(j-1)r_{ij}(\lambda )s^{j-2}\\&\quad =\sum _{k=2}^{N-1}r_{ik}(\lambda )k(k-1)s^{k-2}(B(s)-H_k(s)s^{N-k})\\&\qquad +2\sum _{k=1}^{N-1}r_{ik}(\lambda )ks^{k-1}[B'(s)-H'_k(s)s^{N-k}-H_k(s)(N-k)s^{N-k-1}]\\&\qquad +\sum _{k=0}^{N-1}r_{ik}(\lambda )s^k[B^{{\prime \prime }}(s)-H^{{\prime \prime }}_k(s)s^{N-k}-2H'_k(s)(N-k)s^{N-k-1}\\&\qquad -H_k(s)(N-k)(N-k-1)s^{N-k-2}] -(i+N)(i+N-1)s^{i+N-2}. \end{aligned} \end{aligned}$$
Letting \(s=1\) in the above expression, noting that \(B(1)=H_k(1)=0\) and \(\xi _i(\lambda )=\sum _{j=1}^{\infty }jr_{ij}(\lambda )\), and using the fact that \(\sum _{j=0}^{\infty }r_{ij}(\lambda )=\frac{1}{\lambda }\), we obtain the expression (4.21) with \(\eta _i(\lambda )=\sum _{j=2}^{\infty }j(j-1)r_{ij}(\lambda )\). \(\square \)

5 The bulk-arrival and bulk-service queue stopped at idle state

In this section, as well as the following one, we concentrate on the very important topic of the busy period distribution. As a preparation, the probabilistic laws governing the hitting time of the idle state zero are first revealed in this section. To achieve this aim, we need to study a special structure of our queueing model. More specifically, we need to examine the structure of the regular process \(\{Y_t;\,t\ge 0\}\) whose (regular) q-matrix \({\hat{Q}}\) is defined in (1.1)–(1.3) but with the special feature that \(h_{0j}=0~(\forall j\ge 0)\) while \(h_{ii}\ne 0~(1\le i\le N-1)\). We shall immediately see that there exists a close relationship between this process and our full queueing model. In particular, the hitting properties of the process \(\{Y_t;\,t\ge 0\}\) are directly related to the busy period properties of our full queueing process \(\{X_t\}\). For convenience, as in Sect. 3, let us view \(\{Y_t;\,t\ge 0\}\) as a branching process and thus use terms such as extinction probability, etc. In fact, the process \(\{Y_t;\,t\ge 0\}\) can indeed be viewed as a generalised Markov branching process with state-dependent immigration, and thus is also of independent interest.
As in Sect. 3, we now provide a special structure for the \({\hat{Q}}\)-resolvent of the process \(\{Y_t;\,t\ge 0\}\). But this is easy. In fact, it is simply a direct consequence of Theorem 2.1. Indeed, letting \(H_0(s)=0\) in Theorem 2.1 immediately yields the following conclusion.
Theorem 5.1
Suppose that \(h_{0j}\equiv 0\) for all \(j\ge 0\) and \(h_{jj}\ne 0\) for all \(1\le j \le N-1\). Then the \({\hat{Q}}\)-resolvent \(\Phi (\lambda )=(\phi _{ij}(\lambda );\, i,j\ge 0)\) of the \({\hat{Q}}\)-process \(\{Y_t;\,t\ge 0\}\) possesses the following form:
(i)
$$\begin{aligned} \phi _{0j}(\lambda )=\frac{1}{\lambda }\delta _{0j}. \end{aligned}$$
(5.1)
 
(ii)
For any \(i\ge 1,~|s|\le 1,\)
$$\begin{aligned} \sum _{j=0}^{\infty }\phi _{ij}(\lambda )s^j=\frac{B(s)\phi _{i0}(\lambda ) +\sum _{k=1}^{N-1}\phi _{ik}(\lambda )(B(s)s^k-H_k(s)s^{N})-s^{i+N}}{B(s)-\lambda s^N},\nonumber \\ \end{aligned}$$
(5.2)
 
where the known functions B(s) and \(H_k(s)~(1\le k\le N-1)\) are given in (2.2) and (2.1), respectively. Moreover, the \(\{\phi _{ij}(\lambda );\,i\ge 1,0\le j\le N-1\}\) in the numerator of the right-hand side of (5.2) take the form
$$\begin{aligned} \phi _{i0}(\lambda )=\frac{|\mathbf{U} \,^i(\lambda ), \mathbf {V} _1(\lambda ),\mathbf {V} _2(\lambda ),\ldots , \mathbf {V} _{N-1}(\lambda )|}{|~\lambda \mathbf{1} ~,\mathbf {V} _1(\lambda ), \mathbf {V} _2(\lambda ),\ldots ,\mathbf {V} _{N-1}(\lambda )|}, \end{aligned}$$
(5.3)
and, for \(1\le k\le N-2\),
$$\begin{aligned}&\phi _{ik}(\lambda ) =\frac{|\lambda \mathbf{1} ,\mathbf {V} _1(\lambda ), ~\ldots ,\mathbf {V} _{k-1}(\lambda ),\mathbf{U} \,^{i}(\lambda ), \mathbf {V} _{k+1}(\lambda ),\ldots ,\mathbf {V} _{N-1}(\lambda )|}{|\lambda \mathbf{1} ,\mathbf {V} _1(\lambda ),~\ldots , \mathbf {V} _{k-1}(\lambda ),\mathbf {V} _{k}(\lambda ), \mathbf {V} _{k+1}(\lambda ),\ldots ,\mathbf {V} _{N-1}(\lambda )|}, \end{aligned}$$
(5.4)
$$\begin{aligned}&\phi _{i\,N-1}(\lambda )=\frac{|~\lambda \mathbf{1} ~, \mathbf {V} _1(\lambda ),~\ldots ,\mathbf {V} _{N-2}(\lambda )~, \mathbf{U} \,^i(\lambda )|}{|~\lambda \mathbf{1} ~,\mathbf {V} _1(\lambda ),~\ldots , \mathbf {V} _{N-2}(\lambda )~,\mathbf {V} _{N-1}(\lambda )|}. \end{aligned}$$
(5.5)
Proof
Using similar arguments as in proving Theorem 4.1, we can immediately prove this theorem. \(\square \)
Similar to \({\hat{{\textit{\textbf{U}}}}}(\lambda )\) defined in (3.5), we define \({\hat{{\textit{\textbf{V}}}}_k(\lambda )}\) as follows:
$$\begin{aligned} \hat{{{\textit{\textbf{V}}}}}_k(\lambda )=\lambda \hat{{{\textit{\textbf{U}}}}}\,^k(\lambda )-{{\varvec{{H}}}}_k (\hat{{\textit{\textbf{U}}}}(\lambda ))\quad (0\le k\le N-1). \end{aligned}$$
(5.6)
Note that the component form of \(\hat{{{\textit{\textbf{V}}}}}_k(\lambda )\) is, for \(0\le k\le N-1\),
$$\begin{aligned} \hat{{{\textit{\textbf{V}}}}}_k(\lambda )=\left( \lambda u_1^k(\lambda )-H_k(u_1(\lambda )), ~\lambda u_2^k(\lambda )-H_k(u_2(\lambda )),\ldots ,~\lambda u_{N-1}^k(\lambda )-H_k(u_{N-1}(\lambda ))\right) .\nonumber \\ \end{aligned}$$
(5.7)
We now use Theorem 5.1 to investigate properties of the process \(\{Y_t;\,t\ge 0\}\), viewed as above as a branching process. Denote by \(\tau \) the extinction time to state 0, i.e. \(\tau :=\inf \{t>0;\, Y_t=0\}\) if \(Y_t=0\) for some \(t>0\) and \(\tau =\infty \) otherwise. Let
$$\begin{aligned} q_{i0}(t):={\mathbb {P}}_i(\tau \le t)\ \ \ (i\ge 1) \end{aligned}$$
and
$$\begin{aligned} q_{i0}=\lim _{t\rightarrow \infty }q_{i0}(t)={\mathbb {P}}_i(\tau <\infty )\ \ \ (i\ge 1) \end{aligned}$$
denote the distribution function of the extinction time and the extinction probability, respectively, given that the process starts at state \(i\ge 1\). Noting that the Laplace transforms of \(q_{i0}(t)\) are actually given in (5.3), we immediately obtain the following important result.
Theorem 5.2
If \(m_d\ge m_b\) then \(q_{i0}=1\ (i\ge 1)\). On the other hand, if \(m_d<m_b\) (including \(m_b=+\infty \)), then \(q_{i0}<1\ (i\ge 1)\), in which case \(q_{i0}\) is given by
$$\begin{aligned} q_{i0}=\frac{\begin{vmatrix} u^i,&H_1(u),&H_2(u),&\ldots ,&H_{N-1}(u)\\ {\hat{\mathbf{U }}}\,^i,&{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} 1,&H_1(u),&H_2(u),&\ldots ,&H_{N-1}(u)\\ \mathbf{1 },&{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}~~(i\ge 1), \end{aligned}$$
(5.8)
where \(u<1\) is the unique smallest positive root of \(B(s)=0\) on (0, 1).
Proof
By using (5.3) and noting that \(\mathbf{U} (\lambda )\), \(\mathbf {U} \,^k(\lambda )\) and \(\mathbf {H} _k(\mathbf{U} (\lambda ))\) are all continuous functions of \(\lambda \ge 0\), together with the fact that \(\lim _{\lambda \rightarrow 0}{} \mathbf {U} ^k(\lambda )\) is finite, we obtain
$$\begin{aligned} \lim _{\lambda \rightarrow 0}{{\varvec{{V}}}_k(\lambda )} =\lim _{\lambda \rightarrow 0}\left( \lambda {{\textit{\textbf{U}}}}\,^k(\lambda ) -{\varvec{{H}}}_k(U(\lambda ))\right) =-{{\varvec{{H}}}}_k(U(0))=-{{\varvec{{H}}}}_k(U). \end{aligned}$$
(5.9)
By applying the Tauberian Theorem, we immediately get that, for \(i\ge 1\),
$$\begin{aligned} q_{i0}=\lim _{t\rightarrow \infty }q_{i0}(t)=\lim _{\lambda \rightarrow 0}\lambda \phi _{i0}(\lambda ) =\frac{|{{\textit{\textbf{U}}}}\,^i, \mathbf {H} _1(U),\ldots ,\mathbf {H} _{N-1} (U)|}{|~~{{{\textit{\textbf{1}}}}~, \mathbf {H} _1(U)},\ldots ,\mathbf {H} _{N-1}(U)|}. \end{aligned}$$
(5.10)
Note that, in obtaining (5.10), we have used the fact that \(|u_k(\lambda )|\le 1~(1\le k\le N-1)\), and thus \(\lim _{\lambda \rightarrow 0}\lambda {{\textit{\textbf{U}}}}\,^k(\lambda )=\mathbf{0 }\) for any non-negative integer \(k\ge 0\), together with the rules for taking limits in determinants.
By noting the notations introduced in (3.5), using (4.6) and (5.9), and applying the properties of determinants, we can rewrite (5.10) as follows:
$$\begin{aligned} q_{i0}= \frac{\begin{vmatrix} { u_0^i},&{H_1(u_0)},&H_2(u_0),&\ldots ,&H_{N-1}(u_0)\\ {\varvec{{\hat{U}}}}^i ,&{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} 1,&H_1(u_0),&H_2(u_0),&\ldots ,&H_{N-1}(u_0)\\ {\textit{\textbf{1}}},&{\textit{\textbf{H}}}_1 ({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}\qquad (i\ge 1). \end{aligned}$$
(5.11)
Now, if \(m_d\ge m_b\), then \(u_0=u=1\) and thus, for all \(1\le k\le N-1\), we have \(H_k(u_0)=H_k(1)=0\). It follows that, by applying \(H_k(1)=0\) in (5.11),
$$\begin{aligned} q_{i0} = \frac{\begin{vmatrix} 1,&0,&0,&\ldots ,&0\\ {\hat{\mathbf{U }}}\,^i,&{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} 1,&0,&0,&\ldots ,&0\\ \mathbf{1 },&{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}} =\frac{\begin{vmatrix} {\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} {\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}=1. \end{aligned}$$
On the other hand, if \(m_d<m_b\le \infty \), then \(u_0=u<1\) and thus the \(H_k(u)~(1\le k\le N-1)\) are not all zero. Hence,
$$\begin{aligned} q_{i0}= \frac{\begin{vmatrix} u^i,&H_1(u),&H_2(u),&\ldots ,&H_{N-1}(u)\\ {\hat{\mathbf{U }}}\,^i,&{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} 1,&H_1(u),&H_2(u),&\ldots ,&H_{N-1}(u)\\ \mathbf{1 },&{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}},\qquad (i\ge 1), \end{aligned}$$
(5.12)
which is less than 1. Indeed, the determinants in the numerator and denominator are exactly the same except for their first columns. This finishes the proof. \(\square \)
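The extinction probabilities in (5.8) are straightforward to evaluate numerically. The sketch below does so for a hypothetical supercritical example (\(m_d<m_b\)) and cross-checks the result against \(\lim _{\lambda \rightarrow 0}\lambda \phi _{i0}(\lambda )\) computed directly from (5.3); the data (N = 2 and the coefficients of B and \(H_1\)) are illustrative only.

```python
import numpy as np

# Hypothetical supercritical data (m_d < m_b): N = 2, B(s) = 1 - 4 s^2 + 3 s^3,
# so B'(1) = m_b - m_d = 1 > 0, and H_1(s) = 1 - 3 s + 2 s^2 (H_0 = 0 here: h_{0j} = 0).
N = 2
B = np.polynomial.Polynomial([1.0, 0.0, -4.0, 3.0])
H1 = np.polynomial.Polynomial([1.0, -3.0, 2.0])

# Roots of B(s) = 0 strictly inside the unit disc; since m_b > m_d these are the limits of
# the in-disc roots of B(s) - lam * s^N as lam -> 0, and the root s = 1 is not among them.
roots = np.sort([x.real for x in B.roots() if abs(x) < 1 - 1e-9])
u_hat, u = roots                              # here u_hat ~ -0.434, u ~ 0.768 in (0, 1)

def q(i):
    """Extinction probability q_{i0} from (5.8), written out for N = 2."""
    num = u ** i * H1(u_hat) - u_hat ** i * H1(u)
    den = H1(u_hat) - H1(u)
    return num / den

def lam_phi_i0(i, lam):
    """lam * phi_{i0}(lam) computed directly from (5.3); it tends to q_{i0} as lam -> 0."""
    ul = (B - np.polynomial.Polynomial([0.0] * N + [lam])).roots()
    ul = ul[np.abs(ul) < 1.0]
    V1 = lam * ul - H1(ul)
    num = np.linalg.det(np.column_stack([ul ** i, V1]))
    den = np.linalg.det(np.column_stack([lam * np.ones(N), V1]))
    return (lam * num / den).real

for i in (1, 2, 5):
    print(i, q(i), lam_phi_i0(i, 1e-6))
```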
By the above theorem, we see that if \(m_d\ge m_b\), then, starting from any state \(i\ge 1\), the process will go to extinction with probability one. Therefore we are interested in obtaining the mean extinction times \({\mathbb {E}}_i(\tau )\ (i\ge 1)\) as well as the conditions under which these mean extinction times are finite.
Theorem 5.3
\({\mathbb {E}}_i(\tau )\ (i\ge 1)\) is finite if and only if \(m_d>m_b\) and \(H_k^{\prime }(1)<\infty ~(1\le k\le N-1)\), in which case
$$\begin{aligned} {\mathbb {E}}_i(\tau )= \frac{\begin{vmatrix} ~\frac{i}{m_d-m_b},&1+\frac{H_1^{\prime }(1)}{m_d-m_b},&\ldots ,&1+\frac{H_{N-1}^{\prime }(1)}{m_d-m_b}\\ \mathbf{1 }-{\hat{\mathbf{U }}}\,^i,&~~~{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} {\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),~~&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1} ({\varvec{{\hat{U}}}})\end{vmatrix}}. \end{aligned}$$
(5.13)
Proof
First note that if \(m_d<m_b\), then the extinction probability is strictly less than 1, and thus, trivially, \({\mathbb {E}}_i(\tau )=\infty ~(i\ge 1)\). Hence we only need to consider the case that \(m_d\ge m_b\). However, under this condition we know that, for any \(i\ge 1\),
$$\begin{aligned} {\mathbb {E}}_i(\tau )=\lim _{\lambda \rightarrow 0}\frac{1-\lambda \phi _{i0}(\lambda )}{\lambda }, \end{aligned}$$
where \(\phi _{i0}(\lambda )\) is given in (5.3) of Theorem 5.1.
Substituting (5.3) into the above, a little algebra gives that
$$\begin{aligned} {\mathbb {E}}_i(\tau ) =\lim _{\lambda \rightarrow 0}\frac{\begin{vmatrix} \lambda ^{-1}(1-u_0^i(\lambda )),&u_0(\lambda ) -\lambda ^{-1}H_1(u_0(\lambda )),&\ldots ,&u_0^{N-1}(\lambda ) -\lambda ^{-1}H_{N-1}(u_0(\lambda ))\\ \mathbf{1 }-{\varvec{{\hat{U}}}}\,^i(\lambda ),&{\varvec{{\hat{V}}}}_1(\lambda ),&\ldots ,&{\varvec{{\hat{V}}}}_{N-1} (\lambda )\end{vmatrix}}{\left| \mathbf{1 },~~~~~~\mathbf {V} _1(\lambda ),\qquad \qquad \ldots , ~~~\mathbf {V} _{N-1}(\lambda )\right| }. \end{aligned}$$
(5.14)
By the expression (2.10) of Lemma 2.3, we know that if \(m_d > m_b\) then
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{1-u_0^i(\lambda )}{\lambda }=\frac{i}{m_d-m_b}=\frac{i}{-B'(1)}<+\infty , \end{aligned}$$
(5.15)
and if \(m_d \le m_b\), this limit is \(+\infty \). Also, if \(m_d>m_b\) then, for any \(1\le k\le N-1\),
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\left( u_0^k(\lambda )-\frac{H_k(u_0(\lambda ))}{\lambda }\right) =1-H'_k(1)\cdot u'_0(0), \end{aligned}$$
(5.16)
where in obtaining (5.15) and (5.16) we have used the facts that \(\lim _{\lambda \rightarrow 0}u_0(\lambda )=1\), due to the conditions that \(m_d\ge m_b\), and that both \(H_k(s)\) and B(s) are continuous functions of \(s\ge 0\).
Recall that \(u_0(\lambda )\) satisfies the equation \(B(s)-\lambda s^N=0\) and thus \(B(u_0(\lambda ))=\lambda (u_0(\lambda ))^N.\) Differentiating with respect to \(\lambda > 0\) gives \( B'(u_0(\lambda ))\cdot u'_0(\lambda )=(u_0(\lambda ))^N+\lambda N(u_0(\lambda ))^{N-1}\cdot u'_0(\lambda ).\)
Letting \(\lambda \rightarrow 0\), noting that \(\lim _{\lambda \rightarrow 0}u_0(\lambda )=1\) and \(\lim _{\lambda \rightarrow 0}u'_0(\lambda )<\infty \) yields that \(B'(1)u'_0(0)=1\), or,
$$\begin{aligned} u'_0(0)=(B'(1))^{-1}. \end{aligned}$$
(5.17)
Also, it is easily seen that, for \(0\le k\le N-1\),
$$\begin{aligned} \lim _{\lambda \rightarrow 0}{\textit{\textbf{V}}}_k(\lambda )=\lim _{\lambda \rightarrow 0} \left( \lambda {\textit{\textbf{U}}}^k(\lambda )-{\textit{\textbf{H}}}_k({\textit{\textbf{U}}}(\lambda ))\right) =-{\textit{\textbf{H}}}_k({\textit{\textbf{U}}}(0))=-{\textit{\textbf{H}}}_k({\textit{\textbf{U}}}) \end{aligned}$$
(5.18)
due to the facts that \({\textit{\textbf{U}}}(\lambda )\) is a bounded function of \(\lambda \) (bounded by 1, say) and \({\textit{\textbf{H}}}_k(\cdot )\) and \({\textit{\textbf{U}}}\,^k(\cdot )\) are continuous functions, where \({\textit{\textbf{U}}}(0)\hat{=}{\textit{\textbf{U}}}=(u_0,u_1,\ldots ,u_{N-1})\) are the N roots of the equation \(B(s)=0\) on the unit disc \(\{s;\,|s|\le 1\}\).
By applying (5.15)–(5.18) in (5.14), we immediately get that, for \(i\ge 1\),
$$\begin{aligned} {\mathbb {E}}_i(\tau )=\frac{\begin{vmatrix} \frac{i}{-B'(1)},&1-\frac{H_1^{\prime }(1)}{B'(1)},&\ldots ,&1-\frac{H_{N-1}^{\prime }(1)}{B'(1)}\\ \mathbf{1 }-{\varvec{{\hat{U}}}}^i,&-{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&\ldots ,&-{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} 1,&-H_1(1),&\ldots ,&-H_{N-1}(1)\\ -{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&-{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&-{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}. \end{aligned}$$
(5.19)
By noting that, for any \(1\le k\le N-1\), \(H_k(1)\equiv 0\) and that \(B'(1)=(-1)(m_d-m_b)\), we see by (5.19) that if \(m_d=m_b\) or if there exists some \(k~(1\le k\le N-1)\) such that \(H'_k(1)=\infty \), then \({\mathbb {E}}_i(\tau )=\infty ~(i\ge 1)\) while, if \(m_d>m_b\) and, for all \(1\le k\le N-1\), \(H_k'(1)<\infty \), then (5.19) is just (5.13). The proof is finished. \(\square \)
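Since \({\mathbb {E}}_i(\tau )=\lim _{\lambda \rightarrow 0}(1-\lambda \phi _{i0}(\lambda ))/\lambda \), the mean extinction time can also be approximated directly from the Laplace transform (5.3) at small \(\lambda \), without expanding any determinants. A minimal sketch, for hypothetical subcritical data (\(m_d>m_b\)):

```python
import numpy as np

# Hypothetical subcritical data (m_d > m_b): N = 2, B(s) = 3 - 5 s^2 + 2 s^3,
# H_1(s) = 1 - 3 s + 2 s^2 (state 0 is absorbing, so H_0 = 0).
N = 2
B = np.polynomial.Polynomial([3.0, 0.0, -5.0, 2.0])
H1 = np.polynomial.Polynomial([1.0, -3.0, 2.0])

def lam_phi_i0(i, lam):
    """lam * phi_{i0}(lam), with phi_{i0}(lam) taken from (5.3)."""
    u = (B - np.polynomial.Polynomial([0.0] * N + [lam])).roots()
    u = u[np.abs(u) < 1.0]
    V1 = lam * u - H1(u)
    num = np.linalg.det(np.column_stack([u ** i, V1]))
    den = np.linalg.det(np.column_stack([lam * np.ones(N), V1]))
    return (lam * num / den).real

# (1 - lam * phi_{i0}(lam)) / lam stabilises as lam decreases, giving E_i(tau) numerically.
for i in (1, 3):
    print(i, [(1.0 - lam_phi_i0(i, lam)) / lam for lam in (1e-2, 1e-3, 1e-4)])
```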
Theorem 5.3 provides useful information regarding the mean extinction time \({\mathbb {E}}_i(\tau )\) under the conditions that \(m_d > m_b\) and \(H_k'(1)<+\infty ~(1\le k\le N-1)\). If \(m_d<m_b\le +\infty \), then \({\mathbb {E}}_i(\tau )\) (for all \(i\ge 1\)) will be infinite since, by Theorem 5.2 above, the extinction probabilities satisfy \(q_{i0}<1\). To avoid such uninteresting situations, we turn our attention to the conditional mean extinction time \({\mathbb {E}}_i(\tau \mid \tau <\infty )\), which is much more informative.
Theorem 5.4
Suppose \(m_d < m_b \le +\infty \). Then the conditional mean extinction times \({\mathbb {E}}_i(\tau \mid \tau <\infty )\ (i\ge 1) \) are finite and given by
$$\begin{aligned} {\mathbb {E}}_i(\tau |\tau < \infty )=\frac{-1}{q_{i0}}\left( \frac{d}{d\lambda } (\lambda \phi _{i0}(\lambda ))\right) \Bigg |_{\lambda =0}, \end{aligned}$$
(5.20)
where \(q_{i0}\) is given in (5.12) and \(\phi _{i0}(\lambda )\) is given in (5.3).
More specifically, for \(i\ge 1\),
$$\begin{aligned} {\mathbb {E}}_i(\tau |\tau <\infty )=\frac{1}{q_{i0}}\left( \frac{{\textit{\textbf{B}}}^{(i)}}{{\textit{\textbf{A}}}^{2}}\sum _{k=1}^{N-1}\mathfrak {{\tilde{G}}}_k-\frac{1}{{\textit{\textbf{A}}}}\sum _{k=0}^{N-1}\mathfrak {{\tilde{F}}}_k^{(i)}\right) , \end{aligned}$$
(5.21)
where
$$\begin{aligned} {\textit{\textbf{A}}}= & {} |\mathbf{1 },~{\textit{\textbf{H}}}_1({\textit{\textbf{U}}}(0)),~{\textit{\textbf{H}}}_2({\textit{\textbf{U}}}(0)), \ldots ,~{\textit{\textbf{H}}}_{N-1}({\textit{\textbf{U}}}(0))|, \end{aligned}$$
(5.22)
$$\begin{aligned} {\textit{\textbf{B}}}^{(i)}= & {} |{\textit{\textbf{U}}}^i(0),~{\textit{\textbf{H}}}_1({\textit{\textbf{U}}}(0)),~{\textit{\textbf{H}}}_2({\textit{\textbf{U}}}(0)), \ldots ,~{\textit{\textbf{H}}}_{N-1}({\textit{\textbf{U}}}(0))|, \end{aligned}$$
(5.23)
$$\begin{aligned} \mathfrak {{\tilde{F}}}_0^{(i)}= & {} i\cdot |{\textit{\textbf{U}}}^{i-1}(0) \otimes \,{\textit{\textbf{U}}}'(0),~\mathbf {H} _1(U(0)),~ \mathbf {H} _2(U(0)),\ldots ,\mathbf {H} _{N-1}(U(0))|, \quad \end{aligned}$$
(5.24)
and, for \(1\le k\le N-1\), \(\mathfrak {{\tilde{F}}}_k^{(i)}\) and \(\mathfrak {{\tilde{G}}}_k\) are given in (5.25) and (5.26), respectively:
$$\begin{aligned}&|{\textit{\textbf{U}}}^{i}(0),~\mathbf {H} _1(U(0)),~\ldots , ~\mathbf {H} _{k-1}(U(0)),~{\textit{\textbf{H}}}_k'({\textit{\textbf{U}}}(0))\nonumber \\&\quad \otimes \, {\textit{\textbf{U}}}'(0)-{\textit{\textbf{U}}}^k(0),~\mathbf {H} _{k+1}(U(0)),\ldots , ~\mathbf {H} _{N-1}(U(0))|, \end{aligned}$$
(5.25)
$$\begin{aligned}&|\mathbf{1 },~\mathbf {H} _1(U(0)),~\ldots ,~ \mathbf {H} _{k-1}(U(0)),~{\textit{\textbf{H}}}_k'({\textit{\textbf{U}}}(0))\nonumber \\&\quad \otimes \, {\textit{\textbf{U}}}'(0)-{\textit{\textbf{U}}}^k(0),~\mathbf {H} _{k+1}(U(0)),\ldots ,~ \mathbf {H} _{N-1}(U(0))|. \end{aligned}$$
(5.26)
Proof
First note that, for any \(i\ge 1\),
$$\begin{aligned} \begin{aligned} {\mathbb {E}}_i(\tau |\tau<\infty )&=\int _0^{\infty } \left( 1-{\mathbb {P}}\{\tau \le t|\tau <\infty ,Y_0=i\}\right) dt. \end{aligned} \end{aligned}$$
(5.27)
But
$$\begin{aligned} \begin{aligned}&{\mathbb {P}}\{\tau \le t|\tau<\infty ,~Y_0=i\}=\frac{{\mathbb {P}}\{\tau \le t|Y_0=i\}}{{\mathbb {P}}\{\tau <\infty |Y_0=i\}} \end{aligned} \end{aligned}$$
and thus substituting the above into (5.27) yields
$$\begin{aligned} {\mathbb {E}}_i(\tau |\tau<\infty )=\int _0^{\infty }\left( 1-\frac{{\mathbb {P}}\{\tau \le t|Y_0=i\}}{{\mathbb {P}}\{\tau <\infty |Y_0=i\}}\right) {\hbox {d}}t. \end{aligned}$$
(5.28)
However, \({\mathbb {P}}\{\tau \le t|Y_0=i\}\) is just the \(p_{i0}(t)\) for the \({\hat{Q}}\)-function of the \({\hat{Q}}\)-process \(\{Y_t;\,t\ge 0\}\) whose Laplace-transform is just \(\phi _{i0}(\lambda )~(i\ge 1)\) which is given in (5.3). It follows that the Laplace-transform of \(\int _0^t(1-\frac{{\mathbb {P}}\{\tau \le u|Y_0=i\}}{{\mathbb {P}}\{\tau <\infty |Y_0=i\}})\hbox {d}u\) is nothing but
$$\begin{aligned} \frac{1}{\lambda }\left( \frac{1}{\lambda }-\frac{\phi _{i0}(\lambda )}{{\mathbb {P}}\{\tau <\infty |Y_0=i\}}\right) \end{aligned}$$
and thus, applying the Tauberian Theorem in (5.28) yields
$$\begin{aligned} \begin{aligned} {\mathbb {E}}_i&(\tau |\tau<\infty )=\lim _{\lambda \rightarrow 0}~\lambda \cdot \frac{1}{\lambda }\left( \frac{1}{\lambda }-\frac{\phi _{i0}(\lambda )}{{\mathbb {P}}\{\tau<\infty |Y_0=i\}}\right) \\&=\frac{1}{{\mathbb {P}}\{\tau<\infty |Y_0=i\}}\lim _{\lambda \rightarrow 0}\frac{{\mathbb {P}}\{\tau <\infty |Y_0=i\}-\lambda \phi _{i0}(\lambda )}{\lambda }. \end{aligned} \end{aligned}$$
(5.29)
Noting that \({\mathbb {P}}\{\tau <\infty |Y_0=i\}\) is nothing but \(q_{i0} (i\ge 1)\) given in (5.8) (since in our case \(m_d<m_b\)), the crucial thing is to calculate
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{q_{i0}-\lambda \phi _{i0}(\lambda )}{\lambda }. \end{aligned}$$
(5.30)
Applying L'Hospital's rule to (5.30) shows that the limit in (5.30) is just
$$\begin{aligned} \lim _{\lambda \rightarrow 0}-\frac{\text {d}}{\text {d}\lambda }(\lambda \phi _{i0}(\lambda )), \end{aligned}$$
(5.31)
where \(\phi _{i0}(\lambda )~(i\ge 1)\) is given in (5.3). Hence (5.20) is proven. Moreover, substituting (5.3) into the above (5.31) yields that the limit in (5.31) is just
$$\begin{aligned} (-1)\cdot \lim _{\lambda \rightarrow 0}\left( \frac{d}{d\lambda }\cdot \frac{\left| {\textit{\textbf{U}}}^i(\lambda ),{\textit{\textbf{V}}}_1(\lambda ), {\textit{\textbf{V}}}_2(\lambda ),~\ldots ,{\textit{\textbf{V}}}_{N-1}(\lambda )\right| }{\left| ~~\mathbf{1} ,~{\textit{\textbf{V}}}_1(\lambda ),{\textit{\textbf{V}}}_2(\lambda ),~\ldots , {\textit{\textbf{V}}}_{N-1}(\lambda )\right| } \right) . \end{aligned}$$
(5.32)
Now, by applying the rules for differentiating determinants, together with some lengthy but elementary algebra, we can get all the conclusions stated in Theorem 5.4. \(\square \)

6 Busy period distributions

We are now ready to consider the busy period distributions of our full queueing processes. As in Sect. 4, the full irreducible queueing process will be denoted by \(\{X_t;\,t\ge 0\}\), where \(X_t\) denotes the number of customers in the queue (including the customers being served) at time \(t\ge 0\). Without loss of generality, we assume that \(X_0=0\). We now define a sequence of random variables \(\{\sigma _n; \,n\ge 0\}\) as follows:
$$\begin{aligned} \sigma _0=0,\ \ \sigma _1=\inf \{t>0;\,X_t\ne 0\}, \ \ \sigma _2=\inf \{t>\sigma _1;\,X_t=0\} \end{aligned}$$
and, for all \(n\ge 1\),
$$\begin{aligned} \sigma _{2n}=\inf \{t>\sigma _{2n-1};\,X_t=0\}\ \ \text{ and } \ \ \sigma _{2n+1}=\inf \{t>\sigma _{2n}; \,X_t\ne 0\}. \end{aligned}$$
It is clear that \(\{\sigma _n; \,n\ge 0\}\) is a sequence of increasing stopping times. Note that the random variables \(\{\sigma _{2n}-\sigma _{2n-1};\,n\ge 1\}\) are just the busy periods. By Itô's excursion law, see [21], we know that \(\{\sigma _{2n}-\sigma _{2n-1};\,n\ge 1\}\) are independent identically distributed random variables. For more details regarding this important excursion law, one can also see Rogers and Williams [32]. Our main aim now is to find this common distribution. Now, by the strong Markov property, we have, for all \(n\ge 1\),
$$\begin{aligned} {\mathbb {P}}(\sigma _{2n}-\sigma _{2n-1}\le t)= & {} \sum ^{\infty }_{k=1}{\mathbb {P}}(X_{\sigma _{2n-1}}=k) {\mathbb {P}}(\sigma _{2n}-\sigma _{2n-1}\le t\mid X_{\sigma _{2n-1}}=k)\nonumber \\= & {} \sum ^{\infty }_{k=1}-\frac{h_{0k}}{h_{00}}{\mathbb {P}}(\sigma _{2n} -\sigma _{2n-1}\le t\mid X_{\sigma _{2n-1}}=k)\nonumber \\= & {} \sum ^{\infty }_{k=1}-\frac{h_{0k}}{h_{00}}{\mathbb {P}}(\sigma _{2}-\sigma _{1}\le t\mid X_{\sigma _1}=k). \end{aligned}$$
(6.1)
However, under the condition that \( X_{\sigma _1}=k~~(k\ge 1)\), the distribution of \(\sigma _{2}-\sigma _{1}\) is just the extinction distribution of the \(\{Y_t(\omega );\,t\ge 0\}\) discussed in the previous Sect. 5. In other words, the Laplace transform of the conditional distribution \({\mathbb {P}}(\sigma _{2}-\sigma _{1}\le t\mid X_{\sigma _1}=k)\) is just \(\phi _{k0}(\lambda )~(k\ge 1)\) given in (5.3). More specifically, let T denote the length of the first busy period (recall that all busy periods are independent identically distributed random variables, and thus, it is enough to consider T only), and denote the cumulative distribution function of T by
$$\begin{aligned} F_k(t)={\mathbb {P}}_k(T\le t),\quad k\ge 1. \end{aligned}$$
We then can find the Laplace transform of \(F_k(t)\), i.e. \(\int _0^{\infty }e^{-\lambda t}\hbox {d}F_k(t)\), using the results obtained in the previous section. To this end, we claim the following conclusion.
Theorem 6.1
For the queueing process \(\{X_t; t\ge 0\}\) with the irreducible q-matrix Q given in (1.1)–(1.3), we have, for \(k\ge 1\),
$$\begin{aligned} \int _0^{\infty }e^{-\lambda t}\hbox {d}F_k(t)=\frac{\begin{vmatrix} \mathbf{U }^{\,k}(\lambda ),&\mathbf{V }_1(\lambda ),&\mathbf{V }_2(\lambda ),\ldots ,\mathbf{V }_{N-1}(\lambda )\end{vmatrix}}{\begin{vmatrix} ~~\lambda \mathbf{1 },&\mathbf{V }_1(\lambda ),&\mathbf{V }_2(\lambda ),\ldots , \mathbf{V }_{N-1}(\lambda )\end{vmatrix}}, \end{aligned}$$
(6.2)
where \(\mathbf{V }_i(\lambda )~(1\le i\le N-1)\) is defined as in (4.5). Moreover, \({\mathbb {E}}_k(T)<\infty \) if and only if \(m_d > m_b\) and \(H'_i(1)<+\infty ~(1 \le i \le N-1)\) and if these conditions are satisfied, then this finite-valued \({\mathbb {E}}_k(T)\) is given by
$$\begin{aligned} {\mathbb {E}}_k(T)= \frac{\begin{vmatrix} ~\frac{k}{m_d-m_b},&1+\frac{H_1^{\prime }(1)}{m_d-m_b},&\ldots ,&1+\frac{H_{N-1}^{\prime }(1)}{m_d-m_b}\\ \mathbf{1 }-{\hat{\mathbf{U }}}\,^k,&~~~{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} {\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),~~&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1} ({\varvec{{\hat{U}}}})\end{vmatrix}}\qquad (k\ge 1). \end{aligned}$$
(6.3)
Proof
Let \(\{f_{ij}(t);\,i,j\in Z_{+}\}\) and \(\{\phi _{ij}(\lambda );\,i,j\in Z_{+}\}\) denote the \({\hat{Q}}\)-function and \({\hat{Q}}\)-resolvent, respectively, of the process \(\{Y_t(\omega );\,t\ge 0\}\) discussed in the previous Sect. 5. Then their hitting time properties have been revealed in the previous section. In particular, the Laplace transform of \(F_i(t)=f_{i0}(t)\) is given in (5.3), and thus we obtain (6.2) directly from (5.3). Moreover, again by the arguments in the previous Sect. 5, in particular regarding Theorem 5.3, we can get (6.3) by using (5.13). The finiteness conditions for \({\mathbb {E}}_k(T)\) also follow. \(\square \)
We can now state the following important conclusion.
Theorem 6.2
For the queueing process \(\{X_t;\,t\ge 0\}\) with the irreducible q-matrix Q given in (1.1)–(1.3), the busy periods \(\{\sigma _{2n}-\sigma _{2n-1}\}\) are independent, identically distributed random variables, and the Laplace transform of their common distribution, denoted by \(g_T(\lambda )\), say, is given by
$$\begin{aligned} \lambda g_T(\lambda )=1+\frac{\lambda }{h}\cdot \frac{\quad \begin{vmatrix} \mathbf{H }_0(\mathbf{U }(\lambda )),&\mathbf{V }_1(\lambda ),&\mathbf{V }_2(\lambda ),\ldots ,\mathbf{V }_{N-1}(\lambda )\end{vmatrix}\quad }{\quad \begin{vmatrix} ~~~~\lambda \mathbf{1 },&\mathbf{V }_1(\lambda ),&\mathbf{V }_2(\lambda ), \ldots ,\mathbf{V }_{N-1}(\lambda )\end{vmatrix}}, \end{aligned}$$
(6.4)
where \(h=-h_{00}>0\) and \(\mathbf{V }_i(\lambda )~(0\le i\le N-1)\) are still given in (4.5).
Proof
By (6.1) and the remarks made before Theorem 5.3, we have
$$\begin{aligned} \begin{aligned} g_T(\lambda )&= \sum ^{\infty }_{k=1}\frac{\quad h_{0k}}{-h_{00}}\phi _{k0}(\lambda )\\&=\sum ^{\infty }_{k=1}\frac{h_{0k}}{-h_{00}}\cdot \frac{\quad \begin{vmatrix} {\textit{\textbf{U}}}^{k}(\lambda ),&{\textit{\textbf{V}}}_1(\lambda ),&{\textit{\textbf{V}}}_2(\lambda ),\ldots , {\textit{\textbf{V}}}_{N-1}(\lambda )\end{vmatrix}\quad }{\quad \begin{vmatrix} \lambda {\textit{\textbf{1}}},~~&{\textit{\textbf{V}}}_1(\lambda ),&{\textit{\textbf{V}}}_2(\lambda ),\ldots ,{\textit{\textbf{V}}}_{N-1} (\lambda )\end{vmatrix}\quad }\\&=\frac{\quad \begin{vmatrix} \sum ^{\infty }_{k=1}\frac{h_{0k}}{-h_{00}}{\textit{\textbf{U}}}^{k}(\lambda ),&{\textit{\textbf{V}}}_1(\lambda ),&{\textit{\textbf{V}}}_2(\lambda ),\ldots ,{\textit{\textbf{V}}}_{N-1}(\lambda )\end{vmatrix}\quad }{\quad \begin{vmatrix} \lambda {\textit{\textbf{1}}},&{\textit{\textbf{V}}}_1(\lambda ),&{\textit{\textbf{V}}}_2(\lambda ),\ldots , {\textit{\textbf{V}}}_{N-1}(\lambda )\end{vmatrix}\quad }. \end{aligned} \end{aligned}$$
(6.5)
Note that \(|{\textit{\textbf{U}}}(\lambda )|\le {\textit{\textbf{1}}}\) and thus \(\sum _{k=1}^{\infty } h_{0k}U^k(\lambda )\) is well-defined and that
$$\begin{aligned} \begin{aligned} \sum _{k=1}^{\infty }\frac{h_{0k}}{-h_{00}}{\textit{\textbf{U}}}^k(\lambda )&=\frac{1}{-h_{00}}\left[ \sum _{k=0}^\infty h_{0k}{\textit{\textbf{U}}}^k(\lambda )-h_{00}{\textit{\textbf{U}}}^0(\lambda )\right] \\&=\frac{1}{-h_{00}}[{\textit{\textbf{H}}}_0({\textit{\textbf{U}}}(\lambda ))-h_{00}{\textit{\textbf{1}}}]=\mathbf{1 } +\frac{{\textit{\textbf{H}}}_0({\textit{\textbf{U}}}(\lambda ))}{h}, \end{aligned} \end{aligned}$$
(6.6)
where \(h=-h_{00}>0\).
Substituting (6.6) into (6.5) together with some simple algebra regarding determinants immediately yields (6.4), which then ends the proof. \(\square \)
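Expression (6.4) is also convenient for computation. The sketch below evaluates the Laplace–Stieltjes transform \(\lambda g_T(\lambda )={\mathbb {E}}(e^{-\lambda T})\) for hypothetical data with \(m_d>m_b\); it tends to 1 as \(\lambda \rightarrow 0\), reflecting the fact that the busy period is then almost surely finite, while \((1-\lambda g_T(\lambda ))/\lambda \) approximates the mean busy period. All numerical data are illustrative.

```python
import numpy as np

# Hypothetical data with m_d > m_b: N = 2, B(s) = 3 - 5 s^2 + 2 s^3,
# H_0(s) = -2 + s + s^3 (so h = -h_{00} = 2), H_1(s) = 1 - 3 s + 2 s^2.
N = 2
B = np.polynomial.Polynomial([3.0, 0.0, -5.0, 2.0])
H0 = np.polynomial.Polynomial([-2.0, 1.0, 0.0, 1.0])
H1 = np.polynomial.Polynomial([1.0, -3.0, 2.0])
h = 2.0

def lst_busy(lam):
    """lam * g_T(lam) = E[exp(-lam * T)], evaluated from (6.4)."""
    u = (B - np.polynomial.Polynomial([0.0] * N + [lam])).roots()
    u = u[np.abs(u) < 1.0]
    V1 = lam * u - H1(u)
    num = np.linalg.det(np.column_stack([H0(u), V1]))
    den = np.linalg.det(np.column_stack([lam * np.ones(N), V1]))
    return (1.0 + (lam / h) * num / den).real

for lam in (1.0, 0.1, 1e-3):
    print(lam, lst_busy(lam), (1.0 - lst_busy(lam)) / lam)
# Second column -> 1 (the busy period is a.s. finite); third column -> mean busy period.
```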
From (6.4) we may obtain many properties of the busy periods. For example, we may obtain an elegant form for the mean busy period, as follows; the importance of such conclusions in queueing theory is well known.
Theorem 6.3
The mean busy period is finite if and only if \(m_d>m_b\) and \(H'_{i}(1)<\infty ~(1\le i\le N-1)\). Moreover, under these conditions, the (finite) mean busy period is given by
$$\begin{aligned} {\mathbb {E}}(\sigma _{2n}-\sigma _{2n-1})=\frac{\quad \begin{vmatrix} \frac{H'_{0}(1)}{m_d-m_b},&1+\frac{H'_{1}(1)}{m_d-m_b},&\ldots ,&1+\frac{H'_{N-1}(1)}{m_d-m_b}\\ \\ -\mathbf{H }_0({\hat{\mathbf{U }}}),&\mathbf{H }_1({\hat{\mathbf{U }}}),&\ldots ,&\mathbf{H }_{N-1}({\hat{\mathbf{U }}})\end{vmatrix}\quad }{\quad h\cdot \begin{vmatrix}\mathbf{H }_1({\hat{\mathbf{U }}}),&\mathbf{H }_2({\hat{\mathbf{U }}}),&\ldots ,&\mathbf{H }_{N-1}({\hat{\mathbf{U }}})\end{vmatrix}\quad }, \end{aligned}$$
(6.7)
where, again, \(h=-h_{00}>0.\)
Proof
We only need to show (6.7), since the first part is obvious. Although (6.7) is a direct consequence of the proven (6.3), we may get (6.7) more simply. In fact, by (6.1) we have that
$$\begin{aligned} {\mathbb {E}}(\sigma _{2n}-\sigma _{2n-1})={\mathbb {E}}(\sigma _2-\sigma _1)=\sum ^{\infty }_{i=1}\frac{h_{0i}}{h}{\mathbb {E}}_i(\tau ), \end{aligned}$$
where \({\mathbb {E}}_i(\tau )\ (i\ge 1)\) is given in (5.13). Substituting (5.13) into the above expression together with some simple algebra immediately yields (6.7). \(\square \)
If \(m_d<m_b\le +\infty \), then, although the mean busy period is infinite, we may still obtain much information by providing an expression for the conditional mean busy period. By the i.i.d. property, it is enough to consider \({\mathbb {E}}(\sigma _2-\sigma _1\mid \sigma _2<\infty )\) only.
Theorem 6.4
If \(m_d<m_b\le +\infty \), then the conditional expectation of the busy period, given that the queue returns to the idle state, is given by
$$\begin{aligned} \begin{aligned} {\mathbb {E}}(\sigma _2-\sigma _1\mid \sigma _2<\infty )&=\frac{1}{\mathbf{A }^2}\sum _{k=1}^{N-1} \mathfrak {{\tilde{G}}}_k\left( \sum _{i=1}^{\infty }\frac{h_{0i}}{h} \cdot \frac{\mathbf{B }^{(i)}}{q_{i0}}\right) \\&-\frac{1}{\mathbf{A }}\sum _{k=0}^{N-1}\left( \sum _{i=1}^{\infty } \frac{h_{0i}\cdot \mathfrak {{\tilde{F}}}_k^{(i)}}{hq_{i0}}\right) , \end{aligned} \end{aligned}$$
(6.8)
where \(\mathbf{A }\), \(\mathbf{B }^{(i)}~(i\ge 1)\), \(\mathfrak {{\tilde{F}}}_k^{(i)}~(0\le k\le N-1)\) and \(\mathfrak {{\tilde{G}}}_k~(1\le k\le N-1)\) are given in (5.22)–(5.26), respectively.
Proof
Just note that
$$\begin{aligned} {\mathbb {E}}(\sigma _2-\sigma _1\mid \sigma _2<\infty )=\sum ^{\infty }_{i=1}\frac{h_{0i}}{h}{\mathbb {E}}_i(\tau \mid \tau <\infty ), \end{aligned}$$
(6.9)
where \({\mathbb {E}}_i(\tau \mid \tau <\infty )\ (i\ge 1)\) are given in (5.21), with the notation (5.22)–(5.26), and \(h=-h_{00}\). Substituting (5.21) into the above expression immediately yields the conclusion. \(\square \)
It is interesting to note that, as Theorem 6.4 shows, so long as \(m_d<m_b\le +\infty \), the conditional expectation of the busy period is always finite even if \(B'(1)\) and all \(H_k'(1)\ (0\le k\le N-1)\) are infinite. Note, however, that neither Theorem 6.3 nor Theorem 6.4 covers the case \(m_d=m_b\). In fact, this critical case is more subtle. Indeed, although the busy period is almost surely finite, its expected value is infinite. Nevertheless, we are still able to provide interesting information regarding the asymptotic behaviour of the busy period in this subtle case. For this purpose, we first provide the following lemma. Recall that \(\{\phi _{k0}(\lambda ); \,k\ge 1\}\) given in (5.3) are the Laplace transforms of the \(p_{k0}(t)\) for the absorbing process \(\{Y_t;\,t\ge 0\}\) which was constructed in Theorem 5.1.
Lemma 6.1
If \(m_d=m_b\) and \(B^{{\prime \prime }}(1)\) is finite, then, for any \(i\ge 1\),
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{1-\lambda \phi _{i0}(\lambda )}{\sqrt{\lambda }}=\sqrt{\frac{2}{B^{{\prime \prime }}(1)}}\cdot \frac{\begin{vmatrix} i,&H_1'(1),&H_2'(1),&\ldots ,&H_{N-1}'(1)\\ \mathbf{1 }-{\hat{\mathbf{U }}},&\mathbf{H }_1({\hat{\mathbf{U }}}),&\mathbf{H }_2({\hat{\mathbf{U }}}),&\ldots ,&\mathbf{H }_{N-1}({\hat{\mathbf{U }}}) \end{vmatrix}}{\begin{vmatrix} \mathbf{1 }&\mathbf{H }_1(\mathbf{U }),&\mathbf{H }_2(\mathbf{U }),&\ldots ,&\mathbf{H }_{N-1}(\mathbf{U })\end{vmatrix}}. \end{aligned}$$
(6.10)
Proof
We first point out that if \(m_d=m_b\) and \(B^{{\prime \prime }}(1)<\infty \), then we have
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{1-u_0^i(\lambda )}{\sqrt{\lambda }}=i\sqrt{\frac{2}{B^{{\prime \prime }}(1)}} \end{aligned}$$
(6.11)
and, for any k\((1\le k\le N-1)\),
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{\lambda u_0^k(\lambda )-H_k(u_0(\lambda ))}{\sqrt{\lambda }}=\sqrt{\frac{2}{B^{{\prime \prime }}(1)}}\cdot H_k'(1). \end{aligned}$$
(6.12)
(6.11) is easy and has been proved in Lemma 2.3 (see (2.12)). We now show that (6.12) is also true. Indeed, for any k where \(1\le k\le N-1\),
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{\lambda u_0^k(\lambda )-H_k(u_0(\lambda ))}{\sqrt{\lambda }} =-\lim _{\lambda \rightarrow 0}\frac{H_k(u_0(\lambda ))}{\sqrt{\lambda }} \end{aligned}$$
(6.13)
since, as \(\lambda \rightarrow 0\), \(u_0^k(\lambda )\) remains bounded and hence \(\lambda u_0^k(\lambda )/\sqrt{\lambda }\rightarrow 0\). Moreover, for the right-hand side of the above (6.13), we first note that
$$\begin{aligned} \begin{aligned} H_k(u_0(\lambda ))&=\sum _{j=0}^{\infty }h_{kj}\cdot u_0^j(\lambda ) =\sum _{j\ne k}h_{kj}\cdot u_0^{j}(\lambda )+h_{kk}u_0^k(\lambda )=\sum _{j\ne k}h_{kj}\cdot (u_0^{j}(\lambda )-u_0^k(\lambda )), \end{aligned} \end{aligned}$$
which can be written as
$$\begin{aligned} \sum _{j=0}^{k-1}h_{kj}\cdot u_0^j(\lambda )(1-u_0^{k-j}(\lambda ))-\sum _{j=k+1}^{\infty }h_{kj}\cdot u_0^k(\lambda )(1-u_0^{j-k}(\lambda )). \end{aligned}$$
Dividing the above by \(\sqrt{\lambda }\), letting \(\lambda \rightarrow 0\) and using (6.11), together with noting that (6.12) is trivially true for \(k=0\), yields
$$\begin{aligned} \begin{aligned} -\lim _{\lambda \rightarrow 0}\frac{H_k(u_0(\lambda ))}{\sqrt{\lambda }}&=\sum _{j=k+1}^{\infty }h_{kj}\cdot (j-k)\sqrt{\frac{2}{B^{{\prime \prime }}(1)}} -\sum _{j=0}^{k-1}h_{kj}\cdot (k-j)\sqrt{\frac{2}{B^{{\prime \prime }}(1)}}\\&=\sqrt{\frac{2}{B^{{\prime \prime }}(1)}}\left[ \sum _{j=k+1}^{\infty }h_{kj}\cdot (j-k)-\sum _{j=0}^{k-1}h_{kj}\cdot (k-j)\right] , \end{aligned} \end{aligned}$$
which is just \(\sqrt{\frac{2}{B^{{\prime \prime }}(1)}}\cdot H_k'(1)\), and hence (6.12) is proved.
We are now ready to prove (6.10). By (5.3), we know that
$$\begin{aligned} \begin{aligned} \lim _{\lambda \rightarrow 0}\frac{1-\lambda \phi _{i0}(\lambda )}{\sqrt{\lambda }}&=\lim _{\lambda \rightarrow 0}\frac{1}{\sqrt{\lambda }} \left[ 1-\lambda \cdot \frac{|{\textit{\textbf{U}}}^i(\lambda ),{\textit{\textbf{V}}}_1(\lambda ),{\textit{\textbf{V}}}_2(\lambda ),\ldots , {\textit{\textbf{V}}}_{N-1}(\lambda )|}{|~~\lambda {\textit{\textbf{1}}}~~,{\textit{\textbf{V}}}_1(\lambda ),{\textit{\textbf{V}}}_2(\lambda ),\ldots , {\textit{\textbf{V}}}_{N-1}(\lambda )|}\right] \\&=\lim _{\lambda \rightarrow 0}\frac{1}{\sqrt{\lambda }}\cdot \frac{|{\textit{\textbf{1}}}-{\textit{\textbf{U}}}^i(\lambda ),{\textit{\textbf{V}}}_1(\lambda ),{\textit{\textbf{V}}}_2(\lambda ),\ldots , {\textit{\textbf{V}}}_{N-1}(\lambda )|}{|{\varvec{~~~~~1~~~~~~~}},{\textit{\textbf{V}}}_1(\lambda ),{\textit{\textbf{V}}}_2(\lambda ),\ldots , {\textit{\textbf{V}}}_{N-1}(\lambda )|} \end{aligned} \end{aligned}$$
which is just
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{1}{\sqrt{\lambda }}\cdot \frac{\begin{vmatrix} 1-u_0^i(\lambda ),&\lambda u_0(\lambda )-H_1(u_0(\lambda )),&\ldots ,&\lambda u_0^{N-1}(\lambda )-H_{N-1}(u_0(\lambda ))\\ {\textit{\textbf{1}}}-{\varvec{{\hat{U}}}}(\lambda ),&\lambda {\varvec{{\hat{U}}}}(\lambda ) -{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}(\lambda )),&\ldots ,&\lambda {\varvec{{\hat{U}}}}^{N-1}(\lambda ) -{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}}(\lambda ))\end{vmatrix}}{\begin{vmatrix} \mathbf{1 },&{\textit{\textbf{V}}}_1(\lambda ),&\ldots ,~{\textit{\textbf{V}}}_{N-1}(\lambda )\end{vmatrix}}. \end{aligned}$$
Now, moving \(\frac{1}{\sqrt{\lambda }}\) into the first row of the numerator above, then letting \(\lambda \rightarrow 0\) and using the proven (6.11) and (6.12), together with the easily verified facts that, for any \(1\le k\le N-1\),
$$\begin{aligned} \lim _{\lambda \rightarrow 0}{\textit{\textbf{V}}}_k(\lambda )=-{\textit{\textbf{H}}}_k({\textit{\textbf{U}}}) \text{ and } \lim _{\lambda \rightarrow 0}[\lambda {\varvec{{\hat{U}}}}^k(\lambda ) -{\textit{\textbf{H}}}_{k}({\varvec{{\hat{U}}}}(\lambda ))]=-{\textit{\textbf{H}}}_{k}({\varvec{{\hat{U}}}}), \end{aligned}$$
we immediately obtain that, for all \(i\ge 1\),
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{1-\lambda \phi _{i0}(\lambda )}{\sqrt{\lambda }}=\sqrt{\frac{2}{B^{{\prime \prime }}(1)}}\cdot \frac{\begin{vmatrix} i,&H_1'(1),&H_2'(1),&\ldots ,&H_{N-1}'(1)\\ \\ \mathbf{1 }-{\varvec{{\hat{U}}}},&-{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&-{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&-{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}}) \end{vmatrix}}{\begin{vmatrix} ~~~\mathbf{1 },~~~&-{\textit{\textbf{H}}}_1({\textit{\textbf{U}}}),&-{\textit{\textbf{H}}}_2({\textit{\textbf{U}}}),&\ldots ,&-{\textit{\textbf{H}}}_{N-1}({\textit{\textbf{U}}})\end{vmatrix}} \end{aligned}$$
which is the same as (6.10). Lemma 6.1 is thus proven. \(\square \)
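Although Lemma 6.1 is purely analytic, its key ingredient (6.11) is easy to observe numerically. The following minimal Python sketch does so for \(i=1\), using the example of Section 7 below with \(k=3\) and \(b=ka\) (so that \(m_d=m_b\)), and taking \(u_0(\lambda )\) to be the unique zero of \(B_\lambda (s)\) in (0, 1) (the \(u(\lambda )\) of Section 7); this identification, and the numerical values used, are illustrative assumptions only.

```python
# Minimal numerical check of (6.11) for i = 1 in the critical case m_d = m_b.
# Assumption (illustration only): u0(lam) is the unique zero in (0,1) of
# B_lambda(s) = a - (lam + a + b)*s**k + b*s**(k+1) with b = k*a, as in Sect. 7.
import math

def u0(lam, a=1.0, k=3):
    b = k * a                                  # b = k*a gives m_d = k*a = b = m_b
    f = lambda s: a - (lam + a + b) * s ** k + b * s ** (k + 1)
    lo, hi = 0.0, 1.0                          # f(0) = a > 0 and f(1) = -lam < 0
    for _ in range(100):                       # plain bisection on (0, 1)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

a, k = 1.0, 3
B2 = -(a + k * a) * k * (k - 1) + (k * a) * (k + 1) * k    # B''(1) = a*k*(k+1) = 12a
print("target sqrt(2/B''(1)) =", math.sqrt(2.0 / B2))
for lam in (1e-2, 1e-4, 1e-6):
    print("lambda =", lam, " ratio =", (1.0 - u0(lam, a, k)) / math.sqrt(lam))
```

As \(\lambda \) decreases, the printed ratios should approach the target value \(\sqrt{2/B''(1)}=\sqrt{1/6}\approx 0.408\).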
We are now able to tackle the subtle case of \(m_d=m_b\).
Theorem 6.5
Suppose that \(m_d=m_b\) and \(B''(1)<\infty \), and assume further that \(H'_k(1)\) \((1\le k\le N-1)\) are all finite. Then the expectation of the busy periods \(\{\sigma _{2n}-\sigma _{2n-1}\}\) is infinite; more precisely, the truncated mean satisfies
$$\begin{aligned} {\mathbb {E}}\big [(\sigma _{2n}-\sigma _{2n-1})\wedge t\big ]\sim \sqrt{\frac{8t}{\pi B^{{\prime \prime }}(1)}}\frac{1}{h}\cdot \frac{\begin{vmatrix} H_0'(1),&H_1'(1),&H_2'(1),&\ldots ,&H_{N-1}'(1)\\ \mathbf{1 },&{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} ~~~\mathbf{1 }~~,&{\textit{\textbf{H}}}_1({\textit{\textbf{U}}}),&{\textit{\textbf{H}}}_2({\textit{\textbf{U}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\textit{\textbf{U}}})\end{vmatrix}}~~\text{ as } \ t\rightarrow \infty .\nonumber \\ \end{aligned}$$
(6.14)
Proof
Recall that we have used \(g_T(\lambda )\) to denote the Laplace transform of the common distribution function of the busy periods, so that \(\lambda g_T(\lambda )\) is the corresponding Laplace–Stieltjes transform. Hence, by a standard Tauberian argument (sketched after this proof), to prove (6.14) we only need to show that
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{1-\lambda g_T(\lambda )}{\sqrt{\lambda }}=\frac{\sqrt{2/B^{{\prime \prime }}(1)}}{h}\cdot \frac{\begin{vmatrix} H_0'(1),&H_1'(1),&H_2'(1),&\ldots ,&H_{N-1}'(1)\\ \mathbf{1 },&{\textit{\textbf{H}}}_1({\varvec{{\hat{U}}}}),&{\textit{\textbf{H}}}_2({\varvec{{\hat{U}}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1}({\varvec{{\hat{U}}}})\end{vmatrix}}{\begin{vmatrix} ~~~\mathbf{1 }~~,&{\textit{\textbf{H}}}_1({\textit{\textbf{U}}}),&{\textit{\textbf{H}}}_2({\textit{\textbf{U}}}),&\ldots ,&{\textit{\textbf{H}}}_{N-1} ({\textit{\textbf{U}}})\end{vmatrix}}.\nonumber \\ \end{aligned}$$
(6.15)
In fact, (6.15) follows readily. Indeed, by (6.1) and (6.4) we have
$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{1-\lambda g_T(\lambda )}{\sqrt{\lambda }}=\sum ^{\infty }_{i=1}\frac{h_{0i}}{h}\lim _{\lambda \rightarrow 0} \frac{1-\lambda \phi _{i0}(\lambda )}{\sqrt{\lambda }}. \end{aligned}$$
Now using (6.10) together with some easy algebra we immediately obtain (6.15). This ends the proof. \(\square \)
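The passage from (6.15) to (6.14) rests on a standard Tauberian argument which, for the reader's convenience, we sketch here; this reading, with the truncated mean on the left-hand side of (6.14), is offered as an interpretation rather than as part of the formal proof above. Write T for a generic busy period and C for the right-hand side of (6.15), and recall that \(\lambda g_T(\lambda )={\mathbb {E}}(e^{-\lambda T})\). Then
$$\begin{aligned} 1-\lambda g_T(\lambda )=\lambda \int _0^\infty e^{-\lambda t}\,{\mathbb {P}}(T>t)\,\mathrm{d}t\sim C\sqrt{\lambda }\quad \text{ as } \lambda \rightarrow 0, \end{aligned}$$
so that \(\int _0^\infty e^{-\lambda t}{\mathbb {P}}(T>t)\,\mathrm{d}t\sim C\lambda ^{-1/2}\), and hence, by Karamata's Tauberian theorem,
$$\begin{aligned} {\mathbb {E}}\big [T\wedge t\big ]=\int _0^t{\mathbb {P}}(T>s)\,\mathrm{d}s\sim \frac{C}{\Gamma (3/2)}\,\sqrt{t}=C\sqrt{\frac{4t}{\pi }}\quad \text{ as } t\rightarrow \infty , \end{aligned}$$
which, since C carries the factor \(\sqrt{2/B''(1)}\), is precisely the right-hand side of (6.14).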

7 An example

In this final section, we provide an example to illustrate the conclusions obtained in the previous sections. For notational convenience, we shall use k in place of N throughout this section. We assume that the control sequence \(\{h_{ij}\}\) has the following birth–death structure: for any \(0\le i\le k-1\),
$$\begin{aligned} h_{i,i-1}=\mu _i,~~~h_{i,i}=-(\lambda _i+\mu _i),~~~h_{i,i+1}=\lambda _i, \end{aligned}$$
(7.1)
Here we assume that \(\mu _{-1}= \mu _0=0\), \(\mu _i>0~(1\le i\le k-1)\) and \(\lambda _i>0~(0\le i \le k-1)\). Hence, by (7.1), we clearly have
$$\begin{aligned} H_0(s)=-\lambda _0+\lambda _0s=-\lambda _0(1-s) \end{aligned}$$
(7.2)
and, for \(1\le i \le k-1\),
$$\begin{aligned} H_i(s)=\mu _is^{i-1}-(\mu _i+\lambda _i)s^i+\lambda _is^{i+1}=s^{i-1} (1-s)(\mu _i-\lambda _is)~~~(1\le i\le k-1),\nonumber \\ \end{aligned}$$
(7.3)
and thus
$$\begin{aligned} H_0'(1)=\lambda _0>0~~~\text { and }~~~ H'_i(1)=\lambda _i-\mu _i~(1\le i\le k-1).\nonumber \\ \end{aligned}$$
(7.4)
We further assume that the arrival-service sequence \(\{b_j;j\ge 0\}\) takes the following special forms:
$$\begin{aligned} b_0=a>0,~~~b_{k+1}=b>0,~~~b_k=-(a+b), \end{aligned}$$
(7.5)
where all the other components of the sequence \(\{b_j;j\ge 0\}\) are zero. This example is therefore a queueing model with Poisson arrivals at rate \(b > 0\) together with a full-load service rate \(a>0\); that is, if the service is provided by vans, say, then a van begins its service only when it is fully loaded. For this example, we know that
$$\begin{aligned} B(s)=a-(a+b)s^k+bs^{k+1} \end{aligned}$$
(7.6)
with \(m_d= ka\) and \(m_b= b\), and, for any \(\lambda >0\),
$$\begin{aligned} B_\lambda (s)=a-(\lambda +a+b)s^k+bs^{k+1}. \end{aligned}$$
(7.7)
By the results obtained earlier, we know that \(B_\lambda (s)\) has a unique positive zero in (0, 1), denoted by \(u(\lambda )\), together with \((k-1)\) further zeros in the open unit disc \(\{z;|z|<1\}\). Note, however, that for this special case \(B_\lambda (s)\) is simply a polynomial of degree \((k+1)\), and thus \(B_\lambda (s)\) has exactly \((k+1)\) zeros. Since \(B_\lambda (1)=-\lambda <0\) and \(\lim \limits _{s\rightarrow +\infty }B_\lambda (s)=+\infty \), \(B_\lambda (s)\) has a real zero greater than 1, which must be the single zero not lying in the open unit disc \(\{z;|z|<1\}\); we denote this zero by \(v(\lambda )\). Therefore, \(B_\lambda (s)\) has exactly two positive zeros, \(u(\lambda )\) and \(v(\lambda )\), which satisfy, for all \(\lambda >0\),
$$\begin{aligned} 0<u(\lambda )<1<v(\lambda )<+\infty . \end{aligned}$$
(7.8)
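The zero structure (7.8) is easy to observe numerically. The following minimal Python sketch, with \(k=3\) and arbitrary illustrative values of a, b and \(\lambda \) that are not prescribed by the model, computes the zeros of \(B_\lambda (s)\) and exhibits the k zeros in the open unit disc, the zero \(u(\lambda )\in (0,1)\) and the zero \(v(\lambda )>1\).

```python
# Numerical illustration of the zero structure (7.8) for k = 3; the values of
# a, b and lam below are arbitrary choices made purely for illustration.
import numpy as np

a, b, lam, k = 2.0, 1.5, 0.7, 3
# Coefficients of B_lambda(s) = a - (lam + a + b)*s**k + b*s**(k+1), highest power first.
zeros = np.roots([b, -(lam + a + b)] + [0.0] * (k - 1) + [a])

inside = [z for z in zeros if abs(z) < 1]
u = [z.real for z in zeros if abs(z.imag) < 1e-9 and 0 < z.real < 1]
v = [z.real for z in zeros if abs(z.imag) < 1e-9 and z.real > 1]
print("number of zeros in the open unit disc:", len(inside))   # expected: k = 3
print("u(lambda) in (0,1):", u)
print("v(lambda) > 1     :", v)
```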
Similarly, B(s) is also a polynomial of degree \((k+1)\) and thus has exactly \((k+1)\) zeros in the complex plane. Noting that \(B'(1)=b-ka\), we obtain the following simple conclusion.
Lemma 7.1
For the B(s) defined in (7.6) we have the following conclusions:
(i)
B(s) has exactly \((k+1)\) zeros in the complex plane, exactly two of which are positive. Moreover, if \(b<ka\), then these two positive zeros are 1 and v, where \(1<v<+\infty \); if \(b>ka\), then they are 1 and q, where \(0<q<1\); and if \(b=ka\), then the two positive zeros coincide at 1 (that is, 1 is a double zero).
 
(ii)
The other \((k-1)\) zeros, denoted by \(\{\omega _i;1\le i\le k-1\}\), are all located within the open unit disc \(\{z;|z|<1\}\), and they are either negative zeros or conjugate complex zeros. Moreover, if \(b>ka\), then \( |\omega _i|<q<1 \), where \(q<1\) is given in (i) above.
 
Proof
Obvious and thus omitted.\(\square \)
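As a quick numerical illustration of part (ii) of Lemma 7.1 in the case \(b>ka\), the following short Python sketch, with the arbitrary illustrative values \(k=4\), \(a=0.5\) and \(b=3\), computes the zeros of B(s) and compares the moduli of the \(\omega _i\) with q.

```python
# Numerical illustration of Lemma 7.1 (ii) when b > ka: the (k-1) zeros omega_i
# should satisfy |omega_i| < q < 1.  The values below are arbitrary with b > ka.
import numpy as np

k, a, b = 4, 0.5, 3.0                       # b = 3 > ka = 2
zeros = np.roots([b, -(a + b)] + [0.0] * (k - 1) + [a])    # zeros of B(s)
pos = sorted(z.real for z in zeros if abs(z.imag) < 1e-9 and z.real > 0)
q = pos[0]                                  # the positive zero in (0,1); pos[-1] is 1
omegas = [z for z in zeros if not (abs(z.imag) < 1e-9 and z.real > 0)]
print("q =", q, "  max |omega_i| =", max(abs(z) for z in omegas))   # expect max < q
```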
Using the notation introduced in (3.4) and (3.5), we then have
$$\begin{aligned} \hat{\mathbf{U }} \hat{=} \hat{\mathbf{U }}(0)=(\omega _1,\omega _2,\ldots ,\omega _{k-1})^\mathrm{T}, \end{aligned}$$
(7.9)
where \(\omega _1,\omega _2,\ldots ,\omega _{k-1}\) are the \((k-1)\) zeros of B(s) specified in part (ii) of Lemma 7.1 above. We can now obtain many results regarding this example by citing conclusions obtained in the previous sections. However, we shall mainly be interested in the ergodic property and, in particular, the limiting distribution.
For this purpose, we introduce the following notation:
$$\begin{aligned} \mathbf{D }^{(0)}=\left[ -\lambda _0, -\lambda _0, \ldots , -\lambda _0\right] ^\mathrm{T}= (-\lambda _0)[1,1, \ldots ,1]^\mathrm{T} \end{aligned}$$
(7.10)
and, for \(1\le j\le k-1\),
$$\begin{aligned} \mathbf{D }^{(j)}=\left[ \omega _1^{j-1}(\mu _j-\lambda _j\omega _1), \ldots ,\omega _i^{j-1}(\mu _j-\lambda _j\omega _i),\ldots ,\omega _{k-1}^{j-1} (\mu _j-\lambda _j\omega _{k-1})\right] ^\mathrm{T}.\nonumber \\ \end{aligned}$$
(7.11)
Hence \(\mathbf{D }^{(j)}~(0\le j\le k-1)\) are all \((k-1)\)-dimensional column vectors and the ith component of \(\mathbf{D }^{(j)}~(1\le j\le k-1)\) is just
$$\begin{aligned} (\mathbf{D }^{(j)})_i=\omega _i^{j-1}(\mu _j-\lambda _j\omega _i). \end{aligned}$$
(7.12)
Now, the meaning of the following determinants should be self-explanatory:
$$\begin{aligned} |\mathbf{D }^{(0)}, \mathbf{D }^{(1)},\ldots ,\mathbf{D }^{(j-1)}, \mathbf{D }^{(j+1)},\ldots ,\mathbf{D }^{(k-1)}|. \end{aligned}$$
(7.13)
Applying the results obtained in the previous sections, particularly Theorem 4.3, we arrive at the following conclusion.
Theorem 7.1
Suppose the control sequences \(\{h_{ij}\}\) are given in (7.1) and the arrival-service sequence \(\{b_j; j\ge 0\}\) is given in (7.5). Then we have
(i)
The corresponding queueing process is recurrent if and only if \(ka\ge b\) and positive recurrent if and only if \(ka>b\).
 
(ii)
If \(ka>b\), then there exists a unique limiting distribution \(\{\pi _j; j\ge 0\}\) whose generating function is given by
$$\begin{aligned} \Pi (s)=\sum _{j=0}^{\infty }\pi _js^j=\sum _{j=0}^{k-1}\pi _js^j\left( 1+\frac{\lambda _js -\mu _j}{G(s)}s^{k-1}\right) ,\nonumber \\ \end{aligned}$$
(7.14)
where \(G(s)= \frac{B(s)}{1-s} = \frac{a-(a+b)s^k+bs^{k+1}}{1-s}\) and \(\{\pi _j; 0\le j\le k-1\}\) are given by
$$\begin{aligned} \pi _0=\frac{|\mathbf{D }^{(1)},\mathbf{D }^{(2)},\ldots , \mathbf{D }^{(k-1)}|}{\tilde{\mathbf{D }}} \end{aligned}$$
(7.15)
and, for \(1\le j\le k-2\),
$$\begin{aligned} \pi _j=(-1)^j\frac{|\mathbf{D }^{(0)},\ldots ,\mathbf{D }^{(j-1)}, \mathbf{D }^{(j+1)},\ldots ,\mathbf{D }^{(k-1)}|}{\tilde{\mathbf{D }}} \end{aligned}$$
(7.16)
and
$$\begin{aligned} \pi _{k-1}=(-1)^{k-1}\cdot \frac{|\mathbf{D }^{(0)},\mathbf{D }^{(1)},\ldots , \mathbf{D }^{(k-2)}|}{\tilde{\mathbf{D }}}, \end{aligned}$$
(7.17)
where the numerators of (7.15)–(7.17) are explained in (7.13) and the \(\tilde{\mathbf{D }}\) in (7.15)–(7.17) is given by
$$\begin{aligned} \tilde{\mathbf{D }}=det(A)+\frac{\lambda _0}{ka-b}det(B). \end{aligned}$$
(7.18)
Here the \((k-1)\times (k-1)\) matrices \(A=\{a_{ij}; 1\le i\le k-1, 1\le j\le k-1\}\) and \(B=\{b_{ij}; 1\le i\le k-1, 1\le j\le k-1\}\) are given by
$$\begin{aligned} a_{ij}=\lambda _0+\omega _i^{j-1}(\mu _j-\lambda _j\omega _i) \end{aligned}$$
(7.19)
and
$$\begin{aligned} b_{ij}=\lambda _j(1-\omega _i^j)-\mu _j(1-\omega _i^{j-1}). \end{aligned}$$
(7.20)
 
Proof
By noting (7.4), conclusion (i) immediately follows from Theorem 4.2. To show (ii), just apply Theorem 4.3 to our example. In particular, write \(\kappa \) in (4.13) as
$$\begin{aligned} \kappa =\kappa _1+\frac{1}{ka-b}\kappa _2, \end{aligned}$$
where \(\kappa _1\) and \(\kappa _2\) should be self-explanatory by noting the form (4.13).
Using (7.2) and (7.3) together with (7.9), we immediately obtain that \(\kappa _1\) is just
$$\begin{aligned} \kappa _1=\begin{vmatrix} 1,&1,&\ldots ,&1\\\\ \mathbf{D }^{(0)},&\mathbf{D }^{(1)},&\ldots ,&\mathbf{D }^{(k-1)} \end{vmatrix} \times \prod _{i=1}^{k-1}(1-\omega _i). \end{aligned}$$
By noting the special form of \(\mathbf{D }^{(0)}\) in (7.10) it is easy to see that
$$\begin{aligned} \kappa _1=\prod _{i=1}^{k-1}(1-\omega _i)\cdot det(A), \end{aligned}$$
(7.21)
where \(A=\{a_{ij}; 1\le i\le k-1, 1\le j\le k-1\}\) is given in (7.19).
Similarly, for \(\kappa _2\), just using (7.4) and (7.9), we can obtain that
$$\begin{aligned} \kappa _2=\begin{vmatrix} \lambda _0,&\lambda _1-\mu _1,&\ldots ,&\lambda _j-\mu _j,&\ldots ,&\lambda _{k-1}-\mu _{k-1}\\ \mathbf{D }^{(0)},&\mathbf{D }^{(1)},&\ldots ,&\mathbf{D }^{(j)},&\ldots ,&\mathbf{D }^{(k-1)} \end{vmatrix} \times \prod _{i=1}^{k-1}(1-\omega _i), \end{aligned}$$
(7.22)
which, by simply applying properties of determinants and noting (7.10), is just
$$\begin{aligned} \kappa _2=\prod _{i=1}^{k-1}(1-\omega _i)\times det(B), \end{aligned}$$
(7.23)
where \(B=\{b_{ij}\}\) is given in (7.20).
It follows from (7.21) and (7.23) that the \(\kappa \) in (4.13) is just
$$\begin{aligned} \kappa =\prod _{i=1}^{k-1}(1-\omega _i)\times \tilde{\mathbf{D }} \end{aligned}$$
(7.24)
if we define \(\tilde{\mathbf{D }}\) as in (7.18).
Now consider \(\pi _0\) in (4.10). By applying (7.2) and (7.3) again, it is fairly easy to see that the numerator of \(\pi _0\) in (4.10) is just
$$\begin{aligned} \prod _{i=1}^{k-1}(1-\omega _i)\times \begin{vmatrix} \mathbf{D }^{(1)},&\mathbf{D }^{(2)},&\ldots ,&\mathbf{D }^{(k-1)} \end{vmatrix}. \end{aligned}$$
(7.25)
By noting (7.24) and (7.25) and cancelling the common term \(\prod _{i=1}^{k-1}(1-\omega _i)\), we see that (7.15) is true.
By the same arguments, together with using either (4.11) or (4.12), we immediately obtain (7.16) or (7.17). This ends the proof. \(\square \)
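As a computational companion to Theorem 7.1, the following Python sketch evaluates (7.15)–(7.20) for one arbitrary choice of k, a, b and of the rates \(\lambda _i,\mu _i\) with \(ka>b\); none of these numerical values is prescribed by the model. The final line checks that the generating function (7.14) evaluates to 1 at \(s=1\), as it should if the formulas are implemented consistently.

```python
# Numerical sketch of Theorem 7.1 (ii): pi_0,...,pi_{k-1} from (7.15)-(7.20).
# All parameter values below are arbitrary illustrative choices with k*a > b.
import numpy as np

k, a, b = 4, 1.0, 2.5                        # ka = 4 > b = 2.5: positive recurrent
lam = np.array([0.8, 0.6, 0.9, 0.7])         # lambda_0,...,lambda_{k-1} > 0
mu = np.array([0.0, 1.1, 0.5, 1.3])          # mu_0 = 0, mu_1,...,mu_{k-1} > 0

# Zeros of B(s) = a - (a+b)s^k + b s^(k+1); keep the k-1 zeros in the open unit disc.
roots = np.roots([b, -(a + b)] + [0.0] * (k - 1) + [a])
omega = np.array([z for z in roots if abs(z) < 1 - 1e-9])    # omega_1,...,omega_{k-1}
assert len(omega) == k - 1

# Column vectors D^(0),...,D^(k-1) of (7.10)-(7.11).
D = [np.full(k - 1, -lam[0], dtype=complex)]
D += [omega ** (j - 1) * (mu[j] - lam[j] * omega) for j in range(1, k)]

# Matrices A and B of (7.19)-(7.20) and the normalising constant (7.18).
A = np.array([[lam[0] + w ** (j - 1) * (mu[j] - lam[j] * w) for j in range(1, k)]
              for w in omega])
Bm = np.array([[lam[j] * (1 - w ** j) - mu[j] * (1 - w ** (j - 1)) for j in range(1, k)]
               for w in omega])
Dtil = np.linalg.det(A) + lam[0] / (k * a - b) * np.linalg.det(Bm)

# pi_j = (-1)^j |all D-columns except D^(j)| / Dtil, see (7.15)-(7.17).
pi = [((-1) ** j * np.linalg.det(np.column_stack(D[:j] + D[j + 1:])) / Dtil).real
      for j in range(k)]
print("pi_0,...,pi_{k-1} =", np.round(pi, 6))

# Sanity check: the generating function (7.14) evaluated at s = 1 should give 1.
print("Pi(1) =", sum(p * (1 + (lam[j] - mu[j]) / (k * a - b)) for j, p in enumerate(pi)))
```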
In order to obtain a more concrete example, we consider the further special case in which \(\lambda _i\equiv \lambda >0\) \((0\le i\le k-1)\) and \(\mu _i\equiv \mu >0\) \((1\le i\le k-1)\); that is, we assume that the control sequence has an M|M|1 structure. The expressions (7.14)–(7.20) then take particularly simple forms, and, in particular, the Vandermonde determinant plays an interesting role. For simplicity, we further assume that \(k=3\). Then, as a direct consequence of Theorem 7.1, we obtain the following simple yet interesting corollary.
Corollary 7.1
Suppose that the control sequence \(\{h_{ij}\}\) is given by (7.1) with \(\lambda _i\equiv \lambda >0\) \((0\le i\le k-1)\) and \(\mu _i\equiv \mu >0\) \((1\le i\le k-1)\), and that the arrival-service sequence \(\{b_j; j\ge 0\}\) is given by (7.5). Assume further that \(k=3\). Then we have the following conclusions:
(i)
The corresponding queueing process is recurrent if and only if \(3a\ge b\) and positive recurrent if and only if \(3a>b\). Moreover, if \(3a>b\), then B(s) has exactly 4 zeros, \(\omega _1\), \(\omega _2\), 1 and v, where \(1<v<+\infty \) and \(\omega _1\) and \(\omega _2\) are conjugate complex zeros within the open unit disc \(\{z;|z|<1\}\).
 
(ii)
If \(3a>b\), then there exists a unique limiting distribution \(\{\pi _j; j\ge 0\}\) whose generating function is given by
$$\begin{aligned} \Pi (s)=\sum _{j=0}^{\infty } \pi _{j} s^{j}= {\left( \pi _{0}+\pi _{1} s+\pi _{2} s^{2}\right) \left( 1+\frac{\lambda }{G(s)} s^{3}\right) -\left( \pi _{1}+\pi _{2} s\right) \cdot \frac{\mu }{G(s)} s^{3}},\nonumber \\ \end{aligned}$$
(7.26)
where \(G(s)=a(1+s+s^2)-bs^3\) and \(\{\pi _j; 0\le j\le 2\}\) are given by
$$\begin{aligned} \pi _{0}&=\frac{\theta _{0}}{\theta _{0}+\lambda \theta _{1}+\lambda ^{2}+\sigma }, \end{aligned}$$
(7.27)
$$\begin{aligned} \pi _{1}&=\frac{\lambda \theta _{1}}{ \theta _{0}+\lambda \theta _{1}+\lambda ^{2}+\sigma }, \end{aligned}$$
(7.28)
and
$$\begin{aligned} \pi _{2}=\frac{\lambda ^{2}}{\theta _{0}+\lambda \theta _{1}+\lambda ^{2}+\sigma }, \end{aligned}$$
(7.29)
where
$$\begin{aligned} \theta _{0}&=\left( \mu -\lambda \omega _{1}\right) \left( \mu -\lambda \omega _{2}\right) , \end{aligned}$$
(7.30)
$$\begin{aligned} \theta _{1}&=\mu -\lambda \left( \omega _{1}+\omega _{2}\right) , \end{aligned}$$
(7.31)
and
$$\begin{aligned} \sigma =\frac{\lambda ^{3}\left( 1-\omega _{1}\right) \left( 1-\omega _{2}\right) }{3 a-b}. \end{aligned}$$
(7.32)
 
Proof
Just apply Theorem 7.1 together with some easy algebra. \(\square \)
Remark 7.1
Note that, by the expressions (7.26)–(7.32), the limiting distribution takes a very simple yet interesting form, and all these expressions depend only on the given values \(a,b,\lambda ,\mu \) and the two zeros \(\omega _1\) and \(\omega _2\) of the polynomial \(a(1+s+s^2)-bs^3\). Note also that the other properties, including the busy period distributions and the queue length distribution, can also be expressed explicitly, but we omit the details here.
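To illustrate this remark, the following short Python sketch computes \(\pi _0,\pi _1,\pi _2\) from (7.27)–(7.32) for one arbitrary choice of \(a,b,\lambda ,\mu \) with \(3a>b\) and checks that the generating function (7.26) equals 1 at \(s=1\); the numerical values are for illustration only.

```python
# Corollary 7.1 in numbers: pi_0, pi_1, pi_2 from (7.27)-(7.32) for one arbitrary
# choice of a, b, lambda, mu with 3a > b, plus the check Pi(1) = 1 via (7.26).
import numpy as np

a, b, lam, mu = 1.0, 2.0, 0.7, 1.5           # 3a = 3 > b = 2
omega = [z for z in np.roots([-b, a, a, a]) if abs(z) < 1]   # zeros of a(1+s+s^2)-b*s^3
w1, w2 = omega                               # the conjugate pair inside the unit disc

theta0 = ((mu - lam * w1) * (mu - lam * w2)).real            # (7.30)
theta1 = (mu - lam * (w1 + w2)).real                         # (7.31)
sigma = (lam ** 3 * (1 - w1) * (1 - w2)).real / (3 * a - b)  # (7.32)
den = theta0 + lam * theta1 + lam ** 2 + sigma
pi0, pi1, pi2 = theta0 / den, lam * theta1 / den, lam ** 2 / den
print("pi_0, pi_1, pi_2 =", pi0, pi1, pi2)

# Check via (7.26): Pi(1) = (pi0+pi1+pi2)(1 + lam/G(1)) - (pi1+pi2)*mu/G(1), G(1) = 3a-b.
G1 = 3 * a - b
print("Pi(1) =", (pi0 + pi1 + pi2) * (1 + lam / G1) - (pi1 + pi2) * mu / G1)  # ~ 1
```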

Acknowledgements

The authors would like to express their sincere thanks to the anonymous referees and the associate editor, who provided extremely helpful comments and suggestions that have resulted in a much improved version of this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
