Published in: Queueing Systems 3-4/2019

Open Access 13-05-2019

Infinite-server systems with Coxian arrivals

Authors: Onno Boxma, Offer Kella, Michel Mandjes



Abstract

We consider a network of infinite-server queues where the input process is a Cox process of the following form: The arrival rate is a vector-valued linear transform of a multivariate generalized (i.e., driven by a subordinator rather than a compound Poisson process) shot-noise process. We first derive some distributional properties of the multivariate generalized shot-noise process. Then these are exploited to obtain the joint transform of the numbers of customers, at various time epochs, in a single infinite-server queue fed by the above-mentioned Cox process. We also obtain transforms pertaining to the joint stationary arrival rate and queue length processes (thus facilitating the analysis of the corresponding departure process), as well as their means and covariance structure. Finally, we extend our analysis to the setting of a network of infinite-server queues.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

In queueing theory it is commonly assumed that the customer arrival process is a Poisson process. Some recent empirical studies (see, for example, [4] for references) suggest, however, that arrival processes may exhibit overdispersion, i.e., the variance of the number of arrivals in an interval is larger than the corresponding mean. This has triggered research on queueing systems with overdispersed input.
The focus of the present paper is on infinite-server queues with, as input process, a doubly stochastic process, also known as a Cox process. Such a process can be seen as a Poisson process in which the rate is not a constant; instead, the rate process \(\{\Lambda (t), t \in {{\mathbb {R}}}\}\) itself is a (nonnegative) stochastic process. As an immediate consequence of the law of total variance, Cox processes indeed are overdispersed. Before describing the main contributions of the present paper, we first provide a brief account of the existing literature in this area.
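The overdispersion implied by the law of total variance can be made concrete in a minimal, purely illustrative special case: a mixed Poisson random variable whose random mean takes the values 1 and 3 with probability 1/2 each. The variance then exceeds the mean by exactly the variance of the random rate. A short numerical sketch (not part of the original analysis; all values are illustrative):

```python
import math

# Mixed Poisson illustration: the random rate M equals 1 or 3, each with
# probability 1/2 (illustrative values). The law of total variance gives
# Var N = E[Var(N | M)] + Var(E[N | M]) = E M + Var M, strictly exceeding E N = E M.
rates, probs = [1.0, 3.0], [0.5, 0.5]

K = 80  # truncation level for the pmf; the neglected tail mass is negligible
pmf = [sum(p * math.exp(-m) * m**k / math.factorial(k) for m, p in zip(rates, probs))
       for k in range(K)]
mean = sum(k * q for k, q in enumerate(pmf))
var = sum(k**2 * q for k, q in enumerate(pmf)) - mean**2
# mean = E M = 2, while var = E M + Var M = 2 + 1 = 3 > mean
```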
Literature In [7], the arrival process of the infinite-server queues is a Cox process in which the arrival rate is a shot-noise process. More specifically, the jumps of the shot-noise process occur according to a homogeneous Poisson process and are i.i.d. (independent and identically distributed) with a general distribution, and between jumps, the shot-noise process decays exponentially fast. The main object of study in [7] is a feedforward network of infinite-server queues, and the main result is the joint transform of the shot-noise-driven arrival rates and the numbers of customers in the queues.
Infinite-server queues are also studied in [2] and [8], but the arrival process there is a Hawkes process, the so-called self-exciting process—jumps in the shot-noise process in turn increase the intensity of the occurrence of the jumps. Daw and Pender [2] present several interesting motivating examples. They consider deterministic jump sizes in the shot-noise process and study in particular the Hawkes/Ph/\(\infty \) and Hawkes/D/\(\infty \) queues, obtaining detailed expressions for moments and autocovariances. Koops et al. [8] allow generally distributed jump sizes. For the case of exponentially distributed service times, the joint Laplace and z-transform of the Hawkes intensity and the number of customers is characterized via a partial differential equation, and that PDE is exploited to obtain recursive expressions for (joint) moments of the Hawkes intensity and the number of customers. For the case of generally distributed service times, the Hawkes process is viewed as a branching process, which allows expressing the z-transform of the number of customers in terms of the solution of a fixed-point equation.
Main goals and results In this paper we aim to develop a general framework for the study of infinite-server queues with, as input, a quite general Cox process and to provide an exact analysis of networks of such queues. The modeling framework of [7] is extended in several ways; in particular, the shot-noise process is multivariate and not driven by a compound Poisson process, but by a Lévy subordinator. The class of Lévy subordinators not only contains compound Poisson processes, but also, for example, linear drifts and Gamma processes (where the latter process has the notable property that it exhibits infinitely many jumps in any time interval of positive length).
The main results of the paper are compact, elegant, transform expressions for joint distributions of arrival rates and numbers of customers, and properties of the departure processes, which reveal an interesting generalization of the classical results for the number-of-customers process and the departure process in the M/G/\(\infty \) queue. While large parts of the paper are quite technical, we also try to make clear why a detailed analysis of this complicated network is possible. In a nutshell, crucial for the analysis are the following convenient properties: (i) Lévy processes have stationary and independent increments, (ii) in a shot-noise process with exponential decay, all shots simultaneously decay at the same exponential rate, independently of each other, and (iii) in (a network of) infinite-server queues, all customers move through the queues independently, without interfering with each other.
Motivation Our motivation for this study is threefold. (i) We started studying queues with Coxian input as a framework naturally allowing the incorporation of overdispersion. (ii) When we realized that a detailed analysis of an isolated infinite-server queue with Coxian input is possible, we were motivated to develop a framework for studying networks of infinite-server queues with a multivariate Coxian input process of great generality that still allows a detailed exact analysis. Together with product-form networks and linear stochastic fluid networks (cf. [6]), these are some of the rare examples of queueing networks for which one can obtain the joint distribution of key performance measures. (iii) Finally, while infinite-server queues with overdispersed input have various applications, we were in particular inspired by Web server applications. Consider the number of simultaneous visitors to a popular Web site or online video [11, 12]. While the arrival process of visitors to the Web site may well be a Poisson process, its rate may jump up due to some (external) event, then decay gradually, only to jump up again because of another event. Such an example formed one of the motivations for [2, 7, 8], which all study infinite-server queues with an overdispersed arrival process.
Cox input processes driven by a multivariate Lévy subordinator are quite relevant for the above-mentioned example of a Web site with a visit rate that jumps up because of a certain event and subsequently decays again. They allow one to take into account multiple, possibly related, events that affect the visit rate of the Web site. Those events could occur instantaneously (the Lévy subordinator being a compound Poisson process), but there might also, for example, be a linear (fluid) component.
Since we also consider networks of infinite-server queues, we can also model the situation in which a visitor to a Web site, after a while, clicks in order to move to another Web site.
Organization of the paper In Sect. 2 we introduce the multivariate shot-noise process and we describe distributional properties of this process. Section 3 studies a single infinite-server queue with, as input process, the above-described Cox process. The general case of a network of infinite-server queues is dealt with in Sect. 4. Each of these three sections starts with a brief overview of related known results: the one-dimensional shot-noise process (Sect. 2.1), the classical M/G/\(\infty \) queue (Sect. 3.1), and a tandem network of M/G/\(\infty \) queues (Sect. 4.1). Section 5 contains some conclusions and suggestions for further research.

2 Multivariate shot-noise

In this preliminary section, we first briefly introduce shot-noise (Sect. 2.1) and define multivariate shot-noise and its stationary version (Sect. 2.2); we then recall some basic facts on Poisson random measures (Sect. 2.3), and we conclude in Sect. 2.4 by describing distributional properties of the multivariate shot-noise process.

2.1 Shot-noise processes

We start our exposition by describing the classical shot-noise process \(\{X(t), t \ge 0 \}\). This process increases according to a compound Poisson process \(\{J(t), t \ge 0\}\), in which the underlying Poisson jump process \(\{N(t), t \ge 0\}\) has rate c and the upward jumps \(B_1,B_2,\dots \) are independent and identically distributed with Laplace–Stieltjes transform \(\beta (\cdot )\); see Fig. 1.
In between jumps, the process decreases at the state-dependent rate rX(t). Hence, with \(T_1,T_2,\dots \) denoting the jump epochs of the compound Poisson process, possible representations are
$$\begin{aligned} X(t) = \mathrm{e}^{-rt} X(0) + \int _0^t \mathrm{e}^{-r(t-s)} \mathrm{d}J(s) = \mathrm{e}^{-rt} X(0) + \sum _{j=1}^{N(t)} B_j \mathrm{e}^{-r(t-T_j)} , ~~~t \ge 0;\quad \end{aligned}$$
(1)
alternatively, \(X(\cdot )\) is the unique solution of the stochastic integral equation
$$\begin{aligned} X(t) = X(0) + \sum _{j=1}^{N(t)} B_j - r \int _0^t X(s) \mathrm{d}s, ~~~~ t \ge 0. \end{aligned}$$
(2)
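As a sanity check of the equivalence of (1) and (2), one can fix a hand-picked (deterministic, purely illustrative) realization of the jump epochs and sizes and verify numerically that the explicit representation (1) solves the integral equation (2); a sketch:

```python
import numpy as np

# Check (not from the paper) that representation (1) and the integral
# equation (2) describe the same path, for a fixed, hand-picked realization
# of the compound Poisson input; all parameter values below are illustrative.
r, x0 = 0.7, 2.0                      # decay rate r and initial value X(0)
T = np.array([0.3, 1.1, 2.5])         # jump epochs T_1, T_2, T_3 (assumed)
B = np.array([1.0, 0.5, 2.0])         # jump sizes B_1, B_2, B_3 (assumed)

def X(t):
    """X(t) via the explicit representation (1)."""
    idx = T <= t
    return np.exp(-r * t) * x0 + np.sum(B[idx] * np.exp(-r * (t - T[idx])))

# Check (2): X(t) = X(0) + sum of jumps in (0, t] - r * int_0^t X(s) ds.
t_end = 4.0
s = np.linspace(0.0, t_end, 200001)
vals = np.exp(-r * s) * x0
for Tj, Bj in zip(T, B):
    vals += Bj * np.exp(-r * (s - Tj)) * (s >= Tj)
integral = np.sum((vals[1:] + vals[:-1]) / 2) * (s[1] - s[0])  # trapezoidal rule
lhs, rhs = X(t_end), x0 + B.sum() - r * integral
```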

2.2 Definition of multivariate shot-noise

After having defined the classical shot-noise process, we now describe the generalized version of the shot-noise process that we work with in this paper.
Let \({{\varvec{X}}}(\cdot )\) be a (generalized) d-dimensional multivariate shot-noise process, which is defined as follows: Let \({\varvec{J}}(\cdot )\) be a d-dimensional subordinator, i.e., a d-dimensional Lévy process which is non-decreasing in all components. Examples of such non-decreasing Lévy processes include linear drifts, compound Poisson processes and Gamma processes (but, obviously, not Brownian motion). We define the exponent of \({\varvec{J}}(\cdot )\) by \(-\eta (\cdot )\); it satisfies, for \({\varvec{\alpha }}\in \mathbb {R}_+^d\) and with \('\) denoting transposition,
$$\begin{aligned} \eta ({\varvec{\alpha }})=-\log \left( {{\mathbb {E}}}\,\mathrm{e}^{-{\varvec{\alpha }}' {\varvec{J}}(1)}\right) ={\varvec{\alpha }}'{\varvec{c}}+\int _0^\infty (1-\mathrm{e}^{-{\varvec{\alpha }}' {\varvec{x}}})\nu (\mathrm{d}{\varvec{x}}), \end{aligned}$$
where \({\varvec{c}}\in \mathbb {R}_+^d\) and \(\nu \) is an associated Lévy measure satisfying
$$\begin{aligned} \nu \left( \left( \mathbb {R}_+^d\right) ^\mathrm{c}\cup \{0\}\right) =0\,\,\,\text{ and }\,\,\,\int _{\mathbb {R}_+^d}\left( \Vert {\varvec{x}}\Vert \wedge 1\right) \nu (\mathrm{d}{\varvec{x}})<\infty ; \end{aligned}$$
cf. [9, 13].
Also, let \(Q=(q_{ij})\) be a \((d\times d)\)-matrix with nonnegative diagonal and non-positive off-diagonal entries, and with all eigenvalues having strictly positive real parts. An example of such a matrix is \(Q=(I-P')D\), where D is a positive diagonal matrix and P is a substochastic matrix satisfying \(P^n\rightarrow 0\) as \(n\rightarrow \infty \). A detailed motivation for studying this setting is provided in [6, Section 4].
We now introduce, for \({\varvec{X}}(0)\) componentwise strictly positive and independent of \({\varvec{J}}(\cdot )\), the multivariate shot-noise process through
$$\begin{aligned} {\varvec{X}}(t)=\mathrm{e}^{-Qt}{\varvec{X}}(0)+\int _{(0,t]}\mathrm{e}^{-Q(t-s)}\mathrm{d}{\varvec{J}}(s); \end{aligned}$$
(3)
cf. (1) and (2) above. Alternatively, \({\varvec{X}}(\cdot )\) is the unique (strong) solution to the stochastic integral equation
$$\begin{aligned} {\varvec{X}}(t)={\varvec{X}}(0)+{\varvec{J}}(t)-Q\int _0^t{\varvec{X}}(s)\,\mathrm{d}s . \end{aligned}$$
(4)
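For the special case of a deterministic drift subordinator \({\varvec{J}}(t)={\varvec{c}}t\), the integral in (3) can be evaluated in closed form as \(Q^{-1}(I-\mathrm{e}^{-Qt}){\varvec{c}}\), and one can check numerically that the resulting path solves (4). A sketch (the matrix Q below, with nonnegative diagonal, non-positive off-diagonal entries and eigenvalues with positive real parts, and all other values are illustrative):

```python
import numpy as np
from scipy.linalg import expm

# Special case J(t) = c t: the integral in (3) evaluates to Q^{-1}(I - e^{-Qt}) c,
# and the resulting path should solve the integral equation (4).
Q = np.array([[1.0, -0.3],
              [-0.2, 0.8]])
c = np.array([0.5, 1.0])
x0 = np.array([2.0, 1.0])
Qinv = np.linalg.inv(Q)
I2 = np.eye(2)

def X(t):
    E = expm(-Q * t)
    return E @ x0 + Qinv @ (I2 - E) @ c    # closed form of (3) for J(t) = c t

t_end = 3.0
grid = np.linspace(0.0, t_end, 4001)
vals = np.array([X(s) for s in grid])
h = grid[1] - grid[0]
integral = (vals[1:-1].sum(axis=0) + (vals[0] + vals[-1]) / 2) * h  # trapezoid
lhs = X(t_end)
rhs = x0 + c * t_end - Q @ integral        # right-hand side of (4)
```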
The cumulative external input to station i is \(X_i(0)+J_i(t)\), the internal input rate from station j to station i is \(-q_{ij}X_j(t)\) (nonnegative, as the off-diagonal entries of Q are non-positive), and the output rate of station i (some of which goes to other stations, while the rest leaves the system) is \(q_{ii}X_i(t)\ge 0\). The reason for the above assumption that the eigenvalues of Q have a strictly positive real part is that it ensures that all entries of the matrix \(\mathrm{e}^{-Qt}\) vanish as \(t\rightarrow \infty \) and thus, from (3), it follows that the effect of each arriving particle vanishes as well.
The process \({\varvec{X}}(\cdot )\) is Markovian and strictly positive (i.e., never hits or crosses any of the axes). Moreover, it has a unique stationary/ergodic/limiting distribution. This limiting distribution is identical to that of
$$\begin{aligned} {\varvec{X}}(\infty ) =\int _{(0,\infty )}\mathrm{e}^{-Qs}\,\mathrm{d}{\varvec{J}}(s). \end{aligned}$$
(5)
So far we have considered \({\varvec{J}}(t)\) for positive values of t, but we can extend \({\varvec{J}}(\cdot )\) to the whole real line (with \({\varvec{J}}(0)={\varvec{0}}\)). It follows directly that
$$\begin{aligned} {\varvec{X}}^*(t):=\int _{(-\infty ,t]}\mathrm{e}^{-Q (t-s)}\mathrm{d}{\varvec{J}}(s) \end{aligned}$$
(6)
is a stationary process satisfying the same conditions that \({\varvec{X}}\) does, that is, relations (3) and (4). From here on we assume that \({\varvec{X}}={\varvec{X}}^*\).
In Sect. 3 we shall consider an infinite-server queue with as arrival process a non-homogeneous Poisson process with arrival rate function \(\Lambda (t) = {\varvec{a}}'{\varvec{X}}(t)\) at time t. But first, in the next subsection, we review some well-known properties and results for Poisson random measures. We need those properties and results in order to provide an elegant treatment of the transforms of the arrival process and numbers of customers in networks of infinite-server queues with Coxian arrivals. In Sect. 2.4 we already demonstrate this by deriving an expression for the d-dimensional Laplace–Stieltjes transform (LST) of \({\varvec{X}}\).

2.3 Properties of Poisson random measures

Readers who are not familiar with Poisson random measures are referred to, for example, Chapter 6 of Çinlar [1]. To proceed, we first recall that a Poisson random measure N on some measurable space \(({\mathbb {X}},{\mathscr {G}})\) is a random measure having the property that, for every pairwise disjoint \(A_1,\ldots ,A_n\in {{\mathscr {G}}}\), the random variables \(N(A_1),\ldots ,N(A_n)\) are independent with \(N(A_i)\sim \text {Poisson}(\mu (A_i))\), where \(\mu (A)={{\mathbb {E}}}N(A)\) is assumed to be \(\sigma \)-finite. When \(\mu (A)\) is 0 or \(\infty \) then N(A) is defined to be almost surely 0 or \(\infty \), respectively. We note that it is well known that N is a Poisson random measure with a (\(\sigma \)-finite) mean measure \(\mu \) if and only if for any \({{\mathscr {G}}}\)-measurable \(f:\mathbb {X}\rightarrow \mathbb {R}_+\) we have that (cf. [1, Thm. 2.9 on p. 252]),
$$\begin{aligned} {{\mathbb {E}}}\,\exp \left( {-\int f(s)\,\mathrm{d}N(s)}\right) =\exp \left( {-\int \big (1-\mathrm{e}^{-f(s)}\big )\,\mathrm{d}\mu (s)}\right) . \end{aligned}$$
Observe that \({{\mathbb {E}}}\int f(s)\,\mathrm{d}N(s)=\int f(s)\, \mathrm{d}\mu (s)\) (which holds even if N is not Poisson). Finally, it is also known, and actually easy to check (first for indicators, then simple functions, etc.), that if \(f_1(\cdot )\) and \(f_2(\cdot )\) are nonnegative \({{\mathscr {G}}}\)-measurable, then
$$\begin{aligned} {{\mathbb {E}}}\int f_1(s) \,\mathrm{d}N(s)\,\int f_2(s)\,\mathrm{d}N(s)&= \int f_1(s) f_2(s) \mathrm{d}\mu (s)\\&\quad +\int f_1 (s)\,\mathrm{d}\mu (s)\int f_2(s) \mathrm{d}\mu (s). \end{aligned}$$
Clearly, when \(\int f_1(s)\mathrm{d}\mu (s),\,\int f_2(s)\mathrm{d}\mu (s)\), and \(\int f_1(s)f_2(s)\mathrm{d}\mu (s)\) are all finite, then this is equivalent to
$$\begin{aligned} {{\mathbb {C}}}\text {ov}\left( \int f_1(s)\mathrm{d}N(s),\int f_2(s)\mathrm{d}N(s)\right) =\int f_1(s)f_2(s)\mathrm{d}\mu (s). \end{aligned}$$
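In the simplest special case, mean measure \(\mu =\mu _0\,\ell \) on [0, 1] and constant functions, the exponential formula reduces to the Poisson probability-generating function and the product formula to the Poisson second moment. A numerical sketch (all values illustrative):

```python
import math

# Simplest special case of the exponential formula: mu = mu0 * Lebesgue on [0, 1]
# and constant f = c0, so int f dN = c0 * N with N ~ Poisson(mu0), and the
# right-hand side reduces to the Poisson pgf evaluated at exp(-c0).
mu0, c0 = 2.3, 0.7
lhs = sum(math.exp(-mu0) * mu0**k / math.factorial(k) * math.exp(-c0 * k)
          for k in range(80))                      # E exp(-c0 * N), truncated pmf sum
rhs = math.exp(-mu0 * (1.0 - math.exp(-c0)))      # exp(-int (1 - e^{-f}) dmu)

# The product formula with constant f1 = a0, f2 = b0 reduces to
# E[(a0*N)(b0*N)] = a0*b0*mu0 + (a0*mu0)*(b0*mu0), i.e. E N^2 = mu0 + mu0^2.
a0, b0 = 0.4, 1.1
second_moment = a0 * b0 * (mu0 + mu0**2)          # a0*b0 * E N^2 for Poisson N
product_formula = a0 * b0 * mu0 + (a0 * mu0) * (b0 * mu0)
```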
Let \({\varvec{\varrho }}=(\varrho _i)\) be the mean rate of growth of the subordinator \({\varvec{J}}(\cdot )\), that is, \(\varrho _i:=c_i+\int _{\mathbb {R}_+^d}x_i\nu (\mathrm{d}{\varvec{x}})\). We can also write \(\varrho _i=c_i+\int _{\mathbb {R}_+}x_i\nu _i(\mathrm{d}x_i)\), using the notation
$$\begin{aligned} \nu _i(A)=\nu \left( \mathbb {R}_+^{i-1}\times A\times \mathbb {R}_+^{d-i}\right) . \end{aligned}$$
It is also well known (see, for example, Chapter 4 of [13]) that there exists a Poisson random measure \(N(\cdot ,\cdot )\) on \(\mathbb {R}\times \mathbb {R}_+^{d}\) (equipped with its Borel \(\sigma \)-field) with mean measure \(\ell \otimes \nu \), where \(\ell \) is Lebesgue measure, such that
$$\begin{aligned} J_i(t)=\left\{ \begin{array}{ll}c_it+{\displaystyle \int _{(0,t]\times \mathbb {R}_+^d}x_i\,\mathrm{d}N(s,{\varvec{x}})},&{}t\ge 0,\\ c_it-{\displaystyle \int _{[t,0)\times \mathbb {R}_+^d}x_i\,\mathrm{d}N(s,{\varvec{x}})},&{}t<0,\end{array}\right. \end{aligned}$$
where \({\varvec{x}}=(x_1,\ldots ,x_d)'\). This property entails that if we take \(f_1(s,{\varvec{x}})=\sum _{i=1}^d g_i(s)x_i\) and \(f_2(s,{\varvec{x}})=\sum _{i=1}^d h_i(s)x_i\), then we can immediately conclude that for each Borel \({\varvec{g}},{\varvec{h}}:\mathbb {R}\rightarrow \mathbb {R}_+^d\) the following three identities hold:
$$\begin{aligned} {{\mathbb {E}}}\int _{\mathbb {R}}{\varvec{g}}(s)'\,\mathrm{d}{\varvec{J}}(s)=&\,\int _{\mathbb {R}}{\varvec{g}}(s)'{\varvec{\varrho }}\, \mathrm{d}s, \end{aligned}$$
(7)
$$\begin{aligned} {{\mathbb {E}}}\exp \left( {-\int _{\mathbb {R}}{\varvec{g}}(s)'\mathrm{d}{\varvec{J}}(s)}\right) =&\,\exp \left( {-\int _{\mathbb {R}}\eta ({\varvec{g}}(s))\mathrm{d}s}\right) ,\, \end{aligned}$$
(8)
and
$$\begin{aligned} {{\mathbb {C}}}\mathrm{ov}\left( {\int _{\mathbb {R}}{\varvec{g}}(s)'\mathrm{d}{\varvec{J}}(s),\int _{\mathbb {R}}{\varvec{h}}(s)'\mathrm{d}{\varvec{J}}(s)}\right) =\int _{\mathbb {R}}{\varvec{g}}(s)'\Sigma {\varvec{h}}(s)\mathrm{d}s, \end{aligned}$$
with
$$\begin{aligned} \Sigma :=-\nabla ^2\eta ({\varvec{0}})=\int _{\mathbb {R}_+^d}{\varvec{x}}{\varvec{x}}'\nu (\mathrm{d}{\varvec{x}}), \end{aligned}$$
provided that the corresponding variances \(\int _{\mathbb {R}}{\varvec{g}}(s)'\Sigma {\varvec{g}}(s)\mathrm{d}s\) and \(\int _{\mathbb {R}}{\varvec{h}}(s)'\Sigma {\varvec{h}}(s)\mathrm{d}s\) are both finite. With \(\sigma _{ij}\) denoting the (ij)th coordinate of \(\Sigma \), it also holds that
$$\begin{aligned} \sigma _{ij}=\int _{\mathbb {R}_+^d}x_ix_j\nu (\mathrm{d}{\varvec{x}})=\int _{\mathbb {R}_+^2}x_ix_j\,\nu _{ij}(\mathrm{d}x_i,\mathrm{d}x_j), \end{aligned}$$
(9)
where \(\nu _{ij}\) is the (ij)th marginal measure associated with \(\nu \) (i.e., the Lévy measure associated with the (ij)th coordinate of \({\varvec{J}}(\cdot )\)).

2.4 Distributional properties of multivariate shot-noise

Combining (5) with the powerful identity (8), we immediately obtain the following
Proposition 2.1
The d-dimensional Laplace–Stieltjes transform of \({\varvec{X}}\) is given by
$$\begin{aligned} {{\mathbb {E}}}\,\mathrm{e}^{-{\varvec{\alpha }}' {\varvec{X}}}=\exp \left( -\int _0^\infty \eta \left( \mathrm{e}^{-Q's}{\varvec{\alpha }}\right) \,\mathrm{d}s\right) \,. \end{aligned}$$
(10)
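Proposition 2.1 can be sanity-checked in two one-dimensional special cases (all parameters below are illustrative). For pure drift, \(\eta (\alpha )=c\alpha \) and (10) gives \(\mathrm{e}^{-c\alpha /q}\), the transform of the deterministic stationary value c/q. For a compound Poisson subordinator with rate \(\lambda \) and Exp(\(\mu \)) jumps, \(\eta (\alpha )=\lambda \alpha /(\alpha +\mu )\) and the exponent in (10) evaluates (by the substitution \(u=\alpha \mathrm{e}^{-qs}\)) to \((\lambda /q)\log (1+\alpha /\mu )\), recovering the well-known fact that stationary shot noise with exponential decay then has a Gamma distribution. In code:

```python
import math
from scipy.integrate import quad

q, alpha = 0.9, 1.7   # decay rate and transform argument (illustrative)

# Case 1: pure drift, eta(a) = c * a; (10) should give exp(-c * alpha / q).
c = 0.5
I1, _ = quad(lambda s: c * alpha * math.exp(-q * s), 0.0, math.inf)
lst_drift, closed_drift = math.exp(-I1), math.exp(-c * alpha / q)

# Case 2: compound Poisson subordinator with rate lam and Exp(mu) jumps, so that
# eta(a) = lam * a / (a + mu); the exponent in (10) then equals
# (lam / q) * log(1 + alpha / mu), the exponent of a Gamma(lam/q, mu) transform.
lam, mu = 2.0, 1.3
I2, _ = quad(lambda s: lam * alpha * math.exp(-q * s) / (alpha * math.exp(-q * s) + mu),
             0.0, math.inf)
lst_cp = math.exp(-I2)
closed_cp = (1.0 + alpha / mu) ** (-lam / q)
```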
In the usual manner, moments can be found from the LST. The first moment takes the following form: With \({\varvec{\nabla }}\eta ({\varvec{0}})\) the vector of first derivatives,
$$\begin{aligned} {{\mathbb {E}}}\,{\varvec{X}} =Q^{-1}\, {\varvec{\nabla }}\eta ({\varvec{0}})=Q^{-1}{\varvec{\varrho }}. \end{aligned}$$
We now identify the covariance matrix of \({\varvec{X}}\). From (9) it follows, as was also shown in [6, Thm. 5.2], that the covariance matrix of \({\varvec{X}}\) is given by
$$\begin{aligned} \Sigma _0=\int _0^\infty \mathrm{e}^{-Qs}\,\Sigma \, \mathrm{e}^{-Q's}\,\mathrm{d}s, \end{aligned}$$
(11)
and is the unique solution of
$$\begin{aligned} Q\,\Sigma _0+\Sigma _0\,Q'=\Sigma . \end{aligned}$$
(12)
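Relations (11) and (12) can be verified numerically for a small illustrative example by truncating the integral in (11); a sketch:

```python
import numpy as np
from scipy.linalg import expm

# Numerical check that Sigma_0 defined by the integral (11) solves the
# Lyapunov-type equation (12). Q and Sigma below are illustrative: Q has
# nonnegative diagonal, non-positive off-diagonal entries and eigenvalues
# with positive real parts; Sigma is symmetric positive definite.
Q = np.array([[1.0, -0.3],
              [-0.2, 0.8]])
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])

# Evaluate (11) by the trapezoidal rule, truncating at s = 40
# (the entries of e^{-Qs} are negligible there).
s_grid = np.linspace(0.0, 40.0, 8001)
h = s_grid[1] - s_grid[0]
terms = np.array([expm(-Q * s) @ Sigma @ expm(-Q.T * s) for s in s_grid])
Sigma0 = (terms[1:-1].sum(axis=0) + (terms[0] + terms[-1]) / 2) * h

residual = Q @ Sigma0 + Sigma0 @ Q.T - Sigma   # should be close to the zero matrix
```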
The next objective is to find an expression for the covariance between \({\varvec{X}}(t)\) and \({\varvec{X}}(t+h)\), again bearing in mind that we started the process at \(-\infty .\) Now, clearly
$$\begin{aligned} {\varvec{Y}}(t,h):= \int _{(t,t+h]}\mathrm{e}^{-Q(t+h-s)}\,\mathrm{d}{\varvec{J}}(s) \end{aligned}$$
is independent of \({\varvec{X}}(t)\), due to the independent increments property of \({\varvec{J}}(\cdot )\). As a consequence, with
$$\begin{aligned} {\varvec{Z}}(t,h):= \int _{(-\infty , t]}\mathrm{e}^{-Q(t+h-s)}\,\mathrm{d}{\varvec{J}}(s), \end{aligned}$$
we obtain, by splitting \({\varvec{X}}(t+h)\) into the sum of \({\varvec{Y}}(t,h)\) and \({\varvec{Z}}(t,h)\) (cf. (6)), that
$$\begin{aligned} {{\mathbb {C}}}\mathrm{ov}({\varvec{X}}(t), {\varvec{X}}(t+h)) = {{\mathbb {C}}}\mathrm{ov}({\varvec{X}}(t), {\varvec{Y}}(t,h)+{\varvec{Z}}(t,h)) = {{\mathbb {C}}}\mathrm{ov}({\varvec{X}}(t), {\varvec{Z}}(t,h)). \end{aligned}$$
Denote (for \(h\ge 0\)) by \(\Sigma _h\) the matrix whose (ij)th entry is \({{\mathbb {C}}}\mathrm{ov}({X_i(t),X_j(t+h)})\). It now follows from (9) that, by taking \({\varvec{g}}(s):=\mathrm{e}^{-Q'(t-s)}{\varvec{x}}\) and \({\varvec{h}}(s):=\mathrm{e}^{-Q'(t+h-s)}{\varvec{y}}\),
$$\begin{aligned} {\varvec{x}}'\Sigma _h {\varvec{y}}=\int _{-\infty }^t {\varvec{x}}'\mathrm{e}^{-Q(t-s)}\Sigma \mathrm{e}^{-Q'(t+h-s)}{\varvec{y}}\,\mathrm{d}s=\int _0^\infty {\varvec{x}}'\,\mathrm{e}^{-Qs}\,\Sigma \,\mathrm{e}^{-Q'(s+h)}{\varvec{y}} \,\mathrm{d}s, \end{aligned}$$
for every \({\varvec{x}},{\varvec{y}}\in \mathbb {R}^d\). It thus follows, using (11), that
$$\begin{aligned} \Sigma _h=\int _0^\infty \mathrm{e}^{-Qs}\,\Sigma \, \mathrm{e}^{-Q'(s+h)}\,\mathrm{d}s=\Sigma _0\,\mathrm{e}^{-Q'h}. \end{aligned}$$
(13)
Combining the above, we find the following result.
Proposition 2.2
The covariance matrix of the (\(\mathbb {R}^{2d}\)-valued) random vector \(({\varvec{X}}(t),{\varvec{X}}(t+h))\) is given by
$$\begin{aligned} \left( \begin{array}{cc}\Sigma _0&{}\Sigma _0\,\mathrm{e}^{-Q'h}\\ \mathrm{e}^{-Qh}\,\Sigma _0 &{}\Sigma _0\end{array}\right) . \end{aligned}$$
Since \(Q'\) and \(\mathrm{e}^{-Q'h}\) commute, it also follows from (12) that
$$\begin{aligned} Q\,\Sigma _h+\Sigma _h\,Q'=\Sigma \, \mathrm{e}^{-Q'h}. \end{aligned}$$
In fact, employing the method of proof of [6, Thm. 5.2], \(\Sigma _h\) is the unique solution of this equation.
Remark 2.3
In the one-dimensional case, letting \(\sigma ^2\) and q denote the one-dimensional versions of the matrices \(\Sigma \) and Q, respectively, we conclude that the covariance between X(t) and \(X(t+h)\) is given by \(\frac{1}{2}\mathrm{e}^{-qh}\sigma ^2/q\).

3 Single infinite-server queue with a Cox input process

We consider an infinite-server queue in which, conditioned on \({\varvec{J}}(\cdot )\), the arrival process is a non-homogeneous Poisson process with rate function \(\Lambda (t)={\varvec{a}}'{\varvec{X}}(t)\) at time t, where \({\varvec{a}}\in \mathbb {R}_+^d\). It is throughout assumed that service times are i.i.d. and are independent of \({\varvec{J}}(\cdot )\) (hence also of \({\varvec{X}}(\cdot )\)) and the arrival process. The service times have distribution function \(F(\cdot )\) and complementary distribution function \({\bar{F}}(\cdot )\), and in addition we define \(F(s,t):=F(t)-F(s)\) for \(s<t\).
In this section, we start by reminding the reader of some well-known results concerning the classical M/G/\(\infty \) queue (Sect. 3.1), subsequently we derive, for the infinite-server queue with a Cox input process, the transform of queue lengths at various time epochs jointly with the numbers of arrivals in various time intervals (Sect. 3.2), and we conclude, in Sect. 3.3, with several calculations yielding moments as well as the joint steady-state transform of the number of customers in the infinite-server queue and the arrival rate.

3.1 The M/G/\(\infty \) queue

In this subsection we consider the classical M/G/\(\infty \) queue with fixed arrival rate \(\lambda \) and generic service time B. It is well known [3] that the number of customers L(t) at time t, starting with \(L(0)=0\), is Poisson distributed with mean
$$\begin{aligned} \lambda t \int _0^t \mathbb P(B> t-u) \frac{\mathrm{d}u}{t}= \lambda \int _0^t \mathbb P(B> u) \mathrm{d}u. \end{aligned}$$
This is easily seen by observing that the number of arrivals in [0, t] is Poisson(\(\lambda t\)), that different customers do not interact and that, given that there were n arrivals in [0, t], the arrival epochs were independent and uniformly distributed on [0, t]. So the Poisson arrival process is thinned, and the number of customers still present at t follows a Poisson distribution. Thus, the generating function of L(t) immediately follows:
$$\begin{aligned} \mathbb E[w^{L(t)}|L(0)=0] = \exp \left( {(w-1)\lambda \int _0^t \mathbb P(B>u) \mathrm{d} u} \right) . \end{aligned}$$
(14)
The pleasing properties of non-interfering customer behavior and of the uniformly distributed arrival epochs also quickly lead to elegant results for joint distributions of customer numbers at various epochs. Another well-known result about the M/G/\(\infty \) queue, going back to Mirasol [10], is that in stationarity the departure process of the queue is also Poisson. We now study the question of how these results change in the setting of the Cox input process.
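Formula (14) and the underlying thinning argument can also be checked by a small Monte Carlo experiment (illustrative parameters, deliberately loose tolerances): with \(\lambda =2\), Exp(1) service times and t = 3, the number of customers at time t should be Poisson with mean \(2(1-\mathrm{e}^{-3})\). A sketch:

```python
import numpy as np

# Monte Carlo sanity check of (14) (illustrative parameters, loose tolerances).
# With arrival rate lam = 2, B ~ Exp(1) service times and t = 3, the number of
# customers L(t), starting empty, is Poisson with mean lam * (1 - e^{-3}).
rng = np.random.default_rng(1)
lam, t, n_rep = 2.0, 3.0, 40000

counts = np.empty(n_rep)
for k in range(n_rep):
    n_arr = rng.poisson(lam * t)                 # number of arrivals in [0, t]
    arrivals = rng.uniform(0.0, t, n_arr)        # arrival epochs, uniform on [0, t]
    services = rng.exponential(1.0, n_arr)       # i.i.d. Exp(1) service times
    counts[k] = np.sum(arrivals + services > t)  # customers still present at t

mean_theory = lam * (1 - np.exp(-t))             # = lam * int_0^t P(B > u) du
mean_emp = counts.mean()
var_emp = counts.var()                           # should also be close to the mean
pgf_emp = np.mean(0.5 ** counts)                 # empirical E[w^{L(t)}] at w = 1/2
pgf_theory = np.exp((0.5 - 1.0) * mean_theory)   # right-hand side of (14)
```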

3.2 Transform of queue lengths and numbers of arrivals

Our first objective is to derive the transform of queue lengths at various points in time, jointly with the number of arrivals in corresponding intervals.
To this end, we consider \(n\in {{\mathbb {N}}}\) time intervals, say \((t_0,t_1]\) up to \((t_{n-1},t_n]\) (where it is assumed that \(t_{i-1}<t_{i}\) for \(i=1,\ldots ,n\)). Our goal is to establish the joint transform of the queue lengths \(L_i\) at each of the \(t_i\) and the numbers of arrivals \(A_i\) in each of the intervals \((t_{i-1},t_i]\), i.e., we shall compute the transform, for \({\varvec{w}}\in {{\mathbb {R}}}^{n+1}\) and \({\varvec{z}}\in {{\mathbb {R}}}^n\),
$$\begin{aligned} \Psi ({\varvec{w}},{\varvec{z}}):={{\mathbb {E}}} \left( \prod _{i=0}^n w_i^{L_i}\cdot \prod _{i=1}^n z_i^{A_i}\right) . \end{aligned}$$
Notice that a job arriving in the interval \((t_{i-1},t_i]\) contributes to \(A_i\) (and does not contribute to any of the other \(A_j\)) and potentially contributes to \(L_i\) up to \(L_n.\) More precisely, suppose that the job arrives at \(s\in (t_{i-1},t_i]\); then, defining \(t_{n+1}:=\infty \), it contributes to \(L_i\) up to \(L_j\) with probability \({F}(t_j-s,t_{j+1}-s)\), for \(j\in \{i,\ldots ,n\}\). Conditional on \(\Lambda (\cdot )=\lambda (\cdot )\) this concerns a Poisson contribution with parameter
$$\begin{aligned} \int _{t_{i-1}}^{t_i} \lambda (s) {F}(t_j-s,t_{j+1}-s)\,\mathrm{d}s; \end{aligned}$$
conditional on \(\Lambda (\cdot )=\lambda (\cdot )\) all these contributions are independent. In addition, we recall that the probability-generating function (evaluated in z) of a Poisson random variable with parameter \(\lambda \) equals \(\exp (\lambda (z-1))\). Combining the above elements, we obtain
$$\begin{aligned} \Psi ({\varvec{w}},{\varvec{z}}) ={{\mathbb {E}}} \exp \left( \int _{-\infty }^{t_n}\Lambda (s) \psi (s\,|\,{\varvec{w}},{\varvec{z}})\,\mathrm{d}s\right) , \end{aligned}$$
(15)
where, with \(t_{-1}:=-\infty \),
$$\begin{aligned} \psi (s\,|\,{\varvec{w}},{\varvec{z}}) =\sum _{i=0}^n \psi _i(s\,|\,{\varvec{w}},{\varvec{z}}) 1_{\{s\in (t_{i-1},t_i]\}}; \end{aligned}$$
the individual \( \psi _i(s\,|\,{\varvec{w}},{\varvec{z}})\) are defined by
$$\begin{aligned} \psi _0(s\,|\,{\varvec{w}},{\varvec{z}}) :=&\, \sum _{j=0}^n F(t_{j}-s, t_{j+1}-s)\left( \prod _{i=0}^jw_i-1\right) \\ =&\, \sum _{j=0}^n F(t_{j}-s, t_{j+1}-s) \prod _{i=0}^jw_i +F(t_0-s)-1 \end{aligned}$$
for \(i=0\), and
$$\begin{aligned} \psi _i(s\,|\,{\varvec{w}},{\varvec{z}}):=&\, \sum _{j=i}^n F(t_{j}-s, t_{j+1}-s)\left( z_i\prod _{k=i}^jw_k-1\right) \\ =&\,\sum _{j=i}^n F(t_{j}-s, t_{j+1}-s)z_i\prod _{k=i}^jw_k +F(t_i-s)-1 \end{aligned}$$
for \(i\in \{1,\ldots ,n\}\).
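The functions \(\psi _i\) lend themselves to a direct implementation, which provides two sanity checks: at \({\varvec{w}}={\varvec{1}}\), \({\varvec{z}}={\varvec{1}}\) every factor \(\prod w-1\) (respectively \(z\prod w-1\)) vanishes, so \(\psi \equiv 0\) and hence \(\Psi ({\varvec{1}},{\varvec{1}})=1\); moreover, the two displayed forms of \(\psi _i\) agree. A sketch for exponentially distributed service times (epochs and transform arguments illustrative):

```python
import math

# Direct implementation of psi_i(s | w, z) for exponentially distributed
# service times, F(t) = 1 - e^{-t}. Checks: (a) at w = z = 1 the function
# vanishes identically (so Psi(1, 1) = 1); (b) the two displayed forms agree.
def F(t):
    return 0.0 if t < 0 else 1.0 - math.exp(-t)   # F(inf) = 1, since exp(-inf) = 0

def F2(a, b):
    return F(b) - F(a)                            # F(s, t) := F(t) - F(s)

def psi(i, s, w, z, t):
    """First displayed form; t = [t_0, ..., t_n], t_{n+1} = inf, z[i-1] = z_i."""
    n = len(t) - 1
    tt = t + [math.inf]
    total = 0.0
    for j in range(i, n + 1):
        prod = 1.0
        for k in range(i, j + 1):
            prod *= w[k]
        factor = prod - 1.0 if i == 0 else z[i - 1] * prod - 1.0
        total += F2(tt[j] - s, tt[j + 1] - s) * factor
    return total

def psi_alt(i, s, w, z, t):
    """Second displayed form: same sum without the -1, plus F(t_i - s) - 1."""
    n = len(t) - 1
    tt = t + [math.inf]
    total = 0.0
    for j in range(i, n + 1):
        prod = 1.0 if i == 0 else z[i - 1]
        for k in range(i, j + 1):
            prod *= w[k]
        total += F2(tt[j] - s, tt[j + 1] - s) * prod
    return total + F(t[i] - s) - 1.0

t = [0.0, 1.0, 2.5, 4.0]                 # t_0, ..., t_n with n = 3
n = len(t) - 1
ones_w, ones_z = [1.0] * (n + 1), [1.0] * n
vals = [psi(i, s, ones_w, ones_z, t)
        for i in range(n + 1) for s in (-0.5, 0.3, 1.7, 3.9)]

w = [0.9, 0.8, 1.1, 0.7]
z = [0.95, 0.6, 1.2]
pairs = [(0, -0.5), (1, 0.3), (2, 1.7), (3, 3.9)]   # (i, s) with s in (t_{i-1}, t_i]
diffs = [abs(psi(i, s, w, z, t) - psi_alt(i, s, w, z, t)) for i, s in pairs]
```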
Representation (15) generally holds, i.e., for any nonnegative arrival rate process \(\Lambda (\cdot )\). For the case of multivariate shot-noise, however, the expression can be made more explicit. To this end, we substitute (cf. (6))
$$\begin{aligned} \Lambda (s) = {\varvec{a}}'{\varvec{X}}(s)= {\varvec{a}}'\int _{-\infty }^s \mathrm{e}^{-Q(s-r)} \mathrm{d}{\varvec{J}}(r). \end{aligned}$$
By applying Eq. (8), we obtain that (15) equals
$$\begin{aligned} \Psi ({\varvec{w}},{\varvec{z}})=&\,{{\mathbb {E}}} \exp \left( {\varvec{a}}'\int _{-\infty }^{t_n} \int _{-\infty }^{s} \mathrm{e}^{-Q(s-r)} \mathrm{d}{\varvec{J}}(r) \,\psi (s\,|\,{\varvec{w}},{\varvec{z}})\,\mathrm{d}s\right) \\ =&\,{{\mathbb {E}}} \exp \left( {\varvec{a}}'\int _{-\infty }^{t_n} \int _{r}^{t_n} \mathrm{e}^{-Q(s-r)} \psi (s\,|\,{\varvec{w}},{\varvec{z}})\,\mathrm{d}s\,\mathrm{d}{\varvec{J}}(r)\right) \\ =&\, \exp \left( -\int _{-\infty }^{t_n} \eta \left( - \int _r^{t_n}\mathrm{e}^{-Q'(s-r)}\,\psi (s\,|\,{\varvec{w}},{\varvec{z}})\mathrm{d}s\,{\varvec{a}}\right) \,\mathrm{d}r\right) . \end{aligned}$$
We have thus established the following result.
Theorem 3.1
For any \({\varvec{w}}\in {{\mathbb {R}}}^{n+1}\) and \({\varvec{z}}\in {{\mathbb {R}}}^n\),
$$\begin{aligned} \Psi ({\varvec{w}},{\varvec{z}}) =&\,{{\mathbb {E}}} \left( \prod _{i=0}^n w_i^{L_i}\cdot \prod _{i=1}^n z_i^{A_i}\right) \\ =&\,\exp \left( -\int _{-\infty }^{t_n} \eta \left( - \int _r^{t_n}\mathrm{e}^{-Q'(s-r)}\,\psi (s\,|\,{\varvec{w}},{\varvec{z}})\mathrm{d}s\,{\varvec{a}}\right) \,\mathrm{d}r\right) . \end{aligned}$$
Interestingly, the above result directly enables us to describe the distribution of the number of departures in all intervals. Let \(D_i\) be the number of departures in \((t_{i-1},t_i]\). Then we would like to evaluate
$$\begin{aligned} \Phi ({\varvec{v}}) := {{\mathbb {E}}}\left( \prod _{i=1}^n v_i^{D_i}\right) . \end{aligned}$$
Now observe that \(L_{i}=L_{i-1}+A_i-D_i\), and hence \(D_i=L_{i-1}-L_{i}+A_i\). As a consequence,
$$\begin{aligned} \Phi ({\varvec{v}})&= {{\mathbb {E}}}\left( \prod _{i=1}^n v_i^{L_{i-1}-L_{i}+A_i}\right) = {{\mathbb {E}}} \left( v_1^{L_0}\cdot \prod _{i=1}^{n-1}\left( \frac{v_{i+1}}{v_i}\right) ^{L_i} \cdot v_n^{-L_n}\cdot \prod _{i=1}^n v_i^{A_i}\right) . \end{aligned}$$
With \(w_0({\varvec{v}}) := v_1,\)\(w_i({\varvec{v}}):=v_{i+1}/v_i\) for \(i\in \{1,\ldots ,n-1\}\) and \(w_n({\varvec{v}}):= v_n^{-1},\) we thus find the following result.
Proposition 3.2
For any \({\varvec{v}}\in {{\mathbb {R}}}^n\),
$$\begin{aligned} \Phi ({\varvec{v}}) = \Psi ({\varvec{w}}({\varvec{v}}), {\varvec{v}}). \end{aligned}$$
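The substitution behind Proposition 3.2 is pure exponent bookkeeping, which can be checked numerically for a small, purely illustrative configuration of queue lengths, arrival counts and transform arguments \(v_i\):

```python
import math

# Exponent bookkeeping behind Proposition 3.2 (illustrative values): with
# D_i = L_{i-1} - L_i + A_i, the product of v_i^{D_i} regroups into
# w_0^{L_0} * w_1^{L_1} * ... * w_n^{L_n} * prod_i v_i^{A_i}, where
# w_0 = v_1, w_i = v_{i+1}/v_i for 1 <= i <= n-1, and w_n = 1/v_n.
v = [0.5, 1.5, 0.8]     # v_1, ..., v_n (n = 3)
L = [2, 3, 1, 4]        # L_0, ..., L_n
A = [3, 2, 5]           # A_1, ..., A_n
n = len(v)

D = [L[i - 1] - L[i] + A[i - 1] for i in range(1, n + 1)]   # departures per interval
lhs = math.prod(v[i] ** D[i] for i in range(n))

w = [v[0]] + [v[i] / v[i - 1] for i in range(1, n)] + [1.0 / v[n - 1]]
rhs = (math.prod(w[i] ** L[i] for i in range(n + 1))
       * math.prod(v[i] ** A[i] for i in range(n)))
```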

3.3 Explicit calculations

Let L(t) be the number of customers present at time t. In this subsection we compute (i) the mean and variance of L(0), (ii) the joint transform of \(\Lambda (0)\) and L(0), and (iii) the covariance between L(0) and L(t) for some \(t>0.\) As before, we assume that the process started at \(-\infty \), so that it displays stationary behavior at time 0 (and consequently at t as well). In principle, (factorial) moments can be derived by differentiating \(\Psi ({\varvec{w}},{\varvec{z}})\) suitably often and plugging in \({\varvec{w}}={\varvec{1}}\) and \({\varvec{z}}={\varvec{1}}\), but elegant direct arguments can be given, as we show now.
We first introduce some notation. Define \(\beta \) as the mean service time:
$$\begin{aligned} \beta = \int _0^\infty {\bar{F}}(s) \,\mathrm{d}s. \end{aligned}$$
The density of the residual service time \(B_\mathrm{e}\) is given by \(f_\mathrm{e}(s):= {\bar{F}}(s)/\beta .\)

\(\circ \) Mean and variance of \(\Lambda (0)\) and L(0)

It is easily checked that \({{\mathbb {E}}}\,\Lambda (0)=\lambda := {\varvec{a}}'\,Q^{-1}\,{\varvec{\varrho }}\) and \({{\mathbb {V}}}\mathrm{ar}\,\Lambda (0)={\varvec{a}}'\,\Sigma _0\,{\varvec{a}}\). We therefore now concentrate on the mean and variance of L(0).
Let \({{\mathscr {P}}}(\mu )\) denote a Poisson random variable with parameter \(\mu \). It is straightforward that
$$\begin{aligned} {{\mathbb {E}}}\,L(0) =&\, {{\mathbb {E}}}\left( {{\mathscr {P}}}\left( \int _{-\infty }^0 \Lambda (s) {\bar{F}}(-s)\,\mathrm{d}s\right) \right) ={\mathbb E}\left( \int _{-\infty }^0 \Lambda (s) {\bar{F}}(-s)\,\mathrm{d}s\right) . \end{aligned}$$
We thus find, with \({\varvec{J}}\) shorthand notation for the entire path of the process \({\varvec{J}}(\cdot )\),
$$\begin{aligned} {{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}]&=\int _{-\infty }^0\Lambda (s)\bar{F}(-s)\,\mathrm{d}s=\int _0^\infty \Lambda (-s){\bar{F}}(s)\,\mathrm{d}s \nonumber \\&=\beta \int _0^\infty \Lambda (-s)f_\mathrm{e}(s)\,\mathrm{d}s =\beta \, {{\mathbb {E}}}[\Lambda (-B_\mathrm{e})\,|\,{\varvec{J}}]. \end{aligned}$$
(16)
For later use, we rewrite this expression as
$$\begin{aligned} \beta \, {{\mathbb {E}}}[\Lambda (-B_\mathrm{e})\,|\,{\varvec{J}}]&=\beta \,{{\mathbb {E}}}[{\varvec{a}}'{\varvec{X}}(-B_\mathrm{e})\,|\,{\varvec{J}}]=\beta \,{\varvec{a}}'\,{{\mathbb {E}}}[{\varvec{X}}(-B_\mathrm{e})\,|\,{\varvec{J}}] \nonumber \\&=\beta \int _{-\infty }^0\left( {{\mathbb {E}}}\,{\varvec{a}}'\,\mathrm{e}^{-Q(-B_\mathrm{e}-s)}1_{\{s\le -B_\mathrm{e}\}}\right) \,\mathrm{d}{\varvec{J}}(s)\nonumber \\&{\mathop {=}\limits ^\mathrm{d}}\beta \int _{0}^{\infty }\left( {{\mathbb {E}}}\,\mathrm{e}^{-Q'(s-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s\}}\right) '\,\mathrm{d}{\varvec{J}}(s), \end{aligned}$$
(17)
where the last step is due to time-reversibility of \({\varvec{J}}(\cdot )\).
Applying (7), we obtain
$$\begin{aligned} {{\mathbb {E}}}\,L(0) =&\, \beta \, {{\mathbb {E}}}\,\Lambda (0) =\lambda \beta . \end{aligned}$$
(18)
Now we move to computing \({{\mathbb {V}}}\mathrm{ar}\,L(0)\). Due to the fact that L(0) is mixed Poisson, we have that \({{\mathbb {V}}}\mathrm{ar}(L(0)\,|\,{\varvec{J}})={{\mathbb {E}}}(L(0)\,|\,{\varvec{J}})\) and thus \({{\mathbb {E}}}\,{{\mathbb {V}}}\mathrm{ar}(L(0)\,|\,{\varvec{J}})={\mathbb E}\,L(0)=\lambda \beta \). Using the law of total variance, (16), (18) and (9),
$$\begin{aligned} {{\mathbb {V}}}\mathrm{ar}\, L(0)&={{\mathbb {E}}}\,{{\mathbb {V}}}\mathrm{ar}\,(L(0)\,|\,{\varvec{J}})+{{\mathbb {V}}}\mathrm{ar}\,({{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}]) \nonumber \\&=\lambda \beta +{{\mathbb {V}}}\mathrm{ar}\left( {{\mathbb {E}}}(\Lambda (-B_\mathrm{e})\,|\,{\varvec{J}})\right) \beta ^2 \nonumber \\&=\lambda \beta +\beta ^2\int _0^\infty {\varvec{a}}'\left( {{\mathbb {E}}}\,\mathrm{e}^{-Q(s-B_\mathrm{e})}1_{\{B_\mathrm{e}\le s\}}\right) \Sigma \left( {{\mathbb {E}}}\,\mathrm{e}^{-Q'(s-B_\mathrm{e})}1_{\{B_\mathrm{e}\le s\}}\right) {\varvec{a}}\,\mathrm{d}s. \end{aligned}$$
(19)
Expression (19) can be substantially simplified, which we do later in this section. Observe that \({{\mathbb {V}}}\mathrm{ar}\, L(0) \ge {{\mathbb {E}}}\, L(0)\), reflecting the fact that Coxian arrival rates lead to overdispersed arrival processes; see, for example, [4, 7].
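The inequality \({{\mathbb {V}}}\mathrm{ar}\, L(0) \ge {{\mathbb {E}}}\, L(0)\) is a generic property of mixed Poisson random variables and can be illustrated with a small simulation sketch (not part of the analysis above; the Gamma mixing distribution and its parameters are arbitrary choices made purely for illustration):

```python
import math
import random

random.seed(1)

# Sketch of the law-of-total-variance argument: a Poisson random variable
# whose parameter M is itself random satisfies Var = E[M] + Var(M) >= mean.
# M is taken Gamma(shape=2, scale=3) purely for illustration (our choice).
def sample_mixed_poisson():
    m = random.gammavariate(2.0, 3.0)     # random Poisson parameter M
    u, k, p = random.random(), 0, math.exp(-m)
    s = p
    while u > s:                          # Poisson(m) sampling by inversion
        k += 1
        p *= m / k
        s += p
    return k

n = 200_000
xs = [sample_mixed_poisson() for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(mean, var)   # mean near E[M] = 6, variance near E[M] + Var(M) = 24
```

With the parameters above, the empirical mean is close to \({{\mathbb {E}}}\,M=6\) while the empirical variance is close to \({{\mathbb {E}}}\,M+{{\mathbb {V}}}\mathrm{ar}\,M=24\): the sample is markedly overdispersed.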

\(\circ \) Joint transform of \(\Lambda (0)\) and L(0)

Here the goal is to determine \({{\mathbb {E}}}\,\mathrm{e}^{-v\Lambda (0)}\,w^{L(0)} .\) By arguments similar to those used in Sect. 3,
$$\begin{aligned} {{\mathbb {E}}}\, \mathrm{e}^{-v\Lambda (0)}\,w^{L(0)}&=\,{{\mathbb {E}}}\exp \left( (w-1) \int _{-\infty }^0 \Lambda (s) {\bar{F}}(-s)\,\mathrm{d}s -v\, \Lambda (0) \right) \\&=\,{{\mathbb {E}}}\exp \left( (w-1){\varvec{a}}' \int _{-\infty }^0 {\varvec{X}}(s) {\bar{F}}(-s)\,\mathrm{d}s -v\, {\varvec{a}}'{\varvec{X}}(0) \right) . \end{aligned}$$
Combining this relation with (cf. (6))
$$\begin{aligned} \int _{-\infty }^0 {\varvec{X}}(s) {\bar{F}}(-s)\,\mathrm{d}s&=\, \int _{-\infty }^0 \int _{-\infty }^s \mathrm{e}^{-Q(s-r)} \,\mathrm{d}{\varvec{J}}(r)\, {\bar{F}}(-s)\,\mathrm{d}s\\&=\, \int _{-\infty }^0 \int _{r}^0 \mathrm{e}^{-Q(s-r)} {\bar{F}}(-s)\,\mathrm{d}s \,\mathrm{d}{\varvec{J}}(r) \\&=\, \int _0^{\infty }\int _{0}^s \mathrm{e}^{-Q(s-r)} {\bar{F}}(r)\,\mathrm{d}r \,\mathrm{d}{\varvec{J}}(s), \end{aligned}$$
yields
$$\begin{aligned} {{\mathbb {E}}}\, \mathrm{e}^{-v\Lambda (0)}\,w^{L(0)} = {{\mathbb {E}}}\exp \left( {\varvec{a}}' \int _0^{\infty } \Omega (v,w,s) \,\mathrm{d}{\varvec{J}}(s) \right) , \end{aligned}$$
where
$$\begin{aligned} \Omega (v,w,s):=&\, (w-1) \int _0^{s} \mathrm{e}^{-Q(s-r)} \bar{F}(r)\,\mathrm{d}r -v\, \mathrm{e}^{-Qs} \\ =&\, (w-1) \beta \, {{\mathbb {E}}}\, \mathrm{e}^{-Q(s-B_\mathrm{e})} 1_{\{B_\mathrm{e}\le s\}} -v\, \mathrm{e}^{-Qs} . \end{aligned}$$
Applying (8), we thus obtain the following result.
Proposition 3.3
$$\begin{aligned} {{\mathbb {E}}}\, \mathrm{e}^{-v\Lambda (0)}\,w^{L(0)} = \exp \left( -\int _0^\infty \eta \left( -\Omega (v,w,s)' {\varvec{a}}\right) \mathrm{d}s \right) . \end{aligned}$$
It is possible to apply this joint transform to the computation of moments of L(0) and \(\Lambda (0)\), as well as their mixed moments, but lower moments can typically be computed by direct arguments. The mean and variance of \(\Lambda (0)\) and L(0) have been identified above. We therefore now focus on the covariance between \(\Lambda (0)\) and L(0). Since \({{\mathbb {E}}}[\Lambda (0)\,|\,{\varvec{J}}]=\Lambda (0)\) (as \({\varvec{X}}(\cdot )\) is a functional of \({\varvec{J}}(\cdot )\)), and since \(B_{\mathrm{e}}\) is independent of \({\varvec{J}}\), we find
$$\begin{aligned} {{\mathbb {C}}}\mathrm{ov}(\Lambda (0),L(0))&={{\mathbb {C}}}\mathrm{ov}({{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}],\Lambda (0))={{\mathbb {C}}}\mathrm{ov}({{\mathbb {E}}}[\Lambda (-B_{\mathrm{e}})\,|\,{\varvec{J}}],\Lambda (0))\\&={{\mathbb {C}}}\mathrm{ov}(\Lambda (-B_{\mathrm{e}}),\Lambda (0))={\mathbb E}\,{{\mathbb {C}}}\mathrm{ov}(\Lambda (-B_{\mathrm{e}}),\Lambda (0)\,|\,B_{\mathrm{e}}) \\&={\varvec{a}}'\,{{\mathbb {E}}}\,\Sigma _{B_{\mathrm{e}}}\,{\varvec{a}}={\varvec{a}}'\,\Sigma _0\,{{\mathbb {E}}}\,\mathrm{e}^{-Q'B_{\mathrm{e}}}\,{\varvec{a}}\ , \end{aligned}$$
where we employ the fact that the covariance matrix between \({\varvec{X}}(t)\) and \({\varvec{X}}(t+h)\), which is the same as that between \({\varvec{X}}(t-h)\) and \({\varvec{X}}(t)\), is given by \(\Sigma _h=\Sigma _0\,\mathrm{e}^{-Q'h}\).

\(\circ \) Covariance between L(0) and L(t)

The starting point is the law of total covariance:
$$\begin{aligned} {{\mathbb {C}}}\mathrm{ov}\, (L(0),L(t))&={{\mathbb {E}}}\,{{\mathbb {C}}}\mathrm{ov}\,(L(0),L(t)\,|\,{\varvec{J}})+{{\mathbb {C}}}\mathrm{ov}({\mathbb E}[L(0)\,|\,{\varvec{J}}], {{\mathbb {E}}}[L(t)\,|\,{\varvec{J}}]). \end{aligned}$$
We evaluate the two terms separately. The first term can be rewritten as follows: Let \(C_1(t)\) denote the number of customers that arrive before time 0 and depart in (0, t]; \(C_2(t)\) is the number of customers that arrive before time 0 and depart after t; finally, \(C_3(t)\) is the number of customers that arrive in (0, t] and depart after t. Evidently, due to the conditional independence of these three quantities,
$$\begin{aligned} {{\mathbb {C}}}\mathrm{ov}\,(L(0),L(t)\,|\,{\varvec{J}})&={{\mathbb {C}}}\mathrm{ov}\,(C_1(t)+C_2(t),C_2(t)+C_3(t)\,|\,{\varvec{J}})\\ {}&={{\mathbb {V}}}\mathrm{ar}\,(C_2(t)\,|\,{\varvec{J}})={{\mathbb {E}}}(C_2(t)\,|\,{\varvec{J}}), \end{aligned}$$
with, mimicking the above arguments,
$$\begin{aligned} {{\mathbb {E}}}(C_2(t)\,|\,{\varvec{J}})&{\mathop {=}\limits ^\mathrm{d}}\beta \int _{0}^\infty {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s+t-B_\mathrm{e})}{\varvec{a}}\,1_{\{t<B_\mathrm{e}\le t+s\}}\right) '\,\mathrm{d}{\varvec{J}}(s), \end{aligned}$$
where the equality \({{\mathbb {V}}}\mathrm{ar}\,(C_2(t)\,|\,{\varvec{J}})={{\mathbb {E}}}(C_2(t)\,|\,{\varvec{J}})\) is due to the fact that the conditional distribution of \(C_2(t)\) given \({\varvec{J}}\) is Poisson, and the distributional equality again follows from time-reversibility. Thus, with \(F_\mathrm{e}(\cdot )\) denoting the distribution function of \(B_\mathrm{e}\) and \({\bar{F}}_\mathrm{e}(\cdot )\) the corresponding complementary distribution function,
$$\begin{aligned} {{\mathbb {E}}}\,{{\mathbb {C}}}\mathrm{ov}\,(L(0),L(t)\,|\,{\varvec{J}})&=\beta \int _{0}^\infty {{\mathbb {E}}}\left( {\varvec{a}}'\mathrm{e}^{-Q(s+t-B_\mathrm{e})}{\varvec{\varrho }}1_{\{t<B_\mathrm{e}\le t+s\}}\right) \,\mathrm{d}s\\&=\beta \, {{\mathbb {E}}}\left( \int _{B_{\mathrm{e}}-t}^{\infty }\left( {\varvec{a}}'\mathrm{e}^{-Q(s-(B_\mathrm{e}-t))}{\varvec{\varrho }}\right) \mathrm{d}s\,1_{\{B_\mathrm{e}>t\}}\right) \\&=\beta \,{{\mathbb {E}}}\left( \int _{0}^{\infty }{\varvec{a}}'\mathrm{e}^{-Qs}\,\mathrm{d}s\,1_{\{B_\mathrm{e}>t\}}\right) {\varvec{\varrho }} \\&=\beta \,{\varvec{a}}'Q^{-1}\,{\varvec{\varrho }}\, {{\mathbb {P}}}(B_\mathrm{e}>t)=\lambda \beta {\bar{F}}_\mathrm{e}(t). \end{aligned}$$
Next, we move to the second term. To this end, we first recall (cf. (16) and (17)) that
$$\begin{aligned} {{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}]=&\beta \int _{0}^{\infty } {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s\}}\right) '\,\mathrm{d}{\varvec{J}}(s),\\ {{\mathbb {E}}}[L(t)\,|\,{\varvec{J}}]=&\beta \int _{-t}^{\infty } {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s+t-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s+t\}}\right) '\,\mathrm{d}{\varvec{J}}(s). \end{aligned}$$
As \({\varvec{J}}\) has independent increments, when considering the covariance between \({{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}]\) and \({\mathbb E}[L(t)\,|\,{\varvec{J}}]\), we can restrict ourselves to integrating over \(s>0\) only; more concretely,
$$\begin{aligned}&{{\mathbb {C}}}\mathrm{ov}({{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}],{{\mathbb {E}}}[L(t)\,|\,{\varvec{J}}])\nonumber \\&\quad =\beta ^2\,{{\mathbb {C}}}\mathrm{ov}\left( \int _{0}^\infty {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s\}}\right) \mathrm{d}{\varvec{J}}(s),\right. \nonumber \\&\quad \left. \quad \int _{0}^{\infty } {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s+t-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s+t\}}\right) \mathrm{d}{\varvec{J}}(s)\right) \nonumber \\&\quad =\beta ^2\int _0^\infty {{\mathbb {E}}}\left( {\varvec{a}}'\,\mathrm{e}^{-Q(s-B_\mathrm{e})}1_{\{B_\mathrm{e}\le s\}}\right) \Sigma \, {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s+t-B_\mathrm{e})}\,{\varvec{a}}\,1_{\{B_\mathrm{e}\le s+t\}}\right) \mathrm{d}s\,, \end{aligned}$$
(20)
where the last step follows by using (9).
As it turns out, the last formula (as well as (19)) may be simplified, as follows: Let \(B_{\mathrm{e},1}\) and \(B_{\mathrm{e},2}\) be two i.i.d. copies of \(B_\mathrm{e}\). Recalling (12) and denoting \(x^+:=x\vee 0\) and \(x^-:=-x\wedge 0=(-x)^+\), (20) can be rewritten as
$$\begin{aligned}&\beta ^2 {\varvec{a}}'\int _0^\infty {{\mathbb {E}}}\left( \mathrm{e}^{-Q(s-B_{\mathrm{e}})}1_{\{B_{\mathrm{e}}\le s\}}\right) \Sigma \, {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s+t-B_{\mathrm{e}})}1_{\{B_{\mathrm{e}}\le s+t\}}\right) \mathrm{d}s\,{\varvec{a}}\nonumber \\&\quad =\beta ^2 {\varvec{a}}'\,{{\mathbb {E}}}\left( \int _{B_{\mathrm{e},1}\vee (B_{\mathrm{e},2}-t)}^\infty \mathrm{e}^{-Q(s-B_{\mathrm{e},1})} \Sigma \,\mathrm{e}^{-Q'(s+t-B_{\mathrm{e},2})}\mathrm{d}s\right) \,{\varvec{a}} \nonumber \\&\quad =\beta ^2 {\varvec{a}}'\,{{\mathbb {E}}}\left( \mathrm{e}^{-Q(B_{\mathrm{e},1}-B_{\mathrm{e},2}+t)^-} \left( \int _{0}^\infty \mathrm{e}^{-Qs} \Sigma \,\mathrm{e}^{-Q's}\mathrm{d}s\right) \mathrm{e}^{-Q'(B_{\mathrm{e},1}-B_{\mathrm{e},2}+t)^+}\right) {\varvec{a}} \nonumber \\&\quad =\beta ^2 {\varvec{a}}'\,{{\mathbb {E}}}\left( \mathrm{e}^{-Q(B_{\mathrm{e},1}-B_{\mathrm{e},2}+t)^-} \Sigma _0\, \mathrm{e}^{-Q'(B_{\mathrm{e},1}-B_{\mathrm{e},2}+t)^+}\right) {\varvec{a}}\,, \end{aligned}$$
where, in the last equality, (11) has been used. For any \(s\in \mathbb {R}\) we have that, recalling Equation (13),
$$\begin{aligned} {\varvec{a}}'\,\mathrm{e}^{-Qs^-}\,\Sigma _0\,\mathrm{e}^{-Q's^+}{\varvec{a}}={\varvec{a}}'\,\Sigma _0\,\mathrm{e}^{-Q'|s|}{\varvec{a}}={\varvec{a}}'\,\Sigma _{|s|}\,{\varvec{a}}. \end{aligned}$$
Thus, with R(t) denoting the autocorrelation function of the stationary queue length process (so that \(R(0)=1\)), adding the two terms yields
$$\begin{aligned} R(t)\cdot {{\mathbb {V}}}\mathrm{ar}\,L(0)&={{\mathbb {C}}}\mathrm{ov}(L(0),L(t))=\lambda \beta {\bar{F}}_\mathrm{e}(t)+\beta ^2\,{\varvec{a}}'\,{{\mathbb {E}}}\left( \Sigma _{|B_{\mathrm{e},1}-B_{\mathrm{e},2}+t|}\right) {\varvec{a}}\,. \end{aligned}$$
(21)
In particular, when \(t=0\), (21) provides us with a simplified expression for \({{\mathbb {V}}}\mathrm{ar}\,L(0)\). The density of \(B_{\mathrm{e},1}-B_{\mathrm{e},2}\) is clearly symmetric around zero and is given by
$$\begin{aligned} g(x)=\int _0^\infty f_\mathrm{e}(y)f_\mathrm{e}(y+|x|)\,\mathrm{d}y. \end{aligned}$$
Since \(f_\mathrm{e}(\cdot )=\beta ^{-1}{\bar{F}}(\cdot )\) is non-increasing on \([0,\infty )\), this implies that \(g(\cdot )\) is unimodal.
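For instance, if the service times are exponential with rate \(\mu \) (an assumption made here purely for illustration), then \(f_\mathrm{e}(s)=\mu \mathrm{e}^{-\mu s}\) and the convolution above evaluates to the Laplace density \(g(x)=(\mu /2)\,\mathrm{e}^{-\mu |x|}\), which is indeed symmetric and unimodal. A quadrature sketch confirming this:

```python
import math

mu = 1.5   # assumed exponential service rate, so f_e(s) = mu * exp(-mu * s)

def f_e(s):
    return mu * math.exp(-mu * s)

def g(x, h=1e-3, upper=40.0):
    # g(x) = int_0^inf f_e(y) f_e(y + |x|) dy, via the trapezoidal rule
    ys = [i * h for i in range(int(upper / h) + 1)]
    vals = [f_e(y) * f_e(y + abs(x)) for y in ys]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    exact = 0.5 * mu * math.exp(-mu * abs(x))   # Laplace density
    assert abs(g(x) - exact) < 1e-4
    assert abs(g(x) - g(-x)) < 1e-12            # symmetry around zero
print("g matches (mu/2) * exp(-mu*|x|)")
```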
We end this subsection by considering the one-dimensional case. Since both \(h(x):=\mathrm{e}^{-q|x|}\) (\(q>0\)) and g(x) are symmetric and unimodal (around zero), it follows from [15] that their convolution is symmetric and unimodal as well. Alternatively, this also follows from [5], as \(h(\cdot )\) is log-concave. This implies that
$$\begin{aligned} {{\mathbb {E}}}\,\mathrm{e}^{-q|B_{\mathrm{e},1}-B_{\mathrm{e},2}+t|}&=\int _{-\infty }^\infty \mathrm{e}^{-q|x+t|}g(x)\,\mathrm{d}x=\int _{-\infty }^\infty \mathrm{e}^{-q|-x+t|}g(-x)\,\mathrm{d}x \\&=\int _{-\infty }^\infty \mathrm{e}^{-q|t-x|}g(x)\,\mathrm{d}x \end{aligned}$$
is unimodal with a mode at zero, hence non-increasing on \([0,\infty )\); consequently, so is \(R(\cdot )\). A similar result holds if \({\varvec{a}}\) is an eigenvector associated with a real-valued (positive) eigenvalue q of \(Q'\), since then \({\varvec{a}}'\,\Sigma _0\,\mathrm{e}^{-Q'|x|}\,{\varvec{a}}={\varvec{a}}'\,\Sigma _0\,{\varvec{a}}\, \mathrm{e}^{-q|x|}\). In general, even though \({\varvec{a}}'\,\Sigma _0\,\mathrm{e}^{-Q'|x|}\,{\varvec{a}}\) is a symmetric function of x, it is not clear whether it is decreasing on \([0,\infty )\) or log-concave. However, since it vanishes as \(|x|\rightarrow \infty \), it is clear that R(t) vanishes as \(t\rightarrow \infty \).
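A Monte Carlo sketch of this monotonicity (again taking \(B_\mathrm{e}\) exponential, with an arbitrary decay rate q; both are our assumptions): the estimated values of \(t\mapsto {{\mathbb {E}}}\,\mathrm{e}^{-q|B_{\mathrm{e},1}-B_{\mathrm{e},2}+t|}\) should form a non-increasing sequence.

```python
import math
import random

random.seed(7)
q, mu = 0.8, 1.5   # assumed decay rate q and exponential rate of B_e (our choices)

# Draw i.i.d. copies of (B_e1, B_e2) once and reuse them for every t
# (common random numbers), so the estimated curve is smooth in t.
pairs = [(random.expovariate(mu), random.expovariate(mu)) for _ in range(100_000)]

def phi(t):
    # Monte Carlo estimate of E exp(-q |B_e1 - B_e2 + t|)
    return sum(math.exp(-q * abs(b1 - b2 + t)) for b1, b2 in pairs) / len(pairs)

ts = [0.0, 0.5, 1.0, 2.0, 4.0]
vals = [phi(t) for t in ts]
print(vals)   # non-increasing in t
```

At \(t=0\) the estimate should be close to the exact value \(\mu /(\mu +q)\), which is immediate from the Laplace density of \(B_{\mathrm{e},1}-B_{\mathrm{e},2}\) in the exponential case.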

4 The network case

In this section we consider networks of infinite-server queues with a Cox input process. There are n queues, where the arrival process at queue k is a non-homogeneous Poisson process with rate function \(\Lambda _k(t)={\varvec{a}}_k'{\varvec{X}}(t)\). This is the same as defining the vector of arrival rates to be \({\varvec{\Lambda }}(t)=A{\varvec{X}}(t)\) for some matrix A with rows \({\varvec{a}}_k'\). The \({\varvec{X}}(t)\) process is the same multivariate shot-noise process as before. Notice that in this construction the arrival processes at the various queues are potentially dependent.
Denote by \(p_{km}(t)\) the probability that a job entering at queue k at time 0 is at queue m at time t; likewise, \(p_{k0}(t)\) is the probability that a job entering at queue k at time 0 has left the network by time t. In the case of exponentially distributed service times and probabilistic routing, these \(p_{km}(t)\)’s can be computed more explicitly (relying on the machinery developed for phase-type distributions).
Before studying the joint queue length distribution in the network (Sect. 4.2) and presenting an example (Sect. 4.3), we briefly remind the reader of the structure of the joint queue length distribution in the case of two M/G/\(\infty \) queues in tandem (Sect. 4.1). This helps sharpen our intuition and will make it easier to digest and understand the results of Sect. 4.2.

4.1 Two M/G/\(\infty \) queues in tandem

Consider a model of two M/G/\(\infty \) queues 1, 2 in series, with arrival rate \(\lambda \) at queue 1, generic service time \(B_i\) at queue i, \(i=1,2\), and all customers moving from queue 1 to 2 before leaving the system. Queue 2 has no external arrivals. The same concepts that were underlying the results mentioned in Sect. 3.1 (see, in particular, (14)) quickly lead to the following expression for the generating function of the joint distribution of the two queue lengths \(L_1(t),L_2(t)\):
$$\begin{aligned}&{\mathbb {E}}[w_1^{L_1(t)} w_2^{L_2(t)}|L_1(0)=L_2(0)=0]\\&\quad =\, \exp \left( {(w_1-1)\lambda \int _0^t \mathbb P(B_1>u) \mathrm{d} u + (w_2-1)\lambda \int _0^t \mathbb P(B_1 < u, B_1+B_2 > u)} \mathrm{d}u\right) . \end{aligned}$$
Furthermore, the generating function of the steady-state joint queue length distribution is given by
$$\begin{aligned} \mathbb E[w_1^{L_1} w_2^{L_2}] =\exp \big ({(w_1-1)\lambda \mathbb EB_1 + (w_2-1)\lambda \mathbb EB_2 }\big ). \end{aligned}$$
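The product form above, i.e., \(L_1\) and \(L_2\) independent Poisson with means \(\lambda \,{\mathbb E}B_1\) and \(\lambda \,{\mathbb E}B_2\), can be sanity-checked by direct simulation. The sketch below assumes exponential service times and arbitrary parameter values (our choices), and estimates the two means and the covariance at a large time horizon:

```python
import random

random.seed(3)
lam, mu1, mu2, T = 2.0, 1.0, 0.5, 50.0   # assumed rates (B_i ~ exp(mu_i))

def sample_state():
    # Poisson(lam) arrivals on [0, T], starting empty; a customer arriving
    # at time t occupies queue 1 on [t, t+b1) and queue 2 on [t+b1, t+b1+b2)
    l1 = l2 = 0
    t = random.expovariate(lam)
    while t < T:
        b1 = random.expovariate(mu1)
        b2 = random.expovariate(mu2)
        if t + b1 > T:
            l1 += 1
        elif t + b1 + b2 > T:
            l2 += 1
        t += random.expovariate(lam)
    return l1, l2

n = 10_000
states = [sample_state() for _ in range(n)]
m1 = sum(l1 for l1, _ in states) / n
m2 = sum(l2 for _, l2 in states) / n
cov = sum(l1 * l2 for l1, l2 in states) / n - m1 * m2
print(m1, m2, cov)   # near lam*E[B1] = 2, lam*E[B2] = 4, and 0
```

The horizon T is taken large enough that the effect of starting empty is negligible for exponential service times.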

4.2 Joint queue length

We focus on analyzing the stationary joint queue length distribution. In principle, virtually all quantities studied in the previous section can be derived again, at the expense of introducing rather heavy notation.
In this section, we let \(K_m\) denote the stationary queue length at node \(m\in \{1,\ldots ,n\}\). Our objective is to compute the joint probability-generating function
$$\begin{aligned} \Pi ({\varvec{w}}) = {{\mathbb {E}}}\left( \prod _{m=1}^n w_m^{K_m}\right) . \end{aligned}$$
Using the same arguments as in the previous section, we conclude that \(K_m\) has a mixed Poisson distribution. More precisely, \(K_m\) has a Poisson distribution with (random) parameter
$$\begin{aligned} \int _{-\infty } ^0 \sum _{{k}=1}^n\Lambda _{{k}}(s) p_{{k}m}(-s)\,\mathrm{d}s. \end{aligned}$$
(22)
It follows that
$$\begin{aligned} \Pi ({\varvec{w}}) =&\, {{\mathbb {E}}}\left( \prod _{m=1}^n w_m^{K_m}\right) = {{\mathbb {E}}}\exp \left( \sum _{m=1}^n (w_m-1) \int _{-\infty }^0 \sum _{{k}=1}^n \Lambda _{{k}}(s)\,p_{{k}m}(-s)\,\mathrm{d}s\,\right) \nonumber \\ =&\, {{\mathbb {E}}}\exp \left( \sum _{m=1}^n (w_m-1) \int _{-\infty }^0 \sum _{{k}=1}^n {\varvec{a}}_{{k}}'{\varvec{X}}(s)\,p_{{k}m}(-s)\,\mathrm{d}s\,\right) . \end{aligned}$$
(23)
Substituting (cf. (6))
$$\begin{aligned} {\varvec{X}}(s) = \int _{-\infty }^s \mathrm{e}^{-Q(s-r)}\mathrm{d}{\varvec{J}}(r)\end{aligned}$$
in (23) gives
$$\begin{aligned} \Pi ({\varvec{w}}) = {{\mathbb {E}}}\exp \left( \sum _{{k}=1}^n {\varvec{a}}_{{k}}' \int _{-\infty }^0\int _{r}^0 \mathrm{e}^{-Q(s-r)}\,\pi _{{k}}(s\,|\,{\varvec{w}}) \,\mathrm{d}s\,\mathrm{d}{\varvec{J}}(r) \right) , \end{aligned}$$
with
$$\begin{aligned} \pi _{{k}}(s\,|\,{\varvec{w}}):=&\, \sum _{m=1}^n p_{{k}m}(-s) \,(w_m-1)= \sum _{m=1}^n p_{{k}m}(-s) \,w_m +p_{{k}0}(-s)-1. \end{aligned}$$
This leads to the following result.
Theorem 4.1
For any \({\varvec{w}}\in {{\mathbb {R}}}^n\),
$$\begin{aligned} \Pi ({\varvec{w}}) =&\, {{\mathbb {E}}}\left( \prod _{m=1}^n w_m^{K_m}\right) \\ =&\, \exp \left( -\int _{-\infty }^0 \eta \left( -\sum _{{k}=1}^n \int _{r}^0 \mathrm{e}^{-Q'(s-r)}\,\pi _{{k}}(s\,|\,{\varvec{w}}) \,\mathrm{d}s\,{\varvec{a}}_{{k}}\right) \mathrm{d}r\right) . \end{aligned}$$

4.3 Example

In this illustrative example we consider a two-node tandem system in which the service times at both nodes are exponential with parameter \(\kappa >0.\) We assume for ease that there is only input at the first queue and that all customers move from the first queue to the second queue. It is immediate that, for \(t\geqslant 0\),
$$\begin{aligned} p_{11}(t) = \mathrm{e}^{-\kappa t}, \,\,\,p_{12}(t) =\kappa t\,\mathrm{e}^{-\kappa t},\,\,\, p_{10}(t) = 1- \mathrm{e}^{-\kappa t}- \kappa t\,\mathrm{e}^{-\kappa t}; \end{aligned}$$
to see this, use that the total sojourn time in the system is Erlang(2) with rate \(\kappa \). We find that
$$\begin{aligned} \pi _1(t\,|\,w_1,w_2) = (w_1-1) \,\mathrm{e}^{\kappa t}- (w_2-1) \, \kappa t\,\mathrm{e}^{\kappa t}. \end{aligned}$$
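These expressions can be sanity-checked numerically. The sketch below verifies \(p_{12}\) against its defining convolution integral, and confirms that \(\pi _1(-t\,|\,w_1,w_2)=(w_1-1)p_{11}(t)+(w_2-1)p_{12}(t)\) for \(t\geqslant 0\) (the parameter values are arbitrary choices):

```python
import math

kappa = 1.3   # assumed service rate at both nodes (our choice)

def p11(t):   # job still at node 1: B_1 exceeds t
    return math.exp(-kappa * t)

def p12(t):   # P(B_1 <= t < B_1 + B_2), closed form from the text
    return kappa * t * math.exp(-kappa * t)

def p12_quad(t, n=20_000):
    # same probability via the convolution integral
    # int_0^t kappa e^{-kappa u} e^{-kappa (t-u)} du (trapezoidal rule)
    h = t / n
    vals = [kappa * math.exp(-kappa * u) * math.exp(-kappa * (t - u))
            for u in (i * h for i in range(n + 1))]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

w1, w2 = 0.4, 0.7   # arbitrary arguments of the generating function
for t in (0.3, 1.0, 2.5):
    assert abs(p12(t) - p12_quad(t)) < 1e-6
    assert 0.0 <= 1.0 - p11(t) - p12(t) <= 1.0          # p10 is a probability
    # pi_1 evaluated at s = -t <= 0 (so e^{kappa s} = e^{-kappa t})
    pi1 = (w1 - 1) * math.exp(-kappa * t) + (w2 - 1) * kappa * t * math.exp(-kappa * t)
    assert abs(pi1 - ((w1 - 1) * p11(t) + (w2 - 1) * p12(t))) < 1e-12
print("p11, p12, p10 and pi_1 consistent")
```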
Assume for ease that \(d=1\) (i.e., one-dimensional shot-noise). We choose \(a_1=1\) (whereas \(a_2=0\), as we assumed no external input to the second queue). As a consequence, we have that \(\Lambda _1(t) = X(t)\) and \(\Lambda _2(t) =0\), where X(t) can be represented as
$$\begin{aligned} X(t) = \int _{(-\infty ,t]} \mathrm{e}^{-q(t-s)} \,\mathrm{d}J(s), \end{aligned}$$
for some \(q>0\) and a (scalar) Lévy subordinator \(J(\cdot )\) (with exponent \(-\eta (\cdot )\)).
Appealing to Thm. 4.1,
$$\begin{aligned} \Pi (w_1,w_2) = {{\mathbb {E}}}\big [w_1^{K_1} w_2^{K_2}\big ] = \exp \left( -\int _{-\infty }^0 \eta \left( -\int _r^0 \mathrm{e}^{-q(s-r)} \pi _1(s\,|\,w_1,w_2) \,\mathrm{d}s \right) \mathrm{d}r\right) . \end{aligned}$$
(24)
To evaluate this expression, observe that
$$\begin{aligned} \int _r^0 \mathrm{e}^{-q(s-r)} \pi _1(s\,|\,w_1,w_2) \,\mathrm{d}s=&\int _r^0 \mathrm{e}^{-q(s-r)}\left( (w_1-1) \,\mathrm{e}^{\kappa s}- (w_2-1) \, \kappa s\,\mathrm{e}^{\kappa s}\right) \mathrm{d}s \nonumber \\ =&\, (w_1-1) \frac{\mathrm{e}^{q r} -\mathrm{e}^{\kappa r}}{\kappa -q} +(w_2-1)\left( \frac{\kappa r\,\mathrm{e}^{\kappa r}}{\kappa -q} +\frac{\kappa (\mathrm{e}^{q r} -\mathrm{e}^{\kappa r})}{(\kappa -q)^2}\right) . \end{aligned}$$
(25)
It is easy to obtain moments of \(K_1\) and \(K_2\) from (24) and (25) by differentiation with respect to \(w_1\) and \(w_2\), respectively. In particular, straightforward algebra yields
$$\begin{aligned} \mathbb E[K_1] =\mathbb E[K_2]= \frac{\eta '(0)}{\kappa q}. \end{aligned}$$
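Both (25) and the resulting mean can be verified numerically. The sketch below first checks the closed form (25) against direct quadrature, and then evaluates (24) for a Gamma subordinator with \(\eta (\alpha )=\log (1+\alpha )\) (a choice made purely for this check, for which \(\eta '(0)=1\)), recovering \({\mathbb E}[K_1]={\mathbb E}[K_2]=1/(\kappa q)\) by finite differences:

```python
import math

kappa, q = 2.0, 1.0   # assumed service rate and shot-noise decay rate

def A(r):   # coefficient of (w1 - 1) in (25), for r <= 0
    return (math.exp(q * r) - math.exp(kappa * r)) / (kappa - q)

def B(r):   # coefficient of (w2 - 1) in (25), for r <= 0
    return (kappa * r * math.exp(kappa * r) / (kappa - q)
            + kappa * (math.exp(q * r) - math.exp(kappa * r)) / (kappa - q) ** 2)

def inner_quad(r, w1, w2, n=4000):
    # trapezoidal quadrature of int_r^0 e^{-q(s-r)} pi_1(s | w1, w2) ds
    h = -r / n
    vals = [math.exp(-q * (s - r))
            * ((w1 - 1) * math.exp(kappa * s) - (w2 - 1) * kappa * s * math.exp(kappa * s))
            for s in (r + i * h for i in range(n + 1))]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# (25) agrees with the direct quadrature
assert abs(inner_quad(-1.5, 0.3, 0.6) - ((0.3 - 1) * A(-1.5) + (0.6 - 1) * B(-1.5))) < 1e-6

def Pi(w1, w2, n=100_000, lower=-40.0):
    # (24) for a Gamma subordinator: eta(alpha) = log(1 + alpha),
    # so eta(-I(r)) = log(1 - I(r)); the outer integral is truncated at `lower`
    h = -lower / n
    vals = [math.log(1.0 - ((w1 - 1) * A(r) + (w2 - 1) * B(r)))
            for r in (lower + i * h for i in range(n + 1))]
    return math.exp(-h * (sum(vals) - 0.5 * (vals[0] + vals[-1])))

eps = 1e-4
EK1 = (Pi(1 + eps, 1) - Pi(1 - eps, 1)) / (2 * eps)   # central differences
EK2 = (Pi(1, 1 + eps) - Pi(1, 1 - eps)) / (2 * eps)
print(EK1, EK2)   # both close to eta'(0)/(kappa*q) = 0.5
```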
Remark 4.2
Similar calculations can be performed for considerably more general networks. Application of Thm. 4.1 requires (i) an expression for the matrix exponential of \(Q'\) and (ii) expressions for \(\pi _k(s\,|\,{\varvec{w}})\) (and therefore for \(p_{km}(s)\)) for \(s\geqslant 0\). The matrix exponential of \(Q'\) can be evaluated using common techniques from linear algebra, relying on standard computer algebra. Assuming the eigenvalues of Q have multiplicity 1 (higher multiplicities can be dealt with too, but require some additional care), we thus obtain a matrix whose entries are weighted sums of exponentials. Likewise, if the service times are of phase type, then the probabilities \(p_{km}(s)\) can be written in terms of (specific entries of) a matrix exponential, and therefore (again under mild conditions) also as weighted sums of exponentials. It thus follows that the integrand \(\mathrm{e}^{-Q'(s-r)}\,\pi _{{k}}(s\,|\,{\varvec{w}})\) is a mixture of exponentials (in s), which can be integrated by elementary calculus. This allows the computation of the (mixed) moments pertaining to the stationary queue lengths.
We now return to our tandem example. Suppose that \(J(\cdot )\) corresponds to a Gamma process [9, Section 1.2.4] with (without loss of generality) rate and shape parameters both equal to 1, i.e.,
$$\begin{aligned} \eta (\alpha ) =-\log \left( \frac{1}{1+\alpha }\right) = \log (1+\alpha ). \end{aligned}$$
The probability-generating function of the queue length in the first queue is therefore
$$\begin{aligned} \Pi (w,1) = \exp \left( -\int _{-\infty }^0 \log \left( 1+(1-w) \frac{\mathrm{e}^{q r} -\mathrm{e}^{\kappa r}}{\kappa -q}\right) \,\mathrm{d}r\right) . \end{aligned}$$
Using the Taylor expansion of the logarithm, the exponent of this expression can be rewritten as
$$\begin{aligned} \int _{-\infty }^0 \sum _{n=1}^\infty \frac{(-1)^{n}}{n} \left( v(w)\,( \mathrm{e}^{q r} -\mathrm{e}^{\kappa r})\right) ^n \mathrm{d}r, \end{aligned}$$
with \(v(w):= (1-w)/(\kappa -q).\) Relying on the binomium, this equals
$$\begin{aligned} \int _{-\infty }^0 \sum _{n=1}^\infty \frac{(-1)^{n}}{n} (v(w))^n\,\left( \sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) (-1)^{n-k} \,\mathrm{e}^{q r k+\kappa r(n-k)}\right) \mathrm{d}r. \end{aligned}$$
Swapping the order of the summations and the integral, we obtain
$$\begin{aligned} \sum _{n=1}^\infty \frac{1}{n} (v(w))^n\, \sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) (-1)^k\frac{1}{qk+\kappa (n-k)}. \end{aligned}$$
We thus end up with
$$\begin{aligned} {{\mathbb {E}}}\big [w^{K_1}\big ] =\exp \left( \sum _{n=1}^\infty \frac{1}{n} (v(w))^n\, \sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) (-1)^k\frac{1}{qk+\kappa (n-k)}\right) . \end{aligned}$$
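The expansion can be cross-checked numerically: the double sum must reproduce the exponent \(-\int _{-\infty }^0 \log (1+v(w)(\mathrm{e}^{qr}-\mathrm{e}^{\kappa r}))\,\mathrm{d}r\) computed by direct quadrature. A sketch with arbitrary parameter values (our choices):

```python
import math

kappa, q, w = 2.0, 1.0, 0.6   # arbitrary assumed parameters
v = (1 - w) / (kappa - q)     # v(w) as defined in the text

# Double sum from the binomial expansion; terms vanish quickly here
# (the inner alternating sum decays geometrically), so truncation is ample.
S = sum((v ** n / n) * sum(math.comb(n, k) * (-1) ** k / (q * k + kappa * (n - k))
                           for k in range(n + 1))
        for n in range(1, 60))

# Exponent computed directly by trapezoidal quadrature:
# -int_{-inf}^0 log(1 + v(w) (e^{q r} - e^{kappa r})) dr
n_pts, lower = 100_000, -40.0
h = -lower / n_pts
vals = [math.log(1.0 + v * (math.exp(q * r) - math.exp(kappa * r)))
        for r in (lower + i * h for i in range(n_pts + 1))]
exponent = -h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
print(S, exponent)   # the series reproduces the exponent
```

With these values the two computations agree to within the quadrature error, and the exponential of the common value lies in (0, 1), as a probability-generating function evaluated at \(w\in [0,1)\) should.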
A similar procedure yields the probability-generating function of the second queue, but the resulting expressions are slightly more complicated.

5 Conclusions and suggestions for further research

In this paper we have considered a network of infinite-server queues where the input process is a Cox process that allows much modeling flexibility; the arrival rate is represented as a linear combination of the components of a multivariate generalized shot-noise process. We have derived various distributional properties of the multivariate shot-noise process, subsequently exploiting them to obtain the joint transform of the numbers of customers, at consecutive time epochs, in an infinite-server queue with, as input process, such a Cox process. We have also derived the joint steady-state transform of the vectors of arrival rate and queue length, as well as their means and covariance structure, and we have studied the departure process from the queue. Finally, we extended our analysis to the setting of a network of infinite-server queues, allowing the arrival processes at the various queues to be dependent Cox processes.
In a follow-up study [14] we investigate several related aspects. Specifically, (i) we develop a recursive scheme that allows us to obtain higher-order moments of \((\Lambda (t),L(t))\), (ii) we derive asymptotics of the queue length process, under assumptions regarding the tail behavior of the shot-noise process, (iii) we study the heavy-traffic behavior of the queue length process, and (iv) we obtain several numerical results for means, variances and correlations; this includes a numerical exploration of the sensitivity of various performance metrics with respect to the service time distribution. It could also be investigated to what extent the assumption of exponential decay in the shot-noise process can be relaxed.

Acknowledgements

OK’s work has been supported by Israel Science Foundation (Grant Number 1647/17) and the Vigevani Chair in Statistics. The research of OB and MM is partly funded by the NWO Gravitation Programme Networks (Grant Number 024.002.003) and an NWO Top Grant (Grant Number 613.001.352).
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Literature
1. Çinlar, E.: Probability and Stochastics. Graduate Texts in Mathematics, vol. 261. Springer, New York (2011)
2. Daw, A., Pender, J.: Queues driven by Hawkes processes. Stoch. Syst. 8, 192–229 (2018)
3. Eick, S., Massey, W., Whitt, W.: The physics of the M\(_t\)/G/\(\infty \) queue. Oper. Res. 41, 731–742 (1993)
4. Heemskerk, M., van Leeuwaarden, J., Mandjes, M.: Scaling limits for infinite-server systems in a random environment. Stoch. Syst. 7, 1–31 (2017)
5. Ibragimov, I.A.: On the composition of unimodal distributions. Theory Probab. Appl. 1, 255–260 (1956)
6. Kella, O., Whitt, W.: Linear stochastic fluid networks. J. Appl. Probab. 36, 244–260 (1999)
7. Koops, D., Boxma, O., Mandjes, M.: Networks of \(\cdot \)/G/\(\infty \) queues with shot-noise-driven arrival intensities. Queueing Syst. 86, 301–325 (2017)
8. Koops, D., Saxena, M., Boxma, O., Mandjes, M.: Infinite-server queues with Hawkes input. J. Appl. Probab. 55, 920–943 (2018)
9. Kyprianou, A.: Fluctuations of Lévy Processes with Applications. Introductory Lectures. Springer, Heidelberg (2014)
10. Mirasol, N.M.: The output of an M/G/\(\infty \) queuing system is Poisson. Oper. Res. 11, 282–284 (1963)
11. Rizoiu, M.-A., Xie, L., Sanner, S., Cebrian, M., Yu, H., Van Hentenryck, P.: Expecting to be HIP: Hawkes intensity processes for social media popularity. In: Proceedings of the 26th International Conference on World Wide Web (WWW '17), Republic and Canton of Geneva, Switzerland, pp. 735–744 (2017)
12.
13. Sato, K.-I.: Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge (2005)
15. Wintner, A.: Asymptotic Distributions and Infinite Convolutions. Edwards Brothers, Ann Arbor (1938)
Metadata
Title
Infinite-server systems with Coxian arrivals
Authors
Onno Boxma
Offer Kella
Michel Mandjes
Publication date
13-05-2019
Publisher
Springer US
Published in
Queueing Systems / Issue 3-4/2019
Print ISSN: 0257-0130
Electronic ISSN: 1572-9443
DOI
https://doi.org/10.1007/s11134-019-09613-2
