1 Introduction
In queueing theory it is commonly assumed that the customer arrival process is a Poisson process. Some recent empirical studies (see, for example, [4] for references) suggest, however, that arrival processes may exhibit overdispersion, i.e., the variance of the number of arrivals in an interval is larger than the corresponding mean. This has triggered research on queueing systems with overdispersed input.
The focus of the present paper is on infinite-server queues with, as input process, a doubly stochastic process, also known as a Cox process. Such a process can be seen as a Poisson process in which the rate is not a constant; instead, the rate process \(\{\Lambda (t), t \in {{\mathbb {R}}}\}\) itself is a (nonnegative) stochastic process. As an immediate consequence of the law of total variance, Cox processes indeed are overdispersed. Before describing the main contributions of the present paper, we first provide a brief account of the existing literature in this area.
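The law-of-total-variance argument can be made concrete in two lines: if \(N\,|\,\Lambda \) is Poisson with (random) parameter \(\Lambda \), then \({{\mathbb {V}}}\mathrm{ar}\,N = {{\mathbb {E}}}\,\Lambda + {{\mathbb {V}}}\mathrm{ar}\,\Lambda \ge {{\mathbb {E}}}\,N\). The following minimal simulation sketch (plain Python; the gamma-distributed rate is an illustrative choice and not part of any model in this paper) shows the resulting overdispersion numerically:

```python
import math
import random

rng = random.Random(42)

def poisson(lam):
    # Knuth's method; adequate for the moderate rates drawn below.
    thresh, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= thresh:
            return k
        k += 1

# Mixed Poisson sample: the rate itself is random (here gamma
# distributed with E[rate] = 3 and Var[rate] = 4.5).
reps = 200_000
samples = [poisson(rng.gammavariate(2.0, 1.5)) for _ in range(reps)]

mean = sum(samples) / reps
var = sum((x - mean) ** 2 for x in samples) / reps

# Law of total variance: Var N = E[rate] + Var[rate] = 7.5 > E N = 3.
```

With these parameters the empirical variance should settle near 7.5, well above the empirical mean near 3, in line with the overdispersion of Cox processes.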
Literature In [7], the arrival process of the infinite-server queues is a Cox process in which the arrival rate is a shot-noise process. More specifically, the jumps of the shot-noise process occur according to a homogeneous Poisson process and are i.i.d. (independent and identically distributed) with a general distribution, and between jumps, the shot-noise process decays exponentially fast. The main object of study in [7] is a feedforward network of infinite-server queues, and the main result is the joint transform of the shot-noise-driven arrival rates and the numbers of customers in the queues.
Infinite-server queues are also studied in [2] and [8], but the arrival process there is a Hawkes process, the so-called self-exciting process: jumps in the shot-noise process in turn increase the intensity with which new jumps occur. Daw and Pender [2] present several interesting motivating examples. They consider deterministic jump sizes in the shot-noise process and study in particular the Hawkes/Ph/\(\infty \) and Hawkes/D/\(\infty \) queues, obtaining detailed expressions for moments and autocovariances. Koops et al. [8] allow generally distributed jump sizes. For the case of exponentially distributed service times, the joint Laplace and z-transform of the Hawkes intensity and the number of customers is characterized via a partial differential equation, and that PDE is exploited to obtain recursive expressions for (joint) moments of the Hawkes intensity and the number of customers. For the case of generally distributed service times, the Hawkes process is viewed as a branching process, which allows expressing the z-transform of the number of customers in terms of the solution of a fixed-point equation.
Main goals and results In this paper we aim to develop a general framework for the study of infinite-server queues with, as input, a quite general Cox process, and to provide an exact analysis of networks of such queues. The modeling framework of [7] is extended in several ways; in particular, the shot-noise process is multivariate and is driven not by a compound Poisson process but by a Lévy subordinator. The class of Lévy subordinators not only contains compound Poisson processes, but also, for example, linear drifts and Gamma processes (where the latter process has the notable property that it exhibits infinitely many jumps in any time interval of positive length).
The main results of the paper are compact, elegant transform expressions for joint distributions of arrival rates and numbers of customers, and properties of the departure processes, which reveal an interesting generalization of the classical results for the number-of-customers process and the departure process in the M/G/\(\infty \) queue. While large parts of the paper are quite technical, we also try to make clear why a detailed analysis of this complicated network is possible. In a nutshell, the following convenient properties are crucial for the analysis: (i) Lévy processes have stationary and independent increments, (ii) in a shot-noise process with exponential decay, all shots simultaneously decay at the same exponential rate, independently of each other, and (iii) in (a network of) infinite-server queues, all customers move through the queues independently, without interfering with each other.
Motivation Our motivation for this study is threefold. (i) We started studying queues with Coxian input as a framework naturally allowing the incorporation of overdispersion. (ii) When we realized that a detailed analysis of an isolated infinite-server queue with Coxian input is possible, we were motivated to develop a framework for studying networks of infinite-server queues with a multivariate Coxian input process that is of great generality but still allows a detailed exact analysis. Together with product-form networks and linear stochastic fluid networks (cf. [6]), these are some of the rare examples of queueing networks for which one can obtain the joint distribution of key performance measures. (iii) Finally, while infinite-server queues with overdispersed input have various applications, we were in particular inspired by Web server applications. Consider the number of simultaneous visitors to a popular Web site or online video [11, 12]. While the arrival process of visitors to the Web site may well be a Poisson process, its rate may jump up due to some (external) event, then decay gradually, only to jump up again because of another event. Such an example formed one of the motivations for [2, 7, 8], which all study infinite-server queues with an overdispersed arrival process.
Cox input processes driven by a multivariate Lévy subordinator are quite relevant for the above-mentioned example of a Web site with a visit rate that jumps up because of a certain event and subsequently decays again. They allow one to take into account multiple, possibly related, events that affect the visit rate of the Web site. Those events could occur instantaneously (the Lévy subordinator being a compound Poisson process), but there might also, for example, be a linear (fluid) component.
Since we also consider networks of infinite-server queues, we can model the situation in which a visitor to a Web site, after a while, clicks to move to another Web site.
Organization of the paper In Sect. 2 we introduce the multivariate shot-noise process and we describe distributional properties of this process. Section 3 studies a single infinite-server queue with, as input process, the above-described Cox process. The general case of a network of infinite-server queues is dealt with in Sect. 4. Each of these three sections starts with a brief overview of related known results: the one-dimensional shot-noise process (Sect. 2.1), the classical M/G/\(\infty \) queue (Sect. 3.1), and a tandem network of M/G/\(\infty \) queues (Sect. 4.1). Section 5 contains some conclusions and suggestions for further research.
3 Single infinite-server queue with a Cox input process
We consider an infinite-server queue in which, conditioned on \({\varvec{J}}(\cdot )\), the arrival process is a non-homogeneous Poisson process with rate function \(\Lambda (t)={\varvec{a}}'{\varvec{X}}(t)\) at time t, where \({\varvec{a}}\in \mathbb {R}_+^d\). It is throughout assumed that service times are i.i.d. and are independent of \({\varvec{J}}(\cdot )\) (hence also of \({\varvec{X}}(\cdot )\)) and the arrival process. The service times have distribution function \(F(\cdot )\) and complementary distribution function \({\bar{F}}(\cdot )\), and in addition we define \(F(s,t):=F(t)-F(s)\) for \(s<t\).
In this section, we start by reminding the reader of some well-known results concerning the classical M/G/\(\infty \) queue (Sect. 3.1); subsequently we derive, for the infinite-server queue with a Cox input process, the transform of queue lengths at various time epochs jointly with the numbers of arrivals in various time intervals (Sect. 3.2); and we conclude, in Sect. 3.3, with several calculations yielding moments as well as the joint steady-state transform of the number of customers in the infinite-server queue and the arrival rate.
3.1 The M/G/\(\infty \) queue
In this subsection we consider the classical M/G/\(\infty \) queue with fixed arrival rate \(\lambda \) and generic service time B. It is well known [3] that the number of customers L(t) at time t, starting with \(L(0)=0\), is Poisson distributed with mean
$$\begin{aligned} \lambda t \int _0^t \mathbb P(B> t-u) \frac{\mathrm{d}u}{t}= \lambda \int _0^t \mathbb P(B> u) \mathrm{d}u. \end{aligned}$$
This is easily seen by observing that the number of arrivals in [0, t] is Poisson(\(\lambda t\)), that different customers do not interact, and that, given that there were n arrivals in [0, t], the arrival epochs are independent and uniformly distributed on [0, t]. So the Poisson arrival process is thinned, and the number of customers still present at t follows a Poisson distribution. Thus, the generating function of L(t) immediately follows:
$$\begin{aligned} \mathbb E[w^{L(t)}|L(0)=0] = \exp \left( {(w-1)\lambda \int _0^t \mathbb P(B>u) \mathrm{d} u} \right) . \end{aligned}$$
(14)
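The thinning argument behind (14) is easy to test numerically. The sketch below (illustrative parameters; exponential service times are chosen for convenience, although (14) holds for general B) simulates L(t) directly and compares its empirical mean with \(\lambda \int _0^t \mathbb P(B>u)\,\mathrm{d}u\); since L(t) is Poisson, the empirical variance must match this value as well.

```python
import math
import random

rng = random.Random(7)

lam, mu, t = 2.0, 1.0, 3.0   # arrival rate, service rate, horizon

def poisson(l):
    # Knuth's method, adequate for small rates such as lam * t = 6.
    thresh, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= rng.random()
        if p <= thresh:
            return k
        k += 1

def L_at_t():
    # Poisson(lam*t) arrivals, uniform epochs on [0,t]; count those
    # whose exponential(mu) service extends beyond time t.
    n = poisson(lam * t)
    return sum(1 for _ in range(n)
               if rng.uniform(0, t) + rng.expovariate(mu) > t)

reps = 50_000
samples = [L_at_t() for _ in range(reps)]
mean = sum(samples) / reps
var = sum((x - mean) ** 2 for x in samples) / reps

theory = lam / mu * (1 - math.exp(-mu * t))  # lam * int_0^t P(B>u) du
```

Both the empirical mean and the empirical variance should be close to `theory`, confirming the Poisson character of L(t).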
The pleasing properties of non-interfering customer behavior and of uniformly distributed arrival epochs also quickly lead to elegant results for joint distributions of customer numbers at various epochs. Another well-known result about the M/G/\(\infty \) queue, going back to Mirasol [10], is that in stationarity the departure process of the queue is also Poisson. We now study the question of how these results change in the setting of the Cox input process.
Our first objective is to derive the transform of queue lengths at various points in time, jointly with the number of arrivals in corresponding intervals.
To this end, we consider \(n\in {{\mathbb {N}}}\) time intervals, say \((t_0,t_1]\) up to \((t_{n-1},t_n]\) (where it is assumed that \(t_{i-1}<t_{i}\) for \(i=1,\ldots ,n\)). Our goal is to establish the joint transform of the queue lengths \(L_i\) at each of the \(t_i\) and the numbers of arrivals \(A_i\) in each of the intervals \((t_{i-1},t_i]\), i.e., we shall compute the transform, for \({\varvec{w}}\in {{\mathbb {R}}}^{n+1}\) and \({\varvec{z}}\in {{\mathbb {R}}}^n\),
$$\begin{aligned} \Psi ({\varvec{w}},{\varvec{z}}):={{\mathbb {E}}} \left( \prod _{i=0}^n w_i^{L_i}\cdot \prod _{i=1}^n z_i^{A_i}\right) . \end{aligned}$$
Notice that a job arriving in the interval \((t_{i-1},t_i]\) contributes to \(A_i\) (and does not contribute to any of the other \(A_j\)) and potentially contributes to \(L_i\) up to \(L_n\). More precisely, supposing that the job arrives at \(s\in (t_{i-1},t_i]\), it contributes to \(L_i\) up to \(L_j\) with probability \({F}(t_j-s,t_{j+1}-s)\), for \(j\in \{i,\ldots ,n\}\), where we define \(t_{n+1}:=\infty \). Conditional on \(\Lambda (\cdot )=\lambda (\cdot )\), this concerns a Poisson contribution with parameter
$$\begin{aligned} \int _{t_{i-1}}^{t_i} \lambda (s) {F}(t_j-s,t_{j+1}-s)\,\mathrm{d}s; \end{aligned}$$
conditional on \(\Lambda (\cdot )=\lambda (\cdot )\), all these contributions are independent. In addition, we recall that the probability-generating function (evaluated in z) of a Poisson random variable with parameter \(\lambda \) equals \(\exp (\lambda (z-1))\). Combining the above elements, we obtain
$$\begin{aligned} \Psi ({\varvec{w}},{\varvec{z}}) ={{\mathbb {E}}} \exp \left( \int _{-\infty }^{t_n}\Lambda (s) \psi (s\,|\,{\varvec{w}},{\varvec{z}})\,\mathrm{d}s\right) , \end{aligned}$$
(15)
where, with \(t_{-1}:=-\infty \),
$$\begin{aligned} \psi (s\,|\,{\varvec{w}},{\varvec{z}}) =\sum _{i=0}^n \psi _i(s\,|\,{\varvec{w}},{\varvec{z}}) 1_{\{s\in (t_{i-1},t_i]\}}; \end{aligned}$$
the individual \( \psi _i(s\,|\,{\varvec{w}},{\varvec{z}})\) are defined by
$$\begin{aligned} \psi _0(s\,|\,{\varvec{w}},{\varvec{z}}) :=&\, \sum _{j=0}^n F(t_{j}-s, t_{j+1}-s)\left( \prod _{i=0}^jw_i-1\right) \\ =&\, \sum _{j=0}^n F(t_{j}-s, t_{j+1}-s) \prod _{i=0}^jw_i +F(t_0-s)-1 \end{aligned}$$
for \(i=0\), and
$$\begin{aligned} \psi _i(s\,|\,{\varvec{w}},{\varvec{z}}):=&\, \sum _{j=i}^n F(t_{j}-s, t_{j+1}-s)\left( z_i\prod _{k=i}^jw_k-1\right) \\ =&\,\sum _{j=i}^n F(t_{j}-s, t_{j+1}-s)z_i\prod _{k=i}^jw_k +F(t_i-s)-1 \end{aligned}$$
for \(i\in \{1,\ldots ,n\}\).
Representation (15) holds in general, i.e., for any nonnegative arrival-rate process \(\Lambda (\cdot )\). For the case of multivariate shot-noise, however, the expression can be made more explicit. To this end, we substitute (cf. (6))
$$\begin{aligned} \Lambda (s) = {\varvec{a}}'{\varvec{X}}(s)= {\varvec{a}}'\int _{-\infty }^s \mathrm{e}^{-Q(s-r)} \mathrm{d}{\varvec{J}}(r). \end{aligned}$$
By applying Eq. (10), we obtain that (15) equals
$$\begin{aligned} \Psi ({\varvec{w}},{\varvec{z}})=&\,{{\mathbb {E}}} \exp \left( {\varvec{a}}'\int _{-\infty }^{t_n} \int _{-\infty }^{s} \mathrm{e}^{-Q(s-r)} \mathrm{d}{\varvec{J}}(r) \,\psi (s\,|\,{\varvec{w}},{\varvec{z}})\,\mathrm{d}s\right) \\ =&\,{{\mathbb {E}}} \exp \left( {\varvec{a}}'\int _{-\infty }^{t_n} \int _{r}^{t_n} \mathrm{e}^{-Q(s-r)} \psi (s\,|\,{\varvec{w}},{\varvec{z}})\,\mathrm{d}s\,\mathrm{d}{\varvec{J}}(r)\right) \\ =&\, \exp \left( -\int _{-\infty }^{t_n} \eta \left( - \int _r^{t_n}\mathrm{e}^{-Q'(s-r)}\,\psi (s\,|\,{\varvec{w}},{\varvec{z}})\mathrm{d}s\,{\varvec{a}}\right) \,\mathrm{d}r\right) . \end{aligned}$$
We have thus established the following result.
Interestingly, the above result directly enables us to describe the distribution of the numbers of departures in all intervals. Let \(D_i\) be the number of departures in \((t_{i-1},t_i]\). Then we would like to evaluate
$$\begin{aligned} \Phi ({\varvec{v}}) := {{\mathbb {E}}}\left( \prod _{i=1}^n v_i^{D_i}\right) . \end{aligned}$$
Now observe that \(L_{i}=L_{i-1}+A_i-D_i\), and hence \(D_i=L_{i-1}-L_{i}+A_i\). As a consequence,
$$\begin{aligned} \Phi ({\varvec{v}})&= {{\mathbb {E}}}\left( \prod _{i=1}^n v_i^{L_{i-1}-L_{i}+A_i}\right) = {{\mathbb {E}}} \left( v_1^{L_0}\cdot \prod _{i=1}^{n-1}\left( \frac{v_{i+1}}{v_i}\right) ^{L_i} \cdot v_n^{-L_n}\cdot \prod _{i=1}^n v_i^{A_i}\right) . \end{aligned}$$
With \(w_0({\varvec{v}}) := v_1\), \(w_i({\varvec{v}}):=v_{i+1}/v_i\) for \(i\in \{1,\ldots ,n-1\}\) and \(w_n({\varvec{v}}):= v_n^{-1}\), we thus find the following result.
3.3 Explicit calculations
Let L(t) be the number of customers present at time t. In this subsection we compute (i) the mean and variance of L(0), (ii) the joint transform of \(\Lambda (0)\) and L(0), and (iii) the covariance between L(0) and L(t) for some \(t>0.\) As before, we assume that the process started at \(-\infty \), so that it displays stationary behavior at time 0 (and consequently at t as well). In principle, (factorial) moments can be derived by differentiating \(\Psi ({\varvec{w}},{\varvec{z}})\) suitably often and plugging in \({\varvec{w}}={\varvec{1}}\) and \({\varvec{z}}={\varvec{1}}\), but elegant direct arguments can be given, as we show now.
We first introduce some notation. Define \(\beta \) as the mean service time:
$$\begin{aligned} \beta = \int _0^\infty {\bar{F}}(s) \,\mathrm{d}s. \end{aligned}$$
The density of the residual service time \(B_\mathrm{e}\) is given by \(f_\mathrm{e}(s):= {\bar{F}}(s)/\beta .\)
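As a concrete illustration (with a hypothetical uniform service time on [0, 1], so that \(\beta = 1/2\) and \(f_\mathrm{e}(s) = 2(1-s)\)), the following sketch checks numerically that \(f_\mathrm{e}\) integrates to one and computes \({{\mathbb {E}}}\,B_\mathrm{e} = 1/3\):

```python
# Numerical sketch: equilibrium (residual) service-time density
# f_e(s) = F_bar(s) / beta, with beta = int_0^inf F_bar(s) ds.
# Illustrative service time: B uniform on [0, 1], so F_bar(s) = 1 - s.

def F_bar(s):
    return max(0.0, 1.0 - s)

def trapezoid(f, a, b, n=10_000):
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b)
                + sum(f(a + i * h) for i in range(1, n)))

beta = trapezoid(F_bar, 0.0, 1.0)        # mean service time, = 1/2

def f_e(s):
    return F_bar(s) / beta               # residual density, = 2(1 - s)

mass = trapezoid(f_e, 0.0, 1.0)                       # should equal 1
mean_res = trapezoid(lambda s: s * f_e(s), 0.0, 1.0)  # E B_e = 1/3
```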
3.4 \(\circ \) Mean and variance of \(\Lambda (0)\) and L(0)
It is easily checked that \({{\mathbb {E}}}\,\Lambda (0)=\lambda := {\varvec{a}}'\,Q^{-1}\,{\varvec{\varrho }}\) and \({{\mathbb {V}}}\mathrm{ar}\,\Lambda (0)={\varvec{a}}'\,\Sigma _0\,{\varvec{a}}\). We therefore now concentrate on the mean and variance of L(0).
Let \({{\mathscr {P}}}(\mu )\) denote a Poisson random variable with parameter \(\mu \). It is straightforward that
$$\begin{aligned} {{\mathbb {E}}}\,L(0) =&\, {{\mathbb {E}}}\left( {{\mathscr {P}}}\left( \int _{-\infty }^0 \Lambda (s) {\bar{F}}(-s)\,\mathrm{d}s\right) \right) ={\mathbb E}\left( \int _{-\infty }^0 \Lambda (s) {\bar{F}}(-s)\,\mathrm{d}s\right) . \end{aligned}$$
We thus find, with \({\varvec{J}}\) shorthand notation for the entire path of the process \({\varvec{J}}(\cdot )\),
$$\begin{aligned} {{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}]&=\int _{-\infty }^0\Lambda (s)\bar{F}(-s)\,\mathrm{d}s=\int _0^\infty \Lambda (-s){\bar{F}}(s)\,\mathrm{d}s \nonumber \\&=\beta \int _0^\infty \Lambda (-s)f_\mathrm{e}(s)\,\mathrm{d}s =\beta \, {{\mathbb {E}}}[\Lambda (-B_\mathrm{e})\,|\,{\varvec{J}}]. \end{aligned}$$
(16)
For later use, we rewrite this expression as
$$\begin{aligned} \beta \, {{\mathbb {E}}}[\Lambda (-B_\mathrm{e})\,|\,{\varvec{J}}]&=\beta \,{{\mathbb {E}}}[{\varvec{a}}'{\varvec{X}}(-B_\mathrm{e})\,|\,{\varvec{J}}]=\beta \,{\varvec{a}}'\,{{\mathbb {E}}}[X(-B_\mathrm{e})\,|\,{\varvec{J}}] \nonumber \\&=\beta \int _{-\infty }^0\left( {{\mathbb {E}}}\,{\varvec{a}}'\,\mathrm{e}^{-Q(-B_\mathrm{e}-s)}1_{\{s\le -B_\mathrm{e}\}}\right) \,\mathrm{d}{\varvec{J}}(s)\nonumber \\&{\mathop {=}\limits ^\mathrm{d}}\beta \int _{0}^{\infty }\left( {{\mathbb {E}}}\,\mathrm{e}^{-Q'(s-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s\}}\right) '\mathrm{d}{\varvec{J}}(s),\, \end{aligned}$$
(17)
where the last step is due to the time-reversibility of \({\varvec{J}}(\cdot )\). Applying (7), we obtain
$$\begin{aligned} {{\mathbb {E}}}\,L(0) =&\, \beta \, {{\mathbb {E}}}\,\Lambda (0) =\lambda \beta . \end{aligned}$$
(18)
Now we move to computing \({{\mathbb {V}}}\mathrm{ar}\,L(0)\). Since L(0) is mixed Poisson, we have that \({{\mathbb {V}}}\mathrm{ar}(L(0)\,|\,{\varvec{J}})={{\mathbb {E}}}(L(0)\,|\,{\varvec{J}})\), and thus \({{\mathbb {E}}}\,{{\mathbb {V}}}\mathrm{ar}(L(0)\,|\,{\varvec{J}})={\mathbb E}\,L(0)=\lambda \beta \). Using the law of total variance, (16), (18) and (9),
$$\begin{aligned} {{\mathbb {V}}}\mathrm{ar}\, L(0)&={{\mathbb {E}}}\,{{\mathbb {V}}}\mathrm{ar}\,(L(0)\,|\,{\varvec{J}})+{{\mathbb {V}}}\mathrm{ar}\,({\mathbb E}[L(0)\,|\,{\varvec{J}}]) \nonumber \\&=\lambda \beta +{{\mathbb {V}}}\mathrm{ar}\left( {{\mathbb {E}}}(\Lambda (-B_\mathrm{e})\,|\,{\varvec{J}})\right) \beta ^2 \nonumber \\&=\lambda \beta +\beta ^2\int _0^\infty {\varvec{a}}'\left( {{\mathbb {E}}}\,\mathrm{e}^{-Q(s-B_\mathrm{e})}1_{\{B_\mathrm{e}\le s\}}\right) \Sigma \left( {{\mathbb {E}}}\,\mathrm{e}^{-Q'(s-B_\mathrm{e})}1_{\{B_\mathrm{e}\le s\}}\right) {\varvec{a}}\,\mathrm{d}s. \end{aligned}$$
(19)
Expression (19) can be substantially simplified, which we do later in this section. Observe that \({{\mathbb {V}}}\mathrm{ar}\, L(0) \ge {{\mathbb {E}}}\, L(0)\), reflecting the fact that Coxian arrival rates lead to overdispersed arrival processes; see, for example, [4, 7].
3.5 \(\circ \) Joint transform of \(\Lambda (0)\) and L(0)
Here the goal is to determine \({{\mathbb {E}}}\,\mathrm{e}^{-v\Lambda (0)}\,w^{L(0)}\). By arguments similar to those used in Sect. 3.2,
$$\begin{aligned} {{\mathbb {E}}}\, \mathrm{e}^{-v\Lambda (0)}\,w^{L(0)}&=\,{\mathbb E}\exp \left( (w-1) \int _{-\infty }^0 \Lambda (s) {\bar{F}}(-s)\,\mathrm{d}s -v\, \Lambda (0) \right) \\&=\,{{\mathbb {E}}}\exp \left( (w-1){\varvec{a}}' \int _{-\infty }^0 {\varvec{X}}(s) \bar{F}(-s)\,\mathrm{d}s -v\, {\varvec{a}}'{\varvec{X}}(0) \right) . \end{aligned}$$
Combining this relation with (cf. (6))
$$\begin{aligned} \int _{-\infty }^0 {\varvec{X}}(s) {\bar{F}}(-s)\,\mathrm{d}s&=\, \int _{-\infty }^0 \int _{-\infty }^s \mathrm{e}^{-Q(s-r)} \,\mathrm{d}{\varvec{J}}(r)\, {\bar{F}}(-s)\,\mathrm{d}s\\&=\, \int _{-\infty }^0 \int _{r}^0 \mathrm{e}^{-Q(s-r)} {\bar{F}}(-s)\,\mathrm{d}s \,\mathrm{d}{\varvec{J}}(r) \\&=\, \int _0^{\infty }\int _{0}^s \mathrm{e}^{-Q(s-r)} {\bar{F}}(r)\,\mathrm{d}r \,\mathrm{d}{\varvec{J}}(s), \end{aligned}$$
yields
$$\begin{aligned} {{\mathbb {E}}}\, \mathrm{e}^{-v\Lambda (0)}\,w^{L(0)} = {{\mathbb {E}}}\exp \left( {\varvec{a}}' \int _0^{\infty } \Omega (v,w,s) \,\mathrm{d}{\varvec{J}}(s) \right) , \end{aligned}$$
where
$$\begin{aligned} \Omega (v,w,s):=&\, (w-1) \int _0^{s} \mathrm{e}^{-Q(s-r)} \bar{F}(r)\,\mathrm{d}r -v\, \mathrm{e}^{-Qs} \\ =&\, (w-1) \beta \, {{\mathbb {E}}}\, \mathrm{e}^{-Q(s-B_\mathrm{e})} 1_{\{B_\mathrm{e}\le s\}} -v\, \mathrm{e}^{-Qs} . \end{aligned}$$
Applying (8), we thus obtain the following result.
It is possible to apply this joint transform to compute moments of L(0) and \(\Lambda (0)\), as well as their mixed moments, but lower moments can typically be computed by direct arguments. The mean and variance of \(\Lambda (0)\) and L(0) have been identified above. We therefore now focus on the covariance between \(\Lambda (0)\) and L(0). Since \({{\mathbb {E}}}[\Lambda (0)\,|\,{\varvec{J}}]=\Lambda (0)\) (as \({\varvec{X}}(\cdot )\) is a functional of \({\varvec{J}}(\cdot )\)), and since \(B_{\mathrm{e}}\) is independent of \({\varvec{J}}\), we find
$$\begin{aligned} {{\mathbb {C}}}\mathrm{ov}(\Lambda (0),L(0))&={{\mathbb {C}}}\mathrm{ov}({{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}],\Lambda (0))={{\mathbb {C}}}\mathrm{ov}({{\mathbb {E}}}[\Lambda (-B_{\mathrm{e}})\,|\,{\varvec{J}}],\Lambda (0))\\&={{\mathbb {C}}}\mathrm{ov}(\Lambda (-B_{\mathrm{e}}),\Lambda (0))={\mathbb E}\,{{\mathbb {C}}}\mathrm{ov}(\Lambda (-B_{\mathrm{e}}),\Lambda (0)\,|\,B_{\mathrm{e}}) \\&={\varvec{a}}'\,{{\mathbb {E}}}\,\Sigma _{B_{\mathrm{e}}}\,{\varvec{a}}={\varvec{a}}'\,\Sigma _0\,{{\mathbb {E}}}\,\mathrm{e}^{-Q'B_{\mathrm{e}}}\,{\varvec{a}}\ , \end{aligned}$$
where we employ the fact that the covariance matrix between \({\varvec{X}}(t)\) and \({\varvec{X}}(t+h)\), which is the same as that between \({\varvec{X}}(t-h)\) and \({\varvec{X}}(t)\), is given by \(\Sigma _h=\Sigma _0\,\mathrm{e}^{-Q'h}\).
3.6 \(\circ \) Covariance between L(0) and L(t)
The starting point is the law of total covariance:
$$\begin{aligned} {{\mathbb {C}}}\mathrm{ov}\, (L(0),L(t))&={{\mathbb {E}}}\,{{\mathbb {C}}}\mathrm{ov}\,(L(0),L(t)\,|\,{\varvec{J}})+{{\mathbb {C}}}\mathrm{ov}({\mathbb E}[L(0)\,|\,{\varvec{J}}], {{\mathbb {E}}}[L(t)\,|\,{\varvec{J}}]). \end{aligned}$$
We evaluate the two terms separately. The first term can be rewritten as follows. Let \(C_1(t)\) denote the number of customers that arrive before time 0 and depart in (0, t]; \(C_2(t)\) the number of customers that arrive before time 0 and depart after t; and \(C_3(t)\) the number of customers that arrive in (0, t] and depart after t. Evidently, due to the conditional independence of these three quantities,
$$\begin{aligned} {{\mathbb {C}}}\mathrm{ov}\,(L(0),L(t)\,|\,{\varvec{J}})&={{\mathbb {C}}}\mathrm{ov}\,(C_1(t)+C_2(t),C_2(t)+C_3(t)\,|\,{\varvec{J}})\\ {}&={{\mathbb {V}}}\mathrm{ar}\,(C_2(t)\,|\,{\varvec{J}})={{\mathbb {E}}}(C_2(t)\,|\,{\varvec{J}}), \end{aligned}$$
with, mimicking the above arguments,
$$\begin{aligned} {{\mathbb {E}}}(C_2(t)\,|\,{\varvec{J}})&{\mathop {=}\limits ^\mathrm{d}}\beta \int _{0}^\infty {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s+t-B_\mathrm{e})}{\varvec{a}}\,1_{\{t<B_\mathrm{e}\le t+s\}}\right) \,\mathrm{d}{\varvec{J}}(s), \end{aligned}$$
where the last equality is due to the fact that the conditional distribution of \(C_2(t)\) given \({\varvec{J}}\) is Poisson. Thus, with \(F_\mathrm{e}(\cdot )\) denoting the distribution function of \(B_\mathrm{e}\) and \({\bar{F}}_\mathrm{e}(\cdot )\) the corresponding complementary distribution function,
$$\begin{aligned} {{\mathbb {E}}}\,{{\mathbb {C}}}\mathrm{ov}\,(L(0),L(t)\,|\,{\varvec{J}})&=\beta \int _{0}^\infty {{\mathbb {E}}}\left( {\varvec{a}}'\mathrm{e}^{-Q(s+t-B_\mathrm{e})}{\varvec{\varrho }}1_{\{t<B_\mathrm{e}\le t+s\}}\right) \,\mathrm{d}s\\&=\beta \, {{\mathbb {E}}}\left( \int _{B_{\mathrm{e}}-t}^{\infty }\left( {\varvec{a}}'\mathrm{e}^{-Q(s-(B_\mathrm{e}-t))}{\varvec{\varrho }}\right) \mathrm{d}s\,1_{\{B_\mathrm{e}>t\}}\right) \\&=\beta \,{{\mathbb {E}}}\left( \int _{0}^{\infty }{\varvec{a}}'\mathrm{e}^{-Qs}\,\mathrm{d}s\,1_{\{B_\mathrm{e}>t\}}\right) {\varvec{\varrho }} \\&=\beta \,{\varvec{a}}'Q^{-1}\,{\varvec{\varrho }}\, {{\mathbb {P}}}(B_\mathrm{e}>t)=\lambda \beta {\bar{F}}_\mathrm{e}(t). \end{aligned}$$
Next, we move to the second term. To this end, we first recall (cf. (16) and (17)) that
$$\begin{aligned} {{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}]=&\beta \int _{0}^{\infty } {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s\}}\right) \mathrm{d}{\varvec{J}}(s),\\ {{\mathbb {E}}}[L(t)\,|\,{\varvec{J}}]=&\beta \int _{-t}^{\infty } {\mathbb E}\left( \mathrm{e}^{-Q'(s+t-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s+t\}}\right) \mathrm{d}{\varvec{J}}(s). \end{aligned}$$
As \({\varvec{J}}\) has independent increments, when considering the covariance between \({{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}]\) and \({\mathbb E}[L(t)\,|\,{\varvec{J}}]\), we can restrict ourselves to integrating over \(s>0\) only; more concretely,
$$\begin{aligned}&{{\mathbb {C}}}\mathrm{ov}({{\mathbb {E}}}[L(0)\,|\,{\varvec{J}}],{{\mathbb {E}}}[L(t)\,|\,{\varvec{J}}])\nonumber \\&\quad =\beta ^2\,{{\mathbb {C}}}\mathrm{ov}\left( \int _{0}^\infty {\mathbb E}\left( \mathrm{e}^{-Q'(s-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s\}}\right) \mathrm{d}{\varvec{J}}(s),\right. \nonumber \\&\quad \left. \quad \int _{0}^{\infty } {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s+t-B_\mathrm{e})}{\varvec{a}}\,1_{\{B_\mathrm{e}\le s+t\}}\right) \mathrm{d}{\varvec{J}}(s)\right) \nonumber \\&\quad =\beta ^2\int _0^\infty {{\mathbb {E}}}\left( {\varvec{a}}'\,\mathrm{e}^{-Q(s-B_\mathrm{e})}1_{\{B_\mathrm{e}\le s\}}\right) \Sigma \, {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s+t-B_\mathrm{e})}\,{\varvec{a}}\,1_{\{B_\mathrm{e}\le s+t\}}\right) \mathrm{d}s\,, \end{aligned}$$
(20)
where the last step follows by using (9).
As it turns out, the last formula (as well as (19)) can be simplified as follows. Let \(B_{\mathrm{e},1}\) and \(B_{\mathrm{e},2}\) be two i.i.d. copies of \(B_\mathrm{e}\). Recalling (12) and denoting \(x^+:=x\vee 0\) and \(x^-:=-x\wedge 0=(-x)^+\), (20) can be rewritten as
$$\begin{aligned}&\beta ^2 {\varvec{a}}'\int _0^\infty {{\mathbb {E}}}\left( \mathrm{e}^{-Q(s-B_{\mathrm{e}})}1_{\{B_{\mathrm{e}}\le s\}}\right) \Sigma \, {{\mathbb {E}}}\left( \mathrm{e}^{-Q'(s+t-B_{\mathrm{e}})}1_{\{B_{\mathrm{e}}\le s+t\}}\right) \mathrm{d}s\,{\varvec{a}}\nonumber \\&\quad =\beta ^2 {\varvec{a}}'\,{{\mathbb {E}}}\left( \int _{B_{\mathrm{e},1}\vee (B_{\mathrm{e},2}-t)}^\infty \mathrm{e}^{-Q(s-B_{\mathrm{e},1})} \Sigma \,\mathrm{e}^{-Q'(s+t-B_{\mathrm{e},2})}\mathrm{d}s\right) \,{\varvec{a}} \nonumber \\&\quad =\beta ^2 {\varvec{a}}'\,{{\mathbb {E}}}\left( \mathrm{e}^{-Q(B_{\mathrm{e},1}-B_{\mathrm{e},2}+t)^-} \left( \int _{0}^\infty \mathrm{e}^{-Qs} \Sigma \,\mathrm{e}^{-Q's}\mathrm{d}s\right) \mathrm{e}^{-Q'(B_{\mathrm{e},1}-B_{\mathrm{e},2}+t)^+}\right) {\varvec{a}} \nonumber \\&\quad =\beta ^2 {\varvec{a}}'\,{{\mathbb {E}}}\left( \mathrm{e}^{-Q(B_{\mathrm{e},1}-B_{\mathrm{e},2}+t)^-} \Sigma _0\, \mathrm{e}^{-Q'(B_{\mathrm{e},1}-B_{\mathrm{e},2}+t)^+}\right) {\varvec{a}}\,, \end{aligned}$$
where, in the last equality, (11) has been used. For any \(s\in \mathbb {R}\) we have that, recalling Equation (13),
$$\begin{aligned} {\varvec{a}}'\,\mathrm{e}^{-Qs^-}\,\Sigma _0\,\mathrm{e}^{-Q's^+}{\varvec{a}}={\varvec{a}}'\,\Sigma _0\,\mathrm{e}^{-Q'|s|}{\varvec{a}}={\varvec{a}}'\,\Sigma _{|s|}\,{\varvec{a}}. \end{aligned}$$
Thus, denoting by R(t) the autocorrelation function (with \(R(0)=1\)), adding the two terms yields
$$\begin{aligned} R(t)\cdot {{\mathbb {V}}}\mathrm{ar}\,L(0)&={{\mathbb {C}}}\mathrm{ov}(L(0),L(t))=\lambda \beta {\bar{F}}_\mathrm{e}(t)+\beta ^2\,{\varvec{a}}'\,{{\mathbb {E}}}\left( \Sigma _{|B_{\mathrm{e},1}-B_{\mathrm{e},2}+t|}\right) {\varvec{a}}\,. \end{aligned}$$
(21)
In particular, when \(t=0\), (21) provides us with a simplified expression for \({{\mathbb {V}}}\mathrm{ar}\,L(0)\). The density of \(B_{\mathrm{e},1}-B_{\mathrm{e},2}\) is clearly symmetric around zero and is given by
$$\begin{aligned} g(x)=\int _0^\infty f_\mathrm{e}(y)f_\mathrm{e}(y+|x|)\,\mathrm{d}y. \end{aligned}$$
Since \(f_\mathrm{e}(\cdot )=\beta ^{-1}{\bar{F}}(\cdot )\) is non-increasing on \([0,\infty )\), this implies that \(g(\cdot )\) is unimodal.
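For exponential service times (an illustrative special case, with \(f_\mathrm{e}(y) = q\,\mathrm{e}^{-qy}\)), the integral can be evaluated in closed form: \(g(x) = \int _0^\infty q^2\mathrm{e}^{-qy}\mathrm{e}^{-q(y+|x|)}\,\mathrm{d}y = (q/2)\,\mathrm{e}^{-q|x|}\), which is indeed symmetric and unimodal. A short numerical sketch of this check:

```python
import math

q = 1.7  # illustrative rate; B (and hence B_e) exponential with rate q

def f_e(y):
    return q * math.exp(-q * y)

def g(x, n=20_000, upper=40.0):
    # g(x) = int_0^inf f_e(y) f_e(y + |x|) dy, by the trapezoid rule.
    h = upper / n
    vals = [f_e(i * h) * f_e(i * h + abs(x)) for i in range(n + 1)]
    return h * (0.5 * vals[0] + 0.5 * vals[-1] + sum(vals[1:-1]))

# Compare with the closed form (q/2) * exp(-q|x|) at a few points.
checks = [(x, g(x), 0.5 * q * math.exp(-q * abs(x)))
          for x in (-1.5, -0.3, 0.0, 0.3, 1.5)]
```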
We end this subsection by considering the one-dimensional case. Since both \(h(x):=\mathrm{e}^{-q|x|}\) (\(q>0\)) and g(x) are symmetric and unimodal (around zero), it follows from [15] that so is their convolution. Alternatively, this also follows from [5], as \(h(\cdot )\) is log-concave. This implies that
$$\begin{aligned} {{\mathbb {E}}}\,\mathrm{e}^{-q|B_{\mathrm{e},1}-B_{\mathrm{e},2}+t|}&=\int _{-\infty }^\infty \mathrm{e}^{-q|x+t|}g(x)\,\mathrm{d}x=\int _{-\infty }^\infty \mathrm{e}^{-q|-x+t|}g(-x)\,\mathrm{d}x \\&=\int _{-\infty }^\infty \mathrm{e}^{-q|t-x|}g(x)\,\mathrm{d}x \end{aligned}$$
is unimodal with a mode at zero, hence non-increasing on \([0,\infty )\), and thus so is \(R(\cdot )\). A similar result holds if \({\varvec{a}}\) is an eigenvector associated with a real-valued (positive) eigenvalue q of \(Q'\), since then \({\varvec{a}}'\,\Sigma _0\,\mathrm{e}^{-Q'|x|}\,{\varvec{a}}={\varvec{a}}'\,\Sigma _0\,{\varvec{a}}\, \mathrm{e}^{-q|x|}\). In general, however, even though \({\varvec{a}}'\,\Sigma _0\,\mathrm{e}^{-Q'|x|}\,{\varvec{a}}\) is a symmetric function, it is not clear whether it is decreasing on \([0,\infty )\) or log-concave. Since it vanishes as \(|x|\rightarrow \infty \), it is nevertheless clear that R(t) vanishes as \(t\rightarrow \infty \).
4 The network case
In this section we consider networks of infinite-server queues with a Cox input process. There are n queues, the arrival process of queue k being a non-homogeneous Poisson process with rate function \(\Lambda _k(t)={\varvec{a}}_k'{\varvec{X}}(t)\). Equivalently, the vector of arrival rates is \({\varvec{\Lambda }}(t)=A{\varvec{X}}(t)\) for some matrix A with rows \({\varvec{a}}_k'\). The \({\varvec{X}}(t)\) process is the same multivariate shot-noise process as before. Notice that in this construction the arrival processes at the various queues are potentially dependent.
Denote by \(p_{km}(t)\) the probability that a job entering at queue k at time 0 is at queue m at time t; likewise, \(p_{k0}(t)\) is the probability that a job entering at queue k at time 0 has left the network by time t. In the case of exponentially distributed service times and probabilistic routing, these \(p_{km}(t)\) can be computed more explicitly (relying on the machinery developed for phase-type distributions).
Before studying the joint queue length distribution in the network (Sect. 4.2) and presenting an example (Sect. 4.3), we briefly remind the reader of the structure of the joint queue length distribution in the case of two M/G/\(\infty \) queues in tandem (Sect. 4.1). This helps sharpen our intuition and will make it easier to digest and understand the results of Sect. 4.2.
4.1 Two M/G/\(\infty \) queues in tandem
Consider a model of two M/G/\(\infty \) queues 1, 2 in series, with arrival rate \(\lambda \) at queue 1, generic service time \(B_i\) at queue i, \(i=1,2\), and all customers moving from queue 1 to queue 2 before leaving the system. Queue 2 has no external arrivals. The same concepts that underlie the results mentioned in Sect. 3.1 (see, in particular, (14)) quickly lead to the following expression for the generating function of the joint distribution of the two queue lengths \(L_1(t),L_2(t)\):
$$\begin{aligned}&{\mathbb {E}}[w_1^{L_1(t)} w_2^{L_2(t)}|L_1(0)=L_2(0)=0]\\&\quad =\, \exp \left( {(w_1-1)\lambda \int _0^t \mathbb P(B_1>u) \mathrm{d} u + (w_2-1)\lambda \int _0^t \mathbb P(B_1 < u, B_1+B_2 > u)} \mathrm{d}u\right) . \end{aligned}$$
Furthermore, the generating function of the steady-state joint queue length distribution is given by
$$\begin{aligned} \mathbb E[w_1^{L_1} w_2^{L_2}] =\exp \big ({(w_1-1)\lambda \mathbb EB_1 + (w_2-1)\lambda \mathbb EB_2 }\big ). \end{aligned}$$
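The transient formula above is straightforward to validate by simulation. For i.i.d. exponential service times with rate \(\mu \) (an illustrative choice), \(\mathbb P(B_1<u, B_1+B_2>u)=\mu u\,\mathrm{e}^{-\mu u}\), so \({{\mathbb {E}}}\,L_1(t)\) and \({{\mathbb {E}}}\,L_2(t)\) are available in closed form; the sketch below compares them with empirical means:

```python
import math
import random

rng = random.Random(11)
lam, mu, t = 2.0, 1.0, 3.0   # arrival rate, service rate, horizon

def poisson(l):
    # Knuth's method, adequate for small rates such as lam * t = 6.
    thresh, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= rng.random()
        if p <= thresh:
            return k
        k += 1

def tandem_at_t():
    # Arrivals Poisson on [0,t]; a customer arriving at u occupies
    # queue 1 on [u, u+B1) and queue 2 on [u+B1, u+B1+B2).
    l1 = l2 = 0
    for _ in range(poisson(lam * t)):
        u = rng.uniform(0, t)
        b1, b2 = rng.expovariate(mu), rng.expovariate(mu)
        if u + b1 > t:
            l1 += 1
        elif u + b1 + b2 > t:
            l2 += 1
    return l1, l2

reps = 50_000
m1 = m2 = 0.0
for _ in range(reps):
    l1, l2 = tandem_at_t()
    m1 += l1
    m2 += l2
m1 /= reps
m2 /= reps

# lam * int_0^t exp(-mu u) du and lam * int_0^t mu u exp(-mu u) du:
th1 = lam / mu * (1 - math.exp(-mu * t))
th2 = lam / mu * (1 - math.exp(-mu * t) - mu * t * math.exp(-mu * t))
```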
4.2 Joint queue length
We focus on analyzing the stationary joint queue length distribution. In principle, virtually all quantities studied in the previous section can be derived again, at the expense of introducing rather heavy notation.
In this section, we let \(K_m\) denote the stationary queue length at node \(m\in \{1,\ldots ,n\}\). Our objective is to compute the joint probability-generating function
$$\begin{aligned} \Pi ({\varvec{w}}) = {{\mathbb {E}}}\left( \prod _{m=1}^n w_m^{K_m}\right) . \end{aligned}$$
Using the same arguments as in the previous section, we conclude that \(K_m\) has a mixed Poisson distribution. More precisely, \(K_m\) has a Poisson distribution with (random) parameter
$$\begin{aligned} \int _{-\infty } ^0 \sum _{{k}=1}^n\Lambda _{{k}}(s) p_{{k}m}(-s)\,\mathrm{d}s. \end{aligned}$$
(22)
It follows that
$$\begin{aligned} \Pi ({\varvec{w}}) =&\, {{\mathbb {E}}}\left( \prod _{m=1}^n w_m^{K_m}\right) = {{\mathbb {E}}}\exp \left( \sum _{m=1}^n (w_m-1) \int _{-\infty }^0 \sum _{{k}=1}^n \Lambda _{{k}}(s)\,p_{{k}m}(-s)\,\mathrm{d}s\,\right) \nonumber \\ =&\, {{\mathbb {E}}}\exp \left( \sum _{m=1}^n (w_m-1) \int _{-\infty }^0 \sum _{{k}=1}^n {\varvec{a}}_{{k}}'{\varvec{X}}(s)\,p_{{k}m}(-s)\,\mathrm{d}s\,\right) . \end{aligned}$$
(23)
Substituting (cf. (6))
$$\begin{aligned} {\varvec{X}}(s) = \int _{-\infty }^s \mathrm{e}^{-Q(s-r)}\mathrm{d}{\varvec{J}}(r)\end{aligned}$$
in (23) gives
$$\begin{aligned} \Pi ({\varvec{w}}) = {{\mathbb {E}}}\exp \left( \sum _{{k}=1}^n {\varvec{a}}_{{k}}' \int _{-\infty }^0\int _{r}^0 \mathrm{e}^{-Q(s-r)}\,\pi _{{k}}(s\,|\,{\varvec{w}}) \,\mathrm{d}s\,\mathrm{d}{\varvec{J}}(r) \right) , \end{aligned}$$
with
$$\begin{aligned} \pi _{{k}}(s\,|\,{\varvec{w}}):=&\, \sum _{m=1}^n p_{{k}m}(-s) \,(w_m-1)= \sum _{m=1}^n p_{{k}m}(-s) \,w_m +p_{{k}0}(-s)-1. \end{aligned}$$
This leads to the following result.

Theorem 4.1 With \(-\eta (\cdot )\) denoting the Lévy exponent of the subordinator \({\varvec{J}}(\cdot )\), the joint probability-generating function of the stationary queue lengths is given by
$$\begin{aligned} \Pi ({\varvec{w}}) = \exp \left( -\int _{-\infty }^0 \eta \left( -\sum _{{k}=1}^n \int _r^0 \big (\mathrm{e}^{-Q(s-r)}\big )'\,{\varvec{a}}_{{k}}\,\pi _{{k}}(s\,|\,{\varvec{w}}) \,\mathrm{d}s \right) \mathrm{d}r\right) . \end{aligned}$$
4.3 Example
In this illustrative example we consider a two-node tandem system in which the service times at both nodes are exponential with parameter \(\kappa >0\). We assume for ease that there is only input at the first queue and that all customers move from the first queue to the second queue. It is immediate that, for \(t\geqslant 0\),
$$\begin{aligned} p_{11}(t) = \mathrm{e}^{-\kappa t}, \,\,\,p_{12}(t) =\kappa t\,\mathrm{e}^{-\kappa t},\,\,\, p_{10}(t) = 1- \mathrm{e}^{-\kappa t}- \kappa t\,\mathrm{e}^{-\kappa t}; \end{aligned}$$
here we use that the total sojourn time in the system has an Erlang(2) distribution with rate \(\kappa \). We find that
$$\begin{aligned} \pi _1(t\,|\,w_1,w_2) = (w_1-1) \,\mathrm{e}^{\kappa t}- (w_2-1) \, \kappa t\,\mathrm{e}^{\kappa t}. \end{aligned}$$
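These closed forms are easy to verify numerically: the job-location process of the tandem is phase-type with a \(2\times 2\) sub-generator having diagonal entries \(-\kappa \) and upper off-diagonal entry \(\kappa \), so that \(p_{1m}(t)=(\mathrm{e}^{tA})_{1m}\). A sketch with a hypothetical value \(\kappa =2\):

```python
import math

def mat_exp2(A, t, terms=60):
    """e^{tA} for a 2x2 matrix via a truncated Taylor series."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]   # (tA)^m / m!
    for m in range(1, terms):
        B = [[t * A[i][j] / m for j in range(2)] for i in range(2)]
        term = [[sum(term[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

kappa = 2.0   # hypothetical service rate
t = 0.7
A = [[-kappa, kappa], [0.0, -kappa]]   # tandem phase sub-generator
p = mat_exp2(A, t)
p11, p12 = p[0][0], p[0][1]
p10 = 1.0 - p11 - p12                  # probability of having left by time t
```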
Assume for ease that \(d=1\) (i.e., one-dimensional shot-noise). We choose \(a_1=1\) (whereas \(a_2=0\), as we assumed no external input to the second queue). As a consequence, we have that \(\Lambda _1(t) = X(t)\) and \(\Lambda _2(t) =0\), where \(X(t)\) can be represented as
$$\begin{aligned} X(t) = \int _{(-\infty ,t]} \mathrm{e}^{-q(t-s)} \,\mathrm{d}J(s), \end{aligned}$$
for some \(q>0\) and a (scalar) Lévy subordinator \(J(\cdot )\) (with exponent \(-\eta (\cdot )\)).
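One concrete choice of subordinator (by no means the only one covered by the analysis) is a compound Poisson process with Exp(1) jumps, for which \(\eta '(0)={\mathbb {E}}J(1)\) equals the jump rate. A small simulation sketch with hypothetical parameters, checking that the long-run mean of the shot-noise intensity is \(\eta '(0)/q\); note that the time integral of \(X\) can be accumulated jump by jump in closed form, since each jump of size y contributes \(y(1-\mathrm{e}^{-q(T-t)})/q\) to \(\int _0^T X(u)\,\mathrm{d}u\).

```python
import math, random

random.seed(42)
rho, q, T = 2.0, 0.5, 5000.0   # hypothetical jump rate, decay rate, horizon

# X(t) = sum over jump epochs s <= t of Y_s * exp(-q (t - s))
area = 0.0
t = random.expovariate(rho)    # first jump epoch
while t <= T:
    y = random.expovariate(1.0)                       # Exp(1) jump size
    area += y * (1.0 - math.exp(-q * (T - t))) / q    # its contribution to int_0^T X
    t += random.expovariate(rho)

time_avg = area / T
mean_theory = rho * 1.0 / q    # eta'(0)/q, with eta'(0) = rho * E[Y] = rho
```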
Appealing to Theorem 4.1,
$$\begin{aligned} \Pi (w_1,w_2) = {{\mathbb {E}}}\big [w_1^{K_1} w_2^{K_2}\big ] = \exp \left( -\int _{-\infty }^0 \eta \left( -\int _r^0 \mathrm{e}^{-q(s-r)} \pi _1(s\,|\,w_1,w_2) \,\mathrm{d}s \right) \mathrm{d}r\right) . \end{aligned}$$
(24)
To evaluate this expression, observe that
$$\begin{aligned} \int _r^0 \mathrm{e}^{-q(s-r)} \pi _1(s\,|\,w_1,w_2) \,\mathrm{d}s=&\int _r^0 \mathrm{e}^{-q(s-r)}\left( (w_1-1) \,\mathrm{e}^{\kappa s}- (w_2-1) \, \kappa s\,\mathrm{e}^{\kappa s}\right) \mathrm{d}s \nonumber \\ =&\, (w_1-1) \frac{\mathrm{e}^{q r} -\mathrm{e}^{\kappa r}}{\kappa -q} +(w_2-1)\left( \frac{\kappa r\,\mathrm{e}^{\kappa r}}{\kappa -q} +\frac{\kappa (\mathrm{e}^{q r} -\mathrm{e}^{\kappa r})}{(\kappa -q)^2}\right) . \end{aligned}$$
(25)
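Identity (25) can be checked directly by numerical quadrature; the parameter values below are hypothetical (any \(\kappa \ne q\) and \(r<0\) work).

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
                      + 2 * sum(f(a + 2*i*h) for i in range(1, n // 2)))

kappa, q = 2.0, 1.0
w1, w2, r = 0.4, 0.7, -1.3   # hypothetical evaluation point (r < 0)

def integrand(s):
    # e^{-q(s-r)} * pi_1(s | w1, w2) from the example
    pi1 = (w1 - 1) * math.exp(kappa * s) - (w2 - 1) * kappa * s * math.exp(kappa * s)
    return math.exp(-q * (s - r)) * pi1

lhs = simpson(integrand, r, 0.0)   # left-hand side of (25)
rhs = ((w1 - 1) * (math.exp(q * r) - math.exp(kappa * r)) / (kappa - q)
       + (w2 - 1) * (kappa * r * math.exp(kappa * r) / (kappa - q)
                     + kappa * (math.exp(q * r) - math.exp(kappa * r)) / (kappa - q) ** 2))
```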
It is easy to obtain moments of \(K_1\) and \(K_2\) from (24) and (25) by differentiation with respect to \(w_1\) and \(w_2\), respectively. In particular, straightforward algebra yields
$$\begin{aligned} \mathbb E[K_1] =\mathbb E[K_2]= \frac{\eta '(0)}{\kappa q}. \end{aligned}$$
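This mean formula can be verified numerically by differentiating the transform (24). With the hypothetical choices \(\kappa =2\), \(q=1\) and a Gamma subordinator with \(\eta (\alpha )=\log (1+\alpha )\) (so \(\eta '(0)=1\); this is the choice considered below), a central finite difference of \(\Pi \) at \(w=1\) recovers \(1/(\kappa q)\) for both queues.

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
                      + 2 * sum(f(a + 2*i*h) for i in range(1, n // 2)))

kappa, q = 2.0, 1.0                  # hypothetical rates
eta = lambda a: math.log(1.0 + a)    # Gamma subordinator exponent, eta'(0) = 1

# coefficients of (w_1 - 1) and (w_2 - 1) in (25), written with r = -x, x >= 0
g1 = lambda x: (math.exp(-q * x) - math.exp(-kappa * x)) / (kappa - q)
g2 = lambda x: (-kappa * x * math.exp(-kappa * x) / (kappa - q)
                + kappa * (math.exp(-q * x) - math.exp(-kappa * x)) / (kappa - q) ** 2)

def Pi(w1, w2, X=80.0):
    # (24): Pi = exp( -int_0^inf eta( -(w1-1) g1(x) - (w2-1) g2(x) ) dx )
    f = lambda x: eta(-(w1 - 1) * g1(x) - (w2 - 1) * g2(x))
    return math.exp(-simpson(f, 0.0, X))

h = 1e-3
EK1 = (Pi(1 + h, 1) - Pi(1 - h, 1)) / (2 * h)   # central difference for E[K_1]
EK2 = (Pi(1, 1 + h) - Pi(1, 1 - h)) / (2 * h)   # central difference for E[K_2]
```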
Suppose now that \(J(\cdot )\) corresponds to a Gamma process [9, Section 1.2.4] with (without loss of generality) rate and shape parameters both equal to 1, i.e.,
$$\begin{aligned} \eta (\alpha ) =-\log \left( \frac{1}{1+\alpha }\right) = \log (1+\alpha ). \end{aligned}$$
The probability-generating function of the queue length in the first queue is therefore
$$\begin{aligned} \Pi (w,1) = \exp \left( -\int _{-\infty }^0 \log \left( 1+(1-w) \frac{\mathrm{e}^{q r} -\mathrm{e}^{\kappa r}}{\kappa -q}\right) \,\mathrm{d}r\right) . \end{aligned}$$
Using the Taylor expansion of the logarithm, the exponent of this expression can be rewritten as
$$\begin{aligned} \int _{-\infty }^0 \sum _{n=1}^\infty \frac{(-1)^{n}}{n} \left( v(w)\,( \mathrm{e}^{q r} -\mathrm{e}^{\kappa r})\right) ^n \mathrm{d}r, \end{aligned}$$
with \(v(w):= (1-w)/(\kappa -q).\) Applying the binomial theorem, this equals
$$\begin{aligned} \int _{-\infty }^0 \sum _{n=1}^\infty \frac{(-1)^{n}}{n} (v(w))^n\,\left( \sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) (-1)^{n-k} \,\mathrm{e}^{q r k+\kappa r(n-k)}\right) \mathrm{d}r. \end{aligned}$$
Swapping the order of the summations and the integral, we obtain
$$\begin{aligned} \sum _{n=1}^\infty \frac{1}{n} (v(w))^n\, \sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) (-1)^k\frac{1}{qk+\kappa (n-k)}. \end{aligned}$$
We thus end up with
$$\begin{aligned} {{\mathbb {E}}}\big [w^{K_1}\big ] =\exp \left( \sum _{n=1}^\infty \frac{1}{n} (v(w))^n\, \sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) (-1)^k\frac{1}{qk+\kappa (n-k)}\right) . \end{aligned}$$
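As a numerical cross-check (with hypothetical parameter values), the truncated series in the exponent can be compared against the integral form of the exponent of \(\Pi (w,1)\) given above; since \(|v(w)(\mathrm{e}^{qr}-\mathrm{e}^{\kappa r})|\leqslant 1/8\) for the parameters chosen here, a dozen terms already suffice.

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    return (h / 3) * (f(a) + f(b)
                      + 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
                      + 2 * sum(f(a + 2*i*h) for i in range(1, n // 2)))

kappa, q, w = 2.0, 1.0, 0.5   # hypothetical parameters
v = (1.0 - w) / (kappa - q)

# exponent via the integral form: -int_{-inf}^0 log(1 + v (e^{qr} - e^{kappa r})) dr, r = -x
f = lambda x: -math.log(1.0 + v * (math.exp(-q * x) - math.exp(-kappa * x)))
exponent_int = simpson(f, 0.0, 80.0)

# exponent via the truncated series (terms decay geometrically here)
exponent_ser = sum(
    (v ** n / n) * sum(math.comb(n, k) * (-1) ** k / (q * k + kappa * (n - k))
                       for k in range(n + 1))
    for n in range(1, 15))

pgf = math.exp(exponent_ser)   # = E[w^{K_1}]
```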
A similar procedure yields the probability-generating function of the second queue, but the resulting expressions are slightly more complicated.
5 Conclusions and suggestions for further research
In this paper we have considered a network of infinite-server queues where the input process is a Cox process that allows much modeling flexibility; the arrival rate is represented as a linear combination of the components of a multivariate generalized shot-noise process. We have derived various distributional properties of the multivariate shot-noise process, subsequently exploiting them to obtain the joint transform of the numbers of customers, at consecutive time epochs, in an infinite-server queue with, as input process, such a Cox process. We have also derived the joint steady-state transform of the vectors of arrival rate and queue length, as well as their means and covariance structure, and we have studied the departure process from the queue. Finally, we extended our analysis to the setting of a network of infinite-server queues, allowing the arrival processes at the various queues to be dependent Cox processes.
In a follow-up study [14] we investigate several related aspects. Specifically, (i) we develop a recursive scheme that allows us to obtain higher-order moments of \((\Lambda (t),L(t))\), (ii) we derive asymptotics of the queue length process, under assumptions regarding the tail behavior of the shot-noise process, (iii) we study the heavy-traffic behavior of the queue length process, and (iv) we obtain several numerical results for means, variances and correlations; this includes a numerical exploration of the sensitivity of various performance metrics with respect to the service time distribution. It could also be investigated to what extent the assumption of exponential decay in the shot-noise process can be relaxed.