
Open Access 18-02-2021

Large deviations for acyclic networks of queues with correlated Gaussian inputs

Authors: Martin Zubeldia, Michel Mandjes

Published in: Queueing Systems | Issue 3-4/2021


Abstract

We consider an acyclic network of single-server queues with heterogeneous processing rates. It is assumed that each queue is fed by the superposition of a large number of i.i.d. Gaussian processes with stationary increments and positive drifts, which can be correlated across different queues. The flow of work departing from each server is split deterministically and routed to its neighbors according to a fixed routing matrix, with a fraction of it leaving the network altogether. We study the exponential decay rate of the probability that the steady-state queue length at any given node in the network is above any fixed threshold, also referred to as the ‘overflow probability’. In particular, we first leverage Schilder’s sample-path large deviations theorem to obtain a general lower bound for the limit of this exponential decay rate, as the number of Gaussian processes goes to infinity. Then, we show that this lower bound is tight under additional technical conditions. Finally, we show that if the input processes to the different queues are nonnegatively correlated, non-short-range dependent fractional Brownian motions, and if the processing rates are large enough, then the asymptotic exponential decay rates of the queues coincide with the ones of isolated queues with appropriate Gaussian inputs.
Notes
The authors are with the Korteweg–de Vries Institute for Mathematics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, the Netherlands. Their research is partly funded by the NWO Gravitation project Networks, Grant No. 024.002.003.


1 Introduction

Modern communication networks are complex, and handle huge amounts of data. This is especially true closer to the backbone of the networks, where large numbers of connections share the same resources. The design and operation of these networks greatly benefits from tractable theoretical models that are able to describe and predict the performance of the system. In order to obtain such tractable models, a common practice is to represent the network’s nodes as single server queues with an appropriate service discipline. Moreover, given the high level of traffic aggregation, it is appealing to approximate the incoming traffic to the network by Gaussian processes [1, 2]. Since these networks are often operated in a regime where the packet loss probabilities are very small, there is a need for understanding the large-deviations behavior of these networks.
While a queueing network with Gaussian inputs is a rather streamlined model, the analysis of its large-deviations behavior is notoriously difficult outside the case of an isolated queue, which has been thoroughly studied [3–6]. The main reason for this is that after the (initially Gaussian) incoming traffic goes through the first queue, it is no longer Gaussian. Then, when it is fed to a different queue, the analysis of this queue is significantly harder. For the special case of two queues in tandem, with work arriving only to the first queue and all the departing work of the first queue going into the second one, a useful trick involving subtracting the first queue (which has Gaussian input) from the sum of both queues (which behaves exactly as a single-server queue with a Gaussian input) yields a tractable analysis of the second queue in the tandem [7], even if it does not have a Gaussian input; see also the more refined approach in [8] based on the delicate busy-period analysis developed in [9]. However, this trick does not work for more complex networks (not even for two queues in tandem with inputs to both queues, or when not all departures from the first queue join the second one [10]). Another factor that further complicates the analysis of complex networks is the fact that the input processes to the different queues can be correlated. This becomes a problem when the outputs of queues with correlated inputs are merged into another queue.
In this paper, we consider acyclic networks of single-server queues, where work arrives to the queues as (possibly correlated) Gaussian processes, and where the work departing from each queue is deterministically split among its neighbors, with a fraction of it leaving the system altogether. This deterministic split of the departing work was also considered in, for example, [11], and it is particularly suitable for modeling single-class networks (where all work is essentially exchangeable), or for modeling networks where all work needs to be routed to the same node (and thus where the splitting of departure streams is only performed to load balance the network).
In terms of our approach, this paper fits in the framework of the analysis of a single Gaussian queue [5], and the subsequent analysis of tandem, priority, and generalized processor sharing queues [7, 12]; we refer to [13] for a textbook account on Gaussian queues. In terms of our scope, this paper is perhaps most similar to [11], where the authors obtained large-deviations results for acyclic networks of G/G/1 queues. However, in that paper, there were certain limitations regarding the correlation structure of the input processes (in that they have to be independent across different queues), and regarding the structure of the network (in that any two directed paths cannot meet in more than one node).

1.1 Our contribution

In this paper, we generalize the analysis of a pair of queues in tandem, fed by a single Gaussian process [7], to acyclic networks of single-server queues, fed by (possibly correlated) Gaussian processes. As in [7], we assume that the arrival processes are the superposition of n i.i.d. (multi-dimensional) Gaussian processes, and scale the processing rates of the servers by a factor of n, which corresponds to the so-called ‘many-sources regime.’ In this regime, for any given node i, we work toward characterizing the asymptotic exponential decay rate of its ‘overflow probability,’ that is, the limit
$$\begin{aligned} - \lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q^{(n)}_i > nb\right) , \end{aligned}$$
(1)
where \(Q^{(n)}_i\) is the steady-state queue length at the ith node, and b is any positive threshold. In particular:
(i)
We obtain a general lower bound on the asymptotic exponential decay rate by leveraging the power of a generalized version of Schilder’s theorem (Theorem 3).
 
(ii)
Under additional technical conditions, we prove the tightness of the lower bound by finding the most likely sample paths, and showing that the corresponding asymptotic exponential decay rates coincide with the lower bound (Theorems 4, 5, and 6).
 
(iii)
We show that if the input processes to the different queues are nonnegatively correlated, non-short-range-dependent fractional Brownian motions, and if the processing rates are large enough, then the asymptotic exponential decay rates of the queues coincide with those of isolated queues with appropriate Gaussian inputs (Theorem 7).
 

1.2 Organization of the paper

The paper is organized as follows: In Sect. 2, we introduce some notation, the network model, and a few preliminaries on large-deviations theory. In Sect. 3, we present our main results. In Sect. 4, we introduce an interesting example where the large-deviations behavior of any queue in the network coincides with the behavior of a single-server queue with Gaussian input. Finally, we conclude in Sect. 5.

2 Model and preliminaries

In this section, we introduce some notation, the queueing network model that we analyze, and present a few preliminaries on sample-path large deviations theory.

2.1 Notation for underlying graph

Given a directed graph \(G=(V,E)\), and a node \(i\in V\), we introduce the following notation: Let
$$\begin{aligned} \mathcal {N}_\mathrm{in}(i) \triangleq \big \{j\in V : (j,i)\in E \big \} \end{aligned}$$
be the set of all inbound neighbors of i. Let
$$\begin{aligned} \mathcal {P}_m(i) \triangleq \bigcup _{l=m}^{|V|} \Big \{ r \in V^l : r_l=i, \text { and } (r_\ell ,r_{\ell +1})\in E,\,\, \forall \, \ell \le l-1 \Big \} \end{aligned}$$
be the set of all directed paths that contain at least m nodes, and end at node i. In particular, note that the trivial path (i) is only in \(\mathcal {P}_1(i)\). For any path \(r\in \mathcal {P}_2(i)\), let \(r_+\in \mathcal {P}_1(i)\) be the path that results from removing the node \(r_1\) from the path r. Finally, for any path \(r\in \mathcal {P}_1(i)\), let |r| be the number of nodes that it contains.
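To fix ideas, the following sketch (our own illustration; the example graph is made up) enumerates the sets \(\mathcal {P}_m(i)\) for a small directed acyclic graph, together with \(r_+\) and |r|.
```python
# Minimal sketch (illustrative only; the example graph is hypothetical):
# enumerate the directed paths in P_m(i) of a small DAG, together with r_+ and |r|.

def paths_ending_at(edges, i, m=1):
    """All directed paths r = (r_1, ..., r_l) with r_l = i and l >= m."""
    inbound = {}
    for (u, v) in edges:
        inbound.setdefault(v, []).append(u)
    paths, frontier = [(i,)], [(i,)]        # start from the trivial path (i)
    while frontier:
        # extend each path by one inbound edge; terminates because the graph is acyclic
        frontier = [(u,) + r for r in frontier for u in inbound.get(r[0], [])]
        paths += frontier
    return [r for r in paths if len(r) >= m]

# Hypothetical example: edges 1 -> 3, 2 -> 3, 3 -> 4, 1 -> 4.
edges = [(1, 3), (2, 3), (3, 4), (1, 4)]
for r in paths_ending_at(edges, 4, m=2):
    print("r =", r, " r_+ =", r[1:], " |r| =", len(r))
```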

2.2 Queueing network

In this subsection, we introduce the basic structure of our queueing network. Consider a directed acyclic graph with k nodes, and a scaling parameter \(n\in \mathbb {Z}_+\). Each node i of the graph is equipped with a single server with rate \(n\mu _i\), and a queue with infinite capacity. Work arrives to the network in a number of stochastic processes, \(A^{(n)}_1(\cdot ), \ldots ,A^{(n)}_k(\cdot )\), with stationary increments and positive rates \(n\lambda _1, \ldots ,n\lambda _k\), respectively. (More details about these processes are given in Sect. 2.3.) In particular, \(A^{(n)}_i(\cdot )\) is the stream of work that enters the network at node i. Work departing from node i is split deterministically so that, for each edge (i, j) with \(i\ne j\), a fraction \(p_{i,j}\in [0,1]\) is routed to node j. The remaining fraction of the work departing from node i, denoted by \(p_{i,i}\in [0,1]\), leaves the network; evidently, \(\sum _{j} p_{i,j} = 1\). In order to simplify notation, for any directed path r, let us denote
$$\begin{aligned} \Pi _r \triangleq \prod \limits _{\ell =1}^{|r|-1} p_{r_\ell ,r_{\ell +1}}. \end{aligned}$$
In particular, we have \(\Pi _{(i)}=1\).
For \(s\le t\), we interpret
$$\begin{aligned} A^{(n)}_i(s,t)\triangleq A^{(n)}_i(t)-A^{(n)}_i(s) \end{aligned}$$
as the amount of exogenous work that arrived to the ith node during the time interval (s, t]. Let \(D^{(n)}_i(s,t)\) be the amount of work that departed the ith node during (s, t]. Then, the total amount of work arriving to the ith node during (s, t] is
$$\begin{aligned} I^{(n)}_i(s,t) \triangleq A^{(n)}_i(s,t) + \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} D^{(n)}_j(s,t), \end{aligned}$$
(2)
recalling that \(\mathcal {N}_\mathrm{in}(i)\) is the set of inbound neighbors of i. Furthermore, for \(t\in \mathbb {R}\), Reich’s formula states that the amount of remaining work in the ith queue at time t (also called the ‘queue length’) is given by
$$\begin{aligned} Q^{(n)}_i(t)\triangleq \sup \limits _{s< t} \left\{ I^{(n)}_i(s,t) - n\mu _i (t-s) \right\} . \end{aligned}$$
(3)
Moreover, we evidently have
$$\begin{aligned} D^{(n)}_i(s,t)&= Q^{(n)}_i(s) + I^{(n)}_i(s,t) - Q^{(n)}_i(t). \end{aligned}$$
(4)
Since we are interested in the steady-state of the queue lengths, we need to ensure that the service rate of each server is strictly larger than the total arrival rate to its node. This is enforced by imposing the following assumption.
Assumption 1
For each \(i\in \{1, \ldots ,k\}\), we have
$$\begin{aligned} \sum \limits _{r\in \mathcal {P}_1(i)} \lambda _{r_1} \Pi _r < \mu _i. \end{aligned}$$
Note that, even under Assumption 1, the existence and uniqueness of k-dimensional processes \(D^{(n)}(\cdot )\), \(I^{(n)}(\cdot )\), and \(Q^{(n)}(\cdot )\) that satisfy Eqs. (2), (3), and (4) is not immediate. This is shown in Sect. 3.1, by expressing them as functionals of the exogenous arrival processes \(A^{(n)}_1(\cdot ), \ldots ,A^{(n)}_k(\cdot )\).
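To make the dynamics (2)–(4) concrete, the following discrete-time sketch (our own illustration, not the paper's continuous-time model; the three-node graph and all parameters are hypothetical) evolves the queues by a Lindley-type recursion, splits departures according to p, and checks Assumption 1 for the chosen rates.
```python
# Minimal discrete-time sketch (illustrative only; all parameters and the example
# graph are hypothetical, and time is slotted, whereas the paper works in
# continuous time): queues evolve by a Lindley-type recursion consistent with
# Reich's formula (3), departures satisfy (4), and the total input to each node
# is assembled as in (2) via the deterministic split p.
import numpy as np

rng = np.random.default_rng(0)
k = 3
# DAG with nodes already in topological order: 0 -> 1 -> 2.
p = np.zeros((k, k))
p[0, 1], p[1, 2] = 0.8, 0.6              # routing fractions; the rest leaves the network
lam = np.array([1.0, 0.5, 0.3])          # exogenous mean input per slot
mu = np.array([2.0, 2.0, 2.0])           # service rates per slot
sigma = np.array([0.3, 0.3, 0.3])        # std. dev. of exogenous input per slot

# Assumption 1: total (routed) mean input rate at each node below its service rate.
total_rate = lam.copy()
for i in range(k):
    total_rate += total_rate[i] * p[i]
assert np.all(total_rate < mu), "Assumption 1 violated for these parameters"

T = 100_000
Q = np.zeros(k)
for _ in range(T):
    A = lam + sigma * rng.standard_normal(k)   # Gaussian exogenous work per slot
    I = A.copy()
    for i in range(k):                         # upstream nodes first (topological order)
        Q_new = max(Q[i] + I[i] - mu[i], 0.0)  # Reich/Lindley recursion, cf. (3)
        D_i = Q[i] + I[i] - Q_new              # departed work, cf. (4)
        Q[i] = Q_new
        I += p[i] * D_i                        # route departures downstream, cf. (2)
print("queue lengths after", T, "slots:", Q)
```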

2.3 Gaussian arrival processes

In this subsection, we specify the nature of the exogenous arrivals to the network. Let \(\{X^{(j)}(\cdot )\}_{j\in {\mathbb Z}_+}\) be a sequence of i.i.d. k-dimensional Gaussian processes with continuous sample paths and stationary increments, and with \(X^{(j)}(0)=(0, \ldots ,0)\), for all \(j\in {\mathbb Z}_+\). Each one of these k-dimensional processes is characterized by its drift vector \(\lambda =(\lambda _1, \ldots ,\lambda _k)\), where
$$\begin{aligned} \lambda \triangleq \mathbb {E}\left[ X^{(1)}(1)\right] , \end{aligned}$$
and by its covariance matrix \(\Sigma :\mathbb {R}^2\rightarrow \mathbb {R}^{k\times k}\), where
$$\begin{aligned} \Sigma _{i,j}(t,s) = {\mathbb C}\mathrm{ov}\left( X_i^{(1)}(t),\, X_j^{(1)}(s) \right) . \end{aligned}$$
Throughout this paper, we assume that the process \(A^{(n)}(\cdot )\triangleq \big ( A^{(n)}_1(\cdot ), \ldots ,A^{(n)}_k(\cdot ) \big )\) is a k-dimensional Gaussian process such that
$$\begin{aligned} A^{(n)}_i(\cdot ) = \sum \limits _{j=1}^n X_i^{(j)}(\cdot ), \end{aligned}$$
(5)
for all \(i\in \{1, \ldots ,k\}\). Therefore, \(A^{(n)}(\cdot )\) also has continuous sample paths and stationary increments, and satisfies \(A^{(n)}(0)=(0, \ldots ,0)\). Moreover, the k-variate process \(A^{(n)}(\cdot )\) has drift vector \(n\lambda \), and covariance matrix \(n\Sigma \).
Remark 1
Equation (5) corresponds to the setting where the arrival processes are a superposition of individual streams, which is also called the ‘many-sources regime’ [14].
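As a quick numerical illustration of the scaling in this regime (our own sketch; the variance and the exceedance level below are made up), for a single marginal and a single time point the overflow probability of the superposition decays exponentially in n at rate \(a^2/(2\Sigma _{i,i}(1,1))\), which can be checked directly since \(A^{(n)}_i(1)\) is Gaussian.
```python
# Sketch (illustrative only, hypothetical parameters): exponential decay in n of a
# one-dimensional, single-time overflow event under the superposition (5).
# A^{(n)}_i(1) is N(n*lam_i, n*var), so the probability is an exact Gaussian tail.
import numpy as np
from scipy.stats import norm

var, a = 0.5, 0.4                                  # Sigma_{i,i}(1,1) and level a
for n in [10, 100, 1000, 10000]:
    logp = norm.logsf(n * a / np.sqrt(n * var))    # log P(A_i^{(n)}(1) - n*lam_i > n*a)
    print(n, -logp / n)
print("predicted decay rate a^2/(2*var):", a**2 / (2 * var))
```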
Finally, we impose the following assumption. It is required for the generalized version of Schilder's theorem, introduced in the following subsection, to hold.
Assumption 2
(i)
The covariance matrix \(\Sigma \) is differentiable.
 
(ii)
For every \(i,j\in \{1, \ldots ,k\}\), we have
$$\begin{aligned} \lim \limits _{t^2+s^2\rightarrow \infty } \frac{\Sigma _{i,j}(t,s)}{t^2+s^2} = 0. \end{aligned}$$
 

2.4 Sample-path large deviations

In this paper, our aim is to study the limit
$$\begin{aligned} \mathbb {I}_i(b) \triangleq - \lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q^{(n)}_i > nb\right) , \end{aligned}$$
(6)
where \(Q^{(n)}_i\) is the steady-state queue length of the ith node, and \(\mathbb {I}:\mathbb {R}_+\rightarrow \mathbb {R}_+^k\) is a function that only depends on the server rates \(\mu \triangleq (\mu _1, \ldots ,\mu _k)\), on the routing matrix p, on the drift vector \(\lambda \), and on the covariance matrix \(\Sigma \). In order to do this, we rely on a sample-path large deviations principle for centered Gaussian processes, based on the generalized Schilder’s theorem. Before stating this theorem, we introduce its framework.
First, we introduce the sample-path space
$$\begin{aligned}&\Omega ^k \triangleq \left\{ \omega :\mathbb {R}\rightarrow \mathbb {R}^k, \,\,\right. \\&\qquad \qquad \left. \text {continuous},\,\, \omega (0)=(0, \ldots ,0),\,\,\lim \limits _{t\rightarrow \infty } \frac{\Vert \omega (t)\Vert _2}{1+|t|}=\lim \limits _{t\rightarrow -\infty } \frac{\Vert \omega (t)\Vert _2}{1+|t|}=0 \right\} , \end{aligned}$$
equipped with the norm
$$\begin{aligned} \Vert \omega \Vert _{\Omega ^k} \triangleq \sup \left\{ \frac{\Vert \omega (t)\Vert _2}{1+|t|} : t\in \mathbb {R} \right\} , \end{aligned}$$
which is a separable Banach space [15]. Next, we introduce the Reproducing Kernel Hilbert Space (rkhs) \(\mathcal {R}^k\subset \Omega ^k\) (see [16] for more details) induced by using the covariance matrix \(\Sigma (\cdot ,\cdot )\) as the kernel. In order to define it, we start from the smaller space
$$\begin{aligned} \mathcal {R}^k_* \triangleq \text {span}\left\{ \Sigma (t, \cdot )v : t\in \mathbb {R},\, v\in \mathbb {R}^k \right\} , \end{aligned}$$
with the inner product \(\langle \cdot , \cdot \rangle _{\mathcal {R}^k}\) defined as
$$\begin{aligned} \big \langle \Sigma (t,\cdot )u,\, \Sigma (s,\cdot )v \big \rangle _{\mathcal {R}^k} \triangleq u^\top \Sigma (t,s)v, \end{aligned}$$
for all \(t,s\in \mathbb {R}\) and \(u,v\in \mathbb {R}^k\). The closure of \(\mathcal {R}^k_*\) with respect to the topology induced by its inner product is the rkhs \(\mathcal {R}^k\). Using this inner product and its corresponding norm \(\Vert \cdot \Vert _{\mathcal {R}^k}\), we define a rate function by
$$\begin{aligned} \mathbb {I}(\omega ) \triangleq {\left\{ \begin{array}{ll} \frac{1}{2}\Vert \omega \Vert ^2_{\mathcal {R}^k}, &{} \text {if } \omega \in \mathcal {R}^k,\\ \infty , &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
Remark 2
In [12, 15], the authors defined an appropriate multi-dimensional rkhs as the product of single-dimensional spaces that use the individual variance functions as kernels. There this could be done because the different coordinates of the multi-dimensional Gaussian process of interest were assumed independent. In our case, since the coordinates of our Gaussian process of interest need not be independent, we needed to define the multi-dimensional space directly, using the whole covariance matrix as the kernel. When the coordinates are indeed independent, both definitions are equivalent.
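For orientation, consider the special case of independent Brownian inputs, i.e., \(\Sigma _{i,j}\equiv 0\) for \(i\ne j\) and \(\Sigma _{i,i}(t,s)=\sigma _i^2 \min \{|t|,|s|\}\) for \(ts\ge 0\) (and zero otherwise). In that case, the rate function above reduces to (a two-sided version of) the classical Schilder rate function; we record it only for intuition, as it is not used in the sequel:
$$\begin{aligned} \mathbb {I}(\omega ) = \frac{1}{2} \sum \limits _{i=1}^{k} \frac{1}{\sigma _i^2} \int _{-\infty }^{\infty } \dot{\omega }_i(t)^2 \, \mathrm {d}t, \end{aligned}$$
for absolutely continuous \(\omega \in \Omega ^k\) with square-integrable derivative, and \(\mathbb {I}(\omega )=\infty \) otherwise.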
Under the framework defined above, the following sample-path large deviations principle holds.
Theorem 1
(Generalized Schilder [17]) Under Assumption 2, the following holds:
(i)
For any closed set \(F\subset \Omega ^k\),
$$\begin{aligned} \limsup \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in F \right) \le -\inf \limits _{\omega \in F} \big \{ \mathbb {I}(\omega ) \big \}. \end{aligned}$$
 
(ii)
For any open set \(G\subset \Omega ^k\),
$$\begin{aligned} \liminf \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( \frac{A^{(n)}(\cdot ) -n\lambda \,\cdot \,}{n} \in G \right) \ge -\inf \limits _{\omega \in G} \big \{ \mathbb {I}(\omega ) \big \}. \end{aligned}$$
 
Schilder’s theorem typically only gives implicit results, as it is often hard to explicitly compute the infimum over the set of sample paths. However, as in [7, 12, 15], we will leverage the properties of our rkhs to obtain explicit results.
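As a point of reference, for a single Gaussian queue in isolation this program can be carried out explicitly and yields the well-known expression (see [5])
$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q^{(n)}_i > nb\right) = \inf \limits _{t< 0} \frac{\Big [ b - (\mu _i-\lambda _i) t \Big ]^2}{2\, \Sigma _{i,i}(t,t)}; \end{aligned}$$
the results of the next section extend this expression to nodes that also receive traffic from upstream queues.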

3 Main results

In this section, we will establish large deviations results for the steady-state queue-length distributions. In particular, we will use Theorem 1 to show that for any \(i\in \{1, \ldots ,k\}\), and for every \(b>0\), the limit
$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q^{(n)}_i > nb\right) \end{aligned}$$
(7)
exists, and to find (tight) bounds for it. The first step is to express this probability as a function of the Gaussian arrival processes (Sect. 3.1), and to show that the limit exists (Sect. 3.2). Second, we obtain a general lower bound for this limit (Sect. 3.3), and prove that it is tight under additional technical assumptions (Sect. 3.4). The arguments largely follow the same structure as those for the analysis of the second queue in a tandem [7], but without the simplifications that come from having only two queues in tandem, with arrivals only to the first one.

3.1 Overflow probability as a function of the arrival processes

In this subsection, we obtain a set \(\mathcal {E}_i(b)\) of sample paths such that
$$\begin{aligned} \mathbb {P}\left( Q^{(n)}_i > nb\right) = \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in \mathcal {E}_i(b)\right) . \end{aligned}$$
By Reich’s formula, we have
$$\begin{aligned} \mathbb {P}\left( Q^{(n)}_i> nb\right)&= \mathbb {P}\left( \sup \limits _{t< 0} \left\{ I^{(n)}_i(t,0) + n\mu _i t \right\}> nb\right) \\&= \mathbb {P}\left( \exists \, t< 0 : I^{(n)}_i(t,0) + n\mu _i t > nb\right) , \end{aligned}$$
where \(I^{(n)}_i(t,0)\) is the total amount of work that arrived to the ith queue in the time interval (t, 0]. If i is a node with no inbound neighbors, i.e., if \(\mathcal {N}_\mathrm{in}(i)=\emptyset \), we have that \(I^{(n)}_i(t,0)=-A^{(n)}_i(t)\), and thus
$$\begin{aligned}&\mathbb {P}\left( Q^{(n)}_i> nb\right) \\&\quad = \mathbb {P}\left( \exists \, t< 0 : n\mu _i t -A^{(n)}_i(t)> nb\right) \\&\quad = \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in \Big \{f\in \Omega ^k : \exists \, t< 0,\, (\mu _i-\lambda _i) t - f_i(t) > b \Big \}\right) . \end{aligned}$$
In this case, a large-deviations analysis can be performed through a straightforward application of Schilder’s theorem (this is exactly the same as in the case of an isolated Gaussian queue [5]). However, in general the input process is the sum of the local Gaussian arrival process, and the departure processes of its inbound neighbors, which are not Gaussian. In the following lemma, we obtain the input process as a functional of the exogenous arrival processes of all the upstream nodes.
Lemma 1
For each \(i\in \{1, \ldots ,k\}\), and for all \(t<0\), we have
$$\begin{aligned} I^{(n)}_i(t,0)&= A^{(n)}_i(t,0) + \sup \limits _{{\varvec{t}}\in \mathcal {T}_i(t)} \left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0)+ n\mu _{r_1}({\varvec{t}}_{r}-{\varvec{t}}_{r_+}) \right] \Pi _r \right\} \nonumber \\&\quad - \sup \limits _{{\varvec{s}}\in \mathcal {T}_i(0)} \left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{s}}_{r},0)+ n\mu _{r_1}({\varvec{s}}_{r}-{\varvec{s}}_{r_+}) \right] \Pi _r \right\} , \end{aligned}$$
(8)
where
$$\begin{aligned} \mathcal {T}_i(t)&\triangleq \Big \{ {\varvec{t}}\in \mathbb {R}^{\mathcal {P}_1(i)}: {\varvec{t}}_i = t \quad \text {and} \quad {\varvec{t}}_r < {\varvec{t}}_{r_+}, \,\, \forall \, r\in \mathcal {P}_2(i) \Big \}. \end{aligned}$$
The proof is given in Appendix A, and consists of solving a recursive equation on the input processes by using induction on the maximum length of paths that end in node i.
Remark 3
Let \({\varvec{t}}^*\) and \({\varvec{s}}^*\) be finite optimizers of the two suprema in (8) over the closure of their domains. These have the following interpretation: for each path \(r\in \mathcal {P}_2(i)\), the time \({\varvec{t}}^*_{r}\) (respectively, \({\varvec{s}}^*_{r}\)) is the starting point of the busy period of the \(r_1\)th queue that contains the time \({\varvec{t}}^*_{r_+}\) (respectively, \({\varvec{s}}^*_{r_+}\)). Then, since \({\varvec{t}}_i=t <0\) and \({\varvec{s}}_i=0\), it follows that \({\varvec{t}}^*_{r}\le {\varvec{s}}^*_{r}\), for all \(r\in \mathcal {P}_1(i)\). Combining this with (8), and using the continuity of \(A^{(n)}(\cdot )\), we obtain
$$\begin{aligned} I^{(n)}_i(t,0)&= A^{(n)}_i(t,0) - n\left( \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} \mu _j p_{j,i} \right) t \\&\quad + \sup \limits _{{\varvec{t}}\in \mathcal {T}_i}\left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0)\right. \right. \\&\quad \left. \left. +\,\, n \left( \mu _{r_1} - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1} \right) {\varvec{t}}_{r} \right] \Pi _r \right. \\&\quad \left. - \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{s}}_{r},0)\right. \right. \right. \\&\quad \left. \left. \left. +\,\, n\left( \mu _{r_1} - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1} \right) {\varvec{s}}_{r} \right] \Pi _r \right\} \right\} , \end{aligned}$$
where
$$\begin{aligned} \mathcal {T}_i&\triangleq \Big \{ {\varvec{t}}\in \mathbb {R}^{\mathcal {P}_1(i)}: {\varvec{t}}_i< 0 \quad \text {and} \quad {\varvec{t}}_r< {\varvec{t}}_{r_+}, \,\, \forall \, r\in \mathcal {P}_2(i) \Big \},\\ \mathcal {S}_i({\varvec{t}})&\triangleq \Big \{ {\varvec{s}}\in \mathbb {R}^{\mathcal {P}_1(i)}: {\varvec{s}}_i=0 \quad \text {and} \quad {\varvec{t}}_{r}< {\varvec{s}}_r < {\varvec{s}}_{r_+}, \,\, \forall \, r\in \mathcal {P}_2(i) \Big \}. \end{aligned}$$
Note that the continuity of \(A^{(n)}(\cdot )\) is what allows us to have the condition \({\varvec{t}}_{r} < {\varvec{s}}_{r}\) instead of \({\varvec{t}}_{r} \le {\varvec{s}}_{r}\). This distinction will be convenient later.
We now state the main result of this subsection.
Theorem 2
For each \(i\in \{1, \ldots ,k\}\), and for every \(b>0\), we have
$$\begin{aligned} \mathbb {P}\left( Q_i^{(n)} > b n \right)&= \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \, \cdot \,}{n} \in \mathcal {E}_i(b) \right) , \end{aligned}$$
(9)
where
$$\begin{aligned} \mathcal {E}_i(b)&\triangleq \left\{ f\in \Omega ^k: \exists \, {\varvec{t}}\in \mathcal {T}_i : \forall \, {\varvec{s}}\in \mathcal {S}_i({\varvec{t}}),\,\, f_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \right. \\&\qquad \left. > b - \sum \limits _{r\in \mathcal {P}_1(i)} \left[ \left( \mu _{r_1}-\lambda _{r_1} - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1}\right) \big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big )\right] \Pi _r \right\} . \end{aligned}$$
The proof follows immediately from Reich’s formula and Lemma 1, and it is given in Appendix B.

3.2 Decay rate of the overflow probability

In this subsection, we establish the existence of the limit
$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) , \end{aligned}$$
for all \(b>0\). Recall that Theorem 2 states that \(\mathbb {P}( Q_i^{(n)} > b n)\) satisfies (9), where \(\mathcal {E}_i(b)\) is an open set of the path space \(\Omega ^k\). Then, by Schilder’s theorem (Theorem 1), we have
$$\begin{aligned} -\liminf \limits _{n\rightarrow \infty } \frac{1}{n}\log \left( \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in \mathcal {E}_i(b) \right) \right)&\le \inf \limits _{f\in \mathcal {E}_i(b)} \big \{ \mathbb {I}(f) \big \}, \end{aligned}$$
and
$$\begin{aligned} -\limsup \limits _{n\rightarrow \infty } \frac{1}{n}\log \left( \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \,\cdot \,}{n} \in \overline{\mathcal {E}_i(b)} \right) \right)&\ge \inf \limits _{f\in \overline{\mathcal {E}_i(b)}} \big \{ \mathbb {I}(f) \big \}. \end{aligned}$$
Hence, the existence of the limit follows once we show that \(\mathcal {E}_i(b)\) is an \(\mathbb {I}\)-continuity set; this is stated in the following proposition. The proof follows along the lines of the proof of [7, Thm. 3.1], and it is thus omitted.
Proposition 1
For each \(i\in \{1, \ldots ,k\}\), and for every \(b>0\), we have
$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \left( \mathbb {P}\left( Q_i^{(n)} > b n \right) \right) = \inf \limits _{f\in \overline{\mathcal {E}_i(b)}} \big \{ \mathbb {I}(f) \big \} = \inf \limits _{f\in \mathcal {E}_i(b)} \big \{ \mathbb {I}(f) \big \}. \end{aligned}$$
(10)
Now that the existence of the decay rate of interest in (6) has been established, we focus in the following subsections on finding lower and upper bounds for it.

3.3 Lower bound on the decay rate

In this subsection, we present a general lower bound for the asymptotic exponential decay rate of the overflow probability in steady state. We start by introducing some notation. Given a vector v and a scalar a, we denote \(v-(a, \ldots ,a)\) as \(v-a\). For each node \(i\in \{1, \ldots ,k\}\), we denote
$$\begin{aligned} \hat{A}_i(t)&\triangleq \frac{A_i^{(n)}(t) - n\lambda _i t}{\sqrt{n}}. \end{aligned}$$
Note that \(\hat{A}(\cdot )\) is a k-dimensional Gaussian process with zero mean and covariance matrix \(\Sigma \). For each node \(i\in \{1, \ldots ,k\}\), let
$$\begin{aligned} \overline{\lambda }_i&\triangleq \sum \limits _{r\in \mathcal {P}_1(i)} \lambda _{r_1} \Pi _r, \\ \bar{A}_i({\varvec{s}},{\varvec{t}})&\triangleq \sum \limits _{r\in \mathcal {P}_1(i)} \Big [ \hat{A}_{r_1}({\varvec{t}}_{r}) - \hat{A}_{r_1}({\varvec{s}}_{r}) \Big ] \Pi _r. \end{aligned}$$
Moreover, let us define the functions
$$\begin{aligned} k^b_i({\varvec{t}},{\varvec{s}})&\triangleq \mathbb {E}\left[ \left. \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{s}}) \,\right| \, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) =b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \right] ,\\ h^b_i({\varvec{t}},{\varvec{s}})&\triangleq \mathbb {E}\left[ \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{s}}) \,\left| \, \bar{A}_i({\varvec{s}},{\varvec{t}}) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \right. \right] , \end{aligned}$$
where
$$\begin{aligned} c_i({\varvec{t}},{\varvec{s}})&\triangleq \left( \overline{\lambda }_i - \lambda _i - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} \mu _j p_{j,i} \right) {\varvec{t}}_i\\&\quad +\sum \limits _{r\in \mathcal {P}_2(i)} \left( \mu _{r_1}-\lambda _{r_1} - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1} \right) \big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big ) \Pi _r. \end{aligned}$$
Note that \(c_i({\varvec{t}},{\varvec{t}}-{\varvec{t}}_i)=0\).
Using the above notation, we now state our lower bound.
Theorem 3
Under Assumptions 1 and 2, for each \(i\in \{1, \ldots ,k\}\) and for every \(b > 0\),
$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) \ge \inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \Big \{ \mathbb {I}_i^b({\varvec{t}},{\varvec{s}}) \Big \}, \end{aligned}$$
where
$$\begin{aligned} \mathbb {I}_i^b({\varvec{t}},{\varvec{s}}) \triangleq {\left\{ \begin{array}{ll} \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )}, \quad \text {if } \ k^b_i({\varvec{t}},{\varvec{s}}) < c_i({\varvec{t}},{\varvec{s}}), \\ \qquad \qquad \quad \qquad \qquad \qquad \qquad \text {or} \ \quad {\varvec{s}}={\varvec{t}}-{\varvec{t}}_i, \\ \frac{\Big [b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \Big )}, \quad \text {if } \ h^b_i({\varvec{t}},{\varvec{s}})> c_i({\varvec{t}},{\varvec{s}}), \\ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )} + \frac{\Big [ k^b_i({\varvec{t}},{\varvec{s}})- c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \,\Big |\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big )}, \quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
(11)
The proof is given in Appendix C, and it essentially consists of two steps. First, we decompose the event \(\mathcal {E}_i(b)\) given in Theorem 2 as a union of intersections of simpler events that only involve the sample paths at fixed times, and we upper bound the probability of the intersection by the probability of the least likely one. Then, we use Cramér’s theorem to obtain the decay rate of the least likely of these simpler events by solving the additional quadratic optimization problem that arises by its application.
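To see where the case distinction in (11) comes from, consider a toy version of the quadratic optimization problem mentioned above (our own illustration, not part of the paper's proof; the covariance matrix and levels are made up): for a centered bivariate Gaussian vector, the decay rate of a joint exceedance event under n-fold superposition depends on whether, given that one constraint is binding, the other one is already satisfied on average. This is the same mechanism, up to the sign convention of the second constraint, that governs the conditions on \(k^b_i\) and \(h^b_i\) above.
```python
# Toy sketch (illustrative only): the decay rate of P(Y1 >= a, Y2 >= c) for a
# centered bivariate Gaussian (Y1, Y2) under n-fold i.i.d. superposition, i.e.
#   min { (1/2) y^T S^{-1} y : y1 >= a, y2 >= c },
# computed via a case distinction analogous to (11) and checked by brute force.
import numpy as np

S = np.array([[1.0, 0.3],
              [0.3, 0.8]])          # hypothetical covariance matrix
a, c = 1.0, 0.7                     # exceedance levels (both positive)

def rate_by_cases(S, a, c):
    s11, s12, s22 = S[0, 0], S[0, 1], S[1, 1]
    candidates = []
    if a * s12 / s11 >= c:          # E[Y2 | Y1 = a] already exceeds c
        candidates.append(a**2 / (2 * s11))
    if c * s12 / s22 >= a:          # E[Y1 | Y2 = c] already exceeds a
        candidates.append(c**2 / (2 * s22))
    if candidates:
        return min(candidates)
    # both constraints bind: corner solution, decomposed as in the third case of (11)
    cond_var = s22 - s12**2 / s11   # Var(Y2 | Y1)
    return a**2 / (2 * s11) + (c - a * s12 / s11)**2 / (2 * cond_var)

def rate_brute_force(S, a, c, grid=400, span=5.0):
    # coarse grid search over the feasible quadrant; approximate check only
    Sinv = np.linalg.inv(S)
    ys = np.linspace(-span, span, grid)
    best = np.inf
    for y1 in ys[ys >= a]:
        for y2 in ys[ys >= c]:
            v = np.array([y1, y2])
            best = min(best, 0.5 * v @ Sinv @ v)
    return best

print(rate_by_cases(S, a, c), rate_brute_force(S, a, c))
```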
Remark 4
As part of the proof of Theorem 3, it is established that the condition of the first case (\(k^b_i({\varvec{t}},{\varvec{s}}) < c_i({\varvec{t}},{\varvec{s}})\) or \({\varvec{s}}={\varvec{t}}-{\varvec{t}}_i\)) and the condition of the second case (\(h^b_i({\varvec{t}},{\varvec{s}})>c_i({\varvec{t}},{\varvec{s}})\)) cannot be satisfied at the same time. As a result, the three cases in the definition of \(\mathbb {I}_i^b({\varvec{t}},{\varvec{s}})\) are disjoint.
Remark 5
The lower bound in Theorem 3 generalizes the lower bound given in [7, Corollary 3.5], not only by generalizing the network structure from a set of tandem queues to any acyclic network of queues, but also by removing a concavity assumption on the square root of the variance of the input processes. However, the removal of this assumption makes the expression of the lower bound more convoluted, even if we restrict it to the case of a pair of queues in tandem.
Remark 6
It is worth highlighting that, even if the bound of Theorem 3 is not tight, it provides a lower bound for the asymptotic exponential decay rate of the overflow probability (and hence an asymptotic upper bound for the overflow probability itself) that can be used as a performance guarantee in applications.

3.4 Tightness of the lower bound

In this subsection, we obtain conditions under which the lower bound in Theorem 3 is tight. We present three results, one for each of the cases in the definition of \(\mathbb {I}_i^b({\varvec{t}},{\varvec{s}})\) in (11), with different technical conditions for each case.
Let \(({\varvec{t}}^*,{\varvec{s}}^*)\) be an optimizer of (11) over the closure of its domain. We first establish that, if the optimum of (11) is achieved in the first case, then the lower bound of Theorem 3 is tight under an additional technical condition. This is formalized in the following theorem.
Theorem 4
Under Assumptions 1 and 2, the following holds. If
$$\begin{aligned} k^b_i\left( {\varvec{t}}^*,{\varvec{s}}\right) < c_i\left( {\varvec{t}}^*,{\varvec{s}}\right) , \end{aligned}$$
(12)
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\) such that \({\varvec{s}} \ne {\varvec{t}}^*- {\varvec{t}}^*_i\), then
$$\begin{aligned} \lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) = -\inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \left\{ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right\} . \end{aligned}$$
The proof is given in Appendix D, and it essentially consists of two steps. First, we identify a most likely sample path in the least likely event of the intersection given in the decomposition of the event \(\mathcal {E}_i(b)\) that was used in the proof of Theorem 3. Then, we show that under the assumptions imposed this most likely sample path is in all the sets featuring in the intersection, thus implying optimality in \(\mathcal {E}_i(b)\).
Since the condition in (12) requires an optimizer \({\varvec{t}}^*\) of (11), it is generally hard to verify. In the following lemma, we present a sufficient condition that is easier to verify.
Lemma 2
A sufficient condition for (12) to hold is that
$$\begin{aligned} k^b_i\left( \tilde{\varvec{t}},{\varvec{s}}\right) < c_i\left( \tilde{\varvec{t}},{\varvec{s}}\right) , \end{aligned}$$
(13)
for all \({\varvec{s}}\in \mathcal {S}_i\left( \tilde{\varvec{t}}\right) \) such that \({\varvec{s}}\ne \tilde{\varvec{t}}- \tilde{\varvec{t}}_i\), where
$$\begin{aligned} \tilde{\varvec{t}} \in \underset{{\varvec{t}}\in \overline{\mathcal {T}_i}}{\arg \min }\left\{ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right\} . \end{aligned}$$
The proof is given in Appendix E.
Remark 7
Although the condition of (13) looks almost the same as the original one of (12), the key simplification is that for (13), we only need \(\tilde{\varvec{t}}\) instead of \({\varvec{t}}^*\), which is an optimizer of an easier optimization problem.
We now present the second result of this subsection. It asserts that, if the optimum of (11) is achieved in the second case, then the lower bound of Theorem 3 is tight under an additional technical condition.
Theorem 5
Under Assumptions 1 and 2, the following holds: Suppose that
$$\begin{aligned}&\mathbb {E}\left[ \bar{A}_i({\varvec{s}},{\varvec{s}}^*) \,\left| \, \bar{A}_i({\varvec{s}}^*,{\varvec{t}}^*) = b -\left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i^*\right. \right. \\&\quad \qquad \qquad \qquad \qquad \qquad \qquad \quad \left. \left. - c_i({\varvec{t}}^*,{\varvec{s}}^*) \right. \right] \ge c_i({\varvec{t}}^*,{\varvec{s}}^*) - c_i({\varvec{t}}^*,{\varvec{s}}), \end{aligned}$$
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\). If
$$\begin{aligned} h^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) > c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \end{aligned}$$
then
$$\begin{aligned}&\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) \\&\quad = -\inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \left\{ \frac{\Big [b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } \right\} . \end{aligned}$$
The proof is analogous to the proof of Theorem 4, and it is thus omitted.
Remark 8
Note that the second condition in Theorem 5 is satisfied if the first one is satisfied with strict inequality for \({\varvec{s}}={\varvec{t}}^*-{\varvec{t}}_i^*\).
Finally, we show that if the optimum of (11) is achieved in the third case, then the lower bound of Theorem 3 is tight under a different additional technical condition.
Theorem 6
Under Assumptions 1 and 2, the following holds: Suppose that
$$\begin{aligned}&\mathbb {E}\left[ \left. \bar{A}_i({\varvec{s}},{\varvec{s}}^*) \,\right| \, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) = b - \left( \mu _i-\overline{\lambda }_i\right) t^*_i;\,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{s}}^*)\right. \nonumber \\&\quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \,\,\left. =c_i({\varvec{t}}^*,{\varvec{s}}^*) \right] \ge c_i({\varvec{t}}^*,{\varvec{s}}^*) - c_i({\varvec{t}}^*,{\varvec{s}}), \end{aligned}$$
(14)
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\). If
$$\begin{aligned} h^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \le c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) , \qquad \text {and} \qquad k^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \ge c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) , \end{aligned}$$
then
$$\begin{aligned}&\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) \\&\quad = -\inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \left\{ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )}\right. \\&\qquad \left. + \frac{\Big [ k^b_i({\varvec{t}},{\varvec{s}})- c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \,\Big |\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big )} \right\} . \end{aligned}$$
The structure of the proof is the same as the proof of Theorem 4, and it is given in Appendix F.

4 Example: equivalence to a single server queue

In this section, we show that if the input process is a multivariate fractional Brownian motion with non-short-range dependence and nonnegative correlation between its coordinates, and if the service rates are sufficiently large, then the large deviations behavior of any fixed queue in the network is the same as if all inputs to upstream queues were inputs to the queue itself. This phenomenon was also observed in [7] for the second queue in a tandem, and here we generalize the conditions under which it occurs.

4.1 Preliminaries on multivariate fractional Brownian motions

Consider the case where the exogenous arrival process \(A^{(n)}(\cdot )\) is a multivariate fractional Brownian motion (mfBm). Since each coordinate is a real-valued fBm, for each \(i\in \{1, \ldots ,k\}\), and for every \(t<s<0\), we have
$$\begin{aligned} {\mathbb C}\mathrm{ov}\left( A^{(n)}_i(t),\,\, A^{(n)}_i(s)\right) = \frac{\sigma _i^2}{2} \Big [ |t|^{2H_i} + |s|^{2H_i} - |s-t|^{2H_i} \Big ], \end{aligned}$$
where \(H_i\in (0,1)\) is its Hurst index, and
$$\begin{aligned} \sigma _i \triangleq \sqrt{ \mathbb {V}\text {{ar}}\left( A^{(n)}_i(1)\right) } \end{aligned}$$
is its standard deviation. Furthermore, it is known [18] that for each \(i,j \in \{1, \ldots ,k\}\), and for every \(t<s<0\), we have
$$\begin{aligned}&{\mathbb C}\mathrm{ov}\left( A^{(n)}_i(t),\,\, A^{(n)}_j(s)\right) \nonumber \\&\quad = {\left\{ \begin{array}{ll} \frac{\sigma _i \sigma _j}{2} \Big [ (\rho _{i,j}-\eta _{i,j})|t|^{H_i+H_j} + (\rho _{i,j}+\eta _{i,j})|s|^{H_i+H_j} - (\rho _{i,j}-\eta _{i,j})|s-t|^{H_i+H_j} \Big ], \\ \quad \text { if } H_i+H_j\ne 1, \\ \frac{\sigma _i \sigma _j}{2} \Big [ \rho _{i,j}\big (|t| + |s| - |s-t| \big ) + \eta _{i,j}\big ( s\log |s| - t\log |t| -(s-t)\log |s-t| \big ) \Big ], \\ \quad \text { if } H_i+H_j= 1, \end{array}\right. } \end{aligned}$$
where
$$\begin{aligned} \rho _{i,j} \triangleq {\mathbb C}\mathrm{orr}\,\left( A^{(n)}_i(1),\,\, A^{(n)}_j(1)\right) \end{aligned}$$
are their correlation coefficients, and \(\eta _{i,j}=-\eta _{j,i}\in \mathbb {R}\) represents the inter-correlation in time between the two coordinates. Note that, contrary to the single-dimensional fBm, they need not be time-reversible. In particular, an mfBm is time-reversible if and only if \(\eta _{i,j}=0\) for all \(i,j\) [19, Prop. 6]. Moreover, the parameters \(\eta _{i,j}\) have the following interpretation [19]:
(i)
If the one-dimensional fBms are short-range dependent (i.e., if \(H_i,H_j<1/2\)), then they are either short-range interdependent if \(\rho _{i,j}\ne 0\) or \(\eta _{i,j}\ne 0\), or independent if \(\rho _{i,j}=\eta _{i,j}=0\). This also holds when \(H_i+H_j<1\), even if one of them is larger than or equal to 1/2.
 
(ii)
If the one-dimensional fBms are long-range dependent (i.e., if \(H_i,H_j>1/2\)), then they are either long-range interdependent if \(\rho _{i,j}\ne 0\) or \(\eta _{i,j}\ne 0\), or independent if \(\rho _{i,j}=\eta _{i,j}=0\). This also holds when \(H_i+H_j>1\), even if one of them is smaller than or equal to 1/2.
 
(iii)
If the one-dimensional fBms are Brownian motions (i.e., if \(H_i=H_j=1/2\)), then they are either long-range interdependent if \(\eta _{i,j}\ne 0\), or independent if \(\eta _{i,j}=0\). This also holds whenever \(H_i+H_j=1\), even if neither of them is equal to 1/2.
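The covariance expressions displayed above are straightforward to evaluate; the following sketch (our own, with hypothetical parameters, and restricted to \(t<s<0\) exactly as in the display) does so, and checks that for \(i=j\) (so that \(\rho =1\) and \(\eta =0\)) it reduces to the one-dimensional fBm covariance.
```python
# Sketch (transcribing the displayed formulas; all parameters are hypothetical):
# cross-covariance of the mfBm arrival processes at time points t < s < 0,
# parameterized by sigma, H, rho and eta as in Section 4.1.
import numpy as np

def mfbm_cov(t, s, sigma_i, sigma_j, H_i, H_j, rho, eta):
    assert t < s < 0
    Hs = H_i + H_j
    if not np.isclose(Hs, 1.0):
        return 0.5 * sigma_i * sigma_j * (
            (rho - eta) * abs(t) ** Hs
            + (rho + eta) * abs(s) ** Hs
            - (rho - eta) * abs(s - t) ** Hs
        )
    # boundary case H_i + H_j = 1
    return 0.5 * sigma_i * sigma_j * (
        rho * (abs(t) + abs(s) - abs(s - t))
        + eta * (s * np.log(abs(s)) - t * np.log(abs(t)) - (s - t) * np.log(abs(s - t)))
    )

# Sanity check: for i = j (rho = 1, eta = 0) this is the usual fBm covariance.
print(mfbm_cov(-2.0, -1.0, 1.0, 1.0, 0.7, 0.7, 1.0, 0.0))
print(0.5 * (2.0**1.4 + 1.0**1.4 - 1.0**1.4))   # (sigma^2/2)[|t|^{2H}+|s|^{2H}-|s-t|^{2H}]
```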
 

4.2 Nonnegatively correlated, non-short-range-dependent inputs

We now present the main result of this section.
Theorem 7
Fix some node i. Suppose that \(H_j=H \ge 1/2\), for all \(j\in \{1, \ldots ,k\}\), that \(\eta _{j,l}=0\), for all \(j,l\in \{1, \ldots ,k\}\), and that \(\rho _{j,l}\ge 0\), for all \(j,l\in \{1, \ldots ,k\}\). Moreover, suppose that
$$\begin{aligned}&\min \left\{ \mu _j - \lambda _j -\sum \limits _{l\in \mathcal {N}_\mathrm{in}(j)} \mu _l p_{l,j} : j\ne i \right\} >\nonumber \\&\quad \sup \limits _{\alpha \in (0,1)^{|\mathcal {P}_2(i)|}} \left\{ \frac{ \sum \limits _{r\in \mathcal {P}_2(i)} \left( \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \left[ \left( \alpha _r\right) ^{2H}+ 1 - \left( 1- \alpha _r\right) ^{2H}\right] \Pi _r }{ \left( \sum \limits _{r\in \mathcal {P}_2(i)} \alpha _r \Pi _r \right) } \right\} \left( \frac{\mu _i-\overline{\lambda }_i}{2H \overline{\sigma }_i^2}\right) , \end{aligned}$$
(15)
where
$$\begin{aligned} \overline{\sigma }_i^2 \triangleq \sigma _i^2 + \sum \limits _{r\in \mathcal {P}_2(i)} \left( 2 \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \Pi _r. \end{aligned}$$
Then, for every \(b>0\),
$$\begin{aligned} - \lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q^{(n)}_i > nb \right) = \frac{1}{2\overline{\sigma }_i^2}\left( \frac{b}{1-H} \right) ^{2-2H}\left( \frac{\mu _i - \overline{\lambda }_i}{H} \right) ^{2H}. \end{aligned}$$
The proof is given in Appendix G, and amounts to checking that Theorem 4 applies in this case, and then computing the exact decay rate.
Remark 9
Note that this decay rate is the same as the one that we would obtain in a single-server queue with processing rate \(\mu _i\) and input
$$\begin{aligned} \sum \limits _{r\in \mathcal {P}_1(i)} A^{(n)}_{r_1}(\cdot )\Pi _r. \end{aligned}$$
This means that under the assumptions of Theorem 7, in this regime, the queues upstream of node i are ‘transparent.’ In particular, this implies that the most likely overflow path is the one where all upstream queues are empty.
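As a quick numerical illustration of Theorem 7 and of Remark 9 (our own sketch; the two-node tandem and all parameter values below are made up, and \(\mu _2\) is simply taken large so that (15) plausibly holds), one can compute \(\overline{\lambda }_2\) and \(\overline{\sigma }_2^2\), evaluate the closed-form decay rate, and check that it agrees with a direct numerical minimization of the corresponding isolated-queue expression.
```python
# Sketch (illustrative only, hypothetical parameters): two-node tandem 1 -> 2 with
# routing fraction p, fBm inputs with common Hurst index H and correlation rho.
# Compares the closed form of Theorem 7 for node 2 with a direct numerical
# minimization of the isolated-queue expression of Remark 9.
import numpy as np
from scipy.optimize import minimize_scalar

H, b = 0.7, 1.0
lam1, lam2 = 1.0, 0.5
sig1, sig2 = 0.6, 0.4
rho = 0.2                      # nonnegative correlation, as required
p = 0.8                        # fraction of node 1's output routed to node 2
mu2 = 5.0                      # assumed large enough for condition (15) to hold

# Effective drift and variance parameter at node 2: paths (2) and (1, 2).
lam_bar = lam2 + p * lam1
sig_bar2 = sig2**2 + (2 * sig1 * sig2 * rho + p * sig1**2) * p

closed_form = (1.0 / (2 * sig_bar2)) * (b / (1 - H))**(2 - 2 * H) \
              * ((mu2 - lam_bar) / H)**(2 * H)

# Isolated-queue decay rate with input variance sig_bar2 * t^{2H} (cf. Remark 9).
obj = lambda t: (b + (mu2 - lam_bar) * t)**2 / (2 * sig_bar2 * t**(2 * H))
numeric = minimize_scalar(obj, bounds=(1e-6, 50.0), method="bounded").fun

print(closed_form, numeric)   # these should agree up to numerical tolerance
```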
Remark 10
In the case of a pair of queues in tandem with arrivals only to the first queue, the condition in (15) is the same as the one obtained in the analysis of tandem queues in [7].

5 Conclusions

We have considered an acyclic network of queues with (possibly correlated) Gaussian inputs and static routing, and characterized the large deviations behavior of the steady-state queue length in each queue of the network. We achieved this by defining an appropriate multi-dimensional reproducing kernel Hilbert space, and using Schilder’s theorem to obtain lower and upper bounds for the asymptotic exponential decay rate. This generalizes previous results, which focused on isolated queues and two-queue tandem systems (with arrivals only to the first queue).
While the results that we obtain are quite general both in terms of the network structure and in terms of the correlation structure among the arrival processes to the different nodes, there are still interesting open problems. For instance:
(i)
While we considered essentially only single-class traffic with a deterministic split of the work departing from each server, it would be interesting to extend our results to multi-class networks, where the servers are shared by using, for example, the generalized processor sharing discipline [12].
 
(ii)
While we only obtained large-deviations results for each queue separately, it would be interesting to obtain similar results for the joint queue lengths.
 

Acknowledgements

Helpful comments by Sem Borst are greatly appreciated.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix

Appendix A. Proof of Lemma 1

We prove this by induction on the maximum length of paths that end in node i. Suppose that the maximum length is one. Then, \(\mathcal {P}_2(i)=\emptyset \) and thus
$$\begin{aligned} I^{(n)}_i(t,0) = A^{(n)}_i(t,0). \end{aligned}$$
Now suppose that (8) holds for all nodes j such that the maximum length of paths that end in j is at most one less than the maximum length of paths that end in node i. Recall that
$$\begin{aligned} D^{(n)}_j(t,0)&= Q^{(n)}_j(t) + I^{(n)}_j(t,0) - Q^{(n)}_j(0),\\ Q^{(n)}_j(t)&= \sup \limits _{s<t} \left\{ I^{(n)}_j(s,t) - n\mu _j (t-s) \right\} . \end{aligned}$$
Combining the last two equations, we obtain that \(I^{(n)}_i(t,0)\) equals
$$\begin{aligned} A^{(n)}_i(t,0)&+ \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} D^{(n)}_j(t,0) \\&\quad = A^{(n)}_i(t,0) + \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} \left[ \sup \limits _{t_j<t} \left\{ I^{(n)}_j(t_j,t) - n\mu _j (t-t_j) \right\} \right. \\&\qquad \left. +\,\, I^{(n)}_j(t,0) - \sup \limits _{s_j<0} \left\{ I^{(n)}_j(s_j,0) + n\mu _j s_j \right\} \right] \\&\quad = A^{(n)}_i(t,0) + \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} \left[ \sup \limits _{t_j<t} \left\{ I^{(n)}_j(t_j,0) - n\mu _j (t-t_j) \right\} \right. \\&\qquad \left. - \sup \limits _{s_j<0} \left\{ I^{(n)}_j(s_j,0) + n\mu _j s_j \right\} \right] . \end{aligned}$$
Since all j are inbound neighbors of i, and the graph is acyclic, the maximum lengths of paths that end in nodes j are at most one less than the maximum length of paths that end in node i. Then, using the inductive hypothesis on the input processes \(I^{(n)}_j(t_j,0)\), \(I^{(n)}_i(t,0)\) equals \(A^{(n)}_i(t,0)\) increased by
$$\begin{aligned}&\sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} \left[ \sup \limits _{t_j<t} \left\{ A^{(n)}_j(t_j,0) + \sup \limits _{{\varvec{t}}\in \mathcal {T}_j(t_j)} \left\{ \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0)+ n\mu _{r_1}({\varvec{t}}_{r}-{\varvec{t}}_{r_+}) \right] \Pi _r \right\} \right. \right. \\&\qquad \left. - \sup \limits _{{\varvec{s}}\in \mathcal {T}_j(0)} \left\{ \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{s}}_{r},0)+ n\mu _{r_1}({\varvec{s}}_{r}-{\varvec{s}}_{r_+}) \right] \Pi _r \right\} - n\mu _j (t-t_j) \right\} \\&\qquad - \sup \limits _{s_j<0} \left\{ A^{(n)}_j(s_j,0) + \sup \limits _{{\varvec{t}}\in \mathcal {T}_j(s_j)} \left\{ \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0)+ n\mu _{r_1}({\varvec{t}}_{r}-{\varvec{t}}_{r_+}) \right] \Pi _r \right\} \right. \\&\qquad \left. \left. - \sup \limits _{{\varvec{s}}\in \mathcal {T}_j(0)} \left\{ \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{s}}_{r},0)+ n\mu _{r_1}({\varvec{s}}_{r}-{\varvec{s}}_{r_+}) \right] \Pi _r \right\} + n\mu _j s_j \right\} \right] =\\&\sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} \left[ \sup \limits _{t_j<t} \left\{ A^{(n)}_j(t_j,0) + \sup \limits _{{\varvec{t}}\in \mathcal {T}_j(t_j)} \left\{ \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0)+ n\mu _{r_1}({\varvec{t}}_{r}-{\varvec{t}}_{r_+}) \right] \Pi _r \right\} \right. \right. \\&\qquad \left. \left. - n\mu _j (t-t_j) \right\} \right. \\&\qquad \left. - \sup \limits _{s_j<0} \left\{ A^{(n)}_j(s_j,0) + \sup \limits _{{\varvec{t}}\in \mathcal {T}_j(s_j)} \left\{ \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0)+ n\mu _{r_1}({\varvec{t}}_{r}-{\varvec{t}}_{r_+}) \right] \Pi _r \right\} \right. \right. \\&\qquad \left. \left. + n\mu _j s_j \right\} \right] . \end{aligned}$$
After renaming the variables for ease of exposition, and using that, for any constant \(C\ge 0\),
$$\begin{aligned} C\cdot \sup \limits _x \left\{ f(x) + \sup \limits _y \big \{ g(y) \big \} \right\} = \sup \limits _{x,y} \Big \{ C \big [f(x) + g(y)\big ] \Big \}, \end{aligned}$$
and that
$$\begin{aligned} \sup \limits _x \big \{ f(x) \big \} + \sup \limits _y \big \{ g(y) \big \} = \sup \limits _{x,y} \big \{ f(x) + g(y) \big \}, \end{aligned}$$
we obtain that \(I^{(n)}_i(t,0)\) equals \(A^{(n)}_i(t,0)\) increased by
$$\begin{aligned}&\sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} \left[ \sup \limits _{t_j<t} \left\{ A^{(n)}_j(t_j,0) + \sup \limits _{{\varvec{t}}^{(j)}\in \mathcal {T}_j(t_j)} \left\{ \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{t}}^{(j)}_r,0)\right. \right. \right. \right. \\&\qquad \left. \left. \left. \left. + n\mu _{r_1}({\varvec{t}}^{(j)}_r-{\varvec{t}}^{(j)}_{r_+}) \right] \Pi _r \right\} - n\mu _j (t-t_j) \right\} \right. \\&\qquad \left. - \sup \limits _{s_j<0} \left\{ A^{(n)}_j(s_j,0) + \sup \limits _{{\varvec{s}}^{(j)}\in \mathcal {T}_j(s_j)} \left\{ \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{s}}^{(j)}_r,0)\right. \right. \right. \right. \\&\qquad \left. \left. \left. \left. + n\mu _{r_1}({\varvec{s}}^{(j)}_r-{\varvec{s}}^{(j)}_{r_+}) \right] \Pi _r \right\} + n\mu _j s_j \right\} \right] \\&= \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} \sup \limits _{t_j<t} \left\{ \sup \limits _{{\varvec{t}}^{(j)}\in \mathcal {T}_j(t_j)} \left\{ p_{j,i} \left[ A^{(n)}_j(t_j,0)\right. \right. \right. \\&\qquad \left. \left. \left. +\sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{t}}^{(j)}_r,0)+ n\mu _{r_1}({\varvec{t}}^{(j)}_r-{\varvec{t}}^{(j)}_{r_+}) \right] \Pi _r - n\mu _j (t-t_j) \right] \right\} \right\} \\&\qquad - \sup \limits _{s_j<0} \left\{ \sup \limits _{{\varvec{s}}^{(j)}\in \mathcal {T}_j(s_j)} \left\{ p_{j,i} \left[ A^{(n)}_j(s_j,0)\right. \right. \right. \\&\qquad \left. \left. \left. + \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{s}}^{(j)}_r,0)+ n\mu _{r_1}({\varvec{s}}^{(j)}_r-{\varvec{s}}^{(j)}_{r_+}) \right] \Pi _r + n\mu _j s_j \right] \right\} \right\} \\&= \sup \limits _{\begin{array}{c} t_\ell<t, \\ \forall \ell \in \mathcal {N}_\mathrm{in}(i) \end{array}} \left\{ \sup \limits _{\begin{array}{c} {\varvec{t}}^{(\ell )}\in \mathcal {T}_\ell (t_\ell ), \\ \forall \ell \in \mathcal {N}_\mathrm{in}(i) \end{array}} \left\{ \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} \left[ A^{(n)}_j(t_j,0) + n\mu _j (t_j-t)\right. \right. \right. \\&\qquad \left. \left. \left. +\sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{t}}^{(j)}_r,0)+ n\mu _{r_1}({\varvec{t}}^{(j)}_r-{\varvec{t}}^{(j)}_{r_+}) \right] \Pi _r \right] \right\} \right\} \\&\qquad - \sup \limits _{\begin{array}{c} s_\ell <0, \\ \forall \ell \in \mathcal {N}_\mathrm{in}(i) \end{array}} \left\{ \sup \limits _{\begin{array}{c} {\varvec{s}}^{(\ell )}\in \mathcal {T}_\ell (s_\ell ), \\ \forall \ell \in \mathcal {N}_\mathrm{in}(i) \end{array}} \left\{ \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} \left[ A^{(n)}_j(s_j,0) + n\mu _j s_j + \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{s}}^{(j)}_r,0)\right. \right. \right. \right. \\&\qquad \left. \left. \left. \left. + n\mu _{r_1}({\varvec{s}}^{(j)}_r-{\varvec{s}}^{(j)}_{r_+}) \right] \Pi _r \right] \right\} \right\} . \end{aligned}$$
Finally, note that there is a one-to-one correspondence between the paths in \(\mathcal {P}_2(i)\) and the paths in
$$\begin{aligned} \bigcup \limits _{j\in \mathcal {N}_\mathrm{in}(i)} \mathcal {P}_2(j) \cup \{ (j) \}, \end{aligned}$$
where each \(r'\in \mathcal {P}_2(i)\) is of the form \(r'=(r_1, \ldots ,r_{|r|},i)\) for some \(r\in \mathcal {P}_2(j) \cup \{ (j) \}\), \(j\in \mathcal {N}_\mathrm{in}(i)\). Then, we can rename the variables once more to obtain
$$\begin{aligned}&\sup \limits _{\begin{array}{c} t_\ell<t, \\ \forall \ell \in \mathcal {N}_\mathrm{in}(i) \end{array}} \left\{ \sup \limits _{\begin{array}{c} {\varvec{t}}^{(\ell )}\in \mathcal {T}_\ell (t_\ell ), \\ \forall \ell \in \mathcal {N}_\mathrm{in}(i) \end{array}} \left\{ \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} \left[ A^{(n)}_j(t_j,0) + n\mu _j (t_j-t)\right. \right. \right. \\&\left. \left. \left. \qquad + \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{t}}^{(j)}_r,0)+ n\mu _{r_1}({\varvec{t}}^{(j)}_r-{\varvec{t}}^{(j)}_{r_+}) \right] \Pi _r \right] \right\} \right\} \\&\qquad - \sup \limits _{\begin{array}{c} s_\ell <0, \\ \forall \ell \in \mathcal {N}_\mathrm{in}(i) \end{array}} \left\{ \sup \limits _{\begin{array}{c} {\varvec{s}}^{(\ell )}\in \mathcal {T}_\ell (s_\ell ), \\ \forall \ell \in \mathcal {N}_\mathrm{in}(i) \end{array}} \left\{ \sum \limits _{j\in \mathcal {N}_\mathrm{in}(i)} p_{j,i} \left[ A^{(n)}_j(s_j,0) + n\mu _j s_j\right. \right. \right. \\&\qquad \left. \left. \left. + \sum \limits _{r\in \mathcal {P}_2(j)} \left[ A^{(n)}_{r_1}({\varvec{s}}^{(j)}_r,0)+ n\mu _{r_1}({\varvec{s}}^{(j)}_r-{\varvec{s}}^{(j)}_{r_+}) \right] \Pi _r \right] \right\} \right\} \\&=\sup \limits _{{\varvec{t}}\in \mathcal {T}_i(t)} \left\{ \sum \limits _{r'\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r'_1}({\varvec{t}}_{r'},0)+ n\mu _{r'_1}({\varvec{t}}_{r'}-{\varvec{t}}_{r'_+}) \right] \Pi _{r'} \right\} \\&\qquad -\sup \limits _{{\varvec{s}}\in \mathcal {T}_i(0)} \left\{ \sum \limits _{r'\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r'_1}({\varvec{s}}_{r'},0)+ n\mu _{r'_1}({\varvec{s}}_{r'}-{\varvec{s}}_{r'_+}) \right] \Pi _{r'} \right\} . \end{aligned}$$

Appendix B. Proof of Theorem 2

By Reich’s formula, we have
$$\begin{aligned} \mathbb {P}\left( Q^{(n)}_i> nb\right)&= \mathbb {P}\left( \sup \limits _{{\varvec{t}}_i< 0} \left\{ I^{(n)}_i({\varvec{t}}_i,0) + n\mu _i {\varvec{t}}_i \right\} > nb\right) . \end{aligned}$$
By Lemma 1, we obtain
$$\begin{aligned} \mathbb {P}\left( Q^{(n)}_i> nb\right)&= \mathbb {P}\left( \sup \limits _{{\varvec{t}}_i< 0} \left\{ A^{(n)}_i({\varvec{t}}_i,0)\right. \right. \\&\qquad \left. \left. + \sup \limits _{{\varvec{t}}\in \mathcal {T}_i({\varvec{t}}_i)} \left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0)+ n\mu _{r_1}({\varvec{t}}_{r}-{\varvec{t}}_{r_+}) \right] \Pi _r \right. \right. \right. \\&\qquad \left. \left. \left. - \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \left\{ \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{s}}_{r},0)+ n\mu _{r_1}({\varvec{s}}_{r}-{\varvec{s}}_{r_+}) \right] \Pi _r \right\} \right\} \right. \right. \\&\qquad \left. \left. + n\mu _i {\varvec{t}}_i \right\}> nb\right) \\&= \mathbb {P}\left( \exists \, {\varvec{t}}_i<0,\,\, {\varvec{t}}\in \mathcal {T}_i({\varvec{t}}_i) : \forall \, {\varvec{s}}\in \mathcal {S}_i({\varvec{t}}) : \frac{1}{n} \left( A^{(n)}_i({\varvec{t}}_i,0)\right. \right. \\&\qquad \left. \left. +\sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r},0) - A^{(n)}_{r_1}({\varvec{s}}_{r},0) \right] \Pi _r \right) \right. \\&\qquad \left.> b - \mu _i {\varvec{t}}_i - \sum \limits _{r\in \mathcal {P}_2(i)} \Big [\mu _{r_1}\big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big )-\mu _{r_1}\big ({\varvec{t}}_{r_+}-{\varvec{s}}_{r_+}\big )\Big ] \Pi _r \right) \\&= \mathbb {P}\left( \exists \, {\varvec{t}}\in \mathcal {T}_i : \forall \, {\varvec{s}}\in \mathcal {S}_i({\varvec{t}}) : \frac{1}{n} \left( -A^{(n)}_i({\varvec{t}}_i)\right. \right. \\&\qquad \left. \left. -\sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r}) - A^{(n)}_{r_1}({\varvec{s}}_{r}) \right] \Pi _r \right) \right. \\&\qquad \left. > b - \mu _i {\varvec{t}}_i - \sum \limits _{r\in \mathcal {P}_2(i)} \Big [\mu _{r_1}\big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big )-\mu _{r_1}\big ({\varvec{t}}_{r_+}-{\varvec{s}}_{r_+}\big )\Big ] \Pi _r \right) \\&= \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \, \cdot \,}{n} \in \tilde{\mathcal {E}}^i(b) \right) , \end{aligned}$$
where
$$\begin{aligned}&\tilde{\mathcal {E}}^i(b) \triangleq \left\{ f\in \Omega ^k: \exists \, {\varvec{t}}\in \mathcal {T}_i : \forall \, {\varvec{s}}\in \mathcal {S}_i({\varvec{t}}),\,\, - f_i({\varvec{t}}_i) - \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \right. \\&\qquad \qquad \left. > b - (\mu _i-\lambda _i) {\varvec{t}}_i - \sum \limits _{r\in \mathcal {P}_2(i)} \Big [\big (\mu _{r_1}-\lambda _{r_1}\big )\big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big )-\mu _{r_1}\big ({\varvec{t}}_{r_+}-{\varvec{s}}_{r_+}\big )\Big ] \Pi _r \right\} . \end{aligned}$$
Since the centered Gaussian processes are symmetric, we have
$$\begin{aligned} \mathbb {P}\left( Q^{(n)}_i > nb\right)&= \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \, \cdot \,}{n} \in \mathcal {E}_i(b) \right) , \end{aligned}$$
where
$$\begin{aligned}&\mathcal {E}_i(b) \triangleq \left\{ f\in \Omega ^k: \exists \, {\varvec{t}}\in \mathcal {T}_i : \forall \, {\varvec{s}}\in \mathcal {S}_i({\varvec{t}}),\,\, f_i({\varvec{t}}_i) +\sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \right. \\&\qquad \qquad \left. > b - (\mu _i-\lambda _i) {\varvec{t}}_i - \sum \limits _{r\in \mathcal {P}_2(i)} \Big [\big (\mu _{r_1}-\lambda _{r_1}\big )\big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big )-\mu _{r_1}\big ({\varvec{t}}_{r_+}-{\varvec{s}}_{r_+}\big )\Big ] \Pi _r \right\} . \end{aligned}$$
Using that \({\varvec{s}}_i=0\) and that \(\Pi _r=p_{r_1,r_2} \Pi _{r_+}\), we obtain
$$\begin{aligned}&\mathcal {E}_i(b) \triangleq \left\{ f\in \Omega ^k: \exists \, {\varvec{t}}\in \mathcal {T}_i : \forall \, {\varvec{s}}\in \mathcal {S}_i({\varvec{t}}),\,\, f_i({\varvec{t}}_i) +\sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \right. \nonumber \\&\qquad \qquad \left. > b - \sum \limits _{r\in \mathcal {P}_1(i)} \big (\mu _{r_1}-\lambda _{r_1}\big )\big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big ) \Pi _r - \sum \limits _{r\in \mathcal {P}_2(i)} -\mu _{r_1}p_{r_1,r_2} \big ({\varvec{t}}_{r_+}-{\varvec{s}}_{r_+}\big ) \Pi _{r_+} \right\} . \end{aligned}$$
(16)
Note that, for every \(r,r'\in \mathcal {P}_2(i)\) such that \(|r|=|r'|\) and \(r_{\ell }=r'_\ell \) for all \(\ell \ge 2\), we have \(r_+=r'_+\). It follows that
$$\begin{aligned} \sum \limits _{r\in \mathcal {P}_2(i)} -\mu _{r_1}p_{r_1,r_2} \big ({\varvec{t}}_{r_+}-{\varvec{s}}_{r_+}\big ) \Pi _{r_+} = \sum \limits _{r_+ : r\in \mathcal {P}_2(i)} \sum \limits _{j\in \mathcal {N}_\mathrm{in}((r_+)_1)} - \mu _{j}p_{j,(r_+)_1} \big ({\varvec{t}}_{r_+}-{\varvec{s}}_{r_+}\big ) \Pi _{r_+}. \end{aligned}$$
(17)
Moreover, note that
$$\begin{aligned} \big \{r_+ : r\in \mathcal {P}_2(i) \big \} = \big \{r\in \mathcal {P}_1(i): r\text { is not maximal}\big \}, \end{aligned}$$
and that \(\mathcal {N}_\mathrm{in}(r_1)=\emptyset \) for every maximal path \(r\in \mathcal {P}_1(i)\). Therefore,
$$\begin{aligned}&\sum \limits _{r_+ : r\in \mathcal {P}_2(i)} \sum \limits _{j\in \mathcal {N}_\mathrm{in}((r_+)_1)} - \mu _{j}p_{j,(r_+)_1} \big ({\varvec{t}}_{r_+}-{\varvec{s}}_{r_+}\big ) \Pi _{r_+} \\&\quad = \sum \limits _{r\in \mathcal {P}_1(i)} \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} - \mu _{j}p_{j,r_1} \big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big ) \Pi _{r}. \end{aligned}$$
Finally, combining this with Eqs. (16) and (17), we get
$$\begin{aligned} \mathcal {E}_i(b)&\triangleq \left\{ f\in \Omega ^k: \exists \, {\varvec{t}}\in \mathcal {T}_i : \forall \, {\varvec{s}}\in \mathcal {S}_i({\varvec{t}}),\,\, f_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \right. \\&\qquad \left. > b - \sum \limits _{r\in \mathcal {P}_1(i)} \left[ \left( \mu _{r_1}-\lambda _{r_1} - \sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1}\right) \big ({\varvec{t}}_{r}-{\varvec{s}}_{r}\big )\right] \Pi _r \right\} . \end{aligned}$$

Appendix C. Proof of Theorem 3

The proof consists of two steps. First, we decompose the event \(\mathcal {E}_i(b)\) given in Theorem 2 as a union of intersections of simpler events that only involve the sample paths at finitely many fixed times, and we upper bound the probability of each intersection by the probability of its least likely event (Lemma 3). Then, we use Cramér’s theorem to obtain the decay rate of this least likely event by solving the quadratic optimization problem that arises from its application (Lemma 4).
Lemma 3
We have
$$\begin{aligned} \inf \limits _{f\in \mathcal {E}_i(b)} \big \{ \mathbb {I}(f) \big \} \ge \inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \}, \end{aligned}$$
where
$$\begin{aligned} \mathcal {U}_{{\varvec{t}},{\varvec{s}}}&\triangleq \Big \{ f\in \Omega ^k : f_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{t}}_{r}-{\varvec{t}}_i)\Big ] \Pi _r \ge b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \,\,\text {and} \\&f_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \ge b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big \}. \end{aligned}$$
Remark 11
Note that the first condition in the definition of the set \(\mathcal {U}_{{\varvec{t}},{\varvec{s}}}\) is the same as the second one, but with \({\varvec{s}}_r={\varvec{t}}_r-{\varvec{t}}_i\), for all \(r\in \mathcal {P}_2(i)\). This generalizes Theorem 3.2 in [7], where an appropriate \(\mathcal {U}_{{\varvec{t}},{\varvec{s}}}\) is defined with the first condition equal to the second one but with \({\varvec{s}}_r=0\), for all \(r\in \mathcal {P}_2(i)\). In the case of a tandem with arrivals only to the first queue, both definitions are equivalent.
Proof
Recall that
$$\begin{aligned} \mathcal {E}_i(b)&\triangleq \left\{ f\in \Omega ^k: \exists \, {\varvec{t}}\in \mathcal {T}_i : \forall \, {\varvec{s}}\in \mathcal {S}_i({\varvec{t}}),\,\, f_i({\varvec{t}}_i)\right. \\&\quad \left. +\sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r > b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \right\} . \end{aligned}$$
Thus,
$$\begin{aligned} \mathcal {E}_i(b) = \bigcup \limits _{{\varvec{t}}\in \mathcal {T}_i} \bigcap \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \mathcal {E}_{{\varvec{t}},{\varvec{s}}}, \end{aligned}$$
where
$$\begin{aligned}&\mathcal {E}_{{\varvec{t}},{\varvec{s}}} \triangleq \left\{ f\in \Omega ^k: f_i({\varvec{t}}_i) +\sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r > b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \right\} . \end{aligned}$$
Then, we have
$$\begin{aligned} \inf \limits _{f\in \mathcal {E}_i(b)} \big \{ \mathbb {I}(f) \big \} = \inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \inf \limits _{f\in \bigcap \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \mathcal {E}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \}. \end{aligned}$$
(18)
Now fix \({\varvec{t}}\in \mathcal {T}_i\), and consider the innermost infimum. Since \(f\) is continuous,
$$\begin{aligned} f_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r > b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \end{aligned}$$
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}})\) implies
$$\begin{aligned} f_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f_{r_1}({\varvec{t}}_{r}) - f_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \ge b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \end{aligned}$$
for all \({\varvec{s}}\in \overline{\mathcal {S}_i({\varvec{t}})}\). Hence
$$\begin{aligned} \bigcap \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \mathcal {E}_{{\varvec{t}},{\varvec{s}}} \subset \bigcap \limits _{{\varvec{s}}\in \overline{\mathcal {S}_i({\varvec{t}})}} \mathcal {U}_{{\varvec{t}},{\varvec{s}}} \subset \mathcal {U}_{{\varvec{t}},{\varvec{r}}}, \end{aligned}$$
for all \({\varvec{r}}\in \mathcal {S}_i({\varvec{t}})\), and thus
$$\begin{aligned} \inf \limits _{f\in \bigcap \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \mathcal {E}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \} \ge \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{r}}}} \big \{ \mathbb {I}(f) \big \}. \end{aligned}$$
Therefore,
$$\begin{aligned} \inf \limits _{f\in \bigcap \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \mathcal {E}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \} \ge \sup \limits _{{\varvec{r}}\in \mathcal {S}_i({\varvec{t}})} \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{r}}}} \big \{ \mathbb {I}(f) \big \}. \end{aligned}$$
Combining this with (18) completes the proof.
Remark 12
Note that, by taking the supremum over all \({\varvec{r}}\in \mathcal {S}_i({\varvec{t}})\) at the end of the proof, we are essentially upper bounding the probability of an intersection with the probability of the least likely event.
While Lemma 3 brings us closer to the desired expression for the limiting overflow probability, its right-hand side still depends on the rate function \(\mathbb {I}\). We now compute the innermost infimum \(\inf _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{s}}}} \{\mathbb {I}(f)\}\) explicitly.
Lemma 4
Under Assumption 2, for \({\varvec{t}}\in \mathcal {T}_i\) and \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}})\), we have
$$\begin{aligned} \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \} = {\left\{ \begin{array}{ll} \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )}, \quad \text {if } \ k^b_i({\varvec{t}},{\varvec{s}}) < c_i({\varvec{t}},{\varvec{s}}),\\ \qquad \qquad \qquad \qquad \qquad \qquad \text {or} \ \quad {\varvec{s}}={\varvec{t}}-{\varvec{t}}_i, \\ \frac{\Big [b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \Big )}, \quad \text {if } \ h^b_i({\varvec{t}},{\varvec{s}})> c_i({\varvec{t}},{\varvec{s}}), \\ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )} + \frac{\Big [ k^b_i({\varvec{t}},{\varvec{s}})- c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \,\Big |\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big )}, \quad \text {otherwise.} \end{array}\right. } \end{aligned}$$
Proof
Recall that
$$\begin{aligned} \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \, \cdot \,}{n} \in \mathcal {U}_{{\varvec{t}},{\varvec{s}}} \right) \end{aligned}$$
can be rewritten as
$$\begin{aligned} \mathbb {P}\Bigg ( \frac{1}{n}&\left( A^{(n)}_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r}) - A^{(n)}_{r_1}({\varvec{t}}_{r}-{\varvec{t}}_i) \right] \Pi _r \right) \ge b - \mu _i {\varvec{t}}_i \qquad \text {and} \nonumber \\&\qquad \frac{1}{n} \left( A^{(n)}_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r}) - A^{(n)}_{r_1}({\varvec{s}}_{r}) \right] \Pi _r \right) \nonumber \\&\ge b - \mu _i {\varvec{t}}_i - \sum \limits _{r\in \mathcal {P}_2(i)} \mu _{r_1}\Big [\big ({\varvec{t}}_{r}-{\varvec{t}}_{r_+}\big )-\big ({\varvec{s}}_{r}-{\varvec{s}}_{r_+}\big )\Big ] \Pi _r \Bigg ). \end{aligned}$$
(19)
Since this probability only depends on the trajectories at finitely many fixed points in time, that is, only on a finite set of Gaussian random variables, it follows that \(\mathcal {U}_{{\varvec{t}},{\varvec{s}}}\) is an \(\mathbb {I}\)-continuity set, and thus Schilder’s theorem implies that
$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n} \log \mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \, \cdot \,}{n} \in \mathcal {U}_{{\varvec{t}},{\varvec{s}}} \right) = \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \}. \end{aligned}$$
(20)
We now proceed to compute the left-hand side.
First, consider the exceptional case where \({\varvec{s}}={\varvec{t}}-{\varvec{t}}_i\). Substituting this in (19), we get
$$\begin{aligned}&\mathbb {P}\left( \frac{A^{(n)}(\cdot )-n\lambda \, \cdot \,}{n} \in \mathcal {U}_{{\varvec{t}},{\varvec{s}}} \right) \nonumber \\&=\mathbb {P}\left( \frac{1}{n} \left( A^{(n)}_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r}) - A^{(n)}_{r_1}({\varvec{t}}_{r}-{\varvec{t}}_i) \right] \Pi _r \right) \ge b - \mu _i {\varvec{t}}_i \right) . \end{aligned}$$
(21)
Moreover, by Cramér’s theorem, we have that
$$\begin{aligned}&-\lim \limits _{n\rightarrow \infty } \frac{1}{n} \log \mathbb {P}\left( \frac{1}{n} \left( A^{(n)}_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r}) - A^{(n)}_{r_1}({\varvec{t}}_{r}-{\varvec{t}}_i) \right] \Pi _r \right) \ge b - \mu _i {\varvec{t}}_i \right) \\&\qquad =\frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }. \end{aligned}$$
Combining this with (20) and (21), we obtain
$$\begin{aligned} \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \} = \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }. \end{aligned}$$
Now consider the case when \({\varvec{s}}\ne {\varvec{t}}-{\varvec{t}}_i\). By the multivariate version of Cramér’s theorem, we have that
$$\begin{aligned}&\lim \limits _{n\rightarrow \infty } \frac{1}{n} \log \mathbb {P}\Bigg ( \frac{1}{n} \left( A^{(n)}_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r}) - A^{(n)}_{r_1}({\varvec{s}}_{r}) \right] \Pi _r \right) \ge \\&\qquad \qquad b - \mu _i {\varvec{t}}_i - \sum \limits _{r\in \mathcal {P}_2(i)} \mu _{r_1}\Big [\big ({\varvec{t}}_{r}-{\varvec{t}}_{r_+}\big )-\big ({\varvec{s}}_{r}-{\varvec{s}}_{r_+}\big )\Big ] \Pi _r, \\&\frac{1}{n} \left( A^{(n)}_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \left[ A^{(n)}_{r_1}({\varvec{t}}_{r}) - A^{(n)}_{r_1}({\varvec{t}}_{r}-{\varvec{t}}_i) \right] \Pi _r \right) \ge b - \mu _i {\varvec{t}}_i \Bigg ) \\&\qquad \qquad \qquad = \inf \Big \{ \Lambda _{{\varvec{t}},{\varvec{s}}}(y,z) : y\ge b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i;\\&\qquad \qquad \qquad \qquad \qquad \qquad z\ge b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big \}, \end{aligned}$$
where
$$\begin{aligned} \Lambda _{{\varvec{t}},{\varvec{s}}}(y,z) \triangleq \frac{1}{2}\left( y,\, z\right) \begin{pmatrix} \mathbb {V}\text {{ar}}\Big (\bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big ) &{} {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) \\ {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) &{} \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) \end{pmatrix}^{-1}\left( y,\, z\right) ^\top . \end{aligned}$$
(22)
Combining this with (19) and (20), we get that
$$\begin{aligned} \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \}&= \inf \Big \{ \Lambda _{{\varvec{t}},{\varvec{s}}}(y,z) : y\ge b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i;\,\, z\ge b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big \}. \end{aligned}$$
(23)
Since \(\Lambda _{{\varvec{t}},{\varvec{s}}}\) is quadratic and the constraints are linear, it follows by standard calculus that the optimal values of \(y\) and \(z\) satisfy
$$\begin{aligned} y^*&\triangleq \max \left\{ b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i,\,\, \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } \right] z^* \right\} . \end{aligned}$$
(24)
and
$$\begin{aligned} z^*&\triangleq \max \left\{ b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}),\,\, \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }\right] y^* \right\} , \end{aligned}$$
(25)
respectively. Although this gives four possible combinations for \((y^*,z^*)\), the following claim rules one of them out.
Claim 1
For all \({{\varvec{t}}}\in \mathcal {T}_i\) and \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}})\) such that \({\varvec{s}}\ne {\varvec{t}}-{\varvec{t}}_i\), we have that
$$\begin{aligned} y^* = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i, \qquad \text {and/or} \qquad z^*= b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}). \end{aligned}$$
Proof
Suppose that
$$\begin{aligned} y^* = \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } \right] z^* > b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i, \end{aligned}$$
(26)
and that
$$\begin{aligned} z^*&= \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }\right] y^* > b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}). \end{aligned}$$
Then, we have
$$\begin{aligned} y^* = \left[ \frac{{\mathbb {C}}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }\right] y^*, \end{aligned}$$
which is impossible because the Cauchy–Schwarz inequality implies that
$$\begin{aligned} \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } <1, \end{aligned}$$
(27)
for all \({\varvec{t}}\) and \({\varvec{s}}\) such that \({\varvec{s}}\ne {\varvec{t}}-{\varvec{t}}_i\). \(\square \)
Combining Claim 1 with (26) and (25), we conclude that \(z^* > b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \) if and only if
$$\begin{aligned}&b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) < \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right] \Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ], \end{aligned}$$
which is equivalent to
$$\begin{aligned}&c_i({\varvec{t}},{\varvec{s}}) > \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{s}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }\right] \Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ] = k^b_i({\varvec{t}},{\varvec{s}}). \end{aligned}$$
In that case, substituting the optimal values
$$\begin{aligned} y^*&= b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i ,\\ z^*&= \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right] \Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ] \end{aligned}$$
in (22), we obtain
$$\begin{aligned}&\Lambda _{{\varvec{t}},{\varvec{s}}}(y^*,z^*) \\&\quad = \frac{{y^*}^2 \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - 2 y^*z^* {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) + {z^*}^2 \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }{2\left[ \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2 \right] } \\&\quad = \frac{ \left[ \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - 2 \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } + \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }\right] \Big [ b - \left( \mu _i -\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\left[ \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2 \right] } \\&\quad = \frac{\Big [ b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }. \end{aligned}$$
Combining this with (23) we get that, if
$$\begin{aligned} k^b_i({\varvec{t}},{\varvec{s}}) < c_i({\varvec{t}},{\varvec{s}}), \end{aligned}$$
(28)
then
$$\begin{aligned} \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \} = \frac{\Big [ b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }. \end{aligned}$$
On the other hand, combining Claim 1 with Eqs. (26) and (25), we also get that
$$\begin{aligned} y^* > b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i \end{aligned}$$
if and only if
$$\begin{aligned}&\left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } \right] \left[ b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \right] > b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i, \end{aligned}$$
which is equivalent to
$$\begin{aligned}&c_i({\varvec{t}},{\varvec{s}}) < \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}),\,\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{s}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }\right] \Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big ] = h^b_i({\varvec{t}},{\varvec{s}}). \end{aligned}$$
In that case, substituting the optimal values
$$\begin{aligned} z^*&= b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) ,\\ y^*&= \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } \right] \left[ b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \right] \end{aligned}$$
in (22), we obtain that \(\Lambda _{{\varvec{t}},{\varvec{s}}}(y^*,z^*)\) equals
$$\begin{aligned}&\frac{{y^*}^2 \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - 2 y^*z^* {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) + {z^*}^2 \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }{2\left[ \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2 \right] } \\&\quad = \frac{ \left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } - 2 \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } + \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \right] \Big [ b - \left( \mu _i -\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\left[ \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2 \right] } \\&\quad =\frac{\left[ b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \right] ^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }. \end{aligned}$$
Combining this with (23) we get that, if
$$\begin{aligned} h^b_i({\varvec{t}},{\varvec{s}}) > c_i({\varvec{t}},{\varvec{s}}), \end{aligned}$$
(29)
then
$$\begin{aligned} \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \} = \frac{\left[ b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \right] ^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) }. \end{aligned}$$
Finally, if neither (28) nor (29) hold, Claim 1 implies that
$$\begin{aligned} y^*&= b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i,\\ z^*&= b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}). \end{aligned}$$
Combining this with (23), we obtain that \(\Lambda _{{\varvec{t}},{\varvec{s}}}(y^*,z^*) \) equals
$$\begin{aligned}&\frac{{y^*}^2 \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - 2 y^*z^* {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) + {z^*}^2 \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }{2\left[ \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2 \right] } \\&= \frac{{y^*}^2 \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) - 2 y^*z^* {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) + {z^*}^2 \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) ^2 }{2\left[ \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2 \right] \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \\&= \frac{{y^*}^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )} \\&\quad + \frac{\left[ z^* \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) - y^* {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) \right] ^2 }{2\left[ \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2 \right] \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \\&= \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )} \\&\quad + \frac{\left[ \big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \big ] {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{s}}) \right) - c_i({\varvec{t}},{\varvec{s}}) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \right] ^2 }{2\left[ \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) - {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2 \right] \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \\&= \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )} + \frac{\left[ \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{s}}) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } [b - (\mu _i-\overline{\lambda }_i) {\varvec{t}}_i] - c_i({\varvec{t}},{\varvec{s}}) \right] ^2 }{2\left[ 1 - \frac{{\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\,\, \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) ^2}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } \right] \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } \\&=\frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )} + \frac{\Big [ k^b_i({\varvec{t}},{\varvec{s}})- c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \,\Big |\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big )}. \end{aligned}$$
Combining this with (23) we get that, if
$$\begin{aligned} k^b_i({\varvec{t}},{\varvec{s}}) \ge c_i({\varvec{t}},{\varvec{s}}) \qquad \text {and} \qquad h^b_i({\varvec{t}},{\varvec{s}}) \le c_i({\varvec{t}},{\varvec{s}}), \end{aligned}$$
then
$$\begin{aligned} \inf \limits _{f\in \mathcal {U}_{{\varvec{t}},{\varvec{s}}}} \big \{ \mathbb {I}(f) \big \}&= \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \Big )} \\&\quad + \frac{\Big [ k^b_i({\varvec{t}},{\varvec{s}})- c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{s}},{\varvec{t}}) \,\Big |\, \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big )}, \end{aligned}$$
as desired. \(\square \) Combining Lemmas 3 and 4 concludes the proof of Theorem 3.
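As a sanity check on the case analysis in Lemma 4 (purely illustrative and not part of the argument; the covariance matrix and thresholds below are made up), the following Python sketch solves the constrained quadratic problem in (23) numerically and compares the result with the closed-form branches, written here in terms of the covariance matrix of \((\bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}),\, \bar{A}_i({\varvec{s}},{\varvec{t}}))\) and the right-hand sides \(a = b-(\mu _i-\overline{\lambda }_i){\varvec{t}}_i\) and \(d = a - c_i({\varvec{t}},{\varvec{s}})\).

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data: Sigma is the covariance matrix of (Y, Z) with
# Y = A_bar_i(t - t_i, t) and Z = A_bar_i(s, t); a and d are the two
# right-hand sides in (23). All numbers are made up, with a > 0.
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.5]])
a, d = 3.0, 1.0

def rate(v):
    # Quadratic form Lambda_{t,s}(y, z) = 0.5 * v^T Sigma^{-1} v, cf. (22).
    return 0.5 * v @ np.linalg.solve(Sigma, v)

# Numerical solution of inf{ Lambda(y, z) : y >= a, z >= d }, cf. (23).
num = minimize(rate, x0=np.array([a + 1.0, d + 1.0]),
               bounds=[(a, None), (d, None)]).fun

# Closed form, mirroring the three branches of Lemma 4.
if (Sigma[0, 1] / Sigma[0, 0]) * a >= d:        # z-constraint slack (k < c)
    closed = a ** 2 / (2 * Sigma[0, 0])
elif (Sigma[0, 1] / Sigma[1, 1]) * d >= a:      # y-constraint slack (h > c)
    closed = d ** 2 / (2 * Sigma[1, 1])
else:                                           # both constraints bind
    cond_var = Sigma[1, 1] - Sigma[0, 1] ** 2 / Sigma[0, 0]
    closed = a ** 2 / (2 * Sigma[0, 0]) \
        + (d - Sigma[0, 1] / Sigma[0, 0] * a) ** 2 / (2 * cond_var)

print(num, closed)   # should agree up to solver tolerance
```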

Appendix D. Proof of Theorem 4

Given Theorem 3, it is enough to show that if
$$\begin{aligned} k^b_i\left( {\varvec{t}}^*,{\varvec{s}}\right) < c_i\left( {\varvec{t}}^*,{\varvec{s}}\right) , \end{aligned}$$
(30)
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\) such that \({\varvec{s}}\ne {\varvec{t}}^*-{\varvec{t}}^*_i\), then
$$\begin{aligned} -\lim \limits _{n\rightarrow \infty } \frac{1}{n}\log \mathbb {P}\left( Q_i^{(n)} > b n \right) \le \inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \left\{ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right\} . \end{aligned}$$
In the proof of Theorem 3, the lower bound on the decay rate was obtained by replacing the decay rate of an intersection of events by the decay rate of the least likely of these events. Therefore, if the optimal path in this least likely set happens to lie in all the sets in the intersection, then the bound is tight. In particular, if \({\varvec{t}}^*\) and \({\varvec{s}}^*\) are optimizers in the lower bound of Theorem 3, then we need to show that the most probable path in \(\mathcal {U}_{{\varvec{t}}^*,{\varvec{s}}^*}\) is in \(\mathcal {E}_i(b)\). Furthermore, since Theorem 1 states that \(\mathcal {E}_i(b)\) is an \(\mathbb {I}\)-continuity set, it is enough to show that the most probable path in \(\mathcal {U}_{{\varvec{t}}^*,{\varvec{s}}^*}\) is in \(\overline{\mathcal {E}_i(b)}\).
Claim 2
If \(k^b_i\left( {\varvec{t}}^*,{\varvec{s}}\right) < c_i\left( {\varvec{t}}^*,{\varvec{s}}\right) \), for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\) such that \({\varvec{s}}\ne {\varvec{t}}^*-{\varvec{t}}^*_i\), then a most probable path in \(\mathcal {U}_{{\varvec{t}}^*,{\varvec{s}}^*}\) is \(f^*\in \Omega ^k\) such that
$$\begin{aligned} f^*_j(\cdot ) = \mathbb {E}\left[ \hat{A}_j(\cdot ) \,\left| \, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) = b - \left( \mu _i - \overline{\lambda }_i\right) {\varvec{t}}_i^* \right. \right] , \end{aligned}$$
for \(j\in \{1, \ldots ,k\}\).
Proof
For \(j\in \{1, \ldots ,k\}\), we have
$$\begin{aligned} f^*_j(\cdot )&= \frac{{\mathbb C}\mathrm{ov}\left( \hat{A}_j(\cdot ) ,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) }{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) } \Big [ b - \left( \mu _i - \overline{\lambda }_i\right) {\varvec{t}}_i^* \Big ]. \end{aligned}$$
Then, we can write
$$\begin{aligned} f^*(\cdot ) = \left( \sum \limits _{r\in \mathcal {P}_1(i)} \Big [ \Sigma ({\varvec{t}}^*_{r},\cdot ) - \Sigma ({\varvec{t}}^*_{r} - {\varvec{t}}_i^*,\cdot ) \Big ].e_{r_1}\Pi _r \right) \left[ \frac{b - \left( \mu _i - \overline{\lambda }_i\right) {\varvec{t}}_i^*}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) } \right] , \end{aligned}$$
and thus \(f^*\) is in the rkhs \(\mathcal {R}^k\). Then, we have
$$\begin{aligned} \mathbb {I}(f^*)&= \frac{1}{2} \langle f^*,\, f^* \rangle _{\mathcal {R}^k}\\&= \frac{1}{2} \left( \sum \limits _{r\in \mathcal {P}_1(i)} \sum \limits _{r'\in \mathcal {P}_1(i)} e_{r_1}^\top . \big [ \Sigma ({\varvec{t}}^*_r,{\varvec{t}}^*_{r'}) - \Sigma ({\varvec{t}}^*_r,{\varvec{t}}^*_{r'}-{\varvec{t}}^*_i) - \Sigma ({\varvec{t}}^*_r-{\varvec{t}}^*_i, {\varvec{t}}^*_{r'}) + \Sigma ({\varvec{t}}^*_r-{\varvec{t}}^*_i,{\varvec{t}}^*_{r'}-{\varvec{t}}^*_i) \big ]. e_{r'_1} \Pi _r\Pi _{r'} \right) \left[ \frac{b - \left( \mu _i - \overline{\lambda }_i\right) {\varvec{t}}_i^*}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) } \right] ^2 \\&= \frac{1}{2} \left( \sum \limits _{r\in \mathcal {P}_1(i)} \sum \limits _{r'\in \mathcal {P}_1(i)} \left[ {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{t}}^*_r),\,\, \hat{A}_{r'_1}({\varvec{t}}^*_{r'}) \right) - {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{t}}^*_r),\,\, \hat{A}_{r'_1}({\varvec{t}}^*_{r'}-{\varvec{t}}^*_i) \right) - {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{t}}^*_r-{\varvec{t}}^*_i),\,\, \hat{A}_{r'_1}({\varvec{t}}^*_{r'}) \right) + {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{t}}^*_r-{\varvec{t}}^*_i),\,\, \hat{A}_{r'_1}({\varvec{t}}^*_{r'}-{\varvec{t}}^*_i) \right) \right] \Pi _r\Pi _{r'} \right) \left[ \frac{b - \left( \mu _i - \overline{\lambda }_i\right) {\varvec{t}}_i^*}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) } \right] ^2 \\&= \frac{1}{2} \left[ \sum \limits _{r\in \mathcal {P}_1(i)} \sum \limits _{r'\in \mathcal {P}_1(i)} {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{t}}^*_r) - \hat{A}_{r_1}({\varvec{t}}^*_r-{\varvec{t}}^*_i),\,\, \hat{A}_{r'_1}({\varvec{t}}^*_{r'}) - \hat{A}_{r'_1}({\varvec{t}}^*_{r'}-{\varvec{t}}^*_i) \right) \Pi _r\Pi _{r'} \right] \left[ \frac{b - \left( \mu _i - \overline{\lambda }_i\right) {\varvec{t}}_i^*}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) } \right] ^2 \\&= \frac{1}{2} \mathbb {V}\text {{ar}}\left( \sum \limits _{r\in \mathcal {P}_1(i)} \left[ \hat{A}_{r_1}({\varvec{t}}^*_r) - \hat{A}_{r_1}({\varvec{t}}^*_r-{\varvec{t}}^*_i) \right] \Pi _r \right) \left[ \frac{b - \left( \mu _i - \overline{\lambda }_i\right) {\varvec{t}}_i^*}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) } \right] ^2\\&=\frac{\Big [ b - \left( \mu _i - \overline{\lambda }_i\right) {\varvec{t}}_i^* \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) }. \end{aligned}$$
Since \(k^b_i\left( {\varvec{t}}^*,{\varvec{s}}\right) < c_i\left( {\varvec{t}}^*,{\varvec{s}}\right) \) for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\) such that \({\varvec{s}}\ne {\varvec{t}}^*-{\varvec{t}}^*_i\), the expression above is equal to the lower bound in Theorem 3. It follows that \(f^*\) is a most probable path in the set \(\mathcal {U}_{{\varvec{t}}^*,{\varvec{s}}^*}\). \(\square \)
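The fact used in Claim 2, namely that the most probable path subject to a single linear constraint is the corresponding conditional expectation, has an elementary finite-dimensional analogue. The following Python sketch (illustrative only; the covariance matrix, conditioning vector, and level are arbitrary) minimizes the Gaussian energy \(\tfrac{1}{2}x^\top \Sigma ^{-1}x\) subject to a linear constraint and compares the minimizer with the conditional mean.

```python
import numpy as np
from scipy.optimize import minimize

# Finite-dimensional analogue of Claim 2 (all numbers illustrative): among the
# vectors x with c^T x = v, the minimizer of 0.5 * x^T Sigma^{-1} x is the
# conditional mean E[X | c^T X = v] = Sigma c (c^T Sigma c)^{-1} v, and the
# attained minimum is v^2 / (2 c^T Sigma c).
rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
Sigma = M @ M.T + np.eye(4)            # a generic positive-definite covariance
c = np.array([1.0, 0.5, 0.0, -1.0])    # coefficients of the conditioning functional
v = 2.0

energy = lambda x: 0.5 * x @ np.linalg.solve(Sigma, x)
res = minimize(energy, x0=np.ones(4),
               constraints=[{'type': 'eq', 'fun': lambda x: c @ x - v}])

cond_mean = Sigma @ c * (v / (c @ Sigma @ c))
print(np.allclose(res.x, cond_mean, atol=1e-3),
      res.fun, v ** 2 / (2 * c @ Sigma @ c))
```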
To complete the proof, we just need to show that \(f^*\in \overline{\mathcal {E}_i(b)}\), i.e., we need to show that there exists \({\varvec{t}}\in \overline{\mathcal {T}_i}\) such that
$$\begin{aligned} f^*_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f^*_{r_1}({\varvec{t}}_{r}) - f^*_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \ge b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}), \end{aligned}$$
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}})\). For \({\varvec{t}}={\varvec{t}}^*\), we have
$$\begin{aligned}&f^*_i(t^*_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f^*_{r_1}({\varvec{t}}^*_{r}) - f^*_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r\\&\qquad = \mathbb {E}\left[ \left. \bar{A}_i({\varvec{s}},{\varvec{t}}^*) \,\right| \, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) = b - \left( \mu _i -\overline{\lambda }_i\right) {\varvec{t}}_i^* \right] \\&\qquad = b - \left( \mu _i - \overline{\lambda }_i\right) t^*_i +\\&\qquad \quad \mathbb {E}\left[ \left. \bar{A}_i({\varvec{s}},{\varvec{t}}^*-{\varvec{t}}^*_i) \,\right| \, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) = b - \left( \mu _i -\overline{\lambda }_i\right) {\varvec{t}}_i^* \right] \\&\qquad = b - \left( \mu _i - \overline{\lambda }_i\right) t^*_i - k^b_i({\varvec{t}}^*,{\varvec{s}}). \end{aligned}$$
Finally, combining this with (30) and the fact that \(k^b_i\left( {\varvec{t}}^*,{\varvec{t}}^*-{\varvec{t}}^*_i\right) = 0 = c_i\left( {\varvec{t}}^*,{\varvec{t}}^*-{\varvec{t}}^*_i\right) \), we obtain
$$\begin{aligned}&f^*_i(t^*_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f^*_{r_1}({\varvec{t}}^*_{r}) - f^*_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r\\&\qquad = b - \left( \mu _i-\overline{\lambda }_i\right) t^*_i - k^b_i({\varvec{t}}^*,{\varvec{s}}) \ge b - \left( \mu _i-\overline{\lambda }_i\right) t^*_i - c_i({\varvec{t}}^*,{\varvec{s}}), \end{aligned}$$
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\), which concludes the proof.

Appendix E. Proof of Lemma 2

Since \({\varvec{t}}-{\varvec{t}}_i\in \mathcal {S}_i({\varvec{t}})\) for all \({\varvec{t}}\in \mathcal {T}_i\), we have
$$\begin{aligned} \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \left\{ \frac{\Big [b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{s}},{\varvec{t}}) \right) } \right\}&\ge \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }, \end{aligned}$$
for all \({\varvec{t}}\in \mathcal {T}_i\). Therefore, we have
$$\begin{aligned} \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \Big \{\mathbb {I}_i^b({\varvec{t}},{\varvec{s}})\Big \}&\ge \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) }, \end{aligned}$$
and thus
$$\begin{aligned} \inf \limits _{{\varvec{t}}\in \mathcal {T}_i}\sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \Big \{\mathbb {I}_i^b({\varvec{t}},{\varvec{s}})\Big \}&\ge \inf \limits _{{\varvec{t}}\in \mathcal {T}_i} \left\{ \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right\} = \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) \tilde{\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i\left( \tilde{\varvec{t}}- \tilde{\varvec{t}}_i,\tilde{\varvec{t}}\right) \right) }. \end{aligned}$$
(31)
On the other hand, since \(k^b_i\left( \tilde{\varvec{t}},{\varvec{s}}\right) < c_i\left( \tilde{\varvec{t}},{\varvec{s}}\right) \) for all \({\varvec{s}}\in \mathcal {S}_i(\tilde{\varvec{t}})\) such that \({\varvec{s}}\ne \tilde{\varvec{t}}-\tilde{\varvec{t}}_i\), we have
$$\begin{aligned} \mathbb {I}_i^b(\tilde{\varvec{t}},{\varvec{s}}) = \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) \tilde{\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i\left( \tilde{\varvec{t}}- \tilde{\varvec{t}}_i,\tilde{\varvec{t}}\right) \right) }, \end{aligned}$$
for all \({\varvec{s}}\in \mathcal {S}_i(\tilde{\varvec{t}})\). Combining this with (31), we get
$$\begin{aligned} \inf \limits _{{\varvec{t}}\in \mathcal {T}_i}\sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}})} \Big \{\mathbb {I}_i^b({\varvec{t}},{\varvec{s}})\Big \} = \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) \tilde{\varvec{t}}_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\left( \bar{A}_i\left( \tilde{\varvec{t}}- \tilde{\varvec{t}}_i,\tilde{\varvec{t}}\right) \right) }. \end{aligned}$$
(32)
In particular, this means that we can pick \(\tilde{\varvec{t}} = {\varvec{t}}^*\), and thus \(k^b_i({\varvec{t}}^*,{\varvec{s}}) = k^b_i(\tilde{\varvec{t}},{\varvec{s}}) < c_i(\tilde{\varvec{t}},{\varvec{s}}) = c_i({\varvec{t}}^*,{\varvec{s}}),\) for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\) such that \({\varvec{s}} \ne {\varvec{t}}^*-{\varvec{t}}^*_i\).

Appendix F. Proof of Theorem 6

As in the proof of Theorem 4, if \({\varvec{t}}^*\) and \({\varvec{s}}^*\) are optimizers in the lower bound of Theorem 3, it suffices to show that the most probable path in \(\mathcal {U}_{{\varvec{t}}^*,{\varvec{s}}^*}\) is in \(\overline{\mathcal {E}_i(b)}\).
Claim 3
If \(h^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \le c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \) and \(k^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \ge c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) ,\) then a most probable path in \(\mathcal {U}_{{\varvec{t}}^*,{\varvec{s}}^*}\) is \(f^*\in \Omega ^k\) such that
$$\begin{aligned} f^*_j(\cdot )&= \mathbb {E}\left[ \hat{A}_j(\cdot ) \,\left| \, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i^*;\,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{s}}^*) = c_i({\varvec{t}}^*,{\varvec{s}}^*) \right. \right] , \end{aligned}$$
for \(j\in \{1, \ldots ,k\}\).
Proof
Using standard properties of conditional multivariate Normal random variables, we get that
$$\begin{aligned} f^*_j(\cdot )&= \theta ^*_1 {\mathbb C}\mathrm{ov}\left( \hat{A}_j(\cdot ),\,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) + \theta ^*_2 {\mathbb C}\mathrm{ov}\left( \hat{A}_j(\cdot ),\,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{s}}^*) \right) , \end{aligned}$$
for all \(j\in \{1, \ldots ,k\}\), where
$$\begin{aligned} \theta ^*&\triangleq \begin{pmatrix} \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \right) &{} {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*),\,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{s}}^*) \right) \\ {\mathbb C}\mathrm{ov}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*),\,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{s}}^*) \right) &{} \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{s}}^*) \right) \end{pmatrix}^{-1}\\&\begin{pmatrix} b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i^* \\ c_i({\varvec{t}}^*,{\varvec{s}}^*) \end{pmatrix}. \end{aligned}$$
Then, we can write
$$\begin{aligned} f^*(\cdot )= & {} \theta ^*_1 \left[ \sum \limits _{r\in \mathcal {P}_1(i)} \Big [ \Sigma ({\varvec{t}}^*_{r},\cdot ) - \Sigma ({\varvec{t}}^*_{r} - {\varvec{t}}_i^*,\cdot ) \Big ].e_{r_1}\Pi _r \right] \\&\quad +\theta ^*_2\left[ \sum \limits _{r\in \mathcal {P}_1(i)} \Big [ \Sigma ({\varvec{s}}^*_{r},\cdot ) - \Sigma ({\varvec{t}}^*_{r} - {\varvec{t}}_i^*,\cdot ) \Big ].e_{r_1}\Pi _r \right] , \end{aligned}$$
and thus \(f^*\) is in the rkhs \(\mathcal {R}^k\). After tedious but straightforward computations we obtain
$$\begin{aligned} \mathbb {I}(f^*)&= \frac{\Big [ b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}^*_i \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) \Big )}\\&\quad + \frac{\Big [ k^b_i({\varvec{t}}^*,{\varvec{s}}^*)- c_i({\varvec{t}}^*,{\varvec{s}}^*) \Big ]^2}{2\,\mathbb {V}\text {{ar}}\Big ( \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{s}}^*) \,\Big |\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) = b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}^*_i \Big )}. \end{aligned}$$
Since \(h^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \le c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \) and \(k^b_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \ge c_i\left( {\varvec{t}}^*,{\varvec{s}}^*\right) \), the expression above equals the lower bound in Theorem 3. It follows that \(f^*\) is a most probable path in \(\mathcal {U}_{{\varvec{t}}^*,{\varvec{s}}^*}\). \(\square \)
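The two-constraint conditional mean appearing in Claim 3 can be checked in the same finite-dimensional way. In the sketch below (again with arbitrary, illustrative numbers), the vector theta plays the role of \(\theta ^*\): it solves a \(2\times 2\) linear system involving the covariance matrix of the two conditioning functionals, exactly as in the display defining \(\theta ^*\).

```python
import numpy as np

# Conditioning a centered Gaussian vector X ~ N(0, Sigma) on two linear
# functionals C X = v gives the mean Sigma C^T (C Sigma C^T)^{-1} v.
rng = np.random.default_rng(3)
M = rng.normal(size=(5, 5))
Sigma = M @ M.T + np.eye(5)
C = rng.normal(size=(2, 5))          # rows: the two conditioning functionals
v = np.array([1.0, 0.4])             # conditioned values

theta = np.linalg.solve(C @ Sigma @ C.T, v)   # analogue of theta*
cond_mean = Sigma @ C.T @ theta               # E[X | C X = v]

print(np.allclose(C @ cond_mean, v))          # the constraints are met exactly
```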
To complete the proof, we just need to show that \(f^*\in \overline{\mathcal {E}_i(b)}\), i.e., we need to show that there exists \({\varvec{t}}\in \overline{\mathcal {T}_i}\) such that
$$\begin{aligned} f^*_i({\varvec{t}}_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f^*_{r_1}({\varvec{t}}_{r}) - f^*_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r \ge b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i - c_i({\varvec{t}},{\varvec{s}}), \end{aligned}$$
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}})\). In order to simplify notation, we denote
$$\begin{aligned} \overline{\mathbb {E}}[ \,\,\cdot \,\, ]&\triangleq \mathbb {E}\left[ \,\,\cdot \,\, \left| \,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{t}}^*) = b - \left( \mu _i-\overline{\lambda }_i\right) t^*_i; \,\, \bar{A}_i({\varvec{t}}^*-{\varvec{t}}^*_i,{\varvec{s}}^*) = c_i({\varvec{t}}^*,{\varvec{s}}^*) \right. \right] . \end{aligned}$$
For \({\varvec{t}}={\varvec{t}}^*\), we have
$$\begin{aligned}&f^*_i({\varvec{t}}_i^*) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f^*_{r_1}({\varvec{t}}^*_{r}) - f^*_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r = \overline{\mathbb {E}}\left[ \hat{A}_i({\varvec{t}}^*_i) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [ \hat{A}_{r_1}({\varvec{t}}^*_{r}) - \hat{A}_{r_1}({\varvec{s}}_{r}) \Big ] \Pi _r \right] \\&= b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i^* -\overline{\mathbb {E}}\left[ \sum \limits _{r\in \mathcal {P}_2(i)} \Big [ \hat{A}_{r_1}({\varvec{s}}_{r}) - \hat{A}_{r_1}({\varvec{t}}^*_{r}-{\varvec{t}}_i^*) \Big ] \Pi _r \right] . \end{aligned}$$
Combining this with (14), we obtain
$$\begin{aligned} f^*_i({\varvec{t}}_i^*) + \sum \limits _{r\in \mathcal {P}_2(i)} \Big [f^*_{r_1}({\varvec{t}}^*_{r}) - f^*_{r_1}({\varvec{s}}_{r})\Big ] \Pi _r&\ge b - \left( \mu _i-\overline{\lambda }_i\right) {\varvec{t}}_i^* - c_i({\varvec{t}}^*,{\varvec{s}}), \end{aligned}$$
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\), which concludes the proof.

Appendix G. Proof of Theorem 7

We start with a technical lemma.
Lemma 5
There exists
$$\begin{aligned} {\varvec{t}}^* \in \underset{{\varvec{t}}\in \overline{\mathcal {T}_i}}{\arg \min } \left\{ \frac{\Big [b-\big (\mu _i - \overline{\lambda }_i\big ){\varvec{t}}_i\Big ]^2}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right\} \end{aligned}$$
(33)
such that \({\varvec{t}}^*_r={\varvec{t}}^*_i\), for all \(r\in \mathcal {P}_2(i)\).
Proof
Note that the numerator of the function being minimized in (33) only depends on \({\varvec{t}}_i\). As a result, we can focus on the structure of the maximizers of its denominator when we keep \({\varvec{t}}_i\) fixed. Using that \(\hat{A}(\cdot )\) is a time-reversible mfBm, we obtain that \(\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \) equals
$$\begin{aligned}&\sum \limits _{r\in \mathcal {P}_1(i)} \sum \limits _{r'\in \mathcal {P}_1(i)} \Pi _r \Pi _{r'} {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{t}}_{r}) - \hat{A}_{r_1}({\varvec{t}}_{r}-{\varvec{t}}_i),\,\, \hat{A}_{r'_1}({\varvec{t}}_{r'}) - \hat{A}_{r'_1}({\varvec{t}}_{r'}-{\varvec{t}}_i)\right) \\&\qquad = \sum \limits _{r\in \mathcal {P}_1(i)} \sum \limits _{r'\in \mathcal {P}_1(i)} \Pi _r \Pi _{r'} \left[ {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{t}}_{r}),\,\, \hat{A}_{r'_1}({\varvec{t}}_{r'})\right) - {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{t}}_{r}),\,\, \hat{A}_{r'_1}({\varvec{t}}_{r'}-{\varvec{t}}_i)\right) \right. \\&\qquad \qquad - \left. {\mathbb C}\mathrm{ov} \left( \hat{A}_{r_1}({\varvec{t}}_{r}-{\varvec{t}}_i),\,\, \hat{A}_{r'_1}({\varvec{t}}_{r'}) \right) \right. \\&\qquad \qquad \left. + {\mathbb C}\mathrm{ov} \left( \hat{A}_{r_1}({\varvec{t}}_{r}-{\varvec{t}}_i),\,\, \hat{A}_{r'_1}({\varvec{t}}_{r'}-{\varvec{t}}_i) \right) \right] \\&\qquad =\sum \limits _{r\in \mathcal {P}_1(i)} \sum \limits _{r'\in \mathcal {P}_1(i)} \frac{\sigma _{r'_1}\sigma _{r_1}\rho _{r_1,r'_1}}{2} \left[ \Big ( |{\varvec{t}}_{r}|^{2H} + |{\varvec{t}}_{r'}|^{2H} - |{\varvec{t}}_{r}-{\varvec{t}}_{r'}|^{2H} \Big )\right. \\&\qquad \qquad \left. - \Big ( |{\varvec{t}}_{r}|^{2H} + |{\varvec{t}}_{r'}-{\varvec{t}}_i|^{2H} - |{\varvec{t}}_{r}-{\varvec{t}}_{r'}+{\varvec{t}}_i|^{2H} \Big ) \right. \\&\qquad \qquad - \left. \Big ( |{\varvec{t}}_{r}-{\varvec{t}}_i|^{2H} + |{\varvec{t}}_{r'}|^{2H} - |{\varvec{t}}_{r}-{\varvec{t}}_i-{\varvec{t}}_{r'}|^{2H} \Big )\right. \\&\qquad \qquad \left. + \Big ( |{\varvec{t}}_{r}-{\varvec{t}}_i|^{2H} + |{\varvec{t}}_{r'}-{\varvec{t}}_i|^{2H} - |{\varvec{t}}_{r}-{\varvec{t}}_{r'}|^{2H} \Big ) \right] \Pi _r \Pi _{r'} \\&\qquad = \sum \limits _{r\in \mathcal {P}_1(i)} \sum \limits _{r'\in \mathcal {P}_1(i)} \frac{\sigma _{r'_1}\sigma _{r_1}\rho _{r_1,r'_1}}{2} \Big [ \Big ( |{\varvec{t}}_{r}-{\varvec{t}}_{r'}+{\varvec{t}}_i|^{2H} + |{\varvec{t}}_{r}-{\varvec{t}}_i-{\varvec{t}}_{r'}|^{2H} - 2 |{\varvec{t}}_{r}-{\varvec{t}}_{r'}|^{2H} \Big ) \Big ]\\&\qquad \qquad \Pi _r \Pi _{r'}. \end{aligned}$$
Taking the derivative with respect to \({\varvec{t}}_{r}\), and using that \({\varvec{t}}_{r}\le {\varvec{t}}_i \le 0\) for all \({\varvec{t}}\in \overline{\mathcal {T}_i}\), we obtain
$$\begin{aligned} \frac{\partial }{\partial {\varvec{t}}_{r}} \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right)&= \sum \limits _{r'\in \mathcal {P}_1(i)} \sigma _{r'_1}\sigma _{r_1}\rho _{r_1,r'_1}H \Big [ \mathrm{sign}({\varvec{t}}_{r}-{\varvec{t}}_{r'}+{\varvec{t}}_i) |{\varvec{t}}_{r}-{\varvec{t}}_{r'}+{\varvec{t}}_i|^{2H-1} \\&\qquad + \mathrm{sign}({\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i) |{\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i|^{2H-1} \\&\qquad - 2 \mathrm{sign}({\varvec{t}}_{r}-{\varvec{t}}_{r'}) |{\varvec{t}}_{r}-{\varvec{t}}_{r'}|^{2H-1} \Big ] \Pi _r \Pi _{r'}. \end{aligned}$$
Moreover, for all \({\varvec{t}}_{r}\le \min \{ {\varvec{t}}_{r'} : r'\in \mathcal {P}_1(i),\,\, r'\ne r \}\), we have
$$\begin{aligned} \frac{\partial }{\partial {\varvec{t}}_{r}} \mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right)&= \sum \limits _{r'\in \mathcal {P}_1(i),\, r'\ne r} \sigma _{r'_1}\sigma _{r_1}\rho _{r_1,r'_1}H \Big [ - ({\varvec{t}}_{r'}-{\varvec{t}}_{r}-{\varvec{t}}_i)^{2H-1}\nonumber \\&\qquad + \mathrm{sign}({\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i) |{\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i|^{2H-1}\nonumber \\&\qquad + 2 ({\varvec{t}}_{r'}-{\varvec{t}}_{r})^{2H-1} \Big ] \Pi _r \Pi _{r'}. \end{aligned}$$
(34)
If \({\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i\le 0\), we have
$$\begin{aligned}&- ({\varvec{t}}_{r'}-{\varvec{t}}_{r}-{\varvec{t}}_i)^{2H-1} + \mathrm{sign}({\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i) |{\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i|^{2H-1} + 2 ({\varvec{t}}_{r'}-{\varvec{t}}_{r})^{2H-1} \nonumber \\&\quad = - ({\varvec{t}}_{r'}-{\varvec{t}}_{r}-{\varvec{t}}_i)^{2H-1} - ({\varvec{t}}_{r'}-{\varvec{t}}_{r}+{\varvec{t}}_i)^{2H-1}\nonumber \\&\qquad + 2 ({\varvec{t}}_{r'}-{\varvec{t}}_{r})^{2H-1} \ge 0, \end{aligned}$$
(35)
where in the last inequality we used that \(H\ge 1/2\). On the other hand, if \({\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i> 0\), we have
$$\begin{aligned}&- |{\varvec{t}}_{r}-{\varvec{t}}_{r'}+{\varvec{t}}_i|^{2H-1} + \mathrm{sign}({\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i) |{\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i|^{2H-1} + 2 |{\varvec{t}}_{r}-{\varvec{t}}_{r'}|^{2H-1} \nonumber \\&\quad = - ({\varvec{t}}_{r'}-{\varvec{t}}_{r}-{\varvec{t}}_i)^{2H-1} + ({\varvec{t}}_{r}-{\varvec{t}}_{r'}-{\varvec{t}}_i)^{2H-1}\nonumber \\&\qquad + 2 ({\varvec{t}}_{r'}-{\varvec{t}}_{r})^{2H-1} \ge 0, \end{aligned}$$
(36)
where in the last inequality we used that \(H\ge 1/2\). Combining (34), (35), and (36) with \(\rho _{r_1,r'_1}\ge 0\), for all \(r,r'\in \mathcal {P}_1(i)\), it follows that \(\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) \) is maximized when \({\varvec{t}}_{r}={\varvec{t}}_i\), for all \(r\in \mathcal {P}_2(i)\). \(\square \)
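The sign argument in (35) and (36) boils down to the concavity of \(x\mapsto x^{2H-1}\) on \([0,\infty )\) for \(H\in [1/2,1)\). The following Python sketch (a spot-check on random inputs, not a proof) verifies the underlying inequality numerically.

```python
import numpy as np

# Concavity inequality behind (35)-(36): for H in [1/2, 1), alpha = 2H - 1 lies
# in [0, 1], so (c + delta)^alpha + (c - delta)^alpha <= 2 * c^alpha whenever
# 0 <= delta <= c.
rng = np.random.default_rng(1)
for _ in range(10_000):
    H = rng.uniform(0.5, 1.0)
    c = rng.uniform(0.0, 10.0)
    delta = rng.uniform(0.0, c)
    alpha = 2 * H - 1
    assert (c + delta) ** alpha + (c - delta) ** alpha <= 2 * c ** alpha + 1e-9
print("inequality holds on all sampled points")
```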
Lemma 5 implies that we can pick
$$\begin{aligned} {\varvec{t}}^* \in \underset{{\varvec{t}}\in \overline{\mathcal {T}_i}}{\arg \min } \left\{ \frac{\Big [b-\big (\mu _i - \overline{\lambda }_i\big ){\varvec{t}}_i\Big ]^2}{\mathbb {V}\text {{ar}}\left( \bar{A}_i({\varvec{t}}-{\varvec{t}}_i,{\varvec{t}}) \right) } \right\} \end{aligned}$$
such that \({\varvec{t}}^*_r={\varvec{t}}^*_i\), for all \(r\in \mathcal {P}_2(i)\). In that case, we have
$$\begin{aligned} {\varvec{t}}^*_i \in \underset{{\varvec{t}}_i\le 0}{\arg \min } \left\{ \frac{\Big [ b-\big (\mu _i - \overline{\lambda }_i\big ){\varvec{t}}_i \Big ]^2}{\mathbb {V}\text {{ar}}\left( \sum \limits _{r\in \mathcal {P}_1(i)} \hat{A}_{r_1}({\varvec{t}}_i) \Pi _r \right) } \right\} . \end{aligned}$$
An elementary computation yields that
$$\begin{aligned} {\varvec{t}}_i^* = -\left( \frac{b}{\mu _i-\overline{\lambda }_i} \right) \left( \frac{H}{1-H} \right) . \end{aligned}$$
(37)
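For completeness, here is a brief sketch of this computation. By the covariance structure used above, \(\mathbb {V}\text {{ar}}\left( \sum \limits _{r\in \mathcal {P}_1(i)} \hat{A}_{r_1}({\varvec{t}}_i) \Pi _r \right) = C |{\varvec{t}}_i|^{2H}\) for a constant \(C>0\) that does not depend on \({\varvec{t}}_i\). Writing \(\tau \triangleq -{\varvec{t}}_i\ge 0\), the objective thus reduces to
$$\begin{aligned} \tau \mapsto \frac{\Big [ b+\big (\mu _i - \overline{\lambda }_i\big )\tau \Big ]^2}{C\, \tau ^{2H}}, \end{aligned}$$
which diverges as \(\tau \downarrow 0\) and as \(\tau \rightarrow \infty \). Setting the derivative of its logarithm equal to zero yields
$$\begin{aligned} \frac{2\big (\mu _i - \overline{\lambda }_i\big )}{b+\big (\mu _i - \overline{\lambda }_i\big )\tau } = \frac{2H}{\tau } \quad \Longleftrightarrow \quad (1-H)\big (\mu _i - \overline{\lambda }_i\big )\tau = Hb, \end{aligned}$$
whose unique solution is \(\tau = \big (b/(\mu _i - \overline{\lambda }_i)\big )\big (H/(1-H)\big )\), in agreement with (37).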
Using this, the condition in Lemma 2 is
$$\begin{aligned}&\frac{{\mathbb C}\mathrm{ov}\left( \sum \limits _{r\in \mathcal {P}_2(i)} \hat{A}_{r_1}({\varvec{s}}_{r}) \Pi _r ,\,\, \hat{A}_i({\varvec{t}}_i^*) + \sum \limits _{r\in \mathcal {P}_2(i)} \hat{A}_{r_1}({\varvec{t}}_i^*) \Pi _r \right) }{\mathbb {V}\text {{ar}}\left( \hat{A}_i({\varvec{t}}_i^*) + \sum \limits _{r\in \mathcal {P}_2(i)} \hat{A}_{r_1}({\varvec{t}}_i^*) \Pi _r \right) }\Big [b-\big (\mu _i-\overline{\lambda }_i\big ) {\varvec{t}}_i^*\Big ] \nonumber \\&\qquad < \sum \limits _{r\in \mathcal {P}_2(i)} \left( \mu _{r_1}-\lambda _{r_1}-\sum \limits _{j\in \mathcal {N}_\mathrm{in}(r_1)} \mu _j p_{j,r_1} \right) (-{\varvec{s}}_{r}) \Pi _r , \end{aligned}$$
(38)
for all \({\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)\) such that \({\varvec{s}}\ne {\varvec{t}}^*-{\varvec{t}}^*_i\). Then, since \({\varvec{t}}^*-{\varvec{t}}^*_i \notin \mathcal {S}_i({\varvec{t}}^*)\), a sufficient condition for (38) to hold is that
$$\begin{aligned}&\min \left\{ \mu _j - \lambda _j -\sum \limits _{l\in \mathcal {N}_\mathrm{in}(j)} \mu _l p_{l,j} : j\ne i \right\} > \\&\qquad \sup \limits _{{\varvec{s}}\in \mathcal {S}_i({\varvec{t}}^*)} \left\{ \frac{{\mathbb C}\mathrm{ov}\left( \sum \limits _{r\in \mathcal {P}_2(i)} \hat{A}_{r_1}({\varvec{s}}_{r}) \Pi _r ,\,\, \hat{A}_i({\varvec{t}}_i^*) + \sum \limits _{r\in \mathcal {P}_2(i)} \hat{A}_{r_1}({\varvec{t}}_i^*) \Pi _r \right) }{\mathbb {V}\text {{ar}}\left( \hat{A}_i({\varvec{t}}_i^*) + \sum \limits _{r\in \mathcal {P}_2(i)} \hat{A}_{r_1}({\varvec{t}}_i^*) \Pi _r \right) \left( \sum \limits _{r\in \mathcal {P}_2(i)} -{\varvec{s}}_{r} \Pi _r \right) }\right. \\&\qquad \left. \Big [b-\big (\mu _i-\overline{\lambda }_i\big ) {\varvec{t}}_i^*\Big ] \right\} . \end{aligned}$$
Substituting (37) in the equation above, we obtain, with \(b_H\triangleq b/(1-H)\),
$$\begin{aligned}&\frac{{\mathbb C}\mathrm{ov}\left( \sum \limits _{r\in \mathcal {P}_2(i)} \hat{A}_{r_1}({\varvec{s}}_{r}) \Pi _r ,\,\, \hat{A}_i({\varvec{t}}_i^*) + \sum \limits _{r\in \mathcal {P}_2(i)} \hat{A}_{r_1}({\varvec{t}}_i^*) \Pi _r \right) }{\mathbb {V}\text {{ar}}\left( \hat{A}_i({\varvec{t}}_i^*) + \sum \limits _{r\in \mathcal {P}_2(i)} \hat{A}_{r_1}({\varvec{t}}_i^*) \Pi _r \right) \left( \sum \limits _{r\in \mathcal {P}_2(i)} -{\varvec{s}}_{r} \Pi _r \right) }\Big [b-\big (\mu _i-\overline{\lambda }_i\big ) {\varvec{t}}_i^*\Big ] \\&\quad =\frac{b_H\cdot \sum \limits _{r\in \mathcal {P}_2(i)} \left[ {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{s}}_{r}),\,\, \hat{A}_i({\varvec{t}}_i^*) \right) + \sum \limits _{r'\in \mathcal {P}_2(i)} {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{s}}_{r}), \hat{A}_{r'_1}({\varvec{t}}_i^*) \right) \Pi _{r'}\right] \Pi _r}{\left( \mathbb {V}\text {{ar}}\left( \hat{A}_i({\varvec{t}}_i^*) \right) + \sum \limits _{r\in \mathcal {P}_2(i)} \left[ 2 {\mathbb C}\mathrm{ov}\left( \hat{A}_i({\varvec{t}}_i^*),\,\, \hat{A}_{r_1}({\varvec{t}}_i^*) \right) + \sum \limits _{r'\in \mathcal {P}_2(i)} {\mathbb C}\mathrm{ov}\left( \hat{A}_{r_1}({\varvec{t}}_i^*),\,\, \hat{A}_{r'_1}({\varvec{t}}_i^*) \right) \Pi _{r'} \right] \Pi _r \right) \left( \sum \limits _{r\in \mathcal {P}_2(i)} -{\varvec{s}}_{r} \Pi _r \right) } \\&\quad =\frac{ b_H\cdot \sum \limits _{r\in \mathcal {P}_2(i)} \left( \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \Pi _r \Big (|{\varvec{s}}_{r}|^{2H}+|{\varvec{t}}_i^*|^{2H}-|{\varvec{t}}_i^*-{\varvec{s}}_{r}|^{2H}\Big )}{2|{\varvec{t}}_i^*|^{2H} \left[ \sigma _i^2 + \sum \limits _{r\in \mathcal {P}_2(i)} \left( 2 \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \Pi _r \right] \left( \sum \limits _{r\in \mathcal {P}_2(i)} -{\varvec{s}}_{r} \Pi _r \right) } \\&\quad =\frac{ \sum \limits _{r\in \mathcal {P}_2(i)} \left( \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \Pi _r \left( \left| \frac{{\varvec{s}}_{r}}{{\varvec{t}}_i^*}\right| ^{2H}+ 1 - \left| 1- \frac{{\varvec{s}}_{r}}{{\varvec{t}}_i^*}\right| ^{2H}\right) }{2 \left[ \sigma _i^2 + \sum \limits _{r\in \mathcal {P}_2(i)} \left( 2 \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \Pi _r \right] \left( \sum \limits _{r\in \mathcal {P}_2(i)} \frac{{\varvec{s}}_{r}}{{\varvec{t}}_i^*} \Pi _r \right) } \left( \frac{\mu _i-\overline{\lambda }_i}{H}\right) . \end{aligned}$$
Then, a sufficient condition for (38) to hold is that
$$\begin{aligned}&\min \left\{ \mu _j - \lambda _j -\sum \limits _{l\in \mathcal {N}_\mathrm{in}(j)} \mu _l p_{l,j} : j\ne i \right\} > \\&\sup \limits _{\alpha \in (0,1)^{|\mathcal {P}_2(i)|}} \left\{ \frac{ \sum \limits _{r\in \mathcal {P}_2(i)} \left( \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \Pi _r \left( \left( \alpha _r\right) ^{2H}+ 1 - \left( 1- \alpha _r\right) ^{2H}\right) }{ \left[ \sigma _i^2 + \sum \limits _{r\in \mathcal {P}_2(i)} \left( 2 \sigma _{r_1}\sigma _i\rho _{r_1,i} + \sum \limits _{r'\in \mathcal {P}_2(i)} \sigma _{r_1}\sigma _{r'_1}\rho _{r_1,r'_1} \Pi _{r'}\right) \Pi _r \right] \left( \sum \limits _{r\in \mathcal {P}_2(i)} \alpha _r \Pi _r \right) } \left( \frac{\mu _i-\overline{\lambda }_i}{2H}\right) \right\} . \end{aligned}$$
Lemma 2 and Theorem 4 finish the proof. \(\square \)
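To illustrate how this final sufficient condition can be checked in practice, the following minimal Python sketch (not from the paper) evaluates the supremum on its right-hand side by a grid search over \(\alpha \), specialized for simplicity to a single path \(r\in \mathcal {P}_2(i)\); all parameter values are illustrative assumptions.

import numpy as np

# Illustrative (assumed) parameters for a single indirect path r into node i.
H = 0.7                      # Hurst parameter, H >= 1/2
sigma_i, sigma_r = 1.0, 1.2  # standard deviations of the inputs to node i and to node r_1
rho_ri = 0.3                 # nonnegative correlation between the inputs to r_1 and to i
Pi_r = 0.5                   # product of routing fractions along the path r
mu_i, lam_bar_i = 3.0, 2.5   # service rate and total mean input rate at node i

# Constants multiplying the alpha-dependent factors in the numerator and denominator.
num_const = (sigma_r * sigma_i * rho_ri + sigma_r**2 * Pi_r) * Pi_r
den_const = sigma_i**2 + (2 * sigma_r * sigma_i * rho_ri + sigma_r**2 * Pi_r) * Pi_r

# Grid search for the supremum over alpha in (0, 1).
alpha = np.linspace(1e-4, 1 - 1e-4, 100_000)
ratio = num_const * (alpha**(2 * H) + 1 - (1 - alpha)**(2 * H)) / (den_const * alpha * Pi_r)
rhs = ratio.max() * (mu_i - lam_bar_i) / (2 * H)

print("right-hand side of the sufficient condition:", round(rhs, 4))
# The condition holds if min over j != i of (mu_j - lambda_j - sum_l mu_l p_{l,j}) exceeds rhs.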