
Open Access 09-03-2020

Waiting time and queue length analysis of Markov-modulated fluid priority queues

Author: Gábor Horváth

Published in: Queueing Systems | Issue 1-2/2020

Abstract

This paper considers a multi-type fluid queue with priority service. The input fluid rates are modulated by a Markov chain, which is common for all fluid types. The service rate of the queue is constant. Various performance measures are derived, including the Laplace–Stieltjes transform and the moments of the stationary waiting time of the fluid drops and the queue length distributions. An Erlangization-based numerical method is also provided to approximate the waiting time and the queue length distributions up to arbitrary precision. All performance measures are formulated as reward accumulation problems during busy periods of simple Markovian fluid flow models, for which efficient matrix-analytic solutions are provided, enabling us to solve large models with several hundred states.
Notes
This work is supported by the OTKA K-123914 Project and by the ÚNKP-17-4-III New National Excellence Program of the Ministry of Human Capacities.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Multi-class queueing systems have been used for a long time to analyze systems where different kinds of jobs exist. Among the multi-type service policies, priority-based ones are the most popular choice. In such systems, a high priority is assigned to the most important jobs, and the server always picks the highest priority job present in the system. Jobs belonging to the same class are served in order of arrival. This conceptual simplicity makes priority systems easy to implement both in hardware and in software.
Queueing systems with priority service have been studied for a very long time. The first results, for priority queues with exponentially distributed inter-arrival and service times, were obtained in [1] more than 50 years ago. Nowadays, priority systems can be analyzed efficiently with complex arrival processes (like marked Markovian arrival processes) and more general service time distributions (see [2] or [3]).
There are queueing applications where the demands are easier to model by continuous fluid flows rather than discrete jobs. In these systems, the jobs are considered to be infinitesimally small, like fluid drops, and the queue storing the jobs is a fluid storage. The analysis of continuous (fluid based) queues has a long history as well; however, for some reason, the results for continuous queues have always been published years after their discrete counterparts. For instance, the matrix-analytic methods enabling the numerically efficient analysis of many discrete-state (non-fluid) queues have been adapted to continuous queues only in 1999 (see [4]).
Not many results exist for fluid queues with multi-type jobs. Among the existing results, the fluid queues with priority service have been studied the most, due to their practical relevance in the performance evaluation of various telecommunication systems. The authors in [5] and [6] assume simple on–off modulated rate for the high priority demands and constant fluid rate for the low priority demands, and obtain the Laplace–Stieltjes transform (LST) of the joint queue length distribution for this two-class system. The procedure proposed in [7] assumes simple on–off sources with exponentially distributed on and off periods for both the high and the low priority traffic. The novelty of that approach is that the authors could avoid the characterization of the joint queue length distribution. The idea is to create a simplified process to obtain the marginal distribution of the buffer content. In the simplified low priority queue length process, however, there are non-Markovian periods, representing the periods when the high priority class is busy. In the solution proposed in this paper, a somewhat similar reduction is introduced, too, but the reduction results in a completely Markovian system. The assumptions on the input process in the aforementioned papers, however, can be too restrictive for practical applications.
In this paper, a more general system is considered, where the fluid input rates of the arrival process are modulated by an arbitrary Markov chain. The same model has been studied in [8], where the partial differential equations for the joint distribution of the queue lengths are derived. However, these differential equations, especially the boundary functions, are difficult to solve. [8] is able to provide the mean, the variance and the covariance of the queue lengths only in the case of two-state Markov chains. In [9], the LST of the stationary joint distribution of the buffer contents is obtained in a closed form based on a spectral decomposition approach, from which the tail distributions and the queue length moments are derived. Using Little’s law, the mean waiting times are also presented. [10] follows a different approach to obtain the LST of the joint distribution, based on the analysis of the idle and busy periods of the high priority queue.
The (matrix-analytic) approach presented in this paper is based on the workload process analysis, just like in [3] for the discrete case. While the main steps of the analysis are the same, adapting the method of [3] to continuous queues is not straightforward at all. The specialties of continuous systems require different solutions in many steps of the procedure. Additionally, several ingredients of the procedure, which have been available for discrete systems for a long time (like the relation between the queue length distributions at departure instants and at random points in time) are not yet available for the continuous case.
The drawback of the presented approach is that the joint behavior of the queues cannot be characterized this way. (Note that the characterization of the joint queue length was the starting point of most related past papers.)
The advantage of the approach presented in this paper is that it can provide the LST and the moments for both the individual queue lengths and the waiting times of fluid drops in a nice and elegant form. (Note that the waiting time of fluid drops in a fluid priority queue has never been considered in the past, apart from the expected value.) Furthermore, numerical methods for the waiting time and the queue length distributions based on Erlangization are also provided, which are created with probabilistic interpretations, and have better numerical behavior than a generic inverse Laplace transform algorithm.
It is particularly important for us to develop algorithms that behave well numerically, even for large models. All performance measures are formulated as reward accumulation problems during busy periods of simple Markovian fluid flow models, for which matrix-analytic solutions are provided. The computation bottlenecks are the solutions of Riccati- and Sylvester-type equations, for which efficient implementations exist. As a result, the presented procedure can solve large models up to many hundreds of phases, while the past solutions mentioned above are less tractable and cannot go beyond a couple of tens of states (except for [9], where an efficient solution is provided if the background Markov chain is constructed by a Kronecker composition).
The rest of the paper is organized as follows: Section 2 describes the system, introduces the notation and lists the main steps of the computation of the performance measures. Markov-modulated fluid flow models are considered in Sect. 3. The first part of this section gives an overview of known results. In the second part of this section, the reward accumulated during the busy period is characterized. The waiting time of the fluid drops is derived in Sect. 4. The queue length distribution of a given fluid class is studied in Sect. 5. Finally, some numerical examples conclude the paper in Sect. 6.

2 The Markov-modulated fluid priority queue and the concept of the solution

2.1 The description of the system

In the system considered in the paper, K different (fluid) job types are distinguished, where class 1 has the lowest, and class K the highest priority. The priority of the fluid can be interpreted as “weight,” and can correspond to the importance or the urgency of the jobs.
The rates at which the fluid belonging to various job types enters the system are modulated by a continuous-time Markov chain (CTMC, also referred to as the background process) with generator denoted by \(\mathbf {Q}\). (The irreducibility of the background process is assumed throughout the paper.) The stationary probability vector of the CTMC is \(\pi \); hence, \(\pi \mathbf {Q}=0,\pi \mathbb {1}=1\) hold. (\(\mathbb {1}\) denotes the column vector of ones of appropriate size.) The rate at which class k fluid flows into the queue in state i is \(r_i^{(k)}\ge 0\). The matrix \(\mathbf {R}^{(k)}\) is a diagonal matrix composed of the class k incoming fluid rates; \(\mathbf {R}^{(k)}=\text {diag}\langle r_i^{(k)} \rangle \). For simplicity, we denote the input rates of classes having priority equal to or higher than k by \(r_i^{(k+)}=\sum _{\ell =k}^K r_i^{(\ell )}\), and the corresponding matrix by \(\mathbf {R}^{(k+)}=\text {diag}\langle r_i^{(k+)} \rangle \). The rate at which fluid can leave the queue is denoted by d.
Using the notation introduced above, the mean arrival rate of class k fluid can be obtained by \(\lambda ^{(k)}=\pi \mathbf {R}^{(k)}\mathbb {1}\), and the total input rate is \(\lambda =\sum _{k=1}^K \lambda ^{(k)}\). The system is assumed to be stable; hence, \(\lambda < d\).
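For concreteness, the following small sketch (with a hypothetical two-state background process and two fluid classes that are not taken from the paper) builds the rate matrices, computes the stationary vector \(\pi \), the mean rates \(\lambda ^{(k)}=\pi \mathbf {R}^{(k)}\mathbb {1}\) and checks the stability condition \(\lambda <d\).

```python
import numpy as np

# Hypothetical 2-state background process with K = 2 fluid classes.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])          # generator of the modulating CTMC
R = [np.diag([0.3, 1.0]),             # class 1 (low priority) input rates
     np.diag([0.2, 0.8])]             # class 2 (high priority) input rates
d = 1.5                                # constant service rate

# Stationary vector: pi Q = 0, pi 1 = 1 (solved as an augmented linear system).
n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

lam = [pi @ Rk @ np.ones(n) for Rk in R]   # lambda^(k) = pi R^(k) 1
print("class loads:", lam, " total:", sum(lam), " stable:", sum(lam) < d)

# R_kplus[k-1] is R^(k+): aggregate rate matrix of classes with priority >= k.
R_kplus = [sum(R[j:], np.zeros((n, n))) for j in range(len(R))]
```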
In those states of the system where the fluid arrival rate is greater than the service rate available, the fluid is directed into a buffer. The buffer space is assumed to be infinite. Due to the priority service, lower priority fluid cannot be drained when any of the higher priority queues contain fluid. When the higher priority queues are empty and the input fluid rates of the higher priority classes are less than d, the lower priority queues are drained with the remaining service capacity. (Hence, the system is work conserving.)

2.2 Concept of the solution

In this paper, all performance measures will be derived from the analysis of the workload process. The workload, also called the unfinished work, or the virtual waiting time, is the amount of work present in the system, not in fluid level, but in time [11]. The fluid levels of different priorities are correlated; their joint analysis is rather complex. The workload process, however, makes it possible to obtain several performance measures from a Markov chain having only a single continuous variable, which is much more tractable numerically.
The solution is based on the “tagged customer” approach. When a class k fluid drop arrives into the queue, it finds a certain amount of class \(k+\) workload in the system. This amount of workload has to be served before the tagged fluid drop can leave the system. The workload of lower priority classes can be neglected. While the tagged fluid drop waits for its service in the queue, further class \((k+1)+\) fluid can arrive, which has to be served before the tagged one. After a certain amount of time, the tagged fluid drop reaches the head of the queue and can leave the system.
  • The waiting time of the tagged fluid drop is the workload found in the system at arrival, increased by the higher priority workload brought to the system while waiting.
  • The class k queue length at the fluid drop departure instant is the amount of class k fluid arriving while the tagged fluid drop waits in the system.
It is shown in Sect. 4.1 that the workload process of a fluid priority queue can be characterized by a Markovian fluid flow model. In Sect. 4.2, we are going to show that the time until the workload ahead of the tagged fluid drop is processed by the system is equal to the busy period duration of a special Markovian fluid flow model defined appropriately. Finally, the queue length analysis problem is translated to the analysis of the amount of reward accumulated over the busy period of this special Markovian fluid flow model.
Since each step of the solution relies on Markovian fluid flow models, the following section gives an overview of their stationary distribution and the behavior of the busy period.

3 Markovian fluid flow models

In this section, we recap the definition of the “standard” Markovian fluid flow models (without priorities) and those properties that will be necessary for the performance analysis of the fluid priority queue. The main purpose of this section is to introduce the notation and to review the related matrix-analytic solution methodology. The second part of this section presents some new contributions as well, related to the busy period analysis.
Markovian fluid flow models (or, in short, fluid models) consist of a continuous-time background Markov chain and a fluid storage (buffer). The rate at which fluid flows into or out from the buffer depends on the state of the background Markov chain. The buffer has a lower boundary (i.e., the minimal fluid level is 0), and its capacity is considered to be infinite.
Formally, fluid models are two-dimensional Markov processes \(\{{\mathcal {A}}(t),{\mathcal {J}}(t),t>0\}\), where \({\mathcal {A}}(t)\) is the fluid level and \({\mathcal {J}}(t)\) is the state of a CTMC (the background process) at time t.
The Markov chain is assumed to be irreducible with state space \({\mathcal {S}}\); the number of states is \(N=|{\mathcal {S}}|\). Its generator matrix is denoted by \(\mathbf {F}=[f_{ij},~i,j\in {\mathcal {S}}]\); the stationary distribution vector is \(\vartheta \), which satisfies \(\vartheta \mathbf {F}=0,\vartheta \mathbb {1}=1\). A fluid rate is associated with each state of the background process, \(c_i, i\in {\mathcal {S}}\), that determines the rate at which the level of the fluid buffer changes.
Since the analysis of the fluid priority queue involves the solution of various fluid flow models, we use different notation to make it easier to distinguish between the two systems. The generator matrix and the fluid input rates of the fluid priority queue are characterized by the matrices \(\mathbf {Q}\) and \(\mathbf {R}\), while the generator and the fluid rates of fluid flow models are denoted by \(\mathbf {F}\) and \(\mathbf {C}\) throughout the paper.
The evolution of the fluid level \({\mathcal {A}}(t)\) can be described by
$$\begin{aligned} \frac{\hbox {d}}{\hbox {d}t} {\mathcal {A}}(t) = {\left\{ \begin{array}{ll} c_{{\mathcal {J}}(t)}, &{} \text {if } {\mathcal {A}}(t) > 0, \\ \max \{0,c_{{\mathcal {J}}(t)}\}, &{} \text {if }{\mathcal {A}}(t)=0. \end{array}\right. } \end{aligned}$$
(1)
The size N row vector \(\pi (x)\) denotes the stationary density of the fluid level, whose ith entry is defined by \(\pi _i(x)=\lim _{t\rightarrow \infty }\frac{\hbox {d}}{\hbox {d}x}P({\mathcal {A}}(t)<x,{\mathcal {J}}(t)=i)\). At level 0, probability mass can accumulate as well. The ith entry of the row vector p represents the probability mass at level 0, \(p_i=\lim _{t\rightarrow \infty } P({\mathcal {A}}(t)=0,{\mathcal {J}}(t)=i)\). The vector \(\pi (x)\) is the solution of the matrix differential equation [12]
$$\begin{aligned} \frac{\hbox {d}}{\hbox {d}x}\pi (x)\mathbf {C} = \pi (x) \mathbf {F}, \end{aligned}$$
(2)
where \(\mathbf {C}\) is a diagonal matrix composed of the fluid rates \(c_i\). The boundary equations providing \(\pi (0)\) and p are
$$\begin{aligned} \begin{aligned} \pi (0)\mathbf {C}&= p\,\mathbf {F}, \\ p_i&= 0, ~~~\forall i: c_i>0. \end{aligned} \end{aligned}$$
(3)

3.1 The stationary distribution of the fluid level

Several numerical methods are available to solve (2) and (3), including the eigenvalue decomposition-based method [12], the Schur decomposition-based method [13] and the matrix-analytic method [4]. The subsequent sections of this paper rely heavily on the matrix-analytic solution of fluid models; thus, the rest of the section provides a short summary of this approach.
For the matrix-analytic method, the state space of the Markov chain has to be partitioned into three sets, \({\mathcal {S}}={\mathcal {S}}_+\cup {\mathcal {S}}_-\cup {\mathcal {S}}_0\), according to the sign of the fluid rates, i.e., \(\mathcal {S_+}=\{i\in {\mathcal {S}},c_i>0\}\), \(\mathcal {S_-}=\{i\in {\mathcal {S}},c_i<0\}\) and \({\mathcal {S}}_0=\{i\in {\mathcal {S}},c_i=0\}\). From now on, it is assumed that the matrices \(\mathbf {F}\) and \(\mathbf {C}\) are partitioned, and thus,
$$\begin{aligned} \mathbf {F}=\begin{bmatrix} \mathbf {F}_{++} &{} \mathbf {F}_{+-} &{} \mathbf {F}_{+0} \\ \mathbf {F}_{-+} &{} \mathbf {F}_{--} &{} \mathbf {F}_{-0} \\ \mathbf {F}_{0+} &{} \mathbf {F}_{0-} &{} \mathbf {F}_{00} \\ \end{bmatrix}, ~~~ \mathbf {C}=\begin{bmatrix} \mathbf {C}_+ \\ &{} \mathbf {C}_- \\ &{} &{} \mathbf {0} \\ \end{bmatrix}, \end{aligned}$$
(4)
where all diagonal entries of \(\mathbf {C}_+ = \text {diag}\langle c_i \rangle _{i\in {\mathcal {S}}_+}\) are positive and all diagonal entries of \(\mathbf {C}_- = \text {diag}\langle c_i \rangle _{i\in {\mathcal {S}}_-}\) are negative. Furthermore, let us introduce the matrix \(\mathbf {F}^\bullet \) as the generator of \({\mathcal {J}}(t)\) restricted to nonzero states, and hence,
$$\begin{aligned} \mathbf {F^\bullet } = \begin{bmatrix} \mathbf {F^\bullet _{++}} &{} \mathbf {F^\bullet _{+-}} \\ \mathbf {F^\bullet _{-+}} &{} \mathbf {F^\bullet _{--}} \\ \end{bmatrix} = \begin{bmatrix} \mathbf {F_{++}} &{} \mathbf {F_{+-}} \\ \mathbf {F_{-+}} &{} \mathbf {F_{--}} \\ \end{bmatrix}+\begin{bmatrix} \mathbf {F_{+0}} \\ \mathbf {F_{-0}}\end{bmatrix}(-\mathbf {F_{00}})^{-1}\begin{bmatrix} \mathbf {F_{0+}}&\mathbf {F_{0-}} \end{bmatrix}. \end{aligned}$$
(5)
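As an illustration, the matrix \(\mathbf {F}^\bullet \) of (5) can be assembled directly from the partitioned blocks of \(\mathbf {F}\); the following helper is a minimal sketch (the block matrices are assumed to be given as NumPy arrays).

```python
import numpy as np

def restricted_generator(Fpp, Fpm, Fp0, Fmp, Fmm, Fm0, F0p, F0m, F00):
    """Return F_bullet of Eq. (5): the generator of J(t) restricted to the
    states with nonzero fluid rate, written in (+, -) block form."""
    corr = np.vstack([Fp0, Fm0]) @ np.linalg.solve(-F00,
                                                   np.hstack([F0p, F0m]))
    return np.vstack([np.hstack([Fpp, Fpm]),
                      np.hstack([Fmp, Fmm])]) + corr
```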
The stationary solution of the differential equation (2) and (3) has a matrix-exponential form, as proven in [4]. If the drift of the queue is negative, \(\vartheta \mathbf {C}\mathbb {1}<0\), the density function of the stationary fluid level can be expressed as:
$$\begin{aligned} \pi (x) = \kappa \,e^{\mathbf {K}x}\underbrace{\begin{bmatrix} \mathbf {I}&\varvec{\Psi } \end{bmatrix}\begin{bmatrix} \mathbf {C}_+ \\ &{} |\mathbf {C}_-| \end{bmatrix}^{-1} \begin{bmatrix} \mathbf {I}&{} \mathbf {0} &{} \mathbf {F}_{+0}(-\mathbf {F}_{00})^{-1} \\ \mathbf {0} &{} \mathbf {I}&{} \mathbf {F}_{-0}(-\mathbf {F}_{00})^{-1} \\ \end{bmatrix}}_{\mathbf {A}}, \end{aligned}$$
(6)
where the matrix \(\varvec{\Psi }\) is the minimal nonnegative solution of the non-symmetric algebraic Riccati equation (NARE)
$$\begin{aligned} \varvec{\Psi } |\mathbf {C_-}|^{-1}\mathbf {F^\bullet _{-+}} \varvec{\Psi } + \varvec{\Psi } |\mathbf {C_-}|^{-1}\mathbf {F^\bullet _{--}} + \mathbf {C_+}^{-1}\mathbf {F^\bullet _{++}}\varvec{\Psi } + \mathbf {C_+}^{-1}\mathbf {F^\bullet _{+-}}&=\mathbf {0}, \end{aligned}$$
(7)
and the matrix \(\mathbf {K}\) is obtained by
$$\begin{aligned} \mathbf {K} = \mathbf {C_+}^{-1}\mathbf {F^\bullet _{++}} + \varvec{\Psi } |\mathbf {C_-}|^{-1}\mathbf {F^\bullet _{-+}}. \end{aligned}$$
(8)
The probability mass at level zero is \(p=\begin{bmatrix}0&p_-&p_0\end{bmatrix}\). The vectors \(\kappa \), \(p_-\) and \(p_0\) are the solutions of the linear equations
$$\begin{aligned} \begin{bmatrix}\kappa&p_-&p_0\end{bmatrix}\begin{bmatrix} \multicolumn{3}{c}{-\mathbf {A}} \\ \mathbf {F}_{-+} &{} \mathbf {F}_{--} &{} \mathbf {F}_{-0} \\ \mathbf {F}_{0+} &{} \mathbf {F}_{0-} &{} \mathbf {F}_{00} \end{bmatrix} = 0, \end{aligned}$$
(9)
with the normalization condition
$$\begin{aligned} \begin{bmatrix}\kappa&p_-&p_0\end{bmatrix}\begin{bmatrix} -\mathbf {K}^{-1}\mathbf {A}\mathbb {1} \\ \mathbb {1} \\ \mathbb {1} \end{bmatrix} = 1. \end{aligned}$$
(10)
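The computational core of this solution is the NARE (7). One possible way to obtain its minimal nonnegative solution is a Newton iteration in which every step reduces to a Sylvester equation; the sketch below follows this route (only one of several available Riccati solvers, structured doubling-type algorithms being another option) and also returns the matrix \(\mathbf {K}\) of (8).

```python
import numpy as np
from scipy.linalg import solve_sylvester

def solve_psi_K(Fb_pp, Fb_pm, Fb_mp, Fb_mm, C_p, C_m, tol=1e-12, maxit=50):
    """Minimal nonnegative solution Psi of the NARE (7) and the matrix K of
    (8), via Newton's method.  Fb_xy are the blocks of F_bullet; C_p, C_m are
    the diagonal rate matrices (C_m has negative diagonal entries)."""
    A = np.linalg.solve(C_p, Fb_pp)                 # C_+^{-1} F*_{++}
    D = np.linalg.solve(C_p, Fb_pm)                 # C_+^{-1} F*_{+-}
    Cm_inv = np.linalg.inv(np.abs(C_m))             # |C_-|^{-1}
    B = Cm_inv @ Fb_mm                              # |C_-|^{-1} F*_{--}
    G = Cm_inv @ Fb_mp                              # |C_-|^{-1} F*_{-+}

    Psi = np.zeros_like(D)
    for _ in range(maxit):
        resid = Psi @ G @ Psi + Psi @ B + A @ Psi + D   # left side of (7)
        if np.max(np.abs(resid)) < tol:
            break
        # Newton step: (A + Psi G) X + X (B + G Psi) = -resid
        Psi = Psi + solve_sylvester(A + Psi @ G, B + G @ Psi, -resid)
    K = A + Psi @ G                                  # Eq. (8)
    return Psi, K
```

The vectors \(\kappa \), \(p_-\) and \(p_0\) then follow from the linear system (9) with the normalization condition (10).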

3.2 Reward accumulation during the busy period

The duration of the busy period of fluid flow models, \(\tau =\inf (t>0: {\mathcal {A}}(t)=0)\), has been studied extensively in the literature: in [14] the Laplace–Stieltjes transform (LST) of its distribution is provided, and in [15] an Erlangization-based algorithm is described to approximate its distribution in the time domain. Both of these papers are based on matrix-analytic methods. (We note that the busy period of Markovian fluid flows can be analyzed with other methodologies as well, like with the spectral decomposition method, as in [16].)
In this section, we consider a more general problem. We assign a reward rate to each state of the background process and characterize the accumulated reward over the busy period of the fluid model. It turns out that the main performance measures of the fluid priority queue can be related to the accumulated reward over the busy period.
The introduction of reward accumulation makes the system three-dimensional, \(\{{\mathcal {A}}(t),{\mathcal {B}}(t),{\mathcal {J}}(t),t\ge 0\}\), where \({\mathcal {B}}(t)\) represents the reward accumulated up to time t. The reward accumulation is linear; thus, \({\mathcal {B}}(t) = \int _{0}^{t}d_{{\mathcal {J}}(u)}du\), where \(d_i\) denotes the reward rate associated with state i. Let us also introduce the diagonal matrix of reward rates \(\mathbf {D}=\text {diag}\langle d_i \rangle \). The joint distribution of the accumulated reward and the state transitions over the busy period is given by the matrix \(\varvec{\Phi }(y)\), defined as
$$\begin{aligned}{}[\varvec{\Phi }(y)]_{ij} = P({\mathcal {B}}(\tau )<y,{\mathcal {J}}(\tau )=j|{\mathcal {A}}(0)=0, {\mathcal {B}}(0)=0, {\mathcal {J}}(0)=i), \end{aligned}$$
(11)
for \(i\in {\mathcal {S}}_+, j\in {\mathcal {S}}_-\). Observe that in the special case of \(\mathbf {D}=\mathbf {I}\), the matrix \(\varvec{\Phi }(y)\) equals the distribution of the busy period duration starting with zero initial fluid level.
The following theorem, from [17], provides the LST of \(\varvec{\Phi }(y)\), defined by \(\varvec{\Phi }^*(v)=\int _{0}^\infty e^{-vy}\,d\varvec{\Phi }(y)\).
Theorem 1
[17, Theorem 3] The matrix \(\varvec{\Phi }^*(v)\), describing the LST of the accumulated reward over a busy period starting from an empty buffer, satisfies the NARE
$$\begin{aligned} \mathbf {0}&=\mathbf {F}^*_{++}(v)\varvec{\Phi }^*(v) +\varvec{\Phi }^*(v)\mathbf {F}^*_{--}(v) +\varvec{\Phi }^*(v)\mathbf {F}^*_{-+}(v) \varvec{\Phi }^*(v) +\mathbf {F}^*_{+-}(v), \end{aligned}$$
(12)
where the matrices \(\mathbf {F}^*_{++}(v), \mathbf {F}^*_{--}(v), \mathbf {F}^*_{-+}(v)\) and \(\mathbf {F}^*_{+-}(v)\) are defined by
$$\begin{aligned} \begin{aligned} \mathbf {F}^*_{++}(v)&=\mathbf {C}_+^{-1}\left( \mathbf {F}_{++}-v\mathbf {D}_++\mathbf {F}_{+0}\mathbf {H}^*_{00}(v) \mathbf {F}_{0+} \right) , \\ \mathbf {F}^*_{--}(v)&=-\mathbf {C}_-^{-1}\left( \mathbf {F}_{--}-v\mathbf {D}_-+\mathbf {F}_{-0}\mathbf {H}^*_{00}(v) \mathbf {F}_{0-} \right) , \\ \mathbf {F}^*_{-+}(v)&=-\mathbf {C}_-^{-1}\left( \mathbf {F}_{-+}+\mathbf {F}_{-0}\mathbf {H}^*_{00}(v) \mathbf {F}_{0+} \right) , \\ \mathbf {F}^*_{+-}(v)&=\mathbf {C}_+^{-1}\left( \mathbf {F}_{+-}+\mathbf {F}_{+0}\mathbf {H}^*_{00}(v) \mathbf {F}_{0-} \right) , \end{aligned} \end{aligned}$$
(13)
with \(\mathbf {H}^*_{00}(v)=(v\mathbf {D}_0-\mathbf {F}_{00})^{-1}\) being the LST of the reward accumulated in the zero states.
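For illustration, the matrices of (13) can be formed directly from \(\mathbf {F}\), \(\mathbf {C}\) and \(\mathbf {D}\); the following sketch assumes the index sets of the \(+\), \(-\) and 0 states are given.

```python
import numpy as np

def lst_blocks(v, F, C, D, ip, im, i0):
    """F*_{++}(v), F*_{--}(v), F*_{-+}(v), F*_{+-}(v) of Eq. (13).
    ip, im, i0: index lists of the +, -, 0 states; F, C, D: full matrices."""
    sub = lambda M, I, J: M[np.ix_(I, J)]
    H00 = np.linalg.inv(v * sub(D, i0, i0) - sub(F, i0, i0))      # H*_{00}(v)
    Cp, Cm = sub(C, ip, ip), sub(C, im, im)
    Fpp = np.linalg.solve(Cp,  sub(F, ip, ip) - v * sub(D, ip, ip)
                               + sub(F, ip, i0) @ H00 @ sub(F, i0, ip))
    Fmm = -np.linalg.solve(Cm, sub(F, im, im) - v * sub(D, im, im)
                               + sub(F, im, i0) @ H00 @ sub(F, i0, im))
    Fmp = -np.linalg.solve(Cm, sub(F, im, ip)
                               + sub(F, im, i0) @ H00 @ sub(F, i0, ip))
    Fpm = np.linalg.solve(Cp,  sub(F, ip, im)
                               + sub(F, ip, i0) @ H00 @ sub(F, i0, im))
    return Fpp, Fmm, Fmp, Fpm
```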
To obtain the moments of the performance measures of the fluid priority queue, the derivatives of the matrix \(\varvec{\Phi }^*(v)\) will be required at \(v\rightarrow 0\). Let us introduce the matrices \(\mathbf {F}_{ij}^{(m)}\) defined by \(\mathbf {F}_{ij}^{(m)}=\frac{\hbox {d}^m}{\hbox {d}v^m}\mathbf {F}^*_{ij}(v)|_{v\rightarrow 0}\) for \(i,j\in \{+,-\}\). The next corollary provides the derivatives of \(\varvec{\Phi }^*(v)\).
Corollary 1
The mth derivative of \(\varvec{\Phi }^*(v)\) at \(v\rightarrow 0\), denoted by \(\varvec{\Phi }^{(m)} =\frac{\hbox {d}^m}{\hbox {d}v^m} \varvec{\Phi }^*(v)|_{v\rightarrow 0}\), is obtained recursively by solving the Sylvester equations
$$\begin{aligned} (\mathbf {F}_{++}^{(0)} + \varvec{\Phi } \mathbf {F}_{-+}^{(0)})\varvec{\Phi }^{(m)} + \varvec{\Phi }^{(m)} (\mathbf {F}_{--}^{(0)} + \mathbf {F}_{-+}^{(0)}\varvec{\Phi } ) + \mathbf {F}_{+-}^{(m)} - \varvec{\Phi }\mathbf {F}_{-+}^{(m)}\varvec{\Phi } + \mathbf {U}_{m-1} =\mathbf {0} \end{aligned}$$
for \(m>0\), and for \(m=0\) we have that \(\varvec{\Phi }^{(0)} =\varvec{\Phi } = \varvec{\Psi }\). The matrix \(\mathbf {U}_{m-1}\) depends only on the derivatives of \(\varvec{\Phi }^*(v)\) up to order \(m-1\):
$$\begin{aligned} \begin{aligned} \mathbf {U}_{m-1}&=\sum _{i=0}^{m-1} \left( {\begin{array}{c}m\\ i\end{array}}\right) \Big (\mathbf {F}_{++}^{(m-i)}+\varvec{\Phi } \mathbf {F}_{-+}^{(m-i)}\Big )\varvec{\Phi }^{(i)} \\&\quad + \sum _{\ell =0}^{m-1} \left( {\begin{array}{c}m\\ \ell \end{array}}\right) \varvec{\Phi }^{(\ell )}\Big ( \mathbf {F}_{--}^{(m-\ell )}+\mathbf {F}_{-+}^{(m-\ell )}\varvec{\Phi }\Big ) \\&\quad + \sum _{\ell =1}^{m-1}\sum _{i=1}^{m-\ell }\left( {\begin{array}{c}m\\ \ell \end{array}}\right) \left( {\begin{array}{c}m-\ell \\ i\end{array}}\right) \varvec{\Phi }^{(\ell )} \mathbf {F}_{-+}^{(m-\ell -i)}\varvec{\Phi }^{(i)}. \end{aligned} \end{aligned}$$
Proof
Taking the mth derivative of (12) and letting \(v\rightarrow 0\) leads to
$$\begin{aligned} \begin{aligned}&\sum _{i=0}^m \left( {\begin{array}{c}m\\ i\end{array}}\right) \mathbf {F}_{++}^{(m-i)} \varvec{\Phi }^{(i)} +\sum _{i=0}^m \left( {\begin{array}{c}m\\ i\end{array}}\right) \varvec{\Phi }^{(i)} \mathbf {F}_{--}^{(m-i)} +\mathbf {F}_{+-}^{(m)} \\&\qquad + \sum _{\ell =0}^m\sum _{i=0}^{m-\ell } \left( {\begin{array}{c}m\\ \ell \end{array}}\right) \left( {\begin{array}{c}m-\ell \\ i\end{array}}\right) \varvec{\Phi }^{(\ell )}\mathbf {F}_{-+}^{(m-\ell -i)} \varvec{\Phi }^{(i)} = \mathbf {0}, \end{aligned} \end{aligned}$$
(14)
which, after some manipulations, provides the corollary.\(\square \)
Appendix A describes how to obtain the matrix derivatives \(\mathbf {F}_{ij}^{(m)},~i,j\in \{+,-\}\) in detail.
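The recursion of Corollary 1 translates directly into a sequence of Sylvester equations. The sketch below assumes that the derivative matrices \(\mathbf {F}_{ij}^{(m)}\) (obtained as described in Appendix A) are supplied as inputs, indexed by the derivative order.

```python
import numpy as np
from math import comb
from scipy.linalg import solve_sylvester

def phi_derivatives(Fpp, Fmm, Fmp, Fpm, Psi, M):
    """Return [Phi^(0), ..., Phi^(M)].  Fpp[m], Fmm[m], Fmp[m], Fpm[m] hold
    the m-th derivatives F_{ij}^{(m)} of (13) at v -> 0; Phi^(0) = Psi."""
    Phi = [Psi]
    A = Fpp[0] + Psi @ Fmp[0]
    B = Fmm[0] + Fmp[0] @ Psi
    for m in range(1, M + 1):
        # Assemble U_{m-1} from the lower-order derivatives.
        U = np.zeros_like(Psi)
        for i in range(m):
            U += comb(m, i) * (Fpp[m - i] + Psi @ Fmp[m - i]) @ Phi[i]
        for l in range(m):
            U += comb(m, l) * Phi[l] @ (Fmm[m - l] + Fmp[m - l] @ Psi)
        for l in range(1, m):
            for i in range(1, m - l + 1):
                U += comb(m, l) * comb(m - l, i) * Phi[l] @ Fmp[m - l - i] @ Phi[i]
        rhs = -(Fpm[m] - Psi @ Fmp[m] @ Psi + U)
        Phi.append(solve_sylvester(A, B, rhs))   # Sylvester equation of Corollary 1
    return Phi
```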
The rest of this section focuses on the distribution of the reward accumulated over the busy period. Unfortunately, there are no exact solutions available to obtain \(\varvec{\Phi }(t)\). One can either rely on a generic numerical inverse Laplace transform (ILT) method to calculate it from \(\varvec{\Phi }^*(v)\), or, one can apply the Erlangization method to approximate it. Most numerical ILT algorithms have several well-known drawbacks: they require complex and high precision arithmetic, have convergence problems and are prone to over- and undershooting, meaning that they can result in negative or higher than 1 values instead of valid probabilities. The Erlangization method can be viewed as a special kind of ILT method as well. Compared to generic ILT methods, Erlangization is slower, in the sense that it needs more evaluations of \(\varvec{\Phi }^*(v)\) to achieve reasonable accuracy. On the other hand, it can be implemented using real, machine precision arithmetic and has no over- and undershooting problems; the result is always a valid probability. Additionally, Erlangization has an elegant stochastic interpretation, which can be exploited when tailored to specific problems. Below we present an Erlangization-based numerical method to approximate \(\varvec{\Phi }(t)\), following the approach of [15]. The basic algorithm in [15] is extended in two aspects: first, that paper studies the duration of the busy period, while we need the accumulated reward in this paper; second, zero rates are excluded in [15], while we need them here.
According to the Erlangization algorithm, the order-n approximation of the matrix \(\varvec{\Phi }(t)\) is
$$\begin{aligned} \varvec{\Phi }_{n}(t) = \int _0^\infty f_{{\mathcal {E}}(n,n/t)}(u)\cdot \varvec{\Phi }(u) \,\hbox {d}u, \end{aligned}$$
(15)
where \(f_{{\mathcal {E}}(n,n/t)}(u)\) is an order-n Erlang density with rate parameter \(\nu =n/t\). As n tends to infinity, \(f_{{\mathcal {E}}(n,n/t)}(u)\) tends to a Dirac delta function \(\delta (u-t)\); hence, \(\varvec{\Phi }_{n}(t)\) tends to \(\varvec{\Phi }(t)\). The key idea behind Erlangization is that, by probabilistic interpretation, the integral in (15) is often easier to compute than \(\varvec{\Phi }(t)\) itself. In our case, the integral can be interpreted as the probability that the reward accumulated during the busy period is less than an Erlang\((n,\nu )\) variable.
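As a scalar toy illustration of the mechanism (not the matrix algorithm of Theorem 2): for a nonnegative random variable with a known LST, the order-n approximation equals the probability that the variable is smaller than an independent Erlang\((n,n/t)\) variable. For an exponentially distributed variable this probability is available in closed form, and its convergence to the exact distribution function is easy to observe.

```python
import numpy as np

# Toy example: X ~ Exp(mu), f*(s) = mu/(mu+s).
# Order-n Erlangization: F_n(t) = P(X < Erlang(n, nu)) with nu = n/t
#                               = 1 - (nu/(mu+nu))^n = 1 - (1 + mu*t/n)^(-n),
# which tends to the exact CDF 1 - exp(-mu*t) as n grows.
mu, t = 2.0, 1.3
exact = 1.0 - np.exp(-mu * t)
for n in (4, 16, 64, 256):
    approx = 1.0 - (1.0 + mu * t / n) ** (-n)
    print(f"n={n:4d}  F_n(t)={approx:.6f}  error={approx - exact:+.2e}")
```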
Theorem 2
The order-n approximation of the joint distribution of the accumulated reward and the state transitions over the busy period is
$$\begin{aligned} \varvec{\Phi }_{n}(t) = \sum _{\ell =0}^{n-1}{\hat{\varvec{\Psi }}}_\ell (\nu )|_{\nu =n/t}, \end{aligned}$$
(16)
where we have \({\hat{\varvec{\Psi }}}_0(\nu )=\varvec{\Phi }^*(\nu )\), and for \(\ell >0\) the matrices \({\hat{\varvec{\Psi }}}_\ell (\nu )\) are obtained recursively by the solutions of the Sylvester equations
$$\begin{aligned} \mathbf {0}= & {} \big (\mathbf {F}_{++}^*(\nu ) + {\hat{\varvec{\Psi }}}_0(\nu )\mathbf {F}_{-+}^*(\nu )\big ) {\hat{\varvec{\Psi }}}_\ell (\nu ) + {\hat{\varvec{\Psi }}}_\ell (\nu ) \big (\mathbf {F}_{--}^*(\nu ) + \mathbf {F}_{-+}^*(\nu ){\hat{\varvec{\Psi }}}_0(\nu )\big ) \nonumber \\&+\nu \mathbf {C}_+^{-1}\mathbf {D}_+{\hat{\varvec{\Psi }}}_{\ell -1}(\nu ) + \nu {\hat{\varvec{\Psi }}}_{\ell -1}(\nu )(-\mathbf {C}_-)^{-1}\mathbf {D}_- +\mathbf {C}_+^{-1}\mathbf {F}_{+0}\mathbf {Z}_\ell (\nu )\mathbf {F}_{0-} \nonumber \\&-{\hat{\varvec{\Psi }}}_0(\nu )(-\mathbf {C}_-)^{-1}\mathbf {F}_{-0} \mathbf {Z}_\ell (\nu )\mathbf {F}_{0+}{\hat{\varvec{\Psi }}}_0(\nu ) \nonumber \\&+\sum _{i=1}^{\ell -1}\sum _{j=1}^{\ell -i}{\hat{\varvec{\Psi }}}_{i} (\nu )(-\mathbf {C}_-)^{-1}\mathbf {F}_{-0}\mathbf {Z}_{\ell -i-j}(\nu )\mathbf {F}_{0+} {\hat{\varvec{\Psi }}}_j(\nu )\nonumber \\&+\sum _{i=1}^{\ell -1}{\hat{\varvec{\Psi }}}_i(\nu ) (-\mathbf {C}_-)^{-1}\mathbf {F}_{-+} {\hat{\varvec{\Psi }}}_{\ell -i}(\nu ) \nonumber \\&+\sum _{i=0}^{\ell -1}{\hat{\varvec{\Psi }}}_i(\nu )(-\mathbf {C}_-)^{-1} \mathbf {F}_{-0}\mathbf {Z}_{\ell -i}(\nu )(\mathbf {F}_{0-}+\mathbf {F}_{0+}{\hat{\varvec{\Psi }}}_0(\nu )) \nonumber \\&+\sum _{i=0}^{\ell -1}(\mathbf {C}_+^{-1}\mathbf {F}_{+0}+{\hat{\varvec{\Psi }}}_0(\nu ) (-\mathbf {C}_-)^{-1}\mathbf {F}_{-0})\mathbf {Z}_{\ell -i}(\nu )\mathbf {F}_{0+}{\hat{\varvec{\Psi }}}_i(\nu ), \end{aligned}$$
(17)
where the matrices \(\mathbf {F}_{++}^*(\nu ),\mathbf {F}_{+-}^*(\nu ),\mathbf {F}_{--}^*(\nu )\) and \(\mathbf {F}_{-+}^*(\nu )\) are defined by (13) and the matrices \(\mathbf {Z}_\ell (\nu )\) are defined by \(\mathbf {Z}_\ell (\nu )=\mathbf {H}^*_{00}(\nu )(\nu \mathbf {D}_0\mathbf {H}^*_{00}(\nu ))^\ell \).
Proof
The proof of the theorem is presented in Appendix B. \(\square \)

4 Waiting time analysis of the fluid priority queue

This section presents the LST, the moments and the distribution function of the waiting time of class k fluid drops, for \(k=1,\dots ,K\).

4.1 The distribution of the workload at fluid drop arrivals

As the first step of the analysis, the distribution of the total workload (also called backlog, unfinished work, or virtual waiting time in the literature) of classes \(\ge k\) is determined at class k fluid drop arrival instants. This is the amount of workload (and possibly more, due to further higher priority arrivals) that has to be accomplished before the tagged fluid drop can leave the system.
In non-fluid queues, where the jobs are discrete, the difference between the discrete-state queue length process and the continuous-state workload process is more obvious, but in the case of fluid queues both processes have continuous state space, and we will show that their dynamics are rather similar, too. Still, for the analysis of fluid priority queues, using the workload process is essential: it allows us to derive the performance measures from a process having only one continuous variable. If the analysis were based on the queue length process, we would need to solve a system with two continuous variables (the lengths of the priority k and the priority \((k+1)+\) queues, since they are correlated), which is much more involved.
Let us denote the workload of classes \(\ge k\) at time t by \({\mathcal {V}}^{(k+)}(t)\). It is important to note that the workload is measured in time, and not in fluid level. \({\mathcal {V}}^{(k+)}(t)\) is the time needed by the system to accomplish all the work and become idle, given that no class \(\ge k\) fluid enters the queue after time t. \({\mathcal {V}}^{(k+)}(t)=0\) holds if and only if the class \(\ge k\) fluid level is zero, but in general the evolution of the workload process differs from the evolution of the fluid level process.
The joint density function of the workload and the state of the background process at time t is denoted by the row vector \(v^{(k+)}(t,x)\), whose elements are \(v^{(k+)}_i(t,x)=\frac{\hbox {d}}{\hbox {d}x}P({\mathcal {V}}^{(k+)}(t)\le x,{\mathcal {J}}(t)=i)\). The joint probability that the workload of the system is 0 and the state of the background process is i is stored by the row vector \(\alpha ^{(k+)}(t)=[\alpha ^{(k+)}_i(t)]\), where \(\alpha ^{(k+)}_i(t)=P({\mathcal {V}}^{(k+)}(t)=0,{\mathcal {J}}(t)=i)\).
Lemma 1
The vector \(v^{(k+)}(t,x)\) is determined by the differential equation
$$\begin{aligned} \frac{\partial }{\partial t} v^{(k+)}(t,x) + \frac{\partial }{\partial x} v^{(k+)}(t,x) \left( \frac{1}{d}\mathbf {R}^{(k+)}-\mathbf {I}\right) = v^{(k+)}(t,x)\mathbf {Q}, \end{aligned}$$
(18)
with the boundary condition
$$\begin{aligned} \frac{\hbox {d}}{\hbox {d}t}\alpha ^{(k+)}_i(t)=\sum _{j:r_j^{(k+)}\le d}\alpha ^{(k+)}_j(t)q_{ji} - v^{(k+)}_i(t,0)\left( \frac{r_i^{(k+)}}{d} - 1\right) ,~\text {for } i: r_i^{(k+)}\le d, \end{aligned}$$
(19)
and \(\alpha ^{(k+)}_i(t)=0\) for \(i: r_i^{(k+)}>d\).
Proof
For \(x>0\), the behavior of \(v^{(k+)}(t,x)\) in an infinitesimally small \(\varDelta \) time interval is
$$\begin{aligned} \begin{aligned} v^{(k+)}_i(t,x)&= v^{(k+)}_i(t-\varDelta ,x+\varDelta -\varDelta r_i^{(k+)}/d) (1+q_{ii}\varDelta ) \\&\quad +\sum _{j\ne i} q_{ji}\varDelta v^{(k+)}_j(t-\varTheta (\varDelta ),x+\varTheta (\varDelta )) + o(\varDelta ), \end{aligned} \end{aligned}$$
(20)
where the first term corresponds to the case when there is no state transition in \((t-\varDelta ,t)\), and the second one when there is a state transition from j to i. Over a \(\varDelta \) long interval, the amount of class \(\ge k\) workload brought into the system is \(\varDelta r^{(k+)}_i/d\) when the background process is in state i, and the amount of workload processed by the server is \(\varDelta \). Functions \(o(\varDelta )\) and \(\varTheta (\varDelta )\) are such that \(\lim _{\varDelta \rightarrow 0}o(\varDelta )/\varDelta =0\) and \(\lim _{\varDelta \rightarrow 0}\varTheta (\varDelta )=0\), respectively.
Rearranging the terms of (20) and dividing both sides by \(\varDelta \) leads to
$$\begin{aligned} \begin{aligned}&\frac{v^{(k+)}_i(t,x) - v^{(k+)}_i(t-\varDelta ,x)}{\varDelta } + \frac{v^{(k+)}_i(t-\varDelta ,x) - v^{(k+)}_i(t-\varDelta ,x+\varDelta -\varDelta r_i^{(k+)}/d) }{\varDelta } \\&\quad =\sum _{j\in {\mathcal {S}}} q_{ji} v^{(k+)}_j(t-\varTheta (\varDelta ),x+\varTheta (\varDelta )) + o(\varDelta )/\varDelta , \end{aligned} \end{aligned}$$
which provides (18) when \(\varDelta \rightarrow 0\).
The boundary probability \(\alpha ^{(k+)}_i(t)\) is clearly zero if \(r_i^{(k+)}>d\), since in these states workload starts to accumulate. For states where \(r_i^{(k+)}\le d\) holds, the behavior in a small \(\varDelta \) interval is
$$\begin{aligned} \begin{aligned} \alpha ^{(k+)}_i(t)&= \alpha ^{(k+)}_i(t-\varDelta )(1+q_{ii}\varDelta ) + \sum _{j\ne i,r_j^{(k+)}\le d}q_{ji}\varDelta \alpha ^{(k+)}_j(t-\varDelta ) \\&\quad +P({\mathcal {V}}^{(k+)}(t-\varDelta )\in [0,\varDelta -\varDelta r_i^{(k+)}/d],{\mathcal {J}}(t-\varDelta )=i)+o(\varDelta ), \end{aligned} \end{aligned}$$
thus, either the workload was 0 in \(t-\varDelta \), or it was positive, but close enough to zero such that it became zero in \((t-\varDelta ,t)\). Rearranging the terms, dividing by \(\varDelta \) and tending \(\varDelta \rightarrow 0\) gives (19).\(\square \)
Lemma 1 has an important corollary: the workload process behaves like an ordinary Markovian fluid flow model with parameters
$$\begin{aligned} \mathbf {F} = \mathbf {Q},~~~~~~\mathbf {C}^{(k+)} = \frac{1}{d}\mathbf {R}^{(k+)}-\mathbf {I}, \end{aligned}$$
(21)
thus, its stationary density, \(v^{(k+)}(x)=\lim _{t\rightarrow \infty }v^{(k+)}(t,x)\), and probability mass at zero, \(\alpha ^{(k+)}=\lim _{t\rightarrow \infty }\alpha ^{(k+)}(t)\), are given by the matrix-exponential form
$$\begin{aligned} v^{(k+)}(x) = \kappa ^{(k+)} e^{\mathbf {K}^{(k+)} x}\mathbf {A}^{(k+)}, ~~~~~~ \alpha ^{(k+)} = p^{(k+)}, \end{aligned}$$
(22)
where the vectors \(\kappa ^{(k+)}, p^{(k+)}\) and the matrices \(\mathbf {K}^{(k+)},\mathbf {A}^{(k+)}\) are given by (6)–(10) with parameters (21). Hence, the density of the workload and the probability mass at zero embedded at class k fluid drop arrival instants are
$$\begin{aligned} {\hat{v}}^{(k)}(x) = \frac{1}{\lambda ^{(k)}} \kappa ^{(k+)} e^{\mathbf {K}^{(k+)} x}\mathbf {A}^{(k+)} \mathbf {R}^{(k)}, ~~~~~~ {\hat{\alpha }}^{(k)} = \frac{1}{\lambda ^{(k)}}\,p^{(k+)}\, \mathbf {R}^{(k)}. \end{aligned}$$
(23)
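Given the stationary workload solution \(\kappa ^{(k+)}, \mathbf {K}^{(k+)}, \mathbf {A}^{(k+)}, p^{(k+)}\) computed with the method of Sect. 3.1, the embedded quantities of (23) follow by a simple re-weighting; a minimal sketch (with the stationary solution assumed to be given as NumPy arrays):

```python
import numpy as np
from scipy.linalg import expm

def embedded_at_arrivals(kappa, K, A, p, Rk, lam_k):
    """Density of the class k+ workload and mass at zero, embedded at class k
    fluid drop arrival instants, Eq. (23)."""
    v_hat = lambda x: kappa @ expm(K * x) @ A @ Rk / lam_k   # \hat{v}^{(k)}(x)
    alpha_hat = p @ Rk / lam_k                               # \hat{\alpha}^{(k)}
    return v_hat, alpha_hat
```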

4.2 Obtaining the properties of the waiting time

As described in Sect. 2, the waiting time of a fluid drop is equal to the workload found in the system at its arrival, increased by the higher priority workload brought to the system while waiting.
Figure 1 shows an example for the evolution of the remaining waiting time process of a class k fluid drop, \({\mathcal {W}}^{(k)}(t,x)\), where the arriving fluid drop found x amount of class \(k+\) workload in the system. In the time periods where the curve is drawn with a solid line, the background process is in a state where \(r_{{\mathcal {J}}(t)}^{((k+1)+)}=0\) and there is no class \((k+1)+\) workload in the system either; hence, all the server capacity is spent on class k, and the remaining waiting time decreases with slope \(-1\). In the intervals where the curve is dashed, class \((k+1)+\) fluid is also present, and the service capacity is either fully taken by the higher priority classes, or, if their incoming rate is less than d and there is no high priority fluid accumulated in the buffer, it is shared between class k and the higher priority classes. While Fig. 1 depicts the remaining waiting time of a class k fluid drop, we can observe that it follows the same trajectory as the class \((k+1)+\) workload process, \({\mathcal {V}}^{((k+1)+)}(t)\), given that \({\mathcal {V}}^{((k+1)+)}(0)=x\).
Let \({\mathcal {T}}\) denote the random variable representing the waiting time of class k fluid drops. (For the sake of simplicity, we neglect the superscript (k) for \({\mathcal {T}}\) in this section.) By conditioning on the amount of class \(k+\) workload at a class k fluid drop arrival, we have that
$$\begin{aligned} P({\mathcal {T}}<y) = \int _0^\infty {\hat{v}}^{(k)}(x) \mathbf {G}^{((k+1)+)}(y,x)\mathbb {1}\,\hbox {d}x, \end{aligned}$$
(24)
with \(\mathbf {G}^{((k+1)+)}(y,x)\) being the distribution of the busy period of the class \((k+1)+\) workload starting at level x, defined by
$$\begin{aligned} \mathbf {G}^{((k+1)+)}(y,x) = P(\tau ^{((k+1)+)}<y,{\mathcal {J}}(\tau )=j|{\mathcal {V}}^{((k+1)+)}(0)=x, {\mathcal {J}}(0)=i), \end{aligned}$$
where \(\tau ^{((k+1)+)} =\inf (t>0: {\mathcal {V}}^{((k+1)+)}(t)=0)\) is the first time point when the workload process reaches level 0.
To get rid of the integral, we apply a transformation that transforms the analysis problem of the class k waiting time to the analysis problem of the accumulated reward of an appropriately constructed fluid flow model over a busy period. The main idea (introduced in [18]) is to start this fluid model at level 0 and add a phase whose role is to set the fluid level according to the initial workload at class k fluid arrival, having density \({\hat{v}}^{(k)}(x)\) (Fig. 2). The main benefit of this transformation is that it makes it possible to use the results of Sect. 3.2, where the busy period with 0 initial level is characterized.
The generator of the background process, the fluid rates and the reward rates of this special fluid flow model are
$$\begin{aligned} \mathbf {F} = \left[ \begin{array}{c|c} \mathbf {K}^{(k+)} &{} \mathbf {A}^{(k+)}\mathbf {R}^{(k)}/\lambda ^{(k)} \\ \hline &{} \mathbf {Q} \end{array}\right] , ~ \mathbf {C} = \left[ \begin{array}{c|c} \mathbf {I}\\ \hline &{} \frac{1}{d}\mathbf {R}^{((k+1)+)}-\mathbf {I}\end{array}\right] , ~ \mathbf {D} = \left[ \begin{array}{c|c} \mathbf {0} \\ \hline &{} \mathbf {I}\end{array}\right] . \end{aligned}$$
(25)
The first state group of the background process is responsible for the accumulation of the workload a class k fluid drop finds in the system upon its arrival. Observe that the probability density of the time spent here equals \(e^{\mathbf {K}^{(k+)}t}\mathbf {A}^{(k+)}\mathbf {R}^{(k)}/\lambda ^{(k)}\), which, when started from the vector \(\kappa ^{(k+)}\), equals \({\hat{v}}^{(k)}(t)\), according to (23). (Remember that \(\kappa ^{(k+)}\) is the initial vector of the stationary solution of the workload process; see (23).) While staying in this state group the workload increases with rate 1 (hence the \(\mathbf {I}\) matrix in the first block of \(\mathbf {C}\)), and during this period no reward is accumulated, since the time spent here is not part of the waiting time. (There is a \(\mathbf {0}\) in the corresponding block in the matrix \(\mathbf {D}\).)
When the appropriate workload level is reached, a transition occurs to the second group of states. In this part, the fluid level in this special model represents the workload of the system ahead of the tagged class k fluid drop. If class k has the highest priority in the system, then the workload ahead of the tagged fluid drop decreases linearly with rate 1. If there is higher priority work, that is, class \((k+1)+\) fluid flows into the system, then the workload brought by it increases the workload ahead of the tagged class k fluid drop. The rate at which the workload changes during this period is given by the diagonal elements of \(\frac{1}{d}\mathbf {R}^{((k+1)+)}-\mathbf {I}\). The reward rate is 1 in each state of the modulating CTMC; thus, the reward measures the time spent in this part of the state space. The time spent until the workload ahead of the tagged fluid drop reaches level 0 is identical to the waiting time.
Consequently, the waiting time is equal to the amount of reward accumulated during the busy period, starting from the first state group; thus, the initial phase distribution vector is
$$\begin{aligned} {\hat{\kappa }}^{(k)} = \begin{bmatrix}\kappa ^{(k+)}&0\end{bmatrix}. \end{aligned}$$
(26)
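A sketch of the construction (25)–(26), assuming the workload solution matrices of Sect. 4.1 and the priority queue parameters are available as NumPy arrays:

```python
import numpy as np

def waiting_time_fluid_model(Kk, Ak, Rk, R_k1plus, Q, d, lam_k, kappa_k):
    """Assemble the fluid model (25) whose accumulated reward over the busy
    period is the class k waiting time, together with the initial vector (26).
    Kk, Ak: K^(k+), A^(k+);  Rk: R^(k);  R_k1plus: R^((k+1)+)."""
    n1, n2 = Kk.shape[0], Q.shape[0]
    F = np.block([[Kk,                  Ak @ Rk / lam_k],
                  [np.zeros((n2, n1)),  Q              ]])
    C = np.block([[np.eye(n1),          np.zeros((n1, n2))],
                  [np.zeros((n2, n1)),  R_k1plus / d - np.eye(n2)]])
    D = np.block([[np.zeros((n1, n1)),  np.zeros((n1, n2))],
                  [np.zeros((n2, n1)),  np.eye(n2)]])
    kappa_hat = np.concatenate([kappa_k, np.zeros(n2)])   # Eq. (26)
    return F, C, D, kappa_hat
```

Feeding the resulting \(\mathbf {F}, \mathbf {C}, \mathbf {D}\) and \({\hat{\kappa }}^{(k)}\) into the reward analysis of Sect. 3.2 then yields the quantities used in the corollary below.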
Based on the above reasoning, we can state the following corollary.
Corollary 2
The LST of the distribution function of \({\mathcal {T}}\), \(f^*_{{\mathcal {T}}}(s)\) and its mth moment, \(E({\mathcal {T}}^m)\), are expressed by
$$\begin{aligned} f^*_{{\mathcal {T}}}(s)&= {\hat{\kappa }}^{(k)} \varvec{\Phi }^*(s)\mathbb {1}, \end{aligned}$$
(27)
$$\begin{aligned} E({\mathcal {T}}^m)&= (-1)^m {\hat{\kappa }}^{(k)} \varvec{\Phi }^{(m)}\mathbb {1},~~~~k>0, \end{aligned}$$
(28)
and the order-n approximation of the distribution function, \(F^{(n)}_{{\mathcal {T}}}(t)\), is
$$\begin{aligned} F^{(n)}_{{\mathcal {T}}}(t) = {\hat{\alpha }}^{(k)}\mathbb {1} + {\hat{\kappa }}^{(k)}\varvec{\Phi }_{n}(t)\mathbb {1}, \end{aligned}$$
(29)
where the matrices \(\varvec{\Phi }^*(s), \varvec{\Phi }^{(m)}\) and \(\varvec{\Phi }_{n}(t)\) are given by the reward analysis of the fluid model defined by the matrices (25), as detailed in Sect. 3.2.
In (29), the first term \({\hat{\alpha }}^{(k)}\mathbb {1}\) reflects that the waiting time is zero if the workload is zero when the fluid drop arrives.

5 Queue length analysis of the fluid priority queue

To analyze the stationary queue length for each fluid class of the system, we first characterize the queue length at fluid drop departure instants. Then, we provide the relation between the queue length distribution at departures and the queue length distribution at random point in time.

5.1 Queue length distribution at fluid drop departures

The approach for characterizing the amount of class k fluid in the system at class k fluid drop departures is similar to the one presented in Sect. 4.2. A special Markov-modulated fluid model is constructed such that the reward accumulated during its busy period is equal to the queue length at departures. The matrices defining this special fluid model are
$$\begin{aligned} \mathbf {F} = \left[ \begin{array}{c|c} \mathbf {K}^{(k+)} &{} \mathbf {A}^{(k+)}\mathbf {R}^{(k)}/\lambda ^{(k)} \\ \hline &{} \mathbf {Q} \end{array}\right] , \mathbf {C} = \left[ \begin{array}{c|c} \mathbf {I}\\ \hline &{} \frac{1}{d}\mathbf {R}^{((k+1)+)}-\mathbf {I}\end{array}\right] , \mathbf {D} = \left[ \begin{array}{c|c} \mathbf {0} \\ \hline &{} \mathbf {R}^{(k)} \end{array}\right] . \end{aligned}$$
(30)
Observe that these matrices are similar to (25), and the interpretation is similar as well. The first state group sets the initial workload seen by a class k fluid drop arrival. The second state group follows the workload ahead of the fluid drop until it leaves the system. With the given reward rate matrix \(\mathbf {D}\), however, the reward accumulated over the busy period measures the amount of class k fluid that arrives until the tagged fluid drop leaves; hence, it provides the class k queue length at the departure of the tagged class k fluid drop.
Corollary 3
Denoting the class k queue length at departures by \({\mathcal {X}}\), the LST of the distribution function by \(f^*_{\mathcal {X}}(s)\), its mth moment by \(E({\mathcal {X}}^m)\), and the order-n approximation of its distribution function by \(F^{(n)}_{\mathcal {X}}(x)\), we have that
$$\begin{aligned} f^*_{\mathcal {X}}(s)&= {\hat{\kappa }}^{(k)} \varvec{\Phi }^*(s)\mathbb {1}, \end{aligned}$$
(31)
$$\begin{aligned} E({\mathcal {X}}^m)&= (-1)^m {\hat{\kappa }}^{(k)} \varvec{\Phi }^{(m)}\mathbb {1}, ~~~~k>0, \end{aligned}$$
(32)
$$\begin{aligned} F^{(n)}_{\mathcal {X}}(x)&= {\hat{\alpha }}^{(k)}\mathbb {1} + {\hat{\kappa }}^{(k)}\varvec{\Phi }_{n}(x)\mathbb {1}. \end{aligned}$$
(33)
The definition and the computation of the matrices \(\varvec{\Phi }^*(s), \varvec{\Phi }^{(m)}\) and \(\varvec{\Phi }_{n}(x)\) of the fluid model characterized by (30) are detailed in Sect. 3.2.
The first term in (33), \({\hat{\alpha }}^{(k)}\), is the probability that the class \(k+\) workload is zero at a class k fluid drop arrival, implying that the queue length is zero, too.

5.2 Queue length distribution at random point in time

In the case of non-fluid queues with discrete customers, the relation between the queue length distribution at departures and at random point in time is known [19]. For fluid queues, however, such a result is not available in the literature.
Let \({\mathcal {Y}}\) denote the class k queue length at random point in time. The density function, the distribution function and the probability mass at level 0 are denoted by \(f_{\mathcal {Y}}(x), F_{\mathcal {Y}}(x)\) and \(p_{\mathcal {Y}}\), respectively. The following theorem establishes a relationship between the density and distribution functions of \({\mathcal {X}}\) and \({\mathcal {Y}}\). Before stating the theorem, we introduce the vector forms of these quantities that include the state of the background process as well, characterizing \(\lim _{t\rightarrow \infty }\{{\mathcal {Y}}(t),{\mathcal {J}}(t)\}\). These row vector quantities are defined by \([\underline{F_{\mathcal {Y}}}(x)]_i=\lim _{t\rightarrow \infty } P({\mathcal {Y}}(t)<x,{\mathcal {J}}(t)=i), \underline{f_{\mathcal {Y}}}(x)=\frac{\hbox {d}}{\hbox {d}x}\underline{F_{\mathcal {Y}}}(x)\), and \([\underline{p_{\mathcal {Y}}}]_i=\lim _{t\rightarrow \infty } P({\mathcal {Y}}(t)=0,{\mathcal {J}}(t)=i)\).
Theorem 3
The density and distribution functions of \({\mathcal {X}}\) and \({\mathcal {Y}}\) are related by
$$\begin{aligned} \underline{f_{\mathcal {Y}}}(u)\mathbf {R}^{(k)} - \underline{F_{{\mathcal {Y}}}}(u)\mathbf {Q} = \lambda ^{(k)}\underline{f_{\mathcal {X}}}(u), \end{aligned}$$
(34)
and for the masses at level 0 we have
$$\begin{aligned} \underline{p_{\mathcal {Y}}}\mathbf {R}^{(k)}&= \lambda ^{(k)}\underline{p_{\mathcal {X}}}. \end{aligned}$$
(35)
Proof
The proof is inspired by [20] and is based on simple balance equations. Assume that the size of the fluid drops is \(\delta \). Expressing the rate at which the state \(\{{\mathcal {Y}}\in [0,x),{\mathcal {J}}(t)=j\}\) is left (for \(x>0\)) we get
$$\begin{aligned} P({\mathcal {Y}}\in [x-\delta ,x),{\mathcal {J}}=j)\cdot r^{(k)}_j/\delta + P({\mathcal {Y}}<x,{\mathcal {J}}=j)\cdot (-q_{jj}), \end{aligned}$$
(36)
where the first term is the rate of moving from \(\{{\mathcal {Y}}\in [0,x),{\mathcal {J}}(t)=j\}\) to \(\{{\mathcal {Y}}\ge x,{\mathcal {J}}(t)=j\}\) due to a fluid drop arrival, and the second term is the rate of leaving \(\{{\mathcal {Y}}\in [0,x),{\mathcal {J}}(t)=j\}\) due to the state change of the background process.
For \(x>0\), the rate at which the system enters state \(\{{\mathcal {Y}}\in [0,x),{\mathcal {J}}=j\}\) is
$$\begin{aligned} P({\mathcal {X}}\in [x-\delta ,x),{\mathcal {J}}=j)\cdot \lambda ^{(k)}/\delta + \sum _{i\ne j} P({\mathcal {Y}}<x,{\mathcal {J}}=i)\cdot q_{ij}, \end{aligned}$$
(37)
where \(P({\mathcal {X}}\in [x-\delta ,x),{\mathcal {J}}=j)\), in the first term, is the fraction of fluid drops that leave the system behind with fluid level \(<x\). Since \(\lambda ^{(k)}/\delta \) is the fluid drop departure intensity, \(P({\mathcal {X}}\in [x-\delta ,x),{\mathcal {J}}=j)\cdot \lambda ^{(k)}/\delta \) is the transition rate from \(\{{\mathcal {Y}}\ge x,{\mathcal {J}}=j\}\) to \(\{{\mathcal {Y}}\in [0,x),{\mathcal {J}}=j\}\) caused by fluid drop departures. The second term covers the state transitions triggered by the background process, from state \(\{{\mathcal {Y}}\in [0,x),{\mathcal {J}}=i\}\) to \(\{{\mathcal {Y}}\in [0,x),{\mathcal {J}}=j\}\).
Equating (36) and (37) and rearranging the terms lead to
$$\begin{aligned} \begin{aligned}&\frac{P({\mathcal {Y}}\in [x-\delta ,x),{\mathcal {J}}=j)}{\delta }r^{(k)}_j-\sum _{i\in {\mathcal {S}}}P({\mathcal {Y}}<x,{\mathcal {J}}=i)\cdot q_{ij} \\&\qquad =\lambda ^{(k)}\frac{P({\mathcal {X}}\in [x-\delta ,x),{\mathcal {J}}=j)}{\delta } , \end{aligned} \end{aligned}$$
which, letting \(\delta \rightarrow 0\), exploiting that \(\lim _{\delta \rightarrow 0} P({\mathcal {Y}}\in [x-\delta ,x),{\mathcal {J}}=j)/\delta = [\underline{f_{\mathcal {Y}}}(x)]_j\) and \(\lim _{\delta \rightarrow 0} P({\mathcal {X}}\in [x-\delta ,x),{\mathcal {J}}=j)/\delta = [\underline{f_{\mathcal {X}}}(x)]_j\), then switching to matrix notation, gives (34).
The relation between the probability masses of \({\mathcal {X}}\) and \({\mathcal {Y}}\) at level 0 can be obtained using similar arguments. The rate of leaving state \(\{{\mathcal {Y}}=0,{\mathcal {J}}=j\}\) is \([\underline{p_{\mathcal {Y}}}]_j r_j^{(k)}/\delta +[\underline{p_{\mathcal {Y}}}]_j (-q_{jj})\), while the rate of entering \(\{{\mathcal {Y}}=0,{\mathcal {J}}=j\}\) is \(\lambda ^{(k)}/\delta \cdot [\underline{p_{\mathcal {X}}}]_j + \sum _{i\ne j}[\underline{p_{\mathcal {Y}}}]_i q_{ij}\). By equating them, we get
$$\begin{aligned}{}[\underline{p_{\mathcal {Y}}}]_j\cdot r_j^{(k)} -\sum _{i\in {\mathcal {S}}}[\underline{p_{\mathcal {Y}}}]_i q_{ij}\cdot \delta = \lambda ^{(k)}\cdot [\underline{p_{\mathcal {X}}}]_j, \end{aligned}$$
(38)
which, letting \(\delta \rightarrow 0\), establishes (35). \(\square \)
It is worth noting that (35) is the continuous counterpart of [21, Theorem 5], while the derivative of (34) is the continuous counterpart of [21, Theorem 6]. The relation between \({\mathcal {X}}\) and \({\mathcal {Y}}\) in the Laplace transform domain, given by the following corollary, is similar to the result obtained for discrete-state (non-fluid) queues in [21, Equation (30)].
Corollary 4
The LSTs of the distribution functions of \({\mathcal {X}}\) and \({\mathcal {Y}}\), denoted by \(\underline{f^*_{\mathcal {X}}}(s)=\int _0^\infty e^{-st}\,d\underline{F_{\mathcal {X}}}(t)\) and \(\underline{f^*_{\mathcal {Y}}}(s)=\int _0^\infty e^{-st}\,d\underline{F_{\mathcal {Y}}}(t)\), are related by
$$\begin{aligned} \underline{f^*_{\mathcal {Y}}}(s) (s\,\mathbf {R}^{(k)}-\mathbf {Q}) = \lambda ^{(k)} \,s\, \underline{f^*_{\mathcal {X}}}(s). \end{aligned}$$
(39)
Next, the class k queue length moments are derived based on the ones embedded at departure epochs. The moments are obtained by taking the derivatives of the LST, i.e.,
$$\begin{aligned} E({\mathcal {Y}}^m) = (-1)^m\underbrace{\frac{\hbox {d}^m}{\hbox {d}s^m} \underline{f^*_{\mathcal {Y}}}(s)|_{s\rightarrow 0}}_{y^{(m)}}\mathbb {1}. \end{aligned}$$
(40)
The following theorem (which is the continuous counterpart of [21, Section 4.1]) provides a recursive algorithm to compute the vectors \(y^{(m)}\).
Theorem 4
The vectors \(y^{(m)}\) can be obtained recursively by
$$\begin{aligned} y^{(m)} = y^{(m)}\mathbb {1}\pi - m (y^{(m-1)}\mathbf {R}^{(k)} - \lambda ^{(k)} x^{(m-1)})(\mathbb {1}\pi -\mathbf {Q})^{-1}, \end{aligned}$$
(41)
for \(m>0\), where \(y^{(m)}\mathbb {1}\) is given by
$$\begin{aligned} y^{(m)}\mathbb {1} = x^{(m)}\mathbb {1} + m (y^{(m-1)}\mathbf {R}^{(k)} -\lambda ^{(k)}x^{(m-1)})(\mathbb {1}\pi -\mathbf {Q})^{-1}\mathbf {R}^{(k)}/\lambda ^{(k)}\mathbb {1}. \end{aligned}$$
(42)
The vectors \(x^{(m)}\) are defined by \(x^{(m)}=\frac{\mathrm{d}^m}{\mathrm{d}s^m} \underline{f^*_{\mathcal {X}}}(s) |_{s\rightarrow 0}\); hence,
$$\begin{aligned} x^{(m)}={\left\{ \begin{array}{ll} {\hat{\kappa }}^{(k)} \varvec{\Phi }^{(m)}, &{} m>0, \\ {\hat{\alpha }}^{(k)} + {\hat{\kappa }}^{(k)} \varvec{\Phi }^{(0)}, &{} m=0. \\ \end{array}\right. } \end{aligned}$$
(43)
For \(m=0\), we have \(y^{(0)}=\pi \), where \(\pi \) is the stationary probability vector of \(\mathbf {Q}\).
Proof
For \(m=0\), the elements of the vector \(y^{(0)}\) are the stationary probabilities of the background process, from which \(y^{(0)}=\pi \) follows.
Taking the derivative of (39) m times (\(m>0\)) and setting \(s=0\) leads to
$$\begin{aligned} y^{(m)}\mathbf {Q} = m\, y^{(m-1)} \mathbf {R}^{(k)} -m\, \lambda ^{(k)}x^{(m-1)}. \end{aligned}$$
(44)
Since the matrix \(\mathbf {Q}\) is not invertible, \(y^{(m)}\) cannot be expressed from (44) directly. Let us subtract \(y^{(m)}\mathbb {1}\pi \) from both sides of (44). We get
$$\begin{aligned} y^{(m)}(\mathbb {1}\pi -\mathbf {Q}) = y^{(m)}\mathbb {1}\pi - m\, y^{(m-1)} \mathbf {R}^{(k)} +m\, \lambda ^{(k)}x^{(m-1)}. \end{aligned}$$
(45)
Observing that \(\mathbb {1}\pi -\mathbf {Q}\) is invertible and that \(\mathbb {1}\pi (\mathbb {1}\pi -\mathbf {Q})^{-1}=\mathbb {1}\pi \) provides (41). Multiplying both sides of (41) by \(\mathbf {R}^{(k)}\mathbb {1}\) from the right yields
$$\begin{aligned} \underbrace{y^{(m)}\mathbf {R}^{(k)}\mathbb {1}}_{\lambda ^{(k)}x^{(m)}\mathbb {1}} = y^{(m)}\mathbb {1}\underbrace{\pi \mathbf {R}^{(k)}\mathbb {1}}_{\lambda ^{(k)}} - m (y^{(m-1)}\mathbf {R}^{(k)} - \lambda ^{(k)} x^{(m-1)})(\mathbb {1}\pi -\mathbf {Q})^{-1}\mathbf {R}^{(k)}\mathbb {1}. \end{aligned}$$
Exploiting that multiplying (44) by \(\mathbb {1}\) from the right gives \(y^{(m)}\mathbf {R}^{(k)}\mathbb {1}=\lambda ^{(k)}x^{(m)}\mathbb {1}\) and that \(\lambda ^{(k)}=\pi \mathbf {R}^{(k)}\mathbb {1}\), (42) is proven.\(\square \)
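The recursion of Theorem 4 is straightforward to implement; the following sketch assumes the embedded-moment vectors \(x^{(m)}\) of (43) have already been computed.

```python
import numpy as np

def queue_length_moment_vectors(x, Rk, Q, pi, lam_k, M):
    """Return [y^(0), ..., y^(M)] of Theorem 4; x[m] holds the vector x^(m)."""
    n = Q.shape[0]
    ones = np.ones(n)
    Zinv = np.linalg.inv(np.outer(ones, pi) - Q)      # (1 pi - Q)^{-1}
    y = [pi]                                          # y^(0) = pi
    for m in range(1, M + 1):
        w = m * (y[m - 1] @ Rk - lam_k * x[m - 1]) @ Zinv
        y_sum = x[m] @ ones + (w @ Rk @ ones) / lam_k  # Eq. (42): y^(m) 1
        y.append(y_sum * pi - w)                       # Eq. (41)
    return y

# The m-th queue length moment then follows from Eq. (40):
# E(Y^m) = (-1)**m * (y[m] @ ones).
```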
With the theorems and corollaries presented above, it is possible to compute the LST and the moments of the queue length distribution. Finally, it remains to obtain the order-n approximation of the class k queue length distribution in the time domain, \(F^{(n)}_{\mathcal {Y}}(x)\), based on Erlangization.
Assume that the order-n approximation of the queue length distribution at fluid drop departures is available in the form of (33), \(F^{(n)}_{\mathcal {X}}(x) = {\hat{\alpha }}^{(k)}\mathbb {1} + {\hat{\kappa }}^{(k)}\varvec{\Phi }_{n}(x)\mathbb {1}\), where \(\varvec{\Phi }_{n}(x) = \sum _{\ell =0}^{n-1}{\hat{\varvec{\Psi }}}_\ell (n/x)\) (as given in (16)). The following theorem demonstrates how to obtain the approximation of \(F^{(n)}_{\mathcal {Y}}(x)\) from the same \({\hat{\varvec{\Psi }}}_\ell (n/x)\) matrices.
Theorem 5
The order-n approximation of the distribution function \(F_{\mathcal {Y}}(x)\) is \(F^{(n)}_{\mathcal {Y}}(x)= \underline{F^{(n)}_{\mathcal {Y}}}(x)\mathbb {1}\), where the row vectors \(\underline{F^{(n)}_{\mathcal {Y}}}(x)\) are defined recursively by
$$\begin{aligned} \underline{F^{(n)}_{\mathcal {Y}}}(x) = \nu \left( \lambda ^{(k)}{\hat{\kappa }}^{(k)}{\hat{\varvec{\Psi }}}_{n-1}(\nu ) + \underline{F^{(n-1)}_{\mathcal {Y}}}(x)\mathbf {R}^{(k)}\right) (\nu \mathbf {R}^{(k)}-\mathbf {Q})^{-1}, \end{aligned}$$
(46)
for \(n>1\), and
$$\begin{aligned} \underline{F^{(1)}_{\mathcal {Y}}}(x) = \nu \lambda ^{(k)}({\hat{\alpha }}^{(k)} + {\hat{\kappa }}^{(k)}{\hat{\varvec{\Psi }}}_{0}(\nu )) (\nu \mathbf {R}^{(k)}-\mathbf {Q})^{-1}, \end{aligned}$$
(47)
for \(n=1\), where \(\nu =n/x\).
Proof
The idea behind Erlangization is that the distribution function is the limit, as \(n\rightarrow \infty \), of the approximations \(F^{(n)}_{\mathcal {Y}}(x)=\int _0^\infty f_{{\mathcal {E}}(n,\nu )}(u)P({\mathcal {Y}}<u)\,\hbox {d}u\) with \(\nu =n/x\). The state-dependent version of the formula is \([\underline{F^{(n)}_{\mathcal {Y}}}(x)]_i=\int _0^\infty f_{{\mathcal {E}}(n,\nu )}(u)P({\mathcal {Y}}<u,{\mathcal {J}}=i)\,\hbox {d}u\).
Multiplying both sides of (34) by an Erlang density \(f_{{\mathcal {E}}(n,\nu )}(u)\) and taking the integral from 0 to \(\infty \) gives
$$\begin{aligned} \begin{aligned}&\int _0^\infty f_{{\mathcal {E}}(n,\nu )}(u)\underline{f_{\mathcal {Y}}}(u)\,\hbox {d}u\, \mathbf {R}^{(k)} - \int _0^\infty f_{{\mathcal {E}}(n,\nu )}(u)\underline{F_{{\mathcal {Y}}}}(u)\,\hbox {d}u\, \mathbf {Q} \\&\quad = \lambda ^{(k)}\int _0^\infty f_{{\mathcal {E}}(n,\nu )}(u)\underline{f_{\mathcal {X}}}(u)\,\hbox {d}u, \end{aligned} \end{aligned}$$
which, when integrating both sides by parts, transforms to
$$\begin{aligned} \begin{aligned}&\left[ f_{{\mathcal {E}}(n,\nu )}(u)(\underline{F_{{\mathcal {Y}}}}(u)-\underline{p_{{\mathcal {Y}}}})\right] _0^\infty \mathbf {R}^{(k)} - \int _0^\infty \nu f_{{\mathcal {E}}(n-1,\nu )}(u)(\underline{F_{{\mathcal {Y}}}}(u)-\underline{p_{{\mathcal {Y}}}})\mathbf {R}^{(k)}\,\hbox {d}u \\&\qquad + \int _0^\infty \nu f_{{\mathcal {E}}(n,\nu )}(u)(\underline{F_{{\mathcal {Y}}}}(u)-\underline{p_{{\mathcal {Y}}}})\mathbf {R}^{(k)}\,\hbox {d}u - \int _0^\infty f_{{\mathcal {E}}(n,\nu )}(u)\underline{F_{{\mathcal {Y}}}}(u)\mathbf {Q}\,\hbox {d}u \\&\quad = \lambda ^{(k)}\left[ f_{{\mathcal {E}}(n,\nu )}(u)(\underline{F_{{\mathcal {X}}}}(u)-\underline{p_{{\mathcal {X}}}})\right] _0^\infty - \lambda ^{(k)}\int _0^\infty \nu f_{{\mathcal {E}}(n-1,\nu )}(u)(\underline{F_{{\mathcal {X}}}}(u)-\underline{p_{{\mathcal {X}}}})\,\hbox {d}u \\&\qquad + \lambda ^{(k)} \int _0^\infty \nu f_{{\mathcal {E}}(n,\nu )}(u)(\underline{F_{{\mathcal {X}}}}(u)-\underline{p_{{\mathcal {X}}}})\,\hbox {d}u, \end{aligned} \end{aligned}$$
for \(n>1\), where we have exploited that \(\frac{\hbox {d}}{\hbox {d}u}f_{{\mathcal {E}}(n,\nu )}(u) = \nu f_{{\mathcal {E}}(n-1,\nu )}(u) - \nu f_{{\mathcal {E}}(n,\nu )}(u)\).
The terms in the square brackets are zero, since \(\underline{F_{\mathcal {X}}}(0)=\underline{p_{\mathcal {X}}}\) and \(\underline{F_{\mathcal {Y}}}(0)=\underline{p_{\mathcal {Y}}}\) hold, and \(f_{{\mathcal {E}}(n,\nu )}(u)|_{u=\infty }=0\) holds, too. The remaining terms can be expressed as
$$\begin{aligned} -\nu \underline{F_{{\mathcal {Y}}}^{(n-1)}}(x)\mathbf {R}^{(k)} + \nu \underline{F_{{\mathcal {Y}}}^{(n)}}(x)\mathbf {R}^{(k)} - \underline{F_{{\mathcal {Y}}}^{(n)}}(x)\mathbf {Q} = \lambda ^{(k)}\nu (\underline{F_{{\mathcal {X}}}^{(n)}}(x)-\underline{F_{{\mathcal {X}}}^{(n-1)}}(x)), \end{aligned}$$
which equals (46) since \(\underline{F_{{\mathcal {X}}}^{(n)}}(x)-\underline{F_{{\mathcal {X}}}^{(n-1)}}(x)= {\hat{\kappa }}^{(k)}{\hat{\varvec{\Psi }}}_{n-1}\) (see (33) and (16)).
Following similar steps for \(n=1\) gives
$$\begin{aligned} \begin{aligned}&\left[ f_{{\mathcal {E}}(1,\nu )}(u)(\underline{F_{{\mathcal {Y}}}}(u)-\underline{p_{{\mathcal {Y}}}})\right] _0^\infty \mathbf {R}^{(k)} + \nu \int _0^\infty f_{{\mathcal {E}}(1,\nu )}(u)(\underline{F_{{\mathcal {Y}}}}(u)-\underline{p_{{\mathcal {Y}}}})\mathbf {R}^{(k)}\,\hbox {d}u \\&\qquad - \int _0^\infty f_{{\mathcal {E}}(1,\nu )}(u)\underline{F_{{\mathcal {Y}}}}(u)\mathbf {Q}\,\hbox {d}u\\&\quad = \lambda ^{(k)}\left[ f_{{\mathcal {E}}(1,\nu )}(u)(\underline{F_{{\mathcal {X}}}}(u)-\underline{p_{{\mathcal {X}}}})\right] _0^\infty + \lambda ^{(k)} \nu \int _0^\infty f_{{\mathcal {E}}(1,\nu )}(u)(\underline{F_{{\mathcal {X}}}}(u)-\underline{p_{{\mathcal {X}}}})\,\hbox {d}u, \end{aligned} \end{aligned}$$
which, since \(\lim _{u\rightarrow 0} f_{{\mathcal {E}}(1,\nu )}(u)=\nu \) and \(\underline{F_{\mathcal {Y}}}(0)=\underline{p_{\mathcal {Y}}}\), simplifies to
$$\begin{aligned} \nu \underline{F_{{\mathcal {Y}}}^{(1)}}(x)\mathbf {R}^{(k)} -\nu \underline{p_{\mathcal {Y}}}\mathbf {R}^{(k)} - \underline{F_{{\mathcal {Y}}}^{(1)}}(x)\mathbf {Q} = \lambda ^{(k)}\nu (\underline{F_{{\mathcal {X}}}^{(1)}}(x)-\underline{p_{{\mathcal {X}}}}), \end{aligned}$$
(48)
proving (47), since \(\underline{p_{{\mathcal {X}}}}={\hat{\alpha }}^{(k)}\), \(\underline{F_{{\mathcal {X}}}^{(1)}}(x)-\underline{p_{{\mathcal {X}}}} = {\hat{\kappa }}^{(k)}{\hat{\varvec{\Psi }}}_{0}\) and \(\underline{p_{\mathcal {Y}}}\mathbf {R}^{(k)} = \lambda ^{(k)}\underline{p_{\mathcal {X}}}\) due to (35).\(\square \)
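Read as an algorithm, Theorem 5 is a single linear recursion over vectors of the size of the background process. The sketch below (Python/NumPy; an illustration only, with hypothetical names, and with the matrices \({\hat{\varvec{\Psi }}}_\ell (\nu )\), \({\hat{\alpha }}^{(k)}\), \({\hat{\kappa }}^{(k)}\) assumed to be precomputed at \(\nu =n/x\)) evaluates \(F^{(n)}_{\mathcal {Y}}(x)\) at a single point x.

```python
import numpy as np

def queue_length_cdf_point(x, n, Q, R_k, lam_k, alpha_hat, kappa_hat, Psi_hat):
    """Order-n Erlangization of F_Y(x) following (46)-(47).

    Psi_hat : list of the matrices Psi_hat_0(nu), ..., Psi_hat_{n-1}(nu),
              already evaluated at nu = n / x.
    Returns the scalar approximation F_Y^{(n)}(x).
    """
    nu = n / x
    A = np.linalg.inv(nu * R_k - Q)                       # (nu R^(k) - Q)^{-1}
    # n = 1 initial step, Eq. (47)
    Fy = nu * lam_k * (alpha_hat + kappa_hat @ Psi_hat[0]) @ A
    # n > 1 steps, Eq. (46)
    for m in range(2, n + 1):
        Fy = nu * (lam_k * kappa_hat @ Psi_hat[m - 1] + Fy @ R_k) @ A
    return float(Fy @ np.ones(Q.shape[0]))
```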

6 Numerical examples

The proposed algorithms have been implemented in the MATLAB environment and are available for download (see Footnote 1). This implementation uses the ADDA algorithm [22] to solve the Riccati equations, and the lyap function of MATLAB to solve the Sylvester equations (the lyap function is based on the Hessenberg–Schur algorithm [23]). All numerical experiments have been executed on an average PC with a CPU clocked at 3.4 GHz and 4 GB of RAM.
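As a rough, library-level illustration of the Sylvester step (not the MATLAB code of the implementation), SciPy's solve_sylvester can play the role of lyap; it solves \(AX+XB=C\), and the snippet below merely checks the residual on a random instance. The ADDA iteration of [22] has no SciPy counterpart and is therefore not shown.

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonal shift keeps the equation well posed
B = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((n, n))

# Solve A X + X B = C (the step handled by lyap in the MATLAB implementation)
X = solve_sylvester(A, B, C)
print(np.linalg.norm(A @ X + X @ B - C))          # residual, close to machine precision
```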

6.1 Example 1

In the first example, there are three fluid types and the background process has four states. The generator of the CTMC and the fluid input rates in various states are given by the matrices
$$\begin{aligned} \mathbf {Q}&= \begin{bmatrix} -8 &{} 5 &{} 0 &{} 3\\ 3 &{} -4 &{} 0 &{} 1\\ 4 &{} 6 &{} -10 &{} 0\\ 2 &{} 3 &{} 10 &{} -15 \end{bmatrix}, \qquad \quad&\mathbf {R}^{(1)}&= \begin{bmatrix} 1 \\ &{} 0 &{} &{}\\ &{} &{} 2 &{} \\ &{} &{} &{} 1 \end{bmatrix}, \\ \mathbf {R}^{(2)}&= \begin{bmatrix} 2 &{} &{} &{} \\ &{} 0 &{} &{} \\ &{} &{} 4 &{} \\ &{} &{} &{} 1 \end{bmatrix},&\mathbf {R}^{(3)}&= \begin{bmatrix} 4.5 &{} &{} &{} \\ &{} 1 &{} &{} \\ &{} &{} 0 &{} \\ &{} &{} &{} 2 \end{bmatrix}, \end{aligned}$$
and the service rate is \(d=4\). With these parameters, the utilization of the system is \(\lambda /d = 0.875\).
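The reported utilization can be reproduced from these matrices in a few lines; the check below (Python/NumPy, purely a sanity check of the input data, not part of the solution algorithm) computes \(\lambda =\pi (\mathbf {R}^{(1)}+\mathbf {R}^{(2)}+\mathbf {R}^{(3)})\mathbb {1}\) and divides by d.

```python
import numpy as np

Q = np.array([[-8, 5, 0, 3], [3, -4, 0, 1], [4, 6, -10, 0], [2, 3, 10, -15]], dtype=float)
R1 = np.diag([1.0, 0.0, 2.0, 1.0])
R2 = np.diag([2.0, 0.0, 4.0, 1.0])
R3 = np.diag([4.5, 1.0, 0.0, 2.0])
d = 4.0

# stationary distribution of the background CTMC: pi Q = 0, pi 1 = 1
A = np.vstack([Q.T, np.ones((1, 4))])
b = np.zeros(5)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

lam = pi @ (R1 + R2 + R3) @ np.ones(4)     # total mean fluid input rate
print(lam / d)                             # prints 0.875 (up to rounding)
```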
The algorithms returned the waiting time and queue length moments without a measurable delay. For the numeric results, see Table 1.
Table 1  Waiting time and queue length moments in Example 1

          \(E({\mathcal {T}})\)   \(E({\mathcal {T}}^2)\)   \(E({\mathcal {T}}^3)\)   \(E({\mathcal {Y}})\)   \(E({\mathcal {Y}}^2)\)   \(E({\mathcal {Y}}^3)\)
Class 1:  2.0845    9.5477    68.66                    1.137     3.5904    17.955
Class 2:  0.32218   0.23518   0.25628                  0.32218   0.3538    0.59163
Class 3:  0.01104   0.000368  \(1.839\cdot 10^{-5}\)   0.02157   0.00288   0.00058
Obtaining the distributions is, however, more demanding computationally. The accuracy and the execution speed depend on how many terms are taken into account during the Erlangization. Our experience is in line with the earlier reports (see [15]), namely that Erlangization is able to provide relatively accurate results with a small number of terms (the parameter n). Figure 3 depicts the distribution functions of class 1 waiting times for different values of n. For \(n=10\), the approximation has some visible error, but the approximate distribution functions for \(n\ge 25\) are almost indistinguishable from each other.
Profiling this example revealed that the bottleneck of the Erlangization algorithm is the computation of the twofold summation at line 5 in (17). This term, however, is zero when the workload process of class 2 and 3 jobs has no zero rates. The workload process does have a zero rate in this example, since in state 3 we have \(r_3^{(2)} + r_3^{(3)} = d\). Changing \(r_3^{(2)}\) to a value different from 4 eliminates this zero state, leading to an order of magnitude faster execution times (see Footnote 2 and Table 2); a minimal version of this zero-state check is sketched after the table.
Table 2  Computation time of the class 1 waiting time distribution in 50 points, in seconds (Example 1)

              \(r_3^{(2)}=4\)    \(r_3^{(2)}=4.1\)
\(n=10\)      0.523 s            0.189 s
\(n=25\)      4.771 s            0.543 s
\(n=50\)      33.255 s           1.427 s
\(n=100\)     249.72 s           4.616 s
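The zero-state check mentioned above (and automated in the implementation, see Footnote 2) amounts to testing whether the aggregate class 2 and 3 rate equals the service rate in some state. Below is a minimal sketch with the Example 1 data (Python/NumPy; purely illustrative, not the implementation's actual check):

```python
import numpy as np

# Example 1 data: rates of class 2 and 3 fluid and the service rate
r2 = np.array([2.0, 0.0, 4.0, 1.0])
r3 = np.array([4.5, 1.0, 0.0, 2.0])
d = 4.0

net = r2 + r3 - d                              # net rate of the class 2+3 workload in each state
zero_states = np.where(np.isclose(net, 0.0))[0]
print(zero_states + 1)                         # prints [3]: state 3 is a zero state
```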

6.2 Example 2

The aim of the second example is to demonstrate how the speed of the algorithms scales with the size of the input. Let us define the model as the superposition of N identical on–off sources with exponentially distributed on and off periods, with transition rates denoted by \(\nu \) and \(\gamma \), respectively. Each source generates fluid with rate \(\rho /N\) while being in the on state. The matrix parameters of the superposed fluid model are given by
$$\begin{aligned} \mathbf {Q}_N=\begin{bmatrix} \bullet &{} N \nu \\ \gamma &{} \bullet &{} (N-1)\nu \\ &{} 2\gamma &{} \bullet &{} (N-2)\nu \\ &{} &{} \ddots &{} \ddots &{} \ddots \\ &{} &{} &{} (N-1)\gamma &{} \bullet &{} \nu \\ &{} &{} &{} &{} N \gamma &{} \bullet \end{bmatrix},~~~ \mathbf {R}_N = \begin{bmatrix} 0 \\ &{} \rho /N \\ &{} &{} 2\rho /N \\ &{} &{} &{} \ddots \\ &{} &{} &{} &{} \rho \end{bmatrix}, \end{aligned}$$
where N determines the size of the model. The matrix \(\mathbf {Q}_N\) is the generator of the background process, and the matrix \(\mathbf {R}_N\) is the diagonal matrix of the overall fluid rates. If we assume that \(10\%\) of the fluid generated by the sources is class 1 fluid, \(30\%\) is class 2 fluid and \(60\%\) is class 3 fluid, the per class fluid input rates are \(\mathbf {R}_N^{(1)}=0.1\cdot \mathbf {R}_N, \mathbf {R}_N^{(2)}=0.3\cdot \mathbf {R}_N\) and \(\mathbf {R}_N^{(3)}=0.6\cdot \mathbf {R}_N\).
In this example, the parameters are \(\nu =1/N\) and \(\gamma =1/(3N)\) (the reason for making them dependent on N is to keep the values of the generator in the same range irrespective of the size), and the fluid rate parameter is \(\rho =50\). With these settings, the mean fluid arrival rate is \(\lambda =12.5\).
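The superposed model is easy to assemble programmatically. The sketch below (Python/NumPy; build_onoff_model is a hypothetical helper, not part of the published implementation) reproduces the displayed structure of \(\mathbf {Q}_N\) and \(\mathbf {R}_N\) verbatim, together with the per-class rate splits.

```python
import numpy as np

def build_onoff_model(N, rho=50.0):
    """Assemble Q_N, R_N and the class-wise rate matrices of Example 2."""
    nu, gamma = 1.0 / N, 1.0 / (3.0 * N)
    Q = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            Q[i, i + 1] = (N - i) * nu       # upward transition rate, as in Q_N
        if i > 0:
            Q[i, i - 1] = i * gamma          # downward transition rate, as in Q_N
        Q[i, i] = -Q[i].sum()                # diagonal entries (the bullets): negative row sums
    R = np.diag(np.arange(N + 1) * rho / N)  # overall fluid rates 0, rho/N, ..., rho
    return Q, R, 0.1 * R, 0.3 * R, 0.6 * R   # total and class 1/2/3 rate matrices

Q10, R10, R10_1, R10_2, R10_3 = build_onoff_model(10)
```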
We have computed the first five moments of the queue length of all job classes with different model sizes (controlled by the parameter N) and with different utilizations u (controlled by the service rate \(d=\lambda /u\)). The execution times are depicted in Fig. 4, which reports the average of multiple repetitions for every point, together with the bootstrap-t confidence intervals (that are very tight).
As the figure shows, the presented algorithms are able to solve huge models with 1000 states in reasonable time, without any numerical problems. The execution times also depend on the utilization, since it affects the number of positive and negative states. At low utilization, obtaining five moments of the queue length for all job classes takes about \(10\,s\) for \(N=1000\), while at high utilization the largest model requires about \(40\,s\).

Acknowledgements

Open access funding provided by Budapest University of Technology and Economics (BME).
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix

Appendix A: Calculating the derivatives of \(\mathbf {F}^*_{ij}(v)\)

This section describes how to calculate the derivatives of matrix functions of the form \(\mathbf {F}^*(v)=(v\mathbf {D}-\mathbf {F})^{-1}\) (where \(\mathbf {D}\) is a diagonal matrix composed of the reward rates), necessary to obtain the matrices \(\mathbf {F}_{ij}^{(m)}\) in Corollary 1.
Taking the derivatives of \(\mathbf {F}^*(v)\) is trivial if all entries of \(\mathbf {D}\) are nonzero. If this is not the case, the matrices involved must be re-partitioned according to whether the reward rates are positive or zero, and hence,
$$\begin{aligned} \mathbf {D} = \begin{bmatrix} \mathbf {D}_+ \\ &{} \mathbf {0} \end{bmatrix}, ~~~ \mathbf {F} = \begin{bmatrix} \mathbf {F}_{++} &{} \mathbf {F}_{+0} \\ \mathbf {F}_{0+} &{} \mathbf {F}_{00} \end{bmatrix}. \end{aligned}$$
(49)
Note that this partitioning is not the same as the one assumed in Sect. 3. In Sect. 3, the partitioning is done according to the fluid rates; here, it is done according to the reward rates.
According to [24], block matrices can be inverted using
$$\begin{aligned} \begin{bmatrix}A &{} B \\ C &{} D \end{bmatrix}^{-1} = \begin{bmatrix} (A-BD^{-1}C)^{-1} &{} -(A-BD^{-1}C)^{-1}BD^{-1} \\ -D^{-1}C(A-BD^{-1}C)^{-1} &{} D^{-1}+D^{-1}C(A-BD^{-1}C)^{-1}BD^{-1} \end{bmatrix}. \end{aligned}$$
Introducing \(\mathbf {X}(v)=(\mathbf {F}_{++}-v\mathbf {D}_++\mathbf {F}_{+0}(-\mathbf {F}_{00})^{-1}\mathbf {F}_{0+})^{-1}\), this means that
$$\begin{aligned} (v\mathbf {D}-\mathbf {F})^{-1} =\begin{bmatrix} -\mathbf {X}(v) &{} \mathbf {X}(v)\mathbf {F}_{+0}\mathbf {F}_{00}^{-1} \\ \mathbf {F}_{00}^{-1}\mathbf {F}_{0+}\mathbf {X}(v) &{} -\mathbf {F}_{00}^{-1} - \mathbf {F}_{00}^{-1}\mathbf {F}_{0+}\mathbf {X}(v)\mathbf {F}_{+0}\mathbf {F}_{00}^{-1} \end{bmatrix}. \end{aligned}$$
(50)
Since the only term in (50) which depends on v is \(\mathbf {X}(v)\), the derivatives of \((v\mathbf {D}-\mathbf {F})^{-1}\) can now be easily obtained since
$$\begin{aligned} \frac{\hbox {d}^m}{\hbox {d}v^m} \mathbf {X}(v) = (-1)^{m+1} m!\,\left( v\mathbf {I}- \mathbf {D}_+^{-1}(\mathbf {F}_{++}+\mathbf {F}_{+0}(-\mathbf {F}_{00})^{-1}\mathbf {F}_{0+})\right) ^{-m-1}\mathbf {D}_+^{-1}. \end{aligned}$$
(51)
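As a numerical cross-check of this appendix (not taken from the paper), the closed-form derivatives can be compared with finite differences on a small random instance. The sketch below uses the product form \(\frac{\hbox {d}^m}{\hbox {d}v^m}\mathbf {X}(v)=(-1)^{m+1}m!\,\big ((v\mathbf {D}_+-\mathbf {M})^{-1}\mathbf {D}_+\big )^{m}(v\mathbf {D}_+-\mathbf {M})^{-1}\) with \(\mathbf {M}=\mathbf {F}_{++}+\mathbf {F}_{+0}(-\mathbf {F}_{00})^{-1}\mathbf {F}_{0+}\), which is algebraically equivalent to (51).

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
n_p, n_0 = 3, 2                               # sizes of the '+' and '0' partitions
F = rng.uniform(0.1, 1.0, (n_p + n_0, n_p + n_0))
np.fill_diagonal(F, 0.0)
F -= np.diag(F.sum(axis=1))                   # random generator matrix
Dp = np.diag(rng.uniform(0.5, 2.0, n_p))      # positive reward rates of the '+' states
Fpp, Fp0 = F[:n_p, :n_p], F[:n_p, n_p:]
F0p, F00 = F[n_p:, :n_p], F[n_p:, n_p:]
M = Fpp + Fp0 @ np.linalg.inv(-F00) @ F0p     # the matrix appearing in X(v)

def X(v):
    return np.linalg.inv(M - v * Dp)

def X_deriv(v, m):
    # product form equivalent to (51)
    G = np.linalg.inv(v * Dp - M)
    return (-1) ** (m + 1) * factorial(m) * np.linalg.matrix_power(G @ Dp, m) @ G

v, h = 3.0, 1e-5
fd = (X(v + h) - X(v - h)) / (2.0 * h)        # central finite difference for m = 1
print(np.max(np.abs(fd - X_deriv(v, 1))))     # small (finite-difference error only)
```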

Appendix B: The proof of Theorem 2

Examining (15), it can be observed that the entries of the matrix \(\varvec{\Phi }_{n}(t)\) are the probabilities that the accumulated reward is less than an order-n Erlang-distributed random variable with parameter \(\nu \). Hence, the matrix \(\varvec{\Phi }_{n}(t)\) is obtained by creating a process in which the reward accumulation process and the Erlang distribution run in parallel and compete. To this end, the state space of the background process (given by the generator matrix \(\mathbf {F}\)) has to be extended such that it not only keeps track of the state of \({\mathcal {J}}(t)\), but also counts the number of events generated by a Poisson process with parameter \(\nu \). At the end of the busy period, the probabilities of those states where the counter is less than n contribute to \(\varvec{\Phi }_{n}(t)\). To take the state-dependent reward rates into account, the transition rate incrementing the counter must be \(\nu \cdot d_i\) if \({\mathcal {J}}(t)=i\). Thus, time is accelerated for the counting process if the reward rate is high in the current state, and it is slowed down if the reward rate is low.
The generator matrix and the fluid rates of the modified process, which also includes the counter, have the following block structure:
$$\begin{aligned} {\tilde{\mathbf{F}}}(\nu )=\begin{bmatrix} \mathbf {F}-\nu \mathbf {D} &{} \nu \mathbf {D} \\ &{} \mathbf {F}-\nu \mathbf {D} &{} \nu \mathbf {D} \\ &{} &{} \ddots &{} \ddots \\ \end{bmatrix}, ~~~ \varvec{{\tilde{C}}} = \begin{bmatrix} \mathbf {C} \\ &{} \mathbf {C} \\ &{} &{} \ddots \end{bmatrix}, \end{aligned}$$
(52)
where being in block i at time t means that the value of the counter is i at time t.
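Although the paper never builds these infinite matrices explicitly (the recursion of Theorem 2 exploits their structure instead), their truncated versions are easy to write down and may help to visualize (52). A minimal sketch follows (Python/NumPy; extend_with_counter is a hypothetical name), keeping counter levels \(0,\dots ,n-1\), which suffices because only counter values below n contribute to \(\varvec{\Phi }_{n}\).

```python
import numpy as np

def extend_with_counter(F, D, C, nu, n):
    """Explicit (truncated) version of the block matrices in (52).

    Keeps counter levels 0, ..., n-1; reaching level n stops the counting.
    """
    supdiag = np.diag(np.ones(n - 1), k=1)                           # moves the counter up by one
    F_tilde = np.kron(np.eye(n), F - nu * D) + np.kron(supdiag, nu * D)
    C_tilde = np.kron(np.eye(n), C)
    return F_tilde, C_tilde
```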
By partitioning the state space according to the sign of the fluid rates, the matrices \({\tilde{\mathbf{F}}}(\nu )\) and \(\varvec{{\tilde{C}}}\) are rearranged into \({\hat{\mathbf{F}}}(\nu )\) and \({\hat{\mathbf{C}}}\), where each block of the \(3\times 3\) partitioning of \({\hat{\mathbf{F}}}(\nu )\) and \({\hat{\mathbf{C}}}\) is of infinite size.
To obtain the state of the background process at the end of the busy period, the blocks of these matrices need to be inserted into (7), leading to
$$\begin{aligned} \begin{aligned} \mathbf {0}&={{\hat{\mathbf{C}}}_+}^{-1}({\hat{\mathbf{F}}}_{++}(\nu )+{\hat{\mathbf{F}}}_{+0}(\nu ) (-{\hat{\mathbf{F}}}_{00}(\nu ))^{-1}{\hat{\mathbf{F}}}_{0+}(\nu )){\hat{\varvec{\Psi }}}(\nu ) \\&\quad + {\hat{\varvec{\Psi }}}(\nu ) (-{\hat{\mathbf{C}}}_-)^{-1}({\hat{\mathbf{F}}}_{--}(\nu )+{\hat{\mathbf{F}}}_{-0}(\nu )( -{\hat{\mathbf{F}}}_{00}(\nu ))^{-1}{\hat{\mathbf{F}}}_{0-}(\nu )) \\&\quad + {\hat{\varvec{\Psi }}}(\nu )(-{\hat{\mathbf{C}}}_-)^{-1}({\hat{\mathbf{F}}}_{-+}(\nu ) +{\hat{\mathbf{F}}}_{-0}(\nu )(-{\hat{\mathbf{F}}}_{00}(\nu ))^{-1}{\hat{\mathbf{F}}}_{0+}(\nu )) {\hat{\varvec{\Psi }}}(\nu ) \\&\quad +{{\hat{\mathbf{C}}}_+}^{-1}({\hat{\mathbf{F}}}_{+-}(\nu )+{\hat{\mathbf{F}}}_{+0}(\nu )( -{\hat{\mathbf{F}}}_{00}(\nu ))^{-1}{\hat{\mathbf{F}}}_{0-}(\nu )). \end{aligned} \end{aligned}$$
(53)
Due to the structure of the matrices \({\hat{\mathbf{F}}}(\nu )\) and \({\hat{\mathbf{C}}}\), the inverse matrix \((-{{\hat{\mathbf{F}}}_{00}(\nu )})^{-1}\) has an upper block-triangular structure:
$$\begin{aligned} (-{{\hat{\mathbf{F}}}_{00}}(\nu ))^{-1} = \begin{bmatrix} \mathbf {Z}_0(\nu ) &{} \mathbf {Z}_1(\nu )&{} \mathbf {Z}_2(\nu ) &{} \ldots \\ &{} \mathbf {Z}_0(\nu ) &{} \mathbf {Z}_1(\nu ) &{} \ldots \\ &{} &{} \mathbf {Z}_0(\nu ) &{} \ldots \\ &{} &{} &{} \ddots \end{bmatrix}, \end{aligned}$$
(54)
with \(\mathbf {Z}_\ell (\nu )=\mathbf {H}^*_{00}(\nu )(\nu \mathbf {D}_0\mathbf {H}^*_{00}(\nu ))^\ell \). Investigating the structure of the coefficient matrices of (53), namely
$$\begin{aligned} \mathbf {M}_{++}(\nu )&= {{\hat{\mathbf{C}}}_+}^{-1}({\hat{\mathbf{F}}}_{++}(\nu )+{\hat{\mathbf{F}}}_{+0}(\nu )(-{\hat{\mathbf{F}}}_{00}(\nu ))^{-1}{\hat{\mathbf{F}}}_{0+}(\nu )), \\ \mathbf {M}_{+-}(\nu )&= {{\hat{\mathbf{C}}}_+}^{-1}({\hat{\mathbf{F}}}_{+-}(\nu )+{\hat{\mathbf{F}}}_{+0}(\nu )(-{\hat{\mathbf{F}}}_{00}(\nu ))^{-1}{\hat{\mathbf{F}}}_{0-}(\nu )), \\ \mathbf {M}_{--}(\nu )&= (-{\hat{\mathbf{C}}}_-)^{-1}({\hat{\mathbf{F}}}_{--}(\nu )+{\hat{\mathbf{F}}}_{-0}(\nu )(-{\hat{\mathbf{F}}}_{00}(\nu ))^{-1}{\hat{\mathbf{F}}}_{0-}(\nu )), \\ \mathbf {M}_{-+}(\nu )&= (-{\hat{\mathbf{C}}}_-)^{-1}({\hat{\mathbf{F}}}_{-+}(\nu )+{\hat{\mathbf{F}}}_{-0}(\nu )(-{\hat{\mathbf{F}}}_{00}(\nu ))^{-1}{\hat{\mathbf{F}}}_{0+}(\nu )), \end{aligned}$$
reveals that all of them are upper block-triangular, too; hence, we have
$$\begin{aligned} \mathbf {M}_{ij}(\nu )=\begin{bmatrix} \mathbf {M}^{(0)}_{ij}(\nu ) &{} \mathbf {M}^{(1)}_{ij}(\nu ) &{} \mathbf {M}^{(2)}_{ij}(\nu ) &{} \dots \\ &{} \mathbf {M}^{(0)}_{ij}(\nu ) &{} \mathbf {M}^{(1)}_{ij}(\nu ) &{} \dots \\ &{} &{} \mathbf {M}^{(0)}_{ij}(\nu ) &{} \dots \\ &{} &{} &{} \ddots \end{bmatrix}, \end{aligned}$$
(55)
where \(i\in \{+,-\}\) and \(j\in \{+,-\}\). The block structure of the coefficient matrices implies that the solution of the NARE
$$\begin{aligned} \mathbf {0}&=\mathbf {M}_{++}(\nu ){\hat{\varvec{\Psi }}}(\nu ) + {\hat{\varvec{\Psi }}}(\nu ) \mathbf {M}_{--}(\nu ) + {\hat{\varvec{\Psi }}}(\nu )\mathbf {M}_{-+}(\nu ){\hat{\varvec{\Psi }}}(\nu ) +\mathbf {M}_{+-}(\nu ) \end{aligned}$$
(56)
has an upper block-triangular structure as well, i.e.,
$$\begin{aligned} {\hat{\varvec{\Psi }}}(\nu ) = \begin{bmatrix} {\hat{\varvec{\Psi }}}_0(\nu ) &{} {\hat{\varvec{\Psi }}}_1(\nu ) &{} {\hat{\varvec{\Psi }}}_2(\nu ) &{} \ldots \\ &{} {\hat{\varvec{\Psi }}}_0(\nu ) &{} {\hat{\varvec{\Psi }}}_1(\nu ) &{} \ldots \\ &{} &{} {\hat{\varvec{\Psi }}}_0(\nu ) &{} \ldots \\ &{} &{} &{} \ddots \end{bmatrix}. \end{aligned}$$
(57)
With the given coefficient matrices, the block in the first block row and the \((\ell +1)\)th block column of the NARE (56) becomes
$$\begin{aligned} \begin{aligned} \mathbf {0}&=\sum _{i=0}^\ell \mathbf {M}_{++}^{(i)}(\nu ) {\hat{\varvec{\Psi }}}_{\ell -i}(\nu ) + \sum _{i=0}^\ell {\hat{\varvec{\Psi }}}_{i}(\nu ) \mathbf {M}_{--}^{(\ell -i)}(\nu ) \\&\quad + \sum _{j=0}^\ell \sum _{i=0}^{j} {\hat{\varvec{\Psi }}}_{i}(\nu ) \mathbf {M}_{-+}^{(j-i)}(\nu ) {\hat{\varvec{\Psi }}}_{\ell -j}(\nu ) + \mathbf {M}_{+-}^{(\ell )}(\nu ), \end{aligned} \end{aligned}$$
(58)
from which \({\hat{\varvec{\Psi }}}_0(\nu )=\varvec{\Phi }^*(\nu )\) follows immediately for \(\ell =0\). In order to obtain a recursion for \({\hat{\varvec{\Psi }}}_\ell (\nu ), \ell >0\), we separate all terms involving \({\hat{\varvec{\Psi }}}_\ell (\nu )\) from those that do not depend on it, yielding
$$\begin{aligned} \begin{aligned} \mathbf {0}&= (\mathbf {M}_{++}^{(0)}(\nu ) + {\hat{\varvec{\Psi }}}_{0}(\nu ) \mathbf {M}_{-+}^{(0)}(\nu ) ){\hat{\varvec{\Psi }}}_{\ell }(\nu ) + {\hat{\varvec{\Psi }}}_{\ell }(\nu ) (\mathbf {M}_{--}^{(0)}(\nu ) + \mathbf {M}_{-+}^{(0)}(\nu ) {\hat{\varvec{\Psi }}}_{0}(\nu )) \\&\quad + \sum _{i=1}^\ell \mathbf {M}_{++}^{(i)}(\nu ) {\hat{\varvec{\Psi }}}_{\ell -i}(\nu ) + \sum _{i=0}^{\ell -1} {\hat{\varvec{\Psi }}}_{i}(\nu ) \mathbf {M}_{--}^{(\ell -i)}(\nu ) + \mathbf {M}_{+-}^{(\ell )}(\nu ) \\&\quad + \sum _{j=1}^{\ell -1} \sum _{i=0}^{j} {\hat{\varvec{\Psi }}}_{i}(\nu ) \mathbf {M}_{-+}^{(j-i)}(\nu ) {\hat{\varvec{\Psi }}}_{\ell -j}(\nu ) + \sum _{i=0}^{\ell -1} {\hat{\varvec{\Psi }}}_{i}(\nu ) \mathbf {M}_{-+}^{(\ell -i)}(\nu ) {\hat{\varvec{\Psi }}}_{0}(\nu ). \end{aligned} \end{aligned}$$
(59)
Inserting the definitions of the coefficient matrices into the equation gives (17).
Finally, as the matrix \({\hat{\varvec{\Psi }}}_\ell (\nu )\) contains the probabilities that the counter value is \(\ell \) when the busy period ends, (16) follows. \(\square \)
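In a solver, (59) is a sequence of Sylvester equations sharing the same coefficient pair. Below is a minimal sketch (Python/SciPy; psi_blocks and the argument names are hypothetical, and the blocks \(\mathbf {M}^{(i)}_{ij}(\nu )\) together with \({\hat{\varvec{\Psi }}}_0(\nu )\) are assumed to be available from (53) and a NARE solver).

```python
import numpy as np
from scipy.linalg import solve_sylvester

def psi_blocks(Mpp, Mmm, Mpm, Mmp, Psi0, L):
    """Compute Psi_hat_0, ..., Psi_hat_L from the recursion (59).

    Mpp, Mmm, Mpm, Mmp : lists of the blocks M^{(0)}, ..., M^{(L)} of the four
                         coefficient matrices (all evaluated at the same nu)
    Psi0               : the minimal solution of the NARE, i.e. Psi_hat_0
    """
    A = Mpp[0] + Psi0 @ Mmp[0]                    # left coefficient, shared by all ell
    B = Mmm[0] + Mmp[0] @ Psi0                    # right coefficient, shared by all ell
    Psi = [Psi0]
    for l in range(1, L + 1):
        # constant term of (59): everything not containing Psi_hat_ell
        K = Mpm[l].copy()
        K += sum(Mpp[i] @ Psi[l - i] for i in range(1, l + 1))
        K += sum(Psi[i] @ Mmm[l - i] for i in range(l))
        K += sum(Psi[i] @ Mmp[j - i] @ Psi[l - j]
                 for j in range(1, l) for i in range(j + 1))
        K += sum(Psi[i] @ Mmp[l - i] @ Psi0 for i in range(l))
        Psi.append(solve_sylvester(A, B, -K))     # A X + X B + K = 0
    return Psi
```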
Footnotes
1. The implementation can be downloaded from http://www.hit.bme.hu/~ghorvath/software.
2. Our implementation detects the lack of zero states automatically for faster execution.
Literature
1. Miller Jr., R.G.: Priority queues. Ann. Math. Stat. 31, 86–103 (1960)
2. Takine, T.: A nonpreemptive priority MAP/G/1 queue with two classes of customers. J. Oper. Res. Soc. Jpn. 39(2), 266–290 (1996)
3. Horváth, G.: Efficient analysis of the MMAP[K]/PH[K]/1 priority queue. Eur. J. Oper. Res. 246(1), 128–139 (2015)
4. Ramaswami, V.: Matrix analytic methods for stochastic fluid flows. In: Teletraffic Engineering in a Competitive World—Proceedings of the 16th International Teletraffic Congress (ITC 16), Elsevier, pp. 1019–1030 (1999)
5. Knessl, C., Tier, C.: A simple fluid model for servicing priority traffic. IEEE Trans. Autom. Control 46(6), 909–914 (2001)
6. Liu, Y., Gong, W.: On fluid queueing systems with strict priority. IEEE Trans. Autom. Control 48(12), 2079–2088 (2003)
7. Narayanan, A., Kulkarni, V.: First passage times in fluid models with an application to two priority fluid systems. In: Proceedings of IEEE International Computer Performance and Dependability Symposium. IEEE, pp. 166–175 (1996)
8. Zhang, J.: Performance study of Markov modulated fluid flow models with priority traffic. In: INFOCOM’93. Proceedings. Twelfth Annual Joint Conference of the IEEE Computer and Communications Societies. Networking: Foundation for the Future. IEEE, pp. 10–17 (1993)
9. Choi, B.D., Choi, K.B.: A Markov modulated fluid queueing system with strict priority. Telecommun. Syst. 9(1), 79–95 (1998)
10. Tzenova, E.I., Adan, I., Kulkarni, V.G.: A two-priority fluid flow model: joint steady state distribution of the buffer content processes. Technische Universiteit Eindhoven, Department of Mathematics and Computing Science, TU/e (2004)
11. Takine, T.: The workload in the MAP/G/1 queue with state-dependent services: its application to a queue with preemptive resume priority. Stoch. Models 10(1), 183–204 (1994)
12. Kulkarni, V.G.: Fluid models for single buffer systems. Front. Queueing Models Appl. Sci. Eng. 321, 338 (1997)
13. Kankaya, H.E., Akar, N.: Solving multi-regime feedback fluid queues. Stoch. Models 24(3), 425–450 (2008)
14. Ahn, S., Ramaswami, V.: Efficient algorithms for transient analysis of stochastic fluid flow models. J. Appl. Probab. 42, 531–549 (2005)
15. Ramaswami, V., Woolford, D.G., Stanford, D.A.: The Erlangization method for Markovian fluid flows. Ann. Oper. Res. 160(1), 215–225 (2008)
16. Kulkarni, V.G., Tzenova, E.: Mean first passage times in fluid queues. Oper. Res. Lett. 30(5), 308–318 (2002)
17. Bean, N.G., O’Reilly, M.M.: A stochastic two-dimensional fluid model. Stoch. Models 29(1), 31–63 (2013)
18. Dzial, T., Breuer, L., da Silva Soares, A., Latouche, G., Remiche, M.-A.: Fluid queues to solve jump processes. Perform. Eval. 62(1), 132–146 (2005)
19. Takine, T., Takahashi, Y.: On the relationship between queue lengths at a random instant and at a departure in the stationary queue with BMAP arrivals. Stoch. Models 14(3), 601–610 (1998)
20. Kim, N.K., Chang, S.H., Chae, K.C.: On the relationships among queue lengths at arrival, departure, and random epochs in the discrete-time queue with D-BMAP arrivals. Oper. Res. Lett. 30(1), 25–32 (2002)
21. Lucantoni, D.M., Meier-Hellstern, K.S., Neuts, M.F.: A single-server queue with server vacations and a class of non-renewal arrival processes. Adv. Appl. Probab. 22, 676–705 (1990)
22. Wang, W.-G., Wang, W.-C., Li, R.-C.: Alternating-directional doubling algorithm for M-matrix algebraic Riccati equations. SIAM J. Matrix Anal. Appl. 33(1), 170–194 (2012)
23. Golub, G.H., Nash, S., Van Loan, C.: A Hessenberg–Schur method for the problem AX+XB=C. IEEE Trans. Autom. Control 24(6), 909–913 (1979)
24. Bernstein, D.S.: Matrix Mathematics: Theory, Facts, and Formulas. Princeton University Press, Princeton (2009)