In this section, we consider the following extension of a model of an autoregressive process, studied in [8]:
$$\begin{aligned} R_{n+1} = \mathrm{max}(A_{n} R_{n} + G_n,0), ~~~n =0,1,\ldots , \end{aligned}$$
(43)
where
\(R_0=z\) and where, for
\(n=0,1,\ldots \),
\(G_n = Y_n - B_n\) with all
\(B_n\) independent random variables which are
\(\mathrm{exp}(\lambda )\) distributed, and all
\(Y_n\) non-negative i.i.d. random variables with distribution
\(F_Y(\cdot )\) and LST
\(\phi _Y(\cdot )\). In [8], one has
\(A_n \equiv a\) with
\(a \in (0,1)\), but we now take
\(A_0,A_1,\dots \) i.i.d., with the following discrete distribution:
$$\begin{aligned} \mathbb P(A_1=a_i) = p_i, ~~~ i=1,\dots ,M, ~~~\mathrm{with ~all} ~ p_i>0 ~~~\mathrm{and} ~ \sum _{i=1}^M p_i = 1, ~ \mathrm{and ~ all} ~a_i \in (0,1). \end{aligned}$$
(44)
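Before turning to transforms, the recursion (43)–(44) is easy to simulate directly. In the sketch below, \(Y_n\) is taken exponential with rate 2 and \(A_n\) two-valued; all parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: the model only requires a_i in (0,1),
# p_i > 0 summing to 1, B_n ~ exp(lam), and Y_n >= 0 i.i.d.
lam = 1.0                      # rate of B_n
mu = 2.0                       # rate of Y_n (exponential Y, as an example)
a = np.array([0.4, 0.7])       # support of A_n, all in (0,1)
p = np.array([0.6, 0.4])       # P(A_n = a_i)

def simulate(z=1.0, n_steps=200, n_paths=100_000):
    """Simulate R_{n+1} = max(A_n R_n + G_n, 0) with G_n = Y_n - B_n."""
    R = np.full(n_paths, z)
    for _ in range(n_steps):
        A = rng.choice(a, size=n_paths, p=p)
        G = rng.exponential(1/mu, n_paths) - rng.exponential(1/lam, n_paths)
        R = np.maximum(A*R + G, 0.0)
    return R

R = simulate()
print("mean:", R.mean(), "  P(R=0) approx:", (R == 0).mean())
```

Note the atom at 0: with these parameters, \(B_n\) frequently exceeds \(Y_n + A_n R_n\), so the process hits 0 with positive probability.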
In [8], both the transient behavior and the steady-state behavior of the \(R_n\) process with \(A_n \equiv a\) are studied, via a Wiener–Hopf technique (cf. [12]) that leads to a recursion. We apply the same tools to the extension defined by (43) and (44). Below we first follow the approach of [8]. Introduce
\(U_n := \mathrm{min}(A_nR_n+G_n,0)\) for
\(n=0,1,\dots \), and the transforms
$$\begin{aligned} R_z(r,s) := \sum _{n=0}^{\infty } r^n \mathbb E[\mathrm{e}^{-sR_n}|R_0=z], ~~~U_z(r,s) := \sum _{n=0}^{\infty } r^n \mathbb E[\mathrm{e}^{-s U_n}|R_0=z]. \end{aligned}$$
(45)
The first transform is analytic for
\(\mathrm{Re} ~s \ge 0\) and the second one for
\(\mathrm{Re} ~s \le 0\). Observing that
\(1 + \mathrm{e}^x = \mathrm{e}^{\mathrm{max}(x,0)} + \mathrm{e}^{\mathrm{min}(x,0)}\), we have for
\(n=0,1,\dots \):
$$\begin{aligned} \mathrm{e}^{-sR_{n+1}} = \mathrm{e}^{-s(A_nR_n+G_n)} + 1 - \mathrm{e}^{-sU_n}. \end{aligned}$$
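This step is a pointwise identity (apply the observation with \(x = -s(A_nR_n+G_n)\)); a quick numerical check, with \(x\) playing the role of \(A_nR_n+G_n\):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)          # plays the role of A_n R_n + G_n
s = 0.7                            # the identity holds for any s

# e^{-s max(x,0)} = e^{-s x} + 1 - e^{-s min(x,0)}, pathwise
lhs = np.exp(-s * np.maximum(x, 0.0))
rhs = np.exp(-s * x) + 1.0 - np.exp(-s * np.minimum(x, 0.0))
print(np.max(np.abs(lhs - rhs)))
```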
Taking expectations and realizing that
\(R_n\),
\(A_n\) and
\(G_n\) are independent, we can write
$$\begin{aligned} \mathbb E\big [\mathrm{e}^{-sR_{n+1}}|R_0=z\big ] = \mathbb E\big [\mathrm{e}^{-sG_n}\big ] \sum _{i=1}^M p_i \mathbb E\big [\mathrm{e}^{-sa_i R_n}|R_0=z\big ] + 1 - \mathbb E\big [\mathrm{e}^{-sU_n}|R_0=z\big ] , \end{aligned}$$
(46)
hence, after multiplying both sides by \(r^{n+1}\) and summing over \(n\), we obtain for
\(\mathrm{Re} ~s =0\):
$$\begin{aligned} R_z(r,s) - \mathrm{e}^{-sz} - r \phi _Y(s) \frac{\lambda }{\lambda -s} \sum _{i=1}^M p_i R_z(r,a_is) = \frac{r}{1-r} - r U_z(r,s) . \end{aligned}$$
(47)
Restricting ourselves at this stage to
\(\mathrm{Re} ~s =0\) ensures that all terms are properly defined. Multiplying both sides by
\(\lambda -s\), one obtains:
$$\begin{aligned} (\lambda - s)R_z(r,s) - r \lambda \phi _Y(s) \sum _{i=1}^M p_i R_z(r,a_is) = (\lambda -s) [\mathrm{e}^{-sz} + \frac{r}{1-r} - r U_z(r,s)]. \end{aligned}$$
(48)
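Relation (47) can be checked against simulation: estimate \(R_z(r,\cdot)\) and \(U_z(r,s)\) by Monte Carlo at a point \(s\) on the imaginary axis and compare both sides. The exponential choice of \(Y\) and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

lam, mu = 1.0, 2.0                     # B ~ exp(lam); Y ~ exp(mu), illustrative
a, p = np.array([0.4, 0.7]), np.array([0.6, 0.4])
phi_Y = lambda s: mu / (mu + s)        # LST of Y in the exponential example

z, r, s = 1.0, 0.5, 2.0j               # Re s = 0, as required for (47)
N, n_max = 200_000, 60                 # r^{61} is negligible, so truncate there

# Accumulate R_z(r, s') for s' in {s, a_1 s, a_2 s}, and U_z(r, s)
R = np.full(N, z)
Rz = np.zeros(3, dtype=complex)
Uz = 0.0 + 0.0j
for n in range(n_max + 1):
    for k, sk in enumerate([s, a[0]*s, a[1]*s]):
        Rz[k] += r**n * np.exp(-sk * R).mean()
    A = rng.choice(a, size=N, p=p)
    X = A*R + rng.exponential(1/mu, N) - rng.exponential(1/lam, N)
    Uz += r**n * np.exp(-s * np.minimum(X, 0.0)).mean()   # U_n = min(X, 0)
    R = np.maximum(X, 0.0)                                # R_{n+1} = max(X, 0)

lhs = Rz[0] - np.exp(-s*z) - r*phi_Y(s)*lam/(lam - s)*(p[0]*Rz[1] + p[1]*Rz[2])
rhs = r/(1 - r) - r*Uz
print(abs(lhs - rhs))
```

With \(2\cdot 10^5\) paths, the two sides agree up to Monte Carlo noise of order \(10^{-2}\).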
Because all
\(a_i < 1\), the steady-state distribution of the
\(\{R_n, n=0,1,\dots \}\) process always exists [13]. We shall restrict ourselves to the steady-state case. (The transient case can in principle be studied in a similar way. Here it should be observed that, with fixed point \(a=0\), we have \(K(0) = 1 + \frac{r}{1-r} - rU_z(r,0) = 1 \ne 0\). We are now in Case 2 of Sect. 2.2;
\(|r|<1\) will guarantee the convergence of the corresponding
\(p_1^{i_1} \ldots p_M^{i_M} L_{i_1,\ldots ,i_M}(z)\) as
\(i_1+\cdots +i_M=n \rightarrow \infty \).)
Let \(R(s) = \mathbb E[\mathrm{e}^{-sR}]\), with \(R\) a random variable having the steady-state distribution of the \(R_n\) process; \(U(s) = \mathbb E[\mathrm{e}^{-sU}]\) is defined similarly. After multiplying both sides of (48) by
\(1-r\) and letting
\(r \rightarrow 1\), an Abelian theorem for generating functions implies that
$$\begin{aligned} (\lambda -s) R(s) - \lambda \phi _Y(s) \sum _{i=1}^M p_i R(a_is) = (\lambda -s) [1 - U(s)] . \end{aligned}$$
(49)
Now make the following observations:
- The left-hand side of (49) is analytic in \(\mathrm{Re}~s > 0\) and continuous in \(\mathrm{Re}~s \ge 0\).
- The right-hand side of (49) is analytic in \(\mathrm{Re}~s < 0\) and continuous in \(\mathrm{Re}~s \le 0\).
- \(R(s)\) is bounded by 1 for \(\mathrm{Re}~s \ge 0\); hence, the left-hand side of (49) behaves at most as a linear function of \(s\) for large \(s\), \(\mathrm{Re}~s > 0\).
- \(U(s)\) is bounded by 1 for \(\mathrm{Re}~s \le 0\); hence, the right-hand side of (49) behaves at most as a linear function of \(s\) for large \(s\), \(\mathrm{Re}~s < 0\).
Liouville’s theorem [19] now implies that both sides of (49), in their respective half-planes, are equal to the same linear function of \(s\). We focus on the left-hand side of (49):
$$\begin{aligned} (\lambda -s) R(s) - \lambda \phi _Y(s) \sum _{i=1}^M p_i R(a_is) = C_0 + C_1 s, ~~~\mathrm{Re} ~s \ge 0. \end{aligned}$$
(50)
Substituting
\(s=0\), we see that
\(C_0=0\). Taking
\(s \rightarrow \infty \), we see that
\(C_1 = - \mathbb P(R=0)\), but that does not yet determine
\(C_1\). Taking
\(s=\lambda \), we observe that
$$\begin{aligned} C_1 = - \phi _Y(\lambda ) \sum _{i=1}^M p_i R(a_i \lambda ). \end{aligned}$$
(51)
In fact, it is not hard to interpret this relation (replacing
\(C_1\) by
\(-\mathbb P(R=0)\)), using (43) and the fact that
\(\phi _Y(\lambda ) = \mathbb P(B > Y)\) and
\(R(a_i \lambda ) = \mathbb P( B > a_i R)\).
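Indeed, \(R_{n+1}=0\) exactly when \(B_n > Y_n + A_n R_n\), which in stationarity gives \(\mathbb P(R=0) = \phi_Y(\lambda)\sum_i p_i R(a_i\lambda)\). This can be checked by simulating the chain to (approximate) stationarity; as before, the exponential \(Y\) and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

lam, mu = 1.0, 2.0                     # B ~ exp(lam); Y ~ exp(mu), illustrative
a, p = np.array([0.4, 0.7]), np.array([0.6, 0.4])
phi_Y = lambda s: mu / (mu + s)

# Long pre-run over many parallel paths, so that R is close to stationary
N = 200_000
R = np.zeros(N)
for _ in range(300):
    A = rng.choice(a, size=N, p=p)
    R = np.maximum(A*R + rng.exponential(1/mu, N) - rng.exponential(1/lam, N), 0.0)

atom = (R == 0.0).mean()                                     # P(R = 0)
pred = phi_Y(lam) * sum(p[i]*np.exp(-lam*a[i]*R).mean() for i in range(2))
print("P(R=0):", atom, "  phi_Y(lam) * sum_i p_i R(a_i lam):", pred)
```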
We can rewrite (50) as follows:
$$\begin{aligned} R(s) = H(s) \sum _{i=1}^M p_i R(a_is) + K(s), \end{aligned}$$
(52)
where
$$\begin{aligned} H(s) = \phi _Y(s) \frac{\lambda }{\lambda -s}, ~~~K(s) = C_1 \frac{s}{\lambda -s} . \end{aligned}$$
(53)
Equation (52) has exactly the same form as (2). Observe that the fixed point of the iterates \(\alpha _i(z) = a_iz\) is \(z=0\) and that \(K(0)=0\). Hence, Theorem 2 applies. It follows that
$$\begin{aligned} R(s)= & {} \sum _{k=0}^{\infty } \sum _{i_1+ \dots +i_M=k} p_1^{i_1} \dots p_M^{i_M} L_{i_1,\dots ,i_M}(s) K(a_1^{i_1} \dots a_M^{i_M}s) \nonumber \\&\quad + \lim _{n \rightarrow \infty } \sum _{i_1+\dots +i_M=n+1} p_1^{i_1} \dots p_M^{i_M} L_{i_1,\dots ,i_M}(s). \end{aligned}$$
(54)
We finally need to determine the constant \(C_1 = - \mathbb P(R=0)\), which features in \(K(s)\). This is done by taking \(s=a_i \lambda \), for \(i=1,\dots ,M\), in (54), adding the resulting \(M\) expressions, and using (51):
$$\begin{aligned} C_1= & {} - C_1 \phi _Y(\lambda ) \sum _{j=1}^M p_j \sum _{k=0}^{\infty } \sum _{i_1+ \dots +i_M=k} p_1^{i_1} \dots p_M^{i_M} L_{i_1,\dots ,i_M}(a_j \lambda ) \frac{a_1^{i_1} \dots a_M^{i_M} a_j}{1 - a_1^{i_1} \dots a_M^{i_M}a_j} \nonumber \\&\quad - \phi _Y(\lambda ) \sum _{j=1}^M p_j ~ \lim _{n \rightarrow \infty } \sum _{i_1+\dots +i_M=n+1} p_1^{i_1} \dots p_M^{i_M} L_{i_1,\dots ,i_M}(a_j \lambda ), \end{aligned}$$
(55)
and hence,
$$\begin{aligned} C_1 = - \frac{ \phi _Y(\lambda ) \sum _{j=1}^M p_j ~ \lim _{n \rightarrow \infty } \sum _{i_1+\dots +i_M=n+1} p_1^{i_1} \dots p_M^{i_M} L_{i_1,\dots ,i_M}(a_j \lambda ) }{ 1 + \phi _Y(\lambda ) \sum _{j=1}^M p_j \sum _{k=0}^{\infty } \sum _{i_1+ \dots +i_M=k} p_1^{i_1} \dots p_M^{i_M} L_{i_1,\dots ,i_M}(a_j \lambda ) \frac{a_1^{i_1} \dots a_M^{i_M} a_j}{1-a_1^{i_1} \dots a_M^{i_M} a_j} } . \end{aligned}$$
(56)
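Numerically, one need not assemble the multi-index series (54) and (56) term by term: truncating the iteration of (52) at a finite depth (the arguments \(a_1^{i_1}\dots a_M^{i_M}s\) tend to 0, where \(R(0)=1\)), noting that the result is affine in \(C_1\), and then imposing (51) determines \(C_1\) and \(R(s)\). A sketch for \(M=2\), with an exponential \(Y\) and illustrative parameters:

```python
from functools import lru_cache

lam, mu = 1.0, 2.0                     # B ~ exp(lam); Y ~ exp(mu), illustrative
a, p = (0.4, 0.7), (0.6, 0.4)

H = lambda s: (mu/(mu + s)) * lam/(lam - s)   # H(s) = phi_Y(s) * lam/(lam - s)
K1 = lambda s: s/(lam - s)                    # K(s) = C1 * K1(s), cf. (53)

DEPTH = 60                             # a_i^DEPTH is negligible, so R ~ 1 there

def AB(s0):
    """Coefficients (A, B) with R(s0) = A + C1*B, truncating (52) at DEPTH."""
    @lru_cache(maxsize=None)
    def rec(i, j):                     # argument a_1^i a_2^j s0, depth i + j
        if i + j == DEPTH:
            return 1.0, 0.0            # R(s) -> R(0) = 1 as the argument -> 0
        s = s0 * a[0]**i * a[1]**j
        A0, B0 = rec(i+1, j)
        A1, B1 = rec(i, j+1)
        h = H(s)
        return h*(p[0]*A0 + p[1]*A1), K1(s) + h*(p[0]*B0 + p[1]*B1)
    return rec(0, 0)

# Determine C1 from (51): C1 = -phi_Y(lam) * sum_i p_i R(a_i lam),
# where each R(a_i lam) = A_i + C1*B_i is affine in C1.
phiY_lam = mu/(mu + lam)
A1_, B1_ = AB(a[0]*lam)
A2_, B2_ = AB(a[1]*lam)
sumA = p[0]*A1_ + p[1]*A2_
sumB = p[0]*B1_ + p[1]*B2_
C1 = -phiY_lam*sumA / (1.0 + phiY_lam*sumB)

def R_of(s):                           # keep s < lam: avoid the pole of H at lam
    A_, B_ = AB(s)
    return A_ + C1*B_

print("C1 =", C1, "  R(0.5) =", R_of(0.5), "  R(0.8) =", R_of(0.8))
```

The evaluation points are kept below \(\lambda\), steering clear of the singularity of \(H\) at \(s=\lambda\); by construction \(-C_1\) approximates \(\mathbb P(R=0)\) and \(R(s)\) is a bona fide LST, decreasing from \(R(0)=1\).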
Just as in [8], the removable singularity \(s=\lambda \) requires some extra care, but poses no real problems.