Note that the processes \(S_{n}\) vanish on the axes, and that the tightness of the laws of the family \(S_{n}\) follows from Lemma 3.2. Hence, in order to prove Theorem 3.1, by the criterion of Bickel and Wichura [2] it suffices to show that the family of processes \(S_{n}(t,s)\) converges, as \(n\) tends to infinity, to the subfractional Brownian sheet in the sense of finite-dimensional distributions.
Let \(\alpha_{1},\alpha_{2},\ldots,\alpha_{d}\in\mathbb{R}\) and \((t_{1},s_{1}),\ldots,(t_{d},s_{d})\in[0,T]\times[0,S]\). By the Cramér-Wold device, it suffices to show that
$$X_{n}:=\sum^{d}_{m=1} \alpha_{m}S_{n}(t_{m},s_{m}) $$
converges in distribution, as \(n\) tends to infinity, to a normal random variable with zero mean and variance
$$E \Biggl(\sum^{d}_{m=1} \alpha_{m} S^{\alpha,\beta}(t_{m},s_{m}) \Biggr)^{2}. $$
Indeed, the zero mean is trivial. Let us write
\(X_{n}\) as
$$\begin{aligned} X_{n} =&\sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}n^{2} \xi^{(n)}_{i,j} \sum^{d}_{m=1} \alpha_{m}\int^{\frac{i}{n}}_{\frac{i-1}{n}} K_{\alpha}\biggl(\frac {[nt_{m}]}{n},x_{1}\biggr)\,dx_{1} \int ^{\frac{j}{n}}_{\frac{j-1}{n}}K_{\beta}\biggl( \frac{[ns_{m}]}{n},x_{2}\biggr)\,dx_{2} \\ :=&\sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}X^{(n)}_{i,j}. \end{aligned}$$
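To make the bookkeeping of this double sum concrete, here is a minimal numerical sketch in Python that assembles \(X_{n}\) exactly as in the display above. The kernels `K_a`, `K_b` and the noise array `xi` are hypothetical placeholders (the actual subfractional kernels \(K_{\alpha}\), \(K_{\beta}\) and the noises \(\xi^{(n)}_{i,j}\) are those defined earlier in the paper), so this illustrates the structure of the sum only.

```python
import numpy as np

def cell_integral(K, t, i, n, sub=20):
    # Midpoint rule for \int_{(i-1)/n}^{i/n} K([nt]/n, x) dx.
    x = (i - 1 + (np.arange(sub) + 0.5) / sub) / n
    return np.mean(K(np.floor(n * t) / n, x)) / n

def X_n(n, xi, alphas, points, K_a, K_b):
    # xi: ([nT], [nS]) array of noises xi^{(n)}_{i,j};
    # points: pairs (t_m, s_m); alphas: the coefficients alpha_m.
    total = 0.0
    for i in range(1, xi.shape[0] + 1):
        for j in range(1, xi.shape[1] + 1):
            w = sum(a * cell_integral(K_a, t, i, n) * cell_integral(K_b, s, j, n)
                    for a, (t, s) in zip(alphas, points))
            total += n**2 * xi[i - 1, j - 1] * w
    return total

# Placeholder kernel, NOT the subfractional one: any K(t, x) with
# K(t, x) = 0 for x >= t reproduces the structure of the sum.
K_a = K_b = lambda t, x: np.sqrt(np.maximum(t - x, 0.0))
rng = np.random.default_rng(0)
n = 50
xi = rng.standard_normal((n, n)) / n  # toy noises, not the paper's xi
print(X_n(n, xi, [1.0, -0.5], [(0.6, 0.4), (0.9, 0.8)], K_a, K_b))
```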
Then
$$\begin{aligned} \sum^{[nT]}_{i=1} \sum^{[nS]}_{j=1}{\bigl(X^{(n)}_{i,j} \bigr)}^{2} =&\sum^{d}_{m,h=1} \alpha_{m}\alpha_{h}n^{4}\sum ^{[nT]}_{i=1}\sum^{[nS]}_{j=1} \int^{\frac{i}{n}}_{\frac{i-1}{n}}\int^{\frac{j}{n}}_{\frac{j-1}{n}} K_{\alpha}\biggl(\frac{[nt_{m}]}{n},x_{1}\biggr)K_{\beta}\biggl(\frac{[ns_{m}]}{n},x_{2}\biggr)\,dx_{2}\,dx_{1} \\ &{} \times \int^{\frac{i}{n}}_{\frac{i-1}{n}}\int^{\frac{j}{n}}_{\frac{j-1}{n}} K_{\alpha}\biggl(\frac{[nt_{h}]}{n},x_{1}\biggr)K_{\beta}\biggl(\frac{[ns_{h}]}{n},x_{2}\biggr)\,dx_{2}\,dx_{1} \bigl(\xi ^{(n)}_{i,j}\bigr)^{2}. \end{aligned}$$
By Lemma 3.3, and since \(K_{\alpha}(t,s)=0\) for \(s\geq t\), the above sum converges to
$$\begin{aligned}& \sum^{d}_{m,h=1} \alpha_{m}\alpha_{h}\int^{T}_{0}K_{\alpha}(t_{m},x_{1})K_{\alpha}(t_{h},x_{1})\,dx_{1}\int^{S}_{0}K_{\beta}(s_{m},x_{2})K_{\beta}(s_{h},x_{2})\,dx_{2} \\& \quad=E \Biggl(\sum^{d}_{m=1} \alpha_{m}S^{\alpha,\beta}(t_{m},s_{m}) \Biggr)^{2}. \end{aligned}$$
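The mechanism behind this limit is a Riemann-sum identity: for square-integrable \(f,g\) on \([0,T]\), \(n\sum_{i}(\int_{c_{i}}f)(\int_{c_{i}}g)\rightarrow\int^{T}_{0}fg\), where \(c_{i}=[\frac{i-1}{n},\frac{i}{n}]\). The sketch below checks this deterministic core numerically, with placeholder integrands standing in for \(K_{\alpha}(t_{m},\cdot)\) and \(K_{\alpha}(t_{h},\cdot)\); it illustrates the mechanism, it is not a proof of Lemma 3.3.

```python
import numpy as np

def riemann_product(f, g, T, n, sub=50):
    # n * sum_i (\int_{c_i} f)(\int_{c_i} g) over the cells c_i = [(i-1)/n, i/n].
    total = 0.0
    for i in range(1, int(n * T) + 1):
        x = (i - 1 + (np.arange(sub) + 0.5) / sub) / n
        total += n * (np.mean(f(x)) / n) * (np.mean(g(x)) / n)
    return total

# Placeholder integrands: square integrable, vanishing past 0.7 resp. 0.9,
# mimicking kernels evaluated at two different time points t_m, t_h.
f = lambda x: np.sqrt(np.maximum(0.7 - x, 0.0))
g = lambda x: np.sqrt(np.maximum(0.9 - x, 0.0))
target = riemann_product(f, g, 1.0, 4000)  # fine-grid proxy for \int_0^1 f g
for n in (10, 100, 1000):
    print(n, riemann_product(f, g, 1.0, n), "target ~", target)
```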
Therefore, to complete the proof it remains to verify that the following Lindeberg condition holds: for any \(\varepsilon>0\),
$$\sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}E\bigl[\bigl(X^{(n)}_{i,j} \bigr)^{2} I_{(|X^{(n)}_{i,j}|>\varepsilon)}|\mathcal{G}^{n}_{i-1,j-1} \bigr]\stackrel{\rm P}{\rightarrow}0. $$
Consider the set
$$\bigl\{ \bigl|X^{(n)}_{i,j}\bigr|>\varepsilon\bigr\} =\bigl\{ \bigl(X^{(n)}_{i,j}\bigr)^{2}>\varepsilon^{2} \bigr\} . $$
We obtain an upper bound for \((X^{(n)}_{i,j})^{2}\) by noticing that \(K_{H}(t,u)\) is increasing in \(t\) and decreasing in \(u\), and by applying the Cauchy-Schwarz inequality on each cell:
$$\begin{aligned} \bigl(X^{(n)}_{i,j}\bigr)^{2} =& \Biggl(n^{2}\xi^{(n)}_{i,j} \sum ^{d}_{m=1}\alpha_{m}\int ^{\frac{i}{n}}_{\frac{i-1}{n}} \int^{\frac{j}{n}}_{\frac{j-1}{n}} K_{\alpha}\biggl(\frac{[nt_{m}]}{n},x_{1}\biggr)K_{\beta}\biggl(\frac{[ns_{m}]}{n},x_{2}\biggr)\,dx_{1}\,dx_{2} \Biggr)^{2} \\ \leq& n^{4}\bigl(\xi^{(n)}_{i,j}\bigr)^{2} \Biggl(\sum^{d}_{m=1}|\alpha_{m}| \Biggr)^{2} \biggl[\int^{\frac{i}{n}}_{\frac{i-1}{n}}K_{\alpha}(T,x_{1})\,dx_{1} \biggr]^{2} \cdot \biggl[\int^{\frac{j}{n}}_{\frac{j-1}{n}}K_{\beta}(S,x_{2})\,dx_{2} \biggr]^{2} \\ \leq& n^{2}\bigl(\xi^{(n)}_{i,j}\bigr)^{2} A\int^{\frac{i}{n}}_{\frac{i-1}{n}} K^{2}_{\alpha}(T,x_{1})\,dx_{1} \cdot\int^{\frac{j}{n}}_{\frac{j-1}{n}}K^{2}_{\beta}(S,x_{2})\,dx_{2} \\ \leq& n^{2}\bigl(\xi^{(n)}_{i,j}\bigr)^{2}A \delta^{n}, \end{aligned}$$
where \(A:=(\sum^{d}_{m=1}|\alpha_{m}|)^{2}\), and
$$\delta^{n}:=\int^{\frac{1}{n}}_{0}K^{2}_{\alpha}(T,x_{1})\,dx_{1} \cdot\int^{\frac{1}{n}}_{0}K^{2}_{\beta}(S,x_{2})\,dx_{2}. $$
Note that \(K_{\alpha}(T,\cdot)\) and \(K_{\beta}(S,\cdot)\) are square integrable, so \(\delta^{n}\rightarrow0\) as \(n\rightarrow\infty\). Thus, we get
$$ \bigl\{ \bigl|X^{(n)}_{i,j}\bigr|>\varepsilon\bigr\} \subset \bigl\{ n^{2}\bigl(\xi^{(n)}_{i,j}\bigr)^{2}A \delta ^{n}>\varepsilon^{2}\bigr\} . \tag{3.3} $$
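To see the decay of \(\delta^{n}\) that drives (3.3), one can compute \(\delta^{n}\) for kernels with the typical integrable singularity at \(x=0\). The exponents below are hypothetical stand-ins (the precise behaviour of \(K_{\alpha}(T,\cdot)\) and \(K_{\beta}(S,\cdot)\) near zero depends on the kernel representation used in the paper); only square integrability near \(0\) matters for the argument.

```python
from scipy.integrate import quad

# Stand-in squared kernels x^(1 - 2H): integrable at 0 for H in (1/2, 1).
# Exactly, \int_0^{1/n} x^(1-2H) dx = n^(2H-2) / (2 - 2H) -> 0 as n -> oo.
H_a, H_b = 0.7, 0.6                      # hypothetical Hurst-type indices
Ka2 = lambda x: x ** (1.0 - 2.0 * H_a)   # plays the role of K_alpha(T, x)^2
Kb2 = lambda x: x ** (1.0 - 2.0 * H_b)   # plays the role of K_beta(S, x)^2

for n in (10, 100, 1000):
    delta_n = quad(Ka2, 0.0, 1.0 / n)[0] * quad(Kb2, 0.0, 1.0 / n)[0]
    print(n, delta_n)   # decays like n^(2H_a + 2H_b - 4) -> 0
```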
Using the inclusion (3.3) and the Cauchy-Schwarz inequality we get, a.s.,
$$\begin{aligned} E \bigl[\bigl(X^{(n)}_{i,j}\bigr)^{2}I_{(|X^{(n)}_{i,j}|>\varepsilon)}| \mathcal {G}^{n}_{i-1,j-1} \bigr] \leq& E \bigl[n^{2}\bigl(\xi^{(n)}_{i,j} \bigr)^{2}A\delta^{n}I_{(n^{2}(\xi ^{(n)}_{i,j})^{2}A\delta^{n}>\varepsilon^{2})}|\mathcal{G}^{n}_{i-1,j-1} \bigr] \\ \leq& CA\delta^{n}E \bigl[I_{(n^{2}(\xi^{(n)}_{i,j})^{2}A\delta^{n}>\varepsilon ^{2})}|\mathcal{G}^{n}_{i-1,j-1} \bigr]. \end{aligned}$$
Hence, for some constant \(K>0\),
$$\begin{aligned} \sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}E\bigl[\bigl(X^{(n)}_{i,j} \bigr)^{2} I_{(|X^{(n)}_{i,j}|>\varepsilon)}|\mathcal{G}^{n}_{i-1,j-1} \bigr] \leq&\sum^{[nT]}_{i=1}\sum ^{[nS]}_{j=1}CA\delta^{n} E \bigl[I_{(n^{2}(\xi^{(n)}_{i,j})^{2}A\delta^{n}>\varepsilon^{2})}|\mathcal {G}^{n}_{i-1,j-1} \bigr] \quad \mbox{a.s.} \\ \leq& CA\delta^{n}\sum^{[nT]}_{i=1} \sum^{[nS]}_{j=1}E [I_{(K^{2}A\delta ^{n}>\varepsilon^{2})} ] \rightarrow0,\quad n\rightarrow\infty, \end{aligned}$$
because \(\delta^{n}\rightarrow0\) implies that \(I_{(K^{2}A\delta^{n}>\varepsilon^{2})}=0\) for all sufficiently large \(n\). This completes the proof. □