1 Introduction
The concept of complete convergence was first introduced by Hsu and Robbins [3] to prove that the arithmetic mean of independent and identically distributed (i.i.d.) random variables converges completely to the expectation of the random variables. Later on, Baum and Katz [4] generalized and extended this fundamental theorem as follows.
Since the independence assumption is not realistic in many statistical applications, this result has been extended to many classes of dependent random variables. A classical extension of independence is negative association (NA), which was introduced by Joag-Dev and Proschan [5] as follows.
There are many results for NA random variables; we refer the reader to Shao [6], Kuczmaszewska [7], Baek et al. [8], and Kuczmaszewska and Lagodowski [9].
Let H be a real separable Hilbert space with norm \(\|\cdot\|\) generated by an inner product \(\langle\cdot,\cdot\rangle\). For an H-valued random vector X, denote \(X^{(j)}=\langle X,e^{(j)}\rangle\), where \(\{e^{(j)},j\geq1\}\) is an orthonormal basis in H. Ko et al. [10] introduced the following concept of an H-valued NA sequence.
Ko et al. [10] and Thanh [11] obtained almost sure convergence results for NA random vectors in Hilbert space. Miao [12] established the Hájek–Rényi inequality for H-valued NA random vectors.
Huan et al. [1] introduced the following concept of coordinatewise negative association (CNA) for random vectors in Hilbert space, which is more general than that of Definition 1.2.
Obviously, if a sequence of random vectors in Hilbert space is NA, then it is CNA. However, the converse is generally not true; see Example 1.4 of Huan et al. [1].
Huan et al. [1] extended Theorem A from the independent case to CNA random vectors in Hilbert space. Huan [13] extended this complete convergence result for H-valued CNA random vectors to the case \(1< r<2\) and \(\alpha r=1\). However, the interesting case \(r=1\), \(\alpha r=1\) was not considered in these papers. Recently, Ko [2] extended the results of Huan et al. [1] from complete convergence to complete moment convergence as follows. For more details on complete moment convergence, one can refer to Ko [2] and the references therein.
However, there are some mistakes in the proof of the result in the case \(r=1\). Specifically, the bounds \(\int _{1}^{u}y^{r-2}\,dy\leq Cu^{r-1}\) in Eq. (2.7) and \(\sum_{n=1}^{m}n^{\alpha r-1-\alpha}\leq Cm^{\alpha r-\alpha}\) in Eq. (2.9) of Ko [2] fail when \(r=1\); the same problem also occurs in the proof of \(I_{222}\) (see the proof of Lemma 2.5 in Ko [2]). Moreover, the interesting case \(\alpha r=1\) was not considered in that paper.
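To see why these two bounds break down, substitute \(r=1\): then \(r-1=0\), \(\alpha r-1-\alpha=-1\), and \(\alpha r-\alpha=0\), so both quantities grow logarithmically while the claimed upper bounds reduce to constants:
\[
\int_{1}^{u}y^{-1}\,dy=\log u\not\leq Cu^{0}=C\quad\text{as } u\to\infty,
\qquad
\sum_{n=1}^{m}n^{-1}\sim\log m\not\leq Cm^{0}=C\quad\text{as } m\to\infty.
\]
Thus the logarithmic factors cannot be absorbed into the constant C, and the arguments in Eqs. (2.7) and (2.9) of Ko [2] do not go through at \(r=1\).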
In this paper, results on complete convergence and complete moment convergence are established for CNA random vectors in Hilbert spaces. The results concern weighted sums, which are more general than partial sums. The interesting case \(\alpha r=1\) is also considered. Moreover, the complete moment convergence results cover the exponent range \(0< q<2\), while in Theorem B only the case \(q=1\) was obtained.
Recall that if \(n^{-1}\sum_{i=1}^{n}\mathbb{P}(|X_{i}^{(j)}|>x)\leq C\mathbb{P}(|X^{(j)}|>x)\) for all \(j\geq1\), \(n\geq1\), and \(x\geq0\), then the sequence \(\{X_{n},n\geq1\}\) is said to be coordinatewise weakly upper bounded by X, where \(X_{n}^{(j)}=\langle X_{n},e^{(j)}\rangle\) and \(X^{(j)}=\langle X,e^{(j)}\rangle\). Throughout the paper, C denotes a positive constant whose value may vary from place to place. Let \(\log{x}=\ln\max (x,e)\) and let \(I(\cdot)\) denote the indicator function.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.