
Open Access 01.12.2015 | Research

The mean consistency of wavelet density estimators

Authors: Zijuan Geng, Jinru Wang

Published in: Journal of Inequalities and Applications | Issue 1/2015


Abstract

Wavelet estimation has made great progress when the unknown density function belongs to a certain Besov space. However, in many practical applications one does not know whether the density function is smooth or not. It therefore makes sense to consider the mean \(L_{p}\)-consistency of wavelet estimators for \(f\in L_{p}\) (\(1\leq p\leq\infty\)). In this paper, the authors construct wavelet estimators and analyze their \(L_{p}(\mathbb{R})\) performance. Under mild conditions on the family of wavelets, the estimators are shown to be \(L_{p}\) (\(1\leq p\leq\infty\))-consistent for both the noiseless and the additive noise models.

1 Introduction

Wavelet analysis plays an important role in both pure and applied mathematics, for example in signal processing, image compression, and numerical solutions [1–3]. One of its important applications is estimating an unknown density function from random samples. The optimal convergence rate and consistency are two basic asymptotic criteria for the quality of an estimator. Strong results on wavelet estimation in the \(L_{p}\) norm have been obtained by Donoho et al. [4] and others when the unknown density function belongs to a Besov space. However, in many practical applications we do not know whether the density function is smooth or not [5]. Therefore, it is natural to consider the mean consistency of wavelet estimators, which means that \(E\|f-\hat{f}_{n}\|_{p}\) (\(1\leq p\leq\infty\)) converges to zero as the sample size n tends to infinity.
In 2005, Chacón and Rodríguez-Casal [6] discussed the mean \(L_{1}\)-consistency of the wavelet estimator based on random samples without any noise. However, in practice the observed samples are contaminated by random noise. Devroye [7] proved the mean consistency of the kernel estimator in the \(L_{1}\) norm. Liu and Taylor [8] investigated the \(L_{\infty}\)-consistency of the kernel estimator. Ramírez and Vidakovic [9] proposed linear and nonlinear wavelet estimators and showed that they are \(L_{2}\)-consistent.
This paper studies the mean \(L_{p}\)-consistency of the wavelet estimator. In Section 2, we briefly describe the preliminaries on wavelet scaling functions and orthogonal projection kernels. In Section 3, for the classical model, the mean \(L_{p}\)-consistency is given, which generalizes Chacón’s theorem [6]. The last section deals with the \(L_{p}\)-consistency for the additive noise model.

2 Wavelet scaling function and orthonormal projection kernel

In this section, we shall recall some useful and well-known concepts and lemmas. As usual, \(L_{p}(\mathbb{R})\), \(p \geq1\), denotes the classical Lebesgue space on the real line ℝ.
Definition 2.1
(see [10])
A multi-resolution analysis (MRA) of \(L_{2}(\mathbb{R})\) is an increasing sequence of closed linear subspaces \(V_{j}\subset V_{j+1}\), \(j \in\mathbb{Z}\), called scaling spaces, satisfying:
(i)
\(\bigcap^{\infty}_{-\infty}V_{j}=\{0 \}\), \(\overline{\bigcup^{\infty}_{-\infty}V_{j}}=L_{2}(\mathbb{R})\);
 
(ii)
\(f(\cdot)\in V_{0}\) if and only if \(f(2^{j}\cdot)\in V_{j}\) for all \(j \in{ \mathbb{Z}}\);
 
(iii)
\(f(\cdot)\in V_{0}\) if and only if \(f(\cdot-k) \in V_{0}\) for all \(k \in{ \mathbb{Z}}\);
 
(iv)
there exists a function \(\varphi(\cdot) \in V_{0}\) such that \(\{\varphi(\cdot-k)\}\) is an orthonormal basis in \(V_{0}\). The function \(\varphi(\cdot)\) is called the scaling function.
 
It is easy to show that \(\{\varphi_{jk}(x),k\in\mathbb{Z}\}\) forms an orthonormal basis in \(V_{j}\), where \(\varphi_{jk}(x)=2^{j/2}\varphi(2^{j}x-k)\), \(j,k\in\mathbb{Z}\).
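For example (the classical Haar case, recalled here only as an illustration), the Haar scaling function \(\varphi=I_{[0,1)}\) generates an MRA with
$$V_{j}=\bigl\{f\in L_{2}(\mathbb{R}): f \mbox{ is constant on } \bigl[k2^{-j},(k+1)2^{-j}\bigr) \mbox{ for every } k\in\mathbb{Z}\bigr\}, $$
and \(\{\varphi_{jk},k\in\mathbb{Z}\}\) is then the orthonormal basis of normalized indicator functions of the dyadic intervals of length \(2^{-j}\).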
Condition S
There exists a bounded nonincreasing function \(\Phi(\cdot)\) such that \(\int\Phi (|u|)\,du<\infty\), and \(|\varphi(u)|\leq\Phi(|u|)\) (a.e.).
Condition S is not very restrictive. For example, the Meyer scaling functions satisfy that condition; compactly supported and bounded scaling functions do as well. Furthermore, Condition S implies that \(\varphi\in L_{1}(\mathbb{R})\cap L_{\infty}(\mathbb{R})\) and \(\operatorname{ess} \sup\sum_{k}|\varphi(x-k)|<\infty\). We denote \(\theta_{\varphi}(x)=\sum_{k}|\varphi(x-k)|\).
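For instance (a standard example, not specific to this paper), the Haar function \(\varphi=I_{[0,1)}\) satisfies Condition S with \(\Phi=I_{[0,1]}\), and in that case \(\theta_{\varphi}(x)=\sum_{k}I_{[0,1)}(x-k)\equiv1\).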
The following lemmas, which will be used later on, are taken from [2].
Lemma 2.2
If the scaling function φ satisfies \(\operatorname{ess} \sup\sum_{k\in\mathbb{Z}}|\varphi (x-k)|<\infty\), then for any sequence \(\{\lambda_{k}\}_{k \in\mathbb {Z}}\in l_{p}\), one has \(C_{1}\|\lambda\|_{l_{p}}2^{(\frac{j}{2}-\frac{j}{p})}\leq\|\sum_{k}\lambda_{k}\varphi_{j,k}\|_{p}\leq C_{2}\|\lambda\| _{l_{p}}2^{(\frac{j}{2}-\frac{j}{p})}\), where \(C_{1}=(\|\theta_{\varphi}\|^{\frac{1}{p}}_{\infty}\|\varphi \|^{\frac{1}{q}}_{1})^{-1}\), \(C_{2}=(\|\theta_{\varphi}\|^{\frac{1}{q}}_{\infty}\|\varphi\| ^{\frac{1}{p}}_{1})^{-1}\), \(1\leq p \leq\infty\), \(\frac{1}{p}+\frac {1}{q}=1\).
Definition 2.3
(see [2])
If the scaling function φ satisfies \(\operatorname{ess} \sup\sum_{k}|\varphi (x-k)|<\infty\), the kernel function
$$K(x,y)=\sum_{k}\varphi(x-k)\varphi(y-k) $$
is called the orthonormal projection kernel associated with φ.
For \(f\in L_{p}(\mathbb{R}) \) (\(1\leq p\leq\infty\)), if \(\operatorname{ess} \sup\sum_{k}|\varphi(x-k)|<\infty\), it is not hard to show that
$$\int K_{j}(x,y)f(y)\,dy=K_{j}f(x)=\sum _{k}\alpha_{jk}\varphi_{jk}(x), $$
where \(K_{j}(x,y)=2^{j}K(2^{j}x,2^{j}y)\), \(\alpha_{jk}=\int\varphi_{jk}(x)f(x)\,dx\).
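To make the kernel concrete (again in the Haar case, purely as an illustration): with \(\varphi=I_{[0,1)}\),
$$K(x,y)=\sum_{k}I_{[0,1)}(x-k)I_{[0,1)}(y-k)=I\bigl\{\lfloor x\rfloor=\lfloor y\rfloor\bigr\}, $$
so \(K_{j}f(x)=2^{j}\int_{k2^{-j}}^{(k+1)2^{-j}}f(y)\,dy\) whenever \(x\in[k2^{-j},(k+1)2^{-j})\); that is, \(K_{j}f\) replaces f by its average over each dyadic interval of length \(2^{-j}\), and Lemma 2.5 below states that these averages converge to f in \(L_{p}\).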
Lemma 2.4
If the scaling function φ satisfies Condition  S, then
(i)
\(\int K(x,y)\,dy=1\) (a.e.);
 
(ii)
\(|K(x,y)|\leq C_{1}\Phi (\frac{|x-y|}{C_{2}} )\) (a.e.), where \(C_{1}\), \(C_{2} \) are positive constants depending on Φ.
 
Let \(F(x)= C_{1}\Phi(\frac{|x|}{C_{2}})\); then \(F\in L_{1}(\mathbb{R})\cap L_{\infty}(\mathbb{R})\) and \(|K(x,y)|\leq F(x-y)\) (a.e.).
Lemma 2.5
If the scaling function φ satisfies Condition  S, then for \(f\in L_{p}(\mathbb{R})\), \(1\leq p<\infty\), one has
$$\lim_{j\rightarrow\infty}\|K_{j}f-f\|_{p}=0. $$
The above result is also true if \(f\in L_{\infty}(\mathbb{R})\) is uniformly continuous.
Lemma 2.6
(Rosenthal’s inequality)
Let \(X_{1}, \ldots, X_{n}\) be independent random variables such that \(E(X_{i})=0\) and \(|X_{i}|< M\). Then there exists a constant \(C(p)>0\) such that
$$\begin{aligned}& (\mathrm{i})\quad E \Biggl( \Biggl|\sum_{i=1}^{n} X_{i} \Biggr|^{p} \Biggr)\leq C(p) \Biggl({M^{p-2}\sum _{i=1}^{n} E\bigl(X_{i}^{2} \bigr)+ \Biggl(\sum_{i=1}^{n} E \bigl(X_{i}^{2}\bigr) \Biggr)^{\frac {p}{2}}} \Biggr), \quad p> 2, \\& (\mathrm{ii})\quad E \Biggl( \Biggl|\sum_{i=1}^{n} X_{i} \Biggr|^{p} \Biggr)\leq C(p) \Biggl(\sum _{i=1}^{n} E\bigl(X_{i}^{2} \bigr) \Biggr)^{\frac {p}{2}}, \quad 0< p\leq 2. \end{aligned}$$
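In the special case \(p=2\), part (ii) holds with \(C(2)=1\) and with equality: for independent centered random variables, \(E|\sum_{i=1}^{n}X_{i}|^{2}=\sum_{i=1}^{n}E(X_{i}^{2})\). This is the form behind the \(L_{2}\) bounds appearing in the proofs below.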

3 Mean consistency in the \(L_{p}\) norm

In this section, based on the random sample without noise, we shall construct the wavelet estimator and give its \(L_{p}\)-consistency.
Let \(X_{1}, X_{2}, \ldots, X_{n}\) be independent identically distributed (i.i.d.) random samples without noise and let φ be a compactly supported scaling function. The wavelet estimator is defined as follows:
$$ \hat{f}_{n}(x)=\sum_{k} \hat{\alpha}_{jk}\varphi_{jk}(x),\qquad \hat{\alpha}_{jk}=\frac{1}{n}\sum_{i=1}^{n} \varphi_{jk}(X_{i}). $$
(1)
Obviously, one gets \(E\hat{\alpha}_{jk}=\frac{1}{n}\sum_{i=1}^{n}\int\varphi_{jk}(x)f(x)\,dx=\alpha_{jk}\). On the other hand, one obtains
$$\begin{aligned} \hat{f}_{n}(x) =&\sum_{k} \hat{\alpha}_{jk}\varphi_{jk}(x) \\ =&\sum_{k} \Biggl(\frac{1}{n}\sum _{i=1}^{n}\varphi_{jk}(X_{i}) \Biggr)\varphi_{jk}(x) \\ =&\frac{1}{n}\sum_{i=1}^{n}\sum _{k}\varphi _{jk}(X_{i}) \varphi_{jk}(x) \\ =&\frac{1}{n}\sum_{i=1}^{n}K_{j}(x,X_{i}). \end{aligned}$$
(2)
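To make the construction concrete, the following is a minimal numerical sketch of estimator (1); it is not code from the paper, and all names in it are ours. It assumes the Haar scaling function \(\varphi=I_{[0,1)}\) (compactly supported and bounded, as Theorem 3.1 below requires) and the resolution choice \(2^{j}\sim n^{1/2}\) from that theorem; in the Haar case \(\hat{f}_{n}\) is simply a dyadic histogram.

```python
import numpy as np

def haar_wavelet_density(samples, x_grid, j=None):
    """Linear wavelet estimator (1) with the Haar scaling function
    phi = I_[0,1), for which phi_{jk}(x) = 2^{j/2} I_[k/2^j,(k+1)/2^j)(x)
    and f_hat is piecewise constant on dyadic bins of width 2^{-j}."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    if j is None:
        j = int(np.ceil(0.5 * np.log2(n)))   # resolution 2^j ~ n^{1/2} (Theorem 3.1)
    scale = 2.0 ** j
    # alpha_hat_{jk} = (1/n) sum_i phi_{jk}(X_i) = 2^{j/2} * (#samples in bin k) / n
    k_vals, counts = np.unique(np.floor(scale * samples).astype(np.int64),
                               return_counts=True)
    alpha_hat = dict(zip(k_vals.tolist(), np.sqrt(scale) * counts / n))
    # f_hat(x) = sum_k alpha_hat_{jk} phi_{jk}(x) = 2^{j/2} alpha_hat_{j, floor(2^j x)}
    bins = np.floor(scale * np.asarray(x_grid, dtype=float)).astype(np.int64)
    return np.sqrt(scale) * np.array([alpha_hat.get(int(k), 0.0) for k in bins])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=1000)                        # i.i.d. noiseless samples
    x = np.linspace(-4.0, 4.0, 801)
    f_hat = haar_wavelet_density(X, x)
    f_true = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # true N(0,1) density
    print("approx L1 error:", (x[1] - x[0]) * np.abs(f_hat - f_true).sum())
```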
Theorem 3.1
Let a scaling function \(\varphi(x)\) be compactly supported and bounded, \(\hat{f}_{n}(x)\) be the wavelet estimator defined in (1). If we take \(2^{j}\sim n^{\frac{1}{2}}\), then for any \(f\in L_{p}(\mathbb{R})\), \(1\leq p<\infty\), one has
$$ \lim_{n\rightarrow\infty}E\| f-\hat{f}_{n} \|_{p} =0. $$
(3)
Note
The notation \(A\lesssim B\) indicates that \(A \leqslant c B\) with a positive constant c, which is independent of A and B. If \(A\lesssim B\) and \(B\lesssim A\), we write \(A\sim B\).
Proof
Due to \((E\| f-\hat{f}_{n}\|_{p})^{p}\leq E\| f-\hat {f}_{n}\|_{p}^{p}\), one only needs to consider \(E\| f-\hat{f}_{n}\|_{p}^{p}\).
Firstly, thanks to the triangle inequality and a convexity inequality, one can decompose \(E\| f-\hat{f}_{n}\|_{p}^{p}\) into a bias term and a stochastic term. That is,
$$\begin{aligned} E\| f-\hat{f}_{n}\|_{p}^{p} =&E\| f-E\hat {f}_{n}+E\hat{f}_{n}-\hat{f}_{n} \|_{p}^{p} \\ \leq&E \bigl(\| f-E\hat{f}_{n}\|_{p}+\|E\hat{f}_{n}- \hat {f}_{n}\|_{p} \bigr)^{p} \\ \leq&2^{p-1} \bigl(\| f-E\hat{f}_{n}\|_{p}^{p}+E \|\hat{f}_{n}-E\hat {f}_{n}\|_{p}^{p} \bigr). \end{aligned}$$
(i) For the bias term \(\| f-E\hat{f}_{n}\|_{p}^{p}\), one has
$$\begin{aligned} E\hat{f}_{n}(x) =&E\frac{1}{n}\sum _{i=1}^{n}K_{j}(x,X_{i})=E K_{j}(x,X_{1}) \\ =&\int K_{j}(x,y)f(y)\,dy=K_{j} f(x). \end{aligned}$$
Since \(\varphi(x)\) satisfies Condition S and \(2^{j}\sim n^{\frac{1}{2}}\rightarrow\infty\), Lemma 2.4 and Lemma 2.5 give
$$ \lim_{n\rightarrow\infty}\| f-E\hat{f}_{n}\| _{p}=\lim _{n\rightarrow\infty}\|f-K_{j} f\|_{p}=0. $$
(ii) For the stochastic term \(E\|\hat{f}_{n}-E\hat {f}_{n}\|_{p}^{p}\), one can estimate it as follows:
$$\begin{aligned} E\|\hat{f}_{n}-E\hat{f}_{n}\|_{p}^{p} =&E\int|\hat {f}_{n}-E\hat{f}_{n}|^{p} \,dx \\ =&\int E|\hat{f}_{n}-E\hat{f}_{n}|^{p} \,dx \\ =&\int E \Biggl|\frac{1}{n}\sum_{i=1}^{n}K_{j}(x,X_{i})-E \frac {1}{n}\sum_{i=1}^{n}K_{j}(x,X_{i}) \Biggr|^{p} \,dx \\ =&\frac{1}{n^{p}}\int E \Biggl|\sum_{i=1}^{n} \bigl(K_{j}(x,X_{i})-E K_{j}(x,X_{i}) \bigr) \Biggr|^{p} \,dx \\ =&\frac{1}{n^{p}}\int E \Biggl|\sum_{i=1}^{n}Y_{i} \Biggr|^{p} \,dx, \end{aligned}$$
where \(Y_{i}=K_{j}(x,X_{i})-E K_{j}(x,X_{i})\). For fixed x, the \(\{Y_{i}\}\) are i.i.d. random variables with \(EY_{i}=0\). One obtains
$$\begin{aligned} |Y_{i}| =&\bigl|K_{j}(x,X_{i})-E K_{j}(x,X_{i})\bigr| \leq\bigl|K_{j}(x,X_{i})\bigr|+\bigl|E K_{j}(x,X_{i})\bigr| \\ \leq& \biggl|2^{j}\sum_{k}\varphi \bigl(2^{j}x-k\bigr)\varphi\bigl(2^{j}X_{i}-k\bigr) \biggr|+\int \biggl|2^{j}\sum_{k}\varphi \bigl(2^{j}x-k\bigr)\varphi\bigl(2^{j}y-k\bigr) \biggr|f(y)\,dy \\ \leq&2^{j}\|\varphi\|_{\infty}\|\theta_{\varphi}\|_{\infty}+2^{j}\| \varphi\|_{\infty}\| \theta_{\varphi}\|_{\infty}\int f(x)\,dx \\ \lesssim&2^{j+1}. \end{aligned}$$
(i) For \(2\leq p<\infty\), Rosenthal’s inequality, Lemma 2.6, tells us that
$$\begin{aligned} E \Biggl( \Biggl|\sum_{i=1}^{n}Y_{i} \Biggr|^{p} \Biggr) \leq& C(p) \Biggl(\bigl(2^{j+1} \bigr)^{p-2}\sum_{i=1}^{n}E \bigl(Y_{i}^{2}\bigr)+ \Biggl(\sum _{i=1}^{n}E\bigl(Y_{i}^{2}\bigr) \Biggr)^{p/2} \Biggr) \\ \lesssim&\bigl(2^{j+1}\bigr)^{p-2}\sum _{i=1}^{n}E\bigl( Y_{i}^{2} \bigr)+ \Biggl(\sum_{i=1}^{n}E \bigl(Y_{i}^{2}\bigr) \Biggr)^{p/2} \end{aligned}$$
and
$$\begin{aligned} \Biggl(\sum_{i=1}^{n}E \bigl(Y_{i}^{2}\bigr) \Biggr)^{p/2} =& \bigl(n E\bigl(Y_{1}^{2}\bigr) \bigr)^{p/2} \\ =&n^{p/2} \bigl(E \bigl(K_{j}(x,X_{1})-E K_{j}(x,X_{1}) \bigr)^{2} \bigr)^{p/2} \\ \leq&n^{p/2} \bigl(E K_{j}^{2}(x,X_{1}) \bigr)^{p/2} \\ =&n^{p/2} \biggl(\int K_{j}^{2}(x,y)f(y)\,dy \biggr)^{p/2} \\ \leq&n^{p/2} \biggl(\int2^{2j}F^{2} \bigl(2^{j}x-2^{j}y\bigr)f(y)\,dy \biggr)^{p/2} \\ =&n^{p/2}2^{jp/2} \biggl(\int2^{j}F^{2} \bigl(2^{j}x-2^{j}y\bigr)f(y)\,dy \biggr)^{p/2}, \end{aligned}$$
then
$$\begin{aligned} \int \Biggl(\sum_{i=1}^{n}E \bigl(Y_{i}^{2}\bigr) \Biggr)^{p/2}\,dx =&n^{p/2}2^{jp/2}\int \biggl(\int 2^{j}F^{2} \bigl(2^{j}x-2^{j}y\bigr)f(y)\,dy \biggr)^{p/2}\,dx \\ =&n^{p/2}2^{jp/2}\int \biggl(\int F^{2}(t)f \bigl(x-t/2^{j}\bigr)\,dt \biggr)^{p/2}\,dx \\ =&n^{p/2}2^{jp/2}\|F\|_{2}^{p}\int \biggl( \int\frac{F^{2}(t)}{\|F\| _{2}^{2}}f\bigl(x-t/2^{j}\bigr)\,dt \biggr)^{p/2}\,dx \\ \lesssim&n^{p/2}2^{jp/2}\int\int\frac{F^{2}(t)}{\|F\| _{2}^{2}}f^{p/2} \bigl(x-t/2^{j}\bigr)\,dt \,dx \\ \lesssim&n^{p/2}2^{jp/2}\int\int F^{2}(t)f^{p/2} \bigl(x-t/2^{j}\bigr)\,dx \,dt \\ \lesssim&n^{p/2}2^{jp/2}. \end{aligned}$$
Therefore (note that \(f\in L_{1}(\mathbb{R})\cap L_{p}(\mathbb{R})\) implies \(f\in L_{p/2}(\mathbb{R})\), which justifies the last step above), one gets
$$\begin{aligned} E\|\hat{f}_{n}-E\hat{f}_{n} \|_{p}^{p} \lesssim &\frac{1}{n^{p}} \bigl( \bigl(2^{j+1}\bigr)^{p-2}n2^{j}+n^{p/2}2^{jp/2} \bigr) \\ =&\frac{2^{(j+1)(p-2)}2^{j} n}{n^{p}}+\frac {n^{p/2}2^{jp/2}}{n^{p}} \\ =&\biggl(\frac{2^{j}}{n}\biggr)^{p-1}+\biggl(\frac{2^{j}}{n} \biggr)^{p/2}. \end{aligned}$$
(4)
Taking \(2^{j}\sim n^{\frac{1}{2}}\), the two terms in (4) are of order \(n^{-\frac{p-1}{2}}\) and \(n^{-\frac{p}{4}}\), respectively, and one obtains the following desired result:
$$ \lim_{n\rightarrow\infty}E\|\hat{f}_{n}-E\hat {f}_{n}\|_{p}^{p}=0. $$
(5)
(ii) For \(1\leq p<2\), let \(A=\{x\mid |\hat{f}_{n}-E\hat{f}_{n}|<1\}\), \(B=\{x\mid |\hat{f}_{n}-E\hat{f}_{n}|\geq1\}\), then one has
$$\begin{aligned} E\|\hat{f}_{n}-E\hat{f}_{n} \|_{p}^{p} =&E\int |\hat{f}_{n}-E \hat{f}_{n}|^{p} \,dx \\ =&E\int_{A}|\hat{f}_{n}-E\hat{f}_{n}|^{p} \,dx+E\int_{B}|\hat {f}_{n}-E\hat{f}_{n}|^{p} \,dx \\ \leq&E\int_{A}|\hat{f}_{n}-E \hat{f}_{n}|\,dx+E\int_{B}|\hat{f}_{n}-E \hat{f}_{n}|^{2}\,dx \\ \leq&E\int|\hat{f}_{n}-E\hat{f}_{n}|\,dx+E\int|\hat {f}_{n}-E\hat{f}_{n}|^{2}\,dx \\ =&E\|\hat{f}_{n}-E\hat {f}_{n}\|_{1}+E\| \hat{f}_{n}-E\hat{f}_{n}\|_{2}^{2}. \end{aligned}$$
(6)
Since f is a density, \(f\in L_{1}(\mathbb{R})\), and the case \(p=2\) of (4) guarantees that \(\lim_{n\rightarrow\infty}E\|\hat{f}_{n}-E\hat{f}_{n}\|_{2}^{2}=0\).
Moreover, \(E\|\hat{f}_{n}-E\hat{f}_{n}\|_{1}= \int\frac{1}{n} E|\sum_{i=1}^{n}Y_{i}|\,dx\). According to Rosenthal’s inequality (Lemma 2.6), one has
$$\begin{aligned} \frac{1}{n}E \Biggl|\sum_{i=1}^{n}Y_{i} \Biggr| \leq&\frac{1}{n} \Biggl(\sum_{i=1}^{n}E\bigl(Y_{i}^{2}\bigr) \Biggr)^{\frac{1}{2}} = \frac{1}{n^{1/2}} \bigl(E\bigl(Y_{1}^{2}\bigr) \bigr)^{\frac{1}{2}} \\ \leq& \biggl( \frac{2^{j}}{n} \biggr)^{1/2} \biggl(\int 2^{j}F^{2}\bigl(2^{j}x-2^{j}y\bigr)f(y)\,dy \biggr)^{1/2} \\ =& \biggl( \frac{2^{j}}{n} \biggr)^{1/2}\bigl(G_{j}*f(x)\bigr)^{1/2} \\ =&A(x), \end{aligned}$$
(7)
where \(G(x)=F^{2}(x)\) and \(G_{j}(x)=2^{j}G(2^{j}x)\). On the other hand,
$$\begin{aligned} \frac{1}{n}E \Biggl|\sum_{i=1}^{n}Y_{i} \Biggr| \leq&\frac{1}{n}\sum_{i=1}^{n}E\bigl|K_{j}(x,X_{i})-E K_{j}(x,X_{i})\bigr| \\ \lesssim&E\bigl|K_{j}(x,X_{1})\bigr| \\ =&\int\bigl|K_{j}(x,y)\bigr|f(y)\,dy \\ \leq&\int2^{j}F\bigl(2^{j}x-2^{j}y\bigr) f(y)\,dy \\ =&\int F(t)f\biggl(x-\frac{t}{2^{j}}\biggr)\,dt \\ \leq&f(x)\int F(t)\,dt+\int F(t) \biggl|f\biggl(x-\frac{t}{2^{j}}\biggr)-f(x) \biggr|\,dt \\ =&B(x)+C(x), \end{aligned}$$
(8)
where \(B(x)=f(x)\int F(t)\,dt\), \(C(x)=\int F(t) |f(x-\frac {t}{2^{j}})-f(x) |\,dt\). Then we get
$$\begin{aligned} E\|\hat{f}_{n}-E\hat{f}_{n}\|_{1} \leq&\int\min \bigl\{ A(x), B(x)+C(x) \bigr\} \,dx \leq\int\min \bigl\{ A(x), B(x) \bigr\} \,dx+\int C(x)\,dx. \end{aligned}$$
One knows that
One knows that
$$\int C(x)\,dx=\int\int F(t) \biggl|f\biggl(x-\frac{t}{2^{j}}\biggr)-f(x) \biggr|\,dt \,dx=\int F(t)\int \biggl|f\biggl(x-\frac{t}{2^{j}}\biggr)-f(x) \biggr|\,dx \,dt. $$
Since
$$F(t)\int \biggl|f\biggl(x-\frac{t}{2^{j}}\biggr)-f(x) \biggr|\,dx\leq2F(t)\|f\| _{1} \quad \mbox{and} \quad \lim_{j\rightarrow\infty}\int \biggl|f \biggl(x-\frac {t}{2^{j}}\biggr)-f(x) \biggr|\,dx=0 $$
by the \(L_{1}\)-continuity of translation, the dominated convergence theorem gives \(\lim_{n\rightarrow\infty}\int C(x)\,dx=0\). Next, \(\int B(x)\,dx=\int f(x)\int F(t)\,dt \,dx=\|F\|_{1}\|f\|_{1}<\infty\), so \(B\in L_{1}(\mathbb{R})\). By the Lebesgue dominated convergence theorem, one gets
$$\begin{aligned} \lim_{n\rightarrow\infty}\int\min \bigl\{ A(x), B(x) \bigr\} \,dx =&\int\lim _{n\rightarrow\infty}\min \bigl\{ A(x), B(x) \bigr\} \,dx \leq\int\lim_{n\rightarrow\infty}A(x)\,dx. \end{aligned}$$
It remains only to show that \(\lim_{n\rightarrow\infty }A(x)=0\). Since the function \(G(x)=F^{2}(x)\) is radially decreasing,
$$ \lim_{j\rightarrow\infty}G_{j}*f(x)=\| F\|_{2}^{2} f(x) \quad (\mbox{a.e.}) $$
and \(\| F\|_{2}^{2} f(x)\) is finite for almost all x, we have \(\lim_{n\rightarrow\infty}A(x)=\lim_{n\rightarrow\infty}(\frac{2^{j}}{n}G_{j}*f)^{1/2}=0\). Finally, we get
$$\lim_{n\rightarrow\infty}E\|\hat{f}_{n}-E\hat{f}_{n}\| _{1}=0. $$
 □
Remark
Theorem 3.1 can be considered as a natural extension of Theorem 1 in [6].
Next we shall consider \(L_{\infty}\)-consistency.
Theorem 3.2
Let the scaling function \(\varphi(x)\) be bounded with \(\operatorname{supp} \varphi\subset[-A,A]\), and let \(\hat{f}_{n}(x)\) be the wavelet estimator defined in (1). If \(f\in L_{\infty}(\mathbb{R})\) is uniformly continuous and \(f(x)\lesssim\frac{1}{(1+|x|)^{2+\delta}}\) for some \(\delta>0\), then, taking \(2^{j}\sim n^{\frac{1}{4}}\), one gets
$$ \lim_{n\rightarrow\infty}E\| f-\hat{f}_{n} \|_{\infty}=0. $$
(9)
Proof
The proof is similar to that of Theorem 3.1. We have
$$ E\| f-\hat{f}_{n}\|_{\infty}\leq\|f-E\hat {f}_{n} \|_{\infty}+E\|\hat{f}_{n}-E\hat{f}_{n}\| _{\infty}. $$
Since φ satisfies Condition S and f is uniformly continuous, by Lemma 2.4 and Lemma 2.5, one gets
$$ \lim_{n\rightarrow\infty}\| f-E\hat{f}_{n} \|_{\infty}=\lim_{n\rightarrow \infty}\|f-K_{j} f \|_{\infty}=0. $$
(10)
For the stochastic term, it can be proved that
$$\begin{aligned} |\hat{f}_{n}-E\hat{f}_{n}| =& \biggl|\sum _{k}\hat{\alpha}_{jk}\varphi_{jk}(x)-\sum _{k}\alpha _{jk}\varphi_{jk}(x) \biggr| \leq\sum_{k}|\hat{\alpha}_{jk}- \alpha_{jk}|\bigl|\varphi _{jk}(x)\bigr| \\ \leq&2^{j/2}\sum_{k}|\hat{\alpha}_{jk}-\alpha_{jk}|\| \varphi\|_{\infty}\quad (\mbox{a.e.}), \end{aligned}$$
then one has \(\|\hat{f}_{n}-E\hat{f}_{n}\|_{\infty}\lesssim 2^{j/2}\sum_{k}|\hat{\alpha}_{jk}-\alpha_{jk}|\). So one obtains
$$ E\|\hat{f}_{n}-E\hat{f}_{n}\|_{\infty}\lesssim 2^{j/2}\sum_{k}E|\hat{\alpha}_{jk}- \alpha_{jk}|. $$
(11)
According to Rosenthal’s inequality, Lemma 2.6, one has
$$\begin{aligned} E|\hat{\alpha}_{jk}-\alpha_{jk}| =&E \Biggl|\frac {1}{n}\sum _{i=1}^{n} \bigl(\varphi_{jk}(X_{i})-E \varphi _{jk}(X_{i}) \bigr) \Biggr| \\ \leq&\frac{1}{n} \Biggl|\sum_{i=1}^{n}E \bigl(\varphi _{jk}(X_{i})-E\varphi_{jk}(X_{i}) \bigr)^{2} \Biggr|^{1/2} \\ \leq&\frac{1}{n^{1/2}} \bigl|E\varphi_{jk}^{2}(X_{1}) \bigr|^{1/2} \\ =&\frac{1}{n^{1/2}} \biggl(\int2^{j}\varphi^{2} \bigl(2^{j}x-k\bigr)f(x)\,dx \biggr)^{1/2} \\ =&\frac{1}{n^{1/2}} \biggl(\int_{|t-k|\leq A}\varphi^{2}(t-k)f \biggl(\frac {t}{2^{j}}\biggr)\,dt \biggr)^{1/2}. \end{aligned}$$
Moreover, one has \(E\|\hat{f}_{n}-E\hat{f}_{n}\|_{\infty}\lesssim (\frac{2^{j}}{n})^{1/2}\sum_{k} (\int_{|t-k|\leq A}f(\frac {t}{2^{j}})\,dt )^{1/2}\) and
$$\begin{aligned} &\sum_{k} \biggl(\int _{|t-k|\leq A}f\biggl(\frac {t}{2^{j}}\biggr)\,dt \biggr)^{1/2} \\ &\quad\leq\sum_{|k|\leq A+1} \biggl(\int_{|t-k|\leq A} f\biggl(\frac {t}{2^{j}}\biggr)\,dt \biggr)^{1/2}+\sum _{|k|\geq A+1} \biggl(\int_{|t-k|\leq A}f\biggl( \frac{t}{2^{j}}\biggr)\,dt \biggr)^{1/2} \\ &\quad\lesssim\sum_{|k|\leq A+1} \biggl(\int f\biggl( \frac {t}{2^{j}}\biggr)\,dt \biggr)^{1/2}+\sum _{|k|\geq A+1} \biggl(\int_{|t-k|\leq A}\frac{1}{(1+|t/2^{j}|)^{2+\delta}}\,dt \biggr)^{1/2} \\ &\quad\lesssim 2^{j/2}+\sum_{k\geq A+1} \biggl(\int _{|t-k|\leq A}\frac{1}{(1+|(k-A)/2^{j}|)^{2+\delta}}\,dt \biggr)^{1/2} \\ &\quad=2^{j/2}+\sum_{k\geq A+1} \biggl( \frac{2A}{(1+|(k-A)/2^{j}|)^{2+\delta}} \biggr)^{1/2} \\ &\quad\lesssim 2^{j/2}+\sum_{k\geq1} \frac {1}{(1+|k/2^{j}|)^{1+\delta/2}} \\ &\quad\lesssim 2^{j/2}+\int\frac{1}{(1+|t/2^{j}|)^{1+\delta /2}}\,dt \\ &\quad\lesssim 2^{j}. \end{aligned}$$
(12)
Therefore, one gets \(E\|\hat{f}_{n}-E\hat{f}_{n}\|_{\infty}\lesssim (\frac{2^{3j}}{n})^{1/2}\). Taking \(2^{j}\sim n^{\frac{1}{4}}\), this bound is of order \(n^{-\frac{1}{8}}\), and one obtains
$$\lim_{n\rightarrow\infty}E\| f-\hat{f}_{n}\|_{\infty}=0. $$
 □

4 Additive noise model

In practical situations, direct data are not always available. One of the classical models is the following:
$$Y_{i}=X_{i}+\epsilon_{i}, $$
where the \(X_{i}\) are i.i.d. random samples with unknown density \(f_{X}\), and the \(\epsilon_{i}\) denote i.i.d. random noise with density g, independent of the \(X_{i}\). Estimating the density \(f_{X}\) from the observations \(Y_{i}\) is a deconvolution problem.
In 2002, Fan and Koo [11] studied wavelet estimation for random samples with smooth and supersmooth noise over a Besov ball. In 2014, Li and Liu [12] considered wavelet estimation for random samples with moderately ill-posed noise. In this section, we consider the mean \(L_{p}\)-consistency for \(f_{X}\in L_{p}(\mathbb{R})\) with additive noise.
The Fourier transform of \(f\in L_{1}(\mathbb{R}) \) is defined as follows:
$$ \tilde{f}(t)=\int f(x)e^{-itx}\,dx. $$
It is well known that \(\tilde{f}_{Y}(t)=\tilde{f}_{X}(t)\tilde{g}(t)\). For \(\tilde{g}(t)\neq0 \) (\(\forall t\in\mathbb{R}\)), the wavelet estimator is given by
$$ \hat{f}_{X,n}(x)=\sum_{k} \hat{\alpha}_{jk}\varphi_{jk}(x), $$
(13)
where
$$ \hat{\alpha}_{jk}=\frac{1}{n}\sum _{i=1}^{n}(H_{j}\varphi)_{jk}(Y_{i});\qquad (H_{j}\varphi) (y)=\frac{1}{2\pi}\int e^{ity} \frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}\,dt, $$
(14)
and φ is the Meyer scaling function.
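As a standard example of admissible noise (recorded here only for illustration), the Laplace density \(g(x)=\frac{1}{2}e^{-|x|}\) has
$$\tilde{g}(t)=\frac{1}{1+t^{2}}=\bigl(1+|t|^{2}\bigr)^{-1}, $$
so it is nonvanishing and satisfies the assumption \(|\tilde{g}(t)|\gtrsim(1+|t|^{2})^{-\frac{\beta}{2}}\) of the theorems below with \(\beta=2\).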
Lemma 4.1
If \(f_{X}\in L_{2}(\mathbb{R})\), then \(\hat{\alpha}_{jk}\) defined in (14) is an unbiased estimator of \(\alpha_{jk}\).
Proof
The Plancherel formula tells us that
$$\begin{aligned} \alpha_{jk} =&\int f_{X}(x)\varphi_{jk}(x)\,dx =\frac{1}{2\pi}\int\tilde{f_{X}}(t)\overline{\tilde{\varphi}_{jk}(t)}\,dt =\frac{1}{2\pi}\int\frac{\tilde{f}_{Y}( t)}{\tilde {g}(t)}\tilde{\varphi}_{jk}(-t)\,dt. \end{aligned}$$
On the other hand, one gets
$$\begin{aligned} E\hat{\alpha}_{jk} =&E \Biggl(\frac{1}{n}\sum _{i=1}^{n}(H_{j}\varphi)_{jk}(Y_{i}) \Biggr)=E (H_{j}\varphi)_{jk}(Y_{1}) \\ =&\int \biggl(\frac{1}{2\pi}\int2^{\frac{j}{2}}e^{it(2^{j} y-k)} \frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}\,dt \biggr)f_{Y}(y)\,dy \\ =&\frac{1}{2\pi}\int\int2^{\frac{j}{2}}e^{it(2^{j} y-k)}f_{Y}(y)\,dy \frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}\,dt \\ =&\frac{1}{2\pi}\int2^{\frac{j}{2}}e^{-itk}\tilde{f}_{Y} \bigl(-2^{j} t\bigr)\frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}\,dt \\ =&\frac{1}{2\pi}\int\frac{\tilde{f}_{Y}( t)}{\tilde{g}(t)}\tilde{\varphi}_{jk}(-t)\,dt. \end{aligned}$$
Therefore, \(E\hat{\alpha}_{jk}=\alpha_{jk}\). □
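For illustration, here is a rough numerical sketch of the deconvolution estimator (13)-(14); it is not code from the paper, and the discretization choices (quadrature grid, range of k) are ours. The Meyer transform \(\tilde{\varphi}\) is implemented from its standard closed form, \(H_{j}\varphi\) and φ are recovered by discretizing the inverse Fourier integrals over \(\operatorname{supp}\tilde{\varphi}=[-\frac{4\pi}{3},\frac{4\pi}{3}]\), and Laplace noise (\(\tilde{g}(t)=\frac{1}{1+t^{2}}\), i.e. \(\beta=2\)) is assumed.

```python
import numpy as np

def meyer_phi_hat(t):
    """Fourier transform of the Meyer scaling function (paper's convention
    f~(t) = int f(x) e^{-itx} dx), with the usual degree-7 polynomial
    auxiliary function; supp(phi~) = [-4*pi/3, 4*pi/3]."""
    t = np.abs(np.asarray(t, dtype=float))
    nu = lambda x: x**4 * (35 - 84*x + 70*x**2 - 20*x**3)
    out = np.zeros_like(t)
    out[t <= 2*np.pi/3] = 1.0
    mid = (t > 2*np.pi/3) & (t <= 4*np.pi/3)
    out[mid] = np.cos(np.pi/2 * nu(3*t[mid]/(2*np.pi) - 1))
    return out

def deconv_wavelet_density(Y, x_grid, j, g_hat, m=1024):
    """Sketch of estimator (13)-(14): alpha_hat_{jk} averages
    2^{j/2}(H_j phi)(2^j Y_i - k) over the sample; all Fourier integrals
    are crude Riemann sums over supp(phi~)."""
    t = np.linspace(-4*np.pi/3, 4*np.pi/3, m)
    dt = t[1] - t[0]
    w = meyer_phi_hat(t) / g_hat(-(2.0**j) * t)       # phi~(t)/g~(-2^j t), see (14)

    def inv_ft(y, weights):
        # (1/2pi) int e^{ity} weights(t) dt, evaluated at each entry of y
        return np.real(np.exp(1j * np.asarray(y)[..., None] * t) @ weights) * dt / (2*np.pi)

    # The Meyer scaling function decays rapidly, so a moderate window of k's suffices
    ks = np.arange(int(np.floor(2.0**j * x_grid.min())) - 8,
                   int(np.ceil(2.0**j * x_grid.max())) + 9)
    alpha = np.array([2.0**(j/2) * inv_ft(2.0**j * Y - k, w).mean() for k in ks])
    phi_vals = np.array([inv_ft(2.0**j * x_grid - k, meyer_phi_hat(t)) for k in ks])
    return 2.0**(j/2) * alpha @ phi_vals              # f_hat = sum_k alpha_hat_{jk} phi_{jk}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=500)                # samples from the unknown density f_X
    Y = X + rng.laplace(size=500)           # Laplace noise: g~(t) = 1/(1 + t^2), beta = 2
    x = np.linspace(-4.0, 4.0, 161)
    f_hat = deconv_wavelet_density(Y, x, j=2, g_hat=lambda s: 1.0 / (1.0 + s**2))
    print("f_hat at 0:", f_hat[80])
```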
The next two theorems deal with the cases \(p\geq2\) and \(1\leq p<2\), respectively.
Theorem 4.1
Let \(\varphi(x)\) be the Meyer scaling function, let \(|\tilde{g}(t)|\gtrsim(1+|t|^{2})^{-\frac{\beta}{2}}\) (\(\beta\geq0\)), and let \(\hat{f}_{X,n}(x)\) be the wavelet estimator defined in (13). If \(f_{X}\in L_{p}(\mathbb{R})\), \(2\leq p<\infty\), then, taking \(2^{j}\sim n^{\frac{1-\epsilon}{1+2\beta}}\) (\(\epsilon>0\)), one gets
$$ \lim_{n\rightarrow\infty}E\| f_{X}-\hat{f}_{X,n} \|_{p} =0. $$
(15)
Proof
Similarly, one needs to consider a bias term and a stochastic term.
(i) For the bias term, one observes that
$$\begin{aligned} E\hat{f}_{X,n}(x) =& E \biggl(\sum_{k} \hat{\alpha}_{jk}\varphi _{jk}(x) \biggr)\\ =&E \Biggl(\sum_{k}\frac{1}{n}\sum _{i=1}^{n}(H_{j}\varphi)_{jk}(Y_{i}) \varphi_{jk}(x) \Biggr) \\ =&E \biggl(\sum_{k}(H_{j} \varphi)_{jk}(Y_{1})\varphi _{jk}(x) \biggr). \end{aligned}$$
Note that \(\int\sum_{k}|(H_{j}\varphi)_{jk}(y)||\varphi _{jk}(x)| f_{Y}(y)\,dy\leq2^{j}\|H_{j}\varphi\|_{\infty}\|\theta_{\varphi}\|_{\infty}\| f_{Y}\| _{1}<\infty\), so the expectation and the summation can be interchanged:
$$\begin{aligned} E\hat{f}_{X,n}(x) =&E \biggl(\sum_{k}(H_{j} \varphi )_{jk}(Y_{1})\varphi_{jk}(x) \biggr) \\ =&\sum_{k}E(H_{j}\varphi)_{jk}(Y_{1}) \varphi_{jk}(x) \\ =&\sum_{k}\alpha_{jk} \varphi_{j k}(x) \\ =&K_{j} f_{X}(x). \end{aligned}$$
From Lemma 2.5, one gets
$$ \lim_{n\rightarrow\infty}\| f_{X}-E \hat{f}_{X,n}\|_{p}=\lim_{n\rightarrow\infty} \|f_{X}-K_{j} f_{X}\|_{p}=0. $$
(16)
(ii) For the stochastic term, Lemma 2.2 gives
$$\begin{aligned} \|\hat{f}_{X,n}-E\hat{f}_{X,n}\|_{p}^{p} =& \biggl\| \sum_{k}\hat{\alpha}_{jk} \varphi_{jk}(x)-\sum_{k}\alpha _{jk}\varphi_{jk}(x) \biggr\| _{p}^{p} = \biggl\| \sum_{k}(\hat{\alpha}_{jk}- \alpha_{jk})\varphi _{jk}(x) \biggr\| _{p}^{p}\\ \lesssim& 2^{j(\frac{p}{2}-1)}\sum_{k}|\hat{\alpha}_{jk}-\alpha_{jk}|^{p}, \end{aligned}$$
so one gets
$$\begin{aligned} E\|\hat{f}_{X,n}-E\hat{f}_{X,n} \|_{p}^{p} \lesssim& 2^{j(\frac{p}{2}-1)}E\sum _{k}|\hat{\alpha}_{jk}-\alpha_{jk}|^{p} \\ =&2^{j(\frac{p}{2}-1)}\sum_{k}E|\hat{\alpha}_{jk}-\alpha_{jk}|^{p}. \end{aligned}$$
(17)
Firstly, we estimate \(E|\hat{\alpha}_{jk}-\alpha_{jk}|^{p}\). We have
$$\begin{aligned} |\hat{\alpha}_{jk}-\alpha_{jk}| =& \Biggl|\frac {1}{n}\sum _{i=1}^{n}(H_{j} \varphi)_{jk}(Y_{i})-\frac{1}{n}\sum _{i=1}^{n}E(H_{j}\varphi)_{jk}(Y_{i}) \Biggr| \\ =&\frac{1}{n} \Biggl|\sum_{i=1}^{n}Z_{ik} \Biggr|, \end{aligned}$$
where \(Z_{ik}=(H_{j}\varphi)_{jk}(Y_{i})-E(H_{j}\varphi)_{jk}(Y_{i})\) and \(E Z_{ik}=0\). Then
$$\begin{aligned} | Z_{ik}| =&\bigl|(H_{j}\varphi )_{jk}(Y_{i})-E(H_{j} \varphi)_{jk}(Y_{i})\bigr| \\ \leq&\bigl|(H_{j}\varphi)_{jk}(Y_{i})\bigr|+E\bigl|(H_{j} \varphi )_{jk}(Y_{i})\bigr| \\ =& \biggl|\frac{1}{2\pi}\int2^{\frac{j}{2}}e^{it(2^{j}Y_{i}-k)}\frac {\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}\,dt \biggr|+\int \biggl|\frac {1}{2\pi}\int2^{\frac{j}{2}}e^{it(2^{j} y-k)} \frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}\,dt \biggr| f_{Y}(y)\,dy \\ \lesssim&2^{j(\frac{1}{2}+\beta)}. \end{aligned}$$
Thanks to Rosenthal’s inequality, Lemma 2.6, one has
$$\begin{aligned} E|\hat{\alpha}_{jk}-\alpha_{jk}|^{p} =&\frac{1}{n^{p}}E \Biggl|\sum_{i=1}^{n}Z_{ik} \Biggr|^{p} \\ \lesssim& \frac{1}{n^{p}} \Biggl(\bigl(2^{j(\frac{1}{2}+\beta )}\bigr)^{p-2} \sum_{i=1}^{n} E|Z_{ik}|^{2}+ \Biggl(\sum_{i=1}^{n}E|Z_{ik}|^{2} \Biggr)^{\frac{p}{2}} \Biggr) \\ =&\frac{2^{j(\frac{1}{2}+\beta)(p-2)}}{n^{p-1}}E|Z_{1k}|^{2}+\frac {1}{n^{\frac{p}{2}}} \bigl(E|Z_{1k}|^{2} \bigr)^{\frac{p}{2}}. \end{aligned}$$
(18)
One only needs to consider \(\sum_{k}(E|Z_{1k}|^{2})^{\frac {p}{2}}\). Define \(A=\int|(H_{j}\varphi)(y)|^{2}\,dy=\frac{1}{2\pi}\int_{\mathbb{R}}|\frac {\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}|^{2}\,dt\lesssim\int |(1+|2^{j}t|^{2})^{\beta/2}\tilde{\varphi}(t)|^{2}\,dt\lesssim2^{2j\beta}\), and
$$\begin{aligned} \bigl(E|Z_{1k}|^{2} \bigr)^{\frac{p}{2}} =& \bigl(E\bigl|(H_{j}\varphi)_{jk}(Y_{1})-E(H_{j} \varphi)_{jk}(Y_{1})\bigr|^{2} \bigr)^{\frac{p}{2}} \\ \leq& \bigl(E\bigl|(H_{j}\varphi)_{jk}(Y_{1})\bigr|^{2} \bigr)^{\frac{p}{2}} \\ =& \biggl(\int\bigl|(H_{j}\varphi)_{jk}(y)\bigr|^{2}f_{Y}(y)\,dy \biggr)^{\frac{p}{2}} \\ =&A^{\frac{p}{2}} \biggl(\int\frac{|(H_{j}\varphi )_{jk}(y)|^{2}}{A}f_{Y}(y)\,dy \biggr)^{\frac{p}{2}} \\ \leq&A^{\frac{p}{2}-1}\int\bigl|(H_{j}\varphi)_{jk}(y)\bigr|^{2}f_{Y}(y)^{\frac{p}{2}}\,dy. \end{aligned}$$
Moreover,
$$\begin{aligned} \sum_{k}\bigl|(H_{j}\varphi)_{jk}(y)\bigr|^{2} =& \sum_{k} \biggl(\frac{2^{\frac{j}{2}}}{2\pi} \biggl|\int e^{it(2^{j}y-k)}\frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}\,dt \biggr| \biggr)^{2} \\ \lesssim&2^{j}\sum_{k} \biggl( \biggl|\int _{-\frac {4\pi}{3}}^{\frac{4\pi}{3}} e^{it(2^{j}y-k)}\frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}\,dt \biggr| \biggr)^{2} \\ \leq&2^{j}\sum_{k} \biggl( \biggl|\int _{0}^{\frac {4\pi}{3}}e^{it2^{j}y}\frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}e^{-itk}\,dt \biggr|+ \biggl|\int_{-\frac{4\pi }{3}}^{0}e^{it2^{j}y} \frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}e^{-itk}\,dt \biggr| \biggr)^{2} \\ \leq&2^{j} \biggl(\sum_{k} \biggl|\int _{0}^{\frac {4\pi}{3}}e^{it2^{j}y}\frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}e^{-itk}\,dt \biggr|^{2}\\ &{}+\sum_{k} \biggl|\int _{-\frac{4\pi}{3}}^{0}e^{it2^{j}y}\frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}e^{-itk}\,dt \biggr|^{2} \biggr). \end{aligned}$$
Note that \(e^{it2^{j}y}\frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}I_{[0,\frac{4\pi}{3}]}\in L_{2}[0,2\pi]\) and \(\{\frac{1}{\sqrt{2\pi}}e^{-itk},k\in\mathbb{Z}\}\) is an orthonormal basis of \(L_{2}[0,2\pi]\); by the Parseval formula, one gets
$$\sum_{k} \biggl|\int_{0}^{\frac{4\pi }{3}}e^{it2^{j}y} \frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}e^{-itk}\,dt \biggr|^{2} =2\pi\int_{0}^{\frac{4\pi}{3}} \biggl|e^{it2^{j}y}\frac {\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)} \biggr|^{2}\,dt\lesssim2^{2j\beta}. $$
Similarly, \(\sum_{k} |\int_{-\frac{4\pi }{3}}^{0}e^{it2^{j}y}\frac{\tilde{\varphi}(t)}{\tilde{g}(-2^{j}t)}e^{-itk}\,dt |^{2}\lesssim2^{2j\beta}\). Then \(\sum_{k}|(H_{j}\varphi)_{jk}(y)|^{2}\lesssim2^{j(2\beta +1)}\).
For the density function \(f_{Y}\in L_{1}(\mathbb{R})\cap L_{p}(\mathbb{R})\), \(2\leq p<\infty\), one has \(f_{Y}\in L_{p/2}(\mathbb{R})\). Moreover, \(\sum_{k}(E|Z_{1k}|^{2})^{\frac{p}{2}}\lesssim A^{\frac{p}{2}-1}2^{j(2\beta+1)}\lesssim2^{j(\beta p+1)}\). Therefore,
$$\begin{aligned} \sum_{k}E|\hat{\alpha}_{jk}-\alpha _{jk}|^{p} \lesssim&\frac{2^{j(\frac{1}{2}+\beta)(p-2)}}{n^{p-1}}\sum _{k} E|Z_{1k}|^{2}+\frac{1}{n^{\frac{p}{2}}}\sum _{k} \bigl(E|Z_{1k}|^{2} \bigr)^{\frac{p}{2}} \\ \lesssim&\frac{2^{j(\frac{1}{2}+\beta)(p-2)}2^{j(2\beta +1)}}{n^{p-1}}+\frac{2^{j(\beta p+1)}}{n^{\frac{p}{2}}} =\frac{2^{j(\beta p+1)}}{n^{\frac{p}{2}}} \biggl(\biggl(\frac {2^{j}}{n}\biggr)^{\frac{p}{2}-1}+1 \biggr). \end{aligned}$$
Then we get \(E\|\hat{f}_{X,n}-E\hat{f}_{X,n}\|_{p}^{p}\lesssim2^{j(\frac {p}{2}-1)}\frac{2^{j(\beta p+1)}}{n^{\frac{p}{2}}} ((\frac {2^{j}}{n})^{\frac{p}{2}-1}+1 ) \lesssim (\frac{2^{j(2\beta+1)}}{n} )^{\frac{p}{2}}\). Taking \(2^{j}\sim n^{\frac{1-\epsilon}{1+2\beta}} \) (\(\epsilon>0\)), one has \(\frac{2^{j(2\beta+1)}}{n}\sim n^{-\epsilon}\rightarrow0\), so \(\lim_{n\rightarrow\infty}E\|\hat{f}_{X,n}-E\hat{f}_{X,n}\| _{p}^{p}=0\). Together with (16), this proves (15). □
Theorem 4.2
Let \(\varphi(x)\) be the Meyer scaling function, let \(|\tilde{g}(t)|\gtrsim(1+|t|^{2})^{-\frac{\beta }{2}}\) (\(\beta\geq0\)), and let \(\hat{f}_{X,n}(x)\) be the estimator defined in (13). If \(f_{X}\in L_{2}(\mathbb{R})\cap L_{p}(\mathbb{R})\) (\(1\leq p<2\)) and \(\operatorname{supp} f_{X} \subset[-B,B]\), then, taking \(2^{j}\sim n^{\frac{1-\epsilon}{2+2\beta}}\) (\(\epsilon>0\)), one has
$$ \lim_{n\rightarrow\infty}E\| f_{X}-\hat {f}_{X,n}I_{[-B,B]} \|_{p} =0. $$
(19)
Proof
For the bias term, we get \(E\hat {f}_{X,n}I_{[-B,B]}(x)=K_{j}f_{X}I_{[-B,B]}(x)\), then
$$\begin{aligned} \|f_{X}-E\hat{f}_{X,n}I_{[-B,B]}\|_{p}^{p} =& \| f_{X}-K_{j}f_{X}I_{[-B,B]} \|_{p}^{p} \\ =&\int_{\mathbb{R}}\bigl|f_{X}(x)-K_{j}f_{X}I_{[-B,B]}(x)\bigr|^{p} \,dx \\ =&\int_{-B}^{B}\bigl|f_{X}(x)-K_{j}f_{X}(x)\bigr|^{p} \,dx \\ \leq&\bigl\| f_{X}(x)-K_{j}f_{X}(x) \bigr\| _{p}^{p}. \end{aligned}$$
So one gets \(\lim_{n\rightarrow\infty}\| f_{X}-E\hat {f}_{X,n}I_{[-B,B]}\|_{p}\leq\lim_{n\rightarrow\infty}\|f_{X}-K_{j} f_{X}\|_{p}=0\) by Lemma 2.5.
Next we only consider the stochastic term. For any \(1\leq p<2\),
$$\begin{aligned} E\|\hat{f}_{X,n}I_{[-B,B]}-E\hat{f}_{X,n}I_{[-B,B]} \| _{p}^{p} \leq& E\|\hat{f}_{X,n}I_{[-B,B]}-E \hat {f}_{X,n}I_{[-B,B]}\|_{1} \\ &{}+E\|\hat{f}_{X,n}I_{[-B,B]}- E\hat{f}_{X,n}I_{[-B,B]}\|_{2}^{2}. \end{aligned}$$
(20)
Because \(\lim_{n\rightarrow\infty}E\|\hat {f}_{X,n}I_{[-B,B]}-E\hat{f}_{X,n}I_{[-B,B]}\|_{2}^{2}\leq\lim_{n\rightarrow\infty}E\|\hat{f}_{X,n}-E\hat{f}_{X,n}\|_{2}^{2}=0\), we only need to consider \(E\|\hat{f}_{X,n}I_{[-B,B]}-E\hat {f}_{X,n}I_{[-B,B]}\|_{1}\). Clearly,
$$ \biggl|\sum_{k}\varphi(x-k) (H_{j}\varphi ) (y-k) \biggr| \leq\sum_{k}\bigl|\varphi(x-k)\bigr|\bigl|(H_{j} \varphi) (y-k)\bigr| \leq\|\theta_{\varphi}\|_{\infty}\|H_{j} \varphi\|_{\infty}\lesssim2^{j\beta}; $$
define
$$ D(x,y)=\sum_{k}\varphi(x-k) (H_{j} \varphi) (y-k), $$
and set \(D_{j}(x,y)=2^{j}D(2^{j}x,2^{j}y)\); then
$$\begin{aligned} \hat{f}_{X,n}(x) =&\sum_{k}\hat{\alpha}_{jk}\varphi_{jk}(x) \\ =&\sum_{k} \Biggl(\frac{1}{n}\sum _{i=1}^{n}(H_{j}\varphi)_{jk}(Y_{i}) \Biggr)\varphi_{jk}(x) \\ =&\frac{1}{n}\sum_{i=1}^{n}\sum _{k}(H_{j}\varphi )_{jk}(Y_{i}) \varphi_{jk}(x) \\ =&\frac{1}{n}\sum_{i=1}^{n}D_{j}(x,Y_{i}). \end{aligned}$$
We know that
$$E\|\hat{f}_{X,n}I_{[-B,B]}-E\hat{f}_{X,n}I_{[-B,B]} \| _{1}=\int_{-B}^{B} E| \hat{f}_{X,n}-E\hat{f}_{X,n}|\,dx, $$
now we estimate \(E|\hat{f}_{X,n}-E\hat{f}_{X,n}|\). Using Rosenthal’s inequality, Lemma 2.6, one gets
$$ \begin{aligned}[b] E |\hat{f}_{X,n}-E\hat{f}_{X,n} | &\leq \frac{1}{n} \Biggl(\sum_{i=1}^{n}E\bigl|D_{j}(x,Y_{i})-E D_{j}(x,Y_{i})\bigr|^{2} \Biggr)^{1/2} \\ &\leq\frac{1}{n^{1/2}} \bigl(E\bigl|D_{j}(x,Y_{1})\bigr| ^{2} \bigr)^{1/2} \\ &\leq \frac{2^{j}}{n^{1/2}} \biggl(\int \biggl(\sum_{k}\bigl|(H_{j} \varphi) \bigl(2^{j}y-k\bigr)\bigr|\bigl|\varphi\bigl(2^{j}x-k\bigr)\bigr| \biggr) ^{2}f_{Y}(y)\,dy \biggr)^{1/2} \\ &\lesssim\frac{2^{j}}{n^{1/2}}2^{j\beta}\|\theta_{\varphi}\|_{\infty}\|f_{Y}\|_{1}^{1/2}. \end{aligned} $$
(21)
Then \(E\|\hat{f}_{X,n}I_{[-B,B]}-E\hat {f}_{X,n}I_{[-B,B]}\|_{1}\lesssim\frac{2^{j(1+\beta )}}{n^{1/2}}\). Taking \(2^{j}\sim n^{\frac{1-\epsilon}{2+2\beta}}\), this bound is of order \(n^{-\frac{\epsilon}{2}}\), and one gets
$$\lim_{n\rightarrow\infty}E\|\hat{f}_{X,n}I_{[-B,B]}-E\hat {f}_{X,n}I_{[-B,B]}\|_{1}=0. $$
 □
Remark
If g is the Dirac delta function δ (so that \(\tilde{g}\equiv1\)), then the conclusions for the additive noise model reduce to the results for the classical noiseless model.

Acknowledgements

The authors thank Professor Youming Liu for his profound insight and helpful suggestions. This work was supported by National Natural Science Foundation of China (No. 11271038), CSC Foundation (No. 201308110227) and Fundamental Research Fund of BJUT (No. 006000514313002).
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
References
2. Härdle, W, Kerkyacharian, G, Picard, D, Tsybakov, A: Wavelets, Approximation and Statistical Applications. Springer, New York (1997)
4. Donoho, DL, Johnstone, IM, Kerkyacharian, G, Picard, D: Density estimation by wavelet thresholding. Ann. Stat. 24, 508-539 (1996)
6. Chacón, JE, Rodríguez-Casal, A: On the \(L_{1}\)-consistency of wavelet density estimates. Can. J. Stat. 33(4), 489-496 (2005)
8. Liu, MC, Taylor, RL: A consistent nonparametric density estimator for the deconvolution problem. Can. J. Stat. 17(4), 427-438 (1989)
9. Ramírez, P, Vidakovic, B: Wavelet density estimation for stratified size-biased sample. J. Stat. Plan. Inference 140(2), 419-432 (2010)
12. Li, R, Liu, YM: Wavelet optimal estimations for a density with some additive noises. Appl. Comput. Harmon. Anal. 36, 416-433 (2014)
Metadata
Title: The mean consistency of wavelet density estimators
Authors: Zijuan Geng, Jinru Wang
Publication date: 01.12.2015
Publisher: Springer International Publishing
Published in: Journal of Inequalities and Applications, Issue 1/2015
Electronic ISSN: 1029-242X
DOI: https://doi.org/10.1186/s13660-015-0636-1
