
Local polynomial modelling of the conditional quantile for functional data

Published in: Statistical Methods & Applications

Abstract

As prediction problems are of great interest, many tools, based on different methods and devoted to various contexts, have been developed in the statistical literature. The contribution of this paper is the study of local linear nonparametric estimation of the quantile of a scalar response variable given a functional covariate. The covariate is a random variable taking values in a semi-metric space, possibly of infinite dimension, which makes it possible to deal with curves. We first establish pointwise and uniform almost-complete convergence, with rates, of the conditional distribution function estimator. We then deduce the uniform almost-complete convergence of the resulting local linear conditional quantile estimator. We also spell out the application of our results to the multivariate case as well as to the particular case of the kernel method. Finally, a real data study compares our conditional median estimator with several other predictive tools.


Fig. 1
Fig. 2


Notes

  1. Let \((z_n)_{n\in {\mathbb N}^{\star }}\) be a sequence of real random variables. We say that \((z_n)_{n\in {\mathbb N}^{\star }}\) converges almost-completely (a.co.) toward zero if, and only if, \(\forall \epsilon > 0\), \(\sum _{n=1}^\infty { I}\!{ P}(|z_n| >\epsilon ) < \infty \). Moreover, let \((u_n)_{n\in {\mathbb N}^*}\) be a sequence of positive real numbers; we say that \(z_n = O(u_n)\) a.co. if, and only if, \(\exists \epsilon > 0\) such that \(\sum _{n=1}^\infty { I}\!{ P}(|z_n| >\epsilon u_n) < \infty \). This kind of convergence implies both almost-sure convergence and convergence in probability (cf. Ferraty and Vieu 2006 for details).
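The implication claimed at the end of this note follows from the Borel–Cantelli lemma; a short derivation, for completeness:

```latex
% If the series of probabilities converges, Borel--Cantelli gives a null limsup event:
\sum_{n=1}^{\infty} \mathbb{P}\bigl(|z_n| > \epsilon\bigr) < \infty
\;\Longrightarrow\;
\mathbb{P}\Bigl(\limsup_{n\to\infty}\,\{|z_n| > \epsilon\}\Bigr) = 0 .
```

Hence, almost surely, \(|z_n|\le \epsilon \) for all \(n\) large enough; letting \(\epsilon \) decrease to \(0\) along a countable sequence yields \(z_n \rightarrow 0\) almost surely. Convergence in probability is immediate, since the terms of a convergent series tend to zero.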

References

  • Baíllo A, Grané A (2009) Local linear regression for functional predictor and scalar response. J Multivar Anal 100:102–111

  • Barrientos-Marin J, Ferraty F, Vieu P (2010) Locally modelled regression and functional data. J Nonparametr Stat 22:617–632

  • Benhenni K, Ferraty F, Rachdi M, Vieu P (2007) Local smoothing regression with functional data. Comput Stat 22(3):353–369

  • Berlinet A, Elamine A, Mas A (2011) Local linear regression for functional data. Inst Stat Math 63:1047–1075

  • Boj E, Delicado D, Fortiana J (2010) Distance-based local linear regression for functional predictor. Comput Stat Data Anal 54(2):429–437

  • Cardot H, Crambes C, Sarda P (2004) Conditional quantiles with functional covariates: an application to ozone pollution forecasting. In: Antoch J (ed) COMPSTAT 2004—proceedings in computational statistics. Physica Verlag, Heidelberg

  • Chu C-K, Marron J-S (1991) Choosing a kernel regression estimator. With comments and a rejoinder by the authors. Stat Sci 6:404–436

  • Demongeot J, Laksaci A, Madani F, Rachdi M (2011) A fast functional locally modelled conditional density and mode for functional time series. In: Recent advances in functional data analysis and related topics. Contributions to statistics. Physica-Verlag/Springer, pp 85–90. doi:10.1007/978-3-7908-2736-1-13

  • Demongeot J, Laksaci A, Madani F, Rachdi M (2013) Functional data analysis: conditional density estimation and its application. Statistics 47(1):26–44

  • Demongeot J, Laksaci A, Rachdi M, Rahmani S (2014) On the local linear modelization of the conditional distribution for functional data. Sankhyā Ser A 76(2):329–355

  • El Methni M, Rachdi M (2010) Local weighted average estimation of the regression operator for functional data. Commun Stat Theory Methods 40(17):3141–3153

  • Fan J, Gijbels I (1996) Local polynomial modelling and its applications. Chapman & Hall, London

  • Ferraty F, Laksaci A, Vieu P (2006) Estimating some characteristics of the conditional distribution in nonparametric functional models. Stat Inference Stoch Process 9:47–76

  • Ferraty F, Vieu P (2006) Nonparametric functional data analysis. Theory and practice. Springer Series in Statistics, New York

  • Ferraty F, Van Keilegom I, Vieu P (2008) On the validity of the bootstrap in nonparametric functional regression. Scand J Stat 37(2):286–306

  • Ferraty F, Vieu P (2009) Additive prediction and boosting for functional data. Comput Stat Data Anal 53(4):1400–1413

  • Ferraty F, Laksaci A, Tadj A, Vieu P (2010) Rate of uniform consistency for nonparametric estimates with functional variables. J Stat Plan Inference 140:335–352

  • Ferraty F, Vieu P (2011) Richesse et complexité des donnés fonctionnelles. Revue Modulad 43:25–43

  • Ramsay J-O, Silverman B-W (1997) Functional data analysis. Springer Series in Statistics, New York

  • Ramsay J-O, Silverman B-W (2002) Applied functional data analysis. Methods and case studies. Springer Series in Statistics, New York

Acknowledgments

The authors are grateful to professors Christophe Crambes and Philippe Vieu for providing the pollution data. Thanks to the two reviewers for their constructive comments and to the editor of SMA.

Corresponding author

Correspondence to Fatiha Messaci.

Appendix

In what follows, when no confusion is possible, we will denote by \(C\) or \(C'\) some strictly positive generic constants. Moreover, we put, for any \(x\in \mathcal{F}\), and for all \(i=1,\ldots ,n\):

$$\begin{aligned} K_i(x)=K\left( h_{K}^{-1}\left| \delta (x,X_i)\right| \right) , \; \beta _i(x)=\beta (X_i,x) \hbox { and } J_i(y)=J\left( h_{J}^{-1}(y-Y_{i})\right) . \end{aligned}$$

Proof of Lemma 2.3

Let us define \(\widetilde{W}_{12}(x):=\frac{{W}_{12}(x)}{\mathrm{I}\!\mathrm{E}{W}_{12}(x)}\). Remark that the equidistribution of the pairs \((X_i,Y_i)\), assumption (H4) and the fact that \(\mathrm{I}\!\mathrm{E}\left[ \widetilde{W}_{12}(x)\right] =1\) lead directly, for all \(y\in S_{{\mathbb {R}}}\), to:

(6)

This last expectation can be easily computed by means of Fubini's theorem and the fact that \(J'=J_0\):

$$\begin{aligned} \mathrm{I}\!\mathrm{E}\left[ J_{2}(y)|X\right]&= \int _{{\mathbb {R}}}J\left( \frac{y-u}{h_J}\right) dP(u|X)\nonumber \\&= \int _{{\mathbb {R}}}\int _{-\infty }^{\frac{y-u}{h_J}}J_0\left( v\right) dv\, dP(u|X)\nonumber \\&= \int _{{\mathbb {R}}}\int _{{\mathbb {R}}}J_0\left( v\right) {1\!\!1}_{[v,+\infty [}\left( \frac{y-u}{h_J}\right) dP(u|X) \; dv\nonumber \\&= \int _{{\mathbb {R}}}J_0\left( v\right) F^X_Y(y-vh_J)dv. \end{aligned}$$
(7)

Since \(J_0\) integrates to \(1\) and is supported on \([-1,1]\), we get:

$$\begin{aligned} \mathrm{I}\!\mathrm{E}\left[ J_{1}(y)|X\right] -F_Y^x(y) =\int _{-1}^1J_0(v)\left( F^X_Y(y-vh_J) -F_Y^x(y)\right) dv. \end{aligned}$$
(8)

By using the last relation, together with hypothesis (H2), we obtain the claimed result.
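The change of variables behind (7)–(8) can be checked numerically; a minimal sketch, assuming, purely for illustration, that \(J_0\) is the Epanechnikov kernel, \(J\) its primitive, and \(F^X_Y\) the standard normal distribution function (none of these specific choices are prescribed by the paper):

```python
import math

def J0(v):                 # Epanechnikov kernel: integrates to 1, supported on [-1, 1]
    return 0.75 * (1.0 - v * v) if -1.0 <= v <= 1.0 else 0.0

def J(t):                  # its primitive: J(t) = integral of J0 over ]-inf, t]
    if t <= -1.0:
        return 0.0
    if t >= 1.0:
        return 1.0
    return 0.75 * (t - t ** 3 / 3.0) + 0.5

def F(y):                  # conditional d.f. of Y given X (standard normal, illustrative)
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def f(y):                  # its density
    return math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)

def lhs(y, h, lo=-8.0, hi=8.0, m=16000):
    # E[ J((y - U)/h) ] with U ~ F, by a midpoint Riemann sum
    du, total = (hi - lo) / m, 0.0
    for k in range(m):
        u = lo + (k + 0.5) * du
        total += J((y - u) / h) * f(u)
    return total * du

def rhs(y, h, m=4000):
    # integral of J0(v) * F(y - v h) over [-1, 1], the form reached in Eq. (7)
    dv, total = 2.0 / m, 0.0
    for k in range(m):
        v = -1.0 + (k + 0.5) * dv
        total += J0(v) * F(y - v * h)
    return total * dv

print(abs(lhs(0.3, 0.2) - rhs(0.3, 0.2)) < 1e-5)   # True up to quadrature error
```

The two quadratures agree to within discretisation error, which is the identity used to turn the conditional expectation into the bias integral of (8).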

Proof of Lemma 2.4

We proceed in two steps as follows:

  1.

    We first show that the proof of Lemma 4.4 in Barrientos-Marin et al. (2010), adapted to the fact that \(J\) is bounded, allows us to write, for any \(x\in S_{\mathcal{F}}\), any \(y\in \mathrm {IR}\) and any \(\epsilon >0\):

    $$\begin{aligned} {{ I}\!{ P}}\left[ | \widehat{F}_{N}^{ x}(y)-\mathrm{I}\!\mathrm{E}\ \widehat{F}_{N}^{x}(y)|>\epsilon \sqrt{\frac{\ln n}{n\varphi _{x}(h_K)}}\right] \le C'n^{-C\epsilon ^2}. \end{aligned}$$
    (9)

Indeed

$$\begin{aligned} \widehat{F}_{N}^{ x}(y)&= \frac{1}{n(n-1) \mathrm{I}\!\mathrm{E}\left( W_{12}(x)\right) }\sum _{{i,j=1}}^n W_{ij}(x)J(h_{J}^{-1}(y-Y_{j}))\nonumber \\ {}&= Q(x)[S_{1}^x(y)S_{2}^x-S_3^x(y)S_4^x], \end{aligned}$$
(10)

where

$$\begin{aligned} Q(x)&= \frac{n^2h_{K}^{2}\varphi _{x}^{2}(h_{K})}{n(n-1) \mathrm{I}\!\mathrm{E}\left( W_{12}(x)\right) },\; \; S_{1}^x(y)=\frac{1}{n}\sum _{j=1}^n \frac{K_{j}(x)J_{j}(y)}{\varphi _{x}(h_{K})},\\ S_2^x(y) {:=}&S_{2}^x=\frac{1}{n}\sum _{i=1}^n \frac{K_{i}(x)\beta _{i}^{2}(x)}{h_{K}^{2}\varphi _{x}(h_{K})},\; \; S_{3}^x(y)= \frac{1}{n}\sum _{j=1}^n\frac{K_{j}(x)\beta _{j}(x)J_{j}(y)}{h_{K} \varphi _{x}(h_{K})}\\ \text{ and } \; \; S_4^x(y)&{:=} S_{4}^x=\frac{1}{n}\sum _{i=1}^n \frac{K_{i}(x)\beta _{i}(x)}{h_{K}\varphi _{x}(h_{K})}. \end{aligned}$$
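The factorisation in (10) is a purely algebraic identity and can be verified numerically; a minimal sketch, assuming the local linear weights \(W_{ij}(x)=\beta _i(x)(\beta _i(x)-\beta _j(x))K_i(x)K_j(x)\) of Barrientos-Marin et al. (2010), with placeholder values for \(K_i\), \(\beta _i\), \(J_i\) standing in for actual kernel evaluations:

```python
import random

random.seed(0)
n, hK, phi = 50, 0.3, 0.1            # sample size, bandwidth h_K, small-ball proxy phi_x(h_K)
# Placeholder weights: in the paper K_i(x), beta_i(x), J_i(y) come from kernels
# evaluated at functional data; arbitrary values suffice to check the algebra.
K = [random.random() for _ in range(n)]
beta = [random.uniform(-hK, hK) for _ in range(n)]
Jv = [random.random() for _ in range(n)]

# Direct double sum with W_ij(x) = beta_i (beta_i - beta_j) K_i K_j (diagonal vanishes)
double_sum = sum(beta[i] * (beta[i] - beta[j]) * K[i] * K[j] * Jv[j]
                 for i in range(n) for j in range(n) if i != j)

# Factorised S-terms as defined below Eq. (10)
S1 = sum(K[j] * Jv[j] for j in range(n)) / (n * phi)
S2 = sum(K[i] * beta[i] ** 2 for i in range(n)) / (n * hK ** 2 * phi)
S3 = sum(K[j] * beta[j] * Jv[j] for j in range(n)) / (n * hK * phi)
S4 = sum(K[i] * beta[i] for i in range(n)) / (n * hK * phi)

factorised = n ** 2 * hK ** 2 * phi ** 2 * (S1 * S2 - S3 * S4)
print(abs(double_sum - factorised) < 1e-6)   # True: the decomposition is exact
```

The diagonal terms \(i=j\) vanish because \(\beta _i(\beta _i-\beta _i)=0\), which is why the full product of sums equals the sum over \(i\ne j\).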

So, one has

$$\begin{aligned}&\widehat{F}_{N}^{ x}(y)-\mathrm{I}\!\mathrm{E}\ \widehat{F}_{N}^{x}(y)\nonumber \\&\quad =Q(x)[S_{1}^x(y)S_{2}^x-\mathrm{I}\!\mathrm{E}(S_{1}^x(y) S_{2}^x)-(S_{3}^x(y)S_{4}^x-\mathrm{I}\!\mathrm{E}(S_{3}^x(y)S_{4}^x))] \end{aligned}$$
(11)

and since

$$\begin{aligned} S_{1}^x(y)S_{2}^x-\mathrm{I}\!\mathrm{E}(S_{1}^x(y)S_{2}^x)&= (S_{1}^x(y)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y)))(S_{2}^x-\mathrm{I}\!\mathrm{E}(S_{2}^x)) +(S_{2}^x-\mathrm{I}\!\mathrm{E}(S_{2}^x))\mathrm{I}\!\mathrm{E}(S_{1}^x(y))\\&+(S_{1}^x(y)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y)))\mathrm{I}\!\mathrm{E}(S_{2}^x) +\mathrm{I}\!\mathrm{E}(S_{1}^x(y))\mathrm{I}\!\mathrm{E}(S_{2}^x)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y)S_{2}^x),\\ S_{3}^x(y)S_{4}^x-\mathrm{I}\!\mathrm{E}(S_{3}^x(y)S_{4}^x)&= (S_{3}^x(y)-\mathrm{I}\!\mathrm{E}(S_{3}^x(y))(S_{4}^x-\mathrm{I}\!\mathrm{E}(S_{4}^x)) +(S_{4}^x-\mathrm{I}\!\mathrm{E}(S_{4}^x))\mathrm{I}\!\mathrm{E}(S_{3}^x(y))\\&+(S_{3}^x(y)-\mathrm{I}\!\mathrm{E}(S_{3}^x(y)))\mathrm{I}\!\mathrm{E}(S_{4}^x) +\mathrm{I}\!\mathrm{E}(S_{3}^x(y))\mathrm{I}\!\mathrm{E}(S_{4}^x)-\mathrm{I}\!\mathrm{E}(S_{3}^x(y)S_{4}^x) \end{aligned}$$

and \(Q(x)=O(1)\) (cf. Barrientos-Marin et al. 2010), we have to show that for any \(i=1,2,3,4\)

$$\begin{aligned} {{ I}\!{ P}}\left[ |S_i^x(y)-\mathrm{I}\!\mathrm{E}(S_i^x(y))|>\epsilon \sqrt{\frac{\ln n}{n\varphi _{x}(h_K)}}\right] \le C'n^{-C\epsilon ^2},\ \ \mathrm{I}\!\mathrm{E}(S_i^x(y))=O(1) \end{aligned}$$

and that almost-surely

$$\begin{aligned} \mathrm{I}\!\mathrm{E}(S_{1}^x(y))\mathrm{I}\!\mathrm{E}(S_{2}^x)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y) S_{2}^x)=o\left( \sqrt{\frac{\ln n}{n\varphi _{x}(h_K)}}\right) \end{aligned}$$

and

$$\begin{aligned} \mathrm{I}\!\mathrm{E}(S_{3}^x(y))\mathrm{I}\!\mathrm{E}(S_{4}^x)-\mathrm{I}\!\mathrm{E}(S_{3}^x(y)S_{4}^x)=o\left( \sqrt{\frac{\ln n}{n\varphi _{x}(h_K)}}\right) . \end{aligned}$$
  • Firstly

    $$\begin{aligned} S_{1}^x(y)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y))=\frac{1}{n}\sum _{j=1}^n \frac{K_{j}(x)J_{j}(y)-\mathrm{I}\!\mathrm{E}(K_{j}(x)J_{j}(y))}{\varphi _{x}(h_K)}{:=} \frac{1}{n}\sum _{j=1}^nZ_{j}. \end{aligned}$$

Using Lemma A.1 in Barrientos-Marin et al. (2010) and the fact that \(J\) is bounded, we get:

$$\begin{aligned} \mathrm{I}\!\mathrm{E}|Z_{j}^{m}|&= \mathrm{I}\!\mathrm{E}\left| \varphi _{x}(h_K)^{-m}\left( K_{j}(x)J_{j}(y) -\mathrm{I}\!\mathrm{E}(K_{j}(x)J_{j}(y))\right) ^{m}\right| \\&= \varphi _{x}(h_K)^{-m}\mathrm{I}\!\mathrm{E}\left| \sum _{k=0}^m C^{k}_{m} (-1)^{k}(K_{j}(x)J_{j}(y))^{k}(\mathrm{I}\!\mathrm{E}(K_{j}(x)J_{j}(y)))^{m-k}\right| \\&\le C\varphi _{x}(h_K)^{-m}\sum _{k=0}^m \left| C^{k}_{m} (\mathrm{I}\!\mathrm{E}(K_{1}(x)))^{m-k}(\mathrm{I}\!\mathrm{E}(K^{k}_{1}(x)))\right| \\&\le C\varphi _{x}(h_K)^{-m}\sum _{k=0}^m C^{k}_{m} \varphi _{x}(h_K)^{-k+1}\varphi _{x}(h_K)^{m}\\&\le C 2^{m}\sup _{k\in \{0,...,m\}}\varphi _{x}(h_K)^{-k+1}\\&\le C 2^{m}\varphi _{x}(h_K)^{-m+1}=O(\varphi _{x}(h_K)^{-m+1}), \end{aligned}$$

where \(\displaystyle {C^{k}_{m}=\frac{m!}{k!(m-k)!}}.\)

Since \(\displaystyle {u_{n}=\sqrt{\frac{\ln n}{n\varphi _{x}(h_K)}}\rightarrow 0}\) and by hypothesis (H5), we can apply Corollary A.8 of the Appendix of Ferraty and Vieu (2006) to obtain:

$$\begin{aligned} {{ I}\!{ P}}\left[ |S_{1}^x(y)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y))|> \epsilon \sqrt{\frac{\ln n}{n\varphi _{x}(h_K)}}\right] \le C'n^{-c\epsilon ^2}. \end{aligned}$$
(12)
  • On the other hand, we have:

    $$\begin{aligned} {{ I}\!{ P}}\left[ |S_{2}^x-\mathrm{I}\!\mathrm{E}(S_{2}^x)|> \epsilon \sqrt{\frac{\ln n}{n\varphi _{x}(h_K)}}\right] \le C'n^{-c\epsilon ^2} \quad \text{ and } \quad \mathrm{I}\!\mathrm{E}(S_{2}^x)=O(1), \end{aligned}$$
    (13)

    for which the proofs are given in Barrientos-Marin et al. (2010).

  • Moreover, applying Lemma A.1 in Barrientos-Marin et al. (2010) and the fact that \(J\) is bounded, we get:

    $$\begin{aligned} \mathrm{I}\!\mathrm{E}(S_{1}^x(y))=\frac{1}{\varphi _{x}(h_K)}(\mathrm{I}\!\mathrm{E}(K_{1}(x)J_{1}(y)))\le C. \end{aligned}$$
    (14)
  • Study of the term \(\mathrm{I}\!\mathrm{E}(S_{1}^x(y))\mathrm{I}\!\mathrm{E}(S_{2}^x)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y)S_{2}^x)\). Remark that:

    $$\begin{aligned} \mathrm{I}\!\mathrm{E}(S_{1}^x(y))\mathrm{I}\!\mathrm{E}(S_{2}^x)&= \frac{1}{n^{2}}\sum _{j=1}^n \sum _{i=1}^n\frac{\mathrm{I}\!\mathrm{E}(K_{i}(x)J_{i}(y))\mathrm{I}\!\mathrm{E}(K_{j}(x)\beta _{j}^{2}(x))}{h^{2}_K\varphi _{x}^{2}(h_{K})},\\ \mathrm{I}\!\mathrm{E}(S_{1}^x(y)S_{2}^x)&= \frac{1}{n^{2}h_K^{2}\varphi _{x}^{2}(h_K)} \sum _{i\ne j}^n\mathrm{I}\!\mathrm{E}(K_{j}(x)J_{j}(y))\mathrm{I}\!\mathrm{E}(K_{i}(x)\beta _{i}^{2}(x))\\&\quad +\frac{n}{n^{2}h^{2}_K \varphi _{x}^{2}(h_K)}\mathrm{I}\!\mathrm{E}(K_{1}(x)^{2}\beta _{1}^2(x)J_1(y)) \end{aligned}$$

    and

    $$\begin{aligned}&\frac{1}{n^{2}h^{2}_K\varphi _{x}^{2}(h_K)}\sum _{i\ne j}^n\mathrm{I}\!\mathrm{E}(K_{j}(x)J_{j}(y))\mathrm{I}\!\mathrm{E}(K_{i}(x)\beta _{i}^{2}(x))\\&\quad =\frac{n(n-1)\mathrm{I}\!\mathrm{E}(K_{1}(x)J_{1}(y))\mathrm{I}\!\mathrm{E}(K_{1}(x)\beta _{1}^{2}(x))}{n^{2}h_K^{2}\varphi _{x}^{2}(h_K)}. \end{aligned}$$

    Using, once again, Lemma A.1 in Barrientos-Marin et al. (2010) and the boundedness of \(J\), we obtain:

    $$\begin{aligned} \frac{1}{nh_K^{2} \varphi _{x}^{2}(h_K)}\mathrm{I}\!\mathrm{E}(K_{1}(x)^{2}\beta _{1}^{2}(x)J_{1}(y))&\le \frac{C\mathrm{I}\!\mathrm{E}(K_{1}(x)^{2}\beta _{1}^{2}(x))}{nh_K^{2} \varphi _{x}^{2}(h_K)}\\&\le \frac{Ch^{2}_K\varphi _{x}(h_K)}{nh^{2}_K\varphi _{x}^{2}(h_K)}\\&= O\left( \frac{1}{n\varphi _{x}(h_K)}\right) . \end{aligned}$$

    So, one has:

    $$\begin{aligned}&\mathrm{I}\!\mathrm{E}(S_{1}^x(y))\mathrm{I}\!\mathrm{E}(S_{2}^x)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y)S_{2}^x) \\&\quad =\left( 1-\frac{n(n-1)}{n^{2}}\right) h_K^{-2} \varphi _{x}^{-2}(h_K)\mathrm{I}\!\mathrm{E}(K_{1}(x)\beta _{1}^{2}) \mathrm{I}\!\mathrm{E}(K_{1}(x)J_{1}(y))\\&\qquad +O\left( \frac{1}{n\varphi _{x}(h_K)}\right) \end{aligned}$$

and

$$\begin{aligned} \frac{1}{nh_K^{2}\varphi _{x}^{2}(h_K)}\mathrm{I}\!\mathrm{E}(K_{1}(x)\beta _{1}^{2}) \mathrm{I}\!\mathrm{E}(K_{1}(x)J_{1}(y))\le \frac{Ch_K^{2}\varphi _{x}(h_K)}{nh_K^{2}\varphi _{x}^{2}(h_K)}=O \left( \frac{1}{n\varphi _{x}(h_K)}\right) , \end{aligned}$$

from which we can derive:

$$\begin{aligned} \mathrm{I}\!\mathrm{E}(S_{1}^x(y))\mathrm{I}\!\mathrm{E}(S_{2}^x)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y)S_{2}^x)&= O\left( \frac{1}{n\varphi _{x}(h_K)}\right) , \end{aligned}$$

and hypothesis (H5) allows us to obtain:

$$\begin{aligned} \mathrm{I}\!\mathrm{E}(S_{1}^x(y))\mathrm{I}\!\mathrm{E}(S_{2}^x)-\mathrm{I}\!\mathrm{E}(S_{1}^x(y)S_{2}^x)&= o\left( \sqrt{\frac{\ln n}{n\varphi _{x}(h_K)}}\right) . \end{aligned}$$
(15)
  • We can deduce, in the same way, the results for the terms depending on \(S_3^x(y)\) and/or \(S_4^x\).

  2.

    In order to show the uniform convergence of \(\widehat{F}^x_Y\) in \(y\in S_{\mathrm {IR}}\), remark first that \(S_{\mathrm {IR}}\) is compact, so there exist \(s_n\) real numbers \(t_1,\ldots ,t_{s_n}\) such that:

    $$\begin{aligned} S_{\mathrm {IR}}\subseteq \bigcup _{k=1}^{s_n}]t_k-l_n,t_k+l_n{[}, \end{aligned}$$

where

$$\begin{aligned} l_n=n^{-\xi -\frac{1}{2}} \ \ \text{ and } \ s_n=Cl_n^{-1}. \end{aligned}$$
(16)
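The covering in (16) can be made concrete; a minimal sketch, with an interval \([a,b]\) standing in (illustratively) for the compact \(S_{\mathrm {IR}}\), showing that \(s_n=Cl_n^{-1}\) centres of radius \(l_n\) suffice and that the nearest centre, called \(t_y\) below, is always within \(l_n\):

```python
import math

# Cover the compact S_IR = [a, b] by s_n open intervals ]t_k - l_n, t_k + l_n[
a, b, n, xi = 0.0, 1.0, 10_000, 0.25
l_n = n ** (-xi - 0.5)                 # radius l_n = n^{-xi - 1/2}, as in Eq. (16)
s_n = math.ceil((b - a) / (2 * l_n))   # s_n = C * l_n^{-1} centres suffice
centres = [a + (2 * k + 1) * l_n for k in range(s_n)]

def t_of(y):
    # t_y = argmin over t_k of |y - t_k|; closed form on a regular grid
    k = min(max(int((y - a) / (2 * l_n)), 0), s_n - 1)
    return centres[k]

y = 0.4321
print(abs(y - t_of(y)) <= l_n)   # True: every y in [a, b] lies within l_n of its centre
```

This \(l_n\)-closeness of \(t_y\) to \(y\) is exactly what drives the bounds on \(A_1\) and \(A_3\) below.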

Define \(t_y=\arg \min _{t\in \{t_1,\ldots ,t_{s_n}\}}|y-t|\). Then, we can write:

$$\begin{aligned} \sup _{y\in S_{\mathrm {IR}}} |\widehat{F}_N^x(y)- \mathrm{I}\!\mathrm{E}\widehat{F}_N^x(y)|&\le \sup _{y\in S_{\mathrm {IR}}} |\widehat{F}_N^x(y)- \widehat{F}_N^x(t_y)| +\sup _{y\in S_{\mathrm {IR}}} |\widehat{F}_N^x(t_y)- \mathrm{I}\!\mathrm{E}\widehat{F}_N^x(t_y)|\\&+\sup _{y\in S_{\mathrm {IR}}} |\mathrm{I}\!\mathrm{E}\widehat{F}_N^x(t_y)- \mathrm{I}\!\mathrm{E}\widehat{F}_N^x(y)|\\&:= A_1+A_2+A_3, \end{aligned}$$

where the terms \(A_i\), \(i=1,2,3\), are treated below.

As \(J\) has a bounded first derivative, we get:

$$\begin{aligned} A_1&= \sup _{y\in S_{\mathrm {IR}}} |\widehat{F}_N^x(y)- \widehat{F}_N^x(t_y)|\\ {}&\le \sup _{y\in S_{\mathrm {IR}}}\frac{1}{n(n-1)\mathrm{I}\!\mathrm{E}W_{1,2}(x)} \sum _{i\ne j}W_{i,j}(x)\left| J\left( h_{J}^{-1}(y-Y_j)\right) -J\left( h_{J}^{-1}(t_y-Y_j)\right) \right| \\&\le C \Big |\sup _{y\in S_{\mathrm {IR}}}\frac{1}{n(n-1)\mathrm{I}\!\mathrm{E}W_{1,2}(x)}\frac{|y-t_y|}{h_{J}}\sum _{i\ne j}W_{i,j}(x) \Big |\le C\frac{l_n}{h_{J}}|\widehat{F}_D^x|. \end{aligned}$$

This, together with Lemma 2.2 and the hypothesis \(n^{\xi }h_{J} \rightarrow \infty \), allows us to obtain:

$$\begin{aligned} A_1=o\left( \sqrt{\frac{\ln n}{n\, \varphi _x(h_K)}}\right) \ \ \text{ a.co. } \end{aligned}$$
(17)

and we can derive:

$$\begin{aligned} A_3:=\sup _{y\in S_{\mathrm {IR}}} |\mathrm{I}\!\mathrm{E}\widehat{F}_N^x(y)- \mathrm{I}\!\mathrm{E}\widehat{F}_N^x(t_y)| =o\left( \sqrt{\frac{\ln n}{n\, \varphi _x(h_K)}}\right) \ \ \text{ a.co. } \end{aligned}$$
(18)

It remains to treat the term \(A_2=\sup _{y\in S_{\mathrm {IR}}} |\widehat{F}_N^x(t_y)- \mathrm{I}\!\mathrm{E}\widehat{F}_N^x(t_y)|\). To this end, we write:

$$\begin{aligned} {{ I}\!{ P}}\left[ A_2>\varepsilon \sqrt{\frac{\ln n}{n\,\varphi _x(h_K)}}\right] \!&= \! {{ I}\!{ P}}\left[ \max _{t_y\in \{t_1, \ldots ,t_{s_n}\}} |\widehat{F}_N^x(t_y)- \mathrm{I}\!\mathrm{E}\widehat{F}_N^x(t_y)|>\varepsilon \sqrt{\frac{\ln n}{n\, \varphi _x(h_K)}}\right] \\&\le \! s_n\max _{t_y\in \{t_1, \ldots ,t_{s_n}\}}{{ I}\!{ P}}\left[ |\widehat{F}_N^x(t_y)- \mathrm{I}\!\mathrm{E}\widehat{F}_N^x(t_y)|>\varepsilon \sqrt{\frac{\ln n}{n\, \varphi _x(h_K)}}\right] . \end{aligned}$$

In view of relation (9), and taking into account the fact that \(s_n=Cn^{\xi +\frac{1}{2}}\), we deduce that:

$$\begin{aligned} \sum _n s_n{{ I}\!{ P}}\left[ |S_1^x(t_y)-\mathrm{I}\!\mathrm{E}S_1^x(t_y)|>\varepsilon \sqrt{\frac{\ln n}{n\, \varphi _x(h_K)}}\right] <\infty , \end{aligned}$$

for an appropriate choice of \(\varepsilon \), so one has:

$$\begin{aligned} A_2= O\left( \sqrt{\frac{\ln n}{n\, \varphi _x(h_K)}}\right) \ \ \text{ a.co. } \end{aligned}$$

In order to obtain the uniform convergence (in \(x\)) of \(\widehat{F}_Y^x\), we state a uniform version of Lemma A.1 in Barrientos-Marin et al. (2010), whose proof works exactly as that of the cited lemma.

Lemma 4.1

Under assumptions (U1), (U3), (U4) and (U6), we obtain that:

  1. (i)

    \(\forall (p,l)\in {\mathrm {I\!N}}^{\star } \times {\mathrm {I\!N}}, \sup _{x \in S_{\mathcal{F}}}\mathrm{I}\!\mathrm{E}(K_1^p(x)|\beta _1^l(x)|)\le Ch_{K}^l\varphi (h_{K})\)

  2. (ii)

    \( \inf _{x \in S_{\mathcal{F}}}\mathrm{I}\!\mathrm{E}(K_1(x)\beta _1^2(x))> Ch_{K}^2\varphi (h_{K}).\)

Proof of Lemma 2.6

We have that:

$$\begin{aligned} \widehat{F}_D^x= Q(x)\left[ Q_{2}(x)Q_{4}(x)-Q^{2}_{3}(x)\right] , \end{aligned}$$

where

$$\begin{aligned} Q_{p}(x)=\frac{1}{n\varphi _{x}(h_{K})}\sum _{i=1}^n \frac{K_{i}(x)\beta _{i}^{p-2}(x)}{h_{K}^{p-2}} \quad \text{ for }\; \; p=2,3,4 \end{aligned}$$

and

$$\begin{aligned} Q(x)=\frac{n^2h_{K}^{2}\varphi _{x}^{2}(h_{K})}{n(n-1) \mathrm{I}\!\mathrm{E}\left( W_{12}(x)\right) }. \end{aligned}$$

By following the same steps as in the proof of Lemma 4.4 in Barrientos-Marin et al. (2010), but using Lemma 4.1 instead of Lemma A.1 therein, we obtain, under hypotheses (U1), (U3), (U4) and (U6), that:

$$\begin{aligned} Q(x)=O(1),\,\, \mathrm{I}\!\mathrm{E}Q_{p}(x)=O(1)\,\,\text{ uniformly } \text{ on }\,x \end{aligned}$$
(19)

and

$$\begin{aligned} \sup _{x\in S_\mathcal{F}}\left[ \mathrm{I}\!\mathrm{E}\left[ Q_{2}(x)\right] \mathrm{I}\!\mathrm{E}\left[ Q_{4}(x)\right] -\mathrm{I}\!\mathrm{E}\left[ Q_{2}(x)Q_{4}(x)\right] +Var\left[ Q_{3}(x)\right] \right] =o\left( \sqrt{\frac{\ln d_{n}}{n\, \varphi (h_K)}}\right) . \end{aligned}$$

It remains to show that, for any \( p=2,3,4\):

$$\begin{aligned} \sup _{x\in S_\mathcal{F}}\left| Q_{p}(x)-\mathrm{I}\!\mathrm{E}Q_{p}(x)\right| =O_{a.co}\left( \sqrt{\frac{\ln d_{n}}{n\, \varphi (h_K)}}\right) . \end{aligned}$$

To do that, we draw inspiration from Ferraty et al. (2010) and Demongeot et al. (2013). For this purpose, let us define:

$$\begin{aligned} j(x)= \arg \min _{j\in \{1,2,...,d_n\}}d(x, x_{j}). \end{aligned}$$
(20)
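The index \(j(x)\) in (20) is a nearest-centre lookup over the covering of \(S_{\mathcal F}\); a minimal sketch, with discretised curves standing in for elements of \(S_{\mathcal F}\) and the \(L^2\) distance as the semi-metric (both illustrative choices, not prescribed by the paper):

```python
import math
import random

random.seed(1)

def d(u, v):
    # Semi-metric: L2 distance between curves discretised on a common grid
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Hypothetical covering centres x_1, ..., x_{d_n} of S_F (random curves here)
d_n, grid = 25, 50
centres = [[random.gauss(0, 1) for _ in range(grid)] for _ in range(d_n)]

def j_of(x):
    # j(x) = argmin over j in {1, ..., d_n} of d(x, x_j), as in Eq. (20)
    return min(range(d_n), key=lambda j: d(x, centres[j]))

x = [random.gauss(0, 1) for _ in range(grid)]
j = j_of(x)
print(all(d(x, centres[j]) <= d(x, centres[k]) for k in range(d_n)))  # True: minimiser
```

Replacing \(x\) by its nearest centre \(x_{j(x)}\) is what reduces the supremum over the whole of \(S_{\mathcal F}\) to a maximum over the \(d_n\) centres in the study of \(F_2^p\).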

We will make use of the following inequality:

$$\begin{aligned} \sup _{x\in S_\mathcal{F}}\left| Q_{p}(x)-\mathrm{I}\!\mathrm{E}Q_{p}(x)\right|&\le \sup _{x\in S_\mathcal{F}}\left| Q_{p}(x)- Q_{p}(x_{j(x)} )\right| \nonumber \\&\quad +\sup _{x\in S_\mathcal{F}}\left| Q_{p}(x_{j(x)} )- \mathrm{I}\!\mathrm{E}Q_{p}(x_{j(x)} )\right| \nonumber \\&\quad +\sup _{x\in S_\mathcal{F}}\left| \mathrm{I}\!\mathrm{E}Q_{p}(x_{j(x)} )- \mathrm{I}\!\mathrm{E}Q_{p}(x )\right| \nonumber \\&:= F_1^{p}+F_2^{p}+F_3^{p}. \end{aligned}$$

Study of the terms \(F_1^{p}\) and \(F_3^{p}\).

We have that:

$$\begin{aligned} F_1^{p}&\le \frac{C}{nh_{K}^{p-2}\varphi (h_{K})}\sup _{x\in S_\mathcal{F}}\sum _{i=1}^{n}\left| K_{i}(x)\beta _{i}^{p-2}(x) {1\!\!1}_{B(x, h_{K})}(X_{i})\right. \\&\left. -K_{i}(x_{j(x)})\beta _{i}^{p-2}(x_{j(x)}) {1\!\!1}_{B(x_{j(x)}, h_{K})}(X_{i}) \right| \\&\le \frac{C}{nh_{K}^{p-2}\varphi (h_{K})}\sup _{x\in S_\mathcal{F}}\sum _{i=1}^{n}K_{i}(x){1\!\!1}_{B(x, h_{K})}(X_{i})\left| \beta _{i}^{p-2}(x)\right. \\&\quad \left. -\beta _{i}^{p-2}(x_{j(x)}) {1\!\!1}_{B(x_{j(x)}, h_{K})}(X_{i})\right| \\&\quad +\frac{C}{nh_{K}^{p-2}\varphi (h_{K})}\sup _{x\in S_\mathcal{F}}\sum _{i=1}^{n}\beta _{i}^{p-2}(x_{j(x)}) {1\!\!1}_{B(x_{j(x)}, h_{K})}(X_{i})\left| K_{i}(x){1\!\!1}_{B(x, h_{K})}(X_{i})\right. \\&\quad \left. -K_{i}(x_{j(x)}) \right| \\&:= T_{1,1}^{p}+T_{1,2}^{p}. \end{aligned}$$

For the term \(T_{1,1}^{p}\).

We infer, from hypothesis (U3), that:

$$\begin{aligned}&{1\!\!1}_{B(x, h_{K})}(X_{i})\left| \beta _{i}(x)-\beta _{i}(x_{j(x)}) {1\!\!1}_{B(x_{j(x)}, h_{K})}(X_{i})\right| \\&\quad \le C r_{n}{1\!\!1}_{B(x, h_{K})\bigcap {B(x_{j(x)}, h_{K})}}(X_{i})+C h_{K}{1\!\!1}_{B(x, h_{K})\bigcap \overline{B(x_{j(x)}, h_{K})}}(X_{i}) \end{aligned}$$

and

$$\begin{aligned}&{1\!\!1}_{B(x, h_{K})}(X_{i})\left| \beta _{i}^{2}(x) -\beta _{i}^{2}(x_{j(x)}) {1\!\!1}_{B(x_{j(x)}, h_{K})}(X_{i})\right| \\&\quad \le C r_{n}h_{K}{1\!\!1}_{B(x_{j(x)}, h_{K})\bigcap B(x, h_{K})}(X_{i})+C h_{K}^{2}{1\!\!1}_{B(x, h_{K})\bigcap \overline{B(x_{j(x)}, h_{K})}}(X_{i}). \end{aligned}$$

In the cases where \(p=3\) and \(p=4\), we get:

$$\begin{aligned}&{1\!\!1}_{B(x, h_{K})}(X_{i})\left| \beta _{i}^{p-2}(x)-\beta _{i}^{p-2}(x_{j(x)}) {1\!\!1}_{B(x_{j(x)}, h_{K})}(X_{i})\right| \\&\quad \le C r_{n}h_{K}^{p-3}{1\!\!1}_{B(x_{j(x)}, h_{K})\bigcap B(x, h_{K})}(X_{i})+C h_{K}^{p-2}{1\!\!1}_{B(x, h_{K})\bigcap \overline{B(x_{j(x)}, h_{K})}}(X_{i}). \end{aligned}$$

This permits us to conclude that:

$$\begin{aligned} T_{1,1}^{p}&\le \frac{C r_{n}}{nh_{K}\varphi (h_{K})}\sup _{x\in S_\mathcal{F}}\sum _{i=1}^{n}K_{i}(x){1\!\!1}_{B(x, h_{K})\bigcap B(x_{j(x)}, h_{K})}(X_{i})\nonumber \\&\quad +\frac{C}{n\varphi (h_{K})}\sup _{x\in S_\mathcal{F}}\sum _{i=1}^{n}K_{i}(x){1\!\!1}_{B(x, h_{K})\bigcap \overline{B(x_j(x),h_{K})}}(X_{i}). \end{aligned}$$
(21)

For the term \(T_{1.2}^{p}.\)

Because of:

$$\begin{aligned}&{1\!\!1}_{B(x_{j(x)},h_{K})}(X_{i})\left| K_{i}(x){1\!\!1}_{B(x, h_{K})}(X_{i})-K_{i}(x_{j(x)})\right| \\&\quad \le {1\!\!1}_{B(x, h_{K})\bigcap {B(x_{j(x)},h_{K})}}(X_{i})\left| K_{i}(x) -K_{i}(x_{j(x)})\right| \\&\qquad +K_{i}(x_{j(x)}){1\!\!1}_{\overline{B(x, h_{K})}\bigcap {B(x_{j(x)},h_{K})}}(X_{i}), \end{aligned}$$

we derive, from hypotheses (U3) and (U4), that:

$$\begin{aligned}&\!\!\! |\beta _{i}^{p-2}(x_{j(x)})|{1\!\!1}_{B(x_j(x), h_{K})}(X_{i})\left| K_{i}(x){1\!\!1}_{B(x, h_{K})}(X_{i})-K_{i}(x_{j(x)})\right| \\&\quad \le C h_{K}^{p-2}\left\{ {1\!\!1}_{B(x, h_{K})\bigcap {B(x_j(x),h_{K})}}(X_{i})\frac{r_{n}}{h_{K}} +K_{i}(x_{j(x)}){1\!\!1}_{\overline{B(x, h_{K})}\bigcap {B(x_j(x),h_{K})}}(X_{i})\right\} . \end{aligned}$$

This implies that:

$$\begin{aligned} T_{1,2}^{p}&\le \frac{C r_{n}}{nh_{K}\varphi (h_{K})}\sup _{x\in S_\mathcal{F}}\sum _{i=1}^{n}{1\!\!1}_{B(x, h_{K})\bigcap B(x_{j(x)}, h_{K})}(X_{i})\nonumber \\&\quad +\frac{C}{n\varphi (h_{K})}\sup _{x\in S_\mathcal{F}} \sum _{i=1}^{n}K_{i}(x_j(x)){1\!\!1}_{B(x_j(x), h_{K})\bigcap \overline{B(x,h_{K})}}(X_{i}). \end{aligned}$$
(22)

This last inequality together with (21) permits us to deduce:

$$\begin{aligned} F_{1}^{p}&\le \frac{C r_{n}}{nh_{K}\varphi (h_{K})}\sup _{x\in S_\mathcal{F}}\sum _{i=1}^{n}{1\!\!1}_{B(x, h_{K})\bigcap B(x_{j(x)}, h_{K})}(X_{i})\\&\quad +\frac{C}{n\varphi (h_{K})}\sup _{x\in S_\mathcal{F}}\sum _{i=1}^{n}K_{i}(x_j(x)){1\!\!1}_{B(x_j(x), h_{K})\bigcap \overline{B(x,h_{K})}}(X_{i})\\&\quad +\frac{C}{n\varphi (h_{K})}\sup _{x\in S_\mathcal{F}} \sum _{i=1}^{n}K_{i}(x){1\!\!1}_{B(x, h_{K})\bigcap \overline{B(x_{j(x)}, h_{K})}}(X_{i}). \end{aligned}$$

According to hypothesis (U4), we get:

$$\begin{aligned} F_{1}^{p}\le \frac{Cr_{n}}{nh_{K}\varphi (h_{K})}\sup _{x\in S_\mathcal{F}}\sum _{i=1}^{n}{1\!\!1}_{B(x, h_{K})\bigcup B(x_{j(x)}, h_{K})}(X_{i}). \end{aligned}$$

By setting:

$$\begin{aligned} Z_{i}=\frac{Cr_{n}}{h_{K}\varphi (h_{K})}{1\!\!1}_{B(x, h_{K})\bigcup B(x_{j(x)}, h_{K})}(X_{i}), \end{aligned}$$

we obtain:

$$\begin{aligned} |Z_{i}| \le \frac{Cr_{n}}{h_{K}\varphi (h_{K})},\quad \, \mathrm{I}\!\mathrm{E}|Z_{i}|\le \frac{Cr_{n}}{h_{K}},\quad \, {\text{ and }} \quad \; \mathrm{I}\!\mathrm{E}(Z_{i}^{2})\le \frac{Cr_{n}^{2}}{h_{K}^{2}\varphi (h_{K})}. \end{aligned}$$

Combining Corollary A.9 in Ferraty and Vieu (2006) and hypotheses (U1) and (U5), we get:

$$\begin{aligned} F_{1}^{p}= O_{a.co}\left( \frac{\ln n}{n\varphi (h_{K})}\right) , \end{aligned}$$

which entails that:

$$\begin{aligned} F_{1}^{p}= O_{a.co}\left( \sqrt{\frac{\ln d_n}{n\varphi (h_{K})}}\right) . \end{aligned}$$

On the other hand, the fact that:

$$\begin{aligned} F_{3}^{p}\le \mathrm{I}\!\mathrm{E}\left\{ \sup _{x\in S_\mathcal{F}}\left| Q_{p}(x)-Q_{p}(x_{j(x)})\right| \right\} , \end{aligned}$$

implies that:

$$\begin{aligned} F_{3}^{p}=O\left( \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) . \end{aligned}$$

Study of the term \(F_{2}^{p}.\)

For any \(\eta >0\), we have that:

$$\begin{aligned}&{{ I}\!{ P}}\left( F_{2}^{p}>\eta \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) \\&\quad \le d_{n}\max _{j\in \{1,...,d_{n}\}}P\left( \left| Q_{p}(x_{j(x)}) -\mathrm{I}\!\mathrm{E}(Q_{p}(x_{j(x)}))\right| >\eta \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) . \end{aligned}$$

Let us set:

$$\begin{aligned} \Delta _{p,i}&= \frac{1}{h_{K}^{p-2}\varphi (h_{K})}\left\{ K_{i}(x_{j(x)}) \beta _{i}^{p-2}(x_{j(x)})\right. \\&\quad \left. -\mathrm{I}\!\mathrm{E}\left[ K_{i}(x_{j(x)}) \beta _{i}^{p-2}(x_{j(x)})\right] \right\} \,\, \text{ for }\,\,p=2, 3, 4. \end{aligned}$$

Using the same arguments as for proving Eq. (12) in the proof of Lemma 2.4, replacing \(Z_{j}\) by \(\Delta _{p,i}\) and applying Lemma 4.1, we get that:

$$\begin{aligned} \mathrm{I}\!\mathrm{E}\left| \Delta _{p,i}^{m}\right| =O\left( \varphi (h_{K})^{-m+1}\right) \,\,\text{ for }\,p=2,3,4 \end{aligned}$$

and

$$\begin{aligned} {{ I}\!{ P}}\left[ \left| Q_{p}(x_{j(x)})-\mathrm{I}\!\mathrm{E}(Q_{p}(x_{j(x)}))\right| >\eta \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right] \le 2\exp \left\{ -C\eta ^{2}\ln d_{n}\right\} . \end{aligned}$$
(23)

Choosing \(C\eta ^{2}=\beta \), we obtain:

$$\begin{aligned} {{ I}\!{ P}}\left( F_{2}^{p}>\eta \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) \le C d_{n}^{1-\beta }, \end{aligned}$$

and by hypothesis (U5) we deduce that:

$$\begin{aligned} F_{2}^{p}=O_{a.co}\left( \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) . \end{aligned}$$

Proof of Corollary 2.7

It is easy to see that:

$$\begin{aligned}&\inf _{x\in S_\mathcal{F}}\widehat{F}_{D}^{x}<\frac{1}{2} \Rightarrow \exists \,x\in S_\mathcal{F}\,\,\text{ such } \text{ that }\,\, 1-\widehat{F}_{D}^{x}>\frac{1}{2} \Rightarrow \sup _{x\in S_\mathcal{F}}\,\left| 1-\widehat{F}_{D}^{x}\right| >\frac{1}{2}, \end{aligned}$$

so that

$$\begin{aligned} \sum _{n=0}^{\infty }{{ I}\!{ P}}\left( \inf _{x\in S_\mathcal{F}}\widehat{F}_{D}^{x}<\frac{1}{2}\right) \le \sum _{n=0}^{\infty }{{ I}\!{ P}}\left( \sup _{x\in S_\mathcal{F}}\left| 1-\widehat{F}_{D}^{x}\right| >\frac{1}{2}\right) <\infty , \end{aligned}$$

according to Lemma 2.6.

Proof of Lemma 2.8

The proof is straightforward, combining Eq. (8) with hypothesis (U2).

Proof of Lemma 2.9

In view of relation (10), we get that:

$$\begin{aligned} \widehat{F}_{N}^{x}(y)= Q(x)[S_{1}^x(y)S_{2}^x-S_3^x(y)S_4^x], \end{aligned}$$

where

$$\begin{aligned} Q(x)&= \frac{n^2h_{K}^{2}\varphi _{x}^{2}(h_{K})}{n(n-1) \mathrm{I}\!\mathrm{E}\left( W_{12}(x)\right) },\;\; S_{1}^x(y)=\frac{1}{n}\sum _{j=1}^n\frac{K_{j}(x)J_{j}(y)}{\varphi _{x}(h_{K})},\\ S_2^x(y)&:= S_{2}^x=\frac{1}{n}\sum _{i=1}^n\frac{K_{i}(x) \beta _{i}^{2}(x)}{h_{K}^{2}\varphi _{x}(h_{K})},\;\; S_{3}^x(y)= \frac{1}{n}\sum _{j=1}^n\frac{K_{j}(x)\beta _{j}(x)J_{j}(y)}{h_{K}\varphi _{x}(h_{K})} \end{aligned}$$

and

$$\begin{aligned} S_4^x(y):=S_{4}^x=\frac{1}{n}\sum _{i=1}^n\frac{K_{i}(x) \beta _{i}(x)}{h_{K}\varphi _{x}(h_{K})}. \end{aligned}$$

As the terms

$$\begin{aligned} S_{2}^x=Q_{4}(x),\;\; S_{4}^x= Q_{3}(x)\;\; \text{ and }\;\; Q(x) \end{aligned}$$

have been already studied in the proof of Lemma 2.6, it just remains to treat the terms

$$\begin{aligned} \sup _{x\in S_\mathcal{F}}\sup _{y\in S_\mathrm {IR}}\left| S_i^x(y)-\mathrm{I}\!\mathrm{E}(S_i^x(y)) \right| \,\,\text{ and }\,\,\sup _{x\in S_\mathcal{F}}\sup _{y\in S_\mathrm {IR}}\left| \mathrm{I}\!\mathrm{E}(S_i^x(y))\right| \,\, \text{ for }\,\, i=1,3. \end{aligned}$$

To this end, let \(t_{y}\) and \(l_{n}\) be as defined in the proof of Lemma 2.4 and let \(j(x)\) be the index given in relation (20). Then we have:

$$\begin{aligned} \sup _{x\in S_\mathcal{F}}\sup _{y\in S_\mathrm {IR}}\left| S_i^x(y)-\mathrm{I}\!\mathrm{E}(S_i^x(y)) \right|&\le \sup _{x\in S_\mathcal{F}}\sup _{y\in S_\mathrm {IR}}\left| S_i^x(y)-S_i^{x_{j(x)}}(y) \right| \\&\quad + \sup _{x\in S_\mathcal{F}}\sup _{y\in S_\mathrm {IR}}\left| S_i^{x_{j(x)}}(y)-S_i^{x_{j(x)}}(t_y) \right| \\&\quad + \sup _{x\in S_\mathcal{F}}\sup _{y\in S_\mathrm {IR}}\left| S_i^{x_{j(x)}}(t_y)-\mathrm{I}\!\mathrm{E}(S_i^{x_{j(x)}}(t_y)) \right| \\&\quad +\sup _{x\in S_\mathcal{F}}\sup _{y\in S_\mathrm {IR}}\left| \mathrm{I}\!\mathrm{E}(S_i^{x_{j(x)}}(t_y))-\mathrm{I}\!\mathrm{E}(S_i^{x_{j(x)}}(y)) \right| \\&\quad +\sup _{x\in S_\mathcal{F}}\sup _{y\in S_\mathrm {IR}}\left| \mathrm{I}\!\mathrm{E}(S_i^{x_{j(x)}}(y))-\mathrm{I}\!\mathrm{E}(S_i^{x}(y)) \right| \\&:= \sum _{k=1}^{5}T_{i}^{k}. \end{aligned}$$
  • Because of the boundedness of \( J\), the study of the term \( T_{i}^{1}\) is exactly the same as that of \( F_{1}^{2}\) for \( i=1\) and as that of \( F_{1}^{3}\) for \( i=3\) (see Lemma 2.6). So we obtain:

    $$\begin{aligned} T_{i}^{1}= O_{a.co.}\left( \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) , \end{aligned}$$

    which entails that

    $$\begin{aligned} T_{i}^{5}= O\left( \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) . \end{aligned}$$
  • Moreover, we have that:

    $$\begin{aligned} T_{1}^{2}&\le C\sup _{x\in S_\mathcal{F}}\frac{l_{n}}{h_{J}}Q_{2}(x_{j(x)}) \end{aligned}$$

    and

    $$\begin{aligned} T_{3}^{2}&\le C\sup _{x\in S_\mathcal{F}}\frac{l_{n}}{h_{J}}Q_{3}(x_{j(x)}). \end{aligned}$$

    In view of relations (19) and (23), together with the facts that \(l_{n}=n^{-\xi -\frac{1}{2}}\) and \(\lim _{n\rightarrow \infty }n^{\xi }h_{J}=\infty \), we can derive, for \(i=1,3\):

    $$\begin{aligned} T_{i}^{2}= O_{a.co.}\left( \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) \end{aligned}$$

    and

    $$\begin{aligned} T_{i}^{4}= O\left( \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) . \end{aligned}$$
  • Finally, for the term \(T_{i}^{3}\), by using again Corollary A.8 in Ferraty and Vieu (2006), together with the facts that \( s_{n}=O(l_{n}^{-1})=O(n^{\xi +\frac{1}{2}})\) and \(\sum _{n=1}^{\infty }n^{\xi +\frac{1}{2}}d_n^{1-\beta }<\infty \) for some \(\beta >1\), we get:

    $$\begin{aligned}&{{ I}\!{ P}}\left( T_{i}^{3}>\eta \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) \\&\quad ={{ I}\!{ P}}\left( \sup _{x\in S_\mathcal{F}}\sup _{y\in S_\mathrm {IR}}\left| S_i^{x_{j(x)}}(t_y)-\mathrm{I}\!\mathrm{E}(S_i^{x_{j(x)}}(t_y)) \right| >\eta \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) \\&\quad \le d_n s_{n}\max _{t_y\in \{t_1,\ldots ,t_{s_n}\}}\max _{x_{j(x)} \in \{x_1,\ldots ,x_{d_n}\}} {{ I}\!{ P}}\left( \left| S_i^{x_{j(x)}} (t_y)-\mathrm{I}\!\mathrm{E}(S_i^{x_{j(x)}}(t_y)) \right| >\eta \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) , \end{aligned}$$

    and the right-hand side is summable in \(n\),

    which means that

    $$\begin{aligned} T_{i}^{3}=O_{a.co.}\left( \sqrt{\frac{\ln d_{n}}{n\varphi (h_{K})}}\right) . \end{aligned}$$
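As a purely illustrative aside (not part of the proof), the kernel special case of the estimator built from these quantities, \(\widehat{F}_{Y}^{x}(y)=\sum _{i}K_{i}(x)J_{i}(y)/\sum _{i}K_{i}(x)\), can be sketched numerically. Every modelling choice below (the simulated curves, the quadratic kernel \(K\), the smooth cdf-type kernel \(J\), the semi-metric, and the bandwidths) is our assumption for the sketch, not a prescription from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy functional sample: n curves X_i(t) = a_i sin(2*pi*t) on a grid,
# scalar responses Y_i = a_i^2 + noise (our choices, for illustration only).
n, p = 200, 100
t = np.linspace(0.0, 1.0, p)
a = rng.uniform(-1.0, 1.0, n)
X = a[:, None] * np.sin(2.0 * np.pi * t)[None, :]
Y = a**2 + 0.1 * rng.standard_normal(n)

def semi_metric(x1, x2):
    """Discrete L2 semi-metric between two curves on the common grid."""
    return np.sqrt(np.mean((x1 - x2) ** 2))

def cond_cdf(x, y, h_K=0.3, h_J=0.05):
    """Kernel conditional CDF: F_hat(y|x) = sum_i K_i(x) J_i(y) / sum_i K_i(x)."""
    d = np.array([semi_metric(x, Xi) for Xi in X])
    K = np.maximum(1.0 - (d / h_K) ** 2, 0.0)    # quadratic kernel, support [0, 1]
    J = 1.0 / (1.0 + np.exp(-(y - Y) / h_J))     # smooth cdf-type kernel
    return float(np.sum(K * J) / np.sum(K))

def cond_quantile(x, alpha, h_K=0.3, h_J=0.05):
    """Invert y -> F_hat(y|x) on a grid to estimate the alpha-quantile."""
    ys = np.linspace(Y.min(), Y.max(), 400)
    F = np.array([cond_cdf(x, y, h_K, h_J) for y in ys])
    return float(ys[np.searchsorted(F, alpha)])  # F is nondecreasing in y

x0 = 0.5 * np.sin(2.0 * np.pi * t)   # test curve (a = 0.5, so Y|x0 centred at 0.25)
median_hat = cond_quantile(x0, 0.5)  # conditional median, close to 0.25
```

Since \(J\) is nondecreasing and the kernel weights are nonnegative, the sketched \(\widehat{F}_{Y}^{x}\) is a genuine distribution function in \(y\), so grid inversion is well defined.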

Proof of Corollary 3.1

Since

$$\begin{aligned} \sup _{x\in S_{\mathcal{F}}}\left| F_{Y}^{x}(t_{\alpha }(x)) -F_{Y}^{x}(\widehat{t}_{\alpha }(x))\right| \le \sup _{x\in S_{\mathcal{F}}}\sup _{y\in S_{\mathrm {IR}}}\left| \widehat{F}_{Y}^{x}(y)-F_{Y}^{x}(y)\right| , \end{aligned}$$

condition (U7), together with Theorem 2.5, implies that:

$$\begin{aligned} \lim _{n\rightarrow \infty }\left| \widehat{t}_{\alpha }(x) -t_{\alpha }(x)\right| =0,\,\,\,\, a.co. \end{aligned}$$
(24)

Now, using a Taylor expansion of the function \(F_{Y}^{x}\), we get, under hypothesis (U8), that:

$$\begin{aligned} F_{Y}^{x}(\widehat{t}_{\alpha }(x))-F_{Y}^{x}(t_{\alpha }(x))= \frac{1}{j!} F_{Y}^{x(j)}(t'_{\alpha }(x))\left( \widehat{t}_{\alpha }(x) -t_{\alpha }(x)\right) ^{j}, \end{aligned}$$

where \(t'_{\alpha }(x)\) lies between \(t_{\alpha }(x)\) and \(\widehat{t}_{\alpha }(x)\).

Because of (24) and the uniform continuity of \(F_{Y}^{x(j)}\), we get that:

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{x\in S_{\mathcal{F}}}\left| F_{Y}^{x(j)}(t'_{\alpha }(x)) -F_{Y}^{x(j)}(t_{\alpha }(x))\right| =0,\,\,\,\, a.co. \end{aligned}$$

So, there exists a positive real number \(\tau \) such that:

$$\begin{aligned} \sum _{n=1}^{\infty }{{ I}\!{ P}}\left( \inf _{x\in S_{\mathcal{F}}}F_{Y}^{x(j)}(t'_{\alpha }(x))<\tau \right) <\infty . \end{aligned}$$

Then

$$\begin{aligned} \sup _{x\in S_{\mathcal{F}}}\left| \widehat{t}_{\alpha }(x)-t_{\alpha }(x)\right| ^{j}\le C\sup _{x\in S_{\mathcal{F}}}\sup _{y\in S_{\mathrm {IR}}}\left| \widehat{F}_{Y}^{x}(y)-F_{Y}^{x}(y)\right| . \end{aligned}$$

It remains to apply the result of Theorem 2.5 to obtain the claimed result.
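As a hedged numerical illustration of the mean-value argument just used (take \(j=1\); the logistic distribution function and the perturbation below are our choices, not the paper's): when \(F_{Y}^{x}\) has first derivative bounded below by \(\tau \) near \(t_{\alpha }(x)\), the quantile error is controlled by the sup-norm error on the distribution function divided by \(\tau \).

```python
import numpy as np

# Mean-value bound for j = 1: |t_hat - t_alpha| <= (1/tau) * sup|F_hat - F|.
# F is a logistic CDF; eps*cos is an arbitrary sup-norm perturbation.
F = lambda u: 1.0 / (1.0 + np.exp(-u))
alpha, t_alpha = 0.5, 0.0                  # F(0) = 0.5, so t_alpha = 0
eps = 1e-3
F_hat = lambda u: F(u) + eps * np.cos(u)   # sup |F_hat - F| <= eps

grid = np.linspace(-1.0, 1.0, 20001)       # grid step 1e-4
t_hat = grid[np.argmin(np.abs(F_hat(grid) - alpha))]

tau = 0.19                                 # F'(u) = F(u)(1 - F(u)) >= 0.19 on [-1, 1]
step = grid[1] - grid[0]
# quantile error bounded by eps/tau, up to grid resolution
assert abs(t_hat - t_alpha) <= eps / tau + step
```

Here \(\sup |\widehat{F}-F| = 10^{-3}\) and \(\tau =0.19\), so the bound allows a quantile error of about \(5.3\times 10^{-3}\); the computed error is roughly \(4\times 10^{-3}\).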

Cite this article

Messaci, F., Nemouchi, N., Ouassou, I. et al. Local polynomial modelling of the conditional quantile for functional data. Stat Methods Appl 24, 597–622 (2015). https://doi.org/10.1007/s10260-015-0296-9
