
About this Book

In 1979, Efron introduced the bootstrap method as a kind of universal tool to obtain approximations of the distribution of statistics. The now well-known underlying idea is the following: consider a sample X1,…, Xn of independent and identically distributed (i.i.d.) random variables (r.v.’s) with unknown probability measure (p.m.) P. Assume we are interested in approximating the distribution of a statistical functional T(Pn), the empirical counterpart of the functional T(P), where \( P_n := n^{-1}\sum_{i=1}^n \delta_{X_i} \) is the empirical p.m. Since in some sense Pn is close to P when n is large, if one samples \( X_1^*, \ldots, X_{m_n}^* \) i.i.d. from Pn and builds the empirical p.m. \( P_{m_n}^* := m_n^{-1}\sum_{i=1}^{m_n} \delta_{X_i^*} \), then the behaviour of \( T(P_{m_n}^*) \) conditionally on Pn should imitate that of T(Pn) when n and mn get large. This idea has led to considerable investigation to see when it is correct, and when it is not. When it is not, one looks for a way to adapt it.

Table of Contents

Frontmatter

Introduction

Abstract
In 1979, Efron introduced the bootstrap method as a kind of universal tool to obtain approximations of the distribution of statistics. The now well-known underlying idea is the following: consider a sample X1,…, Xn of independent and identically distributed (i.i.d.) random variables (r.v.’s) with unknown probability measure (p.m.) P. Assume we are interested in approximating the distribution of a statistical functional T(Pn), the empirical counterpart of the functional T(P), where \( P_n := n^{-1}\sum_{i=1}^n \delta_{X_i} \) is the empirical p.m. Since in some sense Pn is close to P when n is large, if one samples \( X_1^*, \ldots, X_{m_n}^* \) i.i.d. from Pn and builds the empirical p.m. \( P_{m_n}^* := m_n^{-1}\sum_{i=1}^{m_n} \delta_{X_i^*} \), then the behaviour of \( T(P_{m_n}^*) \) conditionally on Pn should imitate that of T(Pn) when n and mn get large.
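The resampling scheme described in this abstract can be sketched in a few lines of Python. This is purely an illustration, not code from the book: the statistic T is taken to be the sample mean, and the function and variable names are invented for the example.

```python
import random
import statistics

def bootstrap_distribution(sample, T, n_boot=1000, m=None, seed=0):
    """Approximate the distribution of T(P_n) by Efron's bootstrap.

    Draws m observations with replacement from the sample (i.e. i.i.d.
    from the empirical measure P_n) and evaluates T on each resample.
    """
    rng = random.Random(seed)
    m = m or len(sample)  # classical bootstrap uses m_n = n
    return [T([rng.choice(sample) for _ in range(m)]) for _ in range(n_boot)]

data = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1, 2.5, 3.7]
boot_means = bootstrap_distribution(data, statistics.mean)
# The spread of boot_means approximates the sampling variability of T(P_n).
```

The empirical distribution of `boot_means` then serves as a proxy for the unknown distribution of the mean under P.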
Philippe Barbe, Patrice Bertail

Chapter I. Asymptotic Theory for the Generalized Bootstrap of Statistical Differentiable Functionals

Abstract
Let T be a statistical functional defined on a space P of probability measures (p.m.’s) on a locally compact Banach space B. Let X, X1,…, Xn be a sequence of independent and identically distributed (i.i.d.) random variables (r.v.’s) with common probability P∈P, and let us define \( P_n := n^{-1}\sum_{i=1}^n \delta_{X_i} \), the empirical measure, where δXi denotes the Dirac measure at Xi. When T is smooth in a neighborhood of P, a natural estimator of T(P) is its empirical counterpart T(Pn) (see Von Mises (1947), Huber (1981), Manski (1988)).
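The plug-in principle behind the estimator T(Pn) can be made concrete with a small sketch. Here, as an assumption for illustration only, T is taken to be the variance functional; since Pn puts mass 1/n on each Xi, integrals against Pn are sample averages.

```python
def plug_in_variance(sample):
    """Evaluate the variance functional T at the empirical measure P_n.

    P_n puts mass 1/n on each X_i, so integrals against P_n become
    sample averages; T(P_n) is then the (uncorrected) empirical variance.
    """
    n = len(sample)
    mean = sum(sample) / n                              # integral of x against P_n
    return sum((x - mean) ** 2 for x in sample) / n     # integral of (x - mean)^2

print(plug_in_variance([1.0, 2.0, 3.0, 4.0]))  # → 1.25
```

Any functional that is smooth in the sense of the chapter can be plugged in the same way; the variance is only one convenient instance.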
Philippe Barbe, Patrice Bertail

Chapter II. How to Choose the Weights

Abstract
The aim of this chapter is to discuss the practical choice of the random weights Wi,n which are used in the generalized bootstrap. Instead of building a complete mathematical theory, we will give some theoretical indications and illustrate them by some simulations.
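One familiar choice of random weights, used here only as an illustrative assumption and not as the chapter's recommendation, is the Bayesian bootstrap of Rubin (1981): Wi,n = Yi,n / Σj Yj,n with i.i.d. standard exponential Y's. A minimal sketch for the mean:

```python
import random

def weighted_bootstrap_mean(sample, n_boot=1000, seed=0):
    """Generalized (weighted) bootstrap of the mean.

    Each replicate reweights the observations by W_{i,n} = Y_i / sum(Y),
    with Y_i i.i.d. Exp(1), instead of multinomial resampling.
    """
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        y = [rng.expovariate(1.0) for _ in sample]
        total = sum(y)
        reps.append(sum(yi * xi for yi, xi in zip(y, sample)) / total)
    return reps

reps = weighted_bootstrap_mean([1.0, 2.0, 3.0, 4.0, 5.0])
```

Exchanging the exponential law for another positive distribution changes the weights Wi,n, which is precisely the practical choice this chapter discusses.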
Philippe Barbe, Patrice Bertail

Chapter III. Special Forms of the Bootstrap

Abstract
In chapter II, we discussed the choices of the weights, when one wants to bootstrap regular functionals as described in the first chapter. However, one can find functionals which do not satisfy the assumptions of chapter I, or a sample which is not i.i.d. but is obtained from i.i.d. r.v.’s. The aim of this chapter is to investigate three such situations: what can we do if we want to bootstrap an empirical process when the parameters are estimated? How can the extreme values be bootstrapped? What happens to the bootstrap of the mean when the limiting distribution is non-Gaussian?
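For the extreme-value situation mentioned in this abstract, a classical remedy (stated here as general background, not necessarily the construction developed in the chapter) is to resample only m ≪ n observations, since the naive n-out-of-n bootstrap of the maximum is inconsistent. A hypothetical sketch:

```python
import random

def m_out_of_n_bootstrap_max(sample, m, n_boot=1000, seed=0):
    """Bootstrap the sample maximum by resampling only m of n points.

    Taking m much smaller than n (m = o(n)) is the standard fix for the
    inconsistency of the naive bootstrap of extreme values.
    """
    rng = random.Random(seed)
    return [max(rng.choice(sample) for _ in range(m))
            for _ in range(n_boot)]

rng0 = random.Random(1)
data = [rng0.random() for _ in range(400)]   # simulated sample, n = 400
boot_max = m_out_of_n_bootstrap_max(data, m=40)
```

The replicates in `boot_max` then mimic the behaviour of the maximum of a much smaller sample, which is what restores consistency.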
Philippe Barbe, Patrice Bertail

Chapter IV. Proofs of Results of Chapter I

Abstract
It is convenient in all the proofs to normalize the weights \( W_n := \left\{ W_{i,n}\,,\;1 \leqslant i \leqslant n \right\} \) by defining
$$ W_i^{(n)} := n\,W_{i,n}\,,\quad 1 \leqslant i \leqslant n\;, $$
which ensures that \( W_i^{(n)} \) is of the order of \( \delta_{W,n} \) in probability. We also abbreviate S(P) to S.
Philippe Barbe, Patrice Bertail

Chapter V. Proofs of Results of Chapter II

Abstract
We use techniques similar to those in the proof of Proposition I.2.2. We first observe the obvious equality
$$ \sum_{i=1}^n \left( W_{i,n} - 1/n \right) h\left( X_i \right) = n^{-1}\sum_{i=1}^n \left( Y_{i,n} - 1 \right) h\left( X_i \right) + \overline{Y}_n^{\,-1}\left( 1 - \overline{Y}_n \right)\, n^{-1}\sum_{i=1}^n Y_{i,n}\, h\left( X_i \right)\;. $$
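Under the common representation Wi,n = Yi,n / (n Ȳn), an assumption made here only for illustration, the equality above can be verified numerically:

```python
import random

rng = random.Random(0)
n = 50
x = [rng.gauss(0.0, 1.0) for _ in range(n)]    # the X_i
y = [rng.expovariate(1.0) for _ in range(n)]   # the Y_{i,n}, i.i.d. positive
ybar = sum(y) / n                              # Ȳ_n
w = [yi / (n * ybar) for yi in y]              # W_{i,n} = Y_{i,n} / (n Ȳ_n)

def h(t):
    return t * t  # any test function h works here

# Left-hand side: sum of (W_{i,n} - 1/n) h(X_i)
lhs = sum((wi - 1.0 / n) * h(xi) for wi, xi in zip(w, x))

# Right-hand side: the two-term decomposition in Y_{i,n}
rhs = (sum((yi - 1.0) * h(xi) for yi, xi in zip(y, x)) / n
       + (1.0 - ybar) / ybar
         * sum(yi * h(xi) for yi, xi in zip(y, x)) / n)

print(abs(lhs - rhs))  # agrees up to floating-point rounding
```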
Philippe Barbe, Patrice Bertail

Chapter VI. Proofs of Results of Chapter III

Abstract
The proof consists mainly of a reduction to the usual uniform empirical process and an application of the weighted approximation of M. Csörgö, S. Csörgö, Horváth and Mason (1986) or Mason and Van Zwet (1987).
Philippe Barbe, Patrice Bertail

Backmatter

Additional Information