
1997 | Book | 2nd Edition

Series Approximation Methods in Statistics

Author: John E. Kolassa

Publisher: Springer New York

Book Series: Lecture Notes in Statistics


About this book

This book was originally compiled for a course I taught at the University of Rochester in the fall of 1991, and is intended to give advanced graduate students in statistics an introduction to Edgeworth and saddlepoint approximations, and related techniques. Many other authors have also written monographs on this subject, and so this work is narrowly focused on two areas not recently discussed in theoretical textbooks. These areas are, first, a rigorous consideration of Edgeworth and saddlepoint expansion limit theorems, and second, a survey of the more recent developments in the field. In presenting expansion limit theorems I have drawn heavily on the notation of McCullagh (1987) and on the theorems presented by Feller (1971) on Edgeworth expansions. For saddlepoint notation and results I relied most heavily on the many papers of Daniels, and a review paper by Reid (1988). Throughout this book I have tried to maintain consistent notation and to present theorems in such a way as to make a few theoretical results useful in as many contexts as possible. This was done not only in order to present as many results with as few proofs as possible, but more importantly to show the interconnections between the various facets of asymptotic theory. Special attention is paid to regularity conditions. The reasons they are needed and the parts they play in the proofs are both highlighted.

Table of Contents

Frontmatter
1. Asymptotics in General
Abstract
Many authors have examined the use of asymptotic methods in statistics. Serfling (1980) investigates applications of probability limit theorems for distributions of random variables, including theorems concerning convergence almost surely, to many questions in applied statistics. Le Cam (1969) treats asymptotics from a decision-theoretic viewpoint. Barndorff-Nielsen and Cox (1989) present many applications of the density and distribution function approximations to be described below in a heuristic manner. Hall (1992) investigates Edgeworth series with a particular view towards applications to the bootstrap. Field and Ronchetti (1990) treat series expansion techniques in a manner that most closely parallels this work; I have included more detailed proofs and discussion of regularity conditions, and a survey of the use of Barndorff-Nielsen’s formula. Their work covers many aspects of robustness and estimating equations not included here. Skovgaard (1990) explores characteristics of models making them amenable to asymptotic techniques, and derives the concept of an analytic statistical model. He also investigates convergence along series indexed by more general measures of information than sample size. Jensen (1995) presents a range of topics similar to that presented here, but with a different flavor.
John E. Kolassa
2. Characteristic Functions and the Berry-Esseen Theorem
Abstract
This chapter discusses the role of the characteristic function in describing probability distributions. Theorems allowing the underlying probability function to be reconstructed from the characteristic function are presented. Results are also derived outlining the sense in which inversion of an approximate characteristic function leads to an approximate density or distribution function. These results are applied to derive Berry-Esseen theorems quantifying the error incurred in such an approximation. Finally, the relation between the characteristic function and moments and cumulants is investigated.
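For orientation, and in standard notation rather than the book's own, the two central results of this kind take roughly the following form: the Fourier inversion formula recovering a density from an integrable characteristic function, and the Berry-Esseen bound on the normal approximation to the standardized mean of n independent copies of X, with C a universal constant:

    f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx}\,\zeta(t)\,dt,
    \qquad
    \sup_x \bigl| F_n(x) - \Phi(x) \bigr| \le \frac{C\,\mathrm{E}|X-\mu|^3}{\sigma^3 \sqrt{n}}.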
John E. Kolassa
3. Edgeworth Series
Abstract
Chapter 2 considered the normal approximation to distribution functions. We saw that the normal approximation could be derived by inverting the characteristic function, and that many of its properties could be described in terms of the first few derivatives of the characteristic function at 0. This chapter considers higher-order approximations to cumulative distribution functions and densities. Not surprisingly, these can also be expressed as approximate inversions of the characteristic function. Two heuristic motivations for the Edgeworth series are presented. Their correctness for densities and distribution functions is demonstrated using the methods of the previous chapter. Regularity conditions are investigated and discussed. Examples are given. The standards for assessment of accuracy of these methods are discussed and criticized. The Edgeworth series is inverted to yield the Cornish-Fisher expansion. Extensions of results from the standard setting of means of independent and identically distributed continuous random variables to non-identically distributed and lattice cases are presented. Parallels in the lattice case with classical Sheppard's corrections are developed.
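As a small numerical illustration of the kind of correction involved (a sketch in Python, not drawn from the book; the Exp(1) example and function names are assumptions made here for concreteness), the one-term Edgeworth correction adds a skewness term to the normal approximation for the standardized mean, and can be checked against the exact Gamma-based distribution:

    # One-term Edgeworth approximation to the CDF of the standardized mean
    # of n iid Exp(1) variables, compared with the exact Gamma-based CDF.
    import numpy as np
    from scipy import stats

    n = 10       # sample size
    lam3 = 2.0   # standardized skewness of Exp(1): kappa_3 / sigma^3

    def edgeworth_cdf(x, n, lam3):
        # Phi(x) minus the first skewness correction term, phi(x)*lam3/(6 sqrt(n))*(x^2 - 1)
        return stats.norm.cdf(x) - stats.norm.pdf(x) * lam3 / (6 * np.sqrt(n)) * (x**2 - 1)

    def exact_cdf(x, n):
        # Exact CDF of (sum - n) / sqrt(n), since the sum of n Exp(1) variables is Gamma(n, 1)
        return stats.gamma.cdf(n + x * np.sqrt(n), a=n)

    for x in (-1.0, 0.0, 1.0, 2.0):
        print(x, stats.norm.cdf(x), edgeworth_cdf(x, n, lam3), exact_cdf(x, n))

Even for moderate n the corrected values track the exact distribution noticeably better than the plain normal approximation, which is the pattern the chapter makes precise.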
John E. Kolassa
4. Saddlepoint Series for Densities
Abstract
In many statistical applications, approximations to the probability that a random variable exceeds a certain threshold value are important. Such approximations are useful, for example, in constructing tests and confidence intervals, and for calculating p-values. Edgeworth series converge uniformly quickly over the entire possible range of the random variable when error is measured in an absolute sense. Often, however, relative error behavior is more important than absolute error behavior; an error of .005 is of little importance when considering tests of approximate size .05, but is of great importance when considering tests of approximate size .001. Saddlepoint methodology achieves, in many cases, uniform bounds on relative error over the range of the distribution. This work was pioneered by Daniels (1954).
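A minimal sketch of the saddlepoint density approximation for a sample mean, assuming Exp(1) summands so that the saddlepoint equation K'(s) = x̄ solves in closed form (the Python code and function names below are illustrative assumptions, not the book's own material):

    # Saddlepoint density approximation for the mean of n iid Exp(1) variables,
    # with K(s) = -log(1 - s), compared with the exact Gamma density of the mean.
    import numpy as np
    from scipy import stats

    n = 5

    def saddlepoint_density(xbar, n):
        # sqrt(n / (2 pi K''(shat))) * exp(n * (K(shat) - shat * xbar)), shat = 1 - 1/xbar
        shat = 1.0 - 1.0 / xbar
        K = -np.log(1.0 - shat)
        Kpp = 1.0 / (1.0 - shat) ** 2
        return np.sqrt(n / (2 * np.pi * Kpp)) * np.exp(n * (K - shat * xbar))

    def exact_density(xbar, n):
        # The mean of n Exp(1) variables has a Gamma(n, scale = 1/n) distribution
        return stats.gamma.pdf(xbar, a=n, scale=1.0 / n)

    for xbar in (0.5, 1.0, 2.0, 4.0):
        approx, exact = saddlepoint_density(xbar, n), exact_density(xbar, n)
        print(xbar, approx, exact, approx / exact)  # the ratio is constant in xbar

The constant ratio in this example is the familiar fact that the saddlepoint density for the gamma family is exact up to renormalization, illustrating the uniform relative-error behavior described above.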
John E. Kolassa
5. Saddlepoint Series for Distribution Functions
Abstract
Recall from §3 that calculating distribution function approximations from Edgeworth density approximations was a simple matter. The Edgeworth series for the density is a linear combination of derivatives of the normal distribution function, and hence is easily integrated to give a corresponding cumulative distribution function approximation. This cumulative distribution function approximation inherits many good properties from the density approximation.
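In standard notation (assumed here rather than quoted from the text), this term-by-term integration uses the identity d/dx {φ(x) He_k(x)} = -φ(x) He_{k+1}(x) for the probabilists' Hermite polynomials, so the one-term Edgeworth density and its integral are

    f_n(x) \approx \phi(x)\Bigl[1 + \frac{\lambda_3}{6\sqrt{n}}\,\mathrm{He}_3(x)\Bigr]
    \quad\Longrightarrow\quad
    F_n(x) \approx \Phi(x) - \phi(x)\,\frac{\lambda_3}{6\sqrt{n}}\,\mathrm{He}_2(x),

where λ₃ is the standardized third cumulant. This chapter develops saddlepoint analogues, for which the passage from density to distribution function is not so immediate.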
John E. Kolassa
6. Multivariate Expansions
Abstract
Edgeworth and saddlepoint expansions also have analogues for distributions of random vectors. As in the univariate case these expansions will be derived with reference to characteristic functions and cumulant generating functions, and hence these will be defined first. Subsequently Edgeworth density approximations will be defined. Just as in the univariate case, the Edgeworth approximation to probabilities that a random vector lies in a set is the integral of the Edgeworth density over that set; however, since sets of interest are usually not rectangular, theorems for the asymptotic accuracy of these approximations are difficult to prove. These proofs are not presented here. Approximations for variables on a multivariate lattice are discussed. Multivariate saddlepoint approximations are also defined, by a multivariate extension of steepest descent methods. These methods are also used to approximate conditional probabilities.
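For the mean of n independent d-dimensional vectors with cumulant generating function K, the multivariate saddlepoint density approximation takes the standard form (notation assumed here, not quoted from the text)

    \hat f(\bar x) = \Bigl(\frac{n}{2\pi}\Bigr)^{d/2} \bigl| K''(\hat s) \bigr|^{-1/2}
    \exp\bigl\{ n \bigl[ K(\hat s) - \hat s^{\top} \bar x \bigr] \bigr\},
    \qquad K'(\hat s) = \bar x,

where K''(ŝ) is the Hessian of K at the saddlepoint and |·| denotes its determinant.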
John E. Kolassa
7. Conditional Distribution Approximations
Abstract
Often inference for a subset of model parameters is desired, and the others are treated as nuisance parameters. Among the many methods for attacking this problem is conditional inference, in which sufficient statistics for nuisance parameters are conditioned on. Calculations involving these conditional distributions are often quite difficult. This chapter will develop methods for approximating densities and distribution functions for conditional distributions.
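One standard route, sketched here in generic notation rather than the book's own, approximates a conditional density by the ratio of saddlepoint approximations to the joint and marginal densities,

    \hat f(y \mid x) = \frac{\hat f(x, y)}{\hat f(x)},

with each factor computed from the corresponding joint or marginal cumulant generating function; this is the double-saddlepoint idea underlying many of the conditional approximations developed in this chapter.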
John E. Kolassa
8. Applications to Wald, Likelihood Ratio, and Maximum Likelihood Statistics
Abstract
The breadth of this material is similar to that of Reid (1988). Of primary concern will be approximation to densities and distribution functions of maximum likelihood estimators, likelihood ratio statistics, and Wald statistics. Bartlett's correction for the distribution of likelihood ratio statistics is derived. Approximate ancillarity is also discussed.
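In generic notation (assumed here for illustration), Bartlett's correction rescales the likelihood ratio statistic W for a p-dimensional hypothesis so that its mean matches that of its limiting chi-squared distribution,

    \mathrm{E}[W] = p\Bigl(1 + \frac{b}{n} + O(n^{-2})\Bigr),
    \qquad
    W^{*} = \frac{W}{1 + b/n},

and in regular continuous models the chi-squared approximation to the distribution of W* is accurate to O(n^{-2}) rather than O(n^{-1}).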
John E. Kolassa
9. Other Topics
Abstract
This chapter contains miscellaneous material applying saddlepoint and other distribution function approximations to statistical inference. An approximation for the root of an estimating equation is presented. Series approximation methods in Bayesian inference and resampling will also be discussed.
John E. Kolassa
10. Computational Aids
Abstract
This chapter contains code for doing some of the calculations presented here using the computer algebra package Mathematica. Lines surrounded by (* and *) are comments. Material here generally appears in the same order as in the text, except that lattice material, which in the text falls at the end of chapters, here follows immediately after its continuous analogues. Code presented here is the minimal code necessary to perform many of the calculations in the text. Andrews and Stafford (1993) and Stafford and Andrews (1993) present more sophisticated Mathematica code.
John E. Kolassa
Backmatter
Metadata
Title
Series Approximation Methods in Statistics
Author
John E. Kolassa
Copyright Year
1997
Publisher
Springer New York
Electronic ISBN
978-1-4757-4277-0
Print ISBN
978-0-387-98224-3
DOI
https://doi.org/10.1007/978-1-4757-4277-0