
2013 | Book

Essential Statistical Inference

Theory and Methods


About this book

This book is for students and researchers who have had a first year graduate level mathematical statistics course. It covers classical likelihood, Bayesian, and permutation inference; an introduction to basic asymptotic distribution theory; and modern topics like M-estimation, the jackknife, and the bootstrap. R code is woven throughout the text, and there are a large number of examples and problems.

An important goal has been to make the topics accessible to a wide audience, with little overt reliance on measure theory. A typical semester course consists of Chapters 1-6 (likelihood-based estimation and testing, Bayesian inference, basic asymptotic results) plus selections from M-estimation and related testing and resampling methodology.

Dennis Boos and Len Stefanski are professors in the Department of Statistics at North Carolina State University. Their research has been eclectic, often with a robustness angle, although Stefanski is also known for research concentrated on measurement error, including a co-authored book on nonlinear measurement error models. In recent years the authors have jointly worked on variable selection methods.

Table of Contents

Frontmatter

Introductory Material

Frontmatter
Chapter 1. Roles of Modeling in Statistical Inference
Abstract
A common format for a course in advanced statistical inference is to accept as given a parametric family of statistical or probability models and then to proceed to develop what is an almost exclusively model-dependent theory of inference. The usual focus is on the development of efficient estimators of model parameters and the development of efficient (powerful) tests of hypotheses about those unknown parameters.
Dennis D. Boos, L. A. Stefanski

Likelihood-Based Methods

Frontmatter
Chapter 2. Likelihood Construction and Estimation
Abstract
After a statistical model for the observed data has been formulated, the likelihood of the data is the natural starting point for inference in many statistical problems. This function typically leads to essentially automatic methods of inference, including point and interval estimation, and hypothesis testing. In fact, likelihood methods are generally asymptotically optimal provided the assumed statistical model is correct. Thus, the likelihood is the foundation of classical model-based statistical inference. In this chapter we describe, and often derive, basic likelihood methods and results. We start with construction of the likelihood in numerous examples and then move to various inferential techniques. The development in this chapter follows the classical frequentist approach to inference. However, the Bayesian approach to inference is equally dependent on the likelihood, and the material in this chapter is essential to understanding the Bayesian methods in Chapter 4.
Dennis D. Boos, L. A. Stefanski
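A minimal sketch of likelihood-based point estimation, in Python rather than the book's R, with made-up Poisson count data (illustrative only, not an example from the book):

```python
import math

# Hypothetical data, assumed i.i.d. Poisson(theta); the values are made up.
data = [2, 3, 1, 4, 0, 2, 3, 1]

def log_likelihood(theta, y):
    """Poisson log-likelihood: sum of y_i*log(theta) - theta - log(y_i!)."""
    return sum(yi * math.log(theta) - theta - math.lgamma(yi + 1) for yi in y)

# Setting the score (the derivative of the log-likelihood) to zero gives
# the closed-form MLE for the Poisson mean: the sample average.
mle = sum(data) / len(data)

# The log-likelihood at the MLE dominates nearby parameter values.
check = all(log_likelihood(mle, data) >= log_likelihood(t, data)
            for t in (0.5, 1.0, 1.5, 2.5, 3.0))
```

When no closed form exists, the same log-likelihood is simply maximized numerically; the inferential recipe is unchanged.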
Chapter 3. Likelihood-Based Tests and Confidence Regions
Abstract
There are three asymptotically equivalent testing methods based on the likelihood function: Wald tests, likelihood ratio tests, and score tests, also known as Lagrange multiplier tests in the econometrics literature. In this chapter we present increasingly more general versions of the three test statistics, culminating in the most general …
Dennis D. Boos, L. A. Stefanski
Chapter 4. Bayesian Inference
Abstract
The majority of this book is concerned with frequentist inference about an unknown parameter arising in a statistical model, typically a parametric model where data …
Dennis D. Boos, L. A. Stefanski

Large Sample Approximations in Statistics

Frontmatter
Chapter 5. Large Sample Theory: The Basics
Abstract
A fundamental problem in inferential statistics is to determine, either exactly or approximately, the distribution of a statistic calculated from a probability sample of data. The statistic is usually a parameter estimate, in which case the distribution characterizes the sampling variability of the estimate, or a test statistic, in which case the distribution provides the critical values of the test and also is useful for power calculations. Because the number and type of inference problems admitting statistics for which exact distributions can be determined is limited, approximating the distribution of a statistic is usually necessary.
Dennis D. Boos, L. A. Stefanski
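To make the idea of a large-sample approximation concrete, here is a small Python sketch (not from the book, which uses R) checking how well the central limit theorem approximates the sampling distribution of a mean of skewed data:

```python
import math
import random

random.seed(1)
n, reps = 50, 2000

# Standardized means of n Exponential(1) draws; the CLT says
# sqrt(n) * (xbar - 1) is approximately N(0, 1) for large n,
# even though the underlying data are strongly skewed.
z = []
for _ in range(reps):
    xbar = sum(random.expovariate(1.0) for _ in range(n)) / n
    z.append(math.sqrt(n) * (xbar - 1.0))

# Fraction of standardized means inside the nominal 95% normal band.
coverage = sum(abs(v) <= 1.96 for v in z) / reps
```

With n = 50 the empirical coverage sits near, though typically slightly below, the nominal 0.95, illustrating both the usefulness and the approximate nature of the limit result.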
Chapter 6. Large Sample Results for Likelihood-Based Methods
Abstract
Most large sample results for likelihood-based methods are related to asymptotic normality of the maximum likelihood estimator θ̂_MLE under standard regularity conditions. In this chapter we discuss these results. If consistency of θ̂_MLE is assumed, then the proof of asymptotic normality of θ̂_MLE is straightforward. Thus we start with consistency and then give theorems for asymptotic normality of θ̂_MLE and for the asymptotic chi-squared convergence of the likelihood-based tests T_W, T_S, and T_LR. Recall that strong consistency of θ̂_MLE means that it converges with probability one to the true value, and weak consistency means that it converges in probability to the true value.
Dennis D. Boos, L. A. Stefanski

Methods for Misspecified Likelihoods and Partially Specified Models

Frontmatter
Chapter 7. M-Estimation (Estimating Equations)
Abstract
In Chapter 1 we made the distinction between the parts of a fully specified statistical model. The primary part is the part that is most important for answering the underlying scientific questions. The secondary part consists of all the remaining details of the model. Usually the primary part is the mean or systematic part of the model, and the secondary part is mainly concerned with the distributional assumptions about the random part of the model. The full specification of the model is important for constructing the likelihood and for using the associated classical methods of inference as spelled out in Chapters 2 and 3 and supported by the asymptotic results of Chapter 6.
Dennis D. Boos, L. A. Stefanski
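As an illustration of the estimating-equation idea (a Python sketch with made-up data, not the book's code): a robust location estimate can be defined as the root of a sum of Huber-type psi functions, found here by bisection since the equation is monotone in the parameter.

```python
def psi(y, theta, k=1.345):
    """Huber-type psi function: a residual clipped at +/- k."""
    return max(-k, min(k, y - theta))

def estimating_eq(theta, data):
    """Sum of psi terms; the M-estimate solves this equation for zero."""
    return sum(psi(y, theta) for y in data)

# Made-up sample containing one gross outlier.
data = [1.1, 0.9, 1.3, 0.8, 1.0, 9.0]

# estimating_eq is nonincreasing in theta, so bisection finds the root.
lo, hi = min(data), max(data)
for _ in range(100):
    mid = (lo + hi) / 2.0
    if estimating_eq(mid, data) > 0:
        lo = mid
    else:
        hi = mid
theta_hat = (lo + hi) / 2.0   # much less affected by the outlier than the mean
```

The clipping in psi bounds the influence of the outlier: theta_hat lands near 1.29, while the ordinary sample mean is dragged up to about 2.35.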
Chapter 8. Hypothesis Tests under Misspecification and Relaxed Assumptions
Abstract
In Chapter 6 we gave the classical asymptotic chi-squared results for T_W, T_S, and T_LR under ideal conditions, that is, when the specified model density f(y; θ) is correct. In this chapter we give asymptotic results for the case when the specified model density is incorrect, and also show how to adjust the statistics to have the usual asymptotic chi-squared results. For Wald and score tests, we give these adjusted statistics in the general M-estimation context. We also discuss the quadratic inference functions of Lindsay and Qu (2003) that are related to the Generalized Method of Moments (GMM) approach found in the econometrics literature.
Dennis D. Boos, L. A. Stefanski
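The robustness-to-misspecification idea can be sketched in Python for the simplest case, the mean as an M-estimator with psi(y, theta) = y - theta (made-up data; a toy version of the general sandwich formula, not the book's code):

```python
# Sandwich (robust) variance for the M-estimator of the mean:
# A = E[-d psi / d theta] and B = E[psi^2], giving V = A^{-1} B A^{-1} / n,
# which remains valid even if the working model's variance is wrong.
data = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2]   # made-up sample
n = len(data)
theta_hat = sum(data) / n

A = 1.0                                            # -d psi/d theta for psi = y - theta
B = sum((y - theta_hat) ** 2 for y in data) / n    # empirical average of psi^2

sandwich_var = B / (A * A) / n
```

For this psi the sandwich estimate reduces to the empirical variance divided by n, so model-based and robust answers coincide; for less benign psi functions or misspecified models they differ, which is exactly the point of the adjustment.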

Computation-Based Methods

Frontmatter
Chapter 9. Monte Carlo Simulation Studies
Abstract
Modern statistical problems require a mix of analytical, computational, and simulation techniques for implementing and understanding statistical inferential methods. Previous chapters dealt with analytical and computational techniques; this chapter focuses on simulation techniques. We are primarily interested in Monte Carlo simulation studies in which the sampling properties of statistical procedures are investigated. We first discuss a few important principles in the design and analysis of these Monte Carlo studies. These principles are perhaps obvious after reflection, but senior as well as junior statistics researchers often neglect one or more of them. Second, we give some practical hints about choosing the Monte Carlo size N and about analyzing and presenting results from Monte Carlo studies.
Dennis D. Boos, L. A. Stefanski
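A back-of-the-envelope calculation for choosing the Monte Carlo size N, sketched in Python (the 0.05 size and 0.0025 target are illustrative numbers, not a recommendation from the book):

```python
import math

# The Monte Carlo standard error of an estimated rejection rate p_hat
# based on N independent replications is sqrt(p * (1 - p) / N).
p = 0.05            # nominal test size under study
target_se = 0.0025  # so two standard errors is about +/- 0.005

# Solve sqrt(p * (1 - p) / N) = target_se for N.
N = round(p * (1 - p) / target_se ** 2)
achieved_se = math.sqrt(p * (1 - p) / N)
```

Calculations like this make the precision of a simulation study explicit before any computing is done, rather than leaving N to habit.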
Chapter 10. Jackknife
Abstract
The jackknife was developed by Quenouille (1949, 1956) as a general method to remove bias from estimators. Tukey (1958) noticed that the approach also led to a method for estimating variances. Since that time, the jackknife has been used more for variance estimation than for bias estimation. Thus our focus is mainly on jackknife variance estimation, although we begin below in the historical order with bias estimation.
Dennis D. Boos, L. A. Stefanski
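The leave-one-out recipe for jackknife variance estimation is short enough to sketch in a few lines of Python (made-up data; for the sample mean the answer reduces to s²/n, which makes the sketch easy to check):

```python
import statistics

data = [3.1, 2.4, 5.0, 4.2, 3.8, 2.9]   # made-up sample
n = len(data)

def stat(sample):
    """Statistic of interest; here simply the sample mean."""
    return sum(sample) / len(sample)

# Leave-one-out replicates of the statistic.
reps = [stat(data[:i] + data[i + 1:]) for i in range(n)]
rep_mean = sum(reps) / n

# Tukey's jackknife variance estimate.
var_jack = (n - 1) / n * sum((r - rep_mean) ** 2 for r in reps)
```

The same recipe applies unchanged to statistics with no simple variance formula; only the body of stat() changes.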
Chapter 11. Bootstrap
Abstract
The bootstrap is a general technique for estimating unknown quantities associated with statistical models. Often the bootstrap is used to find (1) standard errors for estimators, (2) confidence intervals for unknown parameters, and (3) p-values for test statistics under a null hypothesis.
Dennis D. Boos, L. A. Stefanski
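A minimal nonparametric bootstrap in Python (made-up data; the book's examples are in R): resample with replacement, recompute the statistic, and read the standard error and a percentile interval directly off the replicates.

```python
import random
import statistics

random.seed(42)
data = [14, 5, 20, 3, 11, 9, 17, 7, 12, 6]   # made-up sample
B = 2000

# Bootstrap replicates of the sample median.
boot_medians = [statistics.median(random.choices(data, k=len(data)))
                for _ in range(B)]

# Bootstrap standard error and a 90% percentile confidence interval.
boot_se = statistics.stdev(boot_medians)
srt = sorted(boot_medians)
ci = (srt[int(0.05 * B)], srt[int(0.95 * B) - 1])
```

The appeal is generality: nothing above depends on having a variance formula for the median.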
Chapter 12. Permutation and Rank Tests
Abstract
In the early 1930s R. A. Fisher discovered a very general exact method of testing hypotheses based on permuting the data in ways that do not change its distribution under the null hypothesis. This permutation method does not require standard parametric assumptions such as normality of the data.
Dennis D. Boos, L. A. Stefanski
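The permutation idea in miniature, as a Python sketch with made-up two-sample data: under the null hypothesis of no group difference, every relabeling of the pooled observations is equally likely, so the exact p-value is a simple count.

```python
import itertools

x = [12.1, 13.4, 11.8, 14.0]   # made-up treatment group
y = [10.2, 11.1, 10.8]         # made-up control group
obs = sum(x) / len(x) - sum(y) / len(y)   # observed mean difference

pooled = x + y
n_x = len(x)

# Enumerate all C(7, 4) = 35 relabelings of the pooled data; count the
# fraction with a mean difference at least as large as the one observed.
count = total = 0
for idx in itertools.combinations(range(len(pooled)), n_x):
    xs = [pooled[i] for i in idx]
    ys = [pooled[i] for i in range(len(pooled)) if i not in idx]
    total += 1
    if sum(xs) / len(xs) - sum(ys) / len(ys) >= obs:
        count += 1

p_value = count / total   # exact one-sided permutation p-value
```

No normality assumption enters anywhere; the reference distribution is generated by the data themselves.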
Backmatter
Metadata
Title
Essential Statistical Inference
Authors
Dennis D. Boos
L. A. Stefanski
Copyright Year
2013
Publisher
Springer New York
Electronic ISBN
978-1-4614-4818-1
Print ISBN
978-1-4614-4817-4
DOI
https://doi.org/10.1007/978-1-4614-4818-1
