
2022 | Book

Testing Statistical Hypotheses

About this book

Testing Statistical Hypotheses, 4th Edition updates and expands upon the classic graduate text, now a two-volume work. The first volume covers finite-sample theory, while the second volume discusses large-sample theory. A definitive resource for graduate students and researchers alike, the work has grown to include new topics of current relevance: an expanded treatment of multiple hypothesis testing, a new section on extensions of the Central Limit Theorem, coverage of high-dimensional testing, expanded discussions of permutation and randomization tests, coverage of testing moment inequalities, and many new problems throughout the text.

Table of Contents

Frontmatter

Conditional Inference

Frontmatter
Chapter 1. The General Decision Problem
Abstract
The raw material of a statistical investigation is a set of observations; these are the values taken on by random variables X whose distribution \(P_\theta \) is at least partly unknown. Of the parameter \(\theta \), which labels the distribution, it is assumed known only that it lies in a certain set \(\Omega \), the parameter space. Statistical inference is concerned with methods of using this observational material to obtain information concerning the distribution of X or the parameter \(\theta \) with which it is labeled. To arrive at a more precise formulation of the problem we shall consider the purpose of the inference.
E. L. Lehmann, Joseph P. Romano
Chapter 2. The Probability Background
Abstract
The mathematical framework for statistical decision theory is provided by the theory of probability, which in turn has its foundations in the theory of measure and integration.
E. L. Lehmann, Joseph P. Romano
Chapter 3. Uniformly Most Powerful Tests
Abstract
We now begin the study of the statistical problem that forms the principal subject of this book, the problem of hypothesis testing. As the term suggests, one wishes to decide whether or not some hypothesis that has been formulated is correct. The choice here lies between only two decisions: accepting or rejecting the hypothesis. A decision procedure for such a problem is called a test of the hypothesis in question.
E. L. Lehmann, Joseph P. Romano
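To make the notion of a test concrete, here is a toy sketch (our illustration, not from the text): for \(X_1,\ldots ,X_n\) i.i.d. \(N(\theta ,1)\), the one-sided problem \(H:\theta \le 0\) admits the classical UMP test that rejects when the standardized sample mean exceeds the normal critical value.

```python
# Toy sketch (ours): a test as a decision procedure, choosing between
# the two decisions "accept" and "reject".
import math

def ump_one_sided_test(xs, z_alpha=1.645):
    """Test H: theta <= 0 for X_i ~ N(theta, 1); z_alpha = 1.645 gives level 0.05.

    Rejecting for large standardized sample means is the UMP test in this
    one-sided normal problem with known variance."""
    n = len(xs)
    z = math.sqrt(n) * (sum(xs) / n)
    return "reject" if z > z_alpha else "accept"

print(ump_one_sided_test([0.1, -0.2, 0.05]))    # mean near 0: accept
print(ump_one_sided_test([1.0, 1.4, 0.9, 1.2])) # mean far above 0: reject
```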
Chapter 4. Unbiasedness: Theory and First Applications
Abstract
For a large class of problems for which a UMP test does not exist, there does exist a UMP unbiased test (which we sometimes abbreviate as UMPU). This is the case in particular for certain hypotheses of the form \(\theta \le \theta _0\) or \(\theta =\theta _0\), where the distribution of the random observables depends on other parameters besides \(\theta \).
E. L. Lehmann, Joseph P. Romano
Chapter 5. Unbiasedness: Applications to Normal Distributions; Confidence Intervals
Abstract
This chapter develops a general expression for the UMP unbiased tests of the hypotheses \(H_1:\theta \le \theta _0\) and \(H_4:\theta =\theta _0\) in the exponential family.
E. L. Lehmann, Joseph P. Romano
Chapter 6. Invariance
Abstract
Many statistical problems exhibit symmetries, which provide natural restrictions to impose on the statistical procedures that are to be employed. Suppose, for example, that \(X_1,\ldots ,X_n\) are independently distributed with probability densities \(p_{\theta _1}(x_1),\ldots ,p_{\theta _n}(x_n)\). For testing the hypothesis \(H:\theta _1=\cdots =\theta _n\) against the alternative that the \(\theta \)’s are not all equal, the test should be symmetric in \(x_1,\ldots ,x_n\), since otherwise the acceptance or rejection of the hypothesis would depend on the (presumably quite irrelevant) numbering of these variables.
E. L. Lehmann, Joseph P. Romano
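The symmetry requirement above is easy to check numerically. A small sketch (ours), using the range \(\max _i x_i - \min _i x_i\) as an example of a statistic that is unchanged by any renumbering of the observations:

```python
# Small check (our illustration): a symmetric statistic takes the same
# value on every permutation of the sample.
from itertools import permutations

def range_stat(xs):
    """Range of the sample, a permutation-symmetric statistic."""
    return max(xs) - min(xs)

x = [0.4, 1.7, -0.3, 0.9]
vals = {range_stat(list(p)) for p in permutations(x)}
print(vals)  # a single value: the statistic ignores the numbering
```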
Chapter 7. Linear Hypotheses
Abstract
Many testing problems concern the means of normal distributions and are special cases of the following general univariate linear hypothesis.
E. L. Lehmann, Joseph P. Romano
Chapter 8. The Minimax Principle
Abstract
The criteria discussed so far, unbiasedness and invariance, suffer from the disadvantage of being applicable, or leading to optimum solutions, only in rather restricted classes of problems. We shall therefore turn now to an alternative approach, which potentially is of much wider applicability.
E. L. Lehmann, Joseph P. Romano
Chapter 9. Multiple Testing and Simultaneous Inference
Abstract
Virtually any scientific experiment sets out to answer questions about the process under investigation, which often can be formally translated into a set of hypotheses. It is the exception that a single hypothesis is considered. In fact, the analysis of data or “data snooping” or “data mining” often leads to further questions as well.
E. L. Lehmann, Joseph P. Romano
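One standard device for controlling errors over a whole set of hypotheses is the Holm step-down procedure. A minimal sketch (our illustration, not code from the book); it controls the familywise error rate at level alpha under arbitrary dependence among the p-values:

```python
# Holm step-down procedure (our sketch): examine p-values from smallest
# to largest, comparing the k-th smallest against alpha / (m - k).
def holm(pvalues, alpha=0.05):
    """Return the indices of rejected hypotheses."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    rejected = []
    for k, i in enumerate(order):
        if pvalues[i] <= alpha / (m - k):
            rejected.append(i)
        else:
            break  # step-down: stop at the first failure
    return rejected

print(holm([0.001, 0.04, 0.03, 0.2]))  # only the smallest survives here
```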
Chapter 10. Conditional Inference
Abstract
The present chapter has a somewhat different character from the preceding ones. It is concerned with problems regarding the proper choice and interpretation of tests and confidence procedures, problems which—despite a large literature—have not found a definitive solution. The discussion will thus be more tentative than in earlier chapters, and will focus on conceptual aspects more than on technical ones. Consider the situation in which either the experiment \({\mathcal {E}}\) of observing a random quantity X with density \(p_{\theta }\) (with respect to \(\mu \)) or the experiment \({\mathcal {F}}\) of observing an X with density \(q_{\theta }\) (with respect to \(\nu \)) is performed with probability p and \(q=1-p\) respectively.
E. L. Lehmann, Joseph P. Romano

Asymptotic Theory

Frontmatter
Chapter 11. Basic Large-Sample Theory
Abstract
Chapters 3, 4, 5, 6, and 7 were concerned with the derivation of UMP, UMP unbiased, and UMP invariant tests. Unfortunately, the existence of such tests turned out to be restricted essentially to one-parameter families with monotone likelihood ratio, exponential families, and group families, respectively. Tests maximizing the minimum or average power over suitable classes of alternatives exist fairly generally, but are difficult to determine explicitly, and their derivation in Chapter 8 was confined primarily to situations in which invariance considerations apply.
E. L. Lehmann, Joseph P. Romano
Chapter 12. Extensions of the CLT to Sums of Dependent Random Variables
Abstract
In this chapter, we consider some extensions of the Central Limit Theorem to classes of sums (or averages) of dependent random variables. Many further extensions are possible, but we focus on ones that will be useful in the sequel. Section 12.2 considers sampling without replacement from a finite population. As an application, the potential outcomes framework is introduced in order to study treatment effects. The class of U-statistics is studied in Section 12.3, with applications to the classical one-sample signed-rank statistic and the two-sample Wilcoxon rank-sum statistic.
E. L. Lehmann, Joseph P. Romano
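The two rank statistics named above can be computed directly from their definitions. A brute-force sketch (ours, not the book's), assuming distinct absolute values so that ranks are unambiguous:

```python
# Our sketch of the one-sample signed-rank statistic and the two-sample
# Wilcoxon/Mann-Whitney statistic in its U-statistic (kernel-count) form.

def signed_rank(x):
    """Sum of the ranks of |x_i| over those i with x_i > 0.

    Assumes distinct absolute values, so every rank is unambiguous."""
    ranks = {v: r for r, v in enumerate(sorted(abs(v) for v in x), start=1)}
    return sum(ranks[abs(v)] for v in x if v > 0)

def mann_whitney_count(x, y):
    """Number of pairs (x_i, y_j) with x_i < y_j.

    Dividing by len(x) * len(y) gives the U-statistic estimate of P(X < Y)."""
    return sum(1 for xi in x for yj in y if xi < yj)

x = [1.2, -0.5, 2.1, -3.0, 0.7]
y = [2.5, 3.1, -0.2]
print(signed_rank(x))
print(mann_whitney_count(x, y))
```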
Chapter 13. Applications to Inference
Abstract
In this chapter, we apply some basic asymptotic concepts and results to gain insight into the large-sample behavior of some fundamental tests. Section 13.2 considers the robustness of some classical tests, like the t-test, when the underlying assumptions may not hold. For example, for testing the null hypothesis that the underlying mean is zero, it is seen that the t-test is pointwise asymptotically level \(\alpha \) for any distribution F with finite nonzero variance. However, such a robustness of validity result does not always extend to other parametric procedures, as will be seen.
E. L. Lehmann, Joseph P. Romano
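The robustness-of-validity claim above is easy to probe by simulation. A small Monte Carlo sketch (ours), using a centered exponential as an example of a skewed F with finite nonzero variance:

```python
# Monte Carlo sketch (ours): under a skewed F with mean 0, the two-sided
# t-test of "mean = 0" rejects at close to its nominal 5% level for
# moderately large n, illustrating pointwise asymptotic validity.
import math
import random

def t_rejects(sample, crit=1.96):
    """Two-sided t-test decision using the normal critical value."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((v - mean) ** 2 for v in sample) / (n - 1)
    return abs(mean / math.sqrt(var / n)) > crit

random.seed(0)
n, reps = 200, 2000
rejections = sum(
    t_rejects([random.expovariate(1.0) - 1.0 for _ in range(n)])  # centered Exp(1)
    for _ in range(reps)
)
rate = rejections / reps
print(rate)  # should be near the nominal 0.05
```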
Chapter 14. Quadratic Mean Differentiable Families
Abstract
As mentioned at the beginning of Chapter , the finite-sample theory of optimality for hypothesis testing is applied only to rather special parametric families, primarily exponential families and group families. On the other hand, asymptotic optimality will apply more generally to parametric families satisfying smoothness conditions. In particular, we shall assume a certain type of differentiability condition, called quadratic mean differentiability.
E. L. Lehmann, Joseph P. Romano
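For reference, quadratic mean differentiability has a standard formulation in the large-sample literature (our paraphrase, not a quotation from the chapter): the family with densities \(p_\theta \) is q.m.d. at \(\theta _0\) if there exists a vector-valued function \(\eta (x,\theta _0)\) such that

\[
\int \Big [ \sqrt{p_{\theta _0+h}(x)} - \sqrt{p_{\theta _0}(x)} - \langle \eta (x,\theta _0),\, h\rangle \Big ]^2 \, d\mu (x) = o(|h|^2) \quad \text{as } |h| \rightarrow 0 .
\]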
Chapter 15. Large-Sample Optimality
Abstract
In this chapter, some asymptotic optimality theory of hypothesis testing is developed. We consider testing one sequence of distributions against another (the asymptotic version of testing a simple hypothesis against a simple alternative). It turns out that this problem degenerates if the two sequences are too close together or too far apart. The non-degenerate situation can be characterized in terms of a suitable distance or metric between the distributions of the two sequences. Two such metrics, the total variation and the Hellinger metric, will be introduced below.
E. L. Lehmann, Joseph P. Romano
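The two metrics named above are simple to compute for finite discrete distributions. A quick numerical sketch (our example), with the Hellinger distance normalized to lie in \([0,1]\):

```python
# Our sketch: total variation ||p - q||_TV = (1/2) sum |p_i - q_i| and
# Hellinger distance H(p, q) = sqrt( (1/2) sum (sqrt(p_i) - sqrt(q_i))^2 )
# for two discrete distributions on the same finite support.
import math

def total_variation(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def hellinger(p, q):
    return math.sqrt(sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                         for pi, qi in zip(p, q)) / 2)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
tv, h = total_variation(p, q), hellinger(p, q)
print(tv, h)
# With this normalization, H^2 <= TV <= H * sqrt(2 - H^2) always holds.
```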
Chapter 16. Testing Goodness of Fit
Abstract
So far, the principal framework of this book has been optimality (either exact or asymptotic) in situations where both the hypothesis and the class of alternatives were specified by parametric models. In the present chapter, we shall take up the crucial problem of testing the validity of such models, the hypothesis of goodness of fit. For example, we would like to know whether a set of measurements \(X_1 , \ldots , X_n\) is consonant with the assumption that the X’s are an i.i.d. sample from a normal distribution.
E. L. Lehmann, Joseph P. Romano
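As a concrete instance of a goodness-of-fit statistic for the normality question above, here is a minimal sketch (ours, not the book's) of the one-sample Kolmogorov–Smirnov distance between the empirical CDF and a hypothesized \(N(0,1)\) CDF:

```python
# Our sketch: KS statistic sup_x |F_n(x) - F(x)|, which for a continuous
# F is attained at the order statistics of the sample.
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_statistic(xs, cdf=normal_cdf):
    xs = sorted(xs)
    n = len(xs)
    # At the i-th order statistic, F_n jumps from i/n to (i+1)/n.
    return max(
        max((i + 1) / n - cdf(x), cdf(x) - i / n)
        for i, x in enumerate(xs)
    )

data = [-1.2, -0.4, 0.1, 0.3, 0.8, 1.5]
d = ks_statistic(data)
print(d)
```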
Chapter 17. Permutation and Randomization Tests
Abstract
In this chapter, and Chapter 18, we shall deal with situations where both the null hypothesis and the class of alternatives may be nonparametric and so, as a result, it may be difficult even to construct tests (or confidence regions) that satisfactorily control the level (exactly or asymptotically). For such situations, we shall develop methods which achieve this modest goal under fairly general assumptions. A secondary aim will then be to obtain some idea of the power of the resulting tests.
E. L. Lehmann, Joseph P. Romano
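To illustrate how such level-controlling methods work, here is a compact sketch (our construction) of an exact two-sample permutation test for equality of distributions, using the difference in means as the test statistic and enumerating all group assignments:

```python
# Our sketch: exact permutation p-value for a two-sample comparison.
# Under the null that both samples come from the same distribution,
# every assignment of the pooled data into groups is equally likely.
from itertools import combinations

def perm_test_pvalue(x, y):
    pooled = x + y
    n, m = len(x), len(y)

    def stat(idx):
        g1 = [pooled[i] for i in idx]
        g2 = [pooled[i] for i in range(n + m) if i not in idx]
        return abs(sum(g1) / n - sum(g2) / m)

    observed = stat(range(n))
    perms = [stat(idx) for idx in combinations(range(n + m), n)]
    # p-value: fraction of assignments at least as extreme as observed
    return sum(s >= observed for s in perms) / len(perms)

x = [8.2, 9.1, 7.8, 8.9]
y = [6.5, 7.0, 6.8]
p = perm_test_pvalue(x, y)
print(p)  # observed split is the most extreme of C(7,4) = 35 assignments
```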
Chapter 18. Bootstrap and Subsampling Methods
Abstract
The bootstrap, subsampling, and other resampling methods provide tools for inference, especially in problems where large-sample approximations are not tractable. Such methods are not foolproof and require mathematical justification. In this chapter, fundamental properties of these methods are developed.
E. L. Lehmann, Joseph P. Romano
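A bare-bones sketch (ours) of the nonparametric bootstrap: resample the data with replacement, recompute the statistic, and read off a percentile confidence interval. As the abstract cautions, real use of such intervals requires justification.

```python
# Our sketch: percentile bootstrap confidence interval for a statistic.
import random

def bootstrap_ci(data, stat, level=0.95, reps=2000, seed=0):
    rng = random.Random(seed)
    # Recompute the statistic on reps resamples drawn with replacement.
    boot = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(reps)
    )
    lo = boot[int(((1 - level) / 2) * reps)]
    hi = boot[int((1 - (1 - level) / 2) * reps) - 1]
    return lo, hi

data = [2.1, 3.4, 1.9, 4.2, 3.3, 2.8, 3.9, 2.5]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(data, mean)
print(lo, hi)  # the interval brackets the sample mean
```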
Backmatter
Metadata
Title
Testing Statistical Hypotheses
Authors
Dr. E.L. Lehmann
Prof. Joseph P. Romano
Copyright Year
2022
Electronic ISBN
978-3-030-70578-7
Print ISBN
978-3-030-70577-0
DOI
https://doi.org/10.1007/978-3-030-70578-7
