
About this Book

This book is based upon lecture notes developed by Jack Kiefer for a course in statistical inference he taught at Cornell University. The notes were distributed to the class in lieu of a textbook, and the problems were used for homework assignments. Relying only on modest prerequisites of probability theory and calculus, Kiefer's approach to a first course in statistics is to present the central ideas of the modern mathematical theory with a minimum of fuss and formality. He is able to do this by using a rich mixture of examples, pictures, and mathematical derivations to complement a clear and logical discussion of the important ideas in plain English. The straightforwardness of Kiefer's presentation is remarkable in view of the sophistication and depth of his examination of the major theme: How should an intelligent person formulate a statistical problem and choose a statistical procedure to apply to it? Kiefer's view, in the same spirit as Neyman and Wald, is that one should try to assess the consequences of a statistical choice in some quantitative (frequentist) formulation and ought to choose a course of action that is verifiably optimal (or nearly so) without regard to the perceived "attractiveness" of certain dogmas and methods.



Chapter 1. Introduction to Statistical Inference

A typical problem in probability theory is of the following form: A sample space and underlying probability function are specified, and we are asked to compute the probability of a given chance event. For example, if \(X_1, \ldots, X_n\) are independent Bernoulli random variables with \(P\{X_i = 1\} = 1 - P\{X_i = 0\} = p\), we compute that the probability of the chance event \(\left\{ \sum_{i = 1}^n X_i = r \right\}\), where r is an integer with \(0 \le r \le n\), is \(\binom{n}{r} p^r (1 - p)^{n - r}\). In a typical problem of statistics it is not a single underlying probability law which is specified, but rather a class of laws, any of which may possibly be the one which actually governs the chance device or experiment whose outcome we shall observe. We know that the underlying probability law is a member of this class, but we do not know which one it is. The object might then be to determine a “good” way of guessing, on the basis of the outcome of the experiment, which of the possible underlying probability laws is the one which actually governs the experiment whose outcome we are to observe.
Jack Carl Kiefer
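As a small illustration (a sketch added here, not drawn from the text), the binomial probability above can be computed directly and checked by simulating the Bernoulli experiment; all names and the particular values n = 10, r = 3, p = 0.4 are chosen for the example:

```python
import math
import random

def binomial_prob(n, r, p):
    """P{sum of n independent Bernoulli(p) variables equals r}."""
    return math.comb(n, r) * p**r * (1 - p)**(n - r)

# Exact probability of r = 3 successes in n = 10 trials with p = 0.4
exact = binomial_prob(10, 3, 0.4)

# Monte Carlo check: repeat the experiment many times and count hits
random.seed(0)
trials = 100_000
hits = sum(1 for _ in range(trials)
           if sum(random.random() < 0.4 for _ in range(10)) == 3)
approx = hits / trials
```

The exact and simulated values agree to within the sampling error of the simulation, a few parts in a thousand here.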

Chapter 2. Specification of a Statistical Problem

We shall begin by giving a formulation of statistics problems which is general enough to cover many cases of practical importance. Modifications will be introduced into this formulation later, as they are needed.

Chapter 3. Classifications of Statistical Problems

When statisticians discuss statistical problems they naturally classify them in certain ways. In the sequel we give an informal discussion of several such classifications in terms of the nomenclature we have introduced. Many of the categories mentioned later correspond to important ideas. Discussion of statistical problems in a particular category therefore becomes an important topic. Thus the following is intended also to give an informal outline of some topics we shall cover in later chapters. You may not understand all aspects of this outline at a first reading, but you are urged to refer to this chapter as various topics are taken up in order to fit them into a proper perspective.

Chapter 4. Some Criteria For Choosing a Procedure

Having specified the statistics problem or model associated with an experiment with which we are concerned, we must select a single procedure t to be used with the data from that experiment. Since for most statistical problems there is no uniformly best procedure, some additional criterion or criteria are necessary. The purpose of such a criterion is to specify uniquely a statistical procedure t which will be used. Some of the criteria which are often used to select a single procedure t in a particular statistics problem are the following:
Bayes criterion
Minimax criterion
Invariance criterion
Unbiasedness criterion
Sufficiency criterion
Robustness criterion
Maximum likelihood and likelihood ratio methods
Method of moments
Criteria of asymptotic efficiency
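The first two criteria in the list can be illustrated with a toy computation (an added sketch, not from the text): for hypothetical procedures with risk points \((R(t, \theta_1), R(t, \theta_2))\) over a two-element Ω, the Bayes criterion minimizes the prior-weighted average risk while the minimax criterion minimizes the worst-case risk, and the two can select different procedures. The risk values and procedure names below are invented for the example:

```python
# Risk points (R(t, theta_1), R(t, theta_2)) for three hypothetical procedures
risks = {
    "t1": (0.10, 0.40),
    "t2": (0.25, 0.20),
    "t3": (0.30, 0.15),
}

def bayes_choice(risks, prior):
    """Procedure minimizing the prior-weighted average of the two risks."""
    return min(risks, key=lambda t: prior[0] * risks[t][0] + prior[1] * risks[t][1])

def minimax_choice(risks):
    """Procedure minimizing the maximum (worst-case) risk over the two states."""
    return min(risks, key=lambda t: max(risks[t]))

bayes = bayes_choice(risks, (0.8, 0.2))  # prior puts weight 0.8 on theta_1
mm = minimax_choice(risks)
```

With this prior, the Bayes criterion picks t1 (Bayes risk 0.16), while the minimax criterion picks t2 (worst-case risk 0.25), showing how different criteria can single out different procedures.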

Chapter 5. Linear Unbiased Estimation

Appendix C is optional reading with Section 5.1. It illustrates various minimization techniques of use here and elsewhere in statistics and in fields like operations research.
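As a minimal sketch of the kind of minimization involved in linear unbiased estimation (an added example, not reproducing Appendix C or Section 5.1), here is a closed-form least-squares fit of a line via the normal equations; the function name and data are invented for the illustration:

```python
def least_squares_line(xs, ys):
    """Ordinary least-squares fit of y ~ a + b*x via the normal equations."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)                     # sum of squares of x
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) # cross products
    b = sxy / sxx            # slope minimizing the sum of squared errors
    a = ybar - b * xbar      # intercept
    return a, b

# Data generated exactly by y = 1 + 2x, so the fit should recover (1, 2)
a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
```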

Chapter 6. Sufficiency

In the construction of statistical procedures using decision theory, one primary consideration in the search for good procedures is to limit the scope of the search as much as possible. It is the purpose of this chapter to describe sufficient statistics and sufficient partitions and to indicate why, from the point of view of decision theory, one need only use those procedures which are functions of a sufficient statistic or sufficient partition. (To do this it will sometimes be necessary to use randomized statistical procedures, defined in Section 4.3.) Since a sufficient statistic is often much simpler than X itself (often 1- rather than n-dimensional), it can be a considerable saving of effort to look only at functions of a sufficient statistic rather than at all procedures (functions of X).
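The defining property of sufficiency, that the conditional distribution of the data given the sufficient statistic does not depend on the unknown parameter, can be checked by direct enumeration in the Bernoulli case, where \(T = \sum X_i\) is sufficient for p. The sketch below (an added example; the function name is invented) computes the conditional distribution of the full sequence given T = r exactly, using rational arithmetic, and the same uniform distribution comes out for every p:

```python
from itertools import product
from fractions import Fraction

def conditional_dist(n, r, p):
    """Distribution of the full Bernoulli sequence given sum = r.
    Since T = sum X_i is sufficient for p, the result should not depend on p."""
    p = Fraction(p)
    seqs = [s for s in product([0, 1], repeat=n) if sum(s) == r]
    joint = {s: p**sum(s) * (1 - p)**(n - sum(s)) for s in seqs}
    total = sum(joint.values())  # P{T = r}
    return {s: prob / total for s, prob in joint.items()}

# Two very different values of p give the identical conditional distribution:
# uniform over the C(4,2) = 6 sequences with two 1's, probability 1/6 each.
d1 = conditional_dist(4, 2, Fraction(1, 3))
d2 = conditional_dist(4, 2, Fraction(3, 4))
```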

Chapter 7. Point Estimation

In previous chapters, particularly Chapters 1 through 4, the ideas of decision theory and various types of statistical problems have been introduced. Chapter 5 has developed these ideas in the context of squared error and linear theory, treating the subjects of best linear unbiased estimation and the analysis of variance. Related to the material of Chapter 5 is the subject of experimental design. Chapter 6 introduced the idea of sufficiency. All of these are ideas which have great applicability to statistical problems.

Chapter 8. Hypothesis Testing

Recall from Chapters 3 and 4 the brief mention of the setup of hypothesis testing and the way it fits into our general picture. (Also, review the parts of Section 4.4 and Problems 4.3, 4.12–4.17 on the geometry of risk points in the case where Ω contains two elements.)

Chapter 9. Confidence Intervals

We discussed interval estimation briefly in Chapters 3 and 4. We shall recall briefly how interval estimation differs from the considerations of Chapters 5 and 7, in which D consisted of the possible values of some function ϕ of the unknown parameter value θ, the decision d0 meaning “my guess is that the true value of ϕ(θ) is d0.” Thus d0 was typically a single point of an interval D (i.e., for simplicity we think of the set of decisions as a real number interval), and these problems were termed ones of point estimation. If the decision is to be an interval of real numbers rather than a single point, the problem is called one of interval estimation. In a problem of this kind, a decision may be “my guess is that the table is between 4 and 4.3 feet long” rather than “my guess is that the table is 4.1 feet long.” Thus, when we are discussing confidence intervals, D consists of a collection of intervals, and we may denote by [a, b] that decision which consists of the set of real numbers between a and b inclusive; the decision in the example of the table would be [4.0, 4.3].
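A standard concrete instance of an interval decision rule (an added sketch, not from the text) is the familiar interval \(\bar{X} \pm 1.96\,\sigma/\sqrt{n}\) for the mean of a normal distribution with known σ. The simulation below, with all names and the particular values (a true length of 4.1 with σ = 0.5, echoing the table example) chosen for the illustration, checks that the procedure covers the true value close to 95% of the time:

```python
import random
import statistics

def normal_mean_ci(sample, sigma, z=1.96):
    """95% interval decision [a, b] for a normal mean with known sigma."""
    n = len(sample)
    xbar = statistics.fmean(sample)
    half = z * sigma / n**0.5
    return (xbar - half, xbar + half)

# Coverage check: the interval should contain the true mean about 95% of the time
random.seed(1)
mu, sigma, n = 4.1, 0.5, 25
reps = 2000
covered = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    a, b = normal_mean_ci(sample, sigma)
    covered += a <= mu <= b
coverage = covered / reps
```

Each repetition produces a different decision [a, b]; it is the long-run frequency with which these random intervals cover the fixed true value that the 95% refers to.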

