
1995 | Book

Theory of Statistics

Author: Mark J. Schervish

Publisher: Springer New York

Book series: Springer Series in Statistics


About this book

The aim of this graduate textbook is to provide a comprehensive advanced course in the theory of statistics covering those topics in estimation, testing, and large sample theory which a graduate student might typically need to learn as preparation for work on a Ph.D. An important strength of this book is that it provides a mathematically rigorous and even-handed account of both Classical and Bayesian inference in order to give readers a broad perspective. For example, the "uniformly most powerful" approach to testing is contrasted with available decision-theoretic approaches.

Table of Contents

Frontmatter
Chapter 1. Probability Models
Abstract
The purpose of this book is to cover important topics in the theory of statistics in a very thorough and general fashion. In this section, we will briefly review some of the basic theory of statistics with which many students are familiar. All that we do here will be repeated in a more precise manner at the appropriate place in the text.
Mark J. Schervish
Chapter 2. Sufficient Statistics
Abstract
We now turn our attention to the broad area of statistics. This will concern the manner in which one learns from data. In this chapter, we will study some of the basic properties of probability models and how data can be used to help us learn about the parameters of those models.
Mark J. Schervish
Chapter 3. Decision Theory
Abstract
A major use of statistical inference is its application to decision making under uncertainty. When the costs and/or benefits of our actions depend on quantities we will not know until after we make our decisions, we need to be able to weigh the costs against the uncertainties intelligently.
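The idea of weighing costs against uncertainties can be sketched numerically: choose the action that minimizes expected loss. The states, probabilities, actions, and loss table below are invented for illustration and are not from the book.

```python
# Minimal sketch of decision making under uncertainty: pick the action
# minimizing expected loss.  All numbers here are hypothetical.

states = ["low", "high"]             # unknown state of nature
probs = {"low": 0.7, "high": 0.3}    # assumed probabilities over states
actions = ["invest", "hold"]

# loss[(state, action)]: cost incurred if we take `action` and `state` obtains
loss = {
    ("low", "invest"): 5.0, ("low", "hold"): 1.0,
    ("high", "invest"): 0.0, ("high", "hold"): 4.0,
}

def expected_loss(action):
    """Expected loss of an action, averaging over the uncertain state."""
    return sum(probs[s] * loss[(s, action)] for s in states)

best = min(actions, key=expected_loss)
```

Here "hold" wins because its loss is moderate in both states, while "invest" is costly precisely in the more probable state.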
Mark J. Schervish
Chapter 4. Hypothesis Testing
Abstract
Recall the setup used at the beginning of Chapter 3. We had a probability space (S, 𝒜, μ) and a function V : S → 𝒱. One example of V is the parameter Θ. Other examples are measurable functions of Θ. Other V functions, which are not functions of Θ, are possible but are rarely seen in classical statistics. This is true to a greater extent in hypothesis testing for reasons that will become more apparent once we study the criteria used for selecting tests in classical statistics.
Mark J. Schervish
Chapter 5. Estimation
Abstract
In Chapter 3, we discussed methods for choosing decision rules in problems with specified loss functions. In Section 3.3, we gave an axiomatic derivation of some of those methods. This derivation led to the conclusion that there is a probability and a loss function, and one should minimize the expected loss. There are decision problems in which ℵ and Ω are the same (or nearly the same) space and the loss function L(θ,a) is an increasing function of some measure of distance between θ and a. Such problems are often called point estimation problems. The classical framework makes no use of the probability over the parameter space provided by the axiomatic derivation. One can also try to ignore the loss function as well. To estimate θ without a specific loss function, one can adopt ad hoc criteria to decide if an estimator is good. In this chapter, we will study some of these criteria as well as some criteria for the problem of set estimation. In set estimation, the action space is a collection of subsets of the parameter space (or the closure of the parameter space). The idea is to find a set that is likely to contain the parameter without being “too big” in some sense.
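One concrete instance of the point-estimation setup: under squared-error loss L(θ, a) = (θ − a)², the action minimizing posterior expected loss is the posterior mean. A small numerical sketch (the discrete posterior below is invented, not from the text) confirms this by brute-force search over a grid of candidate actions.

```python
# Hypothetical illustration: with squared-error loss, the Bayes action
# equals the posterior mean.  The posterior here is a toy example.
import numpy as np

theta = np.array([0.0, 1.0, 2.0])    # support of a toy discrete posterior
post = np.array([0.2, 0.5, 0.3])     # invented posterior probabilities

grid = np.linspace(-1.0, 3.0, 4001)  # candidate actions, spacing 0.001
risk = [(post * (theta - a) ** 2).sum() for a in grid]
a_star = grid[int(np.argmin(risk))]  # grid minimizer of expected loss

post_mean = float((post * theta).sum())
```

The grid search recovers the posterior mean (1.1 here) to within the grid spacing.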
Mark J. Schervish
Chapter 6. Equivariance
Abstract
In Chapter 3, we introduced a few principles of classical decision theory (e.g., minimaxity, ancillarity) to help to choose among admissible rules. In Chapter 5, we introduced another principle called unbiasedness which could be used to select a subset of the class of all estimators. As we saw, sometimes none of the unbiased estimators was admissible. In this chapter, we introduce another ad hoc principle called equivariance, which can also be used to select a subset of the class of all estimators. The principle of equivariance, in its most general form, relies on the algebraic theory of groups. However, the basic concept can be understood by means of a simple class of problems in which the principle can apply.
Mark J. Schervish
Chapter 7. Large Sample Theory
Abstract
In calculus courses, the concept of convergence of sequences is introduced. In this section, we will generalize that concept to include different types of stochastic convergence.
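The simplest stochastic convergence notion, convergence in probability, can be illustrated by simulation: by the weak law of large numbers, the probability that a sample mean falls farther than ε from the true mean shrinks as n grows. The distribution, sample sizes, and tolerance below are assumptions for the sketch, not the book's examples.

```python
# Simulation sketch of convergence in probability (weak law of large numbers):
# the empirical frequency of |mean - 0.5| > eps drops as n increases.
import random
random.seed(0)

def freq_far_from_mean(n, eps=0.1, reps=2000):
    """Fraction of repetitions where the mean of n Uniform(0,1) draws
    lands more than eps away from the true mean 0.5."""
    far = 0
    for _ in range(reps):
        xbar = sum(random.random() for _ in range(n)) / n
        if abs(xbar - 0.5) > eps:
            far += 1
    return far / reps

small_n = freq_far_from_mean(10)     # noticeable fraction of misses
big_n = freq_far_from_mean(1000)     # misses become essentially impossible
```

With n = 1000 the standard deviation of the mean is roughly 0.009, so deviations beyond 0.1 practically never occur.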
Mark J. Schervish
Chapter 8. Hierarchical Models
Abstract
When a model has many parameters, it may be the case that we can consider them as a sample from some distribution. In this way we model the parameters with another set of parameters and build a model with different levels of hierarchy. In this chapter we will discuss situations in which it is natural to model in this way.
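The sampling structure described here can be sketched as a two-level generative model: hyperparameters generate group-level parameters, which in turn generate observations. The distributions and all numerical values below are hypothetical, chosen only to show the hierarchy, and do not follow the book's notation.

```python
# Two-level hierarchy sketch: hyperparameters -> group parameters -> data.
import random
random.seed(1)

mu0, tau = 10.0, 2.0    # assumed hyperparameters: mean and sd of group means
sigma = 1.0             # assumed within-group sd
n_groups, n_obs = 5, 100

data = {}
for g in range(n_groups):
    theta_g = random.gauss(mu0, tau)   # group parameter drawn from the hyperprior
    data[g] = [random.gauss(theta_g, sigma) for _ in range(n_obs)]

# Sample means per group scatter around mu0 with spread governed by tau.
group_means = {g: sum(xs) / len(xs) for g, xs in data.items()}
```

Because the group parameters are themselves draws from a common distribution, information can be shared across groups, which is the motivation for hierarchical modeling.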
Mark J. Schervish
Chapter 9. Sequential Analysis
Abstract
Most of the results of earlier chapters concern situations in which a particular data set is to be observed and the decisions, if any, to be made concern the values of future observations. It sometimes happens that as we observe, we get to decide what, if any, data to collect next. In this chapter, we will describe some theory and methods for dealing with such situations.
Mark J. Schervish
Erratum
Mark J. Schervish
Backmatter
Metadata
Title
Theory of Statistics
Author
Mark J. Schervish
Copyright year
1995
Publisher
Springer New York
Electronic ISBN
978-1-4612-4250-5
Print ISBN
978-1-4612-8708-7
DOI
https://doi.org/10.1007/978-1-4612-4250-5