
## About this Book

This monograph arose out of a desire to develop an approach to statistical inference that would be both comprehensive in its treatment of statistical principles and sufficiently powerful to be applicable to a variety of important practical problems. In the latter category, the problems of inference for stochastic processes (which arise commonly in engineering and biological applications) come to mind. Classes of estimating functions seem to be promising in this respect. The monograph examines some of the consequences of extending standard concepts of ancillarity, sufficiency and completeness into this setting.

The reader should note that the development is mathematically "mature" in its use of Hilbert space methods but not, we believe, mathematically difficult. This is in keeping with our desire to construct a theory that is rich in statistical tools for inference without the difficulties found in modern developments, such as likelihood analysis of stochastic processes or higher order methods, to name but two. The fundamental notions of orthogonality and projection are accessible to a good undergraduate or beginning graduate student. We hope that the monograph will serve the purpose of enriching the methods available to statisticians of various interests.

## Table of Contents

### Chapter 1. Introduction

Abstract
The theory of estimating functions has become of interest in a wide variety of statistical applications, partly because it has a number of virtues in common with methods such as maximum likelihood estimation while possessing sufficient flexibility to tackle problems where maximum likelihood fails, such as the Neyman-Scott paradox. In this monograph we present a self-contained development with a number of applications to estimation, censoring, robustness and inferential separation of parameters.
D. L. McLeish, Christopher G. Small
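The Neyman-Scott paradox mentioned in the abstract can be illustrated with a small simulation (a hypothetical sketch of ours, not taken from the monograph): with many normal pairs, each pair sharing its own unknown mean, the maximum likelihood estimator of the common variance σ² converges to σ²/2, while a simple unbiased estimating equation remains consistent.

```python
import random

def neyman_scott_mle(pairs):
    """Per-pair MLE of sigma^2 when each pair (x1, x2) has its own unknown
    mean: averages squared deviations from each pair's own sample mean."""
    n = len(pairs)
    return sum((x1 - x2) ** 2 / 2.0 for x1, x2 in pairs) / (2.0 * n)

def unbiased_estimate(pairs):
    """Root of the unbiased estimating equation
    sum_i [ (x1_i - x2_i)^2 / 2 - sigma^2 ] = 0, which stays consistent."""
    n = len(pairs)
    return sum((x1 - x2) ** 2 / 2.0 for x1, x2 in pairs) / n

random.seed(1)
sigma = 1.0
# 50_000 pairs, each with its own nuisance mean mu_i (illustrative setup)
pairs = []
for _ in range(50_000):
    mu = random.uniform(-10, 10)
    pairs.append((random.gauss(mu, sigma), random.gauss(mu, sigma)))

print(neyman_scott_mle(pairs))   # tends toward sigma^2 / 2 = 0.5, not 1.0
print(unbiased_estimate(pairs))  # tends toward the true sigma^2 = 1.0
```

The number of nuisance means grows with the sample size, so the usual asymptotic justification for maximum likelihood breaks down, while the estimating-function approach sidesteps it.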

### Chapter 2. The Space of Inference Functions: Ancillarity, Sufficiency and Projection

Abstract
In this chapter, we construct the space of inference functions and information theoretic notions of E-sufficiency and E-ancillarity within this space. Let X be a sample space, and P be a class of probability measures P on X. For each P we let $V_P$ be the vector space of real valued functions f defined on the sample space X such that $E_P[f(X)]^2 < \infty$. We introduce the usual inner product defined on $V_P$,
$$\langle f_1, f_2 \rangle_P = E_P\{f_1(X)f_2(X)\}$$
Let θ be a real valued function on the class of probability measures P and define the parameter space Θ = {θ(P); P ∈ P}. Note that θ need not be a one-to-one functional. If it is, we call the model a one-parameter model.
D. L. McLeish, Christopher G. Small
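The inner product above supports the projection operations that run through the monograph. The following sketch (our own illustration; the choice X ~ N(0, 1) and the functions f and g are assumptions) approximates ⟨f₁, f₂⟩_P by Monte Carlo and projects one function onto the span of another, leaving a residual orthogonal to the projection direction.

```python
import random

random.seed(0)
# Sample from an assumed P: X ~ N(0, 1)
xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]

def inner(f1, f2, sample):
    """Monte Carlo version of <f1, f2>_P = E_P{ f1(X) f2(X) }."""
    return sum(f1(x) * f2(x) for x in sample) / len(sample)

def f(x):
    return x ** 3   # function to be projected (arbitrary choice)

def g(x):
    return x        # direction to project onto (arbitrary choice)

# Orthogonal projection of f onto span{g}: (<f, g> / <g, g>) g
coef = inner(f, g, xs) / inner(g, g, xs)

def residual(x):
    return f(x) - coef * g(x)

print(coef)                     # near E[X^4] / E[X^2] = 3 for N(0, 1)
print(inner(residual, g, xs))   # ~ 0: residual is orthogonal to g
</imports>```

By construction the residual satisfies ⟨f − cg, g⟩ = 0 exactly (up to floating point), which is the elementary orthogonality fact the chapter builds on.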

### Chapter 3. Selecting an Inference Function for 1-Parameter Models

Abstract
We shall also suppose that Ψ is unrestricted, in the sense that it consists of all unbiased inference functions.
D. L. McLeish, Christopher G. Small

### Chapter 4. Nuisance Parameters

Abstract
In this chapter we consider the traditional type of nuisance parameter model in which, in addition to the parameter θ we also have a vector of nuisance parameters (ξ1,…,ξp) such that θ, ξ1,…,ξp is a complete description of the probability model. By this we mean that if θ(P)=θ(Q) and ξi(P)=ξi(Q) for i=1,…,p then P=Q. The primary problem that we shall discuss is the construction of a space Ψ with constant covariance structure for different nuisance parameter problems. In section 4.1, we consider the use of group invariant methods to eliminate nuisance parameters with particular attention to location-scale models. In section 4.2 the use of conditional models is described in the context of inference functions. Finally, in section 4.3, we shall consider the construction of inference functions in more difficult situations where techniques described earlier do not work.
D. L. McLeish, Christopher G. Small
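The group-invariance idea can be sketched for a pure location nuisance parameter (an illustrative example of ours, not drawn from the text): deviations from the sample mean are invariant under location shifts, so an estimating function for σ² built only from them involves no μ at all.

```python
import random
import math

random.seed(2)
mu, sigma = 7.3, 2.0          # mu plays the role of the nuisance parameter
xs = [random.gauss(mu, sigma) for _ in range(100_000)]

# Location-invariant reduction: deviations from the sample mean are
# unchanged if every observation is shifted by the same constant.
xbar = sum(xs) / len(xs)
invariants = [x - xbar for x in xs]

# Unbiased estimating equation for sigma^2 built only from invariants:
#   sum_i [ (x_i - xbar)^2 ] - (n - 1) * sigma^2 = 0
n = len(xs)
sigma2_hat = sum(d * d for d in invariants) / (n - 1)
print(math.sqrt(sigma2_hat))   # recovers sigma without ever estimating mu
```

Shifting every observation by the same constant leaves `invariants`, and hence the estimate, unchanged, which is the sense in which the group action eliminates the nuisance parameter.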

### Chapter 5. Inference under Restrictions: Least Squares, Censoring and Errors in Variables Techniques

Abstract
There are many problems in which a solution would be relatively easier to obtain if there were some additional information recorded or observable. We may, for example, observe a sum of variates of the form Y=X + ε where X and ε are independent. In general, if our observations are distributed according to a convolution, the probability density function may be intractable for maximum likelihood estimation since it may be expressible only as an integral or sum. However, if the components of the sum were observable, then estimation by likelihood methods would often be quite easy.
D. L. McLeish, Christopher G. Small
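A minimal sketch of the convolution difficulty and an estimating-function workaround (our own illustration; the exponential-plus-normal model is an assumption): only the sum Y = X + ε is observed, and its density is a convolution integral, yet an unbiased moment-type estimating function avoids that density entirely.

```python
import random

random.seed(3)
theta = 0.5                       # rate of the unobserved exponential X
ys = [random.expovariate(theta) + random.gauss(0.0, 1.0)
      for _ in range(100_000)]    # only the convolution Y = X + eps is seen

# The density of Y is a convolution integral, awkward for direct maximum
# likelihood. A moment-type estimating function sidesteps it: since
# E[eps] = 0 and E[X] = 1/theta,
#   g(theta) = sum_i (y_i - 1/theta) = 0   =>   theta_hat = 1 / ybar
ybar = sum(ys) / len(ys)
theta_hat = 1.0 / ybar
print(theta_hat)                  # close to the true theta = 0.5
```

The estimating function is unbiased for every θ even though the likelihood of Y has no convenient closed form, which is the flexibility the chapter exploits.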

### Chapter 6. Inference for Stochastic Processes

Abstract
Likelihood methodology is justifiable by a simple intuitive interpretation of likelihoods. For example, in a discrete probability space, the maximum likelihood estimator is that value of the parameter that would render the probability of the data maximum. However, when our observations are realized in a more complicated space, the intuitive argument for likelihood methodology is more difficult. In the case of models for continuous time stochastic processes there is often no single dominating measure on a sample space of paths that is analogous to Lebesgue measure for continuous random variables. Thus maximum likelihood estimates cannot generally be computed by maximizing over a convenient likelihood function. This leads to difficulties both in computation and in stability of the procedure, as we shall now discuss.

Consider, for example, a Wiener process X_t with zero drift and diffusion parameter σ. Suppose we wish to estimate σ based upon a single realization of the process from time t=0 to time t=T>0. Now the induced probability measures on the space of paths for two distinct values of σ are orthogonal. Therefore likelihood methodology suggests that from a single realization of the process the value of σ can be determined precisely. However, there is a continuity problem here that cannot be easily removed. In practice, it is impossible to make an infinitely precise measurement, and so we must allow for the fact that the process actually recorded is a combination of X_t and an error e_t. For example, the combined process X_t + e_t might be a rounded version of the original Wiener process. However, it can be seen that within any such error neighborhood of X_t lie paths which are not in the support of the Wiener measure.
D. L. McLeish, Christopher G. Small
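The instability described in the abstract can be seen numerically (an illustrative sketch; the sampling mesh and the rounding grid are our assumptions): on a finely sampled path, the realized quadratic variation recovers σ almost exactly, but the same estimator applied to a rounded record of that path fails badly, because rounding introduces spurious jumps as the path chatters across grid boundaries.

```python
import random
import math

random.seed(4)
sigma, T, n = 2.0, 1.0, 200_000
dt = T / n

# Simulate one finely sampled path of X_t with diffusion parameter sigma
x, path = 0.0, [0.0]
for _ in range(n):
    x += sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    path.append(x)

def quad_var(p):
    """Realized quadratic variation; converges to sigma^2 * T as the
    sampling mesh shrinks, reflecting the orthogonality of the path
    measures for distinct sigma."""
    return sum((b - a) ** 2 for a, b in zip(p, p[1:]))

print(math.sqrt(quad_var(path) / T))     # close to sigma from the exact path

# A rounded record of the same path (measurement error e_t) destroys this:
rounded = [round(v, 1) for v in path]
print(math.sqrt(quad_var(rounded) / T))  # badly inflated, far from sigma
```

The rounded path has piecewise-constant sample behaviour with 0.1-sized jumps, so it lies outside the support of every Wiener measure, and the "exact" answer promised by the orthogonality of the measures is not stable under this arbitrarily small recording error.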

### Backmatter

Further Information