
## About this Book

Prediction of a random field based on observations of that field at some set of locations arises in mining, hydrology, atmospheric sciences, and geography. Kriging, any prediction scheme that minimizes mean squared prediction error among some class of predictors under a particular model for the field, is commonly used in all these areas. This book summarizes past work and describes new approaches to thinking about kriging.
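As a concrete illustration of kriging in the sense just defined, the sketch below computes the best linear predictor of a mean-zero field at a new location by minimizing mean squared prediction error. The 1-D setting, the exponential covariance, and all names (`exp_cov`, `simple_krige`, `rho`) are illustrative assumptions, not taken from the book.

```python
import numpy as np

# Minimal simple-kriging sketch (illustrative, not from the book):
# predict Z(x0) from observations Z(x1),...,Z(xn) of a mean-zero field,
# assuming the exponential covariance K(h) = exp(-|h|/rho).

def exp_cov(x, y, rho=1.0):
    """Exponential covariance between all pairs from location arrays x, y."""
    return np.exp(-np.abs(x[:, None] - y[None, :]) / rho)

def simple_krige(x_obs, z_obs, x0, rho=1.0):
    """Best linear predictor of Z(x0) and its mean squared prediction error."""
    K = exp_cov(x_obs, x_obs, rho)           # covariances among observations
    k = exp_cov(x_obs, np.array([x0]), rho)  # covariances with x0
    lam = np.linalg.solve(K, k).ravel()      # kriging weights K^{-1} k
    pred = lam @ z_obs                       # predicted value
    mspe = 1.0 - lam @ k.ravel()             # K(0) - k' K^{-1} k
    return pred, mspe

x_obs = np.array([0.0, 0.5, 1.0])
z_obs = np.array([0.3, -0.1, 0.4])
pred, mspe = simple_krige(x_obs, z_obs, 0.75)
```

Because the field is interpolated exactly under this model, predicting at an observed location returns the observed value with zero mean squared prediction error.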

## Table of Contents

### 1. Linear Prediction

Abstract
This book investigates prediction of a spatially varying quantity based on observations of that quantity at some set of locations. Although the notion of prediction sometimes suggests the assessment of something that has not yet happened, here I take it to mean the assessment of any random quantity that is presently not known exactly. This work focuses on quantities that vary continuously in space and for which observations are made without error, although Sections 3.7, 4.2, 4.3, 6.6 and 6.8 do address some issues regarding measurement errors. Our goals are to obtain accurate predictions and to obtain reasonable assessments of the uncertainty in these predictions. The approach to prediction I take is to consider the spatially varying quantity to be a realization of a real-valued random field, that is, a family of random variables whose index set is $${\mathbb{R}^d}$$.
Michael L. Stein

### 2. Properties of Random Fields

Abstract
This chapter provides the necessary background on random fields for understanding the subsequent chapters on prediction and inference for random fields. The focus here is on weakly stationary random fields (defined later in this section) and the associated spectral theory. Some previous exposure to Fourier methods is assumed. A knowledge of the theory of characteristic functions at the level of a graduate course in probability (see, for example, Billingsley (1995), Chung (1974), or Feller (1971)) should, for the most part, suffice. When interpolating a random field, the local behavior of the random field turns out to be critical (see Chapter 3). Accordingly, this chapter goes into considerable detail about the local behavior of random fields and its relationship to spectral theory.
Michael L. Stein

### 3. Asymptotic Properties of Linear Predictors

Abstract
Suppose we observe a Gaussian random field Z with mean function m and covariance function K at some set of locations. Call the pair (m, K) the second-order structure of the random field. If (m, K) is known, then as noted in 1.2, the prediction of Z at unobserved locations is just a matter of calculation. To review, the conditional distribution of Z at an unobserved location is normal with conditional mean that is a linear function of the observations and constant conditional variance. In practice, (m, K) is at least partially unknown and it is usually necessary to estimate (m, K) from the same data we use to do the prediction. Thus, it might be natural to proceed immediately to methods for estimating second-order structures of Gaussian random fields. However, until we know something about the relationship between the second-order structure and linear predictors, it will be difficult to judge what is meant by a good estimate of the second-order structure. In particular, it will turn out that it is possible to get (m, K) nonnegligibly wrong and yet still get nearly optimal linear predictors. More specifically, for a random field possessing an autocovariance function, if the observations are tightly packed in a region in which we wish to predict the random field, then the low frequency behavior of the spectrum has little impact on the behavior of the optimal linear predictions.
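The conditional distribution described above can be written explicitly. In standard notation (the symbols below are the usual ones, not quoted from the book):

```latex
% Conditional (kriging) mean and variance for a Gaussian random field
% observed at x_1,...,x_n, predicting at an unobserved location x_0:
\[
  \mathrm{E}\{Z(x_0)\mid Z\} \;=\; m(x_0) + k^{\top} K^{-1}\,(Z - m),
  \qquad
  \mathrm{Var}\{Z(x_0)\mid Z\} \;=\; K(x_0,x_0) - k^{\top} K^{-1} k,
\]
% where Z = (Z(x_1),...,Z(x_n))', m = (m(x_1),...,m(x_n))',
% K is the n x n matrix of covariances among the observations,
% and k has ith entry K(x_0, x_i).
```

Note that the conditional mean is linear in the observations and the conditional variance does not depend on the observed values, exactly as stated above.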
Michael L. Stein

### 4. Equivalence of Gaussian Measures and Prediction

Abstract
The basic message of the results of 3.8 is that for interpolating a mean 0 weakly stationary random field based on observations on an infinite square lattice, the smaller the distance between neighboring observations in the lattice, the less the low frequency behavior of the spectrum matters. This suggests that if our goal is to interpolate our observations and we need to estimate the spectral density from these same observations, we should focus on getting the high frequency behavior of the spectral density as accurately as possible while not worrying so much about the low frequency behavior. Supposing that our observations and predictions will all take place in some bounded region R, a useful first question to ask is what can be done if we observe the process everywhere in R. Answering this question will put an upper bound on what one can hope to learn from some finite number of observations in R.
Michael L. Stein

### 5. Integration of Random Fields

Abstract
This chapter studies the prediction of integrals of random fields based on observations on a lattice. The goal here is not to give a full exposition of the topic (see Ritter (1995) for a more detailed treatment) but to make two specific points about properties of systematic designs. The first is that simple averages over observations from systematic designs can be very poor predictors of integrals of random fields, especially in higher dimensions. The second is that, at least for random fields that are not too anisotropic, the problem with this predictor is the simple average aspect of it, not the systematic design. These two points are of interest on their own, but they are also critical to understanding a serious flaw in an argument of Matheron (1971) purporting to demonstrate that statistical inference is “impossible” for differentiable random fields (see 6.3).
Michael L. Stein

### 6. Predicting With Estimated Parameters

Abstract
Chapters 3 and 4 examined the behavior of pseudo-BLPs. Although the results given there provide an understanding of how linear predictors depend on the spectral density of a stationary random field, they do not directly address the more practically pertinent problem of prediction when parameters of a model must be estimated from the same data that are available for prediction. The reason I have avoided prediction with estimated parameters until now is that it is very hard to obtain rigorous results for this problem. The basic difficulty is that once we have to estimate any parameters of the covariance structure, “linear” predictors based on these estimates are no longer actually linear since the coefficients of the predictors depend on the data.
Michael L. Stein

### Backmatter

Further Information