
About this book

In 1960, R. E. Kalman published his celebrated paper on recursive minimum variance estimation in dynamical systems [14]. This paper, which introduced an algorithm that has since been known as the discrete Kalman filter, produced a virtual revolution in the field of systems engineering. Today, Kalman filters are used in such diverse areas as navigation, guidance, oil drilling, water and air quality, and geodetic surveys. In addition, Kalman's work led to a multitude of books and papers on minimum variance estimation in dynamical systems, including one by Kalman and Bucy on continuous-time systems [15]. Most of this work was done outside of the mathematics and statistics communities and, in the spirit of true academic parochialism, was, with a few notable exceptions, ignored by them. This text is my effort toward closing that chasm. For mathematics students, the Kalman filtering theorem is a beautiful illustration of functional analysis in action: Hilbert spaces used to solve an extremely important problem in applied mathematics. For statistics students, the Kalman filter is a vivid example of Bayesian statistics in action. The present text grew out of a series of graduate courses I gave over the past decade, most of them at the University of Massachusetts at Amherst.

Table of contents

Frontmatter

1. Basic Probability

Abstract
We begin by introducing the notion of a probability measure, and then we will discuss the intuitive interpretation of this definition.
Donald E. Catlin

2. Minimum Variance Estimation—How the Theory Fits

Abstract
Other than intellectual curiosity, we can think of only two reasons why a scientist or engineer might study mathematics. The first is to obtain procedures or algorithms to solve a problem or class of problems. The second is to clearly and precisely conceptualize an idea; to capture its essence. At first glance, the former may appear to be the more important of the two. After all, aren’t solutions to problems what we are really after? Yes, indeed! But the two reasons above are not really dichotomous. Certainly, the history of probability and statistics shows that viable solutions to many problems were not forthcoming until people like Borel and Kolmogorov laid the proper foundations so that others could technically formulate their problems and rigorously check them. For example, it is hard to imagine formulating and proving the Kalman theorem with the probability theory of 1850. It would be misleading, however, to suggest that the only reason for precisely formulating ideas is to obtain solutions to problems. There are other, very practical, reasons for doing so. Let us briefly explore this.
Donald E. Catlin

3. The Maximum Entropy Principle

Abstract
In Section 1.1, we alluded to the principle that titles this chapter. The idea of this principle, as we said there, is to assign probabilities in such a way that the resulting distribution contains no more information than is inherent in the data. The first attempt to do this was by Laplace and was called the “Principle of Insufficient Reason.” This principle said that two events should be assigned the same probability if there is no reason to do otherwise. This appears reasonable and is fine as far as it goes. However, Jaynes [12] said, “except in cases where there is an evident element of symmetry that clearly renders the events ‘equally possible,’ this assumption may appear just as arbitrary as any other that might be made.” What is needed, of course, is a way to uniquely quantify the “amount of uncertainty” represented by a probability distribution. As mentioned in Section 1.1, this was done in 1948 by C. E. Shannon in a paper entitled A Mathematical Theory of Communication [25]. Shannon was specifically interested in the problem of sending data through a noisy, discrete transmission channel. However, his results, most notably a rigorous treatment of entropy, have had far-reaching consequences in the theory of probability and statistics. We will not even attempt to summarize the broad effects his work has had, nor reference the many papers that have since been written on the subject; our purpose is to quickly develop the notion of entropy and get on with the estimation problem of the last chapter. For more information on the subject of entropy, the reader is referred to the text by Ellis [4].
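The entropy Shannon introduced can be computed directly, and a small numerical check illustrates the claim that a uniform distribution carries the greatest uncertainty. The sketch below (not from the text; the function name and the sample distributions are my own, and natural logarithms are used, though any base works up to a constant factor) makes the point:

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum p_i log p_i, with the convention 0 log 0 = 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Among all distributions on n outcomes, the uniform one maximizes entropy;
# this is the distribution the maximum entropy principle selects when the
# data impose no further constraints.
uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
assert entropy(uniform) > entropy(skewed)
assert abs(entropy(uniform) - math.log(4)) < 1e-12  # H attains log n
```

The second assertion verifies the known maximum \(H = \log n\) for the uniform distribution on n outcomes.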
Donald E. Catlin

4. Adjoints, Projections, Pseudoinverses

Abstract
This chapter is concerned with developing some technical theorems in mathematics that will be used in the remainder of the text.
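To preview why pseudoinverses matter for estimation, here is a minimal numerical sketch (not from the text; the matrices are hypothetical, and NumPy is assumed) showing that the Moore-Penrose pseudoinverse of an overdetermined system delivers the least-squares solution:

```python
import numpy as np

# Hypothetical overdetermined system: A x = b has no exact solution,
# but x = pinv(A) @ b is the minimum-norm least-squares solution.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

x = np.linalg.pinv(A) @ b

# The same solution via the normal equations (A^T A) x = A^T b,
# valid here because A has full column rank.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x, x_normal)
```

When A loses full column rank the normal equations become singular, while the pseudoinverse still returns a well-defined (minimum-norm) answer; that robustness is one reason the chapter develops it carefully.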
Donald E. Catlin

5. Linear Minimum Variance Estimation

Abstract
After wading through all the technical details of Chapter 4, it is probably wise to refresh ourselves in terms of minimum variance estimation. At the end of Chapter 3, our status could be described as follows.
Donald E. Catlin

6. Recursive Linear Estimation (Bayesian Estimation)

Abstract
In the last chapter, we developed formulas enabling us to calculate the linear minimum variance estimator of X based on knowledge of a random vector Y. Moreover, we also calculated an expression for the so-called covariance of the estimation error. Specifically, then, our output was a random variable \(\hat X\) representing an estimator of X, and a matrix P defined by
$$P \triangleq E\left( {\left( {X - \hat X} \right)\left( {X - \hat X} \right)^T } \right).$$
(6.1-1)
Both of these outputs were based on knowledge of the means \(\mu _x\) and \(\mu _y\), the covariances of X and Y, and their cross-covariance, cov(X, Y). Of course, as discussed in Chapter 2, one generally would not have knowledge of Y. Instead, one would have knowledge of some realization of Y, that is, Y(ω) for some ω. The real “output” is, therefore, an estimating function (the g of Chapter 2) and a covariance matrix.
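The linear minimum variance estimator built from these ingredients can be sketched numerically. The function below is my own illustration (not the book's notation; NumPy is assumed, and the scalar numbers in the example are hypothetical), implementing the standard formulas \(\hat x = \mu _x + \Sigma _{xy}\Sigma _{yy}^{-1}(y - \mu _y)\) and the error covariance of (6.1-1):

```python
import numpy as np

def linear_mmse(mu_x, mu_y, Sxx, Sxy, Syy, y):
    """Linear minimum variance estimate of X given a realization y of Y,
    together with the error covariance P of Eq. (6.1-1)."""
    K = Sxy @ np.linalg.inv(Syy)      # gain matrix
    x_hat = mu_x + K @ (y - mu_y)     # estimate from the realization y
    P = Sxx - K @ Sxy.T               # error covariance E[(X - x_hat)(X - x_hat)^T]
    return x_hat, P

# Hypothetical scalar illustration: Y = X + V with X, V independent,
# var(X) = 2, var(V) = 1, so Syy = 3 and cov(X, Y) = 2.
x_hat, P = linear_mmse(np.array([0.0]), np.array([0.0]),
                       np.array([[2.0]]), np.array([[2.0]]),
                       np.array([[3.0]]), np.array([[1.5]]))
assert np.isclose(x_hat[0], 1.0) and np.isclose(P[0, 0], 2.0 / 3.0)
```

Note that P does not depend on the realization y, which is why the chapter treats the estimating function and the covariance matrix as separate outputs.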
Donald E. Catlin

7. The Discrete Kalman Filter

Abstract
In Chapter 6, we discussed the problem of making recursive estimates of a random vector X. The problem was static in the sense that every measurement was used to update or improve the estimate of the same random vector X. We now consider the case where the random vector changes in time, between measurements, according to a specified statistical dynamic.
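The predict/update structure this chapter derives can be previewed in a few lines. The sketch below is my own minimal rendering of one cycle of the discrete Kalman filter (NumPy is assumed; the function name and variable names are not the book's), for dynamics \(x_{k+1} = F x_k + w_k\) with cov(w) = Q and measurements \(y_k = H x_k + v_k\) with cov(v) = R:

```python
import numpy as np

def kalman_step(x, P, y, F, Q, H, R):
    """One predict/update cycle of the discrete Kalman filter."""
    # Predict: propagate the estimate and covariance through the dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the new measurement.
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the static setting of Chapter 6, F is the identity and Q is zero; the new ingredient here is that the state moves between measurements according to the specified statistical dynamic.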
Donald E. Catlin

8. The Linear Quadratic Tracking Problem

Abstract
In this chapter we will study a very important problem in the field of optimal control theory. In general, control theory is concerned with using the measurements in a dynamical system to “control” the state vector x. The precise meaning of the word control will be made clear as we proceed. The word “quadratic” in the title of this chapter refers to a particular class of control problems that use a quadratic form to measure the performance of a system. The reason we choose this particular performance index is that in the stochastic case it leads to a tractable solution. For reasons that will be clear later, we begin with the deterministic problem.
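The tractability of the quadratic performance index shows up concretely in the deterministic case: the optimal control is a linear feedback law whose gains come from a backward Riccati recursion. The sketch below is my own illustration of that machinery for the regulator case (zero reference trajectory); NumPy is assumed, and the names are not the book's. It minimizes \(\sum_{k=0}^{N-1}(x_k^T Q x_k + u_k^T R u_k) + x_N^T Q x_N\) subject to \(x_{k+1} = A x_k + B u_k\):

```python
import numpy as np

def lqr_gains(A, B, Q, R, N):
    """Backward Riccati recursion for the finite-horizon LQ regulator.
    Returns gains such that the optimal control is u_k = -gains[k] @ x_k."""
    S = Q.copy()                  # cost-to-go weight, initialized at the terminal cost
    gains = []
    for _ in range(N):
        # Gain solving (R + B^T S B) K = B^T S A at this stage.
        K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        S = Q + A.T @ S @ (A - B @ K)
        gains.append(K)
    gains.reverse()               # gains[k] applies at stage k
    return gains
```

The quadratic form is what makes each stage reduce to a linear solve; a non-quadratic index would generally destroy this closed-form backward sweep, which is the reason given above for choosing it.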
Donald E. Catlin

9. Fixed Interval Smoothing

Abstract
We now come to the final topic in this text, fixed interval smoothing. Just as with other topics in past chapters, this is a problem that is easy to state and very complex to solve.
Donald E. Catlin

Backmatter

Additional information