1989 | OriginalPaper | Chapter

Recursive Linear Estimation (Bayesian Estimation)

Author: Donald E. Catlin

Published in: Estimation, Control, and the Discrete Kalman Filter

Publisher: Springer New York

In the last chapter, we developed formulas enabling us to calculate the linear minimum variance estimator of X based on knowledge of a random vector Y. Moreover, we also calculated an expression for the covariance of the estimation error. Specifically, our output was a random variable $$\hat X$$ representing an estimator of X, and a matrix P defined by

$$P \triangleq E\left( \left( X - \hat X \right)\left( X - \hat X \right)^T \right). \tag{6.1-1}$$

Both of these outputs were based on knowledge of the means $$\mu_x$$ and $$\mu_y$$, the covariances of X and Y, and their cross-covariance, cov(X, Y). Of course, as discussed in Chapter 2, one generally would not have knowledge of Y itself; instead, one would have knowledge of some realization of Y, that is, Y(ω) for some ω. The real "output" is, therefore, an estimating function (the g of Chapter 2) and a covariance matrix.
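
To make the two outputs concrete, here is a minimal numerical sketch in Python (not the book's notation; `lmmse_estimate` and its argument names are hypothetical) using the standard linear minimum variance formulas, which express the estimating function g and the error covariance P of (6.1-1) in terms of the means, covariances, and cross-covariance discussed above:

```python
import numpy as np

def lmmse_estimate(mu_x, mu_y, cov_x, cov_y, cov_xy, y):
    """Linear minimum variance estimate of X given a realization y of Y.

    Uses the standard closed-form expressions (assumed equivalent to the
    previous chapter's results), with cov_y assumed invertible:
        x_hat = mu_x + cov_xy cov_y^{-1} (y - mu_y)
        P     = cov_x - cov_xy cov_y^{-1} cov_xy^T
    """
    gain = cov_xy @ np.linalg.inv(cov_y)   # estimator "gain" matrix
    x_hat = mu_x + gain @ (y - mu_y)       # the estimating function g(y)
    P = cov_x - gain @ cov_xy.T            # error covariance, as in (6.1-1)
    return x_hat, P

# Hypothetical scalar example: Y = X + noise with var(X) = 4, var(noise) = 1,
# so var(Y) = 5 and cov(X, Y) = 4.  Observing y = 2 yields x_hat = 1.6, P = 0.8.
x_hat, P = lmmse_estimate(
    mu_x=np.array([0.0]), mu_y=np.array([0.0]),
    cov_x=np.array([[4.0]]), cov_y=np.array([[5.0]]),
    cov_xy=np.array([[4.0]]), y=np.array([2.0]),
)
```

Note that P does not depend on the particular realization y, only on the covariances; this is what lets the error covariance be reported alongside the estimating function as a second, data-independent output.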

Metadata
Title: Recursive Linear Estimation (Bayesian Estimation)
Author: Donald E. Catlin
Copyright Year: 1989
Publisher: Springer New York
DOI: https://doi.org/10.1007/978-1-4612-4528-5_6