1989 | OriginalPaper | Chapter
Recursive Linear Estimation (Bayesian Estimation)
Author: Donald E. Catlin
Published in: Estimation, Control, and the Discrete Kalman Filter
Publisher: Springer New York
Included in: Professional Book Archive
In the last chapter, we developed formulas enabling us to calculate the linear minimum variance estimator of X based on knowledge of a random vector Y. We also calculated an expression for the so-called covariance of the estimation error. Specifically, then, our outputs were a random variable $\hat X$ representing an estimator of X, and a matrix P defined by
$$P \triangleq E\left( (X - \hat X)(X - \hat X)^T \right). \tag{6.1-1}$$
Both of these outputs were based on knowledge of the means $\mu_X$ and $\mu_Y$, the covariances of X and Y, and their cross-covariance cov(X, Y). Of course, as discussed in Chapter 2, one generally would not have knowledge of Y itself. Instead, one would have knowledge of some realization of Y, that is, Y(ω) for some ω. The real “output” is, therefore, an estimating function (the g of Chapter 2) and a covariance matrix.
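For concreteness, here is a minimal NumPy sketch of how such an estimating function g and the error covariance P of (6.1-1) are typically computed. The explicit formulas used below are the standard linear minimum variance (Gauss-Markov) expressions, assumed to match the results of the preceding chapter; the function and variable names are illustrative, not taken from the book.

```python
import numpy as np

def linear_mv_estimate(mu_x, mu_y, cov_x, cov_xy, cov_y, y_obs):
    """Linear minimum variance estimate of X from a realization y_obs = Y(omega).

    Assumes the standard linear MMSE formulas:
        g(y) = mu_x + C_xy C_y^{-1} (y - mu_y)
        P    = C_x  - C_xy C_y^{-1} C_xy^T        (cf. (6.1-1))
    """
    gain = cov_xy @ np.linalg.inv(cov_y)   # maps deviations in y-space to x-space
    x_hat = mu_x + gain @ (y_obs - mu_y)   # the estimating function g, applied to Y(omega)
    P = cov_x - gain @ cov_xy.T            # covariance of the estimation error X - x_hat
    return x_hat, P

# Example (hypothetical numbers): scalar X observed through Y = X + V, var(V) = 0.5.
mu_x = np.array([0.0])
mu_y = np.array([0.0])
cov_x = np.array([[1.0]])        # var(X)
cov_xy = np.array([[1.0]])       # cov(X, Y) = var(X), since Y = X + V with V independent
cov_y = np.array([[1.5]])        # var(Y) = var(X) + var(V)
x_hat, P = linear_mv_estimate(mu_x, mu_y, cov_x, cov_xy, cov_y, np.array([1.2]))
print(x_hat, P)                  # estimate 0.8: the observation pulled toward the prior mean
```

Note that the code only ever consumes the realization Y(ω) (here `y_obs`), never the random vector Y itself, which mirrors the chapter's point: what one actually delivers is the function g and the matrix P.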