1989 | Original Paper | Book Chapter
Recursive Linear Estimation (Bayesian Estimation)
Author: Donald E. Catlin
Published in: Estimation, Control, and the Discrete Kalman Filter
Publisher: Springer New York
Included in: Professional Book Archive
In the last chapter, we developed formulas enabling us to calculate the linear minimum variance estimator of X based on knowledge of a random vector Y. We also derived an expression for the so-called covariance of the estimation error. Specifically, our output was a random variable $$\hat X$$ representing an estimator of X, and a matrix P defined by
$$P \triangleq E\left( \left( X - \hat X \right)\left( X - \hat X \right)^T \right). \tag{6.1-1}$$
Both of these outputs were based on knowledge of the means μx and μy, the covariances of X and Y, and their cross-covariance, cov(X, Y). Of course, as discussed in Chapter 2, one generally would not have knowledge of Y itself. Instead, one would have knowledge of some realization of Y, that is, Y(ω) for some ω. The real “output” is, therefore, an estimating function (the g of Chapter 2) and a covariance matrix.
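To make this concrete, the estimating function g and the error covariance P can be sketched in a few lines. This is a hedged illustration, not the book's own code: it assumes the standard linear minimum-variance (LMMSE) formulas that the previous chapter refers to, namely $$\hat X = \mu_x + C_{xy} C_{yy}^{-1}(Y - \mu_y)$$ and $$P = C_{xx} - C_{xy} C_{yy}^{-1} C_{xy}^T$$; the function name `lmmse_estimate` and the example numbers are invented for illustration.

```python
import numpy as np

def lmmse_estimate(y, mu_x, mu_y, Cxx, Cxy, Cyy):
    """Linear minimum-variance estimate of X given a realization y of Y.

    Implements the standard LMMSE formulas (assumed here, since the
    chapter's earlier equations are not reproduced in this excerpt):
        x_hat = mu_x + Cxy Cyy^{-1} (y - mu_y)   -- the estimating function g
        P     = Cxx - Cxy Cyy^{-1} Cxy^T         -- the error covariance (6.1-1)
    """
    K = Cxy @ np.linalg.inv(Cyy)       # gain matrix Cxy Cyy^{-1}
    x_hat = mu_x + K @ (y - mu_y)      # g evaluated at the realization y = Y(omega)
    P = Cxx - K @ Cxy.T                # covariance of the estimation error X - x_hat
    return x_hat, P

# Illustrative numbers (hypothetical): a 2-vector X estimated from a scalar Y.
mu_x = np.array([0.0, 0.0])
mu_y = np.array([1.0])
Cxx = np.eye(2)
Cxy = np.array([[0.5], [0.0]])
Cyy = np.array([[2.0]])
y_obs = np.array([3.0])                # one realization Y(omega)

x_hat, P = lmmse_estimate(y_obs, mu_x, mu_y, Cxx, Cxy, Cyy)
```

Note that the pair (g, P) depends only on the means and (cross-)covariances, exactly as the text states: the realization y enters g, but P is fixed before any observation is made.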