
1989 | Original Paper | Book Chapter

Recursive Linear Estimation (Bayesian Estimation)

Author: Donald E. Catlin

Published in: Estimation, Control, and the Discrete Kalman Filter

Publisher: Springer New York


In the last chapter, we developed formulas enabling us to calculate the linear minimum variance estimator of X based on knowledge of a random vector Y. We also derived an expression for the so-called covariance of the estimation error. Specifically, our output was a random variable $$\hat X$$ representing an estimator of X, and a matrix P defined by $$P \triangleq E\left( {\left( {X - \hat X} \right)\left( {X - \hat X} \right)^T } \right). \tag{6.1-1}$$ Both of these outputs were based on knowledge of the means μx and μy, the covariances of X and Y, and their cross-covariance, cov(X, Y). Of course, as discussed in Chapter 2, one generally would not have knowledge of Y itself. Instead, one would have knowledge of some realization of Y, that is, Y(ω) for some ω. The real “output” is, therefore, an estimating function (the g of Chapter 2) and a covariance matrix.
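To make the setup concrete, here is a minimal NumPy sketch of the estimating function g and the error covariance P described above, using the standard linear minimum variance formulas (the function and argument names are mine, and I assume cov(Y) is invertible; the text's general treatment may instead use a pseudoinverse):

```python
import numpy as np

def lmv_estimate(y, mu_x, mu_y, P_xx, P_xy, P_yy):
    """Linear minimum variance estimate of X from a realization y of Y.

    Estimating function:  x_hat = mu_x + P_xy P_yy^{-1} (y - mu_y)
    Error covariance:     P     = P_xx - P_xy P_yy^{-1} P_xy^T
    (assumes P_yy is invertible)
    """
    K = P_xy @ np.linalg.inv(P_yy)      # "gain" applied to the measurement residual
    x_hat = mu_x + K @ (y - mu_y)       # estimate based on the realization y = Y(omega)
    P = P_xx - K @ P_xy.T               # covariance of the estimation error X - x_hat
    return x_hat, P
```

Note that P does not depend on the particular realization y; it can be computed in advance from the means and covariances alone, which is what makes a recursive formulation possible.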

Metadata
DOI
