Automatica

Volume 45, Issue 7, July 2009, Pages 1686-1693

Brief paper
Adaptive divided difference filtering for simultaneous state and parameter estimation

https://doi.org/10.1016/j.automatica.2009.02.029

Abstract

A novel adaptive version of the divided difference filter (DDF) applicable to non-linear systems with a linear output equation is presented in this work. In order to make the filter robust to modeling errors, upper bounds on the state covariance matrix are derived. The parameters of this upper bound are then estimated using a combination of offline tuning and online optimization with a linear matrix inequality (LMI) constraint, which ensures that the predicted output error covariance is larger than the observed output error covariance. The resulting sub-optimal, high-gain filter is applied to the problem of joint state and parameter estimation. Simulation results demonstrate the superior performance of the proposed filter as compared to the standard DDF.

Introduction

Observer design for non-linear discrete-time systems is a challenging problem which has received increasing attention in recent years because of its wide scope of application. Although the Kalman Filter (KF) solved the problem of optimal state filtering for linear models with Gaussian noise, no such solution is available for non-linear systems. Instead, a number of estimators may be found in the literature which are essentially extensions of the KF based on approximations of the non-linear system equations. The most prominent and widely used among these is the Extended Kalman Filter (EKF) (Boutayeb and Aubry, 1999, Ljung, 1979, Reif et al., 1999), which is obtained by a first-order linearization of the system equations so that the traditional KF can be applied at each step. Extensive use of the EKF over the past couple of decades has shown that in several important practical problems the first-order approximation provides insufficient accuracy, resulting in significant bias or even convergence problems (Simon, 2006). Moreover, the derivation of the Jacobian matrix, necessary for the first-order approximation, is non-trivial in many applications.

Recently, efficient derivative-free filtering techniques, which do not require evaluation of the Jacobian matrix, have been proposed. These filters have a computational complexity similar to the EKF but are much simpler to implement. Schei (1997) uses a first-order approximation of a non-linear transformation based on divided differences for the propagation of the mean and covariance of the state distribution. Quine (2006) uses a minimal ensemble set of state vectors to propagate the first two moments of the state distribution, obtaining a filter which is shown to be equivalent to the EKF in a limiting case. More accurate filters, termed Sigma Point Kalman Filters (SPKFs) (van der Merwe et al., 2004), which use second or higher order approximations of the non-linear transformation, have also been proposed. These include the Unscented Kalman Filter (UKF), which is based on the unscented transform, a technique for propagating the mean and covariance of a distribution through a non-linear transformation (Julier & Uhlmann, 2004), and the divided difference filter (DDF), which is based on a second-order approximation using Stirling's interpolation formula (Norgaard, Poulsen, & Ravn, 2000). This work makes use of the DDF and its view of second-order non-linear stochastic filtering as the propagation of a Gaussian distribution through a second-order approximation of the non-linearity based on Stirling's interpolation formula.

Even in the case of linear systems with Gaussian noise, a KF can experience divergence when the system model is not exact. Hence, a significant body of research has been devoted to compensating for modeling errors using techniques such as adaptive noise estimation (Mehra, 1972, Myers and Tapley, 1976) and adaptive KFs with fading memory (Liang et al., 2004, Ozbek and Efe, 2004, Xia et al., 1994, Zhou and Frank, 1998). These methods may also be applied to non-linear models through the EKF by treating the residual non-linear terms as disturbances, but this implies some modeling error even when the non-linear model itself is exact.

The effect of modeling error has been explicitly considered in some works as an additional bounded disturbance term. Finite-horizon robust filters have been proposed to account for the effect of norm-bounded uncertainties (Fu et al., 2001, Yang et al., 2002, Yoon et al., 2004). H∞ (Mahmoud, Xie, & Soh, 2000) and H2/H∞ (Duan et al., 2006, Geromel and Oliveira, 2001, Xie et al., 2004) filtering techniques have also been proposed to attenuate the effects of convex bounded uncertainties. An upper bound for the covariance matrix was derived and used by Elanayar and Shin (1994) to design a sub-optimal filter that is robust to convex bounded modeling error.

These methods have proved successful at improving the robustness of the KF and its extensions, but all the above-mentioned techniques consider only time-varying linear approximations of the non-linear system equations to propagate the covariance matrix. This way of handling non-linear models has drawbacks when there are significant non-linearities in the system equations, since lumping the residual non-linearity into a disturbance term discards a significant amount of the information contained in the model. In such a scenario, the upper bounds of the covariance matrix estimated by many of these robust filters may be too conservative, resulting in a loss of filter accuracy. Therefore, there is a need to develop adaptive versions of second-order filters (such as the UKF and DDF) which can better exploit the information in the non-linear system equations while simultaneously being robust to modeling error. Some effort has been made in this direction, such as adaptive noise estimation based sigma point filtering (Lee & Alfriend, 2004), but an explicit analysis of the effects of modeling and approximation errors on the predicted state covariance has not been reported.

This paper addresses these issues and is organized as follows. A brief introduction to the DDF is given first. This is followed by an analysis of the effects of modeling error on the propagation of covariance in the DDF, along with the derivation of an upper bound for the state covariance matrix that accounts for these errors. The use of an upper bound for the state covariance matrix incorporates knowledge about the inaccuracy of the approximate model and makes the filtering process more robust by trading off trust in the model prediction against high-gain feedback to correct errors. Next, the development of an adaptive divided difference filter (ADDF) is presented for non-linear systems with a linear output equation. Based on the structure of the derived upper bound, we justify the use of an adaptive fading-memory KF-like algorithm to determine one of its parameters while tuning the other parameters offline. Online parameter tuning is done by constraining the upper bound of the state covariance to satisfy a linear matrix inequality (LMI) such that the predicted output covariance matrix is an upper bound for the observed output covariance matrix. Finally, an example is given to show the superiority of the proposed ADDF over the standard DDF for the problem of joint state, parameter and fault estimation, followed by the conclusions.
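
To make the LMI constraint concrete, the sketch below checks whether a candidate covariance bound satisfies the dominance condition, i.e., whether the predicted output error covariance H P H^T + R is an upper bound, in the positive semidefinite sense, of the observed output error covariance. This is only an illustrative check under stated assumptions; the function name, the matrix C_obs and the tolerance argument are not from the paper, and the ADDF itself chooses the bound parameters by online optimization subject to this type of constraint.

```python
import numpy as np

def output_bound_holds(P_bound, H, R, C_obs, tol=0.0):
    """Check the dominance condition behind the LMI constraint: the
    predicted output error covariance H P_bound H^T + R should dominate
    (in the positive semidefinite sense) the observed output error
    covariance C_obs.  Illustrative sketch; names are not from the paper."""
    C_pred = H @ P_bound @ H.T + R          # predicted output error covariance
    D = C_pred - C_obs                      # must be positive semidefinite
    # PSD test via the smallest eigenvalue of the symmetrized difference
    return float(np.min(np.linalg.eigvalsh((D + D.T) / 2.0))) >= -tol
```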

Section snippets

Divided difference filter

This section gives a brief summary of the DDF. The interested reader is referred to Norgaard et al. (2000) for a detailed description and analysis of the DDF. The class of systems considered here is given below. x_{k+1} = f_m(x_k, u_k) + w_k, y_k = H x_k + v_k, where x_k ∈ ℝ^n is the state vector, y_k ∈ ℝ^p is the output vector, u_k ∈ ℝ^m is the input vector and f_m(x_k, u_k) is the non-linear plant model. Assume that f_m is either a globally Lipschitz continuous function or that it is locally Lipschitz continuous with x_k restricted
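
As a rough illustration of one filtering cycle for this system class, the sketch below propagates the state mean and covariance with first-order central divided differences along the Cholesky directions of the covariance (with the usual interval h^2 = 3 for Gaussian priors) and, since the output equation is linear, applies the ordinary Kalman correction. This is a simplified first-order sketch, not the paper's full second-order DDF, and all function and variable names are illustrative.

```python
import numpy as np

def ddf_step(f_m, H, Q, R, x, P, u, y, h=np.sqrt(3.0)):
    """One simplified DDF cycle for x_{k+1} = f_m(x_k, u_k) + w_k,
    y_k = H x_k + v_k.  The time update uses first-order central divided
    differences; the second-order DDF adds further correction terms."""
    n = x.size
    Sx = np.linalg.cholesky(P)                  # P = Sx Sx^T
    # --- time update: divided-difference propagation of mean and covariance ---
    x_pred = f_m(x, u)
    Sxx = np.column_stack([
        (f_m(x + h * Sx[:, j], u) - f_m(x - h * Sx[:, j], u)) / (2.0 * h)
        for j in range(n)
    ])
    P_pred = Sxx @ Sxx.T + Q
    # --- measurement update: linear output, so the ordinary KF correction applies ---
    S_y = H @ P_pred @ H.T + R                  # predicted output covariance
    K = P_pred @ H.T @ np.linalg.inv(S_y)       # filter gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = P_pred - K @ H @ P_pred
    return x_new, P_new
```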

Adaptive divided difference filter

While propagating the covariance matrix through the state equation, the DDF assumes that the modeling errors as well as the approximation errors are negligible, but these assumptions may not hold in general. Therefore, in order to make the filter more robust, it is essential to ensure that the filter does not place undue confidence in the model prediction. This can be done by using an upper bound for the state covariance matrix, which increases the uncertainty in the model prediction by taking
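
A common way to realize such an upper bound, in the spirit of fading-memory filtering, is to inflate the predicted covariance by a scalar factor chosen from recent innovations. The trace-ratio rule sketched below is only an illustrative simplification under that assumption; the ADDF instead uses a parameterized bound with offline-tuned parameters and an online LMI-constrained optimization, and the names here are not from the paper.

```python
import numpy as np

def inflate_covariance(P_pred, H, R, innovations, window=20):
    """Scale the predicted state covariance by lam >= 1 so that, in trace,
    the predicted output covariance H (lam * P_pred) H^T + R matches the
    second moment of recent innovations.  Fading-memory style sketch only."""
    e = np.asarray(innovations)[-window:]      # recent output errors, shape (m, p)
    C_obs = e.T @ e / e.shape[0]               # observed output error covariance
    num = np.trace(C_obs - R)
    den = np.trace(H @ P_pred @ H.T)
    lam = max(1.0, num / den) if den > 0 else 1.0
    return lam * P_pred                        # inflated (upper-bound) covariance
```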

Example application

One of the areas where second-order filters have enjoyed considerable success is joint state and parameter estimation using an augmented state vector, which consists of both the system states and the parameters. Let θ be the vector of time-varying parameters. Since usually nothing is known about the variation of these parameters, a state transition equation of the form θ(k+1) = θ(k) + w_k is assumed, i.e., the parameters are modeled as nominally constant. This is obviously wrong if the parameters
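
A minimal sketch of this augmentation is shown below: the parameter vector is appended to the state and propagated by the random-walk model above, so a single filter estimates both. The helper name and signature are illustrative, not from the paper; f_m is assumed here to take the parameters as an extra argument.

```python
import numpy as np

def make_augmented_model(f_m, n_x):
    """Build the transition map for the augmented state z = [x; theta],
    where x follows the plant model and theta follows the random walk
    theta_{k+1} = theta_k + w_k (the noise enters through the filter's
    process-noise covariance, not here).  Illustrative sketch."""
    def f_aug(z, u):
        x, theta = z[:n_x], z[n_x:]
        x_next = f_m(x, u, theta)              # states evolve through the plant model
        return np.concatenate([x_next, theta]) # parameters held constant between updates
    return f_aug

# The augmented process-noise covariance is block diagonal,
# Q_aug = blkdiag(Q_x, Q_theta); Q_theta controls how quickly the
# parameter estimates are allowed to drift.
```

The augmented model can then be passed to the same (A)DDF cycle used for the states alone.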

Conclusions

In this paper, an upper bound was derived for the state covariance matrix in a DDF by considering the modeling errors as well as errors introduced during the approximation of a non-linear state transition equation. A parametric structure was derived from this upper bound for the modified a priori state covariance estimate to reflect the effect of these errors. This parametric form allows for the addition of uncertainty to the state estimate in three different ways with optimal parameters


References (27)

  • S. Elanayar V.T. et al., Radial basis function neural network for approximation and estimation of nonlinear stochastic dynamic systems, IEEE Transactions on Neural Networks (1994)
  • M. Fu et al., Finite-horizon robust Kalman filter design, IEEE Transactions on Signal Processing (2001)
  • J.C. Geromel et al., H2 and H∞ robust filtering for convex bounded uncertain systems, IEEE Transactions on Automatic Control (2001)

    Niranjan Subrahmanya received his Ph.D. degree from Purdue University, Indiana, USA, in May, 2009. He is currently working with the Complex Systems Science group in Exxon Mobil Research and Engineering. His research interests include intelligent systems, dynamics and control, data-based systems modeling, monitoring and diagnostics and machine learning.

    Yung C. Shin received his Ph.D. in mechanical engineering from University of Wisconsin in Madison in 1984, and is currently a professor of mechanical engineering at Purdue University. He worked as a senior project engineer at the GM Technical Center from 1984 to 1988 and as a faculty member at Pennsylvania State University from 1988 to 1990, prior to joining Purdue University in 1990. His research areas include intelligent and adaptive control, process monitoring and diagnostics, laser processing of materials, high speed machining, process modeling and simulation. He has published over 200 papers in archived journals, refereed conference proceedings, authored chapters in several engineering handbooks and co-edited two books. He is also a co-author of Intelligent Systems: Modeling, Optimization and Control (CRC Press, 2008). He has organized or co-organized many conferences and symposia in his areas of research, including the first and second “Artificial Neural Networks in Engineering Conference”, and “Symposium on Neural Networks in Manufacturing and Robotics” and “Symposium on Intelligent Design and Manufacturing” at the ASME IMECE.

    This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor Derong Liu under the direction of Editor Miroslav Krstic.
