
Journal of Process Control

Volume 24, Issue 9, September 2014, Pages 1337-1345

Subspace identification with non-steady Kalman filter parameterization

https://doi.org/10.1016/j.jprocont.2014.05.016

Highlights

  • We analyze the deficiency of subspace identification schemes with finite data.

  • We propose a non-steady Kalman filter based subspace identification method.

  • We show that the proposed method is superior for finite data horizons.

  • We demonstrate that the proposed method is insensitive to horizon lengths.

Abstract

Most existing subspace identification methods use the steady-state Kalman filter (SKF) in their parameterization; hence, infinite data horizons are implicitly assumed so that the Kalman gain can reach steady state. However, an infinite horizon requires collecting infinite data, which is unrealistic in practice. In this paper, a subspace framework with non-steady state Kalman filter (NKF) parameterization is established to provide an exact parameterization for finite data horizon identification problems. Based on this framework, we propose a novel subspace identification method with NKF parameterization that can handle closed-loop data and avoids assumptions of infinite horizons. It is shown that, with finite data, the proposed parameterization method provides more accurate and consistent solutions than existing SKF based methods. The paper also reveals why it is often beneficial in practice to estimate a bank of ARX models rather than a single ARX model.

Introduction

Over the last forty years, subspace identification methods (SIMs), based on projection techniques in Euclidean space, have become a dominant research stream in system identification. Originating in the Ho–Kalman paper [1], subspace identification went through the development of stochastic realization theory [2] and the solution of the combined deterministic–stochastic realization problem [3], [4], [5], [6], [7], [8]. Several representative algorithms have been developed for open-loop identification during the last twenty years, including canonical variate analysis (CVA, [5]), the numerical algorithm for subspace state space system identification (N4SID, [9]) and multivariable output-error state space (MOESP, [7]). The unifying theorem by Van Overschee and De Moor [10] provides a framework in which these algorithms can be interpreted as singular value decompositions with various weighting matrices.

However, the aforementioned SIMs are biased for closed-loop identification unless special treatments are introduced. Closed-loop identification is of special interest for engineering applications: due to safety and quality requirements, identification experiments are preferably carried out under closed-loop conditions. Unlike prediction error methods (PEMs), the open-loop SIMs (e.g., CVA, N4SID and MOESP) are biased under closed-loop conditions because of the correlation between the noise and the control input. To solve the closed-loop identification problem, several closed-loop subspace identification methods were proposed during the last decade [11], [12], [13], with more recent developments presented in [14], [15], [16], [17], [18], [19]. The OKID method [14] estimates the Markov parameters from a high-order ARX (HOARX) model and forms the Hankel matrix, from which the system matrices are recovered via ERA [20]. The SSARX method [15] uses the pre-estimated Markov parameters to construct G¯f so that the effect of the future input is removed from the output. Qin and Ljung [19] developed the innovation estimation method, which pre-estimates Ef by recursive regression on the ARMAX model. In the whitening filter approach (WFA), Chiuso and Picci [18] used row-wise regression on the predictor form to avoid the correlation between input and noise. Wang and Qin [16] proposed parity space and principal component analysis (PCA) for errors-in-variables closed-loop identification. Chiuso and Picci [18] provided a theoretical consistency analysis of these methods and pointed out the bias caused by regression on finite horizons. Qin et al. [21] proposed a progressive parameterization framework to interpret the procedures of closed-loop SIMs.

A general closed-loop subspace identification algorithm usually goes through the following steps (a minimal numerical sketch of the first two steps is given after the list):

  • 1.

    Pre-estimation: obtain Markov parameter estimates from a HOARX model, which is non-parametric and has its own applications in practice (for example, in dynamic matrix control [22]).

  • 2.

    Model reduction: perform SVD on the Hankel matrix or a weighted Hankel matrix, leading to estimates of the extended observability matrix Γ¯f and the extended controllability matrix L¯p, which are themselves reduced-order non-parametric models.

  • 3.

    System matrix identification: recover the system matrices A, B, C, K from the extended observability and controllability matrices or directly from the estimated state sequence.
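A minimal numerical sketch of the first two steps, assuming input/output arrays u (N×m) and y (N×l), a past horizon p, and a user-chosen model order n; the construction of the Hankel matrix from the estimated Markov parameters and any weighting are omitted, so this is an illustration of the generic procedure rather than the exact algorithm of the paper:

```python
import numpy as np

def hoarx_markov(u, y, p):
    """Step 1: estimate predictor Markov parameters by least squares
    regression of y_k on the past p inputs and outputs (high-order ARX)."""
    N, m = u.shape
    _, l = y.shape
    rows = []
    for k in range(p, N):
        # regressor [u_{k-1} y_{k-1} ... u_{k-p} y_{k-p}]
        past = [np.hstack((u[k - i], y[k - i])) for i in range(1, p + 1)]
        rows.append(np.hstack(past))
    Z = np.array(rows)                       # (N-p) x p(m+l)
    Y = y[p:]                                # (N-p) x l
    Theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return Theta.T                           # l x p(m+l) predictor Markov parameters

def reduce_order(H, n):
    """Step 2: SVD of a (weighted) Hankel-type matrix H formed from the
    Markov parameters; keep the n largest singular values."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    root = np.sqrt(s[:n])
    Gamma_f = U[:, :n] * root                # extended observability estimate
    L_p = (Vt[:n, :].T * root).T             # extended controllability estimate
    return Gamma_f, L_p
```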

In the pre-estimation step [15], [14], [23], [24], the predictor Markov parameters are estimated via least squares regression. These non-parametric estimates are usually interpreted as predictor Markov parameters and further parameterized with the steady-state Kalman filter (SKF). However, motivated by the similarity between the objective functions of least squares and of the non-steady state Kalman filter (NKF), Sorenson [25] showed that the NKF represents a recursive solution to least squares problems. Van Overschee and De Moor [9] proved that the least squares regression of the future output on finite-horizon past input and output data can be interpreted as a bank of NKFs. Recently, Zhao, Sun and Qin [26], [21] pointed out that the SKF parameterization is not consistent with a finite data length and thus does not lead to the optimal solution. The NKF structure is necessary to obtain the optimal estimate in practical cases where the data horizon is finite.
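To illustrate Sorenson's point that a (non-steady) Kalman-type recursion solves the least squares problem, the following sketch runs a standard recursive least squares update with a time-varying gain and checks that it reproduces the batch least squares solution; the data, dimensions, and diffuse prior are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 4))            # regressors
theta_true = np.array([1.0, -0.5, 0.3, 2.0])
y = Z @ theta_true + 0.1 * rng.standard_normal(200)

# recursive least squares: a Kalman-type gain/covariance recursion
theta = np.zeros(4)
P = 1e6 * np.eye(4)                          # diffuse prior covariance
for z_k, y_k in zip(Z, y):
    K_k = P @ z_k / (1.0 + z_k @ P @ z_k)    # time-varying (non-steady) gain
    theta = theta + K_k * (y_k - z_k @ theta)
    P = P - np.outer(K_k, z_k) @ P

theta_batch, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(np.max(np.abs(theta - theta_batch)))   # recursive and batch solutions agree (up to the diffuse prior)
```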

In this paper, we show that SKF based SIMs inherently require the data horizon to be infinite, which cannot be satisfied in practice, and thus their implementation on finite data yields sub-optimal estimates. Consistent with the finite data horizon, a SIM with NKF parameterization is proposed to address the practical closed-loop identification problem. In addition, the relationship between predictor Markov parameters and system Markov parameters is established under the NKF framework.

The remainder of this paper is organized as follows. The traditional SKF parameterization is first given to define notation, and the conversion between predictor and system Markov parameters is introduced in Section 2. The NKF parameterization and the corresponding subspace models are discussed in Section 3. An NKF parameterization based closed-loop subspace identification method is developed in Section 4, followed by the simulation results in Section 5 and the conclusions in Section 6.


Estimating predictor Markov parameters

The purpose of system identification is to estimate an appropriate dynamic model from a properly collected time series of system input and output data {u_k, y_k}, which can be generated under open-loop or closed-loop conditions. Assume the data sequence {u_k, y_k} is generated by the following state space process model:

x_{k+1} = A x_k + B u_k + w_k
y_k = C x_k + v_k

where x_k ∈ ℝ^n. Assuming (C, A) is observable, we can design a stable state observer as follows:

x̂_{k+1} = A x̂_k + B u_k + K e_k
y_k = C x̂_k + e_k

where K is the steady-state Kalman gain.
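As a small numerical check of the observer structure above, the sketch below propagates the innovation form x̂_{k+1} = A x̂_k + B u_k + K e_k and the algebraically equivalent predictor form x̂_{k+1} = (A − KC) x̂_k + B u_k + K y_k with the same data and verifies that they produce identical state estimates; the matrices and the gain K are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# illustrative system matrices and Kalman gain (assumptions)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.3], [0.1]])

u = rng.standard_normal((100, 1))
y = rng.standard_normal((100, 1))            # any measured output sequence

x_inno = np.zeros((2, 1))                    # innovation form state
x_pred = np.zeros((2, 1))                    # predictor form state
for k in range(100):
    u_k = u[k].reshape(-1, 1)
    y_k = y[k].reshape(-1, 1)
    e_k = y_k - C @ x_inno                   # innovation e_k = y_k - C x_k
    x_inno = A @ x_inno + B @ u_k + K @ e_k
    x_pred = (A - K @ C) @ x_pred + B @ u_k + K @ y_k
    assert np.allclose(x_inno, x_pred)       # the two forms coincide
```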

NKF parameterization and estimation

By employing the SKF, the state space model (6) inherently assumes an infinite past data horizon for the Kalman filter to reach its steady gain. In practice, however, the length of the data set is always finite, so the infinite horizon condition cannot be satisfied. In contrast, the state space model with the NKF puts no constraints on the data horizon and provides an optimal model for finite data lengths [18], [26].
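A minimal sketch of why this matters: the non-steady Kalman gains K_k produced by the Riccati difference equation only approach the steady-state gain as the horizon grows, so over a short data horizon the SKF gain is not the optimal one. The system matrices, noise covariances, and initial covariance below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# illustrative system and noise covariances (assumptions)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)                          # process noise covariance
R = np.array([[0.5]])                        # measurement noise covariance

# steady-state (SKF) gain from the algebraic Riccati equation
P_inf = solve_discrete_are(A.T, C.T, Q, R)
K_inf = A @ P_inf @ C.T @ np.linalg.inv(C @ P_inf @ C.T + R)

# non-steady (NKF) gains from the Riccati difference equation
P = 10.0 * np.eye(2)                         # initial state covariance
for k in range(10):
    S = C @ P @ C.T + R
    K_k = A @ P @ C.T @ np.linalg.inv(S)
    print(k, np.linalg.norm(K_k - K_inf))    # gap to the steady gain closes only as k grows
    P = A @ P @ A.T + Q - K_k @ S @ K_k.T
```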

Subspace identification with NKF parameterization

In this section, we develop an NKF parameterization based subspace identification method (abbreviated as NKFID) to solve the finite data horizon closed-loop identification problem.

The NKF parameterization complicates the structure of the predictor observability matrix Γ¯f^NKF and the predictor controllability matrix L¯_{k,p}^NKF, making the recovery of the system matrices more difficult than with the SKF parameterization. As an alternative, the process observability matrix Γf can be used, which is defined as

Γf = [C; CA; ⋯; CA^f]
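For reference, a small helper that stacks C, CA, …, CA^f into the process extended observability matrix Γf from given (A, C); the exact number of block rows is a convention that varies across papers, and the matrices here are illustrative assumptions:

```python
import numpy as np

def extended_observability(A, C, f):
    """Stack C, CA, ..., CA^f into the process extended observability matrix."""
    blocks = [C]
    for _ in range(f):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Gamma_f = extended_observability(A, C, f=5)  # here: shape (6, 2)
```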

Simulation example

In this section, we use a benchmark problem to evaluate the proposed NKFID method and compare it with the SSARX method. The closed-loop control system is adopted from Verhaegen [12], who first used it to evaluate different closed-loop SIMs. SSARX is chosen for comparison because it follows almost the same procedure as the proposed NKFID to handle the correlation between the input and the noise and to estimate the model coefficients. The comparison between NKFID and SSARX can therefore highlight the effect of the parameterization.

Conclusions

Data sets in practical identification problems always have finite length. This challenges the SKF parameterization conveniently used in most existing SIMs, since the SKF requires an infinite data horizon for the Kalman filter to become steady. In this paper, we propose a novel subspace identification algorithm with non-steady state Kalman filter parameterization. The proposed method provides an optimal parameterization and solution regardless of the horizon lengths of the data.

Acknowledgements

Financial support from the Texas-Wisconsin-California Control Consortium (TWCCC) and the Center for Interactive Smart Oilfield Technologies (CiSoft) of the University of Southern California is gratefully acknowledged.

References (32)

  • S.J. Qin et al., On the role of future horizon in closed-loop subspace identification
  • S.J. Qin et al., A novel subspace identification approach with enforced causal models, Automatica (2005)
  • S.J. Qin et al., A survey of industrial model predictive control technology, Control Eng. Pract. (2003)
  • S.J. Qin, An overview of subspace identification, Comput. Chem. Eng. (2006)
  • B.L. Ho et al., Effective construction of linear state-variable models from input–output functions, Regelungstechnik (1965)
  • H. Akaike, A new look at the statistical model identification, IEEE Trans. Autom. Control (1974)