
1991 | Book | 2nd edition

Digital Control Systems

Volume 2: Stochastic Control, Multivariable Control, Adaptive Control, Applications

Author: Professor Dr.-Ing. Rolf Isermann

Publisher: Springer Berlin Heidelberg

About this book

The great advances made in the large-scale integration of semiconductors and the resulting cost-effective digital processors and data storage devices determine the present development of automation. The application of digital techniques to process automation started in about 1960, when the first process computer was installed. From about 1970 process computers with cathode ray tube displays have become standard equipment for larger automation systems. Until about 1980 the annual increase in the number of process computers was about 20 to 30%. Even then the cost of hardware showed a tendency to decrease, whereas the relative cost of user software tended to increase. Because of the high total cost, the first phase of digital process automation is characterized by the centralization of many functions in a single (though sometimes several) process computer. Application was mainly restricted to medium and large processes. Because of the far-reaching consequences of a breakdown of the central computer, parallel standby computers or parallel back-up systems had to be provided. This meant a substantial increase in cost. The tendency to overload the capacity, together with software problems, caused further difficulties.

In 1971 the first microprocessors were marketed which, together with large-scale integrated semiconductor memory units and input/output modules, can be assembled into cost-effective microcomputers. These microcomputers differ from process computers in having fewer but more highly integrated modules and in the adaptability of their hardware and software to specialized, less comprehensive tasks.

Table of Contents

Frontmatter

Control Systems for Stochastic Disturbances

Frontmatter
12. Stochastic Control Systems (Introduction)
Abstract
The controllers treated in Volume I were designed for deterministic disturbances, that is, for signals which are exactly known a priori and can be described analytically. Real disturbances, however, are mostly stochastic signals which can be neither exactly described nor predicted. The deterministic signals used for the design of control systems are often ‘proxies’ of real signals. These proxies have simple shapes to reduce the design complexity and to allow easy interpretation of the control system output. The resulting control systems are then optimal only for the chosen proxy signal and the applied criterion. For all other signals the control system is sub-optimal; in most cases, however, this is not very important. If the demands on the control performance increase, the controllers must be matched not only to the dynamic behaviour of the processes but also to the disturbances. The theory of stochastic signals has much to contribute to this.
Rolf Isermann
13. Parameter-optimized Controllers for Stochastic Disturbances
Abstract
The parameter-optimized control algorithms given in chapter 5 can be modified to include stochastic disturbance signals n(k) by using the quadratic performance criterion
$$S_{eu}^{2} = \sum_{k = 1}^{M} \left[ e^{2}(k) + r K_{p}^{2}\, \Delta u^{2}(k) \right]$$
(13.1)
if the disturbance signals are known. When using a process computer, the stochastic noise can first be stored and then used in the optimization of controller parameters. If the disturbance is stationary, and if it has been measured and stored for a sufficiently long time, it can then be assumed that the designed controller is optimal also for future noise and a mathematical noise model is not necessary for parameter optimization.
Rolf Isermann
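As an illustration of criterion (13.1) above, the following short Python sketch evaluates the quadratic performance measure for a stored control-error and input-change record. The function name and the example data are editorial assumptions, not taken from the book.

    import numpy as np

    def quadratic_criterion(e, delta_u, r, Kp):
        """Evaluate S_eu^2 = sum_{k=1}^{M} [ e^2(k) + r * Kp^2 * delta_u^2(k) ]."""
        e = np.asarray(e, dtype=float)
        delta_u = np.asarray(delta_u, dtype=float)
        return float(np.sum(e**2 + r * Kp**2 * delta_u**2))

    # Example: evaluate a stored closed-loop record (illustrative numbers only).
    e1 = [0.8, 0.3, -0.1, 0.05]      # control error e(k)
    du1 = [0.5, -0.2, 0.1, -0.05]    # input changes delta_u(k)
    print(quadratic_criterion(e1, du1, r=0.1, Kp=2.0))

Different controller parameter settings applied to the same stored noise record can then be compared by their resulting criterion values.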
14. Minimum Variance Controllers for Stochastic Disturbances
Abstract
In the design of minimum variance controllers the variance of the controlled variable
$$\operatorname{var}[y(k)] = E\{ y^{2}(k) \}$$
is minimized. This criterion was used in [12.4] by assuming a noise filter given by (12.2.31) but with C(z^{-1}) = A(z^{-1}). The manipulated variable u(k) was not weighted, so that in many cases excessive input changes are produced. A weighting r on the input was proposed in [14.1], so that the criterion
$$E\{ y^{2}(k + i) + r\, u^{2}(k) \}, \qquad i = d + 1$$
is minimized. The noise n(k) can be modelled using a nonparametric model (impulse response) or a parametric model as in (12.2.31). As a result of the additional weighting of the input, the variance of the controlled variable is no longer minimal; instead the variance of a combination of the controlled variable and the manipulated variable is minimized. This results in a generalized minimum variance controller.
Rolf Isermann
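The generalized criterion above can be checked empirically on recorded closed-loop data. The following sketch is an editorial illustration with assumed names; it only shows the finite-sample averaging, not the book's controller derivation.

    import numpy as np

    def gmv_cost(y, u, d, r):
        """Empirical estimate of E{ y^2(k+d+1) + r*u^2(k) } from recorded
        sequences y and u, where d is the process dead time in samples."""
        y = np.asarray(y, dtype=float)
        u = np.asarray(u, dtype=float)
        n = min(len(y), len(u))
        horizon = d + 1
        y_future = y[horizon:n]          # y(k + d + 1)
        u_now = u[:n - horizon]          # u(k)
        return float(np.mean(y_future**2 + r * u_now**2))

    # r = 0 recovers the pure minimum variance criterion (no input weighting).
    y = np.random.randn(500)
    u = np.random.randn(500)
    print(gmv_cost(y, u, d=2, r=0.2))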
15. State Controllers for Stochastic Disturbances
Abstract
The process model assumed in chapter 8 for the derivation of the state controller for deterministic initial values is now excited by a vector stochastic noise signal v(k)
$$x(k + 1) = A x(k) + B u(k) + F v(k)$$
(15.1.1)
Rolf Isermann
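For orientation, a minimal simulation of the stochastically excited state model (15.1.1) with an output equation y(k) = C x(k). The matrices, the constant test input and the noise level are arbitrary example values chosen by the editor, not results from the chapter.

    import numpy as np

    rng = np.random.default_rng(0)

    # x(k+1) = A x(k) + B u(k) + F v(k),   y(k) = C x(k)
    A = np.array([[0.9, 0.1],
                  [0.0, 0.7]])
    B = np.array([[0.0],
                  [1.0]])
    F = np.array([[1.0],
                  [0.5]])
    C = np.array([[1.0, 0.0]])

    x = np.zeros(2)
    ys = []
    for k in range(50):
        u = 1.0                              # constant test input
        v = rng.normal(scale=0.1)            # scalar stochastic excitation v(k)
        ys.append(float((C @ x)[0]))
        x = A @ x + B.flatten() * u + F.flatten() * v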

Interconnected Control Systems

Frontmatter
16. Cascade Control Systems
Abstract
The design of an optimal state controller involves the feedback of all the state variables of the process. If not all state variables can be measured, but for example only one state variable between the process input and output, then improvements can be obtained for single-loop systems using, for example, parameter-optimized controllers by treating this state variable as an auxiliary controlled variable y_2 which is fed back to the manipulated variable via an auxiliary controller, as shown in Figure 16.1. The process part G_Pu2 and the auxiliary controller G_R2 then form an auxiliary control loop whose reference value is the output of the main controller G_R1.
Rolf Isermann
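A schematic discrete-time sketch of this cascade structure, with first-order models for the two process parts and proportional main and auxiliary controllers; all model parameters and gains are illustrative editorial assumptions, not values from the book.

    # Cascade: main controller -> auxiliary loop (auxiliary controller + G_Pu2)
    # -> remaining process part -> main controlled variable y1.
    a2, b2 = 0.5, 0.5      # inner part G_Pu2: y2(k) = a2*y2(k-1) + b2*u(k-1)
    a1, b1 = 0.8, 0.2      # outer part:       y1(k) = a1*y1(k-1) + b1*y2(k-1)
    Kp1, Kp2 = 2.0, 1.0    # main and auxiliary proportional gains (illustrative)

    w1 = 1.0               # reference value for the main controlled variable
    y1 = y2 = u = 0.0
    for k in range(30):
        y1 = a1 * y1 + b1 * y2          # outer process part (uses previous y2)
        y2 = a2 * y2 + b2 * u           # inner process part (uses previous u)
        w2 = Kp1 * (w1 - y1)            # main controller output = inner-loop reference
        u = Kp2 * (w2 - y2)             # auxiliary controller sets the manipulated variable

With purely proportional controllers a steady-state offset remains; the sketch is only meant to show how the main controller supplies the reference value of the auxiliary loop.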
17. Feedforward Control
Abstract
If an external disturbance v of a process can be measured before it acts on the output variable y, the control performance with respect to this disturbance can often be improved by feedforward control, as shown in Figure 17.1. Here, immediately after a change in the disturbance v, the process input u is manipulated by a feedforward element G_S which, unlike feedback control, does not wait until the disturbance has affected the controlled variable y. Significant improvement in control performance, however, can only be obtained for a restricted manipulation range if the process behaviour G_Pu is not slow compared with the disturbance behaviour G_Pv.
Rolf Isermann
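A minimal sketch of static feedforward compensation of a measurable step disturbance. The first-order model and the assumption that the u-path and the v-path share the same denominator are editorial simplifications, chosen so that the static compensator u(k) = -(b_v/b_u)·v(k) cancels the disturbance exactly; in general a dynamic compensator derived from G_Pu and G_Pv is needed.

    a, b_u, b_v = 0.8, 0.4, 0.6   # y(k) = a*y(k-1) + b_u*u(k-1) + b_v*v(k-1)

    def simulate(use_feedforward, steps=40):
        y, u_prev, v_prev = 0.0, 0.0, 0.0
        trace = []
        for k in range(steps):
            y = a * y + b_u * u_prev + b_v * v_prev
            v = 1.0 if k >= 10 else 0.0                   # measurable step disturbance
            u = -(b_v / b_u) * v if use_feedforward else 0.0
            u_prev, v_prev = u, v
            trace.append(y)
        return trace

    print(max(abs(y) for y in simulate(True)))    # ~0: disturbance compensated
    print(max(abs(y) for y in simulate(False)))   # disturbance drives y away from 0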

Multivariable Control Systems

Frontmatter
18. Structures of Multivariable Processes
Abstract
Part E considers some design methods for linear discrete-time multivariable processes. As shown in Figure 18.1, the inputs u_i and outputs y_j of multivariable processes influence each other, resulting in mutual interactions between the direct signal paths u_1 → y_1, u_2 → y_2, etc. The internal structure of multivariable processes has a significant effect on the design of multivariable control systems. This structure can be obtained by theoretical modelling if there is sufficient knowledge of the process. The structures of technical processes are so varied that they cannot be described in terms of only a few standardized structures. However, the real structure can often be transformed into a canonical model structure using similarity transformations or simply block-diagram conversion rules. The following sections consider special structures of multivariable processes based on the transfer function representation, the matrix polynomial representation and the state representation. These structures are the basis for the designs of multivariable controllers presented in the following chapters.
Rolf Isermann
19. Parameter-optimized Multivariable Control Systems
Abstract
Parameter-optimized multivariable controllers are characterized by a given controller structure and by the choice of free parameters using optimization criteria or tuning rules. Unlike single-variable control systems, the structure of a multivariable controller comprises not only the order of the different control algorithms but also the mutual arrangement of the coupling elements, as in chapter 18. Corresponding to the main and coupling transfer elements of multivariable processes, one distinguishes main and coupling controllers (cross controllers). The main controllers R_ii are directly dedicated to the main elements G_ii of the process and serve to keep the controlled variables y_i close to the reference variables w_i, see Figure 19.1a. The coupling controllers R_ij couple the single loops on the controller side, Figure 19.1b-d. They can be designed to decouple the loops completely or partially, or to reinforce the coupling. This depends on the process, the acting disturbance and command signals, and on the requirements on the control performance.
Rolf Isermann
20. Multivariable Matrix Polynomial Control Systems
Abstract
Based on the matrix polynomial representation of multivariable processes described in section 18.1.5 the design principles of some single input/single output controllers can be transferred to the multivariable case with equal numbers of process inputs and outputs.
Rolf Isermann
21. Multivariable State Control Systems
Abstract
The state controller for multivariable processes was designed in chapter 8. Therefore only a few additional comments are made in this chapter. The process equation considered in the deterministic case is
$$x(k + 1) = A x(k) + B u(k)$$
(21.1)
$$y(k) = C x(k)$$
(21.2)
with m state variables, p process inputs and r process outputs. The optimal steady-state controller is then
$$u(k) = -K x(k)$$
(21.3)
and possesses p×m coefficients if each state variable acts on each process input.
Rolf Isermann
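As a small illustration of (21.1)-(21.3), the following sketch computes a constant state-feedback gain K for an example model by solving the discrete-time matrix Riccati equation with SciPy and then applies u(k) = -K x(k). The matrices and weights are editorial examples; the book's own derivation is in chapter 8.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1],
                  [0.0, 0.9]])
    B = np.array([[0.0],
                  [0.1]])
    Q = np.eye(2)            # state weighting
    R = np.array([[1.0]])    # input weighting

    # Steady-state solution of the discrete matrix Riccati equation.
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    print(K.shape)           # (1, 2): p = 1 process input, m = 2 state variables,
                             # i.e. p x m controller coefficients

    x = np.array([1.0, -1.0])
    for k in range(20):
        u = -K @ x                        # state controller u(k) = -K x(k)
        x = A @ x + B @ u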
22. State Estimation
Abstract
For the realization of state controllers for processes with stochastic disturbances, estimates x̂(k) of the internal state variables are required which are based on the measured input and output signals u(k) and y(k), see chapters 15 and 21. The measurable variables y(k) are influenced not only by u(k) but also by the nonmeasurable noise signals v(k) and n(k). Since only the signal part caused by u(k) is of interest in the state variables x(k), suitable filtering methods have to separate the signal from the noise. Therefore, the general problem of how to separate signals from noise is briefly treated first, followed by the derivation of the Kalman filter, explaining the principle of estimation in several steps for both the scalar and the vector case. The state representation allows a direct treatment of multivariable processes.
Rolf Isermann
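As a hedged illustration of the estimation principle in the scalar case, a textbook-style one-dimensional Kalman filter; the particular model, input and noise variances are editorial examples, not taken from the chapter.

    import numpy as np

    rng = np.random.default_rng(1)

    a, b, c = 0.9, 0.5, 1.0      # x(k+1) = a*x(k) + b*u(k) + v(k),  y(k) = c*x(k) + n(k)
    q, r_n = 0.04, 0.25          # variances of v(k) and n(k)

    x_true, x_hat, P = 0.0, 0.0, 1.0
    for k in range(100):
        u = 1.0
        # true process (only y is measurable)
        x_true = a * x_true + b * u + rng.normal(scale=np.sqrt(q))
        y = c * x_true + rng.normal(scale=np.sqrt(r_n))
        # prediction step
        x_pred = a * x_hat + b * u
        P_pred = a * P * a + q
        # correction step with the new measurement
        gain = P_pred * c / (c * P_pred * c + r_n)
        x_hat = x_pred + gain * (y - c * x_pred)
        P = (1.0 - gain * c) * P_pred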

Adaptive Control Systems

Frontmatter
23. Adaptive Control Systems (A Short Review)
Abstract
The implementation of structure-optimal and precisely adjusted control algorithms presupposes knowledge of dynamic process models. For stochastic control systems, signal models have to be known in addition. Since both process models and signal models can be obtained through identification and parameter estimation methods using digital computers, a combination with controller design methods is appropriate. If the individual problems are solved on-line, in real time and in closed loop, this leads to selftuning and adaptive control systems.
Rolf Isermann
24. On-line Identification of Dynamical Processes and Stochastic Signals
Abstract
Identification is the experimental determination of the dynamical behaviour of processes and their signals. Measured signals are used to determine the system behaviour within a class of mathematical models. The error between the real process or signal and its mathematical model has to be as small as possible [3.12], [3.13]. On-line identification means identification with computers in on-line operation with the process. If the measured signals are first stored in blocks or arrays, this is called batch processing. If, however, the signals are processed after each sampling instant, this is called real-time processing.
Rolf Isermann
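A minimal sketch of real-time processing in the above sense: recursive least-squares estimation of a first-order model y(k) + a1 y(k-1) = b1 u(k-1) + e(k), updated after each sampling instant. The forgetting factor, test signal and noise level are illustrative editorial assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    a1_true, b1_true = -0.8, 0.5

    theta = np.zeros(2)                  # estimates of [a1, b1]
    P = 1000.0 * np.eye(2)               # covariance matrix, large initial value
    lam = 0.99                           # forgetting factor

    y_prev, u_prev = 0.0, 0.0
    for k in range(500):
        u = rng.choice([-1.0, 1.0])                       # PRBS-like test signal
        y = -a1_true * y_prev + b1_true * u_prev + 0.05 * rng.normal()
        psi = np.array([-y_prev, u_prev])                 # data vector psi(k)
        err = y - psi @ theta                             # equation error
        gain = P @ psi / (lam + psi @ P @ psi)
        theta = theta + gain * err
        P = (P - np.outer(gain, psi @ P)) / lam
        y_prev, u_prev = y, u

    print(theta)    # approaches [a1_true, b1_true]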
25. On-line Identification in Closed Loop
Abstract
If the design of adaptive control systems is based on identified process models, process identification has to be performed in closed loop. There are also other applications in which dynamic processes have to be identified in closed loop. Relevant examples are processes which have to operate in closed loop for technical reasons, or integrated and economical processes for which feedback is an integral part of the overall system. Process identification in closed loop is therefore of general significance and is not restricted in this chapter to applications in adaptive control systems. It must first be established whether methods developed for open-loop identification can also be applied in closed loop, taking into account the various convergence conditions. The problem is quite obvious if correlation analysis is considered. For convergence of the cross-correlation function it is required that the input u(k) is not correlated with the noise n(k). Feedback, however, generates such a correlation. If the method of least squares is considered for parameter estimation, the error signal e(k) must be uncorrelated with the elements of the data vector ψ(k). It has to be examined whether feedback changes this independence.
Rolf Isermann
26. Parameter-adaptive Controllers
Abstract
This chapter treats parameter-adaptive controllers which are based on suitable parameter estimation methods, controller design methods and control algorithms, cf. chapter 23. The relevant parameter estimation methods were discussed in chapters 24 and 25. This chapter is therefore mainly devoted to combining the identified process model with appropriate controller design methods, examining the resulting behaviour, giving examples of various parameter-adaptive control systems and their applications, discussing continuous supervision, etc.
Rolf Isermann
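A schematic sketch of the certainty-equivalence idea behind such controllers: recursive estimation of a first-order model followed, at every sample, by a simple one-step-ahead controller designed from the current estimates. This skeleton only shows how the two steps are combined; it is an editorial illustration, not a specific algorithm from the chapter.

    import numpy as np

    rng = np.random.default_rng(3)
    a1, b1 = -0.7, 0.4                   # unknown 'true' process: y(k) = -a1*y(k-1) + b1*u(k-1)

    theta = np.array([0.0, 0.1])         # initial estimates [a1_hat, b1_hat], b1_hat != 0
    P = 100.0 * np.eye(2)
    w = 1.0                              # constant reference value

    y_prev, u_prev = 0.0, 0.0
    for k in range(200):
        y = -a1 * y_prev + b1 * u_prev + 0.02 * rng.normal()
        # 1) recursive parameter estimation (least squares)
        psi = np.array([-y_prev, u_prev])
        gain = P @ psi / (1.0 + psi @ P @ psi)
        theta = theta + gain * (y - psi @ theta)
        P = P - np.outer(gain, psi @ P)
        a1_hat, b1_hat = theta
        # 2) controller design from the current model: choose u(k) so that the
        #    one-step-ahead prediction of y equals the reference w
        u = (w + a1_hat * y) / b1_hat if abs(b1_hat) > 1e-3 else 0.0
        y_prev, u_prev = y, u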

Digital Control with Process Computers and Microcomputers

Frontmatter
27. The Influence of Amplitude Quantization on Digital Control
Abstract
In the previous chapters the treatment of digital control systems was based on sampled, i.e. discrete-time signals only. Any amplitude quantization was assumed to be so fine that the amplitudes could be considered as quasi continuous. This assumption is justified for large signal changes in current process computers. However, for small signal changes and for digital controllers with small word lengths the resulting effects have to be considered and compared with the continuous case.
Rolf Isermann
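To make the quantization effect concrete, a tiny sketch of amplitude quantization with a fixed quantization unit, showing that small signal changes can disappear entirely; the quantization unit below is an arbitrary editorial example.

    def quantize(x, q):
        """Round x to the nearest multiple of the quantization unit q."""
        return q * round(x / q)

    q = 0.01                       # quantization unit (e.g. limited word length)
    print(quantize(0.5037, q))     # 0.5  -> large values are hardly affected
    print(quantize(0.004, q))      # 0.0  -> a small change is lost completely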
28. Filtering of Disturbances
Abstract
Some control systems and many measurement techniques require the determination of signals which are contaminated by noise. It is assumed that a signal s(k) is contaminated additively by n(k) and only
$$y(k) = s(k) + n(k)$$
is measurable.
Rolf Isermann
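As a minimal example of separating the signal part from the noise, a first-order discrete low-pass filter applied to y(k) = s(k) + n(k); the signal, noise level and filter coefficient are illustrative editorial choices, not results from the chapter.

    import numpy as np

    rng = np.random.default_rng(4)
    k = np.arange(200)
    s = np.sin(2 * np.pi * k / 100)          # useful signal s(k)
    n = 0.3 * rng.normal(size=k.size)        # noise n(k)
    y = s + n                                # only y(k) is measurable

    # first-order low pass: y_f(k) = (1 - alpha) * y_f(k-1) + alpha * y(k)
    alpha = 0.1
    y_f = np.zeros_like(y)
    for i in range(1, len(y)):
        y_f[i] = (1 - alpha) * y_f[i - 1] + alpha * y[i]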
29. Combining Control Algorithms and Actuators
Abstract
Within a control system actuators have to be moved to a certain absolute position U(k). After linearization around an operating point the linear controllers determine relative positions u(k) = ΔU(k) = U(k) − U_00 with respect to the operating point U_00, which depends on the command variable. In digital control systems recursive algorithms are used to determine u(k). Programming can be performed such that the first difference Δu(k) = u(k) − u(k−1) appears at the digital controller output. (For the PID control algorithm, section 5.1, this is called the “velocity algorithm”.)
Rolf Isermann
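A hedged sketch of the velocity form mentioned above: the controller outputs only the increment Δu(k), which the actuator integrates to the absolute position. The coefficient formulas correspond to one common rectangular-rule discretization of the PID law and, like the numerical values, are given for illustration only.

    K, Ti, Td, T0 = 2.0, 10.0, 1.0, 1.0     # gain, integral time, derivative time, sample time

    # increments of a discretized PID law (one common rectangular-rule form)
    q0 = K * (1 + T0 / Ti + Td / T0)
    q1 = -K * (1 + 2 * Td / T0)
    q2 = K * Td / T0

    def velocity_pid(e, e1, e2):
        """Return the input change delta_u(k) from the last three control errors."""
        return q0 * e + q1 * e1 + q2 * e2

    U = 0.0                                 # absolute actuator position U(k)
    e2 = e1 = 0.0
    for e in [1.0, 0.8, 0.5, 0.3, 0.1]:     # example error sequence
        U += velocity_pid(e, e1, e2)        # actuator integrates the increments
        e2, e1 = e1, e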
30. Computer-aided Control Algorithm Design
Abstract
Conventionally, analog and digital control algorithms of PID type are in practice designed and tuned by trial and error, supported by rules of thumb and sometimes by simulation studies. For processes with
  • little knowledge of the internal behaviour
  • difficult dynamic behaviour
  • strong couplings in multivariable systems
  • large dimension
  • long settling times
  • high control performance requirements
Rolf Isermann
31. Adaptive and Selftuning Control Systems Using Microcomputers and Process Computers
Abstract
As already described in chapter 26, parameter-adaptive control systems are generated if recursive parameter estimation methods and controller design methods are combined. The following will briefly report on their implementation with microcomputers and on various applications.
Rolf Isermann
Backmatter
Metadata
Title
Digital Control Systems
Author
Professor Dr.-Ing. Rolf Isermann
Copyright Year
1991
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-86420-9
Print ISBN
978-3-642-86422-3
DOI
https://doi.org/10.1007/978-3-642-86420-9