
About this book

This book joins the multitude of Control Systems books now available, but is neither a textbook nor a monograph. Rather it may be described as a resource book or survey of the elements/essentials of feedback control systems. The material included is a result of my development, over a period of several years, of summaries written to supplement a number of standard textbooks for undergraduate and early post-graduate courses. Those notes, plus more work than I care right now to contemplate, are intended to be helpful both to students and to professional engineers. Too often, standard textbooks seem to overlook some of the engineering realities: roughly how much things cost and how large the hardware and computer programs for even simple algorithms are; sensing and actuation; special systems such as PLCs and PID controllers; the engineering of real systems, as opposed to coverage of SISO theories; and the special characteristics of computers, their programming, and their potential interactions within larger systems. In particular, students with specializations other than control systems are not being exposed to the breadth of the considerations needed in control systems engineering, perhaps because it is assumed that such exposure will always come from a multicourse sequence taken by specialists. The lectures given to introduce at least some of these aspects were more effective when supported by written material: hence, the need for my notes which preceded this book.

Table of contents

Frontmatter

1. Introduction and overview

The feedback control systems specialist has a multifaceted job involving several different types of tasks; an alternative point of view is that control systems encompass several different sub-specialties from which an engineer might choose to emphasize only one or two. The aspects of the systems to be understood include:

1. the process being controlled, whether it be a beer brewing plant, a refinery, an aircraft, or an artificially paced human heart;
2. the hardware, including instruments for sensing, wiring carrying data, computers processing data, and motors implementing commands; and
3. the algorithms used and the computer coding which implements them.

L. C. Westphal

2. Elements of systems engineering of digital control

Control systems are the subsystems of plants which generate and send the commands to the plants’ ‘working’ components. Hence, they are the elements which turn motors on and off, regulate inputs, record (or log) data on the processes, send messages to operators, etc. The level of sophistication is decided at the systems engineering stage, with the goal of using control components and techniques appropriate to the task - neither using a supercomputer to turn a heater on and off nor a commercial personal microcomputer for a sophisticated satellite antenna pointing system. The various decisions involved are aspects of systems engineering, and require decisions at a number of levels. This chapter explores only two levels: system structuring and component selection.

L. C. Westphal

3. Sensors and instrumentation

In this book we choose to work from the process back to the controller on a system such as is indicated abstractly in Fig. 3.1 and more specifically in Fig. 3.2. The immediate connections to the process are the sensors which measure the state of the process and the actuators which influence the control elements to adjust the process.

L. C. Westphal

4. Control elements, actuators, and displays

The reverse operation of computer data gathering is information output, particularly data display and control commands. We now look at the interface aspects of these — the transduction of the computer words to signals appropriate for operating valves, moving dials, running motors, etc. In this context, a control element is a device such as a valve, an actuator is a motor or solenoid which opens and closes the valve, and a display is an indicator on the operator’s console.

L. C. Westphal

5. Computer systems hardware

The connecting hardware element between sensors and actuators is the computer system, with connections being performed using various communications strategies (Chapter 7). In this chapter some of the essential aspects of computers in a real-time environment are introduced.

L. C. Westphal

6. Computer software

The computer software encodes the control algorithms and the logical commands for the interfacing of the I/O and for the emergency and other routines which the system is expected to perform.

L. C. Westphal

7. Communications

In this chapter we consider the connection of computer control system components at three levels: simple wiring, instruments to computers using digital signalling, and computer networking.

L. C. Westphal

8. Control laws without theory

Many control systems can be and have been set up with almost no reliance upon the mathematical theory of control. Many more have been based upon a notion of how the mathematics might turn out but with no explicit reliance upon that tool. This is true even leaving aside the PLCs devoted simply to turning on and off the various machines involved.

L. C. Westphal

9. Sources of system models

In previous chapters we have virtually ignored the system being controlled. Knowing a temperature was to be controlled, it was implicit that a temperature sensor was to be used to measure the output and a heater of some kind to cause the output to vary, but the mechanism of heat transfer was ignored. It was even suggested that the loop could be tuned by formulae which barely recognized the nature of the controlled process. None of this is completely true, of course, as a good engineer will have a notion of how the system works and how it reacts to inputs, and will be influenced by this in many aspects of engineering the system. So, while the previous work did not use mathematical models of the controlled plant, such models will be pervasive in what follows. In fact, the use of mathematical models is fundamental to the application of control theory.

L. C. Westphal

10. Continuous-time system representations

The mathematical model of a plant or process usually comprises a set of differential equations to describe its operations. For convenience, these are frequently converted to other representations and this chapter considers the linear time-invariant systems to which these representations apply.

L. C. Westphal

11. Sampled-data system representations

This chapter parallels Chapter 10 for discrete-time systems. Because difference equations are undoubtedly less familiar to students than differential equation methods, we also take a brief look at the characteristics of difference equation time responses.

L. C. Westphal

12. Conversions of continuous time to discrete time models

To a computer, a plant looks like a discrete time system even though usually it is well defined for continuous time. In addition, the computer issues its commands at discrete times even if the original control law design was based on differential equations. For these reasons, it is necessary to be able to convert continuous time representations to equivalent discrete time representations.
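As an illustration of such a conversion (using an example not taken from the book), the zero-order-hold (ZOH) equivalent of a first-order continuous-time model dx/dt = a·x + b·u can be computed exactly in closed form; the values below are purely illustrative:

```python
# Minimal sketch of ZOH discretization for a scalar system,
# assuming the input u is held constant over each sample period T.
import math

def zoh_discretize(a, b, T):
    """Exact ZOH equivalent x[k+1] = ad*x[k] + bd*u[k], assuming a != 0."""
    ad = math.exp(a * T)            # state transition over one period
    bd = (ad - 1.0) / a * b         # integral of the matrix exponential times b
    return ad, bd

# Illustrative plant dx/dt = -2x + u sampled at T = 0.1 s
ad, bd = zoh_discretize(a=-2.0, b=1.0, T=0.1)
```

For matrix-valued systems the same idea applies with the matrix exponential in place of `math.exp`.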

L. C. Westphal

13. System performance indicators

Closed-loop systems are expected to give ‘good’ performance. In this chapter we introduce some of the properties used to measure performance.

L. C. Westphal

14. BIBO stability and simple tests

BIBO stability of constant coefficient linear systems, whether described by differential or difference equations, is determined by the pole locations of the closed-loop systems. These poles are, by definition, the roots of the denominator polynomial in transfer function representations and of the characteristic equation of the A matrix in state-space representations. The poles must lie in the left-half plane for continuous time systems and within the unit circle for discrete time systems. The straightforward way of checking this is to compute the poles. An alternative that is easy and can lead to other insights is to process the coefficients of the denominator polynomial of the transfer function, which is the same as the determinant of the state-space dynamics matrix. This chapter demonstrates those tests and shows how they may be used in three different ways.
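The "straightforward way" mentioned above can be sketched in a few lines; this is a generic illustration (the polynomials are invented for the example, not drawn from the book):

```python
# Minimal sketch: BIBO stability check by computing the roots of the
# denominator polynomial and testing the stable region.
import numpy as np

def is_stable(den_coeffs, discrete=False):
    """True if all poles lie in the left-half plane (continuous time)
    or strictly inside the unit circle (discrete time)."""
    poles = np.roots(den_coeffs)
    if discrete:
        return bool(np.all(np.abs(poles) < 1.0))
    return bool(np.all(poles.real < 0.0))

# s^2 + 3s + 2 = (s+1)(s+2): poles at -1 and -2
stable_ct = is_stable([1, 3, 2])
# z^2 - 1.5z + 0.9: complex pole pair with magnitude sqrt(0.9) < 1
stable_dt = is_stable([1, -1.5, 0.9], discrete=True)
```

Coefficient-based tests such as Routh's avoid root computation entirely, which is the insight the chapter develops.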

L. C. Westphal

15. Nyquist stability theory

One of the now classical methods of testing closed-loop stability as a function of plant model and of control loop structure is based upon complex variable theory, with the particular application due to Nyquist. The idea is subtle for beginners but is basically straightforward. It has been applied to basic stability testing, to certain non-linear systems, to multivariable systems, and to developing the concepts of relative stability.

L. C. Westphal

16. Lyapunov stability testing

The second or direct method of Lyapunov is entirely different from pole analysis in philosophy, nature and detail, although there are a few overlaps for linear time-invariant systems.

L. C. Westphal

17. Steady-state response: error constants and system type

The interest of the engineer is in two aspects of the system response: steady-state response and transient response. The former is the output as t→∞, while the latter is the portion of the output which dies away after a short time. Provided that the system is asymptotically stable, the transients will indeed die out. This section is concerned with aspects of the steady-state error.
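As a small worked illustration (the transfer function is invented for the example), the position error constant of a type-0 loop is Kp = lim as s goes to 0 of G(s), and the steady-state error to a unit step is 1/(1 + Kp):

```python
# Minimal sketch: steady-state step error of a type-0 unity-feedback loop
# via the position error constant Kp = G(0).
import numpy as np

num = np.array([4.0])             # numerator of the illustrative G(s)
den = np.array([1.0, 3.0, 2.0])   # denominator: s^2 + 3s + 2

Kp = np.polyval(num, 0.0) / np.polyval(den, 0.0)   # G(0) = 4/2 = 2
e_ss = 1.0 / (1.0 + Kp)                            # unit-step error = 1/3
```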

L. C. Westphal

18. Root locus methods for analysis and design

Root locus methods allow study of the changes in the roots of a polynomial as a parameter varies: their implicit but important message is that these roots vary in a regular manner as a parameter varies. Root locus methods are applied to the denominator polynomials of closed-loop transfer functions and hence indicate the movement of system poles as a parameter (typically a control compensator parameter such as a gain) varies. The techniques are independent of whether the system is discrete time or continuous time, but the criteria for good and bad poles depend upon that fact (Chapter 19).
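The basic computation behind a root locus can be sketched numerically; for the illustrative (not from the book) loop transfer function G(s) = 1/(s(s+2)), the closed-loop denominator is s^2 + 2s + K, and sampling its roots over a gain range traces the locus:

```python
# Minimal sketch: sampling closed-loop pole locations as the gain K varies.
import numpy as np

gains = [0.5, 1.0, 2.0, 5.0]
loci = {K: np.roots([1.0, 2.0, K]) for K in gains}
# K = 1 gives a double root at s = -1 (the breakaway point); for K > 1
# the pair moves off the real axis with real part fixed at -1.
```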

L. C. Westphal

19. Desirable pole locations

It is always required that system characteristic values, also known as transfer function poles and dynamics matrix eigenvalues, be stable, meaning that they should be in the left-half plane for continuous time systems and within the unit circle for sampled data systems. It is also true, however, that some pole values may yield more desirable system responses than other values. We explore this issue in this chapter.

L. C. Westphal

20. Bode diagrams for frequency domain analysis and design

The classical methods known as frequency domain techniques have their origin with electrical engineers, who rely extensively on representations of signals as sums of sinusoids in their modelling and analysis. It has seemed natural for them to carry such ideas with them into control systems analysis and synthesis, often with considerable success. Less successful has been the direct use of the methods for sampled data control systems (although indirect use characterized by conversion of continuous system designs is possible). This has been for several reasons: the approximations that allow sketching do not always apply, the compensators of most common use are not so relevant for digital control systems, and well established relationships between step responses, pole locations, and frequency responses seem to hold only roughly for sampled data systems. Nevertheless having at least some knowledge of frequency domain methods is fundamental, and for that reason we review them here.

L. C. Westphal

21. A special control law: deadbeat control

Most of the methods we have considered so far have their origins in linear control laws for systems described by linear constant coefficient differential equations, with sampled data control studied by adapting those theories. Even in the nominally linear control law realm, however, it is possible to develop a special and interesting performance property with discrete time control. In particular, it is possible in principle to achieve a zero error to an input after finite time with a linear control law; this contrasts with continuous time control, which can only asymptotically provide zero error, and follows from the fact that computer control commands are piecewise constant in nature. A response which quickly reaches zero error at the sampling instants and has little ripple between samples is called a deadbeat response. In this chapter we develop both transfer function oriented and state-space oriented approaches to design of the control laws, called deadbeat controllers, which yield such response.
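A state-space flavour of the idea can be sketched as follows: placing all closed-loop poles at z = 0 makes the closed-loop dynamics matrix nilpotent, so any initial state reaches zero in at most n steps. The discrete double-integrator plant and the gain computation below (via Ackermann's formula) are an illustrative two-state example, not taken from the book:

```python
# Minimal sketch of state-space deadbeat design: all poles placed at z = 0.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # discrete double integrator, T = 1
B = np.array([[0.5], [1.0]])

# Ackermann's formula with desired characteristic polynomial z^2
Cmat = np.hstack([B, A @ B])              # controllability matrix
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(Cmat) @ (A @ A)

Acl = A - B @ K                           # nilpotent: Acl @ Acl = 0
x = np.array([[1.0], [-2.0]])             # arbitrary initial state
for _ in range(2):                        # n = 2 steps suffice
    x = Acl @ x
# x is now (numerically) the zero vector
```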

L. C. Westphal

22. Controllability

Two properties of the linear state-space system descriptions which are often needed in proofs about existence of certain types of controllers and state estimators are controllability and observability. Although these have important and easily visualized interpretations in terms of the natural language definitions of the words, it should be recognized that they are ultimately technical terms, used as shorthand to summarize properties of the system which allow certain types of controllers and state estimators to be designed. In particular, a system which is not controllable is not ‘uncontrollable’ in the natural language sense, nor is a not observable system ‘unobservable’.

L. C. Westphal

23. Controller design by pole placement

In classical methods of control law design, a structure for the controller is introduced by the designer and then several parameters within that structure are chosen to yield a response which meets specifications. Design work is usually done with the transfer function, either in a complex plane (as in root locus) or in the frequency domain (Nyquist-Bode-Nichols). The mathematics of pole placement design are as useful and interesting as many of the results, and therefore this chapter considers many variations on the basic problem: the essential result is that under certain conditions the poles of the closed-loop system may be placed at arbitrary locations of the designer’s choice.

L. C. Westphal

24. Observability

Like controllability, observability has important and easily visualized interpretations in terms of the natural language definitions of the word, but it should be recognized ultimately as a technical term. In particular, a system which is not an observable system is not ‘unobservable’. In this section we look at some of the tests and concepts for observability. The reader may note the striking similarity to the Chapter 22 section on controllability; this is not quite accidental, as the two concepts are mathematical duals of each other. To avoid near repetition, some results for continuous time systems are emphasized along with standard discrete time system results.

L. C. Westphal

25. State observers

State estimators are algorithms for deriving state estimates from measurements. Based upon proper manipulation of system models, they allow, for example, the estimation of state variables such as accelerations from position measurements. Applications can be found in instrumentation, flight reconstruction, and in providing inputs to state feedback control laws. The most famous state estimators are the Luenberger observer and the Kalman filter. The basic form of the former does not explicitly consider the possibility of noise on the measurements and it is in many respects related to pole-placement controllers. Hence it is reasonably accessible theoretically and we introduce it at this point. We concentrate on discrete time formulations simply because implementation is likely to be on digital computers, but analogous results hold for continuous-time observers.
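The discrete-time observer recursion has the familiar predictor-corrector form xhat[k+1] = A·xhat[k] + B·u[k] + L·(y[k] - C·xhat[k]); the plant matrices and observer gain below are an illustrative example (L is chosen here so the estimation error dies in two steps), not a design from the book:

```python
# Minimal sketch of a discrete-time Luenberger observer.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [1.0]])   # places both observer poles at z = 0

x = np.array([[1.0], [0.5]])   # true state (unknown to the observer)
xhat = np.zeros((2, 1))        # observer starts with no information
u = np.array([[0.0]])

for _ in range(2):
    y = C @ x                                    # measurement
    xhat = A @ xhat + B @ u + L @ (y - C @ xhat) # observer update
    x = A @ x + B @ u                            # plant update
# xhat now matches x exactly (up to roundoff), since (A - L C) is nilpotent
```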

L. C. Westphal

26. Optimal control by multiplier-type methods

Optimal control theory is concerned with the mathematics of finding a function or set of parameters which cause a given functional (function of a function) to take on an extremal value — minimum or maximum — subject to constraints. Mathematically, the continuous time problem is often of the following form.
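The formula itself is not reproduced in this excerpt; a standard statement of the continuous-time problem (the Bolza form) is:

$$
\min_{u(\cdot)} \; J = \phi\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} L\bigl(x(t), u(t), t\bigr)\,dt
\quad \text{subject to} \quad \dot{x} = f(x, u, t), \qquad x(t_0) = x_0 ,
$$

where $x$ is the state, $u$ the control, $L$ the running cost, and $\phi$ the terminal cost.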

L. C. Westphal

27. Other optimal control methods

Chapter 26 discussed Lagrange multiplier methods of converting constrained optimization problems to unconstrained problems, with the particular application being control system problems in which the state is to traverse an optimum path subject to the constraint that the equations of motion (usually the laws of physics) are not to be violated. There are other methods, and we look briefly at some of them in this section.

L. C. Westphal

28. State estimation in noise

There are a number of signal processing approaches called filtering:

1. signal frequency content shapers, including notch filters for removing power line noise and band-pass filters for extracting desirable signals such as AM radio broadcasts from the environment;
2. detection filters, such as matched filters for indicating the presence or absence of certain signal types such as radar pulses and discrete symbols in communication networks; and
3. state estimation filters for inferring estimates of signal states from available measurements and noise.

L. C. Westphal

29. State feedback using state estimates

Modern optimal control laws, whether state feedback or open-loop feedback in nature, almost invariably have the control commands based ultimately upon the state variables. In addition, state-space design methods such as pole placement are most easily done using full state feedback. The full state is rarely measured, however, so it is necessary for the engineer to modify the design. A very useful approach to doing this modification is to use state estimates in place of the state variables, with the estimates coming from an observer or Kalman filter. In this chapter we consider some implications of this approach.

L. C. Westphal

30. System identification

In previous chapters it was assumed that a model of the process exists and that the designer’s task is to use that model in the creation of a feedback controlled system. In fact, although a model structure may exist, it is frequently the case that its parameters are unknown, or at least are not known to the required precision. Hence a motor model may be derivable from basic electromagnetics and physics, but the moment of inertia of a particular motor may be only approximately known; a missile’s mass may decrease linearly with burn time of the rocket motor, but the rate and initial mass may be guaranteed to only a few per cent. This may or may not be a problem — after all, the reason for using feedback control in the first place is to reduce errors due to such factors as imprecisely known parameters — but accurate estimation of the parameters is sometimes important.
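The workhorse technique for such parameter estimation is least squares; the sketch below (an illustrative first-order example, not from the book) recovers the unknown coefficients of y[k] = a·y[k-1] + b·u[k-1] from recorded input-output data:

```python
# Minimal sketch of least-squares system identification for a
# first-order model with noise-free simulated data.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
N = 200
u = rng.standard_normal(N)      # excitation input
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

# Regression: each row of the regressor matrix is [y[k-1], u[k-1]]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta            # recover a_true, b_true exactly here
```

With measurement noise the estimates would only approximate the true values, which motivates the statistical treatment the chapter gives.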

L. C. Westphal

31. Adaptive and self-tuning control

Feedback control was introduced for the purpose of compensating for variability in components and raw materials, so that the quality of product output could be improved. A properly designed feedback control system is expected to be rather insensitive to plant errors, component drift, and input disturbances and variability. Furthermore, the system should be stable, robust, and cost effective. Traditional techniques have usually been satisfactory for these purposes.

L. C. Westphal

32. Learning control

Adaptive control and system identification of the standard types usually reduce to parameter estimation. Since the control law is ultimately a mapping from the measurement history to commands to the plant, it is intriguing to consider attempting to establish an appropriate such mapping without using the specialized structure that the standard methods use. Several techniques have been suggested which, to a greater or lesser extent, self-learn or are taught the proper control. Among these are: artificial neural networks, functional learning, expert systems, and fuzzy systems.

L. C. Westphal

33. Robust control

Feedback control is necessary to compensate for unpredictable disturbances to a plant or process and for inaccuracies in predicting plant response because of errors or approximations in plant models. There have always been several techniques for designing compensators: feedforward control based upon measurements of disturbances and placing margins in the design specifications are among the basic ones, while adaptive and self-tuning controllers are among the more complicated ways of improving plant models. An alternative, philosophically related to gain and phase margins, is the rapidly developing field of robust control theory, which entails mathematical design of control laws to meet defined uncertainty levels. In this chapter we briefly and superficially introduce this subject.

L. C. Westphal

34. Structures of multivariable controllers

Multi-input, multi-output (MIMO) control is in principle handled using the theory presented in earlier chapters. The classical theory has been extended to systems described by plant models G(s) where G is a matrix, and much state-space theory, such as optimal control and pole-placement feedback, is intrinsically multivariable in capability. The result is that, in principle, control can be done as in Fig. 34.1(a).

L. C. Westphal

Backmatter

Further information