
About this book

This book is a revision and extension of my 1995 Sourcebook of Control Systems Engineering. Because of the extensions and other modifications, it has been retitled Handbook of Control Systems Engineering, which it is intended to be for its prime audience: advanced undergraduate students, beginning graduate students, and practising engineers needing an understandable review of the field or recent developments which may prove useful. There are several differences between this edition and the first.

• Two new chapters on aspects of nonlinear systems have been incorporated. In the first of these, selected material for nonlinear systems is concentrated on four aspects: showing the value of certain linear controllers, arguing the suitability of algebraic linearization, reviewing the semi-classical methods of harmonic balance, and introducing the nonlinear change of variable technique known as feedback linearization. In the second chapter, the topic of variable structure control, often with sliding mode, is introduced.
• Another new chapter introduces discrete event systems, including several approaches to their analysis.
• The chapters on robust control and intelligent control have been extensively revised.
• Modest revisions and extensions have also been made to other chapters, often to incorporate extensions to nonlinear systems.

Table of contents

Frontmatter

Chapter 1. Introduction and overview

The feedback control systems specialist has a multifaceted job involving several different types of tasks; an alternative point of view is that control systems encompasses several different sub-specialties in which an engineer might choose to emphasize only one or two. The aspects of the systems to be understood include:

1. the process being controlled, whether it be a beer brewing plant, a refinery, an aircraft, or an artificially paced human heart;
2. the hardware, including instruments for sensing, wiring carrying data, computers processing data, and motors implementing commands; and
3. the algorithms used and the computer coding which implements them.

Louis C. Westphal

Chapter 2. Elements of systems engineering of digital control

Control systems are the subsystems of plants that generate and send the commands to the plants’ ‘working’ components. Hence, they are the elements that turn motors on and off, regulate inputs, record (or log) data on the processes, send messages to operators, etc. The level of sophistication is decided at the systems engineering stage, with the goal of using control components and techniques appropriate to the task — neither using a supercomputer to turn a heater on and off nor a commercial personal microcomputer for a sophisticated satellite antenna pointing system. The various decisions involved are aspects of systems engineering, and require decisions at a number of levels. This chapter explores only two levels: system structuring and component selection.

Louis C. Westphal

Chapter 3. Sensors and instrumentation

In this book we choose to work from the process back to the controller on a system such as is indicated abstractly in Figure 3.1 and more specifically in Figure 3.2. The immediate connections to the process are the sensors that measure the state of the process and the actuators that influence the control elements to adjust the process.

Louis C. Westphal

Chapter 4. Control elements, actuators, and displays

The reverse operation of computer data gathering is information output, particularly data display and control commands. We look in this section at the interface aspects of these — the transduction of the computer words to signals appropriate for operating valves, moving dials, running motors, etc. In this context, a control element is a device such as a valve, an actuator is a motor or solenoid that opens and closes the valve, and a display is an indicator on the operator’s console.

Louis C. Westphal

Chapter 5. Computer systems hardware

The connecting hardware element between sensors and actuators is the computer system, with connections being performed using various communications strategies (Chapter 7). In this chapter some of the essential aspects of computers in a real-time environment are introduced.

Louis C. Westphal

Chapter 6. Computer software

The computer software encodes the control algorithms and the logical commands for the interfacing of the I/O and for the emergency and other routines that the system is expected to perform.

Louis C. Westphal

Chapter 7. Communications

In this chapter we consider the connection of computer control system components at three levels: simple wiring, instruments to computers using digital signalling, and computer networking.

Louis C. Westphal

Chapter 8. Control laws without theory

Many control systems can be and have been set up with almost no reliance upon the mathematical theory of control. Many more have been based upon a notion of how the mathematics might turn out but with no explicit reliance upon that tool. This is true even leaving aside the PLCs devoted to simply turning on and off the various machines involved.

Louis C. Westphal

Chapter 9. Sources of system models

In previous chapters we have virtually ignored the system being controlled. Knowing a temperature was to be controlled, it was implicit that a temperature sensor was to be used to measure the output and a heater of some kind to cause the output to vary, but the mechanism of heat transfer was ignored. It was even suggested that the loop could be tuned by formulae that barely recognized the nature of the controlled process. None of this is completely true, of course, as a good engineer will have a notion of how the system works and how it reacts to input, and he will be influenced by this in many aspects of engineering the system. So, while the previous work did not use mathematical models of the controlled plant, these will be pervasive in what follows. In fact, the use of mathematical models is fundamental to use of control theories.

Louis C. Westphal

Chapter 10. Continuous-time system representations

The mathematical model of a plant or process usually comprises a set of differential equations to describe its operations. For convenience, they are frequently converted to other representations, and this chapter considers the linear time-invariant systems to which these apply.

Louis C. Westphal

Chapter 11. Sampled-data system representations

This section parallels Chapter 10 for discrete-time systems. Because difference equations are undoubtedly less familiar to students than differential equation methods, we also take a brief look at characteristics of difference equation time responses.

Louis C. Westphal

Chapter 12. Conversions of continuous time to discrete time models

To a computer, a plant looks like a discrete time system even though usually it is well defined for continuous time. In addition, the computer issues its commands at discrete times even if the original control law design was done based on differential equations. For these reasons, it is necessary to be able to convert continuous time representations to equivalent discrete time representations.

Louis C. Westphal
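The conversion the abstract describes can be sketched numerically. The following is a minimal illustration, not taken from the chapter: the zero-order-hold (ZOH) discrete equivalent of a continuous-time state-space model x' = Ax + Bu is computed with a matrix exponential, and the first-order lag used to exercise it is an invented example.

```python
import numpy as np
from scipy.linalg import expm

def c2d_zoh(A, B, T):
    """Zero-order-hold discretization of x' = Ax + Bu with sample period T.

    Uses the standard block-matrix trick: expm(T * [[A, B], [0, 0]])
    holds Ad in its upper-left block and Bd in its upper-right block.
    """
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]

# Example: first-order lag x' = -x + u, sampled with T = 0.5.
# The exact ZOH equivalent is Ad = e^(-T), Bd = 1 - e^(-T).
Ad, Bd = c2d_zoh(np.array([[-1.0]]), np.array([[1.0]]), 0.5)
print(Ad[0, 0], Bd[0, 0])
```

The closed-form answer for the scalar case makes the sketch easy to check against hand calculation.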

Chapter 13. System performance indicators

Closed loop systems are expected to give ‘good’ performance. In this section we introduce some of the properties of the system which are used to measure performance.

Louis C. Westphal

Chapter 14. BIBO stability and simple tests

BIBO stability of constant coefficient linear systems, whether described by differential or difference equations, is determined by the pole locations of the closed loop systems. These poles are, by definition, the roots of the denominator polynomial in transfer function representations and of the characteristic equation of the A matrix in state-space representations. These poles must lie in the left-half plane for continuous time systems and within the unit circle for discrete time systems. The straightforward way of checking this is to compute the poles. An alternative that is easy and can lead to other insights is to process the coefficients of the denominator polynomial of the transfer function, which is the same as the characteristic polynomial of the state-space dynamics matrix. This chapter demonstrates those tests and shows how they may be used in three different ways.

Louis C. Westphal
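The "straightforward way" of the abstract, computing the poles directly, can be sketched in a few lines; the polynomials below are invented examples, and numpy's root finder stands in for the coefficient tests (Routh-Hurwitz and the like) that the chapter develops.

```python
import numpy as np

def stable_continuous(den):
    """Continuous-time BIBO stability: every root of the denominator
    polynomial must have strictly negative real part."""
    return all(r.real < 0 for r in np.roots(den))

def stable_discrete(den):
    """Discrete-time BIBO stability: every root must lie strictly
    inside the unit circle."""
    return all(abs(r) < 1 for r in np.roots(den))

# s^2 + 3s + 2 has roots -1 and -2: stable in continuous time.
print(stable_continuous([1, 3, 2]))      # True
# z^2 - 1.5z + 0.9 has a complex pair of magnitude sqrt(0.9) < 1: stable.
print(stable_discrete([1, -1.5, 0.9]))   # True
# s^2 - s + 1 has roots with positive real part: unstable.
print(stable_continuous([1, -1, 1]))     # False
```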

Chapter 15. Nyquist stability theory

One of the now-classical methods of testing closed-loop stability as a function of plant model and of control loop structure is based upon complex variable theory, with the particular application due to Nyquist. The idea is subtle for beginners but is basically straightforward. It has been applied to basic stability testing, to certain non-linear systems, to multivariable systems, and to developing the concepts of relative stability.

Louis C. Westphal

Chapter 16. Lyapunov stability testing

The second or direct method of Lyapunov is entirely different from pole analysis in philosophy, nature, and detail, although there are a few overlaps for linear time-invariant systems.

Louis C. Westphal
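One of the overlaps for linear time-invariant systems mentioned in the abstract can be illustrated numerically: if A is a stability matrix, the Lyapunov equation AᵀP + PA = -Q has a positive definite solution P, and V(x) = xᵀPx is then a Lyapunov function. The system matrix below is an invented example, and scipy's Lyapunov solver is used for brevity.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# An invented stable dynamics matrix (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# scipy solves a @ x + x @ a.conj().T = q, so passing A.T and -Q
# yields the P satisfying A.T @ P + P @ A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# P positive definite (all eigenvalues > 0) certifies asymptotic
# stability, with V(x) = x.T @ P @ x decreasing along trajectories.
print(np.all(np.linalg.eigvalsh(P) > 0))  # True
```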

Chapter 17. Steady state response: error constants and system type

The interest of the engineer is in two aspects of the system response: steady state response and transient response. The former is the output as t → ∞, while the latter is the portion of the output that dies away after a short time. Provided that the system is asymptotically stable, the transients will indeed die out. This chapter is concerned with aspects of the steady state error.

Louis C. Westphal

Chapter 18. Root locus methods for analysis and design

Root locus methods allow study of the changes in the roots of a polynomial as a parameter varies; their implicit but important message is that these roots vary in a regular manner as a parameter varies. Root locus methods are applied to the denominator polynomials of closed loop transfer functions and hence indicate the movement of system poles as a parameter (typically a control compensator parameter such as a gain) varies. The techniques are independent of whether the system is discrete time or continuous time, but the criteria for good and bad poles depend upon that fact (Chapter 19).

Louis C. Westphal
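The regular movement of the poles can be seen by brute force: for unity feedback around an open loop K·N(s)/D(s), the closed-loop poles are the roots of D(s) + K·N(s), which can be swept over gains. The plant below is an invented example, not one from the chapter.

```python
import numpy as np

# Invented open loop G(s) = K / (s (s + 2)): N(s) = 1, D(s) = s^2 + 2s.
N = np.array([1.0])
D = np.array([1.0, 2.0, 0.0])

# Closed-loop characteristic polynomial D(s) + K N(s) for a gain sweep.
for K in [0.25, 1.0, 4.0]:
    char = D.copy()
    char[-1] += K * N[0]          # N is a constant, so K adds to the constant term
    poles = np.sort_complex(np.roots(char))
    print(K, poles)
# The two real poles move toward each other, meet at s = -1 when K = 1,
# then split into a complex pair whose real part stays at -1.
```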

Chapter 19. Desirable pole locations

It is always required that system characteristic values, also known as transfer function poles and dynamics matrix eigenvalues, be stable, meaning that they should be in the left half plane for continuous time systems and within the unit circle for sampled data systems. It is also true, however, that some pole values may yield more desirable system responses than other values. We explore this issue in this chapter.

Louis C. Westphal

Chapter 20. Bode diagrams for frequency domain analysis and design

The classical methods known as frequency domain techniques have their origin with electrical engineers, who rely extensively on representations of signals as sums of sinusoids in their modelling and analysis. It has seemed natural for them to carry such ideas with them into control systems analysis and synthesis, often with considerable success. Less successful has been the direct use of the methods for sampled data control systems (although indirect use characterized by conversion of continuous system designs is possible). This has been for several reasons: the approximations that allow sketching do not always apply, the compensators of most common use are not so relevant for digital control systems, and well established relationships between step responses, pole locations, and frequency responses seem to hold only roughly for sampled data systems. Nevertheless having at least some knowledge of frequency domain methods is fundamental, and for that reason we review them here.

Louis C. Westphal

Chapter 21. A special control law: deadbeat control

Most of the methods we have considered so far have their origins in linear control laws for systems described by linear constant coefficient differential equations, with sampled data control studied by adapting those theories. Even in the nominally linear control law realm, however, it is possible to develop a special and interesting performance property with discrete time control. In particular, it is possible in principle to achieve a zero error to an input after finite time with a linear control law; this contrasts with continuous time control, which can only asymptotically provide zero error, and follows from the fact that computer control commands are piecewise constant in nature. A response that quickly reaches zero error at the sampling instants and has little ripple between samples is called a deadbeat response. In this chapter we develop both transfer function oriented and state-space oriented approaches to design of the control laws, called deadbeat controllers, which yield such a response.

Louis C. Westphal
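A minimal state-space illustration of the finite-time property (with an invented discrete double-integrator plant, not an example from the chapter): placing both closed-loop eigenvalues at z = 0 makes the closed-loop dynamics matrix nilpotent, so the state reaches exactly zero in at most n = 2 steps.

```python
import numpy as np

# Invented discrete-time double integrator: x[k+1] = A x[k] + B u[k].
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])

# State feedback u = -K x, with K chosen so A - B K has characteristic
# polynomial z^2 (both eigenvalues at the origin); for this plant
# K = [1, 2] achieves that.
K = np.array([[1.0, 2.0]])
Acl = A - B @ K

x = np.array([[1.0], [0.0]])   # arbitrary initial state
for k in range(3):
    x = Acl @ x
    print(k + 1, x.ravel())
# The state is exactly zero from step 2 onward: a deadbeat response.
```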

Chapter 22. Controllability

Two properties of the linear state space system descriptions that are often needed in proofs about existence of certain types of controllers and state estimators are controllability and observability. Although these have important and easily visualized interpretations in terms of the natural language definitions of the words, it should be recognized that they are ultimately technical terms, used as shorthand to summarize properties of the system that allow certain types of controllers and state estimators to be designed. In particular, a system that is not controllable is not ‘uncontrollable’ in the natural language sense, nor is a not observable system ‘unobservable’.

Louis C. Westphal

Chapter 23. Controller design by pole placement

In classical methods of control law design, the designer introduces a structure for the controller and then several parameters within that structure are chosen to yield a response that meets specifications. Design work is usually done with the transfer function, either in a complex plane (as in root locus) or in the frequency domain (Bode-Nyquist-Nichols). The mathematics of pole placement design are as useful and interesting as many of the results, and therefore this chapter considers many variations on the basic problem: the essential result is that under certain conditions the poles of the closed-loop system may be placed at arbitrary locations of the designer’s choice.

Louis C. Westphal
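One concrete route to the essential result, sketched here rather than derived: for single-input systems, Ackermann's formula produces the state-feedback gain directly from the desired characteristic polynomial. The double-integrator plant and pole choices below are invented for illustration.

```python
import numpy as np

def acker(A, B, desired_poles):
    """Ackermann's formula for single-input pole placement:
    K = [0 ... 0 1] @ inv([B, AB, ..., A^(n-1)B]) @ phi(A),
    where phi is the desired characteristic polynomial."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    coeffs = np.poly(desired_poles)            # leading coefficient first
    phiA = sum(c * np.linalg.matrix_power(A, n - i)
               for i, c in enumerate(coeffs))
    e = np.zeros((1, n))
    e[0, -1] = 1.0
    return e @ np.linalg.inv(C) @ phiA

# Invented double integrator; place the closed-loop poles at -1 +/- 1j.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = acker(A, B, [-1 + 1j, -1 - 1j])
print(np.sort(np.linalg.eigvals(A - B @ K)))   # the placed poles, -1 -/+ 1j
```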

Chapter 24. Observability

Like controllability, observability has important and easily visualized interpretations in terms of the natural language definitions of the word, but it should be recognized as ultimately a technical term. In particular, a system that is not an observable system is not ‘unobservable’. In this section we look at some of the tests and concepts for observability. The reader may note the striking similarity to the Chapter 22 section on controllability; this is not quite accidental, as the two concepts are mathematical duals of each other. To avoid near repetition, some results for continuous time systems are emphasized along with standard discrete time system results.

Louis C. Westphal

Chapter 25. State observers

State estimators are algorithms for deriving state estimates from measurements. Based upon proper manipulation of system models, they allow, for example, the estimation of state variables such as accelerations from position measurements. Applications can be found in instrumentation, flight reconstruction, and in providing inputs to state feedback control laws. The most famous state estimators are the Luenberger observer and the Kalman filter. The basic form of the former does not explicitly consider the possibility of noise on the measurements and it is in many respects related to pole-placement controllers. Hence it is reasonably accessible theoretically and we introduce it at this point. We concentrate on discrete time formulations simply because implementation is likely to be on digital computers, but analogous results hold for continuous-time observers. Nonlinear observers are briefly mentioned.

Louis C. Westphal

Chapter 26. Optimal control by multiplier-type methods

Optimal control theory is concerned with the mathematics of finding a function or set of parameters which cause a given functional (function of a function) to take on an extremal value — minimum or maximum — subject to constraints. Mathematically, the continuous time problem is often of the following form.

Louis C. Westphal
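One member of this problem family admits a compact numerical sketch: the discrete-time linear-quadratic regulator, whose solution (obtainable by multiplier or dynamic-programming arguments) is a backward Riccati recursion. The scalar plant and weights below are invented; this is an illustration of the recursion, not the chapter's derivation.

```python
# Invented scalar plant x[k+1] = a x[k] + b u[k],
# with cost sum over k of (q x[k]^2 + r u[k]^2).
a, b, q, r = 1.1, 1.0, 1.0, 1.0

# Backward Riccati recursion for the finite-horizon LQ problem;
# iterated long enough, gain and cost-to-go approach steady state.
P = q                                       # terminal cost weight
for _ in range(200):
    K = a * b * P / (r + b * b * P)         # optimal gain at this stage
    P = q + a * a * P - a * b * P * K       # cost-to-go update

# The open loop (a = 1.1) is unstable; the optimal feedback u = -K x
# stabilizes it, since |a - b K| < 1 at convergence.
print(K, a - b * K)
```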

Chapter 27. Other optimal control methods

Chapter 26 discussed Lagrange multiplier methods of converting constrained optimization problems to unconstrained problems, with the particular application being control system problems in which the state is to traverse an optimum path subject to the constraint that the equations of motion (usually the laws of physics) not be violated. There are other methods, and we look briefly at some of them in this section.

Louis C. Westphal

Chapter 28. State estimation in noise

There are a number of signal processing approaches called filtering:

1. signal frequency content shapers, including notch filters for removing power line noise and band-pass filters for extracting desirable signals such as AM radio broadcasts from the environment;
2. detection filters such as matched filters for indicating the presence or absence of certain signal types, such as radar pulses and discrete symbols in communication networks; and
3. state estimation filters for inferring estimates of signal states from available measurements and noise.

Louis C. Westphal
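A minimal sketch of the third category, using an invented scalar random-walk model rather than an example from the chapter: the Kalman filter alternates a model-based prediction, which grows the uncertainty, with a measurement update weighted by a gain computed from that uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented model: random walk x[k+1] = x[k] + w, measured as y = x + v.
Q, R = 0.01, 1.0                 # process and measurement noise variances
x_true, x_hat, P = 0.0, 0.0, 1.0

for k in range(200):
    # simulate the true state and a noisy measurement of it
    x_true += rng.normal(0.0, np.sqrt(Q))
    y = x_true + rng.normal(0.0, np.sqrt(R))

    # predict: the model propagates the estimate; uncertainty grows
    P += Q
    # update: blend prediction and measurement via the Kalman gain
    K = P / (P + R)
    x_hat += K * (y - x_hat)
    P *= (1.0 - K)

print(abs(x_hat - x_true), P)
```

The error variance P settles well below the measurement variance R, which is the point of filtering rather than trusting each noisy measurement alone.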

Chapter 29. State feedback using state estimates

Modern optimal control laws, whether state feedback or open-loop feedback in nature, almost invariably have the control commands based ultimately upon the state variables. In addition, state-space design methods such as pole placement are most easily done using full state feedback. The full state is rarely measured, however, so it is necessary for the engineer to modify the design. A very useful approach to doing this modification is to use state estimates in place of the state variables, with the estimates coming from an observer or Kalman filter. In this chapter we consider some implications of this approach.

Louis C. Westphal

Chapter 30. System identification

In previous chapters it was assumed that a model of the process exists and that the designer’s task is to use that model in the creation of a feedback controlled system. In fact, although a model structure may exist, it is frequently the case that its parameters are unknown, or at least are not known to the required precision. Hence a motor model may be derivable from basic electromagnetics and physics, but the moment of inertia of a particular motor may only be approximately known; a missile’s mass may decrease linearly with burn time of the rocket motor, but the rate and initial mass may only be guaranteed to a few per cent. This may or may not be a problem — after all, the reason for using feedback control in the first place is to reduce errors due to such factors as imprecisely known parameters — but accurate estimation of the parameters is sometimes important.

Louis C. Westphal
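The commonest version of this task can be sketched with an invented first-order plant: record input-output data, stack the regressors, and estimate the unknown parameters of y[k] = a·y[k-1] + b·u[k-1] by least squares. The "true" values here exist only to generate the data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented "true" plant y[k] = a y[k-1] + b u[k-1] + small noise.
a_true, b_true = 0.8, 0.5
u = rng.normal(size=200)
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 1e-3 * rng.normal()

# Stack regressors [y[k-1], u[k-1]] and solve the least-squares problem
# min over theta of || Phi theta - y ||.
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)   # estimates of (a, b), close to (0.8, 0.5)
```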

Chapter 31. Adaptive and self-tuning control

Feedback control was introduced for the purpose of compensating for variability in components and raw materials, so that the quality of product output could be improved. A properly designed feedback control system is expected to be rather insensitive to plant errors, component drift, and input disturbances and variability. Furthermore, the system should be stable, robust, and cost effective. Traditional techniques have usually been satisfactory for these purposes.

Louis C. Westphal

Chapter 32. Structures of multivariable controllers

Multi-input multi-output (MIMO) control is in principle handled using the theory presented in earlier chapters. The classical theory has been extended to systems described by plant models G(s) where G is a matrix, and much state-space theory, such as optimal control and pole-placement feedback, is intrinsically multivariable in capability. The result is that in principle control can be done as in Figure 32.1(a).

Louis C. Westphal

Chapter 33. Linearization methods for nonlinear systems

Control theory for nonlinear systems is a set of special case results, for the simple reason that, unlike linear systems, the nonlinear differential equations used to describe them cannot generally be solved. At best the systems may have exploitable special structures, as robot dynamics do. Some common nonlinear system designs use linear controllers for nonlinear systems, use nonlinear controllers for linear systems, or exploit special structures of the system dynamics to allow coordinate transformations into systems that are well understood and have standard controllers. We consider these in the present chapter.

Louis C. Westphal

Chapter 34. Variable structures and sliding mode control

Many of the topics we have seen so far are analysis tools, with any synthesis being through repeated analysis. Somewhat different in that it is more directly aimed at getting a control law is the approach called variable structure control, which has a special case called sliding mode control.

Louis C. Westphal

Chapter 35. Intelligent control

Adaptive control and system identification of the standard types usually reduce to parameter estimation. Since the control law is ultimately a mapping from the measurement history to commands to the plant, it is intriguing to consider attempting to establish an appropriate such mapping without using the specialized structure that the standard methods use. Several techniques have been suggested which are motivated by biological notions and which, to a greater or lesser extent, result in self-learned or taught control algorithms. Among these are:

• artificial neural networks, based on a simple brain-like structure;
• evolutionary computation, motivated by models of biological evolution;
• expert systems, which attempt to incorporate the operational rules used by knowledgeable controllers; and
• fuzzy systems, which try to represent inexact knowledge in a computer-compatible form.

Louis C. Westphal

Chapter 36. Robust control

Feedback control is used to compensate for unpredictable disturbances to a plant or process and for inaccuracies in predicting plant response because of errors or approximations in plant models. There have always been several techniques for designing compensators: feedforward control based upon measurements of disturbances and placing margins in the design specifications of feedback controllers are among the basic ones, while adaptive and self-tuning controllers and sliding mode control are among the more complicated ways of improving plant models. Robustness for many of these was built in through the allowance of safety margins, particularly gain and phase margins. An alternative, philosophically related to gain and phase margins, is the still-developing field of robust control theory, which entails mathematical design of control laws to meet defined uncertainty levels. In this chapter we briefly and superficially introduce this subject.

Louis C. Westphal

Chapter 37. Discrete event control systems

Many systems are characterized by the occurrence of events rather than by continuous motion. Among these are elevators, for which the stopping at floors and the opening and closing of doors are of prime interest; computer systems, in which the arrival and departure of data blocks are important; and process control systems, for which the starting, stopping, breakdowns, and repairs are of supervisory interest. Although each of these has its internal dynamics in continuous time (witness the motion of the elevator), it is the events of stopping, etc., that are of interest.

Louis C. Westphal

Backmatter

Further information