
2013 | Book

Robust and Adaptive Control

With Aerospace Applications

Authors: Eugene Lavretsky, Kevin A. Wise

Publisher: Springer London

Book Series: Advanced Textbooks in Control and Signal Processing


About this Book

Robust and Adaptive Control shows the reader how to produce consistent and accurate controllers that operate in the presence of uncertainties and unforeseen events. Driven by aerospace applications, the focus of the book is primarily on continuous dynamical systems.

The text is a three-part treatment, beginning with robust and optimal linear control methods and moving on to a self-contained presentation of the design and analysis of model reference adaptive control (MRAC) for nonlinear uncertain dynamical systems. Recent extensions and modifications to MRAC design are included, as are guidelines for combining robust optimal and MRAC controllers. Features of the text include:

· case studies that demonstrate the benefits of robust and adaptive control for piloted, autonomous and experimental aerial platforms;

· detailed background material for each chapter to motivate theoretical developments;

· realistic examples and simulation data illustrating key features of the methods described; and

· problem solutions for instructors and MATLAB® code provided electronically.

The theoretical content and practical applications reported address real-life aerospace problems, being based on numerous transitions of control-theoretic results into operational systems and airborne vehicles that are drawn from the authors’ extensive professional experience with The Boeing Company. The systems covered are challenging, often open-loop unstable, with uncertainties in their dynamics, and thus requiring both persistently reliable control and the ability to track commands either from a pilot or a guidance computer.

Readers are assumed to have a basic understanding of root locus, Bode diagrams, and Nyquist plots, as well as linear algebra, ordinary differential equations, and the use of state-space methods in analysis and modeling of dynamical systems.

Robust and Adaptive Control is intended to methodically teach senior undergraduate and graduate students how to construct stable and predictable control algorithms for realistic industrial applications. Practicing engineers and academic researchers will also find the book of great instructional value.

Table of Contents

Frontmatter
Erratum
Eugene Lavretsky, Kevin A. Wise

Robust Control

Frontmatter
Chapter 1. Introduction
Abstract
In this chapter, we introduce formal methods and practical tools from dynamics and control. Our intent is to design, analyze, and evaluate robust and adaptive control algorithms for continuous dynamical systems. We begin with the presentation of optimal control algorithms to regulate linear time-invariant systems while enforcing adequate stability margins, thus providing the necessary robustness properties for the overall design. We then introduce output feedback architectures to design optimal controllers for linear systems whose states are not available as measurements. We also discuss numerically efficient analysis methods to evaluate the robustness of the synthesized linear controllers. All of these topics are covered in Part I of this book, where many examples are given to help the reader isolate and focus on the key points and features of the presented methodologies. Part II covers adaptive methods to control both linear and nonlinear systems with uncertainties in their dynamics. The material is self-contained. It includes a brief introduction to Lyapunov stability, followed by an exposition of model reference adaptive controllers, with their design and analysis methods illustrated on a series of practical examples from aerospace applications. After we introduce what is now known as "classical adaptive control," we focus our attention on extensions and design modifications that improve transient performance and utilize output measurements (as opposed to state feedback connections). Also in this chapter, we offer design methods to combine robust and adaptive controllers into a unified control architecture that is resilient to uncertainties. Throughout the book, we always strive to motivate and rationalize problem formulations, their solutions, and the practicality of the methods.
Eugene Lavretsky, Kevin A. Wise
Chapter 2. Optimal Control and the Linear Quadratic Regulator
Abstract
In this chapter, we introduce optimal control theory and the linear quadratic regulator. In the introduction, we briefly discuss and compare classical control, modern control, and optimal control, and why optimal control designs have emerged as a popular design method of control in aerospace problems. We then begin by introducing optimal control problems and the resulting Hamilton–Jacobi–Bellman partial differential equation. Then, for linear systems with a quadratic performance index, we develop the linear quadratic regulator. We will cover both finite-time and infinite-time problems and will explore some very important stability and robustness properties of these systems. Central to the design of optimal control laws is the selection of the penalty matrices in the performance index. In the last section, we discuss some asymptotic properties with regard to the penalty matrices that will set the stage for detailed design of these controllers in later chapters.
Eugene Lavretsky, Kevin A. Wise
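The LQR synthesis outlined in this abstract can be sketched numerically. The following is a minimal illustration of our own (a hypothetical double-integrator plant, not one of the book's examples) using SciPy's Riccati solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant: x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Penalty matrices in the quadratic performance index J = int(x'Qx + u'Ru) dt
Q = np.diag([1.0, 1.0])
R = np.array([[1.0]])

# Solve the algebraic Riccati equation A'P + PA - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal state feedback gain: u = -Kx, with K = R^-1 B' P
K = np.linalg.solve(R, B.T @ P)

# The LQR closed loop A - BK is guaranteed Hurwitz (all poles in the LHP)
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))
```

Tuning Q and R trades off state regulation against control effort, which is exactly the penalty-matrix selection discussed in the last section of the chapter.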
Chapter 3. Command Tracking and the Robust Servomechanism
Abstract
In this chapter, we discuss requirements and control system architectures that provide command tracking. This is a very important attribute in aerospace, automotive, and other industrial control problems. We begin by reviewing classical control terminology on system types and then extend this to the servomechanism problem. We then use the servomechanism problem formulation within an optimal control setting to design optimal command tracking controllers, optimal in the sense of the numerical weights used in the performance index. We shall spend considerable time discussing examples on how to select these weights using aerospace control example problems.
Eugene Lavretsky, Kevin A. Wise
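A common way to realize the servomechanism idea in an optimal control setting is to augment the plant with an integrator on the tracking error and design LQR for the augmented system. A minimal sketch under our own illustrative plant and weights (not the book's design example):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant: double integrator, regulated output y = x1
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Augment with the integral of the tracking error: eI' = y - r
# Augmented state: [eI, x1, x2]
Aaug = np.block([[np.zeros((1, 1)), C],
                 [np.zeros((2, 1)), A]])
Baug = np.vstack([np.zeros((1, 1)), B])

# LQR on the augmented system; the eI weight sets how aggressively
# the controller drives the steady-state tracking error to zero
Q = np.diag([10.0, 1.0, 1.0])
R = np.array([[1.0]])
P = solve_continuous_are(Aaug, Baug, Q, R)
K = np.linalg.solve(R, Baug.T @ P)

# A Hurwitz closed loop plus integral action gives zero steady-state
# error to constant commands
print(np.all(np.linalg.eigvals(Aaug - Baug @ K).real < 0))
```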
Chapter 4. State Feedback H∞ Optimal Control
Abstract
This chapter presents full-information state feedback H∞ optimal control. This control synthesis method uses state space methods to achieve stability, performance, and robustness, and allows for direct loop shaping in the frequency domain. The chapter begins with an introduction to the various norms used in control system design and analysis, followed by methods of specifying stability and performance requirements in the frequency domain. This leads naturally into loop shaping using frequency-dependent weights. The state feedback control law is then synthesized using an algebraic Riccati equation approach called γ-iteration. The method is applied to a UAV design example. This control synthesis approach teaches design engineers important properties in both the time domain and the frequency domain and, more importantly, how to achieve these properties in a closed-loop design.
Eugene Lavretsky, Kevin A. Wise
Chapter 5. Frequency Domain Analysis
Abstract
This chapter presents frequency domain analysis methods for both single-input single-output and multi-input multi-output control systems. Transfer function matrices, Nyquist theory for multivariable systems, and singular value frequency response methods are discussed in detail. Modeling techniques for robust stability analysis are presented, addressing both complex and real parametric uncertainties. Theory and examples for the structured singular value μ and the real stability margin are given. Flight control systems designed using classical and optimal control theories are analyzed to determine their robust stability.
Eugene Lavretsky, Kevin A. Wise
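The singular value frequency response generalizes the Bode magnitude plot to multivariable systems: at each frequency, the singular values of the transfer function matrix bound the system gain over all input directions. A small sketch with a hypothetical stable 2x2 system of our own:

```python
import numpy as np

# Hypothetical stable 2x2 system; G(jw) = C (jwI - A)^-1 B
A = np.array([[-1.0,  2.0],
              [ 0.0, -3.0]])
B = np.eye(2)
C = np.eye(2)

def singular_values(w):
    """Singular values of the transfer function matrix at frequency w (rad/s)."""
    G = C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B
    return np.linalg.svd(G, compute_uv=False)

freqs = np.logspace(-2, 2, 200)
sigma_max = np.array([singular_values(w)[0] for w in freqs])

# For a strictly proper system, the maximum singular value rolls off
# at high frequency
print(sigma_max[-1] < sigma_max[0])
```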
Chapter 6. Output Feedback Control
Abstract
Output feedback design methods are needed when the states are not available for feedback. There are many output feedback control design approaches. This chapter presents three design methods that have proven to be useful in developing output feedback flight control designs in aerospace applications. The first method is called projective control. This method is used to replicate the eigenstructure of a state feedback controller using static and/or dynamic output feedback. By selecting the dominant eigenvalues and associated eigenvectors from the state feedback design, the projective control retains those performance and robustness properties exhibited by that eigenstructure. For static output feedback, a partial eigenstructure can be retained equal to the number of feedback variables. For dynamic output feedback, a low-order compensator can be built that retains the entire state feedback eigenstructure. The second and third methods are based upon linear quadratic Gaussian with Loop Transfer Recovery (LQG/LTR). Both methods use an optimal state feedback control law implemented with a full-order observer, called a Kalman filter, to estimate the states needed in the control law. These two variants of LQG/LTR have very different asymptotic properties for recovering frequency domain loop properties.
Eugene Lavretsky, Kevin A. Wise
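The Kalman filter gain used in both LQG/LTR variants can itself be computed from a Riccati equation dual to the LQR one. A minimal sketch with a hypothetical plant and assumed noise intensities (illustrative only, not the book's design):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical plant with only y = x1 measured (states unavailable)
A = np.array([[0.0,  1.0],
              [-2.0, -0.5]])
C = np.array([[1.0, 0.0]])

# Assumed process and measurement noise intensities
Qn = np.eye(2)
Rn = np.array([[0.1]])

# Filter Riccati equation: AP + PA' - P C' Rn^-1 C P + Qn = 0
# (the dual of the LQR equation, solved here by transposing A and C)
P = solve_continuous_are(A.T, C.T, Qn, Rn)

# Kalman gain; the observer error dynamics A - LC are Hurwitz
L = P @ C.T @ np.linalg.inv(Rn)
print(np.all(np.linalg.eigvals(A - L @ C).real < 0))
```

The LTR step then adjusts Qn (or Rn) asymptotically to recover the state feedback loop shape at the plant input or output, which is the distinction between the two variants described above.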

Robust Adaptive Control

Frontmatter
Chapter 7. Direct Model Reference Adaptive Control: Motivation and Introduction
Abstract
This chapter presents essential concepts for the now-classical model reference adaptive control. We begin with motivational examples from aerospace applications, followed by basic definitions, and a brief description of control-theoretic tools for the design and analysis of state feedback adaptive controllers that are applicable to an aircraft-like general class of multi-input multi-output systems. Our primary goal here is to motivate, introduce, and outline the material that will be discussed in the remainder of the book.
Eugene Lavretsky, Kevin A. Wise
Chapter 8. Lyapunov Stability of Motion
Abstract
The main intent of this chapter is to introduce the essential mathematical tools for stability analysis of continuous finite-dimensional dynamical systems. We begin with an overview of sufficient conditions that guarantee existence and uniqueness of the system solutions, followed by a collection of Lyapunov-based methods for studying stability of the system equilibria and trajectories. The beginning of what is known today as Lyapunov stability theory can be traced back to the original publication of Alexander Mikhailovich Lyapunov’s doctoral thesis on “the general problem of the stability of motion,” which he defended at the University of Moscow in 1892. Our interest in and emphasis on Lyapunov’s stability methods stem from the fact that these methodologies lay out the much-needed theoretical framework and foundation for the design and analysis of adaptive controllers. In this chapter, we review selected (but not exhaustive) methods due to Lyapunov. This selection is primarily driven by our interest in justifying the design of stable model reference adaptive controllers, with predictable and guaranteed closed-loop performance, for a wide class of nonlinear nonautonomous multi-input multi-output dynamical systems.
Eugene Lavretsky, Kevin A. Wise
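For linear time-invariant systems, Lyapunov's direct method reduces to solving the Lyapunov equation AᵀP + PA = −Q: the matrix A is Hurwitz if and only if, for a positive definite Q, the solution P is positive definite, in which case V(x) = xᵀPx is a Lyapunov function. A quick numerical check with illustrative matrices of our own:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hurwitz example matrix (eigenvalues -1 and -2)
A = np.array([[0.0,  1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
# so pass A.T and -Q to solve A'P + PA = -Q
P = solve_continuous_lyapunov(A.T, -Q)

# V(x) = x'Px is a Lyapunov function: P is symmetric positive definite
print(np.all(np.linalg.eigvalsh(P) > 0))
```

The same quadratic V reappears throughout Part II, where its derivative along the trajectories of the adaptive closed loop is used to derive the parameter update laws.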
Chapter 9. State Feedback Direct Model Reference Adaptive Control
Abstract
This chapter presents basic design concepts and analysis methods in the development of direct model reference adaptive controllers for uncertain systems with continuous dynamics and full state measurements. We begin our discussions with scalar systems and gradually transition to adaptive controllers for multi-input multi-output dynamics with matched parametric uncertainties. In order to gain insights into the intricacy of these nonlinear systems and their expected behavior, we will consider several design examples of increasing complexity.
Eugene Lavretsky, Kevin A. Wise
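A scalar instance of this design can be simulated in a few lines. The sketch below is our own minimal example (not the book's code): plant ẋ = ax + u with a unknown to the controller, reference model ẋm = −xm + r, control u = −k̂x + r, and the Lyapunov-derived adaptive law k̂̇ = γex with tracking error e = x − xm:

```python
import numpy as np

# Plant parameter unknown to the controller; the open loop is unstable
a_true = 1.0
gamma = 2.0          # adaptation rate
dt, T = 1e-3, 10.0
n = int(T / dt)

x, xm, k_hat = 0.0, 0.0, 0.0
r = 1.0              # constant command

for _ in range(n):
    e = x - xm
    u = -k_hat * x + r                  # adaptive control law
    x += dt * (a_true * x + u)          # plant: x' = a x + u
    xm += dt * (-xm + r)                # reference model: xm' = -xm + r
    k_hat += dt * (gamma * e * x)       # adaptive law: k_hat' = gamma e x

# With V = e^2/2 + (k_hat - k_ideal)^2/(2 gamma), V' = -e^2 <= 0,
# so the tracking error converges toward zero
print(abs(x - xm) < 0.1)
```

The ideal gain here is k = a + 1 = 2; the controller never uses a_true directly, yet the state x tracks the reference model output xm.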
Chapter 10. Model Reference Adaptive Control with Integral Feedback Connections
Abstract
In this chapter, we design adaptive command tracking controllers using concepts from linear integral control. Such linear controllers can asymptotically reject constant disturbances and, at the same time, track constant commands with zero steady-state errors. In our attempt to further extend robustness and tracking properties of the linear integral controllers, we will develop and analyze an adaptive augmentation method to combine a baseline linear (Proportional + Integral) state feedback controller with an MRAC system. The adaptive augmentation design approach paves the way to transitioning adaptive controllers into industrial applications where legacy integral controllers are common practice.
Eugene Lavretsky, Kevin A. Wise
Chapter 11. Robust Adaptive Control
Abstract
This chapter is devoted to the design of adaptive controllers for dynamical systems that operate in the presence of parametric uncertainties and bounded noise. Four MRAC design modifications for robustness are discussed: (1) the dead zone, (2) the \( \sigma \)-modification, (3) the \( e \)-modification, and (4) the Projection Operator. We argue that out of the four modifications, the dead zone and the Projection Operator are essential for any MRAC system designed to predictably operate in a realistic environment.
Eugene Lavretsky, Kevin A. Wise
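Of the four modifications, the Projection Operator keeps parameter estimates inside a prescribed bounded set without switching adaptation off. The sketch below shows one common smooth form in our own notation (the book's exact definition may differ):

```python
import numpy as np

def proj(theta, y, theta_max=1.0, eps=0.1):
    """Smooth projection: returns the modified adaptation rate.

    Inside the ball ||theta|| <= theta_max, the rate y passes through
    unchanged; in the boundary layer, the outward component of y is
    scaled down so that ||theta|| can never exceed
    theta_max * sqrt(1 + eps).
    """
    f = (theta @ theta - theta_max**2) / (eps * theta_max**2)
    grad = 2.0 * theta / (eps * theta_max**2)
    if f > 0.0 and y @ grad > 0.0:
        # remove a fraction f of the component of y along grad
        return y - grad * (grad @ y) / (grad @ grad) * f
    return y

# Inside the set: no modification
theta_in = np.array([0.5, 0.0])
y = np.array([1.0, 1.0])
print(np.allclose(proj(theta_in, y), y))

# At the outer boundary (f = 1), the outward push is fully removed
theta_out = np.array([np.sqrt(1.1), 0.0])
mod = proj(theta_out, y)
print(mod @ theta_out <= y @ theta_out)
```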
Chapter 12. Approximation-Based Adaptive Control
Abstract
This chapter is focused on the design and analysis of adaptive controllers for dynamical systems operating in the presence of nonparametric unknown nonlinear functions and bounded time-varying disturbances. In order to counter these types of uncertainties, we will employ direct adaptive model reference controllers equipped with online function approximation architectures, such as artificial neural networks (NNs). We begin with an introductory review of theoretical results related to function approximation by NNs (Sects. 12.1, 12.2, and 12.3). As with any other function representation construct, NN-based approximations are valid only on bounded sets, so a suitable control design must account for the state limiting constraints imposed by the chosen function approximation method. For our proposed adaptive control design (Sect. 12.4), we utilize online-tunable artificial NNs to represent unstructured uncertainties in the system dynamics of interest. In addition, we add a state limiting design modification to keep the system trajectories within the predefined NN-induced state constraints. We end this chapter with a comprehensive, step-by-step design example of an automatic landing system for a medium-size transport aircraft.
Eugene Lavretsky, Kevin A. Wise
Chapter 13. Adaptive Control with Improved Transient Dynamics
Abstract
In this chapter, we revisit the original formulation of the model reference adaptive control (MRAC), examine its transient performance, and then propose a modification to improve the latter. Often in practice, it is the transient response of a tracking controller that determines the feasibility of the selected design. Tracking performance during the first few seconds of operation is often more important than the asymptotic behavior. For linear systems, we can quantify transients by invoking the classical notions, such as damping ratio, natural frequency, overshoot, and undershoot. For MRAC, uniform transient performance characterization is not at all straightforward. To remedy this situation, we offer a design modification to enforce and quantify transients for a class of MRAC-controlled nonlinear dynamical systems with uncertainties. We begin by drawing a parallel between reference models in adaptive control and Luenberger asymptotic state observers (Luenberger DG, IEEE Trans Mil Electron MIL-8:74–80, 1964). Based on this comparison, we reformulate the MRAC reference model structure. Our design change consists of adding an observer-like tracking error feedback/mismatch term to the reference model dynamics. This term is also known as the “innovation process” in Kalman filtering problems. As in observer design, we will show that such a modification allows one to enforce sufficiently fast, and thus improved, tracking error dynamics in MRAC systems. We substantiate our method with Lyapunov-based arguments to show that it is capable of solving servomechanism tracking problems for a selected class of multi-input multi-output dynamical systems with uncertainties.
Eugene Lavretsky, Kevin A. Wise
Chapter 14. Robust and Adaptive Control with Output Feedback
Abstract
In this chapter, we shall introduce an observer-based adaptive output feedback tracking control design for multi-input multi-output controllable and observable dynamical systems with matched uncertainties. The emphasis is on adaptive controllers that operate based on available output feedback signals (measurements), as opposed to state feedback connections. We assume that the number of the system measured outputs (sensors) is no less than the number of the control inputs (actuators). If the number of inputs and outputs is the same, we require that the system have relative degree one. Such an input–output property might be restrictive for a generic class of systems. We are able to alleviate the relative-degree-one restriction by assuming that the system has more outputs than inputs and that the corresponding output-to-input matrix has full rank. It turns out that in this case, the system can be “squared-up” (i.e., augmented) using pseudo-control signals to yield relative-degree-one minimum-phase dynamics. Since the “squaring-up” problem is solvable for any controllable and observable triplet (A, B, C) (Misra P, Numerical algorithms for squaring-up non-square systems, Part II: General case. In: Proceedings of American Control Conference, San Francisco, CA, 1998), our proposed adaptive output feedback design is applicable to systems whose regulated output dynamics may be nonminimum phase or have a high relative degree. At its core, our adaptive output feedback design is based on asymptotic properties of linear quadratic Gaussian regulators with Loop Transfer Recovery (Doyle JC, Stein G, IEEE Trans. Autom. Control 26(1):4–16, 1981). In essence, our method combines robust and adaptive controllers in a unified output feedback framework. The design is formally justified, that is, we formulate sufficient conditions that guarantee closed-loop stability and uniform ultimate boundedness of the corresponding tracking error dynamics.
At the end of this chapter, we will offer a flight control design case study to demonstrate key features and benefits of the method.
Eugene Lavretsky, Kevin A. Wise
Backmatter
Metadata
Title
Robust and Adaptive Control
Authors
Eugene Lavretsky
Kevin A. Wise
Copyright Year
2013
Publisher
Springer London
Electronic ISBN
978-1-4471-4396-3
Print ISBN
978-1-4471-4395-6
DOI
https://doi.org/10.1007/978-1-4471-4396-3
