
1986 | Book

The Stability and Control of Discrete Processes

Author: J. P. LaSalle

Publisher: Springer New York

Book Series: Applied Mathematical Sciences


About this book

Professor J. P. LaSalle died on July 7, 1983 at the age of 67. The present book is being published posthumously with the careful assistance of Kenneth Meyer, one of the students of Professor LaSalle. It is appropriate that the last publication of Professor LaSalle should be on a subject which contains many interesting ideas, is very useful in applications and can be understood at an undergraduate level. In addition to making many significant contributions at the research level to differential equations and control theory, he was an excellent teacher and had the ability to make sophisticated concepts appear to be very elementary. Two examples of this are his books with N. Hasser and J. Sullivan on analysis published by Ginn and Co., 1949 and 1964, and the book with S. Lefschetz on stability by Liapunov's second method published by Academic Press, 1961. Thus, it is very fitting that the present volume could be completed.

Jack K. Hale
Kenneth R. Meyer

Table of Contents

Frontmatter
1. Introduction
Abstract
This book will discuss the stability and controllability of a discrete dynamical system. It is assumed that at any particular time the system can be completely described by a finite dimensional vector \(x \in R^m\) -- the state vector. Here \(R^m\) is the real m-dimensional Euclidean space and
$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix} \in R^m, \qquad \left\| x \right\| = \left( x_1^2 + \cdots + x_m^2 \right)^{1/2}$$
(the Euclidean length of the vector x). The components \(x_1, \ldots, x_m\) might be the temperature, density, pressure, etc. of some physical system.
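As a rough illustration (not from the book), a state vector and its Euclidean length can be written in a few lines of NumPy; the numerical values below are arbitrary placeholders.

```python
import numpy as np

# A hypothetical state vector x in R^3 (e.g. temperature, density, pressure).
x = np.array([300.0, 1.2, 101.3])

# Euclidean length ||x|| = (x_1^2 + ... + x_m^2)^(1/2).
norm_x = np.sqrt(np.sum(x**2))   # same value as np.linalg.norm(x)
print(norm_x)
```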
J. P. LaSalle
2. Liapunov’s Direct Method
Abstract
In this section, we introduce the concept of Liapunov function in order to discuss stability questions for difference equations. Consider the difference equation
$$x' = Tx, \qquad x(0) = x^0$$
(2.1)
where, as before, \(T: R^m \to R^m\) is continuous. Throughout this section, we assume \(\bar x\) is an equilibrium solution of (2.1), so that \(T\left( {\bar x} \right) = \bar x\) and \(x\left( n \right) = \bar x\) is a solution.
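The following sketch, with an illustrative map T chosen only for this example (it is not the book's), shows how solutions of (2.1) are generated by iterating T from the initial state x^0.

```python
import numpy as np

def T(x):
    # An illustrative continuous map on R^2 with equilibrium x̄ = 0
    # (this particular T is an assumption made for the example).
    return np.array([0.5 * x[0] + 0.1 * np.sin(x[1]), 0.4 * x[1]])

x = np.array([1.0, -2.0])   # x(0) = x^0
for n in range(20):
    x = T(x)                # x(n+1) = T(x(n))
print(x)                    # iterates approach the equilibrium x̄ = 0
```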
J. P. LaSalle
3. Linear systems x’ = Ax.
Abstract
$$B = (b_{ij}) = \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1s} \\ b_{21} & b_{22} & \cdots & b_{2s} \\ \vdots & & & \vdots \\ b_{r1} & b_{r2} & \cdots & b_{rs} \end{pmatrix} = \left( b^1 \; b^2 \cdots b^s \right)$$
is an r×s matrix (real or complex), where \(b^j = \begin{pmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{rj} \end{pmatrix}\) is the j-th column vector of B. Thus, for any s-vector \(c = (c_i) = \begin{pmatrix} c_1 \\ \vdots \\ c_s \end{pmatrix}\),
$$Bc = c_1 b^1 + c_2 b^2 + \cdots + c_s b^s.$$
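A short numerical check of this column interpretation of Bc (the particular B and c below are arbitrary examples):

```python
import numpy as np

B = np.array([[1.0, 2.0, 0.0],
              [3.0, -1.0, 4.0]])        # a 2x3 matrix (r = 2, s = 3)
c = np.array([2.0, -1.0, 0.5])          # an s-vector

# Bc as the linear combination c_1 b^1 + c_2 b^2 + c_3 b^3 of the columns of B.
combo = sum(c[j] * B[:, j] for j in range(B.shape[1]))
assert np.allclose(B @ c, combo)
```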
J. P. LaSalle
4. An algorithm for computing A^n
Abstract
The space of all m × m matrices is an \(m^2\)-dimensional linear space. In this section A is any m × m matrix, real or complex. This means there is a smallest integer r such that \(A^r\) is a linear combination of \(I, A, \ldots, A^{r-1}\), and hence there are numbers \(\alpha_{r-1}, \alpha_{r-2}, \ldots, \alpha_0\) such that
$$A^r + \alpha_{r-1} A^{r-1} + \cdots + \alpha_0 I = 0.$$
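One elementary way to find r and the coefficients numerically is to treat I, A, A^2, ... as vectors in the m^2-dimensional space of matrices and test for linear dependence; the sketch below does this with a least-squares solve and is only an illustration, not the book's algorithm.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-0.5, 1.0]])             # an illustrative 2x2 matrix
m = A.shape[0]

# Stack I, A, A^2, ... as vectors in the m^2-dimensional space of matrices and
# find the first power that is a linear combination of the earlier ones.
powers = [np.eye(m).flatten()]
r = 1
while True:
    target = np.linalg.matrix_power(A, r).flatten()
    M = np.column_stack(powers)
    coeffs, residual, *_ = np.linalg.lstsq(M, target, rcond=None)
    if np.allclose(M @ coeffs, target):   # A^r depends on I, ..., A^(r-1)
        alphas = -coeffs                  # A^r + α_{r-1}A^{r-1} + ... + α_0 I = 0
        break                             # alphas[k] is α_k
    powers.append(target)
    r += 1
print(r, alphas)
```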
J. P. LaSalle
5. A characterization of stable matrices. Computational criteria
Abstract
We saw earlier in Section 3 that r(A) < 1 was a necessary condition for A to be stable. We now want to see that this condition is also sufficient and will prove this using the algorithm of the previous section. We give first an inductive and elementary proof, and then look at another proof that is more sophisticated and teaches us something about nonnegative matrices, which arise in and are important for many applications.
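A quick numerical illustration of the criterion r(A) < 1 (the matrix below is an arbitrary example, not one from the text):

```python
import numpy as np

A = np.array([[0.4, 0.3],
              [0.1, 0.5]])                        # an illustrative matrix

spectral_radius = max(abs(np.linalg.eigvals(A)))  # r(A)
print(spectral_radius < 1)                        # True: A is stable

# A^n -> 0 as n -> infinity when r(A) < 1.
print(np.linalg.matrix_power(A, 50))              # entries near zero
```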
J. P. LaSalle
6. Liapunov’s characterization of stable matrices. A Liapunov function for x’ = Ax
Abstract
Although Liapunov did not consider difference equations, what we do here is the exact analog of what Liapunov did for linear differential equations. In the context of differential equations a matrix is said to be stable if \({e^{At}} \to 0\) as \(t \to \infty \), and for difference equations An is the analog of eAt.
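A hedged sketch of the discrete analogue: solving A^T P A - P = -Q for P gives a quadratic Liapunov function V(x) = x^T P x for x' = Ax. The SciPy call and the particular matrices are illustrative assumptions, not the book's construction.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.4, 0.3],
              [0.1, 0.5]])                        # a stable illustrative matrix
Q = np.eye(2)                                     # any positive definite matrix

# Solve the discrete Liapunov equation  A^T P A - P = -Q.
P = solve_discrete_lyapunov(A.T, Q)

# V(x) = x^T P x then decreases along solutions of x' = Ax:
x = np.array([1.0, -1.0])
V_before = x @ P @ x
V_after = (A @ x) @ P @ (A @ x)
print(V_after < V_before)                         # True
```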
J. P. LaSalle
7. Stability by the linear approximation
Abstract
The oldest method of investigating the stability of a system is to replace the system by its linear approximation. This method was used long before Liapunov, although Liapunov appears to be the first person who justified the method for ordinary differential equations. In 1929 Perron [1] investigated the question of when the stability of the difference equation
$$x' = Ax$$
(7.1)
determines the stability of the nonlinear equation,
$$x' = Ax + f\left( x \right)$$
(7.2)
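An illustrative iteration of (7.2), with a stable linear part and a higher-order perturbation f both chosen arbitrarily here, shows the equilibrium at the origin inheriting stability from the linear approximation:

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.0, 0.6]])            # r(A) < 1, so the linear part is stable

def f(x):
    # Higher-order terms: f(x) = o(||x||) near x = 0 (an illustrative choice).
    return np.array([x[0] * x[1], x[1] ** 2])

x = np.array([0.3, -0.2])             # start near the equilibrium at the origin
for n in range(30):
    x = A @ x + f(x)
print(x)                              # iterates decay toward zero
```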
J. P. LaSalle
8. The general solution of x’ = Ax. The Jordan Canonical Form.
Abstract
We already know from the algorithm for computing An a good deal about the solutions of the linear homogeneous equation
$$x' = Ax.$$
(8.1)
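Since the solution of (8.1) with x(0) = x^0 is x(n) = A^n x^0, it can be evaluated directly; the matrix and initial state below are arbitrary examples.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-0.5, 1.0]])
x0 = np.array([1.0, 0.0])

# The solution of x' = Ax with x(0) = x0 is x(n) = A^n x0.
n = 10
x_n = np.linalg.matrix_power(A, n) @ x0
print(x_n)
```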
J. P. LaSalle
9. Higher order equations. The general solution of ψ(z)y = 0.
Abstract
We want now to examine the general solution of the mth-order difference equation
$$\psi \left( z \right)y = {y^{\left( m \right)}} + {a_{m - 1}}{y^{\left( {m - 1} \right)}} + \cdots + {a_0}y = 0;$$
(9.1)
$$\psi \left( \lambda \right) = {\lambda ^m} + {a_{m - 1}}{\lambda ^{m - 1}} + \cdots + {a_0} = \left( {\lambda - {\lambda _1}} \right)\left( {\lambda - {\lambda _2}} \right) \cdots \left( {\lambda - {\lambda _m}} \right) = 0$$
is called the characteristic equation of (9.1).
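When the roots λ_1, ..., λ_m are distinct, the general solution is a combination of the λ_i^n. A small sketch, using the Fibonacci recurrence y'' - y' - y = 0 as an illustrative case (not an example from the text):

```python
import numpy as np

# Illustrative second-order equation  y'' - y' - y = 0  (the Fibonacci recurrence).
# Its characteristic equation is  λ^2 - λ - 1 = 0.
lam = np.roots([1, -1, -1])                       # λ_1, λ_2 (distinct real roots)

# With distinct roots the general solution is y(n) = c_1 λ_1^n + c_2 λ_2^n.
# Fit c_1, c_2 to the initial values y(0) = 0, y(1) = 1.
c = np.linalg.solve(np.vander(lam, 2, increasing=True).T, [0.0, 1.0])

y = lambda n: (c * lam**n).sum().real
print([round(y(n)) for n in range(10)])           # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
```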
J. P. LaSalle
10. Companion matrices. The equivalence of x’ = Ax and ψ(z)y = 0.
Abstract
In some sense this section is a digression and could be postponed. However, the question we will ask is a natural one at this point, and its answer and the concept of companion matrices are of general interest. What we do here is of special interest within the theory of the control and stability of continuous, as well as discrete, systems. We will see this for discrete systems in Section 15. The reader can, if he wishes, simply skim through this section and move on, and then come back later to pick up what is needed.
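As a brief preview (an illustration only, using an arbitrary cubic), a companion matrix carries exactly the spectrum of its polynomial:

```python
import numpy as np

# Companion matrix of ψ(λ) = λ^3 + a_2 λ^2 + a_1 λ + a_0 (coefficients illustrative).
a0, a1, a2 = 2.0, -1.0, 0.5
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a0, -a1, -a2]])

# The eigenvalues of the companion matrix are the roots of ψ(λ).
print(np.sort_complex(np.linalg.eigvals(C)))
print(np.sort_complex(np.roots([1.0, a2, a1, a0])))   # the same values
```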
J. P. LaSalle
11. Another algorithm for computing A^n.
Abstract
In Section 4 we gave an algorithm for computing \(A^n\) that depended upon computing the eigenvalues of A. In this section we give an algorithm that does not require computing the eigenvalues. As before, we let
$$\psi \left( \lambda \right) = {\lambda ^s} + {a_{s - 1}}{\lambda ^{s - 1}} + \cdots + {a_0}$$
be any polynomial that annihilates A -- i.e., such that ψ(A) = 0. We can, for instance, always take ψ(λ) to be the characteristic polynomial of A.
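One way to realize this idea numerically is sketched below, under the assumption that reducing λ^n modulo ψ(λ) and evaluating the remainder at A is an acceptable stand-in for the book's exact procedure:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-0.5, 1.0]])
# ψ(λ) = λ^2 - λ + 0.5 annihilates A (here it is the characteristic polynomial).
psi = np.array([1.0, -1.0, 0.5])                  # coefficients of λ^2, λ^1, λ^0

def power_via_psi(A, psi, n):
    """Compute A^n from the remainder of λ^n on division by ψ(λ)."""
    s = len(psi) - 1
    r = np.zeros(s)
    r[-1] = 1.0                                   # remainder polynomial, starts as 1
    for _ in range(n):
        r = np.concatenate((r, [0.0]))            # multiply by λ
        r = r[1:] - r[0] * psi[1:] / psi[0]       # reduce modulo ψ(λ)
    # Evaluate the remainder at A: coefficient of λ^k times A^k.
    return sum(r[s - 1 - k] * np.linalg.matrix_power(A, k) for k in range(s))

n = 7
print(np.allclose(power_via_psi(A, psi, n), np.linalg.matrix_power(A, n)))  # True
```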
J. P. LaSalle
12. Nonhomogeneous linear systems x’ = Ax + f(n). Variation of parameters and undetermined coefficients.
Abstract
The general nonhomogeneous linear system with constant coefficients is
$$x' = Ax + f\left( n \right)$$
(12.1)
where, as always in this chapter, A is an m × m real matrix and \(f: J_0 \to C^m\). If \(f(n) = f_1(n) + if_2(n)\), where \(f_1(n)\) and \(f_2(n)\) are real, and if \(x(n) = x^1(n) + ix^2(n)\) is a solution of (12.1), \(x^1(n)\) and \(x^2(n)\) real, then \(x^{1\prime}(n) = Ax^1(n) + f_1(n)\) and \(x^{2\prime}(n) = Ax^2(n) + f_2(n)\); and conversely, if \(x^1(n)\) and \(x^2(n)\) are real solutions of \(x^{1\prime} = Ax^1 + f_1(n)\) and \(x^{2\prime} = Ax^2 + f_2(n)\), then \(x(n) = x^1(n) + ix^2(n)\) is a solution of (12.1). Thus, it is no more general to consider complex valued f(n), but it is convenient to do so. The block diagram for (12.1) is shown in Figure 12.1.
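The variation-of-parameters formula x(n) = A^n x^0 + Σ_{j=0}^{n-1} A^{n-1-j} f(j) can be checked against direct iteration of (12.1); the matrices and forcing term below are arbitrary illustrations.

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
x0 = np.array([1.0, 2.0])
f = lambda n: np.array([1.0, np.cos(n)])          # an illustrative forcing term

def solve_by_iteration(N):
    x = x0.copy()
    for n in range(N):
        x = A @ x + f(n)                          # x' = Ax + f(n)
    return x

def solve_by_variation_of_parameters(N):
    # x(N) = A^N x0 + sum_{j=0}^{N-1} A^(N-1-j) f(j)
    total = np.linalg.matrix_power(A, N) @ x0
    for j in range(N):
        total += np.linalg.matrix_power(A, N - 1 - j) @ f(j)
    return total

print(np.allclose(solve_by_iteration(12), solve_by_variation_of_parameters(12)))  # True
```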
J. P. LaSalle
13. Forced oscillations.
Abstract
For \(f: J_0 \to C^m\), we say that f is periodic if, for some positive integer τ,
$$f(n + \tau) = f(n) \quad \text{for all } n \in J_0;$$
τ is called a period of f. The least such τ is the least period. If τ is the least period, then the only periods of f are integral multiples of τ. For instance, \(e^{i\frac{2\pi\sigma n}{\tau}}\), σ a nonnegative integer, is periodic of period τ. If (σ, τ) = 1, τ is the least period. The constant functions have period 1.
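A small numerical check (the values of σ and τ are illustrative; the reduction of the period by gcd(σ, τ) is a standard fact not stated in this abstract):

```python
import numpy as np
from math import gcd

tau, sigma = 6, 4                                 # illustrative values; gcd(σ, τ) = 2
f = lambda n: np.exp(1j * 2 * np.pi * sigma * n / tau)

# f has period τ, but since (σ, τ) ≠ 1 its least period is τ / gcd(σ, τ) = 3.
print(np.allclose([f(n + tau) for n in range(10)],
                  [f(n) for n in range(10)]))                          # True
print(np.allclose([f(n + tau // gcd(sigma, tau)) for n in range(10)],
                  [f(n) for n in range(10)]))                          # True
```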
J. P. LaSalle
14. Systems of higher order equations P(z)y = 0. The equivalence of polynomial matrices.
Abstract
Let us see by way of an example how we can solve a system of higher order difference equations. Consider
$$\begin{aligned} y_2'' + y_1' + y_2' + y_3' - y_1 - 3y_2 - y_3 &= 0 \\ y_1''' - y_1'' + y_3'' - 4y_1' - 4y_3' + 2y_1 &= 0 \\ y_1' + y_2' - y_1 - y_3 &= 0. \end{aligned}$$
(14.1)
J. P. LaSalle
15. The control of linear systems. Controllability.
Abstract
Within control and system theory the fundamentally important concept of controllability arose naturally during the early development of optimal control theory in the late 1950’s and was discovered independently by a number of mathematicians and engineers in the United States and the Soviet Union.
J. P. LaSalle
16. Stabilization by linear feedback. Pole assignment.
Abstract
We saw at the end of the previous section that, if a linear control system with one control variable
$$x' = Ax + bu$$
(16.1)
is controllable, then by using linear feedback \(u = c^T x\) the system becomes
$$x' = \left( A + bc^T \right)x;$$
(16.2)
in this special case linear feedback can be used to stabilize the system; in fact, by the choice of \(c^T\) we have complete control of the spectrum of \(A + bc^T\) (see Proposition 10.4). Stabilization by linear feedback is the oldest method for the analysis and design of feedback controls and dates back at least to the early part of the 19th century (see Fuller [1]). It was almost the only method used up to the 1950's and remains of importance up to the present time. The result we present here is of more recent origin. It had been looked at by Langenhop [1] in 1964 over the complex field and was discovered independently by a number of engineers (see Wonham [1] and Padulo and Arbib [1, pp. 596-601]).
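A hedged sketch of pole assignment using SciPy's place_poles (the pair (A, b) and the target spectrum are arbitrary choices; the sign flip converts SciPy's A - bK convention to the A + bc^T form used here):

```python
import numpy as np
from scipy.signal import place_poles

# An illustrative controllable single-input pair (A, b).
A = np.array([[1.2, 1.0],
              [0.0, 1.1]])                        # unstable: r(A) > 1
b = np.array([[0.0],
              [1.0]])

# place_poles returns K with spec(A - bK) equal to the requested poles,
# so the feedback u = c^T x with c^T = -K gives A + b c^T the same spectrum.
placed = place_poles(A, b, [0.3, 0.5])
cT = -placed.gain_matrix

print(np.sort(np.linalg.eigvals(A + b @ cT)))     # approximately [0.3, 0.5]
```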
J. P. LaSalle
17. Minimum energy control. Minimum time-energy feedback controls.
Abstract
The positive semidefinite symmetric matrix
$$W(n) = Y(n)Y^T(n) = \sum\limits_{j = 0}^{n - 1} A^j B B^T \left( A^j \right)^T$$
(17.1)
introduced in Section 15 plays an important role in the theory of linear control systems and is called the controllability grammian (see Nering [1], p. 150, for the definition of the grammian of a set of vectors; W(n) is the grammian of the matrices B, AB, …, \(A^{n-1}B\)).
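A direct evaluation of W(n) from (17.1), with an arbitrary pair (A, B) chosen only for illustration:

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.0, 0.7]])
B = np.array([[1.0],
              [0.5]])

def controllability_grammian(A, B, n):
    """W(n) = sum_{j=0}^{n-1} A^j B B^T (A^j)^T -- symmetric, positive semidefinite."""
    W = np.zeros((A.shape[0], A.shape[0]))
    for j in range(n):
        Aj = np.linalg.matrix_power(A, j)
        W += Aj @ B @ B.T @ Aj.T
    return W

W = controllability_grammian(A, B, 5)
print(np.allclose(W, W.T), np.all(np.linalg.eigvalsh(W) >= -1e-12))   # symmetric, PSD
```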
J. P. LaSalle
18. Observability. Observers. State Estimation. Stabilization by dynamic feedback.
Abstract
Up to now in our consideration of the linear system (Figure 18.1)
$$x' = Ax + Bu$$
(18.1)
we have assumed that the output is the state of the system x.
J. P. LaSalle
Backmatter
Metadata
Title
The Stability and Control of Discrete Processes
Author
J. P. LaSalle
Copyright Year
1986
Publisher
Springer New York
Electronic ISBN
978-1-4612-1076-4
Print ISBN
978-0-387-96411-9
DOI
https://doi.org/10.1007/978-1-4612-1076-4