
Open Access 2018 | Original Paper | Book Chapter

10. Nonlinearity

Author: Steven A. Frank

Published in: Control Theory Tutorial

Publisher: Springer International Publishing


Abstract

Real systems are nonlinear. This chapter begins with three reasons why the core theory of control focuses on linear analysis. The chapter continues by introducing various design methods that explicitly consider nonlinear system dynamics.
Real systems are nonlinear. Before discussing nonlinear theory, I review three reasons why the core theory of control focuses on linear analysis.
First, feedback compensates for model uncertainty. Suppose we analyze a feedback system based on a linear model of dynamics, and the true model is nonlinear. If the linear model sufficiently captures the essential aspects of the dynamics, the true nonlinear feedback system will often have the same qualitative response characteristics as the modeled linear system. As Vinnicombe (2001, p. xvii) emphasized: “One of the key aims of using feedback is to minimize the effects of lack of knowledge about a system which is to be controlled.”
Second, the fundamental principles of control systems apply to both linear and nonlinear dynamics. The comprehensive theory for linear systems provides insight into nonlinear systems. For example, strong feedback signals often help to minimize error but can create instabilities. Controllers can be added at different points in a system to filter signals or modulate inputs that drive processes. Primary goals of analysis and design often emphasize stability, disturbance rejection, regulation or tracking. Certain tradeoffs inevitably arise. Integral control smooths response toward a long-term target. Derivative control improves the speed of response by using a simple prediction.
Third, the main tools for the analysis and design of nonlinear systems typically extend the tools developed for linear systems. For example, nonlinear systems are approximately linear near a particular operating point. One can study the linear approximation around that point, and then switch to an alternative linear approximation as the system moves to another operating domain. By piecing together the different linear approximations in neighboring domains, one develops a sequence of linear systems that together capture much of the nonlinear characteristics. Other tools of nonlinear analysis typically leverage the deep insights and methods of linear analysis (Slotine and Li 1991; Isidori 1995; Khalil 2002).
This chapter presents a few brief illustrations of nonlinear control systems.

10.1 Linear Approximation

In cellular biology, the concentration of a molecule, m, often responds to an input signal, \(\phi \), according to the Hill equation
$$\begin{aligned} m=k{\big (}\frac{\phi ^n}{1+\phi ^n}{\big )}, \end{aligned}$$
with parameters k and n. In this example, I let \(k=1\).
The nonlinear reaction system
$$\begin{aligned} \dot{x}_1&=1-x_1+u\\ \dot{x}_2&=\frac{x_1^n}{1+x_1^n}-\gamma x_2 \end{aligned}$$
(10.1)
describes the change in the output, \(y=x_2\), as driven by a Hill equation response to \(x_1\) and an intrinsic decay of \(x_2\) at a rate \(\gamma \). In the absence of external input, u, the internal dynamics hold the concentration of \(x_1\) at the equilibrium value of \(x_1^*=1\), which in turn sets the equilibrium of the output at \(x_2^*=1/2\gamma \). The system responds to external control signals or perturbations through u, which drives the concentration of \(x_1\), which in turn drives the concentration of \(x_2\).
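As a concrete illustration, here is a minimal Python sketch of the system in Eq. 10.1 (the chapter's supplemental code is in Mathematica). The parameter values \(n=4\) and \(\gamma =1\) and the brief input pulse are illustrative assumptions, not values taken from the chapter.
```python
# Minimal sketch (not the chapter's supplemental Mathematica code): simulate
# the nonlinear system of Eq. 10.1 and its return to equilibrium after a
# brief input pulse. Parameter values and the pulse are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

n, gamma = 4.0, 1.0                    # assumed Hill coefficient and decay rate

def u(t):
    return 2.0 if 1.0 <= t < 1.5 else 0.0   # brief pulse standing in for an impulse

def dynamics(t, x):
    x1, x2 = x
    dx1 = 1.0 - x1 + u(t)                        # input u drives x1
    dx2 = x1**n / (1.0 + x1**n) - gamma * x2     # Hill response to x1 drives x2
    return [dx1, dx2]

x_eq = [1.0, 1.0 / (2.0 * gamma)]                # equilibrium (x1*, x2*) = (1, 1/(2*gamma))
sol = solve_ivp(dynamics, (0.0, 15.0), x_eq, max_step=0.01)
print(sol.y[:, -1])                              # returns to approximately (1, 1/(2*gamma))
```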
We can study the dynamics and control of this system by linearizing the dynamics near a particular operating point, \({\big (}\hat{x}_1,\hat{x}_2{\big )}\). In this example, we use as the operating point the equilibrium, \({\big (}x^*_1,x^*_2{\big )}\), in the absence of external input, \(u=0\). The linearized system expresses the deviations from the equilibrium point as
$$\begin{aligned} \dot{x}_1&=-x_1+u\\ \dot{x}_2&=nx_1/4-\gamma x_2. \end{aligned}$$
(10.2)
Figure 10.1 shows the responses of the nonlinear system and the linear approximation to an impulse perturbation. In the left panel, with a weak impulse and a small deviation from the equilibrium, the nonlinear and linear dynamics are nearly identical. As the impulse becomes stronger in the right panels, the deviation from equilibrium increases, and the dynamics of the linear approximation diverge from those of the original nonlinear system.
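A sketch along the lines below reproduces that comparison: simulate the nonlinear system of Eq. 10.1 and the linearized deviations of Eq. 10.2 from the same initial displacement of \(x_1\), for a weak and a strong perturbation. Again, the parameter values and perturbation sizes are illustrative assumptions.
```python
# Sketch of the comparison in Fig. 10.1: nonlinear dynamics (Eq. 10.1) versus
# the linear approximation (Eq. 10.2) after displacing x1 from equilibrium.
# Parameter values and perturbation sizes are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

n, gamma = 4.0, 1.0
x_eq = np.array([1.0, 1.0 / (2.0 * gamma)])     # operating point (x1*, x2*)

def nonlinear(t, x):
    return [1.0 - x[0],
            x[0]**n / (1.0 + x[0]**n) - gamma * x[1]]

def linearized(t, z):                           # z = deviation from the operating point
    return [-z[0], (n / 4.0) * z[0] - gamma * z[1]]

t = np.linspace(0.0, 8.0, 201)
for impulse in (0.1, 1.0):                      # weak versus strong perturbation of x1
    nl = solve_ivp(nonlinear, (0.0, 8.0), x_eq + [impulse, 0.0], t_eval=t)
    li = solve_ivp(linearized, (0.0, 8.0), [impulse, 0.0], t_eval=t)
    gap = np.max(np.abs(nl.y[1] - (x_eq[1] + li.y[1])))
    print(f"impulse {impulse}: max divergence in the output y = {gap:.4f}")
```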

10.2 Regulation

We can analyze the benefits of feedback for regulating nonlinear dynamics. One approach analyzes feedback for the linear approximation near the target equilibrium. The feedback for the linear approximation should provide good regulation when applied to the nonlinear system near the equilibrium.
This section applies the linear state feedback regulation approach that I used in a previous chapter. In that approach, the cost function in Eq. 9.3, repeated here,
$$\begin{aligned} \mathcal {J}=\int _0^T {\big (}\mathbf {u}'\mathbf {R}\mathbf {u}+\mathbf {x}'\mathbf {Q}\mathbf {x}{\big )}\text {d}t, \end{aligned}$$
balances the tradeoff between the cost of control inputs and the cost of state deviation from equilibrium. The model is written so that the equilibrium states are \(\mathbf {x}^*=\mathbf {0}\). We obtain the optimal state feedback by applying the methods described in the prior chapter (see also the supplemental Mathematica code).
Consider the linear approximation in Eq. 10.2. That system has one input, for which we let \(\mathbf {R}=1\) and scale the state costs accordingly. For each state, assume that the cost is \(\rho ^2\), so that the integrand of the cost becomes \(u^2+\rho ^2{\big (}x_1^2+x_2^2{\big )}\).
We can calculate the feedback input that minimizes the cost for the linearized approximation. Using the optimal feedback, we can form a closed-loop system for both the linearized system and the original nonlinear system.
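The workflow can be sketched in Python as follows: solve the continuous-time Riccati equation for the linearized system of Eq. 10.2, form the optimal gain, and apply that gain as state feedback to the nonlinear system of Eq. 10.1. The values of \(n\), \(\gamma \), and \(\rho \) are illustrative assumptions; this is not the chapter's supplemental Mathematica code.
```python
# Sketch of the regulation design: LQR gain from the linearized model
# (Eq. 10.2), applied as state feedback to the nonlinear model (Eq. 10.1).
# Parameter values are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

n, gamma, rho = 4.0, 1.0, 1.0
A = np.array([[-1.0, 0.0],
              [n / 4.0, -gamma]])
B = np.array([[1.0],
              [0.0]])
R = np.array([[1.0]])                 # weight on u^2
Q = rho**2 * np.eye(2)                # weight rho^2 on x1^2 + x2^2

X = solve_continuous_are(A, B, Q, R)  # Riccati solution
K = np.linalg.solve(R, B.T @ X)       # optimal state-feedback gain, u = -K z
x_eq = np.array([1.0, 1.0 / (2.0 * gamma)])

def closed_loop(t, x):
    z = x - x_eq                      # deviation from the target equilibrium
    u = -(K @ z)[0]                   # feedback designed on the linear approximation
    return [1.0 - x[0] + u,
            x[0]**n / (1.0 + x[0]**n) - gamma * x[1]]

sol = solve_ivp(closed_loop, (0.0, 8.0), x_eq + [0.5, 0.0], max_step=0.01)
print(K, sol.y[:, -1])                # gain and final state near the equilibrium
```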
Figure 10.2 shows the response to an impulse perturbation for the closed-loop systems. In each panel, the nonlinear (blue) and linear (gold) responses are similar, showing that the design for the linear system works well for the nonlinear system.
The panels from left to right show a decreasing cost weighting on the inputs relative to the states. As the relative input costs become less heavily weighted, the optimal feedback uses stronger inputs to regulate the response, driving the system back to equilibrium more quickly.
Minimizing a cost function by state feedback may lead to systems that become unstable with respect to variations in the model dynamics. Previous chapters discussed alternative robust techniques, including integral control and combinations of \(\mathcal {H}_2\) and \(\mathcal {H}_{\infty }\) methods. We may apply those alternative methods to the linearized approximation in Eq. 10.2. The linearized system corresponds to the transfer function
$$\begin{aligned} P(s)=\frac{n/4}{s^2+(1+\gamma )s+\gamma }. \end{aligned}$$
Robust feedback based on this transfer function may be applied to the original nonlinear system.
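As a check, this transfer function follows directly from the state-space matrices of the linearized system. The short sketch below recovers the numerator \(n/4\) and the denominator \(s^2+(1+\gamma )s+\gamma \) numerically, with illustrative parameter values.
```python
# Sketch: recover the transfer function P(s) of the linearized system
# (Eq. 10.2) from its state-space form; parameter values are illustrative.
from scipy import signal

n, gamma = 4.0, 1.0
A = [[-1.0, 0.0], [n / 4.0, -gamma]]  # states (x1, x2)
B = [[1.0], [0.0]]                    # input u enters the x1 equation
C = [[0.0, 1.0]]                      # output y = x2
D = [[0.0]]

num, den = signal.ss2tf(A, B, C, D)
print(num)   # ~ [0, 0, n/4]
print(den)   # ~ [1, 1 + gamma, gamma], i.e., s^2 + (1 + gamma)s + gamma
```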

10.3 Piecewise Linear Analysis and Gain Scheduling

Linear approximations at a particular operating point provide nearly exact descriptions of nonlinear dynamics near the operating point. As the system moves further from the operating point, the linear approximation becomes less accurate.
In some cases, significant divergence from the operating point causes the qualitative nature of the nonlinear dynamics to differ from the linear approximation. In other cases, such as in Fig. 10.1, the qualitative dynamics remain the same, but the quantitative responses differ.
The distance from an operating point at which the linear approximation breaks down depends on the particular nonlinear system. By considering the region over which the linear approximation holds, one can approximate a nonlinear system by a sequence of linear approximations.
Starting from an initial operating point, the first linear approximation holds near that point. Then, as the approximation breaks down away from the initial operating point, one can use a new approximation around a second operating point.
By repeatedly updating the approximation as needed for new regions, the series of linear approximations describes the nonlinear system. Each linear approximation holds in its own region or “piece.” That approach leads to the piecewise linear approximation method (Rantzer and Johansson 2000).
For each piece, linear methods specify the design of feedback control. The overall control becomes a sequence of individual controllers based on linear analysis, with each particular control regime applied when the system is in the associated operating region. The use of alternative controllers in different operating regions is often called gain scheduling.
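A gain-scheduled controller can be sketched as a lookup over region-specific linear gains. In the sketch below, the operating points and gain values are hypothetical placeholders; in practice, each gain would come from a linear design, such as the regulation procedure above, carried out at its own operating point.
```python
# Sketch of gain scheduling: linear state-feedback gains are designed at a few
# operating points of x1, and the controller switches to the gain for the
# region the state currently occupies. The operating points and gain values
# here are hypothetical placeholders, not designs from the chapter.
import numpy as np

# (operating point of x1, state-feedback gain designed at that point)
schedule = [
    (0.5, np.array([2.0, 1.0])),
    (1.0, np.array([3.0, 1.5])),
    (2.0, np.array([4.0, 2.0])),
]

def scheduled_control(x, x_ref):
    # pick the gain whose design point is closest to the current x1,
    # then apply linear state feedback around the reference state
    _, K = min(schedule, key=lambda pg: abs(pg[0] - x[0]))
    return -K @ (np.asarray(x) - np.asarray(x_ref))

print(scheduled_control([1.6, 0.7], [1.0, 0.5]))   # uses the gain for the x1 = 2 region
```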

10.4 Feedback Linearization

Consider the simple nonlinear system with an input
$$\begin{aligned} \dot{x}=x^2+u, \end{aligned}$$
in which the output is equal to the single state, \(y=x\) (Khalil 2002, p. 473). The equilibrium \(x^*=0\) is unstable because any positive perturbation from the equilibrium leads to uncontrolled growth.
The error deviation from equilibrium is x. Classical negative linear feedback would apply the control input \(u=-kx\), in which the feedback is weighted by the gain, k. The closed-loop system becomes
$$\begin{aligned} \dot{x}=-kx+x^2. \end{aligned}$$
This system has a locally stable equilibrium at zero and an unstable equilibrium at k. For a perturbation that leaves \(x<k\), the system returns to its stable equilibrium. For a perturbation that pushes x beyond k, the system grows without bound. Thus, linear feedback provides local stability. The stronger the feedback, with larger k, the broader the local region of stability.
In this case, linear feedback transforms an unstable open-loop system into a locally stable closed-loop system. However, the closed-loop system remains nonlinear and prone to instability.
If we choose feedback to cancel the nonlinearity, \(u=-kx-x^2\), then we obtain the linearly stable closed-loop system, \(\dot{x}=-kx\).
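A short sketch makes the contrast concrete: with plain linear feedback, the closed loop diverges once the perturbation exceeds k, whereas the feedback-linearized loop decays from any initial condition. The gain and initial conditions below are illustrative assumptions.
```python
# Sketch comparing plain linear feedback, u = -k x, with feedback
# linearization, u = -k x - x^2, for the scalar system xdot = x^2 + u.
# The gain and initial conditions are illustrative assumptions.
from scipy.integrate import solve_ivp

k = 1.0

def linear_fb(t, x):
    return [x[0]**2 - k * x[0]]       # closed loop with u = -k x

def fb_linearized(t, x):
    return [-k * x[0]]                # nonlinearity cancelled: xdot = -k x

def diverged(t, x):                   # stop integration if the state blows up
    return x[0] - 100.0
diverged.terminal = True

for x0 in (0.5, 1.5):                 # perturbations below and above k
    lin = solve_ivp(linear_fb, (0.0, 5.0), [x0], events=diverged, max_step=0.01)
    fbl = solve_ivp(fb_linearized, (0.0, 5.0), [x0], max_step=0.01)
    status = "diverges" if lin.t_events[0].size else f"-> {lin.y[0, -1]:.3g}"
    print(f"x0 = {x0}: linear feedback {status}; "
          f"feedback linearization -> {fbl.y[0, -1]:.3g}")
```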
Once we have a linear closed-loop system, we can treat that system as a linear open-loop subsystem, and use linear techniques to design controllers and feedback to achieve performance goals.
For example, we could consider the feedback linearized dynamics as \(\dot{x}=-kx + v\), in which v is an input into this new linearized subsystem. We could then design feedback control through the input v to achieve various performance goals, such as improved regulation to disturbance or improved tracking of an input signal.
A nonlinear system can be linearized by feedback if the states can be written in the form
$$\begin{aligned} \dot{x}=f(x) + g(x)u. \end{aligned}$$
(10.3)
Such systems are called input linear, because the dynamics are linear in the input, u. These systems are also called affine in input, because a transformation of the form \(a+bu\) is an affine transformation of the input, u. Here, f and g may be nonlinear functions of x, but do not depend on u.
In the example \(\dot{x}=x^2+u\), we easily found the required feedback to cancel the nonlinearity. For more complex nonlinearities, geometric techniques have been developed to find the linearizing feedback (Slotine and Li 1991; Isidori 1995; Khalil 2002). Once the linearized system is obtained, one may apply linear design and analysis techniques to study or to alter the system dynamics.
Feedback linearization depends on an accurate model of the dynamics. For example, if the actual model is
$$\begin{aligned} \dot{x}=ax^2+u, \end{aligned}$$
and the feedback linearization is taken as \(u=-kx-x^2\) under the assumption that \(a=1\), then the closed-loop system is
$$\begin{aligned} \dot{x}=-kx + (a-1)x^2. \end{aligned}$$
If the true value of the parameter is \(a=2\), then the feedback system has dynamics \(\dot{x}=-kx+x^2\), which is unstable for \(x>k\).
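A sketch of the mismatched closed loop illustrates this fragility: applying the cancellation designed for \(a=1\) to a true model with \(a=2\) leaves a residual nonlinearity, and the loop again diverges for perturbations beyond k. The values used are illustrative assumptions.
```python
# Sketch: feedback linearization designed for a = 1 applied to the true model
# xdot = a x^2 + u with a = 2, giving the closed loop xdot = -k x + x^2.
# Gain, mismatch, and initial conditions are illustrative assumptions.
from scipy.integrate import solve_ivp

k, a = 1.0, 2.0

def mismatched(t, x):
    # u = -k x - x^2 (designed for a = 1) applied to xdot = a x^2 + u
    return [-k * x[0] + (a - 1.0) * x[0]**2]

def diverged(t, x):                   # stop if the state blows up
    return x[0] - 100.0
diverged.terminal = True

for x0 in (0.5, 1.5):                 # below and above the gain k
    sol = solve_ivp(mismatched, (0.0, 5.0), [x0], events=diverged, max_step=0.01)
    outcome = "diverges" if sol.t_events[0].size else f"settles near {sol.y[0, -1]:.3g}"
    print(f"x0 = {x0}: {outcome}")
```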
This example shows that feedback linearization is not robust to model uncertainties. The following chapter discusses an alternative method that can provide robust feedback control for nonlinear systems.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Metadata
Title: Nonlinearity
Author: Steven A. Frank
Copyright Year: 2018
DOI: https://doi.org/10.1007/978-3-319-91707-8_10
