Open Access 2025 | Original Paper | Book Chapter

3. Linear Active Disturbance Rejection Control

Authors: Gernot Herbst, Rafal Madonski

Published in: Active Disturbance Rejection Control

Publisher: Springer Nature Switzerland


Abstract

As the core of all other ADRC variants covered in this book, the linear continuous-time state-space form of active disturbance rejection control is introduced in this chapter. Generalizing the first- and second-order cases considered in Chap. 2, the structure of Nth-order linear ADRC, consisting of an observer and a state-feedback controller, is discussed. Equations for tuning both observer and controller are derived for the so-called bandwidth parameterization approach.

3.1 Derivation and Core Concepts

Building on the examples from Chap. 2, we will now present linear active disturbance rejection control for the general Nth-order case. Starting from a (simple) plant model, this will involve setting up an inner control loop to create dynamics that can be easily controlled in a second, outer loop—both of which make use of an observer that will be essential to the ADRC concept.

3.1.1 Modeling the Plant

Consider the single-input, single-output (SISO) plant of order N in the control loop of Fig. 3.1, with output y(t), input u(t), and input disturbance d(t). Assume that one can describe its behavior with the following simple differential equation:
$$\displaystyle \begin{aligned} y^{(N)}(t) + \sum_{i=0}^{N-1} a_i \cdot y^{(i)}(t) = b \cdot u(t) + b_{\mathrm{d}} \cdot d(t) . {} \end{aligned} $$
(3.1)
When—as is mostly the case in practice—the disturbance d is not measurable, the usual remaining assumption for designing a controller using plant model information is that the parameters b and ai in (3.1) are known. For ADRC, this is not the case; knowledge of the ai parameters is not needed. 1 A plant model for ADRC only requires the plant order N and an estimate b0 of the b parameter, typically with an unavoidable error Δb = b − b0. Let us rearrange (3.1) for the highest derivative of y(t) such that all unknown terms are conveniently combined into a single time-varying value f(t):
$$\displaystyle \begin{aligned} y^{(N)}(t) = \underbrace{\overbrace{- \sum_{i=0}^{N-1} a_i \cdot y^{(i)}(t) + \Delta b \cdot u(t)}^{\text{modeling errors}} + \overbrace{b_{\mathrm{d}} \cdot d(t)}^{\text{actual disturbance}}}_{\text{total disturbance } f(t)} + b_0 \cdot u(t) , \end{aligned} $$
which, for ADRC, is usually denoted as generalized or total disturbance, as it consists of unknown model terms (modeling errors) and actual disturbances. Using f(t), we can now see that the plant model used for ADRC has a very compact form:
$$\displaystyle \begin{aligned} y^{(N)}(t) = f(t) + b_0 \cdot u(t) . {} \end{aligned} $$
(3.2)
Being the only required coefficient, b0 naturally carries particular importance. It is therefore sometimes denoted as the critical gain parameter in the context of ADRC. It may also become clear from (3.2) that the starting point of obtaining this model does not have to be a linear equation like (3.1). Other terms and nonlinearities might very well be condensed in f(t), as well. In any case, (3.2) describes the dynamics of an Nth-order integrator chain with input gain b0 and an additional disturbance input f(t), as shown in Fig. 3.2. Before discovering all the benefits of this, let us conclude our considerations on plant modeling for ADRC with some simple but practically important examples given in Table 3.1.
Table 3.1
Examples of plant model parameters required for ADRC for plants with simple first- and second-order low-pass behavior. If a plant model is available in the form of transfer function parameters such as K or T—experimentally obtained from a step response, for example—or in the form of a differential equation, this table demonstrates how to obtain the required parameters b0 (plant gain) and N (plant order)
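As an illustration of the kind of conversion collected in Table 3.1, consider a first-order low-pass plant with gain K and time constant T, e.g., identified from a step response:
$$\displaystyle G(s) = \frac{K}{Ts + 1} \quad \Longleftrightarrow \quad \dot{y}(t) = -\frac{1}{T} \cdot y(t) + \frac{K}{T} \cdot u(t) .$$
Comparing this with (3.2) immediately gives the plant order N = 1 and the critical gain estimate b0 = K∕T. In the same manner, a second-order low-pass plant with transfer function K∕((T1s + 1)(T2s + 1)) leads to N = 2 and b0 = K∕(T1T2).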

3.1.2 Normalizing the Plant Behavior

It could appear unusual both to treat unknown terms of a plant model as a disturbance and to model a plant as an integrator chain regardless of its actual behavior. If you already had an idea of your plant to be controlled, the model (3.2) might not be what you wanted or expected! So how do these concepts of a total disturbance f and critical gain parameter b0 help us in building a controller for our control problem?
Despite having just introduced them, we will now try to get rid of f and cancel out the effects of b0. This will significantly ease the design of a controller for the closed loop. Firstly, knowledge of b0 allows scaling the controller output such that the task of tuning the actual control loop becomes independent of plant parameters. Secondly, if an estimate \(\hat {f}(t)\) of the total disturbance f(t) is available (with a tool we will discuss soon), modeling errors and effects of actual disturbances can be compensated at the same time. Equation (3.2) then gets reduced to a system with well-behaved, known dynamics. The control signal u(t) is therefore set up like this:
$$\displaystyle \begin{aligned} u(t) = \frac{u_0(t) - \hat{f}(t)}{b_0} . {} \end{aligned} $$
(3.3)
Equation (3.3) combines two core ideas of ADRC, whose effects on the plant model (3.2) are visualized in Fig. 3.3:
  • Disturbance rejection: Feeding \(-\frac {1}{b_0} \cdot \hat {f}(t)\) to the plant as part of the control signal u(t) (ideally) compensates the influence of f(t) on the dynamics of (3.2), resulting in an undisturbed chain of N integrators.
  • Plant gain inversion: Multiplying the control signal component u0(t) with the reciprocal value of the critical gain parameter b0 (approximately) creates—together with disturbance compensation—a unity-gain integrator chain with an input u0(t):
    $$\displaystyle \begin{aligned} y^{(N)}(t) \approx u_0(t) . {} \end{aligned} $$
    (3.4)
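To see (3.4) emerge explicitly, substitute (3.3) into (3.2):
$$\displaystyle y^{(N)}(t) = f(t) + b_0 \cdot \frac{u_0(t) - \hat{f}(t)}{b_0} = u_0(t) + \left( f(t) - \hat{f}(t) \right) \approx u_0(t) ,$$
i.e., the approximation in (3.4) becomes exact whenever the estimate \(\hat{f}(t)\) matches the total disturbance f(t).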
The required estimate \(\hat {f}(t)\) does not appear from nowhere. It will have to be determined using measurements of the real plant, which are then obviously fed back in (3.3). We have therefore just witnessed the creation of a first, inner control loop as part of the ADRC concept. With unity-gain Nth-order integrator chain behavior, this loop behaves like a “normalized plant.” Designing a controller with a control signal u0(t) for that will, as a result, become a straightforward task—we will tackle that right next. Said controller would then be able to work with any plant of order N described by (3.2) with the same value b0 without further modification.

3.1.3 Controlling a Normalized Plant

To better explain the further approach, let us assume the inner loop just introduced in Sect. 3.1.2 works perfectly and now has the behavior of an ideal, undisturbed, unity-gain chain of N integrators:
$$\displaystyle \begin{aligned} y^{(N)}(t) = u_0(t) . {} \end{aligned} $$
(3.5)
Let us further assume that the output signals of all integrators, i.e., y(t) and its first (N − 1) derivatives, are available. In practice, they are not, but do not worry right now! One can then construct a law for the control signal u0(t) as follows:
$$\displaystyle \begin{aligned} u_0(t) &= k_1 \cdot \left( r(t) - y(t) \right) - k_2 \cdot \dot{y}(t) - \ldots - k_N \cdot y^{(N-1)}(t) \\ &= k_1 \cdot r(t) - \underbrace{\begin{pmatrix} k_1 & \cdots & k_N \end{pmatrix}}_{\boldsymbol{k}^{\mathrm{T}}} \cdot \begin{pmatrix} y(t) \\ \vdots \\ y^{(N-1)}(t) \end{pmatrix} . {} \end{aligned} $$
(3.6)
Equation (3.6) can be interpreted as a higher-order generalization of a PD controller, with derivatives acting only on the plant output y, but not on the reference signal (set point) r. The resulting loop of control law (3.6) with its plant (3.5) is visualized in Fig. 3.4. In contrast to a traditional PD controller, the derivatives are not computed within the controller, but directly taken from the plant (“measured”). Subsequently, we will therefore adopt the appropriate nomenclature and call (3.6) by its real name: a linear state-feedback controller.
The effectiveness of this approach can be recognized by putting (3.6) into (3.5). With slight rearrangements, we obtain the closed-loop behavior as having Nth-order low-pass characteristics and unity DC gain—i.e., a control loop with steady-state accuracy:
$$\displaystyle \begin{aligned} \frac{1}{k_1} \cdot y^{(N)}(t) + \frac{k_N}{k_1} \cdot y^{(N-1)}(t) + \ldots + \frac{k_2}{k_1} \cdot \dot{y}(t) + y(t) = r(t) . {} \end{aligned} $$
(3.7)
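Spelled out, inserting (3.6) into (3.5) first gives
$$\displaystyle y^{(N)}(t) + k_N \cdot y^{(N-1)}(t) + \ldots + k_2 \cdot \dot{y}(t) + k_1 \cdot y(t) = k_1 \cdot r(t) ,$$
from which (3.7) follows after dividing both sides by k1.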
Remember that the behavior of a “normalized plant” was the result of using an inner control loop as described in Sect. 3.1.2. Within the ADRC concept, we have now built a second, outer loop around that, serving the purpose of shaping the dynamics of the overall control system. What if the inner loop is not perfect, as nothing is? Then (3.5) turns into (3.4), and (3.7) still holds approximately.
Tuning of the gain parameters k1, …, kN will be properly addressed in Sect. 3.2.1, but it does not take much imagination to see that straightforward equations for computing these gains can be obtained from (3.7) if the desired dynamics are given in the form of differential equation coefficients for the closed loop. We promised that tuning the outer loop would be easier: The absence of any plant parameters here means that, for ADRC, tuning the controller will be possible independently of such parameters, based directly on the desired closed-loop behavior.
This will not free us from the laws of physics and engineering restrictions, of course, which limit the dynamics that can be achieved for a certain plant with a real-world actuator. Nevertheless, this constitutes a departure from the PID world, where controller gains have to be tuned in interaction with plant parameters—a very nice quality-of-life improvement for the designer of a control system using ADRC.

3.1.4 Estimating the Total Disturbance and Integrator States

Some pieces are still missing to make the two control loops, which were partially based on idealized assumptions, work in practice. These shall be addressed now.
Firstly, normalizing the plant with the inner control loop introduced in Sect. 3.1.2 is only possible if an accurate, up-to-date estimate \(\hat {f}(t)\) of the total disturbance f(t) is available. Remember that f(t) combines actual disturbances and modeling errors, regardless of whether the latter are deliberate or unintended. It is therefore safe to assume that f(t) will never be measurable in practically relevant cases, if only because a plant model will never be capable of capturing all real-world details. All that we usually have available are signals outside of the actual plant: its measured output y(t), and its input u(t), which is the output of our controller, cf. Fig. 3.1. The tool of choice to reconstruct missing signals in control systems engineering is the use of an observer, which continuously provides estimated signal values.
Secondly, after compensating the total disturbance within the inner loop, the remaining task is to control a normalized system with integrator chain behavior, using the outer loop described in Sect. 3.1.3. This can—with high flexibility regarding the attainable closed-loop dynamics—very well be solved using a linear state-feedback controller if all outputs of the integrator stages are available. Bearing in mind that the integrator chain is just a (best-case) description of the behavior of the closed inner loop, it becomes obvious that its internal states cannot be measured. They must be estimated, as well.
These two tasks are solved jointly using an established tool from the linear control systems world—a Luenberger observer. An observer is built around a model. We already have one that combines an integrator chain with an unknown disturbance: ADRC’s plant model (3.2). A state-space description of this is required and can be given as follows:
$$\displaystyle \begin{aligned} \underbrace{\begin{pmatrix} \dot{x}_1(t) \\ \vdots \\ \dot{x}_{N+1}(t) \end{pmatrix}}_{\boldsymbol{\dot{x}}(t)} &= \underbrace{\begin{pmatrix} \boldsymbol{0}^{N \times 1} & \boldsymbol{I}^{N \times N} \\ 0 & \boldsymbol{0}^{1 \times N} \end{pmatrix}}_{\boldsymbol{A}} \cdot \underbrace{\begin{pmatrix} x_1(t) \\ \vdots \\ x_{N+1}(t) \end{pmatrix}}_{\boldsymbol{x}(t)} + \underbrace{\begin{pmatrix} \boldsymbol{0}^{(N-1) \times 1} \\ b_0 \\ 0 \end{pmatrix}}_{\boldsymbol{b}} \cdot u(t) + \begin{pmatrix} \boldsymbol{0}^{N \times 1} \\ 1 \end{pmatrix} \cdot \dot{f}(t) \\ y(t) &= \underbrace{\begin{pmatrix} 1 & \boldsymbol{0}^{1 \times N} \end{pmatrix}}_{\boldsymbol{c}^{\mathrm{T}}} \cdot \begin{pmatrix} x_1(t) \\ \vdots \\ x_{N+1}(t) \end{pmatrix} . {} \end{aligned} $$
(3.8)
From cT in (3.8) it is clear that the first state variable is x1(t) = y(t). Looking at A furthermore reveals that \(\dot {x}_i(t) = x_{i+1}(t)\) holds for i = 1, …, N − 1, which means that xi+1(t) is the ith derivative of y(t). The next-to-last state equation reads \(\dot {x}_N(t) = x_{N+1}(t) + b_0 \cdot u(t)\), corresponding to y(N)(t) = f(t) + b0 ⋅ u(t). This means that f(t) is the final element of the vector of state variables, xN+1(t) = f(t), and consequently, the last line correctly states \(\dot {x}_{N+1}(t) = \dot {f}(t)\).
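As a small illustration beyond the original text, the matrices of the extended plant model (3.8) can be built programmatically for arbitrary order N. The following Python sketch assumes NumPy is available; the function name eso_plant_model is merely a placeholder:

```python
import numpy as np

def eso_plant_model(N: int, b0: float):
    """Sketch: build A, b, and c^T of the extended plant model (3.8)
    for a given order N and critical gain estimate b0."""
    A = np.zeros((N + 1, N + 1))
    A[:N, 1:] = np.eye(N)        # integrator chain: x_i' = x_{i+1}
    b = np.zeros((N + 1, 1))
    b[N - 1, 0] = b0             # u enters the Nth state equation with gain b0
    cT = np.zeros((1, N + 1))
    cT[0, 0] = 1.0               # the measured output is the first state, y = x_1
    return A, b, cT

A, b, cT = eso_plant_model(N=2, b0=1.0)   # example: second-order ADRC
```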
When setting up a Luenberger observer in (3.9) to obtain an estimate of the states \(\hat {\boldsymbol {x}}^{\mathrm {T}}(t) = \begin {pmatrix} \hat {x}_1(t) & \cdots & \hat {x}_{N+1}(t) \end {pmatrix}\), only measurements of u(t) and y(t) are available. Omitting the “virtual input” \(\dot {f}(t)\) means \(\hat {x}_{N+1}(t)\) is treated as a constant total disturbance at the plant model input. That way, \(\hat {x}_{N+1}(t)\) serves as the integrator state of ADRC necessary to achieve zero steady-state error for constant reference values and constant disturbances. But of course, just like classical PI or PID controllers, ADRC will also handle time-varying reference signals and disturbances if observer and controller are fast enough. The closed-loop observer equations for ADRC are
$$\displaystyle \begin{aligned} \underbrace{\begin{pmatrix} \dot{\hat{x}}_1(t) \\ \vdots \\ \dot{\hat{x}}_{N+1}(t) \end{pmatrix}}_{\boldsymbol{\dot{\hat{x}}}(t)} = \underbrace{\begin{pmatrix} \boldsymbol{0}^{N \times 1} & \boldsymbol{I}^{N \times N} \\ 0 & \boldsymbol{0}^{1 \times N} \end{pmatrix}}_{\boldsymbol{A}} \cdot \underbrace{\begin{pmatrix} \hat{x}_1(t) \\ \vdots \\ \hat{x}_{N+1}(t) \end{pmatrix}}_{\hat{\boldsymbol{x}}(t)} + \underbrace{\begin{pmatrix} \boldsymbol{0}^{(N-1) \times 1} \\ b_0 \\ 0 \end{pmatrix}}_{\boldsymbol{b}} \cdot u(t) + \underbrace{\begin{pmatrix} l_1 \\ \vdots \\ l_{N+1} \end{pmatrix}}_{\boldsymbol{l}} \cdot \left( y(t) - \underbrace{\begin{pmatrix} 1 & \boldsymbol{0}^{1 \times N} \end{pmatrix}}_{\boldsymbol{c}^{\mathrm{T}}} \cdot \hat{\boldsymbol{x}}(t) \right) . {} \end{aligned} $$
(3.9)
Looking at the dynamics of the estimation error \(\boldsymbol {e}_{\boldsymbol {x}} = \boldsymbol {x} - \hat {\boldsymbol {x}}\) and its derivative, one arrives at the following equation (again ignoring the unavailable “virtual input”):
$$\displaystyle \begin{aligned} \boldsymbol{\dot{e}_x} = \frac{\mathrm{d}}{\mathrm{d}t} \left( \boldsymbol{x}(t) - \hat{\boldsymbol{x}}(t) \right) = \boldsymbol{\dot{x}}(t) - \boldsymbol{\dot{\hat{x}}}(t) = \left( \boldsymbol{A} - \boldsymbol{l} \boldsymbol{c}^{\mathrm{T}} \right) \cdot \left( \boldsymbol{x}(t) - \hat{\boldsymbol{x}}(t) \right). {} \end{aligned} $$
(3.10)
This means that the Luenberger observer gains \(\boldsymbol {l}^{\mathrm {T}} = \begin {pmatrix} l_1 & \cdots & l_{N+1} \end {pmatrix}\) must, as part of an ADRC design procedure, be chosen to let the estimation error converge to zero in a stable manner—and quickly enough to provide an estimate of the total disturbance for effective disturbance rejection in (3.3), as well as estimates of y(t) and its derivatives required for computing the state-feedback control signal u0(t). We will cover observer tuning by setting the eigenvalues of \(\left ( \boldsymbol {A} - \boldsymbol {l} \boldsymbol {c}^{\mathrm {T}} \right )\) in Sect. 3.2.2.
If we summarize the signals estimated by the observer,
$$\displaystyle \begin{aligned} \hat{\boldsymbol{x}}(t) = \begin{pmatrix} \hat{x}_1(t) \\ \hat{x}_2(t) \\ \vdots \\ \hat{x}_N(t) \\ \hat{x}_{N+1}(t) \end{pmatrix} = \begin{pmatrix} \hat{y}(t) \\ \dot{\hat{y}}(t) \\ \vdots \\ \hat{y}^{(N - 1)}(t) \\ \hat{f}(t) \end{pmatrix} , {} \end{aligned} $$
(3.11)
it is evident that we now have all the signals available to implement the two control loops: \(\hat {f}\) for disturbance rejection using (3.3) in the inner loop and estimated derivatives of y for state-feedback control of the integrator chain in the outer loop. For the latter, we can modify the control law (3.6) accordingly:
$$\displaystyle \begin{aligned} u_0(t) = k_1 \cdot \left(r(t) - \hat{y}(t)\right) - k_2 \cdot \dot{\hat{y}}(t) - \ldots - k_N \cdot \hat{y}^{(N-1)}(t) {} . \end{aligned} $$
(3.12)

3.1.5 Putting It All Together

Combining all previous results from Sect. 3.1, we can now summarize the equations that constitute linear ADRC, using the state variables \(\hat {\boldsymbol {x}}(t)\) of the observer listed in (3.11). The inner loop can be realized as
$$\displaystyle \begin{aligned} u(t) = \frac{u_0(t) - \hat{f}(t)}{b_0} = \frac{u_0(t) - \hat{x}_{N+1}(t)}{b_0} , {} \end{aligned} $$
(3.13)
while the state-feedback control law of the outer loop can be implemented using
$$\displaystyle \begin{aligned} u_0(t) = k_1 \cdot r(t) - \begin{pmatrix} k_1 & \cdots & k_N \end{pmatrix} \cdot \begin{pmatrix} \hat{x}_1(t) & \cdots & \hat{x}_N(t) \end{pmatrix}^{\mathrm{T}} . {} \end{aligned} $$
(3.14)
Putting (3.14) into (3.13) yields the combined, overall control law of ADRC:
$$\displaystyle \begin{gathered} u(t) = \frac{1}{b_0} \cdot \left( k_1 \cdot r(t) - \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(t) \right) {} \\ \text{with}\quad \boldsymbol{k}^{\mathrm{T}} = \begin{pmatrix} k_1 & \cdots & k_N \end{pmatrix} \quad \text{and}\quad \hat{\boldsymbol{x}} = \begin{pmatrix} \hat{x}_1 & \cdots & \hat{x}_{N+1} \end{pmatrix}^{\mathrm{T}} \notag . \end{gathered} $$
(3.15)
In (3.15), b0 is the critical gain parameter to be obtained during plant modeling and kT the vector of state-feedback gains that have to be tuned to achieve desired closed-loop dynamics. The state vector \(\hat {\boldsymbol {x}}(t)\) is being estimated by a Luenberger observer with a feedback gain vector l, to be tuned for faster response than the outer loop:
$$\displaystyle \begin{gathered} \boldsymbol{\dot{\hat{x}}}(t) = \boldsymbol{A} \cdot \hat{\boldsymbol{x}}(t) + \boldsymbol{b} \cdot u(t) + \boldsymbol{l} \cdot \left( y(t) - \boldsymbol{c}^{\mathrm{T}} \cdot \hat{\boldsymbol{x}}(t) \right) {} \\ \text{with}\quad \boldsymbol{A} = \begin{pmatrix} \boldsymbol{0}^{N \times 1} & \boldsymbol{I}^{N \times N} \\ 0 & \boldsymbol{0}^{1 \times N} \end{pmatrix} ,\ \boldsymbol{b} = \begin{pmatrix} \boldsymbol{0}^{(N-1) \times 1} \\ b_0 \\ 0 \end{pmatrix} ,\ \boldsymbol{l} = \begin{pmatrix} l_1 & \cdots & l_{N+1} \end{pmatrix}^{\mathrm{T}} ,\ \boldsymbol{c}^{\mathrm{T}} = \begin{pmatrix} 1 & \boldsymbol{0}^{1 \times N} \end{pmatrix} \notag . \end{gathered} $$
(3.16)
Figure 3.5 summarizes all equations of ADRC in block diagram form. Being based on the combined control law (3.15), the two control loops have apparently been merged. Nonetheless, they are present, and it is still possible to recognize the two core ideas of the inner loop: plant gain inversion (implemented using the gain block \(\frac {1}{b_0}\)) and disturbance rejection. The details of the latter are a bit hidden in the vector form of signals and gains but surface when analyzing the two instruments from linear control theory employed in ADRC:
  • Extended state observer (ESO): For linear ADRC of order N, an observer as given in (3.9) estimates a filtered version of the plant output y and its first (N − 1) derivatives, extended by an additional state \(\hat {x}_{N+1}\) representing the total disturbance. Due to this extension, in the context of ADRC, the name extended state observer is commonly used, but be assured that the tool employed here is the well-known Luenberger observer—it is just a matter of the particular choice of the model inside.
    Within the observer, the total disturbance is being modeled as a constant input disturbance to a plant with integrator chain dynamics. Note that a constant input disturbance represents the most common case in practice and the one corresponding to the disturbance rejection abilities of PID controllers. If required by the application, more sophisticated disturbance models can be incorporated into the observer, as well. 2
  • State feedback: A linear feedback of the states \(\hat {\boldsymbol {x}}\) combines inner and outer control loops: firstly, a state-feedback controller with gain kT for the integrator chain plant constituted by the inner loop (states \(\hat {x}_1, \ldots , \hat {x}_N\) estimated by the observer), and secondly, a unity-gain feedback of the state \(\hat {x}_{N+1}\), represented by the factor 1 as part of the feedback gain vector \(\begin {pmatrix} \boldsymbol {k}^{\mathrm {T}} & 1 \end {pmatrix}\). The latter implements rejection of the total disturbance in the inner loop. Finally, the output of the state-feedback controller is multiplied by a factor \(\frac {1}{b_0}\), the reciprocal value of the plant gain b0. Creating a unity-gain plant for the outer state-feedback controller considerably eases its tuning, as discussed in Sect. 3.1.3 and once more to be seen in Sect. 3.2.1.
Controller and observer in (3.15) and (3.16) are here given for the general, Nth-order case. In practice, N = 1 and N = 2 are most relevant, corresponding to the use cases of PI and PID controllers. For those, Table A.​1.​2 on page 192 of the Appendix specifies A and b in unabbreviated form.
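For N = 2, for example, the quantities in the control law (3.15) and the observer (3.16) read, written out in full:
$$\displaystyle \boldsymbol{A} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} ,\quad \boldsymbol{b} = \begin{pmatrix} 0 \\ b_0 \\ 0 \end{pmatrix} ,\quad \boldsymbol{c}^{\mathrm{T}} = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix} ,\quad \boldsymbol{l} = \begin{pmatrix} l_1 \\ l_2 \\ l_3 \end{pmatrix} ,\quad u(t) = \frac{1}{b_0} \cdot \left( k_1 \cdot r(t) - \begin{pmatrix} k_1 & k_2 & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(t) \right) .$$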

3.2 Parameter Tuning

Let us briefly recapitulate parameters that need to be selected or tuned for ADRC. Plant modeling involves the plant gain b0 and order N—refer to Table 3.1 for examples. The latter will influence the number of observer state variables (N + 1) and hence the number of feedback gains that must be tuned. We will now address computing those N controller gains in kT and (N + 1) observer gains in l.

3.2.1 Tuning the Controller

The combined control law (3.15) consists of disturbance rejection, plant gain inversion, and the output of an outer linear state-feedback controller. As discussed in Sect. 3.1.2, the disturbance-compensating feedback of \(\hat {x}_{N+1}(t)\) can, in conjunction with plant gain inversion, be viewed as an inner control loop. As depicted in Fig. 3.6, this creates a “virtual plant,” which is then to be controlled by the (outer) state-feedback controller (3.14).
If we assume the observer to be fast enough, rejecting the total disturbance will be so effective that the state-feedback controller appears to control a plant with unity-gain integrator chain behavior—regardless of the actual plant. This is the result of “normalizing” the plant behavior in Sect. 3.1.2, which makes controller tuning very easy. A state-space representation of such a virtual plant y(N)(t) = u0(t) is
$$\displaystyle \begin{gathered} \boldsymbol{\dot{x}}_{\mathrm{VP}}(t) = \boldsymbol{A}_{\mathrm{VP}} \cdot \boldsymbol{x}_{\mathrm{VP}}(t) + \boldsymbol{b}_{\mathrm{VP}} \cdot u_0(t) ,\quad y(t) = \boldsymbol{c}_{\mathrm{VP}}^{\mathrm{T}} \cdot \boldsymbol{x}_{\mathrm{VP}}(t) {} \\ \text{with}\ \ \ \boldsymbol{A}_{\mathrm{VP}} = \begin{pmatrix} \boldsymbol{0}^{(N-1) \times 1} & \boldsymbol{I}^{(N-1) \times (N-1)} \\ 0 & \boldsymbol{0}^{1 \times (N-1)} \end{pmatrix} ,\ \boldsymbol{b}_{\mathrm{VP}} = \begin{pmatrix} \boldsymbol{0}^{(N-1) \times 1} \\ 1 \end{pmatrix} ,\ \boldsymbol{c}_{\mathrm{VP}}^{\mathrm{T}} = \begin{pmatrix} 1 & \boldsymbol{0}^{1 \times (N-1)} \end{pmatrix} \notag , \end{gathered} $$
(3.17)
in which the state variables are defined to match the meaning of the first N state variables in the observer model of (3.9): xVP,1(t) equals the output y(t), and xVP,i+1(t) is the ith derivative of y(t) for i = 1, …, N − 1.
A possible and common approach to shape the closed-loop behavior is pole placement, i.e., computing the N gains of the vector kT such that the system matrix of the closed loop \( \left ( \boldsymbol {A}_{\mathrm {VP}} - \boldsymbol {b}_{\mathrm {VP}} \boldsymbol {k}^{\mathrm {T}} \right )\) features N eigenvalues as desired: the poles of the closed-loop transfer function.
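As a side note beyond the original text, for the companion-structured virtual plant (3.17) such a pole placement does not require dedicated tooling: as (3.18) below makes explicit, the gains ki are exactly the coefficients of the desired characteristic polynomial and can therefore be read off directly. A minimal Python sketch, with real-valued desired pole locations chosen purely for illustration:

```python
import numpy as np

# Desired closed-loop poles for N = 3 (an arbitrary illustrative choice)
desired_poles = [-1.0, -2.0, -4.0]

# Coefficients of (lambda - p_1)...(lambda - p_N), highest power first:
# [1, c_{N-1}, ..., c_0] with char. polynomial lambda^N + c_{N-1} lambda^{N-1} + ... + c_0
coeffs = np.poly(desired_poles)

# Per (3.18), the polynomial equals lambda^N + k_N lambda^{N-1} + ... + k_2 lambda + k_1,
# so k_1 = c_0, ..., k_N = c_{N-1}:
k = coeffs[1:][::-1]
print(k)   # [ 8. 14.  7.]  ->  k_1 = 8, k_2 = 14, k_3 = 7
```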
The freedom of selecting N eigenvalues may, however, be experienced as a burden by many users. A practical approach greatly simplifying the controller tuning process is known under the name bandwidth parameterization and consists of pole placement with all poles at a common location λ = −ωCL, the desired closed-loop bandwidth. The characteristic polynomial of the closed-loop system matrix then becomes
$$\displaystyle \begin{aligned} \left( \lambda + \omega_{\mathrm{CL}} \right)^N \stackrel{!}{=} \det\left( \lambda \boldsymbol{I} - \left( \boldsymbol{A}_{\mathrm{VP}} - \boldsymbol{b}_{\mathrm{VP}} \boldsymbol{k}^{\mathrm{T}} \right) \right) = \lambda^{N} + k_N \lambda^{N-1} + \ldots + k_2 \lambda + k_1 . {} \end{aligned} $$
(3.18)
After applying binomial expansion to the left-hand side of (3.18), the controller gains ki can be read off as follows:
$$\displaystyle \begin{aligned} k_i = \frac{N!}{(N-i+1)! \cdot (i-1)!} \cdot \omega_{\mathrm{CL}}^{N-i+1} \quad \forall i = 1, \ldots, N . {} \end{aligned} $$
(3.19)
While the generic expression for an Nth-order system in (3.19) might appear a bit convoluted, very simple tuning rules relating the desired bandwidth ωCL to the ki gains emerge for the two cases most relevant in practice:
  • For N = 1, k1 = ωCL.
  • For N = 2, \(k_1 = \omega _{\mathrm {CL}}^2\)  and  k2 = 2ωCL.
Example 3.1
To demonstrate the use of (3.19), controller gains shall be computed for second-order ADRC (N = 2) with a desired closed-loop bandwidth of 0.2 Hz, i.e., ωCL = 2π ⋅ 0.2 rad∕s. Solution: \(k_1 = \omega _{\mathrm {CL}}^2 \approx 1.579\,\frac {1}{\mathrm {s}^2}\) and \(k_2 = 2 \omega _{\mathrm {CL}} \approx 2.513\,\frac {1}{\mathrm {s}}\).
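A few lines of Python (an illustrative sketch, not taken from the book; the function name controller_gains is a placeholder) reproduce this computation directly from (3.19):

```python
from math import factorial, pi

def controller_gains(N: int, omega_cl: float):
    """Bandwidth parameterization (3.19): all N closed-loop poles at -omega_cl."""
    return [factorial(N) // (factorial(N - i + 1) * factorial(i - 1)) * omega_cl ** (N - i + 1)
            for i in range(1, N + 1)]

print(controller_gains(2, 2 * pi * 0.2))   # Example 3.1: approx. [1.579, 2.513]
```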
Frequency-Domain Derivation
For readers unfamiliar with the process of placing eigenvalues in a state-feedback control system, one can arrive at the same result in the frequency domain. Taking the Laplace transform of the (idealized) closed-loop behavior (3.7) and comparing it with a critically damped Nth-order low-pass filter (only real poles) yields
$$\displaystyle \begin{aligned} G_{\mathrm{CL}}(s) = \frac{y(s)}{r(s)} = \frac{ 1 }{ \frac{1}{k_1} s^N + \frac{k_N}{k_1} \cdot s^{N-1} + \ldots + \frac{k_2}{k_1} \cdot s + 1 } \stackrel{!}{=} \frac{ 1 }{ \left( \displaystyle\frac{s}{\omega_{\mathrm{CL}}} + 1 \right)^N } , \end{aligned}$$
where, after multiplying both sides with \(k_1 = \omega _{\mathrm {CL}}^N\), we can compare the denominators more easily:
$$\displaystyle \begin{aligned} s^N + k_N \cdot s^{N-1} + \ldots + k_2 \cdot s + k_1 \stackrel{!}{=} \left( s + \omega_{\mathrm{CL}} \right)^N . {} \end{aligned} $$
(3.20)
Obviously (3.20) has the same structure as (3.18), which means one will obtain the same results for the controller gains ki as given in (3.19), e.g., \(k_1 = \omega _{\mathrm {CL}}^N\) and kN = N ⋅ ωCL.
Tuning Based on Time-Domain Characteristics
A time-domain interpretation may simplify choosing the frequency-domain tuning goal ωCL. A common measure in the time domain is the settling time of a control loop, i.e., the time after which the controlled variable remains within a certain percentage of its final value. Various approximations balancing simplicity against accuracy can be found, such as \(\omega _{\mathrm {CL}} \approx \frac {4}{T_{\mathrm {settle},98\%}}\) and \(\omega _{\mathrm {CL}} \approx \frac {6}{T_{\mathrm {settle},98\%}}\) for the first- and second-order cases, respectively. As a more accurate approximation, the following relation provides an ωCL value leading to the desired 98% settling time with less than 1% error for orders from N = 1 to N = 6, as shown in Fig. 3.7:
$$\displaystyle \begin{aligned} \omega_{\mathrm{CL}} \approx \frac{1.9 + 2.09 N - 0.07 N^2}{T_{\mathrm{settle},98\%}} . {} \end{aligned} $$
(3.21)
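For instance, for N = 2 and a desired 98% settling time of 5 s, (3.21) yields \(\omega_{\mathrm{CL}} \approx \frac{1.9 + 2.09 \cdot 2 - 0.07 \cdot 2^2}{5\,\mathrm{s}} = 1.16\,\mathrm{rad/s}\).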

3.2.2 Tuning the Observer

Computing the feedback gain vector l of the ESO will conclude the tuning procedure required for linear ADRC. As with the controller gain vector kT, there is freedom to choose any possible design approach for l. A pragmatic choice is to employ pole placement with bandwidth parameterization for the observer as well, again to lower the overall number of tuning parameters.
The total disturbance must be estimated so fast that the inner loop of Fig. 3.6 (“virtual plant”) maintains the desired integrator chain behavior. Also, the derivatives of the plant output must be provided by the observer fast enough to serve as a meaningful basis for the outer state-feedback controller. This is, as usual in observer-based control, only possible if the observer bandwidth ωO exceeds the desired closed-loop bandwidth ωCL. Therefore, in this book, we will express the observer bandwidth depending on the closed-loop bandwidth using a relative factor kESO:
$$\displaystyle \begin{aligned} \omega_{\mathrm{O}} = k_{\mathrm{ESO}} \cdot \omega_{\mathrm{CL}} . {} \end{aligned} $$
(3.22)
Using kESO shall emphasize that the two bandwidths cannot be selected independently. As a rule of thumb, the observer should be tuned for a three- to tenfold bandwidth compared to ωCL. In practice, an upper bound will often be determined by the acceptable noise level of the control signal. A lower bound is reached when the closed-loop dynamics start to deviate significantly from the intended design, i.e., when the observer is too slow to fully reject disturbances and maintain the “virtual plant” behavior.
The characteristic polynomial of the observer system matrix \(\left ( \boldsymbol {A} - \boldsymbol {l} \boldsymbol {c}^{\mathrm {T}} \right )\) in the case of bandwidth parameterization for a common pole location λ = −ωO = −kESO ⋅ ωCL has to satisfy the following equation:
$$\displaystyle \begin{aligned} \left( \lambda + k_{\mathrm{ESO}} \cdot \omega_{\mathrm{CL}} \right)^{N+1} \stackrel{!}{=} \det\left( \lambda \boldsymbol{I} - \left( \boldsymbol{A} - \boldsymbol{l} \boldsymbol{c}^{\mathrm{T}} \right) \right) = \lambda^{N+1} + l_1 \lambda^N + \ldots + l_N \lambda + l_{N+1} . {} \end{aligned} $$
(3.23)
The similarity of (3.23) and (3.18) is not accidental, given the structural similarities of the A and AVP matrices and the b and cT vectors. Binomial expansion of the left-hand side of (3.23) yields the general equation for computing ADRC observer gains under the bandwidth parameterization approach:
$$\displaystyle \begin{aligned} l_i = \frac{(N+1)!}{(N-i+1)! \cdot i!} \cdot \left( k_{\mathrm{ESO}} \cdot \omega_{\mathrm{CL}} \right)^i \quad \forall i = 1, \ldots, N+1 . {} \end{aligned} $$
(3.24)
Particularly relevant in practice are values for the first- and second-order cases:
  • For N = 1, l1 = 2kESOωCL  and  \(l_2 = k_{\mathrm {ESO}}^2 \omega _{\mathrm {CL}}^2\).
  • For N = 2, l1 = 3kESOωCL,  \(l_2 = 3 k_{\mathrm {ESO}}^2 \omega _{\mathrm {CL}}^2\),  and  \(l_3 = k_{\mathrm {ESO}}^3 \omega _{\mathrm {CL}}^3\).
Example 3.2
Continuing Example 3.1, the observer gains shall now be computed, with an observer bandwidth that is kESO = 5 times higher than the closed-loop bandwidth ωCL. Solution: \(l_1 = 3 k_{\mathrm {ESO}} \omega _{\mathrm {CL}} \approx 18.85\,\frac {1}{\mathrm {s}}\), \(l_2 = 3 k_{\mathrm {ESO}}^2 \omega _{\mathrm {CL}}^2 \approx 118.4\,\frac {1}{\mathrm {s}^2}\), \(l_3 = k_{\mathrm {ESO}}^3 \omega _{\mathrm {CL}}^3 \approx 248.1\,\frac {1}{\mathrm {s}^3}\). Hint: If you cannot wait to see this controller/observer pair in action, have a peek at Sect. 5.2, where we present a first closed-loop simulation example using exactly these parameters.
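Again as an illustrative sketch beyond the original text (the function name observer_gains is a placeholder), the observer gains of Example 3.2 can be computed from (3.24) and cross-checked by inspecting the eigenvalues of the observer system matrix:

```python
import numpy as np
from math import factorial, pi

def observer_gains(N: int, omega_o: float):
    """Bandwidth parameterization (3.24): all N+1 observer poles at -omega_o."""
    return [factorial(N + 1) // (factorial(N - i + 1) * factorial(i)) * omega_o ** i
            for i in range(1, N + 2)]

N = 2
omega_o = 5 * 2 * pi * 0.2                 # k_ESO = 5, omega_CL = 2*pi*0.2 rad/s
l = np.array(observer_gains(N, omega_o))
print(l)                                   # approx. [18.85, 118.4, 248.1]

# Cross-check: all eigenvalues of (A - l c^T) should lie (numerically close) at -omega_o
A = np.zeros((N + 1, N + 1)); A[:N, 1:] = np.eye(N)
cT = np.zeros((1, N + 1)); cT[0, 0] = 1.0
print(np.linalg.eigvals(A - l.reshape(-1, 1) @ cT))   # approx. three times -6.28
```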

3.3 Summary and Outlook

Whenever we introduce a new variant of ADRC in this book that can be put into practice—be it in the form of a simulation model or later as a custom software implementation—we will summarize the required steps in the form of a “cooking recipe.” We have now reached such a point for the first time.
Cooking Recipe (ADRC, State-Space Form)
To implement and tune linear active disturbance rejection control in its continuous-time state-space form, these steps have to be carried out:
1.
Plant modeling: For the dominant dynamic plant behavior, identify the order N and the parameter b0 of the plant model (3.2). Refer to Table 3.1 for some simple examples.
 
2a.
Controller tuning: ADRC of order N requires tuning of N controller gains k1,…,N. When using bandwidth parameterization for the desired closed-loop dynamics, these gain values are obtained from (3.19) or Table A.1.1. If necessary, use an approximation such as (3.21) to translate a time-domain specification into the tuning parameter ωCL.
 
2b.
Observer tuning: ADRC of order N requires (N + 1) observer gains l1,…,N+1. In the case of bandwidth parameterization, choosing a relative observer bandwidth kESO = 3, …, 10 is a good starting point. Use (3.24) or Table A.​1.​1 to compute the gains from kESO and ωCL.
 
3.
Implementation: Implement the state-space form of ADRC as shown in Fig. 3.5. Required matrices and vectors are defined in the equations of the controller (3.15) and the extended state observer (3.16).
 
To serve as a quick reference guide, all relevant equations of this chapter are once more compiled in Appendix A.1. We will stick to this pattern throughout this book: “cooking recipes” summarizing a new variant whenever it is introduced, with a detailed list of “ingredients” in the Appendix. Please bear in mind that a real-world controller contains more than the bare control law. Most notably, a limitation of the controller output is still missing here, along with some form of anti-windup protection. On our way to a practical implementation, we will come back to this repeatedly in this book.
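To make the cooking recipe above concrete, the following Python sketch—an illustration under assumed plant parameters, not the book's reference implementation and not the simulation example of Sect. 5.2—applies all three steps to a second-order plant whose true parameters are unknown to the controller, integrating the continuous-time equations (3.15) and (3.16) with SciPy:

```python
import numpy as np
from math import factorial, pi
from scipy.integrate import solve_ivp

# --- Step 1: assumed plant model information (illustrative values) ---
N, b0 = 2, 1.0

# --- Steps 2a/2b: bandwidth parameterization, (3.19) and (3.24) ---
omega_cl = 2 * pi * 0.2              # as in Example 3.1
k_eso = 5.0                          # as in Example 3.2
omega_o = k_eso * omega_cl
k = np.array([factorial(N) / (factorial(N - i + 1) * factorial(i - 1)) * omega_cl ** (N - i + 1)
              for i in range(1, N + 1)])
l = np.array([factorial(N + 1) / (factorial(N - i + 1) * factorial(i)) * omega_o ** i
              for i in range(1, N + 2)])

# --- Step 3: ESO matrices as defined in (3.16) ---
A = np.zeros((N + 1, N + 1)); A[:N, 1:] = np.eye(N)
b = np.zeros(N + 1);          b[N - 1] = b0
c = np.zeros(N + 1);          c[0] = 1.0

# Assumed "true" plant (unknown to ADRC): y'' + 3 y' + 2 y = 1.2 (u + d)
a0, a1, b_true = 2.0, 3.0, 1.2

def closed_loop(t, z):
    y, yd = z[0], z[1]                     # plant states
    xhat = z[2:]                           # observer states [y_hat, y_hat', f_hat]
    r = 1.0                                # unit step reference
    d = 0.5 if t >= 15.0 else 0.0          # input disturbance step at t = 15 s
    u = (k[0] * r - np.concatenate((k, [1.0])) @ xhat) / b0   # control law (3.15)
    ydd = -a1 * yd - a0 * y + b_true * (u + d)                # plant dynamics
    xhat_dot = A @ xhat + b * u + l * (y - c @ xhat)          # observer (3.16)
    return [yd, ydd, *xhat_dot]

sol = solve_ivp(closed_loop, (0.0, 30.0), np.zeros(2 + N + 1), max_step=1e-2)
print(f"y(30 s) = {sol.y[0, -1]:.4f}")     # should be close to the reference r = 1
```

Since the plant/model mismatch (b_true ≠ b0, nonzero a0 and a1) and the disturbance step are lumped into the estimated total disturbance, the output should settle back to the reference after the disturbance event, illustrating the rejection mechanism summarized in Fig. 3.5.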
Regarding the nomenclature, note again that we speak of Nth-order ADRC if the assumed plant model (3.2) is of order N, which means there are N controller gains, (N − 1) estimated derivatives, and at least (N + 1) state variables in the observer: output, derivatives, and disturbance model.
Finally, let us stress once more that we have adopted the linear variant of ADRC as the basis for this book given its several advantages:
  • Methods: One can build on a century of linear control systems theory, e.g., concerning tuning methods or stability analyses.
  • Recognition factor: Connections and similarities to the existing, “classical” control world can be seen more easily.
  • Entry barrier: Being well-rooted in “proven technology” helps to lower the barriers to adopting ADRC, be it from an engineering or a management perspective.
For brevity, we will often refer to this linear variant (and its various descendants and forms that will be introduced in the book) simply as “ADRC”—the acronym that we also use to describe the whole field. Whenever a distinction needs to be made or the property of linearity needs to be highlighted, we will use the acronym “LADRC.”
Next Steps: Deeper Understanding
To facilitate further reading, in the upcoming three segments we offer the reader guidance through the remainder of this book. The first set of reading options revolves around deepening the understanding of ADRC:
  • Chapter 4 discusses linear ADRC from a classical control engineering perspective. Firstly, the connection to observer-based state-space control as known from linear systems theory is established in Sect. 4.​1. For readers with a preference for frequency rather than time-domain tools, Sect. 4.​2 derives transfer function representations of linear ADRC.
  • Chapter 5 continues with a comprehensive, visual-heavy analysis in both time and frequency domains, including the following aspects: How to interpret or obtain the critical gain parameter of a plant (Sect. 5.​1)? What is the role of the estimated state variables, notably the total disturbance, in a typical control scenario (Sect. 5.​2)? What is the influence of the “tuning knobs” (bandwidth, observer bandwidth, and critical gain parameter) on tracking performance, disturbance rejection abilities, and stability (Sect. 5.​3)? Part of Sect. 5.​3.​3 will also be a frequency-domain comparison to the transfer functions of PI and PID controllers. Finally, ADRC’s ability to cope with parametric or structural uncertainties of the plant to be controlled is examined in Sect. 5.​4.
  • Chapter 7 will make you familiar with relevant literature from the field of ADRC.
Next Steps: Implementation
The intention of this book is to guide from principles to practical implementation. In Chap. 8, it therefore works its way up to ready-to-use discrete-time forms of linear ADRC suitable for a (usually software-based) implementation.
  • A discrete-time form in state space is derived in Sect. 8.​1. This is the first ADRC variant that can actually be implemented in software and the basis for all other discrete-time implementations covered in this book.
  • Based on that, a discrete-time transfer function form is introduced in Sect. 8.​2. This form is very similar to classical digital controllers but—as will be seen—does not retain all features from the state-space variant.
  • A second discrete-time transfer function form with two feedback loops is introduced in Sect. 8.​3, keeping all features from the state-space variant. It can be implemented with a very low computational footprint, thus combining the advantages of the state space and transfer function worlds.
Depending on the reader’s previous experience, descending from an “equation level” view of ADRC to the lowlands of an actual (software) implementation might or might not appear easy. As this topic is (unfortunately, in our view) not always part of control engineering curricula, we have dedicated Chap. 10 to support this transition in both model-based (Sect. 10.​1) and handwritten source code implementations (Sect. 10.​2). Our application examples in Chap. 11 will make use of these.
Next Steps: Customization
It has to be stressed that what you have just seen in this chapter is only one form of ADRC. But please do not view ADRC as an immutable set of equations, but rather as a certain way of looking at, understanding, and solving control problems. From the core that we chose with linear ADRC, further variants and implementations can be derived. Many variations and extensions exist, and they can affect all of its ingredients. The following ones are covered in this book:
  • Incorporation of additional plant model information in the observer, deviating from the simple integrator chain model, again to improve the dynamic performance. We will examine in Sect. 6.​1 if and how additional information about the plant or disturbance improves tracking behavior and disturbance rejection.
  • In Sect. 6.​2, we will examine if and how additional information about the reference signal derivatives improves the tracking behavior.
  • If needed, some performance aspects of ADRC can potentially be improved by adding nonlinearity to its structure. Different tools and methods for improving tracking and estimation quality utilizing nonlinear ADRC schemes are discussed in Sect. 6.​3.
  • Error-based ADRC allows obtaining implementations even more similar to classical control engineering and is introduced in Sect. 6.4. It can be combined particularly well with arbitrary reference signal prefilters, allowing true two-degree-of-freedom behavior to be accomplished. The implementation of discrete-time error-based ADRC can be found in Sect. 8.4.
  • Further practical issues such as dealing with control signal limitations, smooth controller enabling, measurement noise, or process dead time are discussed in Chap. 9.
As you see, nothing is set in stone. Yet we have made specific choices in this book for good reasons, trying to guide you from principles to practice in a consistent manner. You can view linear ADRC from this chapter as a sound, practically proven starting point—a foundation you can build on and customize to your application’s needs.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1
While knowledge of the ai parameters is not required, it does not have to be thrown away when available. One can indeed improve the performance of the control loop using a more detailed plant model. In Sect. 6.​1 we will briefly discuss this topic.
 
2
We will shed some more light on this in Sect. 4.​1 and provide an example in Sect. 6.​1.
 
Metadata
Title
Linear Active Disturbance Rejection Control
Authors
Gernot Herbst
Rafal Madonski
Copyright Year
2025
DOI
https://doi.org/10.1007/978-3-031-72687-3_3