Open Access 2025 | OriginalPaper | Book Chapter

6. Extensions and Modifications

Authors: Gernot Herbst, Rafal Madonski

Published in: Active Disturbance Rejection Control

Publisher: Springer Nature Switzerland


Abstract

ADRC is not a fixed set of equations, and it should therefore not be viewed as a magical set of formulae that, once implemented on the real system, will do wonders. It should instead be understood as a modular, flexible framework that can be tailored to specific applications. This, however, requires a somewhat different mindset, one that is open to change. In this chapter, we show potential extensions and modifications of the ADRC scheme that take into account practical scenarios where certain actionable information is available about the governed system and/or the acting disturbance. We also show how the performance of ADRC can be increased, at the price of additional tuning, by incorporating some nonlinear components. Finally, we introduce a so-called error-based variant of the ADRC scheme, which can be considered a bare-bones version and has some interesting practical characteristics.

6.1 Availability of Additional Model Information

The simplicity of the integrator chain plant model (3.​2) assumed so far is a key strength of ADRC, taking away most of the modeling burden from the user. However, additional model information might already be available from domain knowledge, previous identification runs, data sheets, etc. Incorporating this knowledge into the observer may deliver better results, as there are fewer uncertainties the observer must deal with—with a model, one can more precisely estimate the system states, instead of moving modeling uncertainties to the lumped total disturbance. Similarly one can benefit from a customized disturbance generator model if the assumption of a constant disturbance does not hold for an application.
Throughout the remainder of this book, we refrain from using a more detailed model of either plant or disturbance, to keep the modeling requirements at a minimum and position ADRC as a general-purpose controller. Yet in this section, we will demonstrate how the incorporation of additional model information can be achieved by following and adapting the generic state-space derivation of ADRC presented in Sect. 4.​1, should the need ever arise. The resulting controller will have the structure shown in Fig. 4.​1, and the following steps have to be carried out:
1.
Provide a model of the plant and the disturbance generator in the following form:
$$\displaystyle \begin{aligned} &\boldsymbol{\dot{x}}(t) = \boldsymbol{A} \cdot \boldsymbol{x}(t) + \boldsymbol{b} \cdot u(t) + \boldsymbol{e} \cdot d(t) ,\quad y(t) = \boldsymbol{c}^{\mathrm{T}} \cdot \boldsymbol{x}(t) , \\ &\boldsymbol{\dot{x}}_{\mathrm{d}}(t) = \boldsymbol{A}_{\mathrm{d}} \cdot \boldsymbol{x}_{\mathrm{d}}(t) ,\quad d(t) = \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} \cdot \boldsymbol{x}_{\mathrm{d}}(t) . \end{aligned} $$
 
2.
Insert these matrices and vectors in the observer (4.​4), whose dynamics are repeated here for convenience:
$$\displaystyle \begin{aligned} \begin{pmatrix} \boldsymbol{\dot{\hat{x}}}(t) \\ \boldsymbol{\dot{\hat{x}}}_{\mathrm{d}}(t) \end{pmatrix} = \underbrace{\begin{pmatrix} \boldsymbol{A} & \boldsymbol{e} \cdot \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} \\ \boldsymbol{0} & \boldsymbol{A}_{\mathrm{d}} \end{pmatrix}}_{\boldsymbol{A}_{\mathrm{o}}} \cdot \begin{pmatrix} \hat{\boldsymbol{x}}(t) \\ \hat{\boldsymbol{x}}_{\mathrm{d}}(t) \end{pmatrix} + \underbrace{\begin{pmatrix} \boldsymbol{b} \\ \boldsymbol{0} \end{pmatrix}}_{\boldsymbol{b}_{\mathrm{o}}} \cdot u(t) + \boldsymbol{k}_{\mathrm{o}} \cdot \left( y(t) - \underbrace{\begin{pmatrix} \boldsymbol{c}^{\mathrm{T}} & 0 \end{pmatrix}}_{\boldsymbol{c}^{\mathrm{T}}_{\mathrm{o}}} \cdot \begin{pmatrix} \hat{\boldsymbol{x}}(t) \\ \hat{\boldsymbol{x}}_{\mathrm{d}}(t) \end{pmatrix} \right) . \end{aligned}$$
The control law can now be built upon that, combining feedback of the estimated plant states and the states of the disturbance model:
$$\displaystyle \begin{aligned} u(t) = k_{\mathrm{s}} \cdot r(t) - \boldsymbol{k}_{\mathrm{c}}^{\mathrm{T}} \cdot \hat{\boldsymbol{x}}(t) - \boldsymbol{k}_{\mathrm{d}}^{\mathrm{T}} \cdot \hat{\boldsymbol{x}}_{\mathrm{d}}(t) . \end{aligned}$$
 
3.
Design the feedback gain \(\boldsymbol {k}_{\mathrm {d}}^{\mathrm {T}}\) of the disturbance compensation to minimize the impact of the disturbance on the plant states. Full disturbance rejection is possible if \(\boldsymbol {b} \cdot \boldsymbol {k}_{\mathrm {d}}^{\mathrm {T}} = \boldsymbol {e} \cdot \boldsymbol {c}^{\mathrm {T}}_{\mathrm {d}}\) can be achieved and the observer dynamics are fast enough.
 
4.
Compute the gains of the outer state-feedback controller \(\boldsymbol {k}_{\mathrm {c}}^{\mathrm {T}}\) to move the poles of the plant as desired for the closed-loop dynamics, which are determined by the eigenvalues of \(\left (\boldsymbol {A} - \boldsymbol {b} \boldsymbol {k}_{\mathrm {c}}^{\mathrm {T}} \right )\). The bandwidth parameterization approach, described in Sect. 3.​2.​1, can be used here, of course.
 
5.
Compute the observer gains ko to move the observer poles, which are determined by the eigenvalues of \(\left ( \boldsymbol {A}_{\mathrm {o}} - \boldsymbol {k}_{\mathrm {o}} \boldsymbol {c}^{\mathrm {T}}_{\mathrm {o}} \right )\), to the left of the closed-loop poles as desired and admissible by the application. Bandwidth parameterization may again be employed here to simplify pole placement.
 
6.
Compute the static gain ks to compensate the closed-loop gain using (4.​6), which is once more repeated here for convenience:
$$\displaystyle \begin{aligned} k_{\mathrm{s}} = -\left[ \boldsymbol{c}^{\mathrm{T}} \cdot \left( \boldsymbol{A} - \boldsymbol{b} \boldsymbol{k}_{\mathrm{c}}^{\mathrm{T}} \right)^{-1} \cdot \boldsymbol{b} \right]^{-1} . \end{aligned}$$
 
It should be noted that incorporating additional model information will always lead to custom controller and/or observer gain values. This means that the results and coefficients provided in the reference Chap. A of the Appendix will, in general, no longer be valid, and the specific gains have to be derived again, which increases the effort required to implement ADRC. Whether the additional performance obtained by this approach is worth the effort should be decided on a case-by-case basis. To give some first hints, one example incorporating a more detailed plant model is provided in the following, as well as another example with a customized disturbance model.
Example: Using Additional Plant Model Information
Let us examine a simple yet generic example of a plant with Nth-order low-pass behavior, whose transfer function P(s) is
$$\displaystyle \begin{aligned} P(s) = \frac{y(s)}{u(s)} = \frac{b_0}{s^N + a_{N-1} s^{N-1} + \ldots + a_1 s + a_0} . \end{aligned}$$
A state-space description of this plant can be given using the following matrix and vectors:
$$\displaystyle \begin{aligned} \boldsymbol{A} = \begin{pmatrix} \boldsymbol{0}^{(N-1) \times 1} & \boldsymbol{I}^{(N-1) \times (N-1)} \\ -a_0 & -a_1 \cdots -a_{N-1} \end{pmatrix} ,\quad \boldsymbol{b} = \begin{pmatrix} \boldsymbol{0}^{(N-1) \times 1} \\ b_0 \end{pmatrix} ,\quad \boldsymbol{c}^{\mathrm{T}} = \begin{pmatrix} 1 & \boldsymbol{0}^{1 \times (N-1)} \end{pmatrix} . \end{aligned}$$
Comparing these values with (4.​8) for “standard” ADRC (as introduced in Chap. 3) reveals that additional model information for this particular plant is in the a coefficients in the Nth row of the A matrix. We leave the assumption of a constant disturbance at the plant input unchanged, which means we maintain the disturbance generator model (4.​7) and gain vector (4.​9):
$$\displaystyle \begin{aligned} \dot{x}_{\mathrm{d}}(t) = 0 ,\quad d(t) = x_{\mathrm{d}}(t) \quad\Longrightarrow\quad \boldsymbol{A}_{\mathrm{d}} = 0 ,\ \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} = 1 ,\qquad \boldsymbol{e} = \begin{pmatrix} \boldsymbol{0}^{1 \times (N-1)} & 1 \end{pmatrix}^{\mathrm{T}} . \end{aligned}$$
Following the steps outlined before, we arrive at controller and observer gains \(\boldsymbol{k}_{\mathrm{c}}^{\mathrm{T}}\) and \(\boldsymbol{k}_{\mathrm{o}}\), a compensating gain \(k_{\mathrm{s}}\), and a disturbance compensation gain \(k_{\mathrm{d}}\) (which remains unchanged at \(k_{\mathrm{d}} = \frac{1}{b_0}\) compared to (4.13)). A numerical example for a second-order plant is presented in Fig. 6.1, with a comparison to “standard” ADRC.
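To make these steps concrete, the following Python sketch carries out the gain computations for a hypothetical second-order low-pass plant; the plant parameters a0, a1, b0 and the tuning values ωCL, kESO are assumed for illustration only and are not the values behind Fig. 6.1. Pole placement is done via Ackermann's formula, so that all controller and observer poles can be placed at one location, in the spirit of bandwidth parameterization.

```python
import numpy as np

def matrix_char_poly(A, poles):
    """Evaluate the desired characteristic polynomial p(s) = prod_i (s - p_i) at the matrix A."""
    coeffs = np.poly(poles)                      # highest-order coefficient first
    P = np.zeros_like(A)
    for c in coeffs:                             # Horner's method for matrix polynomials
        P = P @ A + c * np.eye(A.shape[0])
    return P

def observer_gains(A, cT, poles):
    """Ackermann's formula (dual form) for a single-output observer: places the poles of (A - l*cT)."""
    n = A.shape[0]
    O = np.vstack([cT @ np.linalg.matrix_power(A, i) for i in range(n)])   # observability matrix
    e_n = np.zeros((n, 1)); e_n[-1, 0] = 1.0
    return matrix_char_poly(A, poles) @ np.linalg.solve(O, e_n)

# Hypothetical second-order low-pass plant: y'' + a1*y' + a0*y = b0*u
a0, a1, b0 = 4.0, 2.0, 1.0
A  = np.array([[0.0, 1.0], [-a0, -a1]])
b  = np.array([[0.0], [b0]])
cT = np.array([[1.0, 0.0]])
e  = np.array([[0.0], [1.0]])                    # constant disturbance acting at the plant input

Ad, cdT = np.array([[0.0]]), np.array([[1.0]])   # constant-disturbance generator
Ao = np.block([[A, e @ cdT], [np.zeros((1, 2)), Ad]])
co = np.hstack([cT, np.zeros((1, 1))])

w_cl, k_eso = 5.0, 10.0                          # assumed tuning values
kcT = observer_gains(A.T, b.T, -w_cl * np.ones(2)).T          # step 4: state feedback via duality
ko  = observer_gains(Ao, co, -k_eso * w_cl * np.ones(3))      # step 5: observer gains
kd  = 1.0 / b0                                                # step 3: from b*kd = e*cd^T
ks  = -1.0 / (cT @ np.linalg.inv(A - b @ kcT) @ b).item()     # step 6: static gain (4.6)
```

Note how the resulting state-feedback gains now also compensate the plant's own a coefficients, which is exactly the additional model information the observer no longer has to lump into the total disturbance.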
Example: Using Additional Disturbance Model Information
In a second example, we will keep ADRC’s default plant model but enhance the observer with a disturbance generator model that enables rejection of both constant and sinusoidal disturbances (with one known, fixed frequency) at the plant input. The goal here is to aid the observer with knowledge about the acting disturbance to increase its estimation performance. The disturbance d(t) accordingly consists of a constant component d0 and a sinusoidal component with unknown amplitude d1, unknown phase shift φd, but known frequency ωd:
$$\displaystyle \begin{aligned} d(t) = d_0 + d_1 \cdot \sin{}(\omega_{\mathrm{d}} \cdot t + \varphi_{\mathrm{d}}) . \end{aligned}$$
Generating this disturbance d(t) requires the superposition of an unknown constant value (as before in ADRC) and the output of a harmonic oscillator. As known from linear systems theory, generating an oscillation requires at least a second-order system. One possible state-space model can be given as
$$\displaystyle \begin{aligned} \boldsymbol{\dot{x}}_{\mathrm{d}}(t) = \underbrace{\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & \omega_{\mathrm{d}} \\ 0 & -\omega_{\mathrm{d}} & 0 \end{pmatrix}}_{\boldsymbol{A}_{\mathrm{d}}} \cdot \boldsymbol{x}_{\mathrm{d}}(t) ,\quad d(t) = \underbrace{\begin{pmatrix} 1 & 1 & 0 \end{pmatrix}}_{\boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}}} \cdot \boldsymbol{x}_{\mathrm{d}}(t) . \end{aligned}$$
The extended state observer will therefore estimate two additional state variables compared to the “standard” case, and two more observer gains have to be computed—which, of course, is not harder than before when sticking to bandwidth parameterization for simplicity.
Since \(\boldsymbol {e} = \left (\boldsymbol {0}^{1 \times (N-1)} \ \ 1 \right )^{\mathrm {T}}\) and \(\boldsymbol {b} = \left (\boldsymbol {0}^{1 \times (N-1)} \ \ b_0 \right )^{\mathrm {T}}\) remain unchanged assuming an Nth-order system, the condition for disturbance rejection \(\boldsymbol {b} \cdot \boldsymbol {k}_{\mathrm {d}}^{\mathrm {T}} = \boldsymbol {e} \cdot \boldsymbol {c}^{\mathrm {T}}_{\mathrm {d}}\) can easily be solved for the feedback gains \(\boldsymbol {k}_{\mathrm {d}}^{\mathrm {T}}\), yielding
$$\displaystyle \begin{aligned} \boldsymbol{k}_{\mathrm{d}}^{\mathrm{T}} = \frac{1}{b_0} \cdot \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} = \frac{1}{b_0} \cdot \begin{pmatrix} 1 \ & \ 1 \ & \ 0 \end{pmatrix} . \end{aligned}$$
The “virtual plant” to be controlled by the outer state-feedback controller (introduced in Sect. 3.2.1) does not change owing to successful disturbance rejection. Computing the feedback gains \(\boldsymbol{k}_{\mathrm{c}}^{\mathrm{T}} = \frac{1}{b_0} \cdot \begin{pmatrix} k_1 & \cdots & k_N \end{pmatrix}\) therefore does not require changes and can, for example, directly employ the results of bandwidth parameterization given with (3.19). For the same reason, \(k_{\mathrm{s}}\) does not need a customized design and remains unchanged at \(k_{\mathrm{s}} = \frac{k_1}{b_0}\).
A numerical example for a second-order plant is presented in Fig. 6.2 and compares the disturbance rejection capabilities of “standard” ADRC to the enhanced variant discussed here. As visible, a sinusoidal disturbance at the plant input can be completely removed in the latter case, if its frequency is fixed and known. It is interesting to compare the observer states: In the “standard” ADRC, all the burden is on the total disturbance (state \(\hat {x}_3\)), whereas with the enhanced disturbance model, \(\hat {x}_3\) only has to account for the steady-state controller output needed to follow the reference signal. The sinusoidal disturbance is completely accounted for by the additional states \(\hat {x}_{\mathrm {d1/2}}\).
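As a quick plausibility check of the disturbance generator used above, the following Python sketch simulates one admissible realization of the harmonic-plus-constant model (the specific matrix Ad used here is an assumption; any equivalent oscillator realization works) and verifies that it reproduces d(t) = d0 + d1·sin(ωd·t + φd):

```python
import numpy as np
from scipy.linalg import expm

w_d = 2.0 * np.pi * 0.5                  # assumed known disturbance frequency [rad/s]
Ad = np.array([[0.0, 0.0,  0.0],
               [0.0, 0.0,  w_d],
               [0.0, -w_d, 0.0]])
cdT = np.array([[1.0, 1.0, 0.0]])        # d(t) = constant state + first oscillator state

# Initial state encodes the (to the observer unknown) values d0, d1 and phi_d
d0, d1, phi = 0.5, 1.0, 0.3
xd = np.array([d0, d1 * np.sin(phi), d1 * np.cos(phi)])

dt, T = 1e-3, 4.0
t = np.arange(0.0, T, dt)
Phi = expm(Ad * dt)                      # exact discretization of the autonomous generator
d = np.empty_like(t)
for k in range(t.size):
    d[k] = (cdT @ xd).item()
    xd = Phi @ xd

# The generated signal matches the assumed disturbance shape
assert np.allclose(d, d0 + d1 * np.sin(w_d * t + phi), atol=1e-9)
```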

6.2 Availability of Reference Signal Derivatives

Especially in motion control systems, generating a smooth trajectory for the reference signal so as not to violate constraints regarding its velocity, acceleration or even jerk is a common approach. In such scenarios, the tracking performance of the control loop is important. So far, we have tuned ADRC for a non-overshooting response with a certain bandwidth (or settling time, e.g., using the approximation (3.​21)). This is a desirable behavior for fixed set points or reference values that are (infrequently) changing in a stepwise manner—but too slow for a time-varying reference signal.
ADRC with Modified Control Law
The tracking performance of Nth-order ADRC can be considerably improved if the first N derivatives of the reference signal r(t) are available. These could either be computed by a trajectory generator or estimated from a scalar reference signal (more on that in Sect. 6.3). If we put these along with the reference r(t) in a reference signal vector r(t),
$$\displaystyle \begin{aligned} \boldsymbol{r}(t) = \begin{pmatrix} r(t) \ & \ \dot{r}(t) \ & \ \cdots \ & \ r^{(N)}(t) \end{pmatrix}^{\mathrm{T}} , {} \end{aligned} $$
(6.1)
we can extend the control law (3.​15) of ADRC as follows:
$$\displaystyle \begin{aligned} u(t) &= \frac{1}{b_0} \cdot \left[ \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{r}(t) - \hat{\boldsymbol{x}}(t) \right) \right] {} \\ &= \frac{1}{b_0} \cdot \left[ k_1 \cdot \left( r(t) - \hat{x}_1(t) \right) + \cdots + \left( r^{(N)}(t) - \hat{x}_{N+1}(t) \right) \right] \notag. \end{aligned} $$
(6.2)
Figure 6.3 shows an accordingly modified state-space variant of ADRC, with a trajectory generator providing the reference signal vector r(t) in this example. For ADRC of order N, it might not always be the case that all N derivatives of r(t) are provided by the trajectory generator. In this case, using “whatever is available” may still lead to a somewhat improved tracking behavior and settling time, as demonstrated with a simulation example in Fig. 6.4.
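As a minimal sketch of the modified control law (6.2), the following Python function evaluates u(t) from a reference vector and the ESO state estimates. The gain values and signal samples are assumed purely for illustration, and a missing higher derivative is simply set to zero in the spirit of "use whatever is available":

```python
import numpy as np

def adrc_control_with_ref_derivatives(r_vec, x_hat, k, b0):
    """Extended control law (6.2): u = (1/b0) * [k^T 1] * (r - x_hat).

    r_vec : reference and its first N derivatives, shape (N+1,)
    x_hat : ESO estimate (plant states + total disturbance), shape (N+1,)
    k     : state-feedback gains k_1..k_N, shape (N,)
    """
    gains = np.append(k, 1.0)                 # [k^T 1]
    return float(gains @ (r_vec - x_hat)) / b0

# Example for N = 2: only r and r_dot are provided by the trajectory generator,
# so the unavailable second derivative is set to zero.
w_cl, b0 = 5.0, 1.0
k = np.array([w_cl**2, 2.0 * w_cl])           # bandwidth-parameterized gains (assumed)
r_vec = np.array([1.0, 0.2, 0.0])             # [r, r_dot, r_ddot (unavailable -> 0)]
x_hat = np.array([0.8, 0.1, -0.3])            # hypothetical ESO estimates
u = adrc_control_with_ref_derivatives(r_vec, x_hat, k, b0)
```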
Modular Solution
What if an implementation of ADRC is being chosen that does not allow for a modification of the state-feedback control law? As we saw in Sect. 4.​2, the controller gains kT can become somewhat inaccessible, buried in the transfer function coefficients. Fortunately, by comparing Figs. 6.3 and 3.​5, we can derive a modular solution. If we factor out the gain vector \(\frac {1}{k_1} \cdot \begin {pmatrix} \boldsymbol {k}^{\mathrm {T}} & 1 \end {pmatrix}\) from the reference signal input of the controller in Fig. 6.3, we obtain a series connection of said gain vector and an unmodified ADRC structure of Fig. 3.​5—which can, of course, be implemented using transfer functions, as well. In terms of the control law (6.2), this means
$$\displaystyle \begin{aligned} u(t) &= \frac{1}{b_0} \cdot \left[ \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{r}(t) - \hat{\boldsymbol{x}}(t) \right) \right] = \frac{1}{b_0} \cdot \left[ \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{r}(t) - \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(t) \right] \\ &= \frac{1}{b_0} \cdot \Bigg[ k_1 \cdot \underbrace{\left( \frac{1}{k_1} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{r}(t) \right)}_{\text{input to ``standard'' ADRC}} - \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(t) \Bigg] . \end{aligned}$$
If a reference vector (6.1), comprising r(t) and its first N derivatives, is available, one can therefore place a gain vector \(\frac {1}{k_1} \cdot \begin {pmatrix} \boldsymbol {k}^{\mathrm {T}} & 1 \end {pmatrix}\) between r(t) and the reference input of any implementation of “standard” ADRC (as introduced in Chap. 3) to improve the tracking performance. Figure 6.5 shows the control loop from Fig. 6.3, now implemented using a transfer function representation of unmodified ADRC and the mentioned gain vector module.
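As a brief illustration, assume second-order ADRC with bandwidth-parameterized controller gains \(k_1 = \omega_{\mathrm{CL}}^2\) and \(k_2 = 2\omega_{\mathrm{CL}}\). The gain vector module then becomes

$$\displaystyle \begin{aligned} \frac{1}{k_1} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} = \frac{1}{\omega_{\mathrm{CL}}^2} \cdot \begin{pmatrix} \omega_{\mathrm{CL}}^2 & 2\omega_{\mathrm{CL}} & 1 \end{pmatrix} = \begin{pmatrix} 1 & \dfrac{2}{\omega_{\mathrm{CL}}} & \dfrac{1}{\omega_{\mathrm{CL}}^2} \end{pmatrix} , \end{aligned}$$

i.e., the standard ADRC is fed with \(r(t) + \frac{2}{\omega_{\mathrm{CL}}} \dot{r}(t) + \frac{1}{\omega_{\mathrm{CL}}^2} \ddot{r}(t)\) instead of r(t). The leading element of 1 ensures that the steady-state behavior for constant references remains unchanged.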

6.3 Nonlinear ADRC

Aiming at design simplicity and clarity of presentation, we made a conscious decision early on in the book to introduce and explain ADRC in its linear form. However, one should be aware that incorporating some nonlinear components in the ADRC structure can potentially improve its performance, even for linear plants, although nonlinear ADRC (often denoted as NADRC) is not common in engineering practice. Interestingly, it is a foundational part of ADRC’s history: the original ADRC version was introduced as a nonlinear scheme. We will provide more context in Chap. 7. Here we show how the inclusion of nonlinear terms can be done using a specific form of NADRC that has two distinct elements that make it nonlinear, namely:
  • A nonlinear tracking differentiator, responsible for real-time estimation of a signal’s higher-order derivatives
  • A nonlinear weighting function, which “scales” selected signals in observer and/or controller within the ADRC structure to improve their convergence rate
From a practical standpoint, if there exists a justifiable need to increase the performance offered by the linear ADRC, these nonlinear components can be added to the core design (see Fig. 6.6) and, at the price of a more complicated structure, additional tuning, and increased computational complexity, can improve the desired aspect of ADRC, like tracking and/or observation quality. Next, we will show how the above nonlinear elements can be incorporated into the base linear ADRC.

6.3.1 Tracking Differentiator

The tracking differentiator, or TD, can generate estimates of a signal and its higher-order derivatives. Using reference signal r(t) as an exemplary input, the TD can be expressed as the following second-order integral chain:
$$\displaystyle \begin{aligned} \ddot{\hat{r}}(t) &= -\rho \cdot \operatorname{sign} \left( \hat{r}(t) - r(t) + \frac{ \dot{\hat{r}}(t)\cdot|\dot{\hat{r}}(t)| }{ 2 \cdot \rho } \right), {} \\ \dot{\hat{r}}(t) &= \int \ddot{\hat{r}}(t) \,\mathrm{d}t, \notag \\ \hat{r}(t) &= \int \dot{\hat{r}}(t) \,\mathrm{d}t, \notag \end{aligned} $$
(6.3)
where \(\hat {r}(t)\), \(\dot {\hat {r}}(t)\), \(\ddot {\hat {r}}(t)\) are to be used as estimates of r(t), \(\dot {r}(t)\), \(\ddot {r}(t)\), respectively, and ρ > 0 is the design parameter that balances speed of estimation and stability of the estimator. Figure 6.7 shows the TD in block diagram form.
The information about the estimated higher-order terms of the reference signal can be utilized in various ways. The following two examples show its usage as a trajectory generator and a transient profile generator. Both examples use the same TD structure (6.3) but differ in tuning.
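As a minimal discrete-time sketch of (6.3), assuming simple forward-Euler integration (the parameter values and sampling time below are illustrative only), the TD can be implemented as follows; the same routine serves both upcoming examples, differing only in the choice of ρ:

```python
import numpy as np

def tracking_differentiator(r, dt, rho):
    """Forward-Euler sketch of the tracking differentiator (6.3).

    Returns estimates (r_hat, r_hat_dot) of the input signal and its first derivative.
    A small step size limits the chattering caused by the sign function.
    """
    r_hat = np.zeros_like(r)
    r_hat_dot = np.zeros_like(r)
    x1, x2 = r[0], 0.0                    # initialize at the first sample
    for k in range(r.size):
        # second-order integrator chain driven by the signed acceleration
        a = -rho * np.sign(x1 - r[k] + x2 * abs(x2) / (2.0 * rho))
        x1, x2 = x1 + dt * x2, x2 + dt * a
        r_hat[k], r_hat_dot[k] = x1, x2
    return r_hat, r_hat_dot

# Large rho: trajectory-generator-like tracking of a sine reference (cf. Fig. 6.8);
# small rho: smooth transient profile for a step reference (cf. Fig. 6.9).
dt = 1e-4
t = np.arange(0.0, 5.0, dt)
sine_hat, sine_dot_hat = tracking_differentiator(np.sin(2.0 * t), dt, rho=500.0)
step_hat, step_dot_hat = tracking_differentiator(np.ones_like(t), dt, rho=2.0)
```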
Example: TD as Trajectory Generator for Improved Tracking
The benefits of having higher-order reference signals available during ADRC design have already been shown in Sect. 6.2. But the application of control action (6.2) requires availability of the reference time derivatives r(i)(t), for i up to the order of dynamics N. This is not a serious restriction when the references are preprogrammed, meaning the reference trajectory r(t) is designed (perfectly known) in advance, and all the reference time derivatives \(\dot {r}(t),\ddot {r}(t),\ldots ,r^{(N)}(t)\) can be computed a priori and fed to the controller.
In some control applications, however, the value of the reference trajectory is only known at the current time instant, and the reference time derivatives are unavailable. In those instances, a TD can be used to reconstruct the otherwise unavailable signals in real time and provide feedforward information that improves the command response, similar to the trajectory generator depicted in Fig. 6.5. Using a generic sine signal as an example, Fig. 6.8 shows a potential use of a TD as the trajectory generator.1 In this case, the signals estimated by the TD should be as close as possible to the real ones; hence the tuning coefficient ρ needs to be set to a relatively large value.
Example: TD as Profile Generator for Smooth Transients
Another possible use of a TD arises when the given set point changes abruptly: the TD estimates can then provide a smooth transient profile that the output of the plant can reasonably follow, which should be assessed on a case-by-case basis. Such avoidance of set point jumps is oftentimes a standard operating procedure when dealing, for example, with servo systems, for which motion profiles should be devised to ensure safe operation and not to overstrain the hardware components. In that case, if r(t) is the desired servo position, then a TD in the form of (6.3) would provide motion profiles for its reference position \(\hat {r}(t)\), velocity \(\dot {\hat {r}}(t)\), and acceleration \(\ddot {\hat {r}}(t)\), to be then utilized during controller synthesis.
Using a generic step signal as an example, Fig. 6.9 shows a potential use of a TD as the transient profile generator. In this case, the signals estimated by the TD do not need to be as close as possible to the real signals, since they are meant to represent a practically feasible transient; hence the tuning coefficient ρ needs to be set to a relatively small value.

6.3.2 Nonlinear Controller and Observer

Until now, when constructing ADRC, we have utilized a simple, linear combination of gains and signals in both the observer part and the controller part. However, a nonlinear combination may potentially be more effective, if one is willing to put in the extra work related to the implementation and tuning of additional components. Here we will show how to take advantage of nonlinear weighting in ADRC, starting by adding it to the baseline linear controller, discussed so far in previous chapters.
From Linear to Nonlinear Controller
To convey the idea behind a nonlinear controller, let us use as an example the specific form of ADRC discussed in Sect. 6.2. This time, however, we assume that the reference signal derivatives are not available beforehand but can be estimated in real time based solely on the known reference signal r(t), for example, using a TD. This assumption means that in the extended control law (6.2), we can now replace r(t) with its estimate:
$$\displaystyle \begin{aligned} \hat{\boldsymbol{r}}(t) = \begin{pmatrix} \hat{r}(t) \ & \ \dot{\hat{r}}(t) \ & \ \cdots \ & \ \hat{r}^{(N)}(t) \end{pmatrix}^{\mathrm{T}} = \begin{pmatrix} \hat{r}_1(t) \ & \ \hat{r}_2(t) \ & \ \cdots \ & \ \hat{r}_{N+1}(t) \end{pmatrix}^{\mathrm{T}} {} , \end{aligned} $$
(6.4)
which yields the following modified version of (6.2):
$$\displaystyle \begin{aligned} u(t) = \frac{1}{b_0} \cdot \left[ \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \hat{\boldsymbol{r}}(t) - \hat{\boldsymbol{x}}(t) \right) \right] . {} \end{aligned} $$
(6.5)
Furthermore, if we introduce the following error vector:
$$\displaystyle \begin{aligned} \boldsymbol{e}(t) = \begin{pmatrix} e_1 \\ \vdots \\ e_N \end{pmatrix} = \begin{pmatrix} \hat{r}_1(t) - \hat{x}_1(t) \\ \vdots \\ \hat{r}_N(t) - \hat{x}_N(t) \end{pmatrix} , \end{aligned} $$
(6.6)
the state-feedback controller u0(t), inside (6.5), can be written in a compact form:
$$\displaystyle \begin{aligned} u_0(t) = \overbrace{ \boldsymbol{k}^{\mathrm{T}} \cdot \boldsymbol{e}(t) }^{\text{``standard'' linear weighting}} , {} \end{aligned} $$
(6.7)
with distinct linear weighting, which is emblematic of the “standard” LADRC form discussed so far in the book. Now, the above simple linear combination could also be replaced with a nonlinear one, thus changing (6.7) to
$$\displaystyle \begin{aligned} u_0(t) = \overbrace{ \boldsymbol{k}^{\mathrm{T}} \cdot \boldsymbol{f}\left(\boldsymbol{e}(t)\right)}^{\text{nonlinear weighting}} , {} \end{aligned} $$
(6.8)
where
$$\displaystyle \begin{aligned} \boldsymbol{f} \left( \boldsymbol{e}(t) \right) = \begin{pmatrix} f(e_1(t),\boldsymbol{p}_1) \\ \vdots \\ f(e_N(t),\boldsymbol{p}_N) \end{pmatrix} {} \end{aligned} $$
(6.9)
is a vector of functions that scale the entering signals in a nonlinear fashion, with p1, …, pN being user-defined design parameters that shape this nonlinear behavior. The control law (6.8) with such nonlinear terms is sometimes referred to in the ADRC literature as nonlinear state error feedback or NLSEF. A specific example of such a nonlinear function will be shown later on.
From Linear to Nonlinear Observer
A similar transition from linear to nonlinear, as seen above for the controller part, can be done for the observer part. With the following auxiliary definition of estimation error, here being the difference between measured output and its estimate:
$$\displaystyle \begin{aligned} e_y(t) = y(t) - \boldsymbol{c}^{\mathrm{T}} \cdot \hat{\boldsymbol{x}}(t), \end{aligned} $$
(6.10)
let us recall the linear ESO from (3.​16):
$$\displaystyle \begin{gathered} \boldsymbol{\dot{\hat{x}}}(t) = \boldsymbol{A} \cdot \hat{\boldsymbol{x}}(t) + \boldsymbol{b} \cdot u(t) + \overbrace{ \boldsymbol{l} \cdot e_y(t)}^{\text{``standard'' linear weighting}} {} , \end{gathered} $$
(6.11)
with the marked characteristic linear weighting between signal and gains. Here again, the linear weighting could be replaced with a nonlinear one, thus changing (6.11) to
$$\displaystyle \begin{gathered} \boldsymbol{\dot{\hat{x}}}(t) = \boldsymbol{A} \cdot \hat{\boldsymbol{x}}(t) + \boldsymbol{b} \cdot u(t) + \overbrace{ \boldsymbol{l} \cdot \boldsymbol{f} \left( e_y(t) \right)}^{\text{nonlinear weighting}} , {} \end{gathered} $$
(6.12)
where
$$\displaystyle \begin{aligned} \boldsymbol{f} \left( e_y(t) \right) = \begin{pmatrix} f(e_y(t),\boldsymbol{p}_1) \\ \vdots \\ f(e_y(t),\boldsymbol{p}_{N+1}) \end{pmatrix} {} \end{aligned} $$
(6.13)
is a nonlinear function that scales the entering signal, with p1, …, pN+1 being the design parameters. The observer (6.12) with such nonlinear terms is often referred to in the literature on ADRC simply as nonlinear ESO or NESO. Let us now take a look at a specific example of the abovementioned nonlinear weighting function.
Specific Case of Nonlinear Weighting Function
In general, several different weighting functions can be used in nonlinear ADRC to replace the generic functions (6.9) and (6.13) in the controller and the observer, respectively. Here, however, we are focusing on a specific one, commonly referred to as fal(⋅) function:2
$$\displaystyle \begin{aligned} x_{\mathrm{out}}(t) = f_{\mathrm{al}}(x_{\mathrm{in}}(t),\alpha,\delta) = \begin{cases} \dfrac{x_{\mathrm{in}}(t)}{\delta^{(1-\alpha)}}, & \vert x_{\mathrm{in}}(t) \vert \leq \delta, \\ \vert x_{\mathrm{in}}(t) \vert^{\alpha} \cdot \operatorname{sign}(x_{\mathrm{in}}(t)), & \vert x_{\mathrm{in}}(t) \vert > \delta, \end{cases} {} \end{aligned} $$
(6.14)
where xin(t) is some signal entering the function and xout(t) is its weighted output that this nonlinear function produces. A graphical example of how the fal(⋅) function does nonlinear weighting is depicted in Fig. 6.10.
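A direct Python transcription of (6.14) is short; the example values of α and δ below are arbitrary:

```python
import numpy as np

def fal(x, alpha, delta):
    """Nonlinear weighting function (6.14): linear (high-gain) inside |x| <= delta,
    fractional-power weighting |x|^alpha * sign(x) outside."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= delta,
                    x / delta**(1.0 - alpha),
                    np.abs(x)**alpha * np.sign(x))

# With alpha < 1, small errors are amplified and large errors are attenuated
# relative to purely linear weighting (cf. Fig. 6.10).
e = np.linspace(-2.0, 2.0, 9)
print(fal(e, alpha=0.5, delta=0.1))
```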
Both quantity and placement of the nonlinear functions within the constructed ADRC scheme are fully customizable by the control designer, as will be shown in the upcoming example. This decision, which could increase the convergence rate of selected signals, has to be made taking into account its practical feasibility, as it will have consequences on the overall implementation complexity and require additional tuning effort.
Example: Influence of Nonlinear Weighting
Through the following simulation example, we want to show the influence of the nonlinear components on ADRC performance. Let us examine once again the nominal and generic second-order plant model from Sect. 5.1 to compare two ADRC designs, one with “standard” linear weighting and the other with nonlinear weighting using the fal(⋅) function.
In the case of linear controller, we have
$$\displaystyle \begin{aligned} u(t) = \frac{1}{b_0} \cdot \Big( \overbrace{k_1 \cdot e_1(t)}^{\text{``standard'' linear weighting}} + \underbrace{k_2 \cdot e_2(t)}_{\text{``standard'' linear weighting}} - \hat{x}_3(t) \Big) \end{aligned}$$
(6.15)
and in the case of a nonlinear controller
$$\displaystyle \begin{aligned} u(t) = \frac{1}{b_0} \cdot \Big( \overbrace{k_1 \cdot f_{\mathrm{al}}(e_1(t),\alpha_1,\delta)}^{\text{nonlinear weighting}} + \underbrace{k_2 \cdot f_{\mathrm{al}}(e_2(t),\alpha_2,\delta)}_{\text{nonlinear weighting}} - \hat{x}_3(t) \Big) . \end{aligned}$$
(6.16)
In both cases, we use the definitions of error signals \(e_1(t) = \hat {r}_1(t) - \hat {x}_1(t)\) and \(e_2(t) = \hat {r}_2(t) - \hat {x}_2(t)\). For the NLSEF (6.16), we make an arbitrary decision to place the nonlinear weighting on both feedback error-driven terms. Estimates \(\hat {r}_1(t)\) and \(\hat {r}_2(t)\) are taken from the same TD (similar to (6.3)), whereas estimates \(\hat {x}_1(t)\) and \(\hat {x}_2(t)\) are taken from respective observers.
Now, for the observer part in the tested ADRCs, the purely linear observer takes the “standard” form:
$$\displaystyle \begin{aligned} \underbrace{\begin{pmatrix} \dot{\hat{x}}_1(t) \\ \dot{\hat{x}}_2(t) \\ \dot{\hat{x}}_3(t) \end{pmatrix}}_{\boldsymbol{\dot{\hat{x}}}(t)} = \underbrace{\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}}_{\boldsymbol{A}} \cdot \underbrace{\begin{pmatrix} \hat{x}_1(t) \\ \hat{x}_2(t) \\ \hat{x}_3(t) \end{pmatrix}}_{\hat{\boldsymbol{x}}(t)} + \underbrace{\begin{pmatrix} 0 \\ b_0 \\ 0 \end{pmatrix}}_{\boldsymbol{b}} \cdot u(t) + \overbrace{\underbrace{\begin{pmatrix} l_1 \\ l_2 \\ l_3 \end{pmatrix}}_{\boldsymbol{l}} \cdot e_y(t)}^{\text{linear weighting}} , \end{aligned}$$
(6.17)
and in the case of the mixed linear and nonlinear weighting, the observer reads
$$\displaystyle \begin{aligned} \underbrace{\begin{pmatrix} \dot{\hat{x}}_1(t) \\ \dot{\hat{x}}_2(t) \\ \dot{\hat{x}}_3(t) \end{pmatrix}}_{\boldsymbol{\dot{\hat{x}}}(t)} = \underbrace{\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}}_{\boldsymbol{A}} \cdot \underbrace{\begin{pmatrix} \hat{x}_1(t) \\ \hat{x}_2(t) \\ \hat{x}_3(t) \end{pmatrix}}_{\hat{\boldsymbol{x}}(t)} + \underbrace{\begin{pmatrix} 0 \\ b_0 \\ 0 \end{pmatrix}}_{\boldsymbol{b}} \cdot u(t) + \overbrace{\underbrace{\begin{pmatrix} l_1 \\ l_2 \\ l_3 \end{pmatrix}}_{\boldsymbol{l}} \cdot \begin{pmatrix} e_y(t) \\ e_y(t) \\ f(e_y(t),\alpha,\delta) \end{pmatrix}}^{\text{mix of linear and nonlinear weighting}} . \end{aligned}$$
(6.18)
In both cases, we have \(e_y(t) = x_1(t) - \hat {x}_1(t) = y(t) - \hat {y}(t)\). For the NESO (6.18), we make an arbitrary decision to place the nonlinear weighting only on the last term (estimating the total disturbance). This again shows how customizable the placement of nonlinear terms is. Here it results from a practical compromise between the desire to increase the performance of ADRC and the overall complexity of implementation and tuning (i.e., fewer nonlinear functions mean simpler tuning).
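The following sketch shows one forward-Euler update step of the mixed linear/nonlinear observer (6.18) for the second-order case; the tuning values (observer bandwidth, α, δ, sampling time) are assumptions made for illustration, not the settings behind Fig. 6.11:

```python
import numpy as np

def fal(x, alpha, delta):
    """Scalar nonlinear weighting function (6.14)."""
    return x / delta**(1.0 - alpha) if abs(x) <= delta else abs(x)**alpha * np.sign(x)

def neso_step(x_hat, u, y, l, b0, dt, alpha=0.5, delta=0.05):
    """One Euler step of the mixed linear/nonlinear ESO (6.18) for a second-order plant.
    Only the total-disturbance channel uses nonlinear weighting, as chosen in the text."""
    x1, x2, x3 = x_hat
    e_y = y - x1                                         # output estimation error
    dx1 = x2 + l[0] * e_y                                # linear weighting
    dx2 = x3 + b0 * u + l[1] * e_y                       # linear weighting
    dx3 = l[2] * fal(e_y, alpha, delta)                  # nonlinear weighting
    return np.array([x1 + dt * dx1, x2 + dt * dx2, x3 + dt * dx3])

# Hypothetical tuning via bandwidth parameterization, w_o = k_eso * w_cl
w_o = 40.0
l = np.array([3.0 * w_o, 3.0 * w_o**2, w_o**3])          # gains of a third-order ESO
x_hat = np.zeros(3)
x_hat = neso_step(x_hat, u=0.0, y=0.1, l=l, b0=1.0, dt=1e-3)
```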
For the comparison, we start with “standard” LADRC and add nonlinear components to it, first in the observer, then in the controller, and finally in both of them. The obtained results are presented in Fig. 6.11. It can be seen that nonlinear terms can bring benefits in certain aspects but at the cost of some drawbacks in other areas. The proper selection, placement, and tuning of extra nonlinear terms should thus be a conscious decision based on actual control needs and limitations, and it inevitably results in some kind of compromise. For example, in the case of structure NADRC3, the nonlinear terms not only increased disturbance rejection and response speed but also resulted in a larger overshoot than in its linear counterpart. And once again, possible improvements from adding nonlinearity come with a rather significant price tag, namely more elements that need to be implemented and tuned. On a personal note, we suggest always starting with a linear ADRC and going from there. As long as the job is done, engineering practice favors simplicity. This is yet another reason this book focuses almost exclusively on the linear ADRC form.

6.4 Error-Based ADRC

The previous three sections were about “adding things” to the ADRC scheme. Let us now exercise the idea of potentially simplifying it and check what benefits that brings. However, as the adage goes, there is no such thing as a free lunch. So it will also be necessary to answer the important question about the price of said simplification.
Here we focus on a particular modification of the ADRC structure, referred to as error-based ADRC; the reason for this name will become clear in the upcoming derivation. This modification is dedicated to scenarios in which trajectory tracking control tasks are considered, but, in contrast to Sect. 6.2, the consecutive reference time derivatives are not measured or generated using a trajectory generator. In other words, they are not available and hence cannot be used during the controller synthesis. We have made a conscious decision to include the error-based variant in the book as it offers some interesting, practical features that may be found useful by prospective ADRC designers, but we will get to that later.
To best explain error-based ADRC, we start by recalling the Nth-order SISO plant model (3.​1), already expressed in its compact form (3.​2):
$$\displaystyle \begin{aligned} y^{(N)}(t) = f(t) + b_0 \cdot u(t) , {} \end{aligned} $$
(6.19)
as well as the fundamental definition of feedback error:
$$\displaystyle \begin{aligned} e(t) = r(t) - y(t) . {} \end{aligned} $$
(6.20)
By differentiating (6.20) up to the order N of the governed system dynamics (6.19),
$$\displaystyle \begin{aligned} \dot{e}(t) &= \dot{r}(t) - \dot{y}(t), \notag \\ &\vdots \notag \\ e^{(N)}(t) &= r^{(N)}(t) - y^{(N)}(t), {} \end{aligned} $$
(6.21)
one reaches the point where (6.19) can be substituted in (6.21), which, after a simple rearrangement of terms, can be expressed as
$$\displaystyle \begin{aligned} e^{(N)}(t) &= r^{(N)}(t) - \overbrace{\left( f(t) + b_0 \cdot u(t) \right)}^{y^{(N)}(t)} \\ &= \underbrace{r^{(N)}(t) - f(t)}_{f^*(t)} - b_0 \cdot u(t) \\ &= f^*(t) - b_0 \cdot u(t) , \end{aligned}$$
(6.22)
where f*(t) now is the total disturbance, which, compared to (6.19), not only consists of unknown model terms (modeling errors) and actual disturbances but also includes r(N)(t), the unknown Nth derivative of the reference signal.
One can notice that the considered plant model can be expressed in two alternative forms, (6.19) or (6.22). Two major differences can be immediately spotted between the two expressions. Firstly, (6.19) represents system output dynamics, whereas (6.22) represents its error dynamics. The second difference is the sign attached to the critical gain parameter b0.
For the considered error-based system (6.22), we can follow the methodology of ADRC design established in Chap. 3 and construct a control law, similar to (3.​3), that embodies the two core ideas of ADRC, namely disturbance rejection and plant gain inversion. This, of course, assumes it is possible to obtain at least an estimate \(\hat {f}^*(t)\) of the total disturbance f(t) in an online manner. Such control signal u(t) can then be constructed as3
$$\displaystyle \begin{aligned} u(t) = \frac{u_0(t) + \hat{f}^*(t)}{b_0} . {} \end{aligned} $$
(6.23)
The application of control signal u(t) to (6.22) creates the desired unity-gain integrator chain behavior e(N)(t) ≈−u0(t), which is straightforward to control.
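Substituting (6.23) into (6.22) makes this explicit:

$$\displaystyle \begin{aligned} e^{(N)}(t) = f^*(t) - b_0 \cdot \frac{u_0(t) + \hat{f}^*(t)}{b_0} = \underbrace{\left( f^*(t) - \hat{f}^*(t) \right)}_{\approx\, 0} - u_0(t) \approx -u_0(t) . \end{aligned}$$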
The above formulation of the control problem in the error domain is thus practically appealing, as it allows the control objective to be realized without knowledge of the reference derivatives. However, as is usually the case in ADRC schemes, it all comes down to the estimation quality of the total disturbance. Fortunately, we can reach for known tools and methods, like the ESO and bandwidth parameterization, to effectively reconstruct the missing information.
We see important benefits of introducing error-based ADRC using both state-space and transfer function representations; hence these two forms are derived in the remainder of this section.

6.4.1 State-Space Form

Similar to the output-based case in Sect. 3.​1, a good starting point here is the expression of (6.22)—the disturbed integrator chain in error domain—in state-space form, which results in
$$\displaystyle \begin{aligned} \underbrace{\begin{pmatrix} \dot{x}_1(t) \\ \vdots \\ \dot{x}_{N+1}(t) \end{pmatrix}}_{\boldsymbol{\dot{x}}(t)} &= \underbrace{\begin{pmatrix} \boldsymbol{0}^{N \times 1} & \boldsymbol{I}^{N \times N} \\ 0 & \boldsymbol{0}^{1 \times N} \end{pmatrix}}_{\boldsymbol{A}} \cdot \underbrace{\begin{pmatrix} x_1(t) \\ \vdots \\ x_{N+1}(t) \end{pmatrix}}_{\boldsymbol{x}(t)} - \underbrace{\begin{pmatrix} \boldsymbol{0}^{(N-1) \times 1} \\ b_0 \\ 0 \end{pmatrix}}_{\boldsymbol{b}} \cdot u(t) + \begin{pmatrix} \boldsymbol{0}^{N \times 1} \\ 1 \end{pmatrix} \cdot \dot{f}^*(t) , \\ e(t) &= \underbrace{\begin{pmatrix} 1 & \boldsymbol{0}^{1 \times N} \end{pmatrix}}_{\boldsymbol{c}^{\mathrm{T}}} \cdot \begin{pmatrix} x_1(t) \\ \vdots \\ x_{N+1}(t) \end{pmatrix} . \end{aligned}$$
(6.24)
From cT in (6.24) it is clear that the first state variable is x1(t) = e(t). Looking at A furthermore reveals that \(\dot {x}_i(t) = x_{i+1}(t)\) holds for i = 1, …, N − 1, which means that xi+1(t) is the ith derivative of e(t). In other words, the state vector in (6.24), contrary to what has been shown earlier in (3.16), now consists of the feedback error e(t) and its consecutive time derivatives. The next-to-last state equation reads \(\dot {x}_N(t) = x_{N+1}(t) - b_0 \cdot u(t)\), corresponding to e(N)(t) = f*(t) − b0 ⋅ u(t). This means that f*(t) is the final element of the vector of state variables, xN+1(t) = f*(t), and consequently, the last line correctly states \(\dot {x}_{N+1}(t) = \dot {f}^*(t)\).
For the above extended state system model in state space, the following Luenberger observer can be designed to obtain an estimate of the states \(\hat {\boldsymbol {x}}^{\mathrm {T}}(t) = \begin {pmatrix} \hat {x}_1(t) & \cdots & \hat {x}_{N+1}(t) \end {pmatrix}\), with u(t) and e(t) being the only measurements available:
$$\displaystyle \begin{aligned} \underbrace{\begin{pmatrix} \dot{\hat{x}}_1(t) \\ \vdots \\ \dot{\hat{x}}_{N+1}(t) \end{pmatrix}}_{\boldsymbol{\dot{\hat{x}}}(t)} &= \underbrace{\begin{pmatrix} \boldsymbol{0}^{N \times 1} & \boldsymbol{I}^{N \times N} \\ 0 & \boldsymbol{0}^{1 \times N} \end{pmatrix}}_{\boldsymbol{A}} \cdot \underbrace{\begin{pmatrix} \hat{x}_1(t) \\ \vdots \\ \hat{x}_{N+1}(t) \end{pmatrix}}_{\hat{\boldsymbol{x}}(t)} - \underbrace{\begin{pmatrix} \boldsymbol{0}^{(N-1) \times 1} \\ b_0 \\ 0 \end{pmatrix}}_{\boldsymbol{b}} \cdot u(t) + \underbrace{\begin{pmatrix} l_1 \\ \vdots \\ l_{N+1} \end{pmatrix}}_{\boldsymbol{l}} \cdot \left( e(t) - \hat{e}(t) \right) , \\ \hat{e}(t) &= \underbrace{\begin{pmatrix} 1 & \boldsymbol{0}^{1 \times N} \end{pmatrix}}_{\boldsymbol{c}^{\mathrm{T}}} \cdot \begin{pmatrix} \hat{x}_1(t) \\ \vdots \\ \hat{x}_{N+1}(t) \end{pmatrix} . \end{aligned}$$
(6.25)
This can be written in a more compact way as
$$\displaystyle \begin{aligned} \boldsymbol{\dot{\hat{x}}}(t) = \boldsymbol{A} \cdot \hat{\boldsymbol{x}}(t) - \boldsymbol{b} \cdot u(t) + \boldsymbol{l} \cdot \left( e(t) - \boldsymbol{c}^{\mathrm{T}} \cdot \hat{\boldsymbol{x}}(t) \right) , {} \end{aligned} $$
(6.26)
with \(\boldsymbol {l} = \begin {pmatrix} l_1 & \cdots & l_{N+1} \end {pmatrix}^{\mathrm {T}}\) being the observer gains, to be tuned using, for example, the already introduced bandwidth parameterization approach from Sect. 3.​2.​2.
Here, similar to (3.9), we omit the “virtual input” \(\dot {f}^*(t)\), i.e., the first derivative of \(\hat {f}^*(t)\) is set to zero, which results in \(\hat {x}_{N+1}(t)\) being estimated as a constant total disturbance at the plant model input. That way, the extended state variable \(\hat {x}_{N+1}(t)\) serves as the integrator state of ADRC necessary to achieve zero steady-state error for constant reference values and constant disturbances.
Please notice that the control signal governing system (6.22) does not have to be designed as previously seen, for example, in (3.15). Instead, it can take advantage of the fact that the estimated state vector \(\hat{\boldsymbol{x}}(t)\) of observer (6.25) contains the estimated feedback error and its consecutive derivatives and can thus be constructed simply as
$$\displaystyle \begin{aligned} u(t) = \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(t) \quad \text{with}\quad \boldsymbol{k}^{\mathrm{T}} = \begin{pmatrix} k_1 & \cdots & k_N \end{pmatrix} , {} \end{aligned} $$
(6.27)
where k is the vector of controller gains, which again can be tuned using, for example, the bandwidth parameterization approach as seen in Sect. 3.​2.​1.
Finally, having both the observer and the controller parts, the error-based ADRC in state space is shown in Fig. 6.12. A quick inspection and comparison with its output-based counterpart in Fig. 3.5 reveals the two abovementioned differences between output- and error-based ADRC schemes. The first one is that the input signal to the error-based ADRC scheme is not the plant output y(t) but the feedback error e(t), and the second one is the different sign applied to the input b ⋅ u(t).
So what is the justification behind expressing ADRC in such an alternative, error-based form? There are several reasons, but let us start with the obvious one. The availability of reference derivatives is no longer an issue, as they are being indirectly reconstructed on the fly by the observer, which estimates the feedback error and its consecutive derivatives. When fixed set point control tasks are considered, which means that higher-order derivatives of the reference signal are zero (except for possible stepwise changes to the set point), error-based ADRC offers a very peculiar behavior, especially when compared with the “standard” output-based variant.
This has been visualized in Fig. 6.13, where error-based ADRC generates a very rapid transient system response, resulting from a large initial control signal that, in turn, generates a significant overshoot. This is the result of error-based ADRC estimating the unknown reference signal derivatives, which are huge for stepwise reference changes. Interestingly, both tested control variants have the same level of disturbance rejection, a characteristic we will come back to later in this section.
On the other hand, the error-based ADRC shows its potential when it is used in trajectory tracking tasks, like in the second example depicted in Fig. 6.14, in which the reference is continually changing. It then offers a significant improvement in tracking quality over output-based ADRC, even with the initially increased control signal related to the time needed for the states to converge to proper values. It should be recalled that both ADRC schemes considered in this example do not explicitly utilize information about the reference derivatives during observer or controller synthesis, so this is not the situation described earlier in Sect. 6.2.
The second potential reason for using the error-based variant of ADRC is, however, best described when the controller is represented in a transfer function form; hence we now move toward its input-output representation in s-domain.
Cooking Recipe (Error-Based ADRC, State-Space Form)
To implement and tune linear error-based ADRC in its continuous-time state-space form, the following steps have to be carried out:
1.
Plant modeling: For the dominant dynamic plant behavior, identify the order N and the parameter b0 of the plant model (3.​2). Refer to Table 3.​1 for some simple examples. Reformulate (3.​2) into error-based form (6.22).
 
2a.
Controller tuning: Error-based ADRC of order N requires tuning of N controller gains k1,…,N. When using bandwidth parameterization for the desired closed-loop dynamics, these gain values are obtained from (3.​19) or Table A.​1.​1. If necessary, use an approximation as (3.​21) to translate a time-domain specification into the tuning parameter ωCL.
 
2b.
Observer tuning: Error-based ADRC of order N requires (N + 1) observer gains l1,…,N+1. In case of bandwidth parameterization, choosing a relative observer bandwidth kESO = 3, …, 10 is a good starting point. Use (3.​24) or Table A.​1.​1 to compute the gains from kESO and ωCL.
 
3.
Implementation: Implement the state-space form of error-based ADRC as shown in Fig. 6.12. Required matrices and vectors are defined in the equations of the extended state observer (6.25) and the controller (6.27).
 
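The cooking recipe above translates almost line by line into code. The following Python sketch simulates error-based ADRC for a hypothetical second-order plant; all plant parameters, tuning values, reference, and disturbance are assumptions for illustration and are not the settings behind Figs. 6.13 and 6.14:

```python
import numpy as np

# Plant used only for simulation: y'' = -a1*y' - a0*y + b*(u + d); the controller knows only b0.
a0, a1, b_true, b0 = 1.0, 1.5, 1.2, 1.0

# Steps 2a/2b: bandwidth-parameterized gains for N = 2
w_cl, k_eso = 4.0, 8.0
w_o = k_eso * w_cl
k = np.array([w_cl**2, 2.0 * w_cl])                  # controller gains k_1, k_2
l = np.array([3.0 * w_o, 3.0 * w_o**2, w_o**3])      # observer gains l_1..l_3

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
b = np.array([0.0, b0, 0.0])
cT = np.array([1.0, 0.0, 0.0])

dt, T = 1e-3, 5.0
t = np.arange(0.0, T, dt)
x_plant = np.zeros(2)                                # true plant state [y, y_dot]
x_hat = np.zeros(3)                                  # [e_hat, e_dot_hat, f*_hat]
y_log = np.empty_like(t)

for i, ti in enumerate(t):
    r = np.sin(ti)                                   # continually changing reference (tracking task)
    y = x_plant[0]
    e = r - y                                        # feedback error (6.20), the only controller input
    u = (k @ x_hat[:2] + x_hat[2]) / b0              # control law (6.27)
    # ESO update (6.25), forward Euler; note the minus sign on b*u in the error-based form
    x_hat = x_hat + dt * (A @ x_hat - b * u + l * (e - cT @ x_hat))
    # plant update (forward Euler), with an input disturbance switched on after t = 2.5 s
    d = 0.5 if ti > 2.5 else 0.0
    y_dd = -a0 * x_plant[0] - a1 * x_plant[1] + b_true * (u + d)
    x_plant = x_plant + dt * np.array([x_plant[1], y_dd])
    y_log[i] = y
```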

6.4.2 Transfer Function Representation

To derive the error-based ADRC variant in transfer function form, we start with the derivation of its closed-loop observer dynamics. To do that, the Laplace transform of the control law (6.27) is first put in the observer (6.26), which gives
$$\displaystyle \begin{aligned} \hat{\boldsymbol{x}}(s) = \Bigg( s \boldsymbol{I} - \underbrace{\Big( \boldsymbol{A} - \boldsymbol{l}\boldsymbol{c}^{\mathrm{T}} - \frac{1}{b_0} \boldsymbol{b} \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \Big)}_{\boldsymbol{A}_{\mathrm{CL}}} \Bigg)^{-1} \cdot \boldsymbol{l} \cdot e(s) . \end{aligned}$$
(6.28)
To shorten further equations, we will use the abbreviated system matrix ACL of the closed-loop controller/observer dynamics as introduced in (6.28):
$$\displaystyle \begin{aligned} \boldsymbol{A}_{\mathrm{CL}} = \boldsymbol{A} - \boldsymbol{l}\boldsymbol{c}^{\mathrm{T}} - \frac{1}{b_0} \boldsymbol{b} \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} = \begin{pmatrix} \boldsymbol{0}^{N \times 1} & \boldsymbol{I}^{N \times N} \\ 0 & \boldsymbol{0}^{1 \times N} \end{pmatrix} - \begin{pmatrix} \boldsymbol{l} & \boldsymbol{0}^{(N+1) \times N} \end{pmatrix} - \begin{pmatrix} \boldsymbol{0}^{(N-1) \times N} & 0 \\ \boldsymbol{k}^{\mathrm{T}} & 1 \\ \boldsymbol{0}^{1 \times N} & 0 \end{pmatrix} , \end{aligned}$$
where ACL has the same form as seen in (4.​16). A transfer function of the controller can now be obtained by putting (6.28) back in the Laplace transform of (6.27), eliminating \(\hat {\boldsymbol {x}}(s)\) from the equations:
$$\displaystyle \begin{aligned} u(s) &= \frac{1}{b_0} \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left(s \boldsymbol{I} - \boldsymbol{A}_{\mathrm{CL}}\right)^{-1} \cdot \boldsymbol{l} \cdot e(s) \notag\\ &=\ \big( b_0 \cdot \det\left( s \boldsymbol{I} - \boldsymbol{A}_{\mathrm{CL}} \right) \big)^{-1} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \operatorname{adj}\left( s \boldsymbol{I} - \boldsymbol{A}_{\mathrm{CL}} \right) \cdot \boldsymbol{l} \cdot e(s) . {} \end{aligned} $$
(6.29)
We can therefore express the control law of error-based ADRC using a single transfer function, as visualized in Fig. 6.15:
$$\displaystyle \begin{aligned} u(s) = C_{\mathrm{FB}}(s) \cdot e(s) {} . \end{aligned} $$
(6.30)
In (6.30), the term CFB denotes the feedback controller transfer function, which can be read directly off (6.29):
$$\displaystyle \begin{aligned} C_{\mathrm{FB}}(s) = \frac{ \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \operatorname{adj}\left( s \boldsymbol{I} - \boldsymbol{A}_{\mathrm{CL}} \right) \cdot \boldsymbol{l} }{ b_0 \cdot \det\left( s \boldsymbol{I} - \boldsymbol{A}_{\mathrm{CL}} \right) } , {} \end{aligned} $$
(6.31)
which is a strictly proper transfer function that depends on the resolvent of ACL.
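To make the above expressions concrete, the following sketch builds ACL for a second-order error-based ADRC and evaluates CFB(s) numerically according to (6.29) and (6.31), using scipy's state-space-to-transfer-function conversion. The plant gain b0 and the bandwidths are placeholder example values.

```python
import numpy as np
from scipy.signal import ss2tf

# Placeholder example: second-order design model (N = 2) with gain b0,
# tuned via bandwidth parameterization (omega_CL = 10 rad/s, k_ESO = 5).
N, b0 = 2, 1.5
omega_cl, omega_eso = 10.0, 5 * 10.0
k = np.array([omega_cl**2, 2 * omega_cl])                      # controller gains
l = np.array([3 * omega_eso, 3 * omega_eso**2, omega_eso**3])  # observer gains

# Extended (N+1)-dimensional design model: integrator chain, input in row N
A = np.diag(np.ones(N), 1)
b = np.zeros(N + 1); b[N - 1] = b0
c = np.zeros(N + 1); c[0] = 1.0

# Closed-loop controller/observer matrix A_CL as defined in (6.28)
k1 = np.append(k, 1.0)                                  # row vector (k^T 1)
A_cl = A - np.outer(l, c) - np.outer(b, k1) / b0

# C_FB(s) = (1/b0) (k^T 1) (sI - A_CL)^{-1} l, as polynomial coefficients
num, den = ss2tf(A_cl, l.reshape(-1, 1), (k1 / b0).reshape(1, -1),
                 np.zeros((1, 1)))
print(np.squeeze(num))   # numerator coefficients, highest power of s first
print(den)               # denominator det(sI - A_CL): monic, with a root at
                         # s = 0 (the integrator of C_FB), as expected
```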
Does the above derivation of the error-based ADRC in transfer function form look familiar? It should! We have already gone through a similar procedure for the “standard” output-based ADRC in Sect. 4.​2. Through an independent derivation of the error-based variant, we have arrived at (6.31), which has exactly the same form as the feedback transfer function in (4.​20) for the output-based case. This means that we can conveniently reuse here the structures and parameters already defined for output-based ADRC. Consequently, the feedback controller transfer function (6.31) of error-based ADRC can be alternatively written as
$$\displaystyle \begin{aligned} C_{\mathrm{FB}}(s) = \frac{K_{\mathrm{I}}}{s} \cdot \frac { 1 + \displaystyle\sum_{i = 1}^N \beta_i \cdot s^i } { 1 + \displaystyle\sum_{i = 1}^N \alpha_i \cdot s^i } , \end{aligned}$$
which is identical to (4.​22) with exactly the same coefficients α, β, and gain KI, all of which were already introduced and are available from Appendix A.​2.
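As a small illustration of this equivalence, the hedged sketch below converts the numerator and denominator polynomials of CFB(s), for instance as computed numerically in the previous sketch, into the gain KI and the α, β coefficients of the factored form above. The helper name is illustrative; it only assumes coefficients ordered highest power first and a monic denominator with a root at s = 0, as established above.

```python
import numpy as np

def adrc_tf_coefficients(num, den):
    """Given C_FB(s) = num(s)/den(s) with coefficients ordered highest power
    first (as returned by scipy's ss2tf) and den monic with a root at s = 0,
    return K_I and the lists [alpha_1..alpha_N], [beta_1..beta_N] of the
    factored form K_I/s * (1 + sum beta_i s^i) / (1 + sum alpha_i s^i)."""
    num = np.atleast_1d(np.squeeze(num)).astype(float)
    den = np.atleast_1d(np.squeeze(den)).astype(float)
    N = den.size - 2                       # controller order
    d = den[:-1]                           # d(s) = den(s)/s, highest power first
    d0 = d[-1]                             # constant coefficient of d(s)
    K_I = num[-1] / d0                     # = lim_{s->0} s * C_FB(s)
    alpha = (d / d0)[::-1][1:]             # alpha_1..alpha_N (ascending powers)
    beta = (num / num[-1])[::-1][1:N + 1]  # beta_1..beta_N  (ascending powers)
    return K_I, alpha, beta

# K_I, alpha, beta = adrc_tf_coefficients(num, den)  # num, den from the sketch above
```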
The above similarities reveal strong connections between the output- and error-based ADRC structures. By exploring these connections, let us identify the potential benefits that error-based ADRC can bring. The transfer function implementation of error-based ADRC is depicted in Fig. 6.15. Comparing it with the previously introduced transfer function representations of control loops with continuous-time ADRC in Figs. 4.2 and 4.3, one can notice that in the error-based case, since the feedback error is injected as the algorithm's input signal, the control law reduces to just CFB. Because the reference signal does not enter the derivation of the error-based ADRC algorithm directly, there is no need for a prefilter (meaning CPF = 1), and the control signal is simply given by (6.30). Therefore, the important practical issue of transfer function realizability, previously considered in Sect. 4.2, does not arise here, as the sole transfer function CFB in (6.30) is always realizable.
However, the most important reason why the error-based variant of ADRC has been introduced in this book is that it gives the control designer more freedom to construct custom control solutions, since the prefilter CPF is not built into (imposed on) the design in advance, as is the case in "standard" output-based ADRC. If needed, one can prepare a custom prefilter and add it to the reference channel of error-based ADRC in order to shape a desired transient system behavior. Customizing error-based ADRC with a user-defined prefilter, shown in Fig. 6.16 and denoted as F(s), is recommended to obtain true two-degree-of-freedom (2DOF) control. To visualize this, we have prepared an example, the results of which are shown in Fig. 6.17, where one can notice the influence of the user-defined prefilter on the system output; a small sketch of this structure is given below.
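To illustrate the 2DOF idea, the following hedged sketch attaches a simple first-order prefilter F(s) to the reference channel of an error-based ADRC loop and compares the resulting command responses. The example plant, the bandwidths, and the particular choice of F(s) are hypothetical and only serve to show that the prefilter reshapes the command response while leaving the loop CFB(s)·P(s) itself unchanged (the sketch relies on the python-control package).

```python
import numpy as np
import control as ct
from scipy.signal import ss2tf

# C_FB(s) of a second-order error-based ADRC, built as in the previous sketch
# (placeholder values for b0 and the bandwidths).
N, b0, w_cl, w_eso = 2, 1.5, 10.0, 50.0
k1 = np.array([w_cl**2, 2 * w_cl, 1.0])                 # (k^T 1)
l = np.array([3 * w_eso, 3 * w_eso**2, w_eso**3])       # observer gains
A = np.diag(np.ones(N), 1)
b = np.zeros(N + 1); b[N - 1] = b0
c = np.zeros(N + 1); c[0] = 1.0
A_cl = A - np.outer(l, c) - np.outer(b, k1) / b0
num, den = ss2tf(A_cl, l.reshape(-1, 1), (k1 / b0).reshape(1, -1),
                 np.zeros((1, 1)))
C_fb = ct.tf(np.squeeze(num), den)

P = ct.tf([b0], [1, 3, 2])            # hypothetical plant used for illustration
T = ct.feedback(C_fb * P, 1)          # reference-to-output behavior with F(s) = 1
F = ct.tf([1], [0.2, 1])              # hypothetical user-defined prefilter F(s)

t = np.linspace(0, 2, 1000)
_, y = ct.step_response(T, t)         # without prefilter
_, y_f = ct.step_response(F * T, t)   # with prefilter: slower, smoother transient;
                                      # the loop itself (margins, disturbance
                                      # rejection) is identical in both cases.
```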
Regarding the connection between the error-based and output-based ADRC schemes, the presence of CPF in output-based ADRC has no effect on the stability margins, the sensitivity to measurement noise, or the disturbance rejection, but it does have an impact on the reference tracking performance, as shown in Fig. 6.17. In short, it can be seen, for example from Fig. 6.15, that error-based ADRC has a simpler, more compact design, resulting in a simpler implementation (due to the lack of a prefilter), but at the price of reduced control over the transient tracking performance.
Before we move on to the next topic, there is also an interesting characteristic of error-based ADRC that we have not mentioned yet. The reformulation of ADRC in error-based form allows for its direct comparison with existing industrial controllers, like PI and PID, which are also often expressed in error-based form. This is potentially helpful to those who are interested in fostering the adoption of ADRC in industrial practice as a viable alternative to standard controllers. Expressing ADRC in a form that is familiar to field engineers could be one of its selling points.
Cooking Recipe (Error-Based ADRC, Transfer Function Form)
To implement and tune the transfer function form of error-based ADRC derived in this section, the following steps have to be carried out:
1.
Plant modeling: Identify the order N and the gain parameter b0 of the plant model.
 
2.
Controller and observer tuning: When using bandwidth parameterization, the tuning parameters are ωCL and kESO. For first- and second-order error-based ADRC, these values can simply be inserted into the design equations for KI and the α, β coefficients tabulated in Tables A.2.2 and A.2.3 on page 194, respectively. It is also possible to compute these coefficients from the state-space controller and observer gains kT and l obtained otherwise.
 
3.
Implementation: The structure of the control law (6.30) is shown in Fig. 6.15 and consists solely of a feedback controller CFB(s). If needed, a custom prefilter F(s) can optionally be attached in the reference signal path, as shown in Fig. 6.16, to shape the transient tracking performance. The feedback controller CFB(s) contains an integrator which, to avoid windup issues, should be implemented with output limits; a sketch of such a limited integrator is given after this recipe.
 
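As a minimal illustration of the last point, the following sketch realizes the integrator contained in CFB(s), i.e., the KI/s factor of the factored form from Sect. 6.4.2, as a discrete-time forward-Euler integrator whose output (and hence its state) is clamped to user-defined limits. The class name, sample time, and limit values are placeholders; the biproper remainder of CFB(s) would be implemented as a separate discrete filter.

```python
class LimitedIntegrator:
    """Discrete-time realization of the integrator K_I/s contained in C_FB(s),
    with its output clamped to [u_min, u_max] (simple clamping anti-windup)."""

    def __init__(self, K_I, Ts, u_min, u_max):
        self.K_I, self.Ts = K_I, Ts
        self.u_min, self.u_max = u_min, u_max
        self.state = 0.0

    def update(self, e):
        # Forward-Euler integration step, then clamping; the clamped value is
        # stored back as the state, so the integrator cannot wind up beyond
        # the configured limits while the error e keeps the same sign.
        self.state = min(max(self.state + self.K_I * self.Ts * e, self.u_min),
                         self.u_max)
        return self.state

# Placeholder usage at a sample time of Ts = 1 ms; K_I and the limits are
# illustrative values only.
integrator = LimitedIntegrator(K_I=40.0, Ts=1e-3, u_min=-10.0, u_max=10.0)
v = integrator.update(0.5)   # one update step with feedback error e = 0.5
```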
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1
Another interesting approach addressing the issue of unavailability of higher-order terms of the reference signal is the use of so-called error-based ADRC, which we will cover later in Sect. 6.4.
 
2
Here again, there is a special historical meaning behind discussing this particular function, which we will explain in Chap. 7.
 
3
Please notice the sign change next to the total disturbance term, compared to (3.​3).
 