Open Access 2025 | Original Paper | Book Chapter

8. Discrete-Time Linear ADRC

Authors: Gernot Herbst, Rafal Madonski

Published in: Active Disturbance Rejection Control

Publisher: Springer Nature Switzerland


Abstract

We have now reached an important milestone in the book, and with this chapter we are moving from the theoretical foundations of ADRC to its applications. Putting ADRC into practice will almost always mean a software-based implementation, be it in embedded systems or on PLCs. Given the inherently discrete-time nature of the underlying target processor systems, the continuous-time variants of ADRC discussed so far are not yet the answer when asking for an actual implementation. To cross the line between “theory” and “practice” established in the subtitle of this book, we will therefore present several discrete-time variants of linear ADRC with different feature sets in this chapter. All of them are ready for use in industrial practice and range from state-space forms to transfer function forms optimized for a low computational footprint.

8.1 State-Space Form

Based on the continuous-time state-space description of linear ADRC introduced in Chap. 3, we now want to derive a discrete-time version. This will involve controller, observer, and the tuning methods for both components.
Going from continuous to discrete time is a lossy procedure, and multiple approaches exist. They can be divided into three categories [1]: numerical integration, pole and zero mapping, and hold equivalence. For ADRC, a comparison in [2] revealed that zero-order hold (ZOH) equivalence performed best. We will follow that recommendation in this book.
Reviewing the linear ADRC equations summarized in Sect. 3.​1.​5, it becomes obvious that the control law (3.​15) merely consists of static state feedback. Replacing the continuous-time variable t with an integer sampling instant k is therefore already sufficient to arrive at the discrete-time control law:
$$\displaystyle \begin{aligned} u(k) = \frac{1}{b_0} \cdot \left( k_1 \cdot r(k) - \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(k) \right) \quad \text{with}\quad \boldsymbol{k}^{\mathrm{T}} = \begin{pmatrix} k_1 & \cdots & k_N \end{pmatrix} . {} \end{aligned} $$
(8.1)
Here we assume uniform sampling at tk = k ⋅ Ts, i.e., with a sampling interval Ts, and we stick to the common notation of x(k) denoting the value of x at time tk. Please do not mistake the sampling instant k for the controller gains k1,…,N!

8.1.1 Discretization of the Observer

The only dynamic system within ADRC that requires discretization is the observer (ESO). Remember that, structurally, the ESO (3.​16) is an ordinary Luenberger observer. Consequently, we can tackle the problem of discretization in a very general manner. The starting point is the continuous-time observer equation:
$$\displaystyle \begin{aligned} \boldsymbol{\dot{\hat{x}}}(t) = \boldsymbol{A} \cdot \hat{\boldsymbol{x}}(t) + \boldsymbol{b} \cdot u(t) + \boldsymbol{l} \cdot \left( y(t) - \boldsymbol{c}^{\mathrm{T}} \cdot \hat{\boldsymbol{x}}(t) \right). {} \end{aligned} $$
(8.2)
The zero-order hold discretization of (8.2) delivers the following discrete-time equivalent, which is depicted in Fig. 8.1:
$$\displaystyle \begin{gathered} \hat{\boldsymbol{x}}_{\mathrm{p}}(k+1) = \boldsymbol{A}_{\mathrm{d}} \cdot \hat{\boldsymbol{x}}_{\mathrm{p}}(k) + \boldsymbol{b}_{\mathrm{d}} \cdot u(k) + \boldsymbol{l}_{\mathrm{p}} \cdot \left( y(k) - \boldsymbol{c}^{\mathrm{T}} \cdot \hat{\boldsymbol{x}}_{\mathrm{p}}(k) \right) {} \\ \text{with}\quad \boldsymbol{A}_{\mathrm{d}} = \boldsymbol{I} + \sum_{i = 1}^\infty \displaystyle\frac{\boldsymbol{A}^i T_{\mathrm{s}}^i}{i!} ,\ \ \boldsymbol{b}_{\mathrm{d}} = \left( \sum_{i = 1}^\infty \displaystyle\frac{\boldsymbol{A}^{i-1} T_{\mathrm{s}}^i}{i!} \right) \cdot \boldsymbol{b} ,\ \ \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} = \boldsymbol{c}^{\mathrm{T}} . \notag \end{gathered} $$
(8.3)
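Since the ESO's system matrix A is nilpotent (a chain of integrators), the series in (8.3) terminates after finitely many terms and can be evaluated exactly. The following sketch computes the ZOH discretization this way; the values of b0 and Ts are assumed purely for the check, which reproduces the N = 2 matrices given later in this section:

```python
import numpy as np

def zoh_discretize(A, b, Ts):
    """Zero-order hold discretization per (8.3):
    A_d = I + sum_{i>=1} A^i Ts^i / i!,
    b_d = (sum_{i>=1} A^{i-1} Ts^i / i!) * b.
    For the nilpotent ESO matrix A, the series is finite (A^n = 0)."""
    n = A.shape[0]
    Ad = np.eye(n)
    S = np.zeros((n, n))       # accumulates sum A^(i-1) Ts^i / i!
    P = np.eye(n)              # holds A^(i-1)
    fact = 1.0
    for i in range(1, n + 1):  # terms beyond i = n vanish
        fact *= i
        S = S + P * Ts**i / fact
        Ad = Ad + (A @ P) * Ts**i / fact
        P = A @ P
    return Ad, S @ b

# example: N = 2 ESO matrices (b0 and Ts chosen arbitrarily)
b0, Ts = 2.0, 0.01
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
b = np.array([0.0, b0, 0.0])
Ad, bd = zoh_discretize(A, b, Ts)
assert np.allclose(Ad, [[1, Ts, Ts**2 / 2], [0, 1, Ts], [0, 0, 1]])
assert np.allclose(bd, [b0 * Ts**2 / 2, b0 * Ts, 0.0])
```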
We could declare our task finished at this point, but an improvement is possible. Note from (8.3) that the most recent measurement y(k) affects \(\hat {\boldsymbol {x}}\) only at time k + 1: The estimate is predicted one cycle ahead. This observer is therefore also known as a predictive observer [1], which is why we attached a “p” subscript to \(\hat {\boldsymbol {x}}_{\mathrm {p}}\) and lp. While “prediction” might sound good, it means that the output u(k) of our control law (8.1) would be based on measurements made only up to time k − 1, although y(k) might already be available.
As an improvement in this regard, an alternative observer variant known as current observer [1] can be employed—another recommendation for discrete-time ADRC put forward by Miklosovic et al. [2]. A current observer solves the problem of incorporating the most recent measurement y(k) in the current estimate \(\hat {\boldsymbol {x}}(k)\) by splitting the estimation into a two-step procedure (prediction and correction), similar to a discrete-time Kalman filter. The update equations are given as follows, shown in Fig. 8.2:
$$\displaystyle \begin{aligned} \hat{\boldsymbol{x}}(k|k-1) &= \boldsymbol{A}_{\mathrm{d}} \cdot \hat{\boldsymbol{x}}(k-1|k-1) + \boldsymbol{b}_{\mathrm{d}} \cdot u(k-1) &\text{(prediction)}, \\ \hat{\boldsymbol{x}}(k|k) &= \hat{\boldsymbol{x}}(k|k-1) + \boldsymbol{l}_{\mathrm{c}} \cdot \left( y(k) - \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} \cdot \hat{\boldsymbol{x}}(k|k-1) \right) &\text{(correction)}. \end{aligned} $$
Putting the prediction into the correction equation and abbreviating \(\hat {\boldsymbol {x}}(k|k) = \hat {\boldsymbol {x}}(k)\) lead to a combined equation for the current observer, visualized in Fig. 8.3:
$$\displaystyle \begin{aligned} \hat{\boldsymbol{x}}(k) = \left( \boldsymbol{A}_{\mathrm{d}} - \boldsymbol{l}_{\mathrm{c}} \cdot \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} \cdot \boldsymbol{A}_{\mathrm{d}} \right) \cdot \hat{\boldsymbol{x}}(k-1) + \left( \boldsymbol{b}_{\mathrm{d}} - \boldsymbol{l}_{\mathrm{c}} \cdot \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} \cdot \boldsymbol{b}_{\mathrm{d}} \right) \cdot u(k-1) + \boldsymbol{l}_{\mathrm{c}} \cdot y(k) . {} \end{aligned} $$
(8.4)
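The algebraic equivalence of the two-step form and the combined form (8.4) is easy to check numerically. In the following sketch, the matrices correspond to the N = 1 case, and all numeric values (b0 = 5, Ts = 0.01, the gains lc, and the signals) are assumptions chosen only for the check:

```python
import numpy as np

rng = np.random.default_rng(0)
Ts = 0.01
Ad = np.array([[1.0, Ts], [0.0, 1.0]])     # N = 1 discretization
bd = np.array([5.0 * Ts, 0.0])             # b0 = 5 assumed
cd = np.array([1.0, 0.0])
lc = np.array([0.3, 2.0])                  # illustrative observer gains

x_prev = rng.standard_normal(2)            # previous estimate x_hat(k-1)
u_prev, y = 0.7, 1.2                       # previous input, new measurement

# two-step form: predict with the old input, correct with the new measurement
x_pred = Ad @ x_prev + bd * u_prev
x_two = x_pred + lc * (y - cd @ x_pred)

# combined form (8.4)
A_ESO = Ad - np.outer(lc, cd) @ Ad
b_ESO = bd - lc * (cd @ bd)
x_one = A_ESO @ x_prev + b_ESO * u_prev + lc * y

assert np.allclose(x_two, x_one)           # both forms agree
```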
Since the current observer is the preferred variant for ADRC, we will, from now on, just denote the discrete-time observer gains as l = lc. Furthermore, we introduce the abbreviated matrix \(\boldsymbol {A}_{\mathrm {ESO}} = \boldsymbol {A}_{\mathrm {d}} - \boldsymbol {l} \cdot \boldsymbol {c}^{\mathrm {T}}_{\mathrm {d}} \cdot \boldsymbol {A}_{\mathrm {d}}\) and vector \(\boldsymbol {b}_{\mathrm {ESO}} = \boldsymbol {b}_{\mathrm {d}} - \boldsymbol {l} \cdot \boldsymbol {c}^{\mathrm {T}}_{\mathrm {d}} \cdot \boldsymbol {b}_{\mathrm {d}}\) that allow us to give a compact form of the observer equation (8.4).
For an actual (typically software-based) implementation, either Fig. 8.2 or Fig. 8.3 could be used. In this book, we will stick to the latter form, which reduces the computational footprint of the implementation in terms of the number of intermediate variables, coefficients, and arithmetic operations during runtime. Nothing comes for free, however: a drawback of this more efficient form is that the observer gains l are embedded in both the matrix AESO and the vector bESO. Hence, when (re)tuning the observer, AESO and bESO must be updated, as well.
Concluding our derivation of the discrete-time extended state observer, we can now present its equation in compact form, which is shown as part of the block diagram of discrete-time ADRC in Fig. 8.4:
$$\displaystyle \begin{gathered} \hat{\boldsymbol{x}}(k) = \boldsymbol{A}_{\mathrm{ESO}} \cdot \hat{\boldsymbol{x}}(k-1) + \boldsymbol{b}_{\mathrm{ESO}} \cdot u(k-1) + \boldsymbol{l} \cdot y(k) {} \\ \text{with}\quad \boldsymbol{l} = \begin{pmatrix} l_1 & \cdots & l_{N+1} \end{pmatrix}^{\mathrm{T}} ,\quad \boldsymbol{A}_{\mathrm{ESO}} = \boldsymbol{A}_{\mathrm{d}} - \boldsymbol{l} \cdot \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} \cdot \boldsymbol{A}_{\mathrm{d}} ,\quad \boldsymbol{b}_{\mathrm{ESO}} = \boldsymbol{b}_{\mathrm{d}} - \boldsymbol{l} \cdot \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} \cdot \boldsymbol{b}_{\mathrm{d}} , \notag\\ \text{where}\quad \boldsymbol{A}_{\mathrm{d}} = \boldsymbol{I} + \sum_{i = 1}^\infty \displaystyle\frac{\boldsymbol{A}^i T_{\mathrm{s}}^i}{i!} ,\ \ \boldsymbol{b}_{\mathrm{d}} = \left( \sum_{i = 1}^\infty \displaystyle\frac{\boldsymbol{A}^{i-1} T_{\mathrm{s}}^i}{i!} \right) \cdot \boldsymbol{b} ,\ \ \boldsymbol{c}^{\mathrm{T}}_{\mathrm{d}} = \boldsymbol{c}^{\mathrm{T}} . \notag \end{gathered} $$
(8.5)
For first- and second-order ADRCs, we will provide the matrices and vectors required for implementing the observer in detail:
For N = 1:
The continuous-time A and b are
$$\displaystyle \begin{aligned} \boldsymbol{A} = \begin{pmatrix} 0 \ & \ 1 \\[0.5ex] 0 \ & \ 0 \end{pmatrix} \quad \text{and}\quad \boldsymbol{b} = \begin{pmatrix} b_0 \\[0.5ex] 0 \end{pmatrix} . \end{aligned}$$
Zero-order hold discretization yields
$$\displaystyle \begin{aligned} \boldsymbol{A}_{\mathrm{d}} = \begin{pmatrix} 1 \ & \ T_{\mathrm{s}} \\[0.5ex] 0 \ & \ 1 \end{pmatrix} \quad \text{and}\quad \boldsymbol{b}_{\mathrm{d}} = \begin{pmatrix} b_0 T_{\mathrm{s}} \\[0.5ex] 0 \end{pmatrix} . \end{aligned}$$
For the observer equation (8.5), we therefore obtain AESO and bESO as
$$\displaystyle \begin{aligned} \boldsymbol{A}_{\mathrm{ESO}} = \begin{pmatrix} 1 - l_1 \ & \ T_{\mathrm{s}} - l_1 T_{\mathrm{s}} \\[0.5ex] - l_2 \ & \ 1 - l_2 T_{\mathrm{s}} \end{pmatrix} \quad \text{and}\quad \boldsymbol{b}_{\mathrm{ESO}} = \begin{pmatrix} b_0 T_{\mathrm{s}} - l_1 b_0 T_{\mathrm{s}} \\[0.5ex] - l_2 b_0 T_{\mathrm{s}} \end{pmatrix} . \end{aligned}$$
For N = 2:
We similarly start with the continuous-time A and b:
$$\displaystyle \begin{aligned} \boldsymbol{A} = \begin{pmatrix} 0 \ & \ 1 \ & \ 0 \\[0.5ex] 0 \ & \ 0 \ & \ 1 \\[0.5ex] 0 \ & \ 0 \ & \ 0 \end{pmatrix} \quad \text{and}\quad \boldsymbol{b} = \begin{pmatrix} 0 \\[0.5ex] b_0 \\[0.5ex] 0 \end{pmatrix} . \end{aligned}$$
Zero-order hold discretization yields
$$\displaystyle \begin{aligned} \boldsymbol{A}_{\mathrm{d}} = \begin{pmatrix} 1 & T_{\mathrm{s}} & \frac{1}{2} T_{\mathrm{s}}^2 \\[0.5ex] 0 & 1 & T_{\mathrm{s}} \\[0.5ex] 0 & 0 & 1 \end{pmatrix} \quad \text{and}\quad \boldsymbol{b}_{\mathrm{d}} = \begin{pmatrix} \frac{1}{2} b_0 T_{\mathrm{s}}^2 \\[0.5ex] b_0 T_{\mathrm{s}} \\[0.5ex] 0 \end{pmatrix} . \end{aligned}$$
The final results AESO and bESO accordingly read
$$\displaystyle \begin{gathered} \boldsymbol{A}_{\mathrm{ESO}} = \begin{pmatrix} 1 - l_1 \ & \ T_{\mathrm{s}} - l_1 T_{\mathrm{s}} \ & \ \frac{1}{2} T_{\mathrm{s}}^2 - \frac{1}{2} l_1 T_{\mathrm{s}}^2 \\[0.5ex] - l_2 \ & \ 1 - l_2 T_{\mathrm{s}} \ & \ T_{\mathrm{s}} - \frac{1}{2} l_2 T_{\mathrm{s}}^2 \\[0.5ex] - l_3 \ & \ - l_3 T_{\mathrm{s}} \ & \ 1 - \frac{1}{2} l_3 T_{\mathrm{s}}^2 \end{pmatrix} \\ \text{and}\quad \boldsymbol{b}_{\mathrm{ESO}} = \begin{pmatrix} \frac{1}{2} b_0 T_{\mathrm{s}}^2 - \frac{1}{2} l_1 b_0 T_{\mathrm{s}}^2 \\[0.5ex] b_0 T_{\mathrm{s}} - \frac{1}{2} l_2 b_0 T_{\mathrm{s}}^2 \\[0.5ex] - \frac{1}{2} l_3 b_0 T_{\mathrm{s}}^2 \end{pmatrix} . \end{gathered} $$
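A quick numerical plausibility check of these closed-form expressions against the defining matrix products of (8.5); all parameter and gain values are assumed purely for illustration:

```python
import numpy as np

Ts, b0 = 0.01, 2.0
l1, l2, l3 = 0.6, 12.0, 80.0   # placeholder observer gains

Ad = np.array([[1.0, Ts, Ts**2 / 2], [0.0, 1.0, Ts], [0.0, 0.0, 1.0]])
bd = np.array([b0 * Ts**2 / 2, b0 * Ts, 0.0])
cd = np.array([1.0, 0.0, 0.0])
l = np.array([l1, l2, l3])

# defining products from (8.5)
A_ESO = Ad - np.outer(l, cd) @ Ad
b_ESO = bd - l * (cd @ bd)

# closed-form expressions given above
A_sym = np.array([
    [1 - l1, Ts - l1 * Ts, Ts**2 / 2 - l1 * Ts**2 / 2],
    [-l2, 1 - l2 * Ts, Ts - l2 * Ts**2 / 2],
    [-l3, -l3 * Ts, 1 - l3 * Ts**2 / 2],
])
b_sym = np.array([
    b0 * Ts**2 / 2 - l1 * b0 * Ts**2 / 2,
    b0 * Ts - l2 * b0 * Ts**2 / 2,
    -l3 * b0 * Ts**2 / 2,
])
assert np.allclose(A_ESO, A_sym) and np.allclose(b_ESO, b_sym)
```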

8.1.2 Tuning the Discrete-Time Observer

With the current observer approach, the error dynamics depend on the system matrix of the observer \(\boldsymbol {A}_{\mathrm {ESO}} = \boldsymbol {A}_{\mathrm {d}} - \boldsymbol {l} \cdot \boldsymbol {c}^{\mathrm {T}}_{\mathrm {d}} \cdot \boldsymbol {A}_{\mathrm {d}}\):
$$\displaystyle \begin{aligned} \boldsymbol{e_x}(k+1) = \boldsymbol{x}(k+1) - \hat{\boldsymbol{x}}(k+1) = \boldsymbol{A}_{\mathrm{ESO}} \cdot \left( \boldsymbol{x}(k) - \hat{\boldsymbol{x}}(k) \right) . \end{aligned}$$
Similar to the continuous-time case, we can conveniently make use of bandwidth parameterization, placing all eigenvalues of AESO at a common location zESO, which must first be mapped from the s- to the z-domain. For the Nth-order case of ADRC, (N + 1) observer poles must be placed:
$$\displaystyle \begin{aligned} \det\left( z \boldsymbol{I} - \boldsymbol{A}_{\mathrm{ESO}} \right) \stackrel{!}{=} \left( z - z_{\mathrm{ESO}} \right)^{N+1} \quad \text{with}\quad z_{\mathrm{ESO}} = \mathrm{e}^{- k_{\mathrm{ESO}} \cdot \omega_{\mathrm{CL}} \cdot T_{\mathrm{s}}} . {} \end{aligned} $$
(8.6)
As can be seen, (8.6) is—apart from the sampling interval Ts, of course—being fed with the same tuning parameters kESO and ωCL as in the continuous-time case (3.​23). It must then be solved for the observer gains \(\boldsymbol{l} = \begin{pmatrix} l_1 & \cdots & l_{N+1} \end{pmatrix}^{\mathrm{T}}\) embedded in the matrix \(\boldsymbol {A}_{\mathrm {ESO}} = \boldsymbol {A}_{\mathrm {d}} - \boldsymbol {l} \cdot \boldsymbol {c}^{\mathrm {T}}_{\mathrm {d}} \cdot \boldsymbol {A}_{\mathrm {d}}\). To make this more tangible, we will do this in detail for the especially important cases of first- and second-order ADRCs:
For N = 1:
The tuning approach (8.6) is \(\det \left ( z \boldsymbol {I} - \boldsymbol {A}_{\mathrm {ESO}} \right ) \stackrel {!}{=} \left ( z - z_{\mathrm {ESO}} \right )^{2}\). In detail, this gives
$$\displaystyle \begin{aligned} \det \begin{pmatrix} z - 1 + l_1 \ & \ -T_{\mathrm{s}} + l_1 T_{\mathrm{s}} \\[0.5ex] l_2 \ & \ z - 1 + l_2 T_{\mathrm{s}} \end{pmatrix} &\stackrel{!}{=} z^2 - 2z_{\mathrm{ESO}}z + z_{\mathrm{ESO}}^2 ,\notag\\ z^2 + \left( l_1 + l_2T_{\mathrm{s}} - 2 \right) z + (1 - l_1) &\stackrel{!}{=} z^2 - 2z_{\mathrm{ESO}}z + z_{\mathrm{ESO}}^2. {} \end{aligned} $$
(8.7)
By comparing the coefficients of the left- and right-hand sides of (8.7), one obtains for the observer gains l1 and l2
$$\displaystyle \begin{aligned} l_1 = 1 - z_{\mathrm{ESO}}^2 \quad \text{and}\quad l_2 = \frac{1}{T_{\mathrm{s}}} \cdot \left( 1 - z_{\mathrm{ESO}} \right)^2 . {} \end{aligned} $$
(8.8)
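The tuning rule (8.8) can be verified numerically: the characteristic polynomial of AESO must come out as (z − zESO)². A sketch with assumed values of Ts, ωCL, and kESO:

```python
import numpy as np

Ts, omega_CL, k_ESO = 0.01, 10.0, 10.0     # values assumed for the check
z_ESO = np.exp(-k_ESO * omega_CL * Ts)

l1 = 1 - z_ESO**2                          # tuning rule (8.8)
l2 = (1 - z_ESO)**2 / Ts

# build A_ESO for N = 1 and inspect its characteristic polynomial
Ad = np.array([[1.0, Ts], [0.0, 1.0]])
cd = np.array([1.0, 0.0])
A_ESO = Ad - np.outer([l1, l2], cd) @ Ad

# must equal z^2 - 2 z_ESO z + z_ESO^2, cf. (8.7)
assert np.allclose(np.poly(A_ESO), [1.0, -2 * z_ESO, z_ESO**2])
```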
For N = 2:
The tuning approach (8.6) is \(\det \left ( z \boldsymbol {I} - \boldsymbol {A}_{\mathrm {ESO}} \right ) \stackrel {!}{=} \left ( z - z_{\mathrm {ESO}} \right )^{3}\). We expand the right-hand side to
$$\displaystyle \begin{aligned} \left( z - z_{\mathrm{ESO}} \right)^{3} = z^3 - 3z_{\mathrm{ESO}}z^2 + 3z_{\mathrm{ESO}}^2z - z_{\mathrm{ESO}}^3 {} \end{aligned} $$
(8.9)
and the left-hand side to
$$\displaystyle \begin{aligned} &\det\left( z \boldsymbol{I} - \boldsymbol{A}_{\mathrm{ESO}} \right) = \det\begin{pmatrix} z - 1 + l_1 \ & \ - T_{\mathrm{s}} + l_1 T_{\mathrm{s}} \ & \ -\frac{1}{2} T_{\mathrm{s}}^2 + \frac{1}{2} l_1 T_{\mathrm{s}}^2 \\[0.5ex] l_2 \ & \ z - 1 + l_2 T_{\mathrm{s}} \ & \ - T_{\mathrm{s}} + \frac{1}{2} l_2 T_{\mathrm{s}}^2 \\[0.5ex] l_3 \ & \ l_3 T_{\mathrm{s}} \ & \ z - 1 + \frac{1}{2} l_3 T_{\mathrm{s}}^2 \end{pmatrix} = \notag\\ &z^3 + \left(\frac{l_3T_{\mathrm{s}}^2}{2} + l_2T_{\mathrm{s}} + l_1 - 3\right) \!\cdot\! z^2 + \left(\frac{l_3T_{\mathrm{s}}^2}{2} - l_2T_{\mathrm{s}} - 2l_1 + 3\right) \!\cdot\! z + \left( l_1 - 1 \right) . {} \end{aligned} $$
(8.10)
By comparing the coefficients of (8.9) and (8.10), one obtains the tuning rules for the observer gains l1, l2, and l3:
$$\displaystyle \begin{aligned} \begin{gathered} l_1 = 1 - z_{\mathrm{ESO}}^3 , \quad l_2 = \frac{3}{2T_{\mathrm{s}}} \cdot \left(1 - z_{\mathrm{ESO}}\right)^2 \cdot \left(1 + z_{\mathrm{ESO}}\right) ,\\ \text{and}\quad l_3 = \frac{1}{T_{\mathrm{s}}^2} \cdot \left(1 - z_{\mathrm{ESO}}\right)^3 . \end{gathered} {} \end{aligned} $$
(8.11)
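Again, a numerical check (with tuning values assumed for illustration) confirms that (8.11) places all three eigenvalues of AESO at zESO:

```python
import numpy as np

Ts, omega_CL, k_ESO = 0.01, 10.0, 5.0      # values assumed for the check
z = np.exp(-k_ESO * omega_CL * Ts)         # z_ESO

l1 = 1 - z**3                              # tuning rules (8.11)
l2 = 3 / (2 * Ts) * (1 - z)**2 * (1 + z)
l3 = (1 - z)**3 / Ts**2

Ad = np.array([[1.0, Ts, Ts**2 / 2], [0.0, 1.0, Ts], [0.0, 0.0, 1.0]])
cd = np.array([1.0, 0.0, 0.0])
A_ESO = Ad - np.outer([l1, l2, l3], cd) @ Ad

# characteristic polynomial must equal (z - z_ESO)^3, cf. (8.9)
assert np.allclose(np.poly(A_ESO), [1.0, -3 * z, 3 * z**2, -z**3])
```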

8.1.3 Tuning the Discrete-Time Controller

The outer state feedback controller of ADRC consists of a static feedback gain vector kT, which means that there are no dynamics to be discretized for the controller. However, as the states of the virtual plant to be controlled by kT are now being provided by a discrete-time observer, the plant dynamics should be described in discrete time, as well, to have a basis for a discrete-time design of the controller gains. We will therefore follow [3] and derive a discrete-time state-space model of the virtual plant (3.​17) using zero-order hold discretization:
$$\displaystyle \begin{gathered} \hat{\boldsymbol{x}}_{\mathrm{VP}}(k+1) = \boldsymbol{A}_{\mathrm{VP,d}} \cdot \hat{\boldsymbol{x}}_{\mathrm{VP}}(k) + \boldsymbol{b}_{\mathrm{VP,d}} \cdot u(k), \quad y_{\mathrm{VP}}(k) = \boldsymbol{c}^{\mathrm{T}}_{\mathrm{VP,d}} \cdot \hat{\boldsymbol{x}}_{\mathrm{VP}}(k) {} \\ \text{with}\quad \boldsymbol{A}_{\mathrm{VP,d}} = \boldsymbol{I} + \sum_{i = 1}^\infty \displaystyle\frac{\boldsymbol{A}_{\mathrm{VP}}^i T_{\mathrm{s}}^i}{i!} ,\ \ \boldsymbol{b}_{\mathrm{VP,d}} = \left( \sum_{i = 1}^\infty \displaystyle\frac{\boldsymbol{A}_{\mathrm{VP}}^{i-1} T_{\mathrm{s}}^i}{i!} \right) \cdot \boldsymbol{b}_{\mathrm{VP}} ,\ \ \boldsymbol{c}^{\mathrm{T}}_{\mathrm{VP,d}} = \boldsymbol{c}_{\mathrm{VP}}^{\mathrm{T}} . \notag \end{gathered} $$
(8.12)
Using bandwidth parameterization, all poles of the desired closed-loop dynamics are placed at the same location zCL in the z-domain, mapped from the s-plane using \(z_{\mathrm {CL}} = \mathrm {e}^{-\omega _{\mathrm {CL}} T_{\mathrm {s}}}\). The pole placement approach therefore reads
$$\displaystyle \begin{aligned} \det\left( z \boldsymbol{I} - \left( \boldsymbol{A}_{\mathrm{VP,d}} - \boldsymbol{b}_{\mathrm{VP,d}} \boldsymbol{k}^{\mathrm{T}} \right) \right) \stackrel{!}{=} \left( z - z_{\mathrm{CL}} \right)^N \quad \text{with}\quad z_{\mathrm{CL}} = \mathrm{e}^{-\omega_{\mathrm{CL}} T_{\mathrm{s}}} . {} \end{aligned} $$
(8.13)
For the practically important cases of first- and second-order ADRCs, we will give the solution to (8.13) in detail:
For N = 1:
The elements of AVP,d and bVP,d are trivially obtained as
$$\displaystyle \begin{aligned} \boldsymbol{A}_{\mathrm{VP,d}} = \begin{pmatrix} 1 \end{pmatrix} \quad \text{and}\quad \boldsymbol{b}_{\mathrm{VP,d}} = \begin{pmatrix} T_{\mathrm{s}} \end{pmatrix}. \end{aligned}$$
Equation (8.13) thus becomes
$$\displaystyle \begin{aligned} z + \left( T_{\mathrm{s}} k_1 - 1 \right) \stackrel{!}{=} z - z_{\mathrm{CL}} , \end{aligned}$$
and we can easily obtain the solution for the only controller gain k1 as
$$\displaystyle \begin{aligned} k_1 = \frac{1 - z_{\mathrm{CL}} }{ T_{\mathrm{s}} } . {} \end{aligned} $$
(8.14)
Equation (8.14) is the discrete-time controller tuning rule for first-order ADRC based on bandwidth parameterization with \(z_{\mathrm {CL}} = \mathrm {e}^{-\omega _{\mathrm {CL}} T_{\mathrm {s}}}\).
For N = 2:
The elements of AVP,d and bVP,d are
$$\displaystyle \begin{aligned} \boldsymbol{A}_{\mathrm{VP,d}} = \begin{pmatrix} 1 \ & \ T_{\mathrm{s}} \\ 0 \ & \ 1 \end{pmatrix} \quad \text{and}\quad \boldsymbol{b}_{\mathrm{VP,d}} = \begin{pmatrix} \frac{1}{2} T_{\mathrm{s}}^2 \\ T_{\mathrm{s}} \end{pmatrix} . \end{aligned}$$
Equation (8.13) now reads
$$\displaystyle \begin{aligned} z^2 + \left( \frac{k_1 T_{\mathrm{s}}^2}{2} + k_2 T_{\mathrm{s}} - 2 \right) \cdot z + \left( \frac{k_1 T_{\mathrm{s}}^2}{2} - k_2 T_{\mathrm{s}} + 1 \right) \stackrel{!}{=} z^2 - 2 z_{\mathrm{CL}} \cdot z + z_{\mathrm{CL}}^2 , \end{aligned}$$
from which, after solving for k1 and k2, we obtain these controller gains as
$$\displaystyle \begin{aligned} k_1 = \frac{\left( 1 - z_{\mathrm{CL}} \right)^2}{T_{\mathrm{s}}^2} \quad \text{and}\quad k_2 = \frac{4 - \left(1 + z_{\mathrm{CL}} \right)^2}{2 T_{\mathrm{s}}} . {} \end{aligned} $$
(8.15)
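Both tuning rules can be verified by checking that the closed-loop poles of the virtual plant indeed land at zCL. A sketch with values of Ts and ωCL assumed for the check:

```python
import numpy as np

Ts, omega_CL = 0.01, 10.0
z_CL = np.exp(-omega_CL * Ts)

# N = 1: the single closed-loop pole 1 - Ts*k1 lands exactly at z_CL, cf. (8.14)
k1 = (1 - z_CL) / Ts
assert np.isclose(1.0 - Ts * k1, z_CL)

# N = 2: both poles of A_VP,d - b_VP,d k^T land at z_CL, cf. (8.15)
k1 = (1 - z_CL)**2 / Ts**2
k2 = (4 - (1 + z_CL)**2) / (2 * Ts)
A = np.array([[1.0, Ts], [0.0, 1.0]])
b = np.array([Ts**2 / 2, Ts])
Acl = A - np.outer(b, [k1, k2])
assert np.allclose(np.poly(Acl), [1.0, -2 * z_CL, z_CL**2])
```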

8.1.4 Comparison and Summary

Discrete-Time Versus Continuous-Time Tuning
If the sampling frequency is high enough compared to the closed-loop bandwidth, the behavior of the discrete-time virtual plant becomes indistinguishable from its continuous-time version. In this case, which can be seen as the most common one in practice, the state-feedback controller can be tuned with the continuous-time approach, leading to the already known simpler equations for kT. We will demonstrate this for the first- and second-order cases in the following. Remembering the power series characterization of a real-valued exponential function, we can express zCL as
$$\displaystyle \begin{aligned} z_{\mathrm{CL}} = \mathrm{e}^{-\omega_{\mathrm{CL}} T_{\mathrm{s}}} = 1 + \left( -\omega_{\mathrm{CL}} T_{\mathrm{s}} \right) + \frac{\left( -\omega_{\mathrm{CL}} T_{\mathrm{s}} \right)^2}{2!} + \frac{\left( -\omega_{\mathrm{CL}} T_{\mathrm{s}} \right)^3}{3!} + \ldots , \end{aligned}$$
For sampling frequencies well above the desired closed-loop bandwidth, a sufficiently small value ωCLTs ≪ 1 can be assumed (suggestion for a practical rule of thumb: \(\omega _{\mathrm {CL}} T_{\mathrm {s}} \lessapprox 0.05\)). Truncating the series after the second term then reduces zCL to the following approximation:
$$\displaystyle \begin{aligned} z_{\mathrm{CL}} \approx 1 - \omega_{\mathrm{CL}} T_{\mathrm{s}}. {} \end{aligned} $$
(8.16)
Putting (8.16) into (8.14) and (8.15) leads to
$$\displaystyle \begin{aligned} k_1 = \frac{1 - z_{\mathrm{CL}} }{ T_{\mathrm{s}} } \approx \frac{1 - \left( 1 - \omega_{\mathrm{CL}} T_{\mathrm{s}} \right) }{ T_{\mathrm{s}} } = \omega_{\mathrm{CL}} {} \end{aligned} $$
(8.17)
for first-order ADRC, as well as
$$\displaystyle \begin{aligned} \begin{aligned}[b] k_1 &= \frac{\left( 1 - z_{\mathrm{CL}} \right)^2}{T_{\mathrm{s}}^2} \approx \frac{\left( 1 - \left( 1 - \omega_{\mathrm{CL}} T_{\mathrm{s}} \right) \right)^2}{T_{\mathrm{s}}^2} = \omega_{\mathrm{CL}}^2 ,\quad \text{and} \\ k_2 &= \frac{4 - \left(1 + z_{\mathrm{CL}} \right)^2}{2 T_{\mathrm{s}}} \approx \frac{4 - \left(1 + \left( 1 - \omega_{\mathrm{CL}} T_{\mathrm{s}} \right) \right)^2}{2 T_{\mathrm{s}}} = \omega_{\mathrm{CL}} \cdot \left(2 - \frac{\omega_{\mathrm{CL}} T_{\mathrm{s}}}{2} \right) \approx 2 \omega_{\mathrm{CL}} \end{aligned} {} \end{aligned} $$
(8.18)
for the second-order case, thus obtaining the tuning values of (3.​19). These approximations are accurate enough for many practical cases, in which a continuous-time controller tuning can simply be transferred to its discrete-time implementation. This is shown in Fig. 8.5, where (8.17) and (8.18) are compared to (3.​19). As visible, the continuous-time design leads to excessive controller gains only above \(\omega _{\mathrm {CL}} T_{\mathrm {s}} \gtrapprox 0.1\).
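The quality of this approximation can be probed numerically. In the following sketch (sampling interval and bandwidth values assumed for illustration), the exact second-order gains (8.15) stay within a few percent of the continuous-time values ωCL² and 2ωCL as long as ωCLTs ≤ 0.05:

```python
import numpy as np

Ts = 0.001
for omega_CL in (1.0, 10.0, 50.0):         # omega_CL * Ts ranges up to 0.05
    z_CL = np.exp(-omega_CL * Ts)
    k1_exact = (1 - z_CL)**2 / Ts**2       # exact discrete-time gains (8.15)
    k2_exact = (4 - (1 + z_CL)**2) / (2 * Ts)
    # continuous-time tuning (3.19): k1 = omega_CL^2, k2 = 2 omega_CL
    assert abs(k1_exact - omega_CL**2) / omega_CL**2 < 0.05
    assert abs(k2_exact - 2 * omega_CL) / (2 * omega_CL) < 0.05
```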
Similarly, it is possible to examine the relation between discrete- and continuous-time observer gains (when using bandwidth parameterization), especially since the solutions given in (8.8) and (8.11) appear to be very different to the continuous-time gains from (3.​24). If kESOωCLTs ≪ 1 holds, zESO can approximately be expressed as
$$\displaystyle \begin{aligned} z_{\mathrm{ESO}} &= \mathrm{e}^{-k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}}} = 1 + \left( -k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}} \right) + \frac{\left( -k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}} \right)^2}{2!} + \ldots \notag\\ &\approx 1 - k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}} . {} \end{aligned} $$
(8.19)
For the first-order case, putting (8.19) into (8.8) and, where necessary, further simplifying by making use of the assumption kESOωCLTs ≪ 1 yield
$$\displaystyle \begin{aligned} \begin{aligned}[b] l_1 &= 1 - z_{\mathrm{ESO}}^2 \approx 1 - \left( 1 - k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}} \right)^2 \approx T_{\mathrm{s}} \cdot 2 k_{\mathrm{ESO}} \omega_{\mathrm{CL}} ,\quad \text{and} \\ l_2 &= \frac{1}{T_{\mathrm{s}}} \cdot \left( 1 - z_{\mathrm{ESO}} \right)^2 \approx \frac{1}{T_{\mathrm{s}}} \cdot \left( 1 - \left( 1 - k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}} \right) \right)^2 = T_{\mathrm{s}} \cdot k_{\mathrm{ESO}}^2 \omega_{\mathrm{CL}}^2 . \end{aligned} \end{aligned} $$
(8.20)
For the second-order case, we obtain
$$\displaystyle \begin{aligned} \begin{aligned}[b] l_1 &= 1 - z_{\mathrm{ESO}}^3 \approx 1 - \left( 1 - k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}} \right)^3 \approx T_{\mathrm{s}} \cdot 3 k_{\mathrm{ESO}} \omega_{\mathrm{CL}} ,\\ l_2 &= \frac{3}{2T_{\mathrm{s}}} \cdot \left(1 - z_{\mathrm{ESO}}\right)^2 \cdot \left(1 + z_{\mathrm{ESO}}\right) \\ &\approx \frac{3}{2T_{\mathrm{s}}} \cdot \left( k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}} \right)^2 \cdot \left( 2 - k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}} \right) \approx T_{\mathrm{s}} \cdot 3 k_{\mathrm{ESO}}^2 \omega_{\mathrm{CL}}^2 ,\\ l_3 &= \frac{1}{T_{\mathrm{s}}^2} \cdot \left(1 - z_{\mathrm{ESO}}\right)^3 \approx \frac{1}{T_{\mathrm{s}}^2} \cdot \left( 1 - \left( 1 - k_{\mathrm{ESO}} \omega_{\mathrm{CL}} T_{\mathrm{s}} \right) \right)^3 = T_{\mathrm{s}} \cdot k_{\mathrm{ESO}}^3 \omega_{\mathrm{CL}}^3 . \end{aligned} \end{aligned} $$
(8.21)
In both cases (N = 1 and N = 2), the discrete-time observer gains are approximately related to the continuous-time gains from (3.​24) by the factor Ts, if the latter is small enough to warrant the assumption kESOωCLTs ≪ 1. Figure 8.6 compares discrete- and continuous-time observer gains also for cases where this assumption is being violated. The scaling factor Ts can be attributed to the replacement of the continuous-time integrator by an accumulator in the discrete-time observer.
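The Ts-scaled relation to the continuous-time gains can be illustrated numerically for the second-order case; the parameter values below are assumed such that kESOωCLTs = 0.02 ≪ 1 holds:

```python
import numpy as np

Ts, omega_CL, k_ESO = 0.001, 10.0, 2.0     # k_ESO * omega_CL * Ts = 0.02
z = np.exp(-k_ESO * omega_CL * Ts)         # z_ESO

# exact discrete-time gains (8.11)
exact = np.array([1 - z**3,
                  3 / (2 * Ts) * (1 - z)**2 * (1 + z),
                  (1 - z)**3 / Ts**2])
# continuous-time gains (3.24), scaled by the factor Ts
approx = Ts * np.array([3 * k_ESO * omega_CL,
                        3 * (k_ESO * omega_CL)**2,
                        (k_ESO * omega_CL)**3])
# all three gains agree to within a few percent
assert np.all(np.abs(exact - approx) / approx < 0.05)
```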
Let us summarize this comparison of continuous- and discrete-time controller and observer tuning. While it is possible to reuse the continuous-time gains for a discrete-time observer implementation, we do not recommend doing so: for the observer design, it is much easier to violate the assumption kESOωCLTs ≪ 1 due to the relative bandwidth factor kESO. We therefore only use the exact discrete-time observer gains in this book. And although ωCLTs ≪ 1 will probably be fulfilled in most practical cases, we default to using exact solutions of the discrete-time pole placement for the outer control loop in this book, as well. When deriving transfer function forms of discrete-time ADRC in subsequent sections, we will, however, always provide the option to use a custom tuning such as the continuous-time gains as an alternative.
Bottom Line
Incorporating the elements discussed so far in this chapter, related to discrete observer and controller and their tuning, we are ready to present our first “cooking recipe” for discrete-time ADRC, here in its state-space form.
Cooking Recipe (Discrete-Time ADRC, State-Space Form)
To implement and tune the discrete-time state-space variant of ADRC that we have derived in this section, the following steps have to be carried out:
1.
Plant modeling: Identify the order N and the gain parameter b0 of the plant model.
 
2a.
Controller tuning: When using bandwidth parameterization, the tuning parameter is ωCL. The N controller gains k1,…,N for a discrete-time implementation with sampling interval Ts are obtained by solving (8.13). For sufficiently small sampling intervals ωCLTs ≪ 1, which is the most likely case, it is also possible to simply use the continuous-time controller tuning (3.​19).
 
2b.
Observer tuning: For bandwidth parameterization, the additional tuning parameter kESO (relative observer bandwidth) must be chosen. The (N + 1) observer gains l1,…,N+1 are then obtained by solving (8.6).
 
3.
Implementation: The discrete-time state-space form of ADRC consists of the control law (8.1) and the observer equation (8.5)—to be implemented as shown in Fig. 8.4.
 
4.
Customization: A practical controller will at least require an add-on mechanism for control signal limitation and windup protection, which we will cover in Sect. 9.​1. We can already tell that this variant is well prepared. If a limited control signal value ulim is computed outside of the control law, windup can be avoided by simply feeding the value ulim(k − 1) instead of u(k − 1) back into the observer (8.5).
 
All details required for implementing and tuning ADRC are once more compiled in the Appendix, for the discrete-time state-space variant in Appendix A.​5.
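As a minimal illustration of the cooking recipe, the following sketch implements a first-order discrete-time ADRC in state-space form and applies it to a simulated first-order lag plant. All plant and tuning values are assumptions chosen for the demonstration, not part of the recipe itself:

```python
import numpy as np

# demo plant: first-order lag dx/dt = -a*x + b*u (values assumed)
a_true, b_true = 1.0, 2.0
Ts = 0.01

# step 1: plant modeling: order N = 1, gain estimate b0
b0 = 2.0

# step 2: bandwidth tuning
omega_CL, k_ESO = 5.0, 10.0
z_CL = np.exp(-omega_CL * Ts)
z_ESO = np.exp(-k_ESO * omega_CL * Ts)
k1 = (1 - z_CL) / Ts                                  # controller gain (8.14)
l = np.array([1 - z_ESO**2, (1 - z_ESO)**2 / Ts])     # observer gains (8.8)

# step 3: observer matrices for (8.5), N = 1
Ad = np.array([[1.0, Ts], [0.0, 1.0]])
bd = np.array([b0 * Ts, 0.0])
cd = np.array([1.0, 0.0])
A_ESO = Ad - np.outer(l, cd) @ Ad
b_ESO = bd - l * (cd @ bd)

# closed-loop simulation; plant discretized exactly via ZOH
ad_p = np.exp(-a_true * Ts)
bd_p = b_true / a_true * (1 - ad_p)
x_plant, x_hat, u, r = 0.0, np.zeros(2), 0.0, 1.0
for k in range(2000):                                 # 20 s of simulated time
    y = x_plant
    x_hat = A_ESO @ x_hat + b_ESO * u + l * y         # observer update (8.5)
    u = (k1 * r - k1 * x_hat[0] - x_hat[1]) / b0      # control law (8.1)
    x_plant = ad_p * x_plant + bd_p * u               # plant step

assert abs(x_plant - r) < 0.01                        # settled at the setpoint
```

Note how the observer uses u(k − 1) but already y(k), as required by the current observer structure; a control signal limitation per step 4 would simply feed the limited value back into the observer instead of u.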

8.2 Transfer Function Form

ADRC lines up as a general-purpose controller, an alternative to classical PI or PID controllers, which are almost never implemented in a state-space form as done for ADRC in the previous section. This could present a barrier to adopting ADRC for several reasons:
  • People are more familiar with transfer function forms of controllers and find it difficult to draw comparisons.
  • Existing (software) building blocks for transfer function controller implementations cannot be reused.
  • At least for N ≥ 2, the matrix multiplication in the observer (8.5) comes with a higher computational footprint than a typical discrete-time transfer function implementation of a controller.
In this section we will therefore—as done in [4]—introduce a transfer function form of discrete-time ADRC that exhibits great similarity to “classical” controller implementations, addressing all of the problems mentioned above.
Derivation
In contrast to a state-space form, transfer functions directly relate the input signals r and y of the controller to its output signal u. When deriving a transfer function representation, the state variables \(\hat {\boldsymbol {x}}\) will therefore not be available anymore.
To obtain discrete-time transfer functions, we start by transforming the time-domain equations of controller (8.1) and observer (8.5) into z-domain:
$$\displaystyle \begin{gathered} u(z) = \frac{1}{b_0} \cdot \left( k_1 \cdot r(z) - \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(z) \right) {} , \end{gathered} $$
(8.22)
$$\displaystyle \begin{gathered} \hat{\boldsymbol{x}}(z) = z^{-1} \cdot \boldsymbol{A}_{\mathrm{ESO}} \cdot \hat{\boldsymbol{x}}(z) + z^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot u(z) + \boldsymbol{l} \cdot y(z) . {} \end{gathered} $$
(8.23)
If we put (8.22) into (8.23), we obtain the closed-loop dynamics of the discrete-time observer, introducing the matrix ΦESO as a shorthand notation:
$$\displaystyle \begin{gathered} \hat{\boldsymbol{x}}(z) = \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \left( z^{-1} \cdot \frac{k_1}{b_0} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot r(z) + \boldsymbol{l} \cdot y(z) \right) {} \end{gathered} $$
(8.24)
$$\displaystyle \begin{gathered} \text{with}\quad \boldsymbol{ \Phi}_{\mathrm{ESO}} = \left( \boldsymbol{I} - z^{-1} \cdot \left( \boldsymbol{A}_{\mathrm{ESO}} - \frac{1}{b_0} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \right) \right)^{-1} . {} \end{gathered} $$
(8.25)
This enables us to eliminate \(\hat {\boldsymbol {x}}(z)\) by putting (8.24) back into (8.22). The control law including the closed-loop observer dynamics now reads
$$\displaystyle \begin{aligned} u(z) &= \frac{k_1}{b_0} \cdot r(z) - \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \left( z^{-1} \cdot \frac{k_1}{b_0} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot r(z) + \boldsymbol{l} \cdot y(z) \right) \\ &= \frac{k_1}{b_0} \cdot \left( 1 - z^{-1} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \frac{1}{b_0} \cdot \boldsymbol{b}_{\mathrm{ESO}} \right) \cdot r(z) \\ &\quad - \left( \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \boldsymbol{l} \right) \cdot y(z) . {} \end{aligned} $$
(8.26)
We could already read off two transfer functions relating r and y to u from (8.26) and declare this finished, but we do not. For a good reason, which will be uncovered soon, things need to be made more complicated at first by factoring out the transfer function from y(z) to u(z) in (8.26). This yields
$$\displaystyle \begin{aligned} u(z) = &\left( \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \boldsymbol{l} \right) \notag\\ & \cdot \left[ \frac{ \frac{k_1}{b_0} \cdot \left( 1 - z^{-1} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \frac{1}{b_0} \cdot \boldsymbol{b}_{\mathrm{ESO}} \right) }{ \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \boldsymbol{l} } \cdot r(z) - y(z) \right] . {} \end{aligned} $$
(8.27)
Equation (8.27) very much resembles a typical discrete-time controller structure with two transfer functions, consisting of a feedback controller CFB(z) and a prefilter CPF(z), shown in Fig. 8.7:
$$\displaystyle \begin{aligned} u(z) = C_{\mathrm{FB}}(z) \cdot \left[ C_{\mathrm{PF}}(z) \cdot r(z) - y(z) \right] . {} \end{aligned} $$
(8.28)
Comparing (8.27) and (8.28), we can now read off the transfer functions of the feedback controller CFB(z) and the prefilter CPF(z):
$$\displaystyle \begin{aligned} C_{\mathrm{FB}}(z) &= \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \boldsymbol{l}, {} \end{aligned} $$
(8.29)
$$\displaystyle \begin{aligned} C_{\mathrm{PF}}(z) &= \frac{ \frac{k_1}{b_0} \cdot \left( 1 - z^{-1} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \frac{1}{b_0} \cdot \boldsymbol{b}_{\mathrm{ESO}} \right) }{ \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \boldsymbol{ \Phi}_{\mathrm{ESO}} \cdot \boldsymbol{l} } {} . \end{aligned} $$
(8.30)
Remember from the continuous-time transfer function representation developed in Sect. 4.​2 that the feedback controller includes an integrator. Consequently CFB(z) will feature a discrete-time integrator pole at z = 1. It is not strictly required, but beneficial to factor this pole out of CFB(z):
$$\displaystyle \begin{aligned} C_{\mathrm{FB}}(z) = \Delta C_{\mathrm{FB}}(z) \cdot \frac{1}{1 - z^{-1}} {} . \end{aligned} $$
(8.31)
That way, CFB(z) can be implemented as a series connection of a discrete-time filter and an accumulator, which has several advantages:
  • The structure of (8.31) better matches its continuous-time counterpart CFB(s) in (4.​22), where the integrator was factored out, as well.
  • A superfluous (dependent) coefficient of the denominator polynomial of ΔCFB(z) gets eliminated, saving one parameter to store and one multiplication at runtime.
  • The accumulator in (8.31) can be easily implemented as a clamped integrator, providing simple but quite effective windup protection—ΔCFB(z) has been brought into the so-called incremental or velocity form [5, 6].
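The series connection of (8.31) can be sketched as a short Python snippet: a direct-form-II filter for ΔCFB(z) followed by a clamped accumulator. The coefficient values used below are placeholders chosen only to exercise the structure, not values from Appendix A.6; in a real implementation, the α and β arrays would come from the design equations there.

```python
import numpy as np

class ClampedIntegratorController:
    """Delta C_FB(z) as a direct-form-II filter, cascaded with a clamped
    accumulator (discrete integrator with built-in windup protection)."""

    def __init__(self, beta, alpha, u_min, u_max):
        self.beta = np.asarray(beta)    # numerator of Delta C_FB (length N+1)
        self.alpha = np.asarray(alpha)  # denominator without leading 1 (length N)
        self.w = np.zeros(len(alpha))   # filter delay line
        self.acc = 0.0                  # accumulator state = controller output
        self.u_min, self.u_max = u_min, u_max

    def step(self, error):
        # Direct form II: w0 = e(k) - sum(alpha_i * w(k-i))
        w0 = error - self.alpha @ self.w
        # Filter output: v(k) = beta_0 * w0 + sum(beta_i * w(k-i))
        v = self.beta[0] * w0 + self.beta[1:] @ self.w
        self.w = np.concatenate(([w0], self.w[:-1]))
        # Clamped accumulator: integrate v, but never leave the limits
        self.acc = float(np.clip(self.acc + v, self.u_min, self.u_max))
        return self.acc

# Saturating step response: a constant error drives the accumulator into
# its upper limit, where it stays without winding up.
ctrl = ClampedIntegratorController(beta=[1.0, -0.5], alpha=[-0.2],
                                   u_min=-1.0, u_max=1.0)
outs = [ctrl.step(1.0) for _ in range(50)]
```

Because the accumulator state itself is clamped, removing the error later lets the output react immediately, which is exactly the windup protection the velocity form provides.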
The two transfer functions can now be represented as given in (8.32). Their coefficients result from putting the controller and observer gains into (8.29) and (8.30). Ready-to-use equations for α, β, γ are listed in Appendix A.​6. We factored out \(\frac {1}{\beta _0}\) in CPF(z), enabling reuse of the β coefficients and more compact equations for the γ coefficients.
$$\displaystyle \begin{aligned} C_{\mathrm{FB}}(z) = \frac { \displaystyle\sum_{i = 0}^N \beta_i z^{-i} } { 1 + \displaystyle\sum_{i = 1}^N \alpha_i z^{-i} } \cdot \frac{1}{1 - z^{-1}} \quad \text{and}\quad C_{\mathrm{PF}}(z) = \frac { \displaystyle \frac{1}{\beta_0} \sum_{i = 0}^{N+1} \gamma_i z^{-i} } { 1 + \displaystyle \frac{1}{\beta_0} \sum_{i = 1}^N \beta_i z^{-i} } . {} \end{aligned} $$
(8.32)
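The coefficient extraction behind (8.32) can be sketched numerically. The snippet below builds a first-order (N = 1) design and recovers the denominator and β coefficients of CFB(z) directly from the matrix expression (8.29). The ZOH model, the current-observer gain formulas for l, and the tuning values (Ts, b0, ωCL, kESO) are example assumptions reproduced from the state-space material, not results of this section; the closed-form coefficient equations remain those tabulated in Appendix A.6.

```python
import numpy as np

Ts, b0 = 0.01, 2.0                       # sampling interval and plant gain
w_cl, k_eso = 10.0, 5.0                  # bandwidth-parameterization tuning
z_eso = np.exp(-k_eso * w_cl * Ts)       # discrete-time observer pole

# Assumed first-order (N = 1) design: ZOH model of the extended plant and
# current-observer gains placing all ESO poles at z_eso.
Ad = np.array([[1.0, Ts], [0.0, 1.0]])
bd = np.array([b0 * Ts, 0.0])
l = np.array([1.0 - z_eso**2, (1.0 - z_eso)**2 / Ts])
E = np.eye(2) - np.outer(l, [1.0, 0.0])  # current-observer correction
A_ESO, b_ESO = E @ Ad, E @ bd
k_row = np.array([w_cl, 1.0])            # (k^T  1) with k_1 = omega_CL

# Closed-loop observer matrix inside Phi_ESO of (8.25). Since
# C_FB(z) = z * G(z) with G(z) = (k^T 1)(zI - M)^{-1} l / b0, the
# denominator of C_FB is the characteristic polynomial of M.
M = A_ESO - np.outer(b_ESO, k_row) / b0
den = np.poly(M)                         # monic in z: [1, alpha-part]
assert abs(np.polyval(den, 1.0)) < 1e-9  # integrator pole at z = 1

# Recover the degree-N numerator b(z) = den(z) * G(z) by exact polynomial fit.
G = lambda z: (k_row @ np.linalg.solve(z * np.eye(2) - M, l)) / b0
zs = np.array([2.0, 3.0, 4.0])
beta = np.polyfit(zs, [np.polyval(den, z) * G(z) for z in zs], 1)

# Factor the integrator out, as in (8.31): den(z) = (z - 1) * den_red(z).
den_red, _ = np.polydiv(den, np.array([1.0, -1.0]))

# Cross-check the coefficient form of (8.32) against the matrix form (8.29).
z0 = 0.7
C_FB_coeff = z0 * np.polyval(beta, z0) / np.polyval(den, z0)
C_FB_matrix = (k_row @ np.linalg.inv(np.eye(2) - M / z0) @ l) / b0
assert np.isclose(C_FB_coeff, C_FB_matrix, rtol=1e-8)
```

The first assertion confirms the integrator pole at z = 1 claimed above, so the deflation into ΔCFB(z) and the accumulator is well defined.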
Discussion
We can now comment on our preference for the transfer function structure of (8.27) over that of (8.26). As the feedback controller CFB(z) contains an integrator, it must be fed with the control error only. This ensures that the integrator remains constant in steady state once the control error has been eliminated. In the structure of (8.26), both transfer functions would contain an integrator, and their integrator states would run away to infinity for any nonzero reference r. Not a problem in theory, but very much so in practice!
In contrast to the continuous-time case in Sect. 4.​2, no realizability issues for CPF(z) arise from the fact that its numerator polynomial is of higher order than its denominator. The introduction of a third (feedforward) transfer function is not necessary in the discrete-time domain.
If the runtime performance of a controller implementation is important, this transfer function form is more efficient than the state-space variant of Sect. 8.1. The number of multiplications and additions only scales linearly with N here, compared to a quadratic growth of operations in the state-space case [7].
Compared to the state-space form, the possible feedback path of a limited control signal value was removed while deriving the transfer functions, resulting in less flexibility regarding control signal limitation and anti-windup performance. In Sect. 8.3, we will introduce a third discrete-time form, combining the efficiency of transfer functions with the flexibility of the state-space form.
Cooking Recipe (Discrete-Time ADRC, Transfer Function Form)
To implement and tune the discrete-time transfer function form of ADRC derived in this section, the following steps have to be carried out:
1.
Plant modeling: Identify the order N and the gain parameter b0 of the plant model.
 
2.
Controller and observer tuning: When using bandwidth parameterization, the tuning parameters are ωCL and kESO. For first- and second-order ADRCs, these values (along with the sampling interval Ts and the plant gain b0) can simply be put in the design equations for the α, β, γ coefficients tabulated in Appendix A.​6. It is also possible to compute these coefficients from the otherwise obtained state-space controller and observer gains kT and l.
 
3.
Implementation: The structure of the control law (8.28), shown in Fig. 8.7, is well known from existing control engineering practice. It comprises a reference signal prefilter CPF(z) and a feedback controller CFB(z), preferably with a factored-out integrator as in (8.32), which should be implemented with output limits.
 
As always in this book, all relevant equations and details are once more compiled in the Appendix, for this discrete-time transfer function form in Appendix A.​6.

8.3 Dual-Feedback Transfer Function Form

When the observer dynamics were folded into transfer functions during the derivation in Sect. 8.2, the feedback path of the controller output back into the observer was removed. In this section, we will present an alternative transfer function form of discrete-time ADRC introduced in [7]. It preserves the feedback path of the previous controller output u(k − 1). This makes it possible to apply an arbitrary control signal limitation outside of the actual controller and to feed the limited signal ulim(k − 1) back, as in the state-space variant. At the same time, it will enable an even more efficient implementation than the transfer function variant of Sect. 8.2.
Derivation
Let us once more start by transforming the discrete-time controller (8.1) and observer (8.5) into z-domain:
$$\displaystyle \begin{gathered} u(z) = \frac{1}{b_0} \cdot \left( k_1 \cdot r(z) - \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(z) \right) , {} \end{gathered} $$
(8.33)
$$\displaystyle \begin{gathered} \hat{\boldsymbol{x}}(z) = z^{-1} \cdot \boldsymbol{A}_{\mathrm{ESO}} \cdot \hat{\boldsymbol{x}}(z) + z^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot u(z) + \boldsymbol{l} \cdot y(z) . {} \end{gathered} $$
(8.34)
We solve (8.34) for \(\hat {\boldsymbol {x}}(z)\):
$$\displaystyle \begin{aligned} \hat{\boldsymbol{x}}(z) = \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right)^{-1} \cdot \left( z^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot u(z) + \boldsymbol{l} \cdot y(z) \right) . {} \end{aligned} $$
(8.35)
Equation (8.35) relates the observer inputs u(z) and y(z) to its output, the estimate \(\hat {\boldsymbol {x}}(z)\). We have already mentioned multiple times that it will be beneficial to feed a limited control signal value ulim(z) into the observer to prevent possible windup issues in case a control signal limitation is being used (as one should). Replacing u(z) with ulim(z) in (8.35) and then putting this equation into (8.33) yields
$$\displaystyle \begin{aligned} u(z) = \frac{1}{b_0} \cdot \left( k_1 \cdot r(z) - \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right)^{-1} \cdot \left( z^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot u_{\mathrm{lim}}(z) + \boldsymbol{l} \cdot y(z) \right) \right). \end{aligned} $$
(8.36)
At this point we have obtained a control law consisting of three transfer functions (one of them being only a gain factor of \(\frac {k_1}{b_0}\)) that relate the controller inputs r(z), y(z), and ulim(z) to the controller output u(z), which we can also write as follows:
$$\displaystyle \begin{gathered} u(z) = \frac{k_1}{b_0} \cdot r(z) - C_{\mathrm{FBy}}(z) \cdot y(z) + C_{\mathrm{FBu}}(z) \cdot u_{\mathrm{lim}}(z) {} \end{gathered} $$
(8.37)
$$\displaystyle \begin{gathered} \text{with}\quad C_{\mathrm{FBy}}(z) = \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right)^{-1} \cdot \boldsymbol{l} {} \end{gathered} $$
(8.38)
$$\displaystyle \begin{gathered} \text{and}\quad C_{\mathrm{FBu}}(z) = -\frac{z^{-1}}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right)^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} . {} \end{gathered} $$
(8.39)
The two feedback transfer functions CFBy(z) and CFBu(z) of the control law (8.37), which is shown as a block diagram in Fig. 8.8, exhibit the following structure:
$$\displaystyle \begin{aligned} C_{\mathrm{FBy}}(z) = \frac{ \displaystyle\sum_{i = 0}^{N} \beta_i z^{-i} }{ 1 + \displaystyle\sum_{i = 1}^{N+1} \alpha_i z^{-i} } \quad \text{and}\quad C_{\mathrm{FBu}}(z) = z^{-1} \cdot \frac{ \displaystyle\sum_{i = 0}^{N} \gamma_i z^{-i } }{ 1 + \displaystyle\sum_{i = 1}^{N+1} \alpha_i z^{-i} } . {} \end{aligned} $$
(8.40)
Their coefficients are obtained by putting the controller and observer gains kT and l into (8.38) and (8.39). Ready-to-use equations for α, β, γ are listed in Appendix A.​7.
Discussion
In a “classical” control loop, there is only one feedback transfer function, which is fed with the control error. The dual feedback of ulim(z) and y(z) in this ADRC form is what gave it its name. Dedicated feedback of ulim(z) ensures exactly the same flexibility and behavior as the state-space form regarding control signal limitation and windup protection.
If no control signal limitation is used, ulim(z) = u(z) is fed back when computing the updated controller output u(z). In any case, no algebraic loop is created here, as there is a one-step delay (z−1) in CFBu(z).
Note that in (8.37) CFBy(z) ⋅ y(z) is being fed back with a negative sign, whereas we chose positive feedback for CFBu(z) ⋅ u(z). The latter is a sign change compared to the original derivation in [7], but for a good reason which we will uncover below. Both transfer functions CFBy(z) and CFBu(z) share the same denominator. Assuming bandwidth parameterization, we can express (8.38) and (8.39) as
$$\displaystyle \begin{aligned} C_{\mathrm{FBy}}(z) & = \frac{1}{b_0} \cdot \frac{ \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \operatorname{adj} \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right) \cdot \boldsymbol{l} }{ \left( 1 - z^{-1} z_{\mathrm{ESO}} \right)^{N+1} } ,\\ C_{\mathrm{FBu}}(z) &= -\frac{z^{-1}}{b_0} \cdot \frac{ \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \operatorname{adj} \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right) \cdot \boldsymbol{b}_{\mathrm{ESO}} }{ \left( 1 - z^{-1} z_{\mathrm{ESO}} \right)^{N+1} } . \end{aligned} $$
One can now clearly see that their denominator is the characteristic polynomial of the discrete-time observer dynamics as designed in the tuning approach (8.6). This means that both transfer functions have all their poles at \(z_{\mathrm {ESO}} = \mathrm {e}^{- k_{\mathrm {ESO}} \cdot \omega _{\mathrm {CL}} \cdot T_{\mathrm {s}}}\).
So—where is the integrator?
As it turns out, CFBu(z) has a steady-state gain of CFBu(z → 1) = 1. Look at Fig. 8.8 once more. Closing the inner loop around the control signal limiter with positive feedback and CFBu(z → 1) = 1 means the inner loop acts as a limited accumulator with filtered feedback. Without a limiter, its transfer function is
$$\displaystyle \begin{aligned} \frac{1}{1 - C_{\mathrm{FBu}}(z)} = \frac{1}{1 - z^{-1} \cdot \Bigg( \underbrace{ -\frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right)^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} }_{\text{filter with unity steady-state gain}} \Bigg)} , \end{aligned} $$
from which the similarity with a discrete-time accumulator \(\frac {1}{1 - z^{-1}}\) should become obvious. Clearly indicating this behavior was our reason to use positive feedback for CFBu(z) ⋅ u(z) in the control law (8.37).
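The claims of this discussion—all poles of CFBy(z) and CFBu(z) at zESO, and the unity steady-state gain of CFBu(z)—can be checked numerically. The snippet below again uses an assumed first-order (N = 1) ZOH/current-observer design with example tuning values; the formulas for Ad, bd, and l are taken over from the state-space material as assumptions.

```python
import numpy as np

Ts, b0 = 0.01, 2.0
w_cl, k_eso = 10.0, 5.0
z_eso = np.exp(-k_eso * w_cl * Ts)       # all observer poles placed here

# Assumed first-order ZOH/current-observer design (see the state-space form).
Ad = np.array([[1.0, Ts], [0.0, 1.0]])
bd = np.array([b0 * Ts, 0.0])
l = np.array([1.0 - z_eso**2, (1.0 - z_eso)**2 / Ts])
E = np.eye(2) - np.outer(l, [1.0, 0.0])  # current-observer correction
A_ESO, b_ESO = E @ Ad, E @ bd
k_row = np.array([w_cl, 1.0])            # (k^T  1) with k_1 = omega_CL

def C_FBy(z):
    """Feedback transfer function (8.38), evaluated from the matrices."""
    return (k_row @ np.linalg.inv(np.eye(2) - A_ESO / z) @ l) / b0

def C_FBu(z):
    """Feedback transfer function (8.39), evaluated from the matrices."""
    return -(k_row @ np.linalg.inv(np.eye(2) - A_ESO / z) @ b_ESO) / (b0 * z)

# Shared denominator of (8.40): the characteristic polynomial of A_ESO,
# i.e. both transfer functions have all their poles at z_ESO.
den = np.poly(A_ESO)
assert np.allclose(np.linalg.eigvals(A_ESO), z_eso, atol=1e-6)

# beta/gamma numerators of (8.40), recovered by exact degree-1 fits.
zs = np.array([2.0, 3.0, 4.0])
beta = np.polyfit(zs, [np.polyval(den, z) * C_FBy(z) / z for z in zs], 1)
gamma = np.polyfit(zs, [np.polyval(den, z) * C_FBu(z) for z in zs], 1)

# The "hidden integrator": C_FBu has unity steady-state gain.
assert np.isclose(C_FBu(1.0), 1.0)
assert np.isclose(gamma.sum() / den.sum(), 1.0)
```

The last assertion restates the unity gain in coefficient form, (γ0 + γ1)∕(1 + α1 + α2) = 1, which can serve as a plausibility check for tabulated coefficients.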
Efficient Implementation
With two transfer functions, the dual-feedback variant of discrete-time ADRC is already similarly efficient as the transfer function approach from Sect. 8.2, while maintaining the exact behavior and features of the state-space form from Sect. 8.1. Yet, due to the identical denominators of CFBy(z) and CFBu(z), the implementation can be made even more efficient by making use of superposition:
$$\displaystyle \begin{aligned} u(z) = \frac{k_1}{b_0} \cdot r(z) + \frac{ - \left( \displaystyle\sum_{i = 0}^{N} \beta_i z^{-i} \right) \cdot y(z) + z^{-1} \cdot \left( \displaystyle\sum_{i = 0}^{N} \gamma_i z^{-i } \right) \cdot u_{\mathrm{lim}}(z) }{ 1 + \displaystyle\sum_{i = 1}^{N+1} \alpha_i z^{-i} } . {} \end{aligned} $$
(8.41)
And finally, as proposed in [7], the number of required storage variables at runtime can be made as low as for the state-space form by choosing a (transposed) “direct form II,” known from digital signal processing [8]. This yields an implementation with the lowest number of both storage variables and algebraic operations among all ADRC variants known to date, accordingly referred to as the “minimum-footprint” form in [7]. For first- and second-order ADRCs, the resulting controller structure is shown in Fig. 8.9. The above derivation and deliberations can be summarized in the following “cooking recipe.”
Cooking Recipe (Discrete-Time ADRC, Dual-Feedback TF Form)
To implement and tune this alternative discrete-time transfer function form, the following steps have to be carried out:
1.
Plant modeling: Identify the order N and the gain parameter b0 of the plant model.
 
2.
Controller and observer tuning: For bandwidth parameterization, the tuning parameters are ωCL and kESO. Together with the sampling interval Ts and the plant gain b0, these values can simply be put in design equations for the α, β, γ coefficients tabulated in Appendix A.​7 for the first- and second-order cases. It is also possible to compute these coefficients from the otherwise obtained state-space controller and observer gains kT and l.
 
3.
Implementation: The controller structure of this dual-feedback transfer function variant is shown in Fig. 8.8. The control law (8.37) can either be implemented using the two separate transfer functions CFBy(z) and CFBu(z) from (8.40) or in the more efficient form (8.41) shown in Fig. 8.9.
 
4.
Customization: As in the state-space form, an arbitrary control signal limitation add-on block can be added to a control loop with this ADRC variant, computing ulim from u. Feeding back ulim through the CFBu(z) transfer function is a simple yet effective measure of preventing windup issues.
 
As for all other ADRC variants in this book, all relevant equations and details are compiled in the Appendix, for this dual-feedback transfer function form in Appendix A.​7.
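The recipe above can be sketched end to end in Python: coefficients are extracted numerically from (8.38) and (8.39), the control law (8.41) is realized in transposed direct form II with two delay states, and the loop is closed over a simple first-order plant with a saturation block between u and ulim. The first-order ZOH/current-observer design values below are example assumptions, and a plant of the form ẏ = b0u + d is assumed (for which the Euler step is ZOH-exact, since the plant is an integrator).

```python
import numpy as np

Ts, b0 = 0.01, 2.0
w_cl, k_eso = 10.0, 5.0
z_eso = np.exp(-k_eso * w_cl * Ts)

# Assumed first-order (N = 1) ZOH/current-observer design (state-space form).
Ad = np.array([[1.0, Ts], [0.0, 1.0]])
bd = np.array([b0 * Ts, 0.0])
l = np.array([1.0 - z_eso**2, (1.0 - z_eso)**2 / Ts])
E = np.eye(2) - np.outer(l, [1.0, 0.0])
A_ESO, b_ESO = E @ Ad, E @ bd
k_row = np.array([w_cl, 1.0])            # (k^T  1) with k_1 = omega_CL

# alpha/beta/gamma coefficients of (8.40)/(8.41), extracted numerically over
# the common denominator (closed-form equations: Appendix A.7).
den = np.poly(A_ESO)                                   # [1, a1, a2]
Gy = lambda z: (k_row @ np.linalg.solve(z * np.eye(2) - A_ESO, l)) / b0
Gu = lambda z: -(k_row @ np.linalg.solve(z * np.eye(2) - A_ESO, b_ESO)) / b0
zs = np.array([2.0, 3.0, 4.0])
be0, be1 = np.polyfit(zs, [np.polyval(den, z) * Gy(z) for z in zs], 1)
g0, g1 = np.polyfit(zs, [np.polyval(den, z) * Gu(z) for z in zs], 1)
a1, a2 = den[1], den[2]

# Closed-loop simulation: transposed direct form II with two delay states
# (s1, s2), an (here inactive) limiter, and a first-order plant with a
# constant input disturbance d.
u_min, u_max = -10.0, 10.0
s1 = s2 = 0.0
y, r, d = 0.0, 1.0, -0.5
for _ in range(1000):
    v = -be0 * y + s1                     # rational part of (8.41)
    u = (w_cl / b0) * r + v               # add the k1/b0 * r feedforward
    ul = float(np.clip(u, u_min, u_max))  # arbitrary limiter, as in Fig. 8.8
    s1 = -be1 * y + g0 * ul - a1 * v + s2 # transposed DF-II state updates
    s2 = g1 * ul - a2 * v
    y += Ts * (b0 * ul + d)               # plant step (ZOH-exact for integrator)
```

After the transient, the output settles at the reference while the controller output compensates the disturbance (ulim → −d∕b0), without any explicit integrator state in the code: the integral action is hidden in the positive feedback of ulim.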

8.4 Discrete-Time Error-Based ADRC

Three discrete-time forms of ADRC have been derived in Sects. 8.1, 8.2, and 8.3 of this chapter. We will explicitly denote these as “output-based” ADRC variants in this section, as we are about to develop the same choices for the error-based form of linear ADRC that was introduced in Sect. 6.​4.
The strong connections between output- and error-based ADRCs, previously found in Sect. 6.4, also apply to their discrete-time forms. This allows deriving the discrete-time error-based ADRC variants in a very concise manner, as done in [3]. It will be seen that the exact same matrices, vectors, tuning gains, transfer functions, and coefficients of output-based ADRC can be reused. We will therefore refrain from giving a “cooking recipe” for each variant and instead conclude with one common recipe, as only the controller structure changes, but not the required parameters.

8.4.1 State-Space Form

Recall the state-space derivation of discrete-time output-based ADRC in Sect. 8.1. Since the control law of ADRC is a static feedback of estimated states, providing a discrete-time equation is trivial. What has to be discretized, however, is the dynamics of the extended state observer providing these estimated states. If we compare the continuous-time state-space block diagrams of output- and error-based ADRCs in Figs. 3.​5 and 6.​12, it becomes obvious that the observers are very similar. The only two differences for error-based ADRC are that an input e(t) instead of y(t) is being used and that the controller output u(t)—or its limited value ulim(t)—enters the observer with a negative sign.
Without repeating the discretization procedure from Sect. 8.1, we can therefore directly present the discrete-time counterparts to the continuous-time equations of error-based ADRC (controller (6.​27) and observer (6.​26)):
$$\displaystyle \begin{gathered} u(k) = \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(k) \quad \text{with}\quad \boldsymbol{k}^{\mathrm{T}} = \begin{pmatrix} k_1 & \cdots & k_N \end{pmatrix} , {} \end{gathered} $$
(8.42)
$$\displaystyle \begin{gathered} \hat{\boldsymbol{x}}(k) = \boldsymbol{A}_{\mathrm{ESO}} \cdot \hat{\boldsymbol{x}}(k-1) - \boldsymbol{b}_{\mathrm{ESO}} \cdot u_{\mathrm{lim}}(k-1) + \boldsymbol{l} \cdot e(k) \quad \text{with}\quad \boldsymbol{l} = \begin{pmatrix} l_1 & \cdots & l_{N+1} \end{pmatrix}^{\mathrm{T}} . {} \end{gathered} $$
(8.43)
Comparing the resulting discrete-time observer equations for output- and error-based ADRC, (8.5) and (8.43), the only two differences are once again: input e(k) instead of y(k) and negative sign for ulim(k − 1) in (8.43). This is also apparent by comparing the block diagram in Fig. 8.10 with the output-based version in Fig. 8.4.
All matrices and vectors required to implement controller and observer, as well as their tuning methods, are identical to the output-based case. They can be found in Tables A.​5.​1 and A.​5.​2 of Appendix A.​5.
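A per-sample implementation of (8.42) and (8.43) in a closed loop can be sketched as follows. The first-order (N = 1) ZOH/current-observer design values are example assumptions taken over from the output-based case, and a simple plant ẏ = b0u + d with a constant disturbance is assumed; since e = r − y is fed into the observer, the estimated extended state refers to the error dynamics.

```python
import numpy as np

Ts, b0 = 0.01, 2.0
w_cl, k_eso = 10.0, 5.0
z_eso = np.exp(-k_eso * w_cl * Ts)

# Assumed first-order (N = 1) ZOH/current-observer design; matrices and
# gains are identical to the output-based case, as stated above.
Ad = np.array([[1.0, Ts], [0.0, 1.0]])
bd = np.array([b0 * Ts, 0.0])
l = np.array([1.0 - z_eso**2, (1.0 - z_eso)**2 / Ts])
E = np.eye(2) - np.outer(l, [1.0, 0.0])
A_ESO, b_ESO = E @ Ad, E @ bd
k_row = np.array([w_cl, 1.0])            # (k^T  1) with k_1 = omega_CL

x_hat = np.zeros(2)                      # estimated error state and disturbance
u_lim_prev = 0.0                         # limited controller output u_lim(k-1)
u_min, u_max = -10.0, 10.0
y, r, d = 0.0, 1.0, -0.5                 # plant output, reference, disturbance

for _ in range(1000):
    e = r - y                            # control error fed to the observer
    # Observer update (8.43): note the negative sign on u_lim(k-1)
    x_hat = A_ESO @ x_hat - b_ESO * u_lim_prev + l * e
    u = (k_row @ x_hat) / b0             # control law (8.42)
    u_lim_prev = float(np.clip(u, u_min, u_max))
    y += Ts * (b0 * u_lim_prev + d)      # plant step (ZOH-exact for integrator)
```

Feeding the limited value back into (8.43) keeps the estimates consistent with the actually applied input, so the initial saturation phase causes no windup, and the output converges to the reference despite the disturbance.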

8.4.2 Transfer Function Form

Now that discrete-time error-based ADRC was introduced in state space, a transfer function form can be derived in the same manner as done in Sect. 8.2 for output-based ADRC.
The z-domain representation of the error-based control law (8.42) and observer (8.43) form the starting point. As seen in Sect. 8.2, the feedback path of the limited controller output will be removed during the transfer function derivation. In (8.43), we therefore have to set u(k − 1) = ulim(k − 1) before applying the z-transform:
$$\displaystyle \begin{gathered} u(z) = \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(z) {} , \end{gathered} $$
(8.44)
$$\displaystyle \begin{gathered} \hat{\boldsymbol{x}}(z) = z^{-1} \cdot \boldsymbol{A}_{\mathrm{ESO}} \cdot \hat{\boldsymbol{x}}(z) - z^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot u(z) + \boldsymbol{l} \cdot e(z) . {} \end{gathered} $$
(8.45)
Inserting (8.44) into (8.45) yields the closed-loop observer equation:
$$\displaystyle \begin{aligned} \hat{\boldsymbol{x}}(z) = \left( \boldsymbol{I} - z^{-1} \cdot \left( \boldsymbol{A}_{\mathrm{ESO}} - \frac{1}{b_0} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \right) \right)^{-1} \cdot \boldsymbol{l} \cdot e(z) , \end{aligned}$$
which is finally put back into (8.44):
$$\displaystyle \begin{aligned} u(z) = \underbrace{ \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{I} - z^{-1} \cdot \left( \boldsymbol{A}_{\mathrm{ESO}} - \frac{1}{b_0} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \right) \right)^{-1} \cdot \boldsymbol{l} }_{C_{\mathrm{FB}}(z)} \cdot \, e(z) . {} \end{aligned} $$
(8.46)
Equation (8.46) is a control law that makes use of exactly the same transfer function CFB(z) as derived in (8.29) for output-based ADRC, and we can therefore express it as
$$\displaystyle \begin{aligned} u(z) = C_{\mathrm{FB}}(z) \cdot e(z) \quad \text{with}\quad C_{\mathrm{FB}}(z) = \frac{ \displaystyle\sum_{i = 0}^N \beta_i z^{-i} }{ 1 + \displaystyle\sum_{i = 1}^N \alpha_i z^{-i} } \cdot \frac{1}{1 - z^{-1}} . {} \end{aligned} $$
(8.47)
The α and β coefficients in (8.47) can consequently be found in the reference material of the output-based variant in Appendix A.​6, specifically in Tables A.​6.​2 and A.​6.​3.
As shown in Fig. 8.11, this form of error-based ADRC requires only one transfer function, equivalent to its continuous-time counterpart introduced in Sect. 6.​4. One can therefore easily turn a discrete-time transfer function form of output-based ADRC into an error-based one by omitting the prefilter. The resulting “textbook” control loop shown in Fig. 8.11 is a structure often used as a starting point in practical digital control engineering. It should therefore also be understood as an invitation to customize this structure to a particular application’s needs. Most notably a user-defined reference signal prefilter could be added to shape the control loop dynamics in a true two-degrees-of-freedom sense, as already discussed in Sect. 6.​4.

8.4.3 Dual-Feedback Transfer Function Form

An equivalent of the dual-feedback transfer function form of discrete-time ADRC can also be derived for error-based ADRC. Beginning once more with the z-transform of the observer (8.43),
$$\displaystyle \begin{aligned} \hat{\boldsymbol{x}}(z) = z^{-1} \cdot \boldsymbol{A}_{\mathrm{ESO}} \cdot \hat{\boldsymbol{x}}(z) - z^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot u_{\mathrm{lim}}(z) + \boldsymbol{l} \cdot e(z) , \end{aligned}$$
we solve for \(\hat {\boldsymbol {x}}(z)\) to obtain the expression
$$\displaystyle \begin{aligned} \hat{\boldsymbol{x}}(z) = \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right)^{-1} \cdot \left( - z^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot u_{\mathrm{lim}}(z) + \boldsymbol{l} \cdot e(z) \right) , \end{aligned}$$
which can be put into the z-transform of the error-based controller (8.42),
$$\displaystyle \begin{aligned} u(z) = \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \hat{\boldsymbol{x}}(z) , \end{aligned}$$
yielding the following control law:
$$\displaystyle \begin{aligned} u(z) &= \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right)^{-1} \cdot \left( - z^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} \cdot u_{\mathrm{lim}}(z) + \boldsymbol{l} \cdot e(z) \right) \notag\\ &= \Bigg( \underbrace{ -\frac{z^{-1}}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right)^{-1} \cdot \boldsymbol{b}_{\mathrm{ESO}} }_{C_{\mathrm{FBu}}(z)} \Bigg) \cdot u_{\mathrm{lim}}(z) \notag\\ &\quad + \Bigg( \underbrace{ \frac{1}{b_0} \cdot \begin{pmatrix} \boldsymbol{k}^{\mathrm{T}} & 1 \end{pmatrix} \cdot \left( \boldsymbol{I} - z^{-1} \boldsymbol{A}_{\mathrm{ESO}} \right)^{-1} \cdot \boldsymbol{l} }_{C_{\mathrm{FBy}}(z)} \Bigg) \cdot e(z) . {} \end{aligned} $$
(8.48)
This means that we can abbreviate the control law (8.48) using the same two transfer functions CFBy(z) and CFBu(z) from (8.37) of the output-based counterpart:
$$\displaystyle \begin{gathered} u(z) = C_{\mathrm{FBy}}(z) \cdot e(z) + C_{\mathrm{FBu}}(z) \cdot u_{\mathrm{lim}}(z) {} \\ \text{with}\quad C_{\mathrm{FBy}}(z) = \frac{ \displaystyle\sum_{i = 0}^{N} \beta_i z^{-i} }{ 1 + \displaystyle\sum_{i = 1}^{N+1} \alpha_i z^{-i} } \quad \text{and}\quad C_{\mathrm{FBu}}(z) = z^{-1} \cdot \frac{ \displaystyle\sum_{i = 0}^{N} \gamma_i z^{-i } }{ 1 + \displaystyle\sum_{i = 1}^{N+1} \alpha_i z^{-i} } . \notag \end{gathered} $$
(8.49)
The visualization of (8.49) in Fig. 8.12 allows for an easy comparison with the output-based equivalent in Fig. 8.8. To clearly indicate that the exact same transfer functions of the output-based variant are being used, we keep using the notation CFBy(z) with subscript “y” although now the control error e is being fed into this transfer function.
An efficient implementation can be found using the same ideas from Sect. 8.3. The control law (8.49) can be written making use of the common denominator of CFBy(z) and CFBu(z):
$$\displaystyle \begin{aligned} u(z) = \frac{ \left( \displaystyle\sum_{i = 0}^{N} \beta_i z^{-i} \right) \cdot e(z) + z^{-1} \cdot \left( \displaystyle\sum_{i = 0}^{N} \gamma_i z^{-i } \right) \cdot u_{\mathrm{lim}}(z) }{ 1 + \displaystyle\sum_{i = 1}^{N+1} \alpha_i z^{-i} } . {} \end{aligned} $$
(8.50)
Using the (transposed) direct form II for (8.50) will then reduce the number of required storage variables (delays) during runtime to a minimum, as shown in Fig. 8.13 for the first- and second-order cases.
As all coefficients α, β, γ are unchanged, they can be obtained from the tables for dual-feedback output-based ADRC in Appendix A.​7. Let us now conclude our section on the three discrete-time variants of error-based ADRC with one common “cooking recipe.”
Cooking Recipe (Discrete-Time Error-Based ADRC)
To implement and tune discrete-time error-based ADRC in one of the (a) state-space, (b) transfer function, or (c) dual-feedback transfer function forms, the following steps have to be carried out:
1.
Plant modeling: Identify the order N and the gain parameter b0 of the plant model.
 
2.
Controller and observer tuning: For bandwidth parameterization, the tuning parameters are ωCL and kESO. Together with the sampling interval Ts and the plant gain b0, these values have to be put into the respective equations for the coefficients needed in the particular form; these equations are compiled in Appendix A.​8.
 
3.
Implementation:
  • State-space form: control law (8.42) with observer (8.43), as shown in Fig. 8.10
  • Transfer function form: control law (8.47), as shown in Fig. 8.11
  • Dual-feedback transfer function form: control law (8.49), as shown in Fig. 8.12, or with the efficient implementation (8.50) shown in Fig. 8.13
 
4.
Customization: In the state-space and dual-feedback transfer function forms, an arbitrary control signal limitation add-on block can be added to the control loop, as shown in Figs. 8.10 and 8.12, respectively, without requiring any further anti-windup measures. Error-based ADRC can also be conveniently combined with a custom prefilter to shape the reference tracking dynamics, as described in Sect. 6.​4.​2 and shown in Fig. 6.​16 for the continuous-time case.
 
As part of the Appendix, the equations for all three forms of discrete-time error-based ADRC are once more summarized in Appendix A.​8.

8.5 Summary and Outlook

In this chapter we have covered three forms of discrete-time ADRC, all of which can be developed for both output-based and error-based ADRCs—see Table 8.1. Each approach comes with a different set of features and properties:
1.
State space: A form obtained using zero-order hold discretization and the current observer approach. This is the discrete-time counterpart of continuous-time linear ADRC as introduced in Chap. 3.
Table 8.1
Discrete-time forms of output-based and error-based ADRC introduced in this chapter, and where to find them
Form
Output-based
Error-based
Properties
State space
Sect. 8.1
Sect. 8.4.1
Recognition factor: corresponds to the original structure of continuous-time ADRC. Control signal limitation: flexible, built-in windup protection. Runtime footprint: minimum number of storage variables, but maximum number of coefficients and operations.
Transfer function
Sect. 8.2
Sect. 8.4.2
Recognition factor: equivalence to classical digital control with feedback controller and prefilter. Control signal limitation: only via clamped integrator. Runtime footprint: maximum number of storage variables, reduced number of coefficients and algebraic operations.
Dual-feedback TF
Sect. 8.3
Sect. 8.4.3
Recognition factor: fully replicates the state-space form using transfer functions, but with an uncommon structure. Control signal limitation: flexible, built-in windup protection. Runtime footprint: minimum number of storage variables, coefficients, and algebraic operations.
 
2.
Transfer functions: Based on z-transform of the state-space equations, discrete-time transfer functions relating the reference signal and the measured plant output to the updated control signal are developed. In doing so, the inner feedback path of the controller output is removed. While being less flexible regarding anti-windup measures, simple forms of windup protection can be implemented nevertheless. An accumulator was factored out of the feedback controller transfer function with this particular goal in mind. This is the discrete-time counterpart of the continuous-time transfer function representation developed in Sect. 4.​2 and represents a structure often found in control engineering practice.
 
3.
Dual-feedback transfer functions: Maintaining the feedback path for the (usually limited) previous controller output, this variant, which is based on two transfer functions in the feedback path, replicates the exact behavior of the state-space form. At the same time, it is possible to achieve the lowest computational footprint of all three approaches.
 
For implementations with high sampling frequencies, for example, in power electronics applications, the runtime costs associated with a particular controller variant may not only become noticeable but even a bottleneck. In Table 8.2 we therefore compare the computational footprint of discrete-time output-based ADRC variants.
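Both transfer function variants are ultimately executed as difference equations, and the comparison assumes direct form II realizations. As a generic sketch (placeholder coefficients, not the ADRC tuning itself), one direct form II output sample can be computed as follows; note that numerator and denominator share a single delay line, which is what keeps the storage footprint low:

```python
def df2_step(x, b, a, w):
    """One sample of a direct form II realization of
    U(z)/X(z) = (b[0] + b[1] z^-1 + ... + b[N] z^-N) /
                (1    + a[1] z^-1 + ... + a[N] z^-N),
    with w the shared delay line of N storage variables."""
    v = x - sum(ai * wi for ai, wi in zip(a[1:], w))         # recursive part
    u = b[0] * v + sum(bi * wi for bi, wi in zip(b[1:], w))  # feed-forward part
    return u, [v] + w[:-1]                                   # shift delay line

# Example: a pure one-sample delay, U(z)/X(z) = z^-1
w = [0.0]
u1, w = df2_step(5.0, b=[0.0, 1.0], a=[1.0, 0.0], w=w)
u2, w = df2_step(0.0, b=[0.0, 1.0], a=[1.0, 0.0], w=w)
```

Per sample and per Nth-order transfer function, this structure costs roughly 2N + 1 multiplications and 2N additions, which is consistent in order of magnitude with the counts for the two-transfer-function variant below.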
Table 8.2 The three discrete-time choices developed in this chapter come with different footprints. Computational costs to be expected at runtime (multiplications, additions, and storage variables; excluding costs of an optional limiter) are given for output-based ADRC of orders 1, 2, and N. For a fair comparison, direct form II implementations were taken as a basis for both transfer function approaches. The dual-feedback variant combines favorable properties regarding both the number of operations and storage variables
  Order  |  State space                                           |  Transfer functions                           |  Dual-feedback TF
  1st    |  11 mul., 10 add., 2 var.                              |  7 mul., 6 add., 4 var.                       |  7 mul., 6 add., 2 var.
  2nd    |  19 mul., 18 add., 3 var.                              |  11 mul., 10 add., 6 var.                     |  10 mul., 9 add., 3 var.
  Nth    |  (N² + 5N + 5) mul., (N² + 5N + 4) add., (N + 1) var.  |  (4N + 3) mul., (4N + 2) add., (2N + 2) var.  |  (3N + 4) mul., (3N + 3) add., (N + 1) var.
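The Nth-order formulas in the last row of Table 8.2 are consistent with the tabulated first- and second-order entries, which can be verified in a few lines:

```python
# Operation counts from Table 8.2 for output-based ADRC of order N:
# (multiplications, additions, storage variables)
footprint = {
    "state space":        lambda N: (N*N + 5*N + 5, N*N + 5*N + 4, N + 1),
    "transfer functions": lambda N: (4*N + 3, 4*N + 2, 2*N + 2),
    "dual-feedback TF":   lambda N: (3*N + 4, 3*N + 3, N + 1),
}

# Cross-check against the 1st- and 2nd-order rows of the table
assert footprint["state space"](1) == (11, 10, 2)
assert footprint["state space"](2) == (19, 18, 3)
assert footprint["transfer functions"](1) == (7, 6, 4)
assert footprint["dual-feedback TF"](2) == (10, 9, 3)
```

The quadratic growth of the state-space form (from the full observer update) versus the linear growth of both transfer function forms is what makes the choice relevant at high sampling frequencies.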
Error-based ADRC variants should be considered if either the emphasis is on tracking performance, or a true two-degrees-of-freedom design is to be achieved using a custom prefilter in the reference signal path. In output-based ADRC, tracking and disturbance rejection dynamics are coupled; it nevertheless offers good out-of-the-box performance for most applications with fixed, piecewise constant, or slowly changing set points, as it is tuned for a critically damped control loop response.
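Such a custom prefilter can be as simple as a first-order discrete low-pass that shapes the reference before the control error is formed. A sketch, with the filter coefficient alpha being a placeholder tuning value rather than a recommendation from this chapter:

```python
def prefilter_step(r, rf_prev, alpha=0.9):
    """First-order reference prefilter F(z) = (1 - alpha) / (1 - alpha*z^-1).
    Its DC gain is 1, so steady-state tracking is unaffected; alpha only
    shapes the transient of the reference path, leaving the disturbance
    rejection dynamics of the error-based ADRC loop untouched."""
    return alpha * rf_prev + (1.0 - alpha) * r

# The filtered reference rf approaches a unit set point gradually
rf = 0.0
for _ in range(3):
    rf = prefilter_step(1.0, rf)
```

Choosing a unity-DC-gain filter is the design constraint here: any prefilter with F(1) = 1 reshapes only the transient tracking response.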
Next Steps
In this chapter, we have selected and derived several basic discrete-time ADRC variants. The next three chapters build on these results, covering various aspects of implementing ADRC in practical scenarios.
  • Chapter 9 describes various features that can further facilitate a successful deployment of ADRC, thus contributing to closing the theory-practice gap.
  • Chapter 10 supports the transition from equations to actual software, covering both model-based and handwritten source code implementations.
  • Chapter 11 documents the use of the above discrete-time ADRC variants in actual application examples, showing how these "cooking recipes" perform when applied to real control problems that call for hardware implementations.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
1. Franklin, G.F., Powell, D., Workman, M.L.: Digital Control of Dynamic Systems, 3rd edn. Addison-Wesley Longman Publishing, Boston, MA, USA (1997)
6. Åström, K.J., Hägglund, T.: Advanced PID Control. International Society of Automation, New York (2006)
8. Oppenheim, A., Schafer, R.: Discrete-Time Signal Processing, 3rd edn. Prentice Hall, New York (2010)
DOI: https://doi.org/10.1007/978-3-031-72687-3_8