
Open Access 2018 | OriginalPaper | Book Chapter

14. Summary

Author: Steven A. Frank

Published in: Control Theory Tutorial

Publisher: Springer International Publishing


Abstract

This chapter summarizes the three key topics of feedback, robust control, and design tradeoffs.
Many other control approaches and applications have been developed (Baillieul and Samad 2015). Those extensions build on the foundational principles emphasized in this tutorial. Three key principles recur.

14.1 Feedback

There are two, and only two, reasons for using feedback. The first is to reduce the effect of any unmeasured disturbances acting on the system. The second is to reduce the effect of any uncertainty about systems dynamics.
—[Vinnicombe 2001, p. xvii]
Feedback is unnecessary if one has a complete, accurate model of system dynamics. With an accurate model, one can map any input to the desired output. A direct feedforward open loop does the job.
However, unpredictable perturbations occur. Models of dynamics almost always incorrectly specify the true underlying process.
Correcting errors by feedback provides the single most powerful design method. Natural systems that control biological function often use feedback. Human-engineered systems typically correct errors through feedback.
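The contrast between open-loop control and error-correcting feedback can be sketched numerically. The following is a minimal illustration, not from the chapter: the plant, gain, and disturbance values are made up for the example. A constant unmeasured disturbance defeats the open-loop command, while integral feedback drives the error to zero without ever knowing the disturbance.

```python
# Minimal sketch (illustrative values, not from the chapter): a constant
# unmeasured disturbance d shifts the plant output. An open-loop command
# based on the nominal model cannot correct it; integral feedback can.

def plant(u, d):
    return u + d  # hypothetical plant: output = input + disturbance

r, d = 1.0, 0.3                        # setpoint and unknown disturbance
# Open loop: the model assumes d = 0, so the command is u = r.
open_loop_error = r - plant(r, d)      # stuck at -d forever

# Feedback: accumulate the measured error (integral action, gain k).
u, k = 0.0, 0.5
for _ in range(200):
    y = plant(u, d)
    u += k * (r - y)                   # each step shrinks the error
feedback_error = r - plant(u, d)

print(round(open_loop_error, 3))       # -0.3
print(round(abs(feedback_error), 6))   # 0.0
```

The feedback loop converges because each iteration multiplies the remaining error by (1 - k), so the error decays geometrically regardless of the value of d.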

14.2 Robust Control

[H]ow much do we need to know about a system in order to design a feedback compensator that leaves the closed loop behaviour insensitive to that which we don’t know?
—[Vinnicombe 2001, p. xvii]
Robustness means reduced sensitivity to disturbance or modeling error. Feedback improves robustness. However, feedback only describes a broad approach.
Many specific methods refine the deployment of feedback. For example, filters reduce the resonant peaks in system response. Controllers modulate dynamics to improve stability margin.
A large stability margin means that the system can maintain stability even if the true process dynamics depart significantly from the simple linear model used to describe the dynamics.
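The stability margin can be made concrete with the gain margin: the factor by which the loop gain can grow before the closed loop becomes unstable. The sketch below uses an open loop chosen purely for illustration, L(s) = 1/(s(s+1)(s+2)); it is not an example from the chapter. The phase crosses -180° at ω = √2, where |L| = 1/6, so the gain margin is 6.

```python
# Hedged sketch (illustrative transfer function, not from the chapter):
# locate the phase-crossover frequency of L(s) = 1/(s(s+1)(s+2)) by a
# grid search, then compute the gain margin 1/|L(jw)| there.
import cmath, math

def L(s):
    return 1.0 / (s * (s + 1) * (s + 2))

# Frequency (rad/s) where the phase of L is closest to -180 degrees.
w_pc = min((w / 1000 for w in range(1, 5000)),
           key=lambda w: abs(cmath.phase(L(1j * w)) + math.pi))
gain_margin = 1.0 / abs(L(1j * w_pc))

print(round(w_pc, 2))        # 1.41  (analytically sqrt(2))
print(round(gain_margin, 1)) # 6.0
```

A gain margin of 6 means the true process gain could be six times larger than modeled before the feedback loop loses stability, which is the sense in which a large margin tolerates modeling error.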

14.3 Design Tradeoffs and Optimization

A well-performing system moves rapidly toward the desired setpoint. However, rapid response can reduce stability. For example, a strong response to error can cause a system to overshoot its setpoint. If each overshoot increases the error, then the system diverges from the target.
The fast response of a high-performing system may destabilize the system or make it more sensitive to disturbances. A tradeoff occurs between performance and robustness.
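The performance–robustness tradeoff shows up even in the simplest discrete proportional controller, sketched below with made-up gains. Each step updates x toward the setpoint r by a fraction g of the error, so the error scales by (1 - g) per step: a small gain converges monotonically, a gain between 1 and 2 overshoots but still converges, and a gain above 2 makes each overshoot larger than the last, so the system diverges.

```python
# Illustrative sketch (assumed gains, not from the chapter): a discrete
# proportional controller x <- x + g * (r - x). The error is multiplied
# by (1 - g) each step, so |1 - g| > 1 means divergence.

def final_error(g, steps=50, r=1.0):
    x = 0.0
    for _ in range(steps):
        x += g * (r - x)       # respond to the current error with gain g
    return abs(r - x)

print(final_error(0.5) < 1e-9)   # True: stable, monotone convergence
print(final_error(1.5) < 1e-3)   # True: overshoots, still converges
print(final_error(2.5) > 1e6)    # True: each overshoot grows; diverges
```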
Many other tradeoffs occur. For example, control signals modulate system dynamics. The energy required to produce control signals may be expensive. The costs of control signals trade off against the benefits of modulating the system response.
The sensitivity of a system to perturbations varies with the frequency at which the signal disturbs the system. Often, reduced sensitivity to one set of frequencies raises sensitivity to another set of frequencies.
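This frequency tradeoff can be seen in the sensitivity function S = 1/(1 + L), which measures how strongly a disturbance at each frequency passes into the output. The loop below is chosen only for illustration, not taken from the chapter: with L(s) = 1/(s(s+1)), disturbances are strongly rejected at low frequency, but near ω = 1 the sensitivity exceeds one, so those disturbances are amplified.

```python
# Sketch of the frequency tradeoff (illustrative loop, not from the
# chapter): sensitivity S(jw) = 1/(1 + L(jw)) with L(s) = 1/(s(s+1)).
# |S| < 1 means disturbance rejection; |S| > 1 means amplification.

def S(w):
    s = 1j * w
    return abs(1 / (1 + 1 / (s * (s + 1))))

print(round(S(0.01), 3))   # ~0.01: low-frequency disturbances rejected
print(round(S(1.0), 3))    # ~1.414: disturbances near w = 1 amplified
```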
Optimization provides a rigorous design approach to tradeoffs. One may assign costs and benefits to various aspects of performance and robustness or to the response at different frequencies. One can then consider how changes in system design alter the total balance of the various costs and benefits. Ideally, one finds the optimal balance.
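A toy version of this optimization, with costs invented for the example: suppose slow response costs roughly 1/g and control effort costs c·g, for gain g. The total cost 1/g + c·g has a unique minimum at g* = √(1/c), and a simple search recovers it.

```python
# Hedged sketch (made-up cost terms, not from the chapter): balance a
# performance cost ~1/g against a control-effort cost ~c*g and pick the
# gain g minimizing the total. Analytically g* = sqrt(1/c).
c = 0.25
cost = lambda g: 1.0 / g + c * g
g_star = min((g / 1000 for g in range(1, 10000)), key=cost)
print(round(g_star, 2))   # 2.0, matching sqrt(1/0.25)
```

Real design problems replace these toy terms with costs over frequencies or over performance and robustness measures, but the logic of finding the balance point is the same.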

14.4 Future Directions

Control theory remains a very active subject (Baillieul and Samad 2015). Methods such as robust \(\mathcal{H}_{\infty}\) analysis and model predictive control are recent developments.
Computational neural networks have been discussed for several decades as a method for the control of systems (Antsaklis 1990). Computational networks are loosely modeled after biological neural networks. A set of nodes takes inputs from the environment. Each input node connects to another set of nodes. Each of those intermediate nodes combines its inputs to produce an output that connects to yet another set of nodes, and so on. The final nodes classify the environmental state, possibly taking action based on that classification (Nielsen 2015; Goodfellow et al. 2016).
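The layered structure described above can be sketched in a few lines. All weights and the decision threshold below are invented for illustration: each node forms a weighted sum of its inputs, squashes it through a sigmoid, and passes the result to the next layer, with the final node's output read as a classification.

```python
# Minimal sketch of the node-and-layer structure described above.
# All weights, biases, and the 0.5 threshold are made up for the example.
import math

def layer(inputs, weights, biases):
    # Each output node: sigmoid of a weighted sum of its inputs plus a bias.
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

x = [0.2, 0.9]                                  # inputs from the environment
hidden = layer(x, [[2.0, -1.0], [-1.5, 2.5]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])    # final classifying node
action = "act" if output[0] > 0.5 else "wait"   # act on the classification
print(action)
```

Training such a network means adjusting the weights so the final classification improves, which is what recent methods and computational power have made practical at scale.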
For many years, neural networks seemed like a promising approach for control design and for many other applications. However, that approach typically faced various practical challenges in implementation. Until recently, the practical problems meant that other methods often worked better in applications.
New methods and increased computational power have made neural networks the most promising approach for major advances in control system design. Spectacular examples include self-driving cars, real-time computer translation between languages, and the reshaping of modern financial markets. At a simpler level, we may soon see many of the control systems in basic daily devices driven by embedded neural networks instead of the traditional kinds of controllers.
The rise of neural networks also foreshadows a potential convergence between our understanding of human-designed engineering systems and naturally designed biological systems (Frank 2017).
In a human-designed system, an engineer may build a controller to improve the total benefits that arise from tradeoffs between cost, performance, and robustness. In biology, natural selection tends to build biochemical or physical systems that improve the tradeoffs between various dimensions of biological success. Those biological dimensions of success often can be expressed in terms of cost, performance, and robustness.
The similarities and differences between human-designed systems and naturally designed systems will provide many insights in the coming years. An understanding of the basic concepts of control design will be required to follow future progress and to contribute to that progress.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Metadata
Title
Summary
Author
Steven A. Frank
Copyright year
2018
DOI
https://doi.org/10.1007/978-3-319-91707-8_14
