
About this Book

This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC.
The second edition has been substantially rewritten, edited and updated to reflect the significant advances that have been made since the publication of its predecessor, including:

• a new chapter on economic NMPC relaxing the assumption that the running cost penalizes the distance to a pre-defined equilibrium;
• a new chapter on distributed NMPC discussing methods which facilitate the control of large-scale systems by splitting up the optimization into smaller subproblems;
• an extended discussion of stability and performance using approximate updates rather than full optimization;
• replacement of the pivotal sufficient condition for stability without stabilizing terminal conditions by a weaker alternative, together with a much simpler alternative proof; and
• further variations and extensions in response to suggestions from readers of the first edition.
Though primarily aimed at academic researchers and practitioners working in control and optimization, the text is self-contained, featuring background material on infinite-horizon optimal control and Lyapunov stability theory that also makes it accessible for graduate students in control engineering and applied mathematics.

Table of Contents

Frontmatter

Chapter 1. Introduction

Abstract
In this introduction, we present the basics of NMPC in an informal way. In particular, we introduce the central idea of iterative optimal control on a moving finite horizon. We provide a brief history of NMPC and MPC, explain the organization of the material in this book, and mention some topics which are not covered.
Lars Grüne, Jürgen Pannek
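
The moving-horizon idea described in this abstract can be illustrated by a minimal, self-contained loop: at each sampling instant, optimize over a finite prediction horizon, apply only the first control, and repeat. Everything below (the scalar system x+ = x + u, the quadratic stage cost, the brute-force search over a small control grid) is an illustrative toy, not the book's accompanying software.

```python
# Minimal receding-horizon (NMPC) loop for the toy system x+ = x + u
# with stage cost l(x, u) = x^2 + u^2; all names here are illustrative.
from itertools import product

def nmpc_step(x, horizon, controls):
    """Optimize over all open-loop sequences; return the first control."""
    best_cost, best_u0 = float("inf"), None
    for seq in product(controls, repeat=horizon):
        cost, xk = 0.0, x
        for u in seq:
            cost += xk**2 + u**2      # accumulate stage cost
            xk = xk + u               # predict with the model
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

x = 2.0
traj = [x]
for _ in range(10):                   # closed loop: apply u0, shift horizon
    u = nmpc_step(x, horizon=3, controls=[-1.0, -0.5, 0.0, 0.5, 1.0])
    x = x + u
    traj.append(x)
print(traj[-1])                       # the state is driven to the origin
```

Even this crude controller exhibits the closed-loop stabilization the book analyzes rigorously: the state is steered to the equilibrium although each optimization only looks three steps ahead.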

Chapter 2. Discrete Time and Sampled Data Systems

Abstract
We introduce the class of systems treated in the book—nonlinear discrete time control systems on metric spaces—and illustrate them by several examples. Moreover, we discuss sampled data systems as an important special case. Afterwards, we introduce the necessary background material from Lyapunov stability theory for discrete time and sampled data systems which will be needed for the stability analysis of NMPC schemes.
Lars Grüne, Jürgen Pannek
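
The type of Lyapunov condition this background material builds on can be sketched in its standard discrete time form (generic notation, not a quotation from the chapter): for a system x+ = g(x) with equilibrium at the origin, a Lyapunov function V satisfies, with comparison functions alpha_1, alpha_2 of class K-infinity and alpha_V of class K,

```latex
% Discrete time Lyapunov conditions (standard formulation, for orientation)
\alpha_1(\|x\|) \;\le\; V(x) \;\le\; \alpha_2(\|x\|)
\qquad \text{and} \qquad
V\bigl(g(x)\bigr) \;\le\; V(x) - \alpha_V(\|x\|).
```

The decrease along solutions in the second inequality is what the stability analysis of the later chapters establishes for the NMPC closed loop.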

Chapter 3. Nonlinear Model Predictive Control

Abstract
In this chapter, we introduce the nonlinear model predictive control algorithm in a rigorous way. We start by defining a basic NMPC algorithm for a constant reference and continue by formalizing state and control constraints. Viability (or weak forward invariance) of the set of state constraints is introduced and the consequences for the admissibility of the NMPC-feedback law are discussed. After having introduced NMPC in this special setting, we describe various extensions of the basic algorithm, considering time-varying reference solutions, terminal constraints and costs, and additional weights. Finally, we investigate the optimal control problem corresponding to this generalized setting and prove several properties, most notably the dynamic programming principle.
Lars Grüne, Jürgen Pannek
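
The dynamic programming principle mentioned here takes the following standard finite horizon form (a sketch in generic notation: f the dynamics, l the stage cost, U(x) the admissible controls, V_N the optimal value function):

```latex
% Finite horizon dynamic programming principle (standard form)
V_N(x) \;=\; \min_{u \in \mathbb{U}(x)} \Bigl\{ \ell(x,u) + V_{N-1}\bigl(f(x,u)\bigr) \Bigr\},
\qquad V_0 \equiv 0.
```

The recursion shows why only the first element of the optimal control sequence is needed in each NMPC step: the tail of the sequence is itself optimal for the shortened horizon.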

Chapter 4. Infinite Horizon Optimal Control

Abstract
In this chapter we give an introduction to nonlinear infinite horizon optimal control. The dynamic programming principle as well as several consequences of this principle are proved. One of the main results of this chapter is that the infinite horizon optimal feedback law asymptotically stabilizes the system and that the infinite horizon optimal value function is a Lyapunov function for the closed-loop system. Motivated by this property we formulate a relaxed version of the dynamic programming principle, which allows us to prove stability and suboptimality results for nonoptimal feedback laws without using the optimal value function. A practical version of this principle is also provided. These results will be central in the following chapters for the stability and performance analysis of NMPC algorithms. For the special case of sampled data systems we finally show that for suitable integral costs asymptotic stability of the continuous time sampled data closed-loop system follows from the asymptotic stability of the associated discrete time system.
Lars Grüne, Jürgen Pannek
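
The relaxed dynamic programming principle referred to here can be sketched in its standard form (generic notation: mu a feedback law, J_infty the infinite horizon closed-loop cost, alpha in (0,1] a suboptimality degree): if a function V-tilde satisfies the one-step inequality below for all states, then it bounds the closed-loop performance.

```latex
% Relaxed dynamic programming: one-step inequality and suboptimality bound
\widetilde V(x) \;\ge\; \alpha\,\ell\bigl(x,\mu(x)\bigr)
+ \widetilde V\bigl(f(x,\mu(x))\bigr)
\;\;\text{for all } x
\qquad \Longrightarrow \qquad
\alpha\, J_\infty(x,\mu) \;\le\; \widetilde V(x).
```

Thus the closed-loop cost under mu is at most V-tilde(x)/alpha, and if V-tilde additionally admits Lyapunov-type bounds, asymptotic stability of the closed loop follows as well.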

Chapter 5. Stability and Suboptimality Using Stabilizing Terminal Conditions

Abstract
In this chapter, we present a comprehensive stability and suboptimality analysis for NMPC schemes with stabilizing terminal conditions. Both endpoint constraints and regional constraints combined with a Lyapunov function terminal cost are covered. We show that viability of the state constraint set can be replaced by viability of the terminal constraint set in order to ensure admissibility of the resulting NMPC-feedback law. The “reversing of monotonicity” of the finite time optimal value functions is proved and used in order to apply the relaxed dynamic programming framework introduced in the previous chapter. Using this framework, stability, suboptimality (i.e., estimates about the infinite horizon performance of the NMPC closed-loop system), and inverse optimality results are proved.
Lars Grüne, Jürgen Pannek
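
The “reversing of monotonicity” can be sketched in standard notation (an illustrative sketch, not the chapter's precise statement): if the terminal cost F is compatible with the terminal constraint set X_0 in the usual Lyapunov-type sense, then the finite horizon optimal value functions become nonincreasing in the horizon length.

```latex
% Compatibility of terminal cost F and terminal set X_0, and its consequence
\forall\, x \in X_0 \;\; \exists\, u :\;\; f(x,u) \in X_0
\;\text{ and }\; F\bigl(f(x,u)\bigr) + \ell(x,u) \;\le\; F(x)
\qquad \Longrightarrow \qquad
V_{N+1}(x) \;\le\; V_N(x).
```

Without terminal conditions the value functions are nondecreasing in N; this reversal is the structural property that makes the relaxed dynamic programming argument applicable.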

Chapter 6. Stability and Suboptimality Without Stabilizing Terminal Conditions

Abstract
In this chapter, we present a comprehensive stability and suboptimality analysis for NMPC schemes without stabilizing terminal conditions. After defining the setting and presenting motivating examples, we introduce a boundedness condition on the optimal value function and an asymptotic controllability assumption. Moreover, we give a detailed derivation of stability and performance estimates based on these assumptions and the relaxed dynamic programming framework introduced before. We show that our stability criterion is tight for the class of systems satisfying the controllability assumption and give conditions under which the level of suboptimality and a bound on the optimization horizon length needed for stability can be explicitly computed from the parameters in the controllability condition. As a spinoff we recover the well-known result that—under suitable conditions—stability of the NMPC closed loop can be expected if the optimization horizon is sufficiently large. We further deduce qualitative properties of the stage cost which lead to stability with small optimization horizons and illustrate by means of two examples how these criteria can be used even if the parameters in the controllability assumption cannot be evaluated precisely. Finally, we give weaker conditions under which semiglobal and semiglobal practical stability of the NMPC closed loop can be ensured.
Lars Grüne, Jürgen Pannek

Chapter 7. Feasibility and Robustness

Abstract
In this chapter we consider two different but related issues. In the first part we discuss the feasibility problem, i.e., ensuring that the nominal NMPC closed-loop solutions remain inside a set on which the finite horizon optimal control problems defining the NMPC feedback law are feasible. We formally define the property of recursive feasibility and explain why the assumptions of the previous chapters, i.e., viability of the state constraint set or of the terminal constraint set, ensure this property. Then we present two ways to relax the viability assumption on the state constraint set in the case that no terminal constraints are used. After a comparative discussion of NMPC schemes with and without stabilizing terminal conditions, we turn to the second part of the chapter, in which robustness of the closed loop under additive perturbations and measurement errors is investigated. Here, robustness concerns feasibility and admissibility as well as stability of the closed loop. We provide different assumptions and resulting NMPC schemes for which we can rigorously prove such robustness results, and we also discuss examples which show that in general robustness may fail to hold.
Lars Grüne, Jürgen Pannek

Chapter 8. Economic NMPC

Abstract
Economic nonlinear model predictive control is the common name for NMPC schemes in which the stage cost does not penalize the distance to a predefined equilibrium, which was one of the key assumptions in Chaps. 5 and 6. Instead, the cost can, in principle, model all kinds of quantities, like energy consumption, yield of a substance, income of a firm, etc., which one would like to minimize or maximize. In such a general setting, it is by no means clear that the moving horizon MPC paradigm yields well-performing closed-loop solutions. It was first observed by Amrit, Angeli, and Rawlings in 2012 (extending earlier work by Diehl, Amrit, and Rawlings) that strict dissipativity is a sufficient systems theoretic property for ensuring proper performance of economic MPC. In this chapter, we will rigorously establish stability as well as averaged and non-averaged performance estimates for strictly dissipative economic MPC problems, both with and without terminal conditions.
Lars Grüne, Jürgen Pannek
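
Strict dissipativity, the key property named in this abstract, can be stated in its standard form (a sketch: lambda a storage function, (x^e, u^e) the optimal equilibrium, alpha of class K-infinity):

```latex
% Strict dissipativity w.r.t. the supply rate l(x,u) - l(x^e,u^e)
\lambda\bigl(f(x,u)\bigr) \;\le\; \lambda(x) + \ell(x,u) - \ell(x^e,u^e)
- \alpha\bigl(\|x - x^e\|\bigr).
```

Equivalently, the rotated stage cost l(x,u) - l(x^e,u^e) + lambda(x) - lambda(f(x,u)) is bounded below by alpha(||x - x^e||), which restores the positive definiteness that the stability arguments of Chaps. 5 and 6 rely on.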

Chapter 9. Distributed NMPC

Abstract
For large-scale systems such as street traffic, cyber-physical production systems or energy grids on an operational level, the MPC approach introduced in Chap. 3 is typically inapplicable in real time. Moreover, communication restrictions or privacy considerations may render the centralized solution of the optimal control problem in each step of the NMPC scheme impossible. To cope with these issues, the optimal control problem is split into subproblems, which are simpler to solve but may be linked by dynamics, cost functions or constraints. As the examples indicate, each subproblem may be seen as an independent unit. If these units are not coordinated, i.e., if there is no data exchange and inputs from connected units are treated as disturbances, the problem is referred to as decentralized. Including communication, the problem is called distributed and can again be split into subclasses of cooperative and noncooperative control. Within this chapter, we impose the assumption of flawless communication to analyze both stability and performance of the overall system for the distributed case. Additionally, we briefly sketch how to analyze the robustness of the distributed setting. Finally, we discuss basic coordination methods on the tactical control layer to solve the distributed problem and relate these methods to our stability results.
Lars Grüne, Jürgen Pannek

Chapter 10. Variants and Extensions

Abstract
The results developed so far in this book can be extended in many ways. In this chapter, we present a selection of possible variants and extensions. Some of these introduce new combinations of techniques developed in the previous chapters, others relax some of the previous assumptions in order to obtain more general results or strengthen assumptions in order to derive stronger results. In order to make the presentation concise, we limit ourselves to stabilizing NMPC as presented in Chaps. 5 and 6. Several sections contain algorithmic ideas which can be added on top of the basic NMPC schemes from the previous chapters. Parts of this chapter contain results which are somewhat preliminary and are thus subject to further research. Some sections have a survey-like style and, in contrast to the other chapters of this book, proofs are occasionally only sketched with appropriate references to the literature.
Lars Grüne, Jürgen Pannek

Chapter 11. Numerical Discretization

Abstract
This chapter is devoted in particular to sampled data systems, which need to be discretized so that the optimal control problem within the NMPC algorithm can be solved numerically. We present suitable methods, discuss the convergence theory for one-step methods and give an introduction to step size control algorithms. Furthermore, we explain how these methods can be integrated into NMPC algorithms, investigate how the numerical errors affect the stability of the NMPC controller derived from the numerical model, and show which kind of robustness is needed in order to ensure a practical kind of stability.
Lars Grüne, Jürgen Pannek
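
A typical one-step method of the kind whose convergence theory the chapter discusses is the classical Runge-Kutta scheme of order four; the sketch below applies it to the scalar test equation x' = -x, whose exact solution is known, so the discretization error can be measured directly. It is an illustrative toy, not the book's accompanying software.

```python
# Classical fourth-order Runge-Kutta (RK4) one-step method, applied to
# the scalar test equation x' = -x with exact solution exp(-t).
import math

def rk4_step(f, x, h):
    """One RK4 step of size h for the autonomous ODE x' = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda x: -x
x, h, n = 1.0, 0.1, 10            # integrate from t = 0 to t = 1
for _ in range(n):
    x = rk4_step(f, x, h)
err = abs(x - math.exp(-1.0))     # global error, O(h^4) for RK4
print(err)
```

Halving the step size h should reduce this error by roughly a factor of 16, which is the order-four convergence behavior the chapter's theory predicts.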

Chapter 12. Numerical Optimal Control of Nonlinear Systems

Abstract
In this chapter, we focus on numerically solving the constrained finite horizon nonlinear optimal control problems occurring in each iteration of the NMPC procedure. To this end, we first state standard discretization techniques to obtain a nonlinear optimization problem in standard form. Utilizing this form, we outline basic versions of the two most common solution methods for such problems, that is, Sequential Quadratic Programming (SQP) and Interior Point Methods (IPM). Furthermore, we investigate interactions between the differential equation solver, the discretization technique, and the optimization method and present several NMPC specific details concerning the warm start of the optimization routine. Finally, we discuss NMPC variants relying on inexact solutions of the finite horizon optimal control problem.
Lars Grüne, Jürgen Pannek
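
The core of an SQP iteration as outlined in this chapter is the solution of a KKT system built from the objective's gradient and the linearized constraints. The toy problem below (minimize 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1, with hand-built KKT matrix and a tiny linear solver) is an illustrative sketch only, not the solvers discussed in the chapter; for a quadratic program a single SQP step already yields the exact solution.

```python
# One SQP step for an equality-constrained problem: solve the KKT system
#   H d + A^T lam = -g,   A d = -c
# for the toy QP  min 0.5*(x1^2 + x2^2)  s.t.  x1 + x2 = 1.

def solve_linear(M, rhs):
    """Tiny dense Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= factor * A[col][c]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * sol[c] for c in range(r + 1, n))
        sol[r] = (A[r][n] - s) / A[r][r]
    return sol

x = [0.0, 0.0]                       # current iterate
g = [x[0], x[1]]                     # gradient of 0.5*(x1^2 + x2^2)
c = x[0] + x[1] - 1.0                # constraint residual
kkt = [[1.0, 0.0, 1.0],              # [[H, A^T],
       [0.0, 1.0, 1.0],              #  [A, 0 ]]  with H = I, A = [1, 1]
       [1.0, 1.0, 0.0]]
d = solve_linear(kkt, [-g[0], -g[1], -c])
x = [x[0] + d[0], x[1] + d[1]]       # full SQP step (no line search)
print(x)                             # exact minimizer [0.5, 0.5] of the QP
```

For genuinely nonlinear problems this step is repeated, with the Hessian H updated or approximated in each iteration; warm-starting such iterations from the previous NMPC step is one of the NMPC-specific details the chapter covers.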

Backmatter
