2019 | Book

Handbook of Model Predictive Control

About this book

Recent developments in model-predictive control promise remarkable opportunities for designing multi-input, multi-output control systems and improving the control of single-input, single-output systems. This volume provides a definitive survey of the latest model-predictive control methods available to engineers and scientists today.

The initial set of chapters present various methods for managing uncertainty in systems, including stochastic model-predictive control. With the advent of affordable and fast computation, control engineers now need to think about using “computationally intensive controls,” so the second part of this book addresses the solution of optimization problems in “real” time for model-predictive control. The theory and applications of control theory often influence each other, so the last section of Handbook of Model Predictive Control rounds out the book with representative applications to automobiles, healthcare, robotics, and finance.

The chapters in this volume will be useful to working engineers, scientists, and mathematicians, as well as students and faculty interested in the progression of control theory. Future developments in MPC will no doubt build from concepts demonstrated in this book and anyone with an interest in MPC will find fruitful information and suggestions for additional reading.

Table of Contents

Frontmatter

Theory

Frontmatter
The Essentials of Model Predictive Control
Abstract
This article begins by using an analogy to chess to provide intuition about many aspects of MPC. This is followed by a brief review of the historical context within which MPC developed; the review also provides the mathematical background needed in the proofs of the existence, feasibility, and stability of the simplest and most basic version of MPC. Next, the basic aspects of MPC are described. Finally, a detailed example is presented and used to illustrate some possible uses, limitations, and extensions of the most elementary version of MPC.
William S. Levine
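As a companion to this opening chapter, here is a minimal sketch of the receding-horizon loop the abstract alludes to: solve a finite-horizon problem from the current state, apply only the first input, then re-plan from the new state. The toy model, cost, and brute-force "solver" below are placeholders chosen purely for illustration, not material from the chapter.

```python
# Minimal receding-horizon (MPC) loop: at each step, solve a finite-horizon
# problem from the current state, apply only the first input, then re-plan.
# The model, cost, and brute-force "solver" are illustrative placeholders.
import numpy as np

def solve_finite_horizon(x0, horizon, dynamics, stage_cost, candidate_inputs):
    """Pick the best constant input over the horizon (crude stand-in for an optimizer)."""
    best_u, best_cost = None, np.inf
    for u in candidate_inputs:
        x, cost = x0, 0.0
        for _ in range(horizon):
            cost += stage_cost(x, u)
            x = dynamics(x, u)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

dynamics = lambda x, u: 0.9 * x + u            # toy scalar plant x+ = 0.9 x + u
stage_cost = lambda x, u: x**2 + 0.1 * u**2    # quadratic stage cost

x = 5.0
for t in range(20):
    u = solve_finite_horizon(x, horizon=10, dynamics=dynamics,
                             stage_cost=stage_cost,
                             candidate_inputs=np.linspace(-1.0, 1.0, 21))
    x = dynamics(x, u)                         # apply first input, measure, repeat
print(x)
```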
Dynamic Programming, Optimal Control and Model Predictive Control
Abstract
In this chapter, we give a survey of recent results on approximate optimality and stability of closed loop trajectories generated by model predictive control (MPC). Both stabilizing and economic MPC are considered and both schemes with and without terminal conditions are analyzed. A particular focus of the chapter is to highlight the role dynamic programming plays in this analysis. As we will see, dynamic programming arguments are ubiquitous in the analysis of MPC schemes.
Lars Grüne
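For orientation, the dynamic programming relation that underpins the analysis surveyed in this chapter is the Bellman recursion for the finite-horizon value function; the notation below is generic rather than the chapter's own.

```latex
% Bellman recursion for the N-step value function used in MPC analysis (generic notation)
\[
V_N(x) = \min_{u \in \mathbb{U}(x)} \big\{ \ell(x,u) + V_{N-1}\big(f(x,u)\big) \big\},
\qquad V_0 \equiv V_f ,
\]
% and the MPC feedback applies a minimizer of the N-step problem at the current state:
\[
\kappa_N(x) \in \arg\min_{u \in \mathbb{U}(x)} \big\{ \ell(x,u) + V_{N-1}\big(f(x,u)\big) \big\}.
\]
```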
Set-Valued and Lyapunov Methods for MPC
Abstract
Model predictive control (MPC), sometimes referred to as receding horizon control, is an optimization-based approach to stabilization of discrete-time control systems. It is well known that infinite-horizon optimal control, with the Linear-Quadratic Regulator [1] as the fundamental example, can provide optimal controls that result in asymptotically stabilizing feedback [8].
Rafal Goebel, Saša V. Raković
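Since the abstract singles out the Linear-Quadratic Regulator as the fundamental infinite-horizon example, the standard discrete-time statement is recalled below in generic notation (not taken from the chapter itself).

```latex
% Infinite-horizon LQR for x_{k+1} = A x_k + B u_k with quadratic stage cost
\[
\min_{u(\cdot)} \sum_{k=0}^{\infty} \big( x_k^\top Q x_k + u_k^\top R u_k \big),
\]
% solved by the linear feedback u_k = -K x_k, where P solves the discrete algebraic
% Riccati equation and K is the associated gain:
\[
P = A^\top P A - A^\top P B (R + B^\top P B)^{-1} B^\top P A + Q,
\qquad
K = (R + B^\top P B)^{-1} B^\top P A .
\]
```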
Stochastic Model Predictive Control
Abstract
Stochastic Model Predictive Control (SMPC) accounts for model uncertainties and disturbances based on their probabilistic description. This chapter considers several formulations and solutions of SMPC problems and discusses some examples and applications in this diverse, complex, and growing field.
Ali Mesbah, Ilya V. Kolmanovsky, Stefano Di Cairano
Moving Horizon Estimation
Abstract
Nearly every model predictive control (MPC) algorithm is premised on knowledge of the system’s state. As a result, state estimation is vital to good MPC performance. Moving horizon estimation (MHE) is an optimization-based state estimation algorithm. Similar to MPC, it relies on the minimization of a sum of stage costs subject to a dynamic model. Unlike MPC, however, conditions under which MHE is robustly stable have been slow to emerge. Recently, several results have appeared on the robust stability of MHE. We generalize the result on the robust stability of MHE without a max-term presented in Müller (Automatica 79:306–314, 2017), using assumptions inspired by the result in Hu (Robust stability of optimization-based state estimation under bounded disturbances. ArXiv e-prints, 2017). Furthermore, we show that all systems covered by the assumptions used in those previous works satisfy a certain form of exponential incremental input/output-to-state stability.
Douglas A. Allan, James B. Rawlings
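A schematic statement of the moving horizon estimation problem described in this abstract, in generic notation (the chapter's precise assumptions, prior weighting, and constraints may differ): over a sliding window, the estimator minimizes a prior term plus a sum of stage costs on the process and measurement disturbances, subject to the model.

```latex
% MHE over a window of length N ending at time T (schematic, generic notation)
\[
\min_{\hat{x}_{T-N},\,\{\hat{w}_k\}} \;
\Gamma_{T-N}\!\big(\hat{x}_{T-N}\big)
\;+\; \sum_{k=T-N}^{T-1} \ell\big(\hat{w}_k, \hat{v}_k\big)
\]
\[
\text{s.t.}\quad
\hat{x}_{k+1} = f(\hat{x}_k, u_k) + \hat{w}_k,
\qquad
\hat{v}_k = y_k - h(\hat{x}_k),
\]
% where \Gamma_{T-N} is a prior-weighting (arrival cost) term summarizing data before the window.
```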
Probing and Duality in Stochastic Model Predictive Control
Abstract
In a general nonlinear setting, stochastic optimal control involves the propagation of the conditional probability density of the state given the input signal and output measurements. This density is known as the information state in control circles and as the belief state in artificial intelligence and robotics squares. The choice of control signal affects the information state so that state observability becomes control-dependent. Thus, the feedback control law needs to include aspects of probing in addition to, or more accurately in competition with, its function in regulation. This is called duality of the control. In the linear case, this connection is not problematic since the control signal simply translates or recenters the conditional density without other effect. But for nonlinear systems, this complication renders all but the simplest optimal control problems computationally intractable.
Martin A. Sehr, Robert R. Bitmead
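The "propagation of the conditional probability density" referred to above is, in generic notation, the Bayesian filtering recursion for the information (belief) state; the chapter's treatment is more specific, so the display below is only for orientation.

```latex
% Prediction and measurement-update of the information (belief) state
\[
p(x_{k+1} \mid y_{0:k}, u_{0:k})
= \int p(x_{k+1} \mid x_k, u_k)\, p(x_k \mid y_{0:k}, u_{0:k-1})\, \mathrm{d}x_k ,
\]
\[
p(x_{k+1} \mid y_{0:k+1}, u_{0:k})
\;\propto\; p(y_{k+1} \mid x_{k+1})\, p(x_{k+1} \mid y_{0:k}, u_{0:k}) .
\]
```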
Economic Model Predictive Control: Some Design Tools and Analysis Techniques
Abstract
In recent years, Economic Model Predictive Control has emerged as a variant of traditional (tracking) MPC that aims at maximizing economic profitability by directly minimizing, over a receding prediction horizon, the costs incurred in the system’s operation. Several design alternatives have been proposed in the literature, as well as suitable tools for the analysis of stability and performance of the different design methods. This note provides the authors’ perspective on some of the most relevant results that have appeared within this framework as well as, for interested readers, references to the technical papers where such results were first discussed.
David Angeli, Matthias A. Müller
Nonlinear Predictive Control for Trajectory Tracking and Path Following: An Introduction and Perspective
Abstract
Control tasks in various applications are posed as setpoint-stabilization problems, where a constant reference has to be stabilized. For systems where changing references are given, formulations for tracking of a time-dependent trajectory or for following a geometric path are more suitable. In other cases, no clear reference is given, but only optimal economic behavior is desired. We outline how these control goals can be captured and embedded in a model predictive control design and provide theoretic formulations. We compare trajectory tracking and path following in a realistic robot simulation example to highlight that it is important for an engineer to choose the appropriate formulation of the control task at hand.
Janine Matschek, Tobias Bäthge, Timm Faulwasser, Rolf Findeisen
Hybrid Model Predictive Control
Abstract
This article collects several model predictive control (MPC) strategies from the literature that have a hybrid flavor and which, due to the diverse use of the term hybrid, span a wide range of control settings. These include discrete-time systems with discontinuous right-hand sides and with states that combine continuous-valued and discrete-valued variables; systems controlled by MPC strategies that use memory variables and logic states; continuous-time systems controlled by MPC strategies that update the feedback law periodically; and systems controlled by MPC strategies that employ more than one feedback controller. This article provides a unified presentation of these strategies with the purpose of serving as a self-contained summary of the state of the art in hybrid MPC, as a handbook with precise pointers to the literature for the interested control practitioner, and as a motivator for future research directions on the subject.
Ricardo G. Sanfelice
Model Predictive Control of Polynomial Systems
Abstract
This chapter describes the design of nonlinear model predictive control (MPC) for polynomial systems. Polynomial systems arise in many applications, including power generation, automotive systems, aircraft, magnetic levitation, chemical reactors, and biological networks. Furthermore, general nonlinear dynamical systems can usually be rewritten exactly as polynomial systems or approximated as polynomial systems using Taylor series. MPC for discrete-time polynomial systems is formulated as a polynomial program. Hierarchical semidefinite programming relaxation methods are discussed for solving these polynomial programs to global optimality. Then, methods for fast polynomial MPC are described, including convexification formulations for input-affine systems and explicit algorithms using algebraic geometry methods. Methods are then described for converting general nonlinear dynamical systems into polynomial systems using Taylor’s theorem, and an illustrative simulation example is presented for the practical implementation of Taylor’s theorem for bounding control trajectories. Finally, future directions for research are proposed, including real-time, output-feedback, and robust/stochastic polynomial MPC.
Eranda Harinath, Lucas C. Foguth, Joel A. Paulson, Richard D. Braatz
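As a toy illustration of the Taylor-series route to a polynomial model mentioned in the abstract (not the chapter's own example), the following sketch expands a scalar nonlinear term into a cubic polynomial with sympy; the function chosen is hypothetical.

```python
# Approximate a nonlinear scalar dynamics term by a Taylor polynomial (sympy),
# as a toy stand-in for the polynomial-model construction described above.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) - 0.5 * x**3 * sp.exp(-x)   # hypothetical nonlinear term

# Third-order Taylor expansion about x = 0; removeO() discards the remainder term.
f_poly = sp.expand(sp.series(f, x, 0, 4).removeO())
print(f_poly)                              # polynomial approximation valid near x = 0
```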
Distributed MPC for Large-Scale Systems
Abstract
This chapter presents the main approaches to the design of Distributed Model Predictive Control (DMPC) algorithms. For simplicity, focus is placed on the control of linear, time-invariant, discrete-time systems. Emphasis is initially given to system and problem partitioning, with a discussion of how the adopted decomposition strongly influences the properties of the control scheme in terms of minimality of the representation, descriptive capabilities of the model, and information transmission requirements. Then, after a short discussion of decentralized MPC, a taxonomy of DMPC methods is proposed and some prototype DMPC algorithms are described, with the aim of highlighting the main characteristics of the classes of methods available today. In the final part of the chapter, the most promising directions of research in the field are briefly summarized, and the most interesting fields of application of DMPC are listed.
Marcello Farina, Riccardo Scattolini
Scalable MPC Design
Abstract
This chapter is devoted to decentralized and distributed MPC architectures for cyberphysical systems composed of subsystems that can be added or removed over time. We focus on MPC design approaches where the synthesis of a local controller requires, at most, pieces of information from parent subsystems, while preserving collective properties such as stability and satisfaction of constraints. In these methods the complexity of MPC design for a subsystem scales with the number of its parents only, rather than the overall system size. In particular, we review plug-and-play synthesis algorithms where the addition and removal of subsystems can be automatically denied if unsafe for the whole system. We provide a tutorial description of the main theoretical concepts behind scalable and plug-and-play MPC, as well as a review of the main approaches available in the literature. Design methods are also illustrated through applications to power network systems and fleets of electric vehicles.
Marcello Farina, Giancarlo Ferrari-Trecate, Colin Jones, Stefano Riverso, Melanie Zeilinger

Computations

Frontmatter
Efficient Convex Optimization for Linear MPC
Abstract
MPC formulations with linear dynamics and quadratic objectives can be solved efficiently by using a primal-dual interior-point framework, with complexity proportional to the length of the horizon. An alternative, which is better able to exploit the similarity of the problems solved at successive decision points of linear MPC, is an active-set approach, in which the MPC problem is viewed as a convex quadratic program parametrized by the initial state \(x_{0}\). Another alternative is to explicitly identify polyhedral regions of the \(x_{0}\)-space within which the set of active constraints remains constant, and to pre-calculate solution operators on each of these regions. All these approaches are discussed here.
Stephen J. Wright
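A compact sketch of the parametric QP this abstract refers to, written with cvxpy and placeholder system data; how it is then solved (interior-point, active-set, or precomputed explicit regions) is the subject of the chapter and is not shown here.

```python
# Linear MPC as a convex QP parametrized by the initial state x0 (cvxpy sketch).
# System matrices, weights, horizon, and bounds are placeholders, not chapter data.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)
N = 20                                      # prediction horizon

x0 = cp.Parameter(2)                        # the QP is parametrized by x0
x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0]
prob = cp.Problem(cp.Minimize(cost), constraints)

x0.value = np.array([1.0, 0.0])
prob.solve()                                # re-solved cheaply as x0 changes
print(u[:, 0].value)                        # first input of the optimal sequence
```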
Implicit Non-convex Model Predictive Control
Abstract
Model Predictive Control (MPC) techniques often need to be deployed on a nonlinear dynamic model of the system to be controlled. This type of application of MPC is usually referred to as Nonlinear MPC (NMPC). Explicit approaches for NMPC are difficult to deploy, and one typically resorts to computing the solutions to the NMPC scheme on-line, i.e. implicitly. The difficulty then becomes one of performing the fairly heavy computations required to obtain the NMPC solutions within the allotted time budget. In this chapter, we present a summarized overview of the most commonly used techniques to approach this problem. We focus on the main aspects of these approaches that are arguably key to deploying real-time NMPC, namely the problem discretization, path-following methods, and the structure of the underlying linear algebra. Our aim here is to offer the reader an accessible overview of these crucial aspects.
Sebastien Gros
Convexification and Real-Time Optimization for MPC with Aerospace Applications
Abstract
This chapter gives an overview of recent developments in convexification and real-time convex optimization based control methods in the context of Model Predictive Control (MPC). Lossless Convexification is a technique that reformulates a class of non-convex control constraints as equivalent convex ones, while Successive Convexification gives an algorithm that targets nonlinear dynamics and certain non-convex state constraints. A large class of real-world optimal control problems can be solved with either method or a combination of both. For some time-critical applications, such as autonomous vehicles, real-time capability is crucial. The real-time solution of these problems requires highly efficient customized convex programming solvers, which are also discussed as part of this chapter. The effectiveness of convexification methods and real-time computation is demonstrated on a planetary soft landing problem throughout the chapter.
Yuanqi Mao, Daniel Dueri, Michael Szmuk, Behçet Açıkmeşe
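To make concrete how lossless convexification reformulates a non-convex control constraint as an equivalent convex one, the display below shows the schematic form of the classic slack-variable relaxation; the notation is generic, and the conditions under which the relaxation is exact are those given in the literature the chapter builds on.

```latex
% Non-convex annular input constraint and its convex relaxation via a slack \Gamma(t)
\[
\text{original:}\;\; \rho_1 \le \|u(t)\| \le \rho_2
\qquad\Longrightarrow\qquad
\text{relaxed:}\;\; \|u(t)\| \le \Gamma(t), \quad \rho_1 \le \Gamma(t) \le \rho_2 ,
\]
% with \Gamma replacing \|u\| where it enters the cost and dynamics; under suitable
% conditions the relaxed optimum satisfies \|u^*(t)\| = \Gamma^*(t), so nothing is lost.
```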
Explicit (Offline) Optimization for MPC
Abstract
In this chapter, we present the fundamentals of multi-parametric programming and its application to explicit model predictive control (MPC), i.e. the offline solution of MPC problems for both continuous and hybrid systems. In particular, we first show how MPC problems can be reformulated as multi-parametric programming problems, then we show how explicit/multi-parametric solutions are derived and describe the key underlying theoretical properties. Finally, we present solution procedures for these types of problems and discuss applicability issues and potential future research directions.
Nikolaos A. Diangelakis, Richard Oberdieck, Efstratios N. Pistikopoulos
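Once the multi-parametric solution has been computed offline, the online computation reduces to point location followed by an affine evaluation; a minimal sketch with hypothetical region data is given below (in practice the regions come from the offline multi-parametric solver).

```python
# Online evaluation of an explicit (multi-parametric) MPC law: find the polyhedral
# critical region containing x, then apply the affine law u = F_i x + g_i stored for it.
# The region data here is hypothetical, for illustration only.
import numpy as np

regions = [
    {"H": np.array([[1.0], [-1.0]]), "h": np.array([1.0, 0.0]),   # 0 <= x <= 1
     "F": np.array([[-0.5]]), "g": np.array([0.0])},
    {"H": np.array([[1.0], [-1.0]]), "h": np.array([0.0, 1.0]),   # -1 <= x <= 0
     "F": np.array([[-0.8]]), "g": np.array([0.1])},
]

def explicit_mpc(x):
    for r in regions:
        if np.all(r["H"] @ x <= r["h"] + 1e-9):      # point location
            return r["F"] @ x + r["g"]               # affine feedback on this region
    raise ValueError("x lies outside the stored feasible domain")

print(explicit_mpc(np.array([0.5])))
```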
Real-Time Implementation of Explicit Model Predictive Control
Abstract
This chapter explains the synthesis of explicit MPC feedback laws that allow for real-time implementation on hardware with limited computational and storage resources. Four methods are introduced. The first replaces the potentially complex explicit MPC controller by a simpler feedback law by exploiting the geometry of explicit solutions. The second method reduces the storage footprint of explicit MPC by completely eliminating the critical regions, replacing them with a direct evaluation of the optimality conditions. The common denominator of both methods is that they preserve optimality while considerably reducing complexity. The third method trades lower complexity for suboptimality while simultaneously minimizing the performance loss. Finally, a method for designing stabilizing explicit MPC controllers for the control of nonlinear systems is introduced.
Michal Kvasnica, Colin N. Jones, Ivan Pejcic, Juraj Holaza, Milan Korda, Peter Bakaráč
Robust Optimization for MPC
Abstract
This chapter aims to give a concise overview of numerical methods and algorithms for implementing robust model predictive control (MPC). We introduce the mathematical problem formulation and discuss convex approximations of linear robust MPC as well as numerical methods for nonlinear robust MPC. In particular, we review and compare generic approaches based on min-max dynamic programming and scenario-trees as well as Tube MPC based on set-propagation methods. As this chapter has a strong focus on numerical methods and their practical implementation, we also review a number of existing software packages for set computations, which can be used as building blocks for the implementation of robust MPC solvers.
Boris Houska, Mario E. Villanueva
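For orientation, the generic min-max robust MPC problem underlying the methods surveyed here can be written schematically as follows; the tube-based and scenario-tree schemes discussed in the chapter are tractable reformulations or restrictions of this problem.

```latex
% Min-max robust MPC over feedback policies \pi = (\mu_0,\dots,\mu_{N-1}) (schematic)
\[
\min_{\pi}\; \max_{w_0,\dots,w_{N-1} \in \mathbb{W}}\;
\sum_{k=0}^{N-1} \ell\big(x_k, \mu_k(x_k)\big) + V_f(x_N)
\]
\[
\text{s.t.}\quad x_{k+1} = f\big(x_k, \mu_k(x_k), w_k\big),
\qquad x_k \in \mathbb{X},\;\; \mu_k(x_k) \in \mathbb{U}
\quad \text{for all admissible disturbance sequences.}
\]
```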
Scenario Optimization for MPC
Abstract
In many control problems, disturbances are a fundamental ingredient, and in stochastic Model Predictive Control (MPC) they are accounted for by considering an average cost and probabilistic constraints, where a violation of the constraints is accepted provided that the probability of this happening is kept below a given threshold. This results in a so-called chance-constrained optimization problem, which, however, is known to be very hard to deal with. In this chapter, we describe a scheme to approximately solve stochastic MPC using the scenario approach to stochastic optimization. In the scenario approach, the probabilistic constraints are replaced by a finite number of constraints, each one corresponding to a realization of the disturbance. Considering a finite sample of realizations makes the problem computationally tractable, while the link to the original chance-constrained problem is established by a rigorous theory. With this approach, along with computational tractability, one gains the important advantage that no assumptions on the disturbance, such as boundedness, independence, or Gaussianity, are required.
Marco C. Campi, Simone Garatti, Maria Prandini
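A toy sketch of the substitution described in this abstract: the chance constraint is replaced by one hard constraint per sampled disturbance realization. The data and constraint below are hypothetical, and the sample size needed to certify a given violation probability comes from the scenario theory the chapter describes.

```python
# Scenario approach (toy): replace a chance constraint on the disturbance with one
# hard constraint per sampled realization, then solve the resulting convex program.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_scenarios = 200
w = rng.normal(size=n_scenarios)            # disturbance samples (distribution need not be known)

u = cp.Variable()
# Hypothetical one-step requirement 0.9*x + u + w <= 2 with x = 1, imposed per scenario:
constraints = [0.9 * 1.0 + u + w[i] <= 2.0 for i in range(n_scenarios)]
prob = cp.Problem(cp.Minimize(cp.square(u)), constraints)
prob.solve()
print(u.value)                              # input feasible for every sampled scenario
```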
Nonlinear Programming Formulations for Nonlinear and Economic Model Predictive Control
Abstract
We present a framework for constructing robust nonlinear model predictive controllers (NMPCs) with either tracking or economic objectives. For this, we explore properties of nonlinear programming problems (NLPs) that arise in the formulation of NMPC subproblems and show their influence on stability and robustness properties. In particular, NLPs that satisfy the Mangasarian-Fromovitz constraint qualification (MFCQ), the constant rank constraint qualification (CRCQ), and generalized strong second order sufficient conditions (GSSOSC) have solutions that are continuous with respect to perturbations of the problem data. These are important prerequisites for nominal and robust stability of NMPC controllers. Moreover, we show that ensuring these properties is possible through reformulation of the NLP subproblem for NMPC via the addition of \(\ell_1\) penalty terms. We also show how these properties extend beyond tracking objective functions to economic NMPC (eNMPC), a more general dynamic optimization problem, where further reformulation is required for stability guarantees. We present and discuss the relative merits of three alternative methods for stabilizing eNMPC: objective regularization based on the full state-space, objective regularization based on a reduced set of states, and the addition of a stabilizing constraint. Finally, we demonstrate these eNMPC formulations on a continuously stirred tank reactor (CSTR) as well as a pair of coupled distillation columns.
Mingzhao Yu, Devin W. Griffith, Lorenz T. Biegler
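The \(\ell_1\)-penalty reformulation mentioned in the abstract can be summarized schematically as below; the chapter gives the precise conditions under which this softening preserves the desired regularity of the NMPC subproblem. Notation here is generic.

```latex
% Soft-constraint (\ell_1 exact penalty) reformulation of an NLP subproblem (schematic)
\[
\min_{z}\; \phi(z) \;\; \text{s.t.}\;\; g(z) \le 0
\qquad\leadsto\qquad
\min_{z,\; s \ge 0}\; \phi(z) + \rho\, \mathbf{1}^{\!\top} s \;\; \text{s.t.}\;\; g(z) \le s ,
\]
% which is exact (recovers the original constrained solution) for a sufficiently large
% penalty weight \rho.
```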

Applications

Frontmatter
Automotive Applications of Model Predictive Control
Abstract
Model Predictive Control (MPC) has been investigated for a significant number of potential applications to automotive systems. The treatment of these applications has also stimulated several developments in MPC theory, design methods, and algorithms in recent years.
Stefano Di Cairano, Ilya V. Kolmanovsky
Applications of MPC in the Area of Health Care
Abstract
Health is a major area of national expenditure in all developed countries. Many problems that arise in the area of health are, at their core, feedback control problems. This chapter provides an overview of the critical questions that must be asked to determine the nature of the problem and its solution. A variety of health applications are reviewed to identify their unique characteristics, the suitability of Model Predictive Control (MPC) to provide better treatments through better control, and some examples of how MPC has been used for these applications.
G. C. Goodwin, A. M. Medioli, K. Murray, R. Sykes, C. Stephen
Model Predictive Control for Power Electronics Applications
Abstract
Power electronics converters use switching elements to manipulate voltage and current waveforms. This enables the interconnection of components having different requirements, e.g., when incorporating renewable energy sources into the grid. The use of switching elements may lead to high energy efficiency. However, switching dynamical systems are difficult to analyse and design. In this chapter, we outline how model predictive control concepts can be used in power electronics and electrical drives. Special emphasis is given to the finite-set nature of the manipulated variables and the associated stability and optimization issues. For particular classes of system models, we discuss practical algorithms that make long-horizon predictive control suitable for power electronics applications.
Daniel E. Quevedo, Ricardo P. Aguilera, Tobias Geyer
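To illustrate the finite-set nature of the manipulated variables noted above, here is a toy enumeration-based finite-control-set MPC step with a one-step horizon and a hypothetical RL load model; it is not the chapter's algorithm, only the basic pattern.

```python
# Toy finite-control-set MPC step: enumerate the finite set of converter output
# voltages, predict one step ahead, apply the one with the lowest predicted cost.
# Model and numerical values are illustrative placeholders.
import numpy as np

def predict_current(i_now, v_out, R=0.5, L=1e-3, Ts=50e-6):
    """One-step Euler prediction of the current in a series RL load."""
    return i_now + (Ts / L) * (v_out - R * i_now)

switch_voltages = [-400.0, 0.0, 400.0]      # finite set of realizable output voltages
i_ref, i_now = 10.0, 8.0

costs = [(predict_current(i_now, v) - i_ref) ** 2 for v in switch_voltages]
best_v = switch_voltages[int(np.argmin(costs))]
print(best_v)                               # switch position applied for this sampling period
```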
Learning-Based Fast Nonlinear Model Predictive Control for Custom-Made 3D Printed Ground and Aerial Robots
Abstract
In this work, our goal is to use online learning-based nonlinear model predictive control (NMPC) for systems with uncertain and/or time-varying parameters. We have deployed it in real time for two robotic applications: an agricultural off-road ground vehicle and an aerial robotic system, namely a tilt-rotor tricopter unmanned aerial vehicle. Nonlinear moving horizon estimation (NMHE) is used to estimate the traction parameters in the former and the mass parameter in the latter. Thanks to its learning capability, NMHE makes the proposed framework adaptive, and therefore robust, to time-varying operating conditions. The experimental results for the trajectory tracking problem of the unmanned ground and aerial vehicles demonstrate a robust learning controller that provides accurate tracking. The experimental results also show that the proposed framework is fast and computationally efficient and can easily be implemented in ground and aerial robotic applications with reasonable computation power, where working conditions are time-varying and modeling of the system is tedious.
Mohit Mehndiratta, Erkan Kayacan, Siddharth Patel, Erdal Kayacan, Girish Chowdhary
Applications of MPC to Building HVAC Systems
Abstract
Heating, ventilation, and air conditioning (HVAC) systems in buildings are an emerging application area for model predictive control (MPC) due to the significant cost benefits that can be achieved via load shifting in modern electricity markets. In this paper, we discuss some of the opportunities and challenges associated with applying MPC to commercial HVAC systems. After defining the control problem, a decomposition of the centralized MPC is presented and demonstrated for an example system. Recent work at the Stanford University campus is also highlighted to show these ideas in practice, and an outlook for the field is given.
Nishith R. Patel, James Rawlings
Toward Multi-Layered MPC for Complex Electric Energy Systems
Abstract
This chapter formalizes a model predictive control (MPC) formulation for complex multi-temporal, multi-spatial electric energy systems. It is motivated by the need for data-enabled decision-making in a changing industry in which different entities can predict uncertainties at different temporal and spatial granularity. These needs arise because today’s industry practice relies on software tools that assume specific temporal and spatial decoupling, which no longer holds in an environment with highly intermittent resources and diverse decision makers. To account for temporal and spatial interdependencies, concepts of spatial and temporal lifting are utilized. These are partly motivated by the previously proposed Dynamic Monitoring and Decision Systems (DyMonDS) framework for operating smart grids. The methods confirm the information exchange protocols required for near-optimal, highly reliable, cost-effective, and clean electricity services, as proposed earlier. The use of DyMonDS tools is illustrated for efficient integration of temporally diverse generation and demand response. It is shown that this can be done while ensuring stable operation with minimal fast, expensive storage.
Marija Ilic, Rupamathi Jaddivada, Xia Miao, Nipun Popli
Applications of MPC to Finance
Abstract
This chapter describes the application of Model Predictive Control (MPC) to the finance problems of portfolio optimization and dynamic option hedging. Both of these problems are naturally formulated in the context of stochastic control, where, under idealized market settings, closed-form solutions have been found. However, realistic trading in financial markets is naturally a constrained environment. Moreover, models of stock price movement can be complex and not subject to analytical expression, while issues such as transaction costs can significantly affect the performance of trading strategies. These considerations have led to the development and successful application of MPC methods. Here, we develop the relevant system dynamics for trading, and present basic MPC formulations for both the portfolio optimization and dynamic option hedging problems. Key issues in the use of MPC for these problems and pointers to the literature provide the necessary background for those in the MPC community to begin to contribute to this exciting application area.
James A. Primbs
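A minimal sketch of the multi-period, MPC-style portfolio problem discussed above, using cvxpy with made-up return and risk data; transaction costs and the option-hedging formulation are omitted, and all numbers are hypothetical.

```python
# Toy multi-period portfolio MPC: plan weights over a short horizon, implement only
# the first-period trade, then re-plan. Return/risk data are made up for illustration.
import numpy as np
import cvxpy as cp

n_assets, N = 4, 5
mu = np.array([0.03, 0.05, 0.02, 0.04])      # hypothetical expected per-period returns
Sigma = 0.1 * np.eye(n_assets)               # hypothetical return covariance
gamma = 5.0                                  # risk-aversion weight

w = cp.Variable((n_assets, N))               # planned portfolio weights over the horizon
objective, constraints = 0, []
for k in range(N):
    objective += mu @ w[:, k] - gamma * cp.quad_form(w[:, k], Sigma)
    constraints += [cp.sum(w[:, k]) == 1, w[:, k] >= 0]   # fully invested, long only

prob = cp.Problem(cp.Maximize(objective), constraints)
prob.solve()
print(w[:, 0].value)                         # trade toward the first-period weights
```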
Backmatter
Metadata
Title: Handbook of Model Predictive Control
Editors: Saša V. Raković, William S. Levine
Copyright Year: 2019
Electronic ISBN: 978-3-319-77489-3
Print ISBN: 978-3-319-77488-6
DOI: https://doi.org/10.1007/978-3-319-77489-3