
2015 | Book

Optimal Boundary Control and Boundary Stabilization of Hyperbolic Systems


About this book

This brief considers recent results on optimal control and stabilization of systems governed by hyperbolic partial differential equations, specifically those in which the control action takes place at the boundary. The wave equation is used as a typical example of a linear system, through which the author explores initial boundary value problems, concepts of exact controllability, optimal exact control, and boundary stabilization. Nonlinear systems are also covered, with the Korteweg–de Vries and Burgers equations serving as standard examples. To keep the presentation as accessible as possible, the author uses the case of a system whose state is defined on a finite space interval, so that there are only two boundary points where the system can be controlled. Graduate and post-graduate students as well as researchers in the field will find this an accessible introduction to problems of optimal control and stabilization.

Table of contents

Frontmatter
Chapter 1. Introduction
Abstract
Many important systems in engineering are governed by hyperbolic partial differential equations. Typical examples are the flow through water transportation networks (see for example Gugat et al., Modelling, stabilization and control of flow in networks of open channels, In: Grötschel et al. (eds.) Online Optimization of Large Scale Systems, pp. 251–270, Springer, Berlin, 2001), gas flow through pipeline networks (see for example Dick et al., Stabilization of networked hyperbolic systems with boundary feedback, In: Leugering et al. (eds.), Trends in PDE Constrained Optimization, pp. 487–504, Birkhäuser, Basel, 2014 and the references therein), and power grids. Traffic flow can also be modeled by hyperbolic partial differential equations (see for example Gugat et al., J. Optim. Theory Appl. 126, 589–616, 2005; Work et al., Appl. Math. Res. Express 1, 1–35, 2010). These models make it possible to study how control action influences the states of these systems.
Martin Gugat
Chapter 2. Systems governed by the wave equation
Abstract
We consider systems that are governed by hyperbolic partial differential equations (PDEs). As a first example, we consider the wave equation
$$y_{tt} = c^{2}\, y_{xx}.$$
Here c is a real number and | c | is called the wave speed. We will focus on the one-dimensional case, where we can present essential concepts. To analyze the wave equation, the concept of traveling waves is useful.
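The traveling-wave structure mentioned in the abstract can be made explicit. The following is the classical d'Alembert representation (a standard fact, not quoted from the book): every sufficiently smooth solution of the one-dimensional wave equation is a superposition of two waves moving with speed |c| in opposite directions.

```latex
% d'Alembert: every classical solution is a sum of two traveling waves
y(x,t) = f(x - ct) + g(x + ct), \qquad f,\, g \in C^{2}.
% For initial data y(x,0) = y_0(x), y_t(x,0) = y_1(x) on the real line,
y(x,t) = \frac{y_0(x - ct) + y_0(x + ct)}{2}
       + \frac{1}{2c} \int_{x - ct}^{x + ct} y_1(s)\, ds.
```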
Martin Gugat
Chapter 3. Exact Controllability
Abstract
The question of exact controllability (see Lions, SIAM Rev. 30, 1–68, 1988; Russell, J. Math. Anal. Appl. 18, 542–560, 1967) is: Which states can be reached exactly at a given control time T with a given set of control functions, starting at time zero from an initial state in a prescribed set?
Martin Gugat
Chapter 4. Optimal Exact Control
Abstract
In optimal control problems, we choose the 'best' controls from the set of all admissible controls. In our case, the set of admissible controls consists of all controls that steer the system to the desired terminal state at the given terminal time. In general, these exact controls are not uniquely determined. Therefore we can choose from the set of admissible controls an exact control that is optimal in the sense that it minimizes an objective function modeling our preferences. This leads to an optimal control problem in which the prescribed end conditions can be regarded as equality constraints. A natural choice of objective function is the control cost, given by the norm of the control function.
Martin Gugat
Chapter 5. Boundary Stabilization
Abstract
The aim of boundary stabilization is to influence the system state, from the points where the control action takes place, in such a way that the state approaches a given desired state. Moreover, this should happen quickly, if possible at an exponential rate. Often this is achieved using feedback laws, where the current observation, and possibly information from the past, is used to determine the control action. Frequently the feedback laws do not need complete information about the current state, but only partial information that is easier to observe. The observation is taken from sensors that only provide information about the state at the point where they are located. In contrast to this approach, the optimal controls over a whole time interval [0, T] that we considered in Chapter 4 are based on complete information about a given initial state.
Martin Gugat
Chapter 6. Nonlinear Systems
Abstract
Up to now, we have considered linear systems. If for such a linear system the existence of a solution can be shown on a certain finite time interval, then the solution exists for all times, provided that the control keeps its regularity. For nonlinear systems, the situation is completely different. In a nonlinear hyperbolic system, the solution can lose part of its regularity after a finite time. For example, classical solutions typically break down after finite time, since there is a blow-up in certain partial derivatives.
Martin Gugat
Chapter 7. Distributions
Abstract
For the analysis of partial differential equations, derivatives in the sense of distributions are often needed, since classical solutions do not exist. Therefore we present a very short introduction to the theory of distributions, which has essentially been shaped by Laurent Schwartz (see Schwartz, Méthodes mathématiques pour les sciences physiques, Hermann, Paris, 1998). First we define the set of test functions. Let the dimension n ∈ {1, 2, 3, …} be given.
Martin Gugat
Backmatter
Metadata
Title
Optimal Boundary Control and Boundary Stabilization of Hyperbolic Systems
Written by
Martin Gugat
Copyright year
2015
Electronic ISBN
978-3-319-18890-4
Print ISBN
978-3-319-18889-8
DOI
https://doi.org/10.1007/978-3-319-18890-4