
About this book

This book examines applications of and methods for incorporating stochastic parameter variations into the optimization process in order to decrease the expense of corrective measures. The basic types of deterministic substitute problems occurring most often in practice involve i) minimization of the expected primary costs subject to expected recourse cost constraints (reliability constraints) and remaining deterministic constraints, e.g. box constraints, as well as ii) minimization of the expected total costs (costs of construction, design, recourse costs, etc.) subject to the remaining deterministic constraints.

After an introduction to the theory of dynamic control systems with random parameters, the major control laws are described: open-loop control, closed-loop (feedback) control, and open-loop feedback control, the latter used for the iterative construction of feedback controls. For the approximate solution of optimization and control problems with random parameters and expected cost/loss-type objective and constraint functions, Taylor expansion procedures and homotopy methods are considered. Examples and applications to the stochastic optimization of regulators are given. Moreover, for reliability-based analysis and optimal design problems, corresponding optimization-based limit state functions are constructed. Because of the complexity of concrete optimization/control problems and their lack of the mathematical regularity required by Mathematical Programming (MP) techniques, other optimization techniques, such as random search methods (RSM), have become increasingly important.

Basic results on the convergence and convergence rates of random search methods are presented. Moreover, to improve the sometimes very low convergence rate of RSM, search methods based on optimal stochastic decision processes are presented: the random search procedure is embedded into a stochastic decision process for the optimal control of the probability distributions of the search variates (mutation random variables).

Table of contents

Frontmatter

Stochastic Optimization Methods

Frontmatter

Chapter 1. Optimal Control Under Stochastic Uncertainty

Abstract
Optimal control and regulator problems that arise in many concrete applications (mechanical, electrical, thermodynamical, chemical, etc.) are modeled by dynamical control systems obtained from physical measurements and/or known physical (a priori) laws. The basic control system (input–output system) is mathematically represented by a system of first order differential equations with random parameters:
$$\displaystyle \begin {array}{rcl} \dot z (t) & = & g \Big ( t, \omega , z (t) , u (t) \Big ), t_0 \leq t \leq t_f , ~ \omega \in \Omega \\ z (t_0) & = & z_0 ( \omega ). \end {array} $$
Here, ω is the basic random element taking values in a probability space \((\Omega , \mathcal {A}, P)\), describing the random variations of model parameters or the influence of noise terms. The probability space \((\Omega , \mathcal {A}, P)\) consists of the sample space or set of elementary events Ω, the σ-algebra \(\mathcal {A}\) of events and the probability measure P. The plant state vector z = z(t, ω) is an m-vector involving directly or indirectly measurable/observable quantities like displacements, stresses, voltage, current, pressure, concentrations, etc., and their time derivatives (velocities); z 0(ω) is the random initial state. The plant control or control input u(t) is a deterministic or stochastic n-vector denoting system inputs like external forces or moments, voltages, field current, thrust program, fuel consumption, production rate, etc. Furthermore, \(\dot z\) denotes the derivative with respect to the time t.
Kurt Marti
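As a minimal numerical illustration (not taken from the book), the random-parameter control system above can be sampled by drawing an elementary event ω and integrating the resulting deterministic ODE; the plant g, the input u, and all parameter values below are hypothetical:

```python
import random

def simulate_plant(g, z0_sampler, u, t0, tf, steps, n_samples, seed=0):
    """Forward-Euler sample paths of dz/dt = g(t, omega, z, u(t)), z(t0) = z0(omega).
    Each path fixes one elementary event omega and integrates deterministically."""
    rng = random.Random(seed)
    dt = (tf - t0) / steps
    paths = []
    for _ in range(n_samples):
        omega = rng.random()          # elementary event: here a single uniform draw
        z = z0_sampler(omega)
        t = t0
        for _ in range(steps):
            z = z + dt * g(t, omega, z, u(t))
            t += dt
        paths.append(z)
    return paths

# hypothetical scalar plant: dz/dt = -a(omega) * z + u(t), with a(omega) in [1, 2]
paths = simulate_plant(
    g=lambda t, omega, z, u: -(1.0 + omega) * z + u,
    z0_sampler=lambda omega: 1.0,
    u=lambda t: 0.0,
    t0=0.0, tf=1.0, steps=200, n_samples=50,
)
mean_final = sum(paths) / len(paths)
```

Averaging such sample paths is the simplest way to approximate the expectations appearing in the expected-cost objectives of the deterministic substitute problems.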

Chapter 2. Stochastic Optimization of Regulators

Abstract
The optimal design of regulators is often based on the use of given, fixed nominal values of the initial conditions, external loads and dynamic parameters of the control system. However, due to variations of material properties, tasks to be executed, modeling errors, etc., the model parameters are not exactly known, given quantities. In addition, the state of the system cannot be observed or measured exactly; there are always some observational/measurement errors. Thus, a predetermined (optimal) regulator should be robust, i.e. the regulator should guarantee satisfactory results also in case of observational errors and errors in the selection of the initial conditions, external load parameters, dynamic parameters, etc. Since uncertainties can be modeled and recorded very efficiently in probabilistic terms, in contrast to other approaches in optimal regulator design, the occurring errors are modeled here as realizations of random variables having a given or at least partly known probability distribution. Thus, instead of calculating optimal regulators by solving very complex minimax optimization problems, robust optimal regulators can be found here by means of stochastic optimization methods. Using Taylor expansion methods for calculating the occurring expectations, stochastic optimal regulators can be determined by deterministic optimization problems, which can be handled partly analytically for additive as well as multiplicative measurement errors. Moreover, the procedure indicates how the gain matrices must be selected in order to get stable perturbation equations for the sensitivities. Finally, the given procedure is applied to the important field of active control of mechanical structures. An illustrative example is given.
Kurt Marti
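The Taylor expansion idea mentioned in the abstract can be sketched in one dimension (an illustrative example, not the book's derivation): for a cost function f of a design parameter q perturbed by a zero-mean error ε with variance σ², a second-order expansion gives E[f(q + ε)] ≈ f(q) + ½ f''(q) σ², turning the stochastic objective into a deterministic one. The quartic cost J and all values below are hypothetical:

```python
import random

def taylor_expected_cost(f, d2f, q, sigma):
    """Second-order Taylor approximation of the expected cost
    E[f(q + eps)], eps ~ N(0, sigma^2):
    E[f(q + eps)] ~ f(q) + 0.5 * f''(q) * sigma^2."""
    return f(q) + 0.5 * d2f(q) * sigma ** 2

def monte_carlo_expected_cost(f, q, sigma, n, seed=1):
    """Plain Monte Carlo estimate of E[f(q + eps)] for comparison."""
    rng = random.Random(seed)
    return sum(f(q + rng.gauss(0.0, sigma)) for _ in range(n)) / n

# hypothetical quartic cost J(q) = q^4 at the nominal design q = 1.0
J = lambda q: q ** 4
d2J = lambda q: 12.0 * q ** 2
approx = taylor_expected_cost(J, d2J, 1.0, 0.1)      # 1 + 0.5 * 12 * 0.01 = 1.06
estimate = monte_carlo_expected_cost(J, 1.0, 0.1, n=20000)
```

Note that the deterministic approximation exceeds the nominal cost J(q), reflecting the extra expected cost caused by the parameter variations.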

Chapter 3. Optimal Open-Loop Control of Dynamic Systems Under Stochastic Uncertainty

Abstract
As shown in former sections on optimal tracking problems, optimal regulator problems, and active structural optimization problems under stochastic uncertainty, approximate optimal feedback controls under stochastic uncertainty can be obtained by means of open-loop feedback control methods, used also in model predictive control under stochastic uncertainty. In order to simplify the notation, the intermediate starting points t b of the open-loop control problems are abbreviated here just by t 0.
Kurt Marti

Chapter 4. Construction of Feedback Control by Means of Homotopy Methods

Abstract
Homotopy methods are known from topology to describe deformations of certain mathematical objects O, e.g. geometrical bodies, sets, functions, etc., such that an initial object O 1 is continuously transformed into a terminal one O 2.
Kurt Marti
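A minimal sketch of the homotopy idea (illustrative only, not the book's construction): a problem with a known solution is continuously deformed into the target problem, with the solution tracked along the deformation parameter λ. The Newton-type homotopy, the test function f, and the corrector scheme below are all assumptions:

```python
def newton_homotopy(f, df, x0, steps=50, newton_iters=5):
    """Newton-type homotopy H(x, lam) = f(x) - (1 - lam) * f(x0).
    At lam = 0 the solution is x0; at lam = 1 it solves f(x) = 0.
    The known initial solution is continuously deformed into a root of f."""
    x = x0
    f0 = f(x0)
    for k in range(1, steps + 1):
        lam = k / steps
        target = (1.0 - lam) * f0
        for _ in range(newton_iters):   # corrector: Newton steps on f(x) = target
            x -= (f(x) - target) / df(x)
    return x

# hypothetical test problem: f(x) = x^3 - 2, whose root is 2**(1/3)
root = newton_homotopy(lambda x: x ** 3 - 2.0, lambda x: 3.0 * x ** 2, x0=1.0)
```

Because each continuation step changes the target only slightly, the previous iterate is always a good starting point for the Newton corrector.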

Chapter 5. Constructions of Limit State Functions

Abstract
The observations or measurements from real devices or objects, as, e.g., in the natural sciences (such as physics, chemistry, biology, earth sciences, meteorology) as well as in the social sciences (such as economics, psychology, sociology), are analyzed and ordered in order to describe them (approximately) by mathematical concepts, in mathematical terms. An important application of these mathematical models is the prediction of the future behavior of the real devices or objects under consideration. Moreover, future measurements can be used to validate the model.
Kurt Marti

Optimization by Stochastic Methods: Foundations and Optimal Control/Acceleration of Random Search Methods (RSM)

Frontmatter

Chapter 6. Random Search Procedures for Global Optimization

Abstract
Solving optimization problems from engineering, as, e.g., parameter- or process-optimization problems
$$\displaystyle \min F (x) \mbox{ s.t. } x \in D, $$
where D is a subset of \(\mathbb {R}^n\), one often meets the following situation:
(a)
One should find the global optimum in (6.1); hence, most of the deterministic programming procedures, which are based on local improvements of the performance index F(x), will fail.
 
(b)
Concerning the objective function F one has a black-box situation, i.e. there is only little a priori information about the structure of F; especially, there is no knowledge of the direct functional relationship between the control or input vector x ∈ D and its index of performance F(x). Hence, besides the more or less detailed a priori information about F, the only way of getting objective information about the structure of F is via evaluations of its values F(x) by experiments or by means of a numerical procedure simulating the technical plant.
 
Kurt Marti
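In this black-box situation, the simplest RSM evaluates F at independently drawn points of D and retains the best value seen so far. The following sketch is illustrative (the objective and the domain are hypothetical, not from the book):

```python
import random

def pure_random_search(F, sample_D, n_evals, seed=0):
    """Blind (pure) random search: draw candidates from D via sample_D and
    keep the best objective value seen; only evaluations of F are used,
    no structural knowledge of F is required."""
    rng = random.Random(seed)
    best_x, best_F = None, float("inf")
    for _ in range(n_evals):
        x = sample_D(rng)
        Fx = F(x)
        if Fx < best_F:
            best_x, best_F = x, Fx
    return best_x, best_F

# hypothetical test problem: minimize F(x) = x1^2 + x2^2 over D = [-5, 5]^2
F = lambda x: x[0] ** 2 + x[1] ** 2
x_best, F_best = pure_random_search(
    F, lambda rng: [rng.uniform(-5.0, 5.0) for _ in range(2)], n_evals=5000
)
```

Since the sampling ignores all previous evaluations, convergence is very slow; the controlled procedures of the following chapters address exactly this weakness.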

Chapter 7. Controlled Random Search Under Uncertainty

Abstract
In order to increase the rate of convergence of the basic search method (6.2a), according to Sect. 6.3 we consider the following procedure (Marti, Math Meth Oper Res 36:223–234 (1979); Marti, ZAMM 60:357–359 (1980); Marti, Controlled Random Search Procedures for Global Optimization. Lecture Notes in Control and Information Sciences, vol. 81, pp. 457–474. Springer, Berlin (1986)). Based on the basic random search method (6.2a), by means of the definitions (I)–(III) we first describe an (infinite-stage) sequential stochastic decision process associated with (6.2a).
Kurt Marti

Chapter 8. Controlled Random Search Procedures for Global Optimization

Abstract
Solving optimization problems arising from engineering and economics, as, e.g., parameter- or process-optimization problems,
$$\displaystyle \min F(x) \mbox{ s.t. } x \in D, $$
where D is a measurable subset of \(\mathbb {R}^d\) and F is a measurable real function defined (at least) on D, one often meets the following situation:
(I)
One should find the global minimum of F and/or a global minimum point x of (8.1). Hence, most of the deterministic programming procedures, which are based on local improvements of the objective function F(x), will fail.
 
(II)
Concerning the objective function F(x) one has a black-box situation, i.e. there is only little a priori information about F; especially, there is no (complete) knowledge of the direct functional relationship between the control or input vector x ∈ D and its function value y = F(x). Hence, besides the limited a priori information about F, further information on F can be obtained only by evaluating F numerically or by experiments at certain points z 1, z 2, … of \(\mathbb {R}^d\).
 
Kurt Marti

Random Search Methods (RSM): Convergence and Convergence Rates

Frontmatter

Chapter 9. Mathematical Model of Random Search Methods and Elementary Properties

Abstract
Random search methods are special stochastic optimization methods to solve the following problem:
Minimize f(x) with regard to x ∈ D, where
\(f:M \rightarrow \mathbb {R}\) and \(D \subset M \subset \mathbb {R}^d\) \(( d \in \mathbb N )\).
Kurt Marti

Chapter 10. Special Random Search Methods

Abstract
An important subclass is formed by those methods whose mutation transition probabilities p n have Lebesgue densities.
Kurt Marti

Chapter 11. Accessibility Theorems

Abstract
The central topic in the next chapters is to find sufficient conditions concerning the feasible domain D, the objective function f, and the sequence \((p_n: n \in \mathbb {N}_0)\) of mutation probability distributions guaranteeing the convergence of the corresponding random search procedure \((X_n: n \in \mathbb {N}_0)\), hence,
$$\displaystyle \begin{aligned} f(X_n) \rightarrow \inf \limits _{x \in D}f(x) \quad P-\mbox{a.s.}, \:n \rightarrow \infty , ~~ \mbox{for all starting points} ~ x_0 \in D. \end{aligned} $$
Kurt Marti

Chapter 12. Convergence Theorems

Abstract
Let \((X_n:n\in \mathbb {N}_0)\) be an R-S-M with an absolutely continuous mutation sequence \((p_n:n\in \mathbb {N})\).
Kurt Marti

Chapter 13. Convergence of Stationary Random Search Methods for Positive Success Probability

Abstract
Let \((X_n:n\in \mathbb {N}_0)\) be, in this section, a stationary R-S-M in the sense of Definition 9.3, with the mutation transition probability p. Let us now again pose the question under which conditions \((X_n:n\in \mathbb {N}_0)\) converges, i.e. when does
$$\displaystyle f(X_n) \rightarrow f^{\ast } \qquad P-\mbox{a.s.} \qquad $$
hold for any starting point x 0 ∈ D?
Kurt Marti

Chapter 14. Random Search Methods of Convergence Order \(O(n^{-\alpha })\)

Abstract
At the beginning of this section we would like to provide a small example that illustrates some of the most important ideas in simplified form.
Kurt Marti

Chapter 15. Random Search Methods with a Linear Rate of Convergence

Abstract
We now return to the example of Chap. 14. In this example we have
$$\displaystyle D=\mathbb {R}, \quad f(x)={\mid x\mid }^s \quad (s>0 \mbox{ arbitrary}), $$
and p(x, ⋅) denotes the uniform distribution on the interval [x − r(x), x + r(x)].
Kurt Marti
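A small simulation of this example (illustrative only; the particular choice s = 2 and the state-proportional radius r(x) = 2|x| below are assumptions): with a search radius proportional to |x|, each accepted step contracts |x| by a random factor, which is the mechanism behind a linear (geometric) rate of convergence.

```python
import random

def stationary_rsm_1d(f, r, x0, n_steps, seed=0):
    """One-dimensional stationary random search: propose Y uniform on
    [x - r(x), x + r(x)] and accept the proposal only if f decreases."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        y = rng.uniform(x - r(x), x + r(x))
        if f(y) < f(x):
            x = y
    return x

# f(x) = |x|^s with s = 2 and assumed radius r(x) = 2|x|
f = lambda x: abs(x) ** 2
x_final = stationary_rsm_1d(f, lambda x: 2.0 * abs(x), x0=1.0, n_steps=200)
```

Roughly half of the proposals are improvements here, and each improvement multiplies |x| by a uniform factor in (0, 1), so |x| decays geometrically in the number of steps.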

Chapter 16. Success/Failure-Driven Random Direction Procedures

Abstract
In this section we consider a random direction procedure, cf. G.S. Tarasenko, based on a very simple step length control. At any iteration step n the step length ℓ n is chosen deterministically according to the following algorithm:
$$\displaystyle \begin{aligned} \begin{array}{rcl} \ell_1 &\displaystyle = &\displaystyle \ell > 0 \\ \ell_{n+1} &\displaystyle = &\displaystyle \left\{ \begin{array}{ll} \gamma_1 \ell_n &\displaystyle \text{in case of success at the }n\text{-th step} \\ \gamma_2 \ell_n &\displaystyle \text{in case of failure at the }n\text{-th step}, \end{array} \right. \end{array} \end{aligned} $$
where \(\gamma _1 > 1\) and \(\gamma _2 \in (0,1)\) are fixed parameters.
Kurt Marti
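A minimal implementation sketch of such a success/failure-driven procedure (the Gaussian direction sampling, the particular values γ₁ = 2, γ₂ = 0.9, and the sphere test function are assumptions, not taken from the book):

```python
import random

def random_direction_search(f, x0, step0=1.0, gamma1=2.0, gamma2=0.9,
                            n_steps=400, seed=0):
    """Random direction procedure with success/failure step length control:
    expand the step length by gamma1 > 1 after a success, contract it by
    gamma2 in (0, 1) after a failure."""
    rng = random.Random(seed)
    x, l = list(x0), step0
    for _ in range(n_steps):
        d = [rng.gauss(0.0, 1.0) for _ in x]              # random direction
        norm = sum(c * c for c in d) ** 0.5
        y = [xi + l * c / norm for xi, c in zip(x, d)]
        if f(y) < f(x):
            x, l = y, gamma1 * l                          # success: expand
        else:
            l = gamma2 * l                                # failure: contract
    return x, l

sphere = lambda x: sum(c * c for c in x)
x_final, step_final = random_direction_search(sphere, [3.0, 4.0])
```

The rule keeps the empirical success rate near a fixed level: frequent successes enlarge the step, repeated failures shrink it, so the step length adapts itself to the distance from the optimum.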

Chapter 17. Hybrid Methods

Abstract
Depending on the type of random search method, it may happen that the procedure rather quickly finds a local extremum \(x_{loc}^{\ast }\) of the objective function f under consideration, but then gets stuck at this point. On the other hand, there are procedures with features that avoid this behavior. For example, a special property of simulated annealing methods is that non-improving steps are also possible, with decreasing probability (cooling). Thus, steps out of the neighborhood of a local extremum are possible with a certain probability.
Kurt Marti
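The cooling mechanism described above can be sketched as follows (an illustrative Metropolis-type variant with an assumed double-well objective and assumed parameter values, not the book's algorithm):

```python
import math
import random

def simulated_annealing(f, x0, T0=1.0, cooling=0.997, step=0.5,
                        n_steps=2000, seed=0):
    """Metropolis-type search: an uphill (non-improving) step of size delta
    is accepted with probability exp(-delta / T); the temperature T is
    lowered geometrically, so escapes from local minima become rarer."""
    rng = random.Random(seed)
    x, fx, T = x0, f(x0), T0
    best_x, best_f = x, fx
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling
    return best_x, best_f

# hypothetical double-well objective: local minimum near x = +1,
# global minimum (value about -0.305) near x = -1.04
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
x_best, f_best = simulated_annealing(f, x0=1.0)
```

Started in the basin of the local minimum at x ≈ +1, the procedure can still cross the barrier at x = 0 while the temperature is high and then settle into the global minimum as the temperature decreases.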

Optimization Under Stochastic Uncertainty by Random Search Methods (RSM)

Frontmatter

Chapter 18. Solving Optimization Problems Under Stochastic Uncertainty by Random Search Methods (RSM)

Abstract
In many optimization problems arising in practice
$$\displaystyle \begin{aligned} \min \: F_0(x) \quad \mbox{s.t. } x \in D \end{aligned} $$
only statistical information is available about the objective function F 0. Hence, we then only have access to a realization of a random function f = f(ω, x) on a probability space \(\left (\Omega ,\mathcal {A},\mathcal {P}\right )\) such that F 0(x) is the expectation of f(⋅, x):
$$\displaystyle \begin{aligned} F_0(x) = Ef(\omega ,x), \: x \in D, \end{aligned} $$
A special case is given by
$$\displaystyle \begin{aligned} f(\omega ,x) = F_0(x) + n(\omega ,x), \end{aligned} $$
where n(ω, x) is an additive zero-mean random noise term. Having a certain number m of independent sample functions
$$\displaystyle \begin{aligned} f_k(x) = f(\omega _k,x), \: k \in M, \end{aligned} $$
of the random function in (18.2a), we may use the estimated objective function
$$\displaystyle \begin{aligned} \hat {F}(x):= \frac {1}{m}\sum \limits _{k \in M}f_k(x). \end{aligned} $$
Kurt Marti
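The sample-average construction of the estimated objective can be sketched directly (illustrative only; the quadratic F0, the additive noise model, and m = 400 are assumptions):

```python
import random

def estimated_objective(f, omegas, x):
    """Sample-average approximation: F_hat(x) = (1/m) * sum_k f(omega_k, x)."""
    return sum(f(omega, x) for omega in omegas) / len(omegas)

# assumed model: f(omega, x) = F0(x) + n(omega, x), where the noise sample
# n(omega, x) is simply omega itself, drawn from N(0, 1)
F0 = lambda x: (x - 2.0) ** 2
f = lambda omega, x: F0(x) + omega

rng = random.Random(0)
omegas = [rng.gauss(0.0, 1.0) for _ in range(400)]       # m = 400 samples
F_hat = lambda x: estimated_objective(f, omegas, x)
# the same samples omega_k are reused for every x ("common random numbers"),
# so F_hat is a fixed deterministic function that an RSM can then minimize
```

Fixing the samples ω₁, …, ω_m once turns the stochastic problem into a deterministic substitute problem; by the law of large numbers, F_hat(x) approaches F0(x) as m grows.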

Backmatter
