
2000 | Book

Global Optimization with Non-Convex Constraints

Sequential and Parallel Algorithms

Authors: Roman G. Strongin, Yaroslav D. Sergeyev

Publisher: Springer US

Book series: Nonconvex Optimization and Its Applications


About this book

Everything should be made as simple as possible, but not simpler. (Albert Einstein, Reader's Digest, 1977)

The modern practice of creating technical systems and technological processes of high efficiency, besides the employment of new principles, new materials, new physical effects and other new solutions (which is very traditional and plays the key role in the selection of the general structure of the object to be designed), also includes the choice of the best combination of values for the set of parameters (geometrical sizes, electrical and strength characteristics, etc.) concretizing this general structure, because varying these parameters (with the structure already defined) can essentially affect the objective performance indexes. The mathematical tools for choosing these best combinations are exactly what this book is about. With the advent of computers and computer-aided design, the testing of the selected variants is usually performed not on real samples (this may require some very expensive building of sample options and of special installations to test them), but by the analysis of the corresponding mathematical models. The sophistication of the mathematical models for the objects to be designed, which is the natural consequence of the rising complexity of these objects, greatly complicates the objective performance analysis. Today, the main (and very often the only) available instrument for such an analysis is computer-aided simulation of an object's behavior, based on numerical experiments with its mathematical model.

Table of contents

Frontmatter

Global Optimization Algorithms as Decision Procedures. Theoretical Background and Core Univariate Case

Frontmatter
1. Introduction
Abstract
As mentioned in the Preface, mathematical problems of minimizing (or maximizing) a function often arise in the world of applications as models of decision making. To gain some insight into this relation, without resorting to a tedious listing of the many applications of optimization ideas and techniques presented in a wide range of publications, we shall start by considering one particular simple example of such a minimization problem: the problem of fitting a model to some given observed data.
Roman G. Strongin, Yaroslav D. Sergeyev
2. Global Optimization Algorithms as Statistical Decision Procedures — The Information Approach
Abstract
Coming back to optimization problems of the type (1.1.8), i.e.,
(2.1.1)
let us accept that the sought approximation to the global optimizer x* is the point x*_N provided by the uniform grid technique (1.1.13)–(1.1.15) for some specified number N of trials. This assumption, which is quite natural due to the relation (1.1.17), reduces the continuous problem (2.1.1) to the discrete problem of finding the node x_α of the uniform grid
(2.1.2)
satisfying the inequalities
(2.1.3)
where
(2.1.4)
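The grid reduction described above is easy to state in code. The sketch below is our own minimal illustration, not the book's formulation: the interval, test function, and grid size are hypothetical. For a Lipschitz function with constant L, the selected node is within L(b-a)/(2n) of the global minimum.

```python
import math

def uniform_grid_minimize(phi, a, b, n):
    """Evaluate phi at the n+1 nodes x_i = a + i*(b-a)/n and return
    the node with the smallest value: the discrete substitute for
    the continuous global minimizer."""
    best_x, best_val = a, phi(a)
    for i in range(1, n + 1):
        x = a + i * (b - a) / n
        v = phi(x)
        if v < best_val:
            best_x, best_val = x, v
    return best_x, best_val

# Hypothetical multiextremal test function on [0, 3];
# its global minimum is near x = 0.469.
x_n, v_n = uniform_grid_minimize(lambda x: math.sin(10 * x) + 0.2 * x,
                                 0.0, 3.0, 3000)
```

The cost is the exhaustive item-by-item examination of all n+1 nodes, which is exactly what the adaptive algorithms of the following chapters are designed to avoid.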
Roman G. Strongin, Yaroslav D. Sergeyev
3. Core Global Search Algorithm and Convergence Study
Abstract
The discrete global search algorithm derived in Sections 2.1–2.3 aims to produce the same solution to the problem (2.1.1) as the one assigned to this problem by the uniform grid technique, but with less computational effort than the item-by-item examination inherent in the grid technique. In fact, as already mentioned in Remark 2.6 at the end of Section 2.3, it may be worthwhile to select a substantially greater number n+1 of nodes for the grid (2.1.2) than is needed to ensure the required accuracy. Moreover, the accuracy requirements may increase during the search, and it may be quite reasonable to substantially increase the number of nodes in advance, so as to avoid the readjustment of the grid which may otherwise become necessary in the course of minimization. To resolve this problem completely, we develop the ‘limit algorithm’ for the problem (2.1.1) by transforming the above discrete algorithm with n → ∞. These transformations are conveniently carried out in the following way.
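For contrast with exhaustive grid scanning, the following sketch implements one common textbook form of an adaptive one-dimensional global search driven by interval characteristics, with the Lipschitz constant estimated from the trials themselves. It is a hedged illustration of the idea, not the book's exact algorithm: the reliability parameter r, the trial budget, and the test function are our own assumptions.

```python
import math

def global_search(phi, a, b, r=3.0, n_trials=100):
    """Adaptive one-dimensional global search: instead of an exhaustive
    uniform grid, each new trial is placed in the subinterval with the
    largest 'characteristic' R(i)."""
    xs = [a, b]
    zs = [phi(a), phi(b)]
    for _ in range(n_trials - 2):
        # Adaptive estimate of the Lipschitz constant from the data.
        mu = max(abs(zs[i] - zs[i - 1]) / (xs[i] - xs[i - 1])
                 for i in range(1, len(xs)))
        m = r * mu if mu > 0 else 1.0
        # Characteristic of the subinterval (x_{i-1}, x_i).
        def R(i):
            d = xs[i] - xs[i - 1]
            return (m * d + (zs[i] - zs[i - 1]) ** 2 / (m * d)
                    - 2 * (zs[i] + zs[i - 1]))
        t = max(range(1, len(xs)), key=R)
        # New trial strictly inside the selected subinterval.
        x_new = 0.5 * (xs[t] + xs[t - 1]) - (zs[t] - zs[t - 1]) / (2 * m)
        z_new = phi(x_new)
        xs.insert(t, x_new)
        zs.insert(t, z_new)
    k = min(range(len(xs)), key=lambda i: zs[i])
    return xs[k], zs[k]

# Hypothetical multiextremal test function on [0, 3].
x_b, v_b = global_search(lambda x: math.sin(10 * x) + 0.2 * x,
                         0.0, 3.0, r=3.0, n_trials=200)
```

Note how the trials accumulate around the most promising subintervals rather than being spread uniformly, which is the source of the computational savings discussed above.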
Roman G. Strongin, Yaroslav D. Sergeyev
4. Global Optimization Methods as Bounding Procedures — The Geometric Approach
Abstract
Let us consider again the Lipschitz-continuous function φ(x), x ∈ [a, b], from (1.2.6) and the corresponding minimization problem (2.1.1). As has already been mentioned in Chapter 1, this problem has been intensively studied by many authors. In this book, alongside the information approach presented in the previous chapters, we discuss the geometric approach to solving the problem (2.1.1). We pay particular attention to the ideas of adaptive estimation of the objective function behaviour introduced in Chapters 2 and 3 and to the ways in which they may be applied to another class of algorithms.
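As a minimal concrete instance of the geometric approach, the sketch below follows Piyavskii's classical saw-tooth scheme, in which a known Lipschitz constant L yields a piecewise-linear lower envelope of the objective; unlike the adaptive information scheme, L must be supplied (any overestimate works). The value of L, the test function, and the trial budget here are our own assumptions.

```python
import math

def piyavskii(phi, a, b, L, n_trials=100):
    """Piyavskii's saw-tooth method: the Lipschitz constant L defines a
    piecewise-linear minorant of phi; each new trial goes to the
    subinterval whose minorant dips lowest."""
    xs = [a, b]
    zs = [phi(a), phi(b)]
    for _ in range(n_trials - 2):
        # Minimum of the minorant on the subinterval (x_{i-1}, x_i).
        def lower(i):
            return 0.5 * (zs[i] + zs[i - 1]) - 0.5 * L * (xs[i] - xs[i - 1])
        t = min(range(1, len(xs)), key=lower)
        # Abscissa of the minorant's lowest point in that subinterval.
        x_new = 0.5 * (xs[t] + xs[t - 1]) - (zs[t] - zs[t - 1]) / (2 * L)
        zs.insert(t, phi(x_new))
        xs.insert(t, x_new)
    k = min(range(len(xs)), key=lambda i: zs[i])
    return xs[k], zs[k]

# Hypothetical test function on [0, 3]; true Lipschitz constant is
# about 10.2, so L = 12 is a valid overestimate.
x_c, v_c = piyavskii(lambda x: math.sin(10 * x) + 0.2 * x,
                     0.0, 3.0, L=12.0, n_trials=200)
```

The geometric (bounding) and information (statistical) schemes share the same mechanics of selecting a subinterval and placing a trial; they differ in how the selection rule is justified and in whether the constant is given or adaptively estimated.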
Roman G. Strongin, Yaroslav D. Sergeyev

Generalizations for Parallel Computing, Constrained and Multiple Criteria Problems

Frontmatter
5. Parallel Global Optimization Algorithms and Evaluation of the Efficiency of Parallelism
Abstract
The general global optimization problem of finding a global minimizer x* and the global minimum φ(x*) of the multiextremal function φ(x) defined over a domain M, i.e.,
(5.1.1)
arises in various applications, and numerical methods are used to find ε-optimal solutions to this problem.
Roman G. Strongin, Yaroslav D. Sergeyev
6. Global Optimization under Non-Convex Constraints — The Index Approach
Abstract
Consider the constrained global minimization problem
(6.1.1)
where the objective function φ, henceforth denoted g_{m+1}, i.e., g_{m+1}(x) = φ(x), and the left-hand sides g_i, 1 ≤ i ≤ m, of the constraints are assumed to be Lipschitzian with respective constants L_i, 1 ≤ i ≤ m+1, and, in general, are multiextremal.
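The core bookkeeping of an index-style scheme can be sketched as follows (a minimal illustration under our own assumptions; the constraint functions are hypothetical): every trial point x gets an index ν(x), the number of the first violated constraint (ν = m+1 if all constraints hold), and trials are compared first by index and then by the value of the corresponding function, so no penalty coefficients are needed.

```python
def index_and_value(x, gs):
    """gs = [g_1, ..., g_m, g_{m+1}], where g_{m+1} is the objective
    and the constraints are g_i(x) <= 0. Returns (nu, z): nu is the
    first index with g_nu(x) > 0 (a violated constraint), or m+1 if
    x is feasible; z is the value g_nu(x)."""
    m = len(gs) - 1
    for i, g in enumerate(gs[:m], start=1):
        if g(x) > 0:
            return i, g(x)
    return m + 1, gs[m](x)

def better(a, b):
    """Trial a = (nu_a, z_a) beats b = (nu_b, z_b) if it has a higher
    index (violates a later constraint, or is feasible), or the same
    index with a smaller function value."""
    return a[0] > b[0] or (a[0] == b[0] and a[1] < b[1])
```

With this ordering, any feasible trial automatically dominates any infeasible one, and among feasible trials the comparison reduces to the objective values.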
Roman G. Strongin, Yaroslav D. Sergeyev
7. Algorithms for Multiple Criteria Multiextremal Problems
Abstract
The problem of minimizing a vector-valued objective function
(7.1.1)
where
(7.1.2)
has received special attention in the context of multiple criteria decision making in optimal design of technical systems (see e.g., Batishchev (1975), Kasnoshchekov, Petrov and Fiodorov (1986)), in conditions of uncertainty (see e.g., Zhukovskii and Molostvov (1990)), in classical problems of identifying parameters of a model to match the experimental data, etc. (see also e.g., Hwang and Masud (1979), Podinovskii and Nogin (1982), Yemelianov and Larichev (1985), Yu (1985), Levandovski and Volkovich (1991), Steuer (1986) and the references given therein).
Roman G. Strongin, Yaroslav D. Sergeyev

Global Optimization in Many Dimensions. Generalizations through Peano Curves

Frontmatter
8. Peano-Type Space-Filling Curves as Means for Multivariate Problems
Abstract
A large number of decision problems in the world of applications may be formulated as searching for a constrained global optimum (a minimum, for definiteness)
(8.1.1)
where the domain of search
(8.1.2)
R^N is the N-dimensional Euclidean space, and the objective function φ(y) (henceforth denoted g_{m+1}(y)) and the left-hand sides g_i(y), 1 ≤ i ≤ m, of the constraints are Lipschitzian with respective constants L_i, 1 ≤ i ≤ m+1, i.e., for any two points y′, y″ ∈ D it is true that
$$ \left| {{g_i}\left( {y'} \right) - {g_i}\left( {y''} \right)} \right| \leqslant {L_i}\left\| {y' - y''} \right\| = {L_i}{\left\{ {\sum\limits_{j = 1}^N {{{\left( {{{y'}_j} - {{y''}_j}} \right)}^2}} } \right\}^{1/2}}, $$
(8.1.3)
1 ≤ i ≤ m+1. Note that (8.1.1) is the obvious generalization of the one-dimensional constrained problem (6.1.1) to the case of N dimensions, i.e., to the case when the sought decision is described by some N-dimensional vector y* ∈ D. If the domain of search is defined by the hyperparallelepiped
(8.1.4)
then, by introducing the transformation
(8.1.5)
(8.1.6)
and the extra constraint
(8.1.7)
it is possible to retain the initial presentation (8.1.2) of the domain of search (which is assumed to be the standard one) without altering the Lipschitzian relations in the respective dimensions.
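To make the space-filling-curve idea concrete, here is a minimal sketch (our own illustration, using a Hilbert curve, a Peano-type curve, in place of the book's specific Peano evolvents; the test function is hypothetical): a one-dimensional scan of x ∈ [0, 1] becomes a scan of the unit square via the curve mapping y(x).

```python
def hilbert_d2xy(order, d):
    """Map the d-th step along a Hilbert curve over a 2^order x 2^order
    cell grid to integer cell coordinates (the standard bit-twiddling
    conversion)."""
    x = y = 0
    s, t = 1, d
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def minimize_via_curve(f, order=8):
    """Reduce min f(y1, y2) over the unit square to a one-dimensional
    scan along the space-filling curve."""
    n = 4 ** order                       # number of curve cells
    side = 2 ** order
    best = None
    for d in range(n):
        i, j = hilbert_d2xy(order, d)
        y = ((i + 0.5) / side, (j + 0.5) / side)
        v = f(y)
        if best is None or v < best[0]:
            best = (v, y)
    return best

# Hypothetical objective with minimum at (0.3, 0.7).
v_f, y_f = minimize_via_curve(
    lambda y: (y[0] - 0.3) ** 2 + (y[1] - 0.7) ** 2, order=8)
```

The brute-force scan here is only to exhibit the reduction; in the book's setting the one-dimensional algorithms of the earlier chapters are applied along the curve instead, exploiting the continuity properties of the mapping.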
Roman G. Strongin, Yaroslav D. Sergeyev
9. Multidimensional Parallel Algorithms
Abstract
Let us consider again the multidimensional problem
(9.0.1)
(9.0.2)
where φ(y) is a Lipschitz function with a constant L, 0 < L < ∞. In this chapter, several approaches introduced in the previous chapters are unified for solving the problem (9.0.1), (9.0.2):
Roman G. Strongin, Yaroslav D. Sergeyev
10. Multiple Peano Scannings and Multidimensional Problems
Abstract
We commence by recalling some properties of the Peano curve y(x) from Theorem 8.1. From (8.1.21)–(8.1.23) it follows that any two points x′, x″ from the unit interval [0,1] on the x-axis have close images y′ = y(x′), y″ = y(x″) in the hypercube
(10.1.1)
moreover, if
$$ \left| {x' - x''} \right| \leqslant {2^{ - \left( {M + 1} \right)N}}, $$
(10.1.2)
where M ≥ 1 is an integer, then
$$ \max \left\{ {\left| {{{y'}_j} - {{y''}_j}} \right|:1 \leqslant j \leqslant N} \right\} \leqslant {2^{ - M}}. $$
(10.1.3)
Roman G. Strongin, Yaroslav D. Sergeyev
Backmatter
Metadata
Title
Global Optimization with Non-Convex Constraints
Authors
Roman G. Strongin
Yaroslav D. Sergeyev
Copyright year
2000
Publisher
Springer US
Electronic ISBN
978-1-4615-4677-1
Print ISBN
978-1-4613-7117-5
DOI
https://doi.org/10.1007/978-1-4615-4677-1