
2002 | Book

Handbook of Global Optimization

Volume 2

Edited by: Panos M. Pardalos, H. Edwin Romeijn

Publisher: Springer US

Book series: Nonconvex Optimization and Its Applications


About this book

In 1995 the Handbook of Global Optimization (first volume), edited by R. Horst and P.M. Pardalos, was published. This second volume of the Handbook of Global Optimization comprises chapters dealing with modern approaches to global optimization, including different types of heuristics.

Topics covered in the handbook include various metaheuristics, such as simulated annealing, genetic algorithms, neural networks, taboo search, shake-and-bake methods, and deformation methods. In addition, the book contains chapters on new exact stochastic and deterministic approaches to continuous and mixed-integer global optimization, such as stochastic adaptive search, two-phase methods, branch-and-bound methods with new relaxation and branching strategies, algorithms based on local optimization, and dynamical search. Finally, the book contains chapters on experimental analysis of algorithms and software, test problems, and applications.

Table of Contents

Frontmatter
Chapter 1. Tight Relaxations for Nonconvex Optimization Problems Using the Reformulation-Linearization/Convexification Technique (RLT)
Abstract
This paper provides an expository discussion on applying the Reformulation-Linearization/Convexification Technique (RLT) to design exact and approximate methods for solving nonconvex optimization problems. While the main focus here is on continuous nonconvex programs, we demonstrate that this approach provides a unifying framework that can accommodate discrete problems such as linear zero-one mixed-integer programming problems and general bounded-variable integer programs as well. The basic RLT approach is designed to solve polynomial programming problems having nonconvex polynomial objective functions and constraints. The principal RLT construct for such problems is to first reformulate the problem by adding a suitable set of polynomial constraints and then to linearize this resulting problem through a variable substitution process in order to produce a tight higher dimensional linear programming relaxation. Various additional classes of valid inequalities, filtered through a constraint selection scheme or a separation routine, are proposed to further tighten the developed relaxation while keeping it manageable in size. This relaxation can be embedded in a suitable branch-and-bound scheme in order to solve the original problem to global optimality via a sequence of linear programming problems. Because of the tight convexification effect of this technique, a limited enumeration or even the use of a local search method applied to the initial relaxation’s solution usually serves as an effective heuristic strategy. We also discuss extensions of this approach to handle polynomial programs having rational exponents, as well as general factorable programming problems, and we provide recommendations for using RLT to solve even more general unstructured classes of nonconvex programming problems. Some special case applications to solve squared-Euclidean, Euclidean, or ℓp-distance location-allocation problems are discussed to illustrate the RLT methodology.
Hanif D. Sherali
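The degree-two RLT construct described in the abstract can be illustrated on a single bilinear term (a Python sketch, not code from the chapter; the function name is illustrative): multiplying pairs of nonnegative bound factors such as (x − xL)(y − yL) ≥ 0, expanding, and substituting w = xy yields four linear inequalities on w, which are exactly the McCormick envelope of the bilinear term.

```python
def rlt_envelope(x, y, xL, xU, yL, yU):
    """Linear bounds on w = x*y obtained from the four RLT bound-factor products.

    Each inequality comes from multiplying two nonnegative bound factors,
    expanding, and replacing the product x*y by the new variable w.
    """
    lower = max(xL * y + yL * x - xL * yL,   # from (x - xL)(y - yL) >= 0
                xU * y + yU * x - xU * yU)   # from (xU - x)(yU - y) >= 0
    upper = min(xU * y + yL * x - xU * yL,   # from (xU - x)(y - yL) >= 0
                xL * y + yU * x - xL * yU)   # from (x - xL)(yU - y) >= 0
    return lower, upper
```

At any point of the box these linear bounds sandwich the true product, and they are tight at the box corners, which is what makes the resulting LP relaxation useful inside branch-and-bound.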
Chapter 2. Exact Algorithms for Global Optimization of Mixed-Integer Nonlinear Programs
Abstract
This chapter presents recent advances in the development and application of global optimization algorithms for solving mixed-integer nonlinear programs (MINLPs). It is demonstrated that practically relevant nonlinear programs can be solved to global optimality in a completely automated fashion when carefully chosen relaxation schemes, branching strategies, and domain reduction techniques are used in conjunction with branch-and-bound to enhance its performance. In particular, this chapter presents a) applications of the convex extensions theory for constructing tight relaxations, b) unifying ideas behind domain reduction schemes, c) linear outer-approximation schemes with proven convergence guarantees, and d) branching schemes for factorable nonlinear programs. The chapter concludes with computational results on some benchmark mixed-integer nonlinear problems. New solutions are reported for four of these problems.
Mohit Tawarmalani, Nikolaos V. Sahinidis
Chapter 3. Algorithms for Global Optimization and Discrete Problems Based on Methods for Local Optimization
Abstract
One of the most challenging optimization problems is determining the minimizer of a nonlinear nonconvex problem in which some of the variables are discrete. Any such problem may be transformed into that of finding a global optimum of a problem in continuous variables. However, such transformed problems have astronomically large numbers of local minimizers, making them harder to solve than typical global optimization problems. Despite this apparent disadvantage, we show the approach is not hopeless.
Since the technique requires finding a global minimizer, we review the approaches to solving such problems. Our interest is in problems with differentiable functions, and our focus is on algorithms that utilize the large body of work available on finding local minimizers of smooth functions. The method we advocate convexifies the problem and uses a homotopy approach to find the required solution. To illustrate how well the new algorithm performs, we apply it to a hard frequency assignment problem and to binary quadratic problems taken from the literature.
Walter Murray, Kien-Ming Ng
Chapter 4. An Introduction to Dynamical Search
Abstract
A number of classical optimisation algorithms which appear to converge smoothly behave in a haphazard fashion when looked at on a local, or second-order, level. Using different renormalisation procedures, we link these algorithms to dynamical systems and then study these systems to get additional information about the rates of convergence of the original algorithms. These convergence rates are expressed in terms of the Lyapunov exponents and various entropies of the dynamical systems. Working in a dynamical-system environment has suggested new types of algorithms and improvements over a number of classical algorithms. One result of the approach is that algorithms classically considered optimal are in fact optimal only in the worst case, and, ergodically, the worst-case events have measure zero. We are thus often able to construct algorithms with improved ergodic performance. As the main application areas we consider line search, ellipsoidal algorithms for linear and nonlinear programming, and gradient algorithms.
Luc Pronzato, Henry P. Wynn, Anatoly A. Zhigljavsky
Chapter 5. Two-Phase Methods for Global Optimization
Abstract
Virtually all methods for global optimization consist of two phases: a global phase, aimed at a thorough exploration of the feasible region or of those subsets of the feasible region in which the global optimum is known to lie, and a local phase, aimed at locally improving the approximation to some local optima. Often these two phases are blended into the same algorithm, which automatically switches between exploration and refinement. In this paper, some methods will be reviewed in which the two phases are well separated. A formal definition of two-phase methods will be given and some of the characteristics of these methods will be outlined.
Fabio Schoen
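The simplest two-phase scheme of the kind surveyed above is multistart: uniform sampling over the feasible box (global phase) followed by a local descent from each sample (local phase). A minimal Python sketch, assuming a one-dimensional box and a user-supplied gradient; all names and parameter values are illustrative:

```python
import random

def multistart(f, grad, lo, hi, n_starts=50, steps=200, lr=0.01, seed=0):
    """Two-phase multistart: uniform global sampling plus projected
    gradient-descent refinement; returns the best local optimum found."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x = rng.uniform(lo, hi)                    # global phase: explore the box
        for _ in range(steps):                     # local phase: refine
            x = min(hi, max(lo, x - lr * grad(x)))
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

With enough starts, at least one sample lands in the basin of the global optimum, which the local phase then refines; the chapter's interest is precisely in how to allocate effort between the two phases more cleverly than this uniform scheme does.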
Chapter 6. Simulated Annealing Algorithms for Continuous Global Optimization
Abstract
In this chapter we will consider Simulated Annealing algorithms for continuous global optimization. After a description of the generic Simulated Annealing algorithm, its four main components (the distribution of the next candidate point, the acceptance function, the cooling schedule and the stopping criterion) will be analyzed in greater detail and some key ideas will be presented. In view of the strict connection with Simulated Annealing algorithms, we will also discuss the Langevin equation and its discretized version. The theoretical issue of convergence in probability to the set of global optima of the sequences of iterates and candidates will be explored. We will also give some insight into the properties of the objective functions for which a successful application of Simulated Annealing algorithms can be expected. Finally, some other issues such as applications, computational tests and parallelization of these algorithms will be discussed.
Marco Locatelli
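The four components listed in the abstract (candidate distribution, acceptance function, cooling schedule, stopping criterion) can be seen in a generic Simulated Annealing loop. A minimal Python sketch, assuming a one-dimensional objective, a Gaussian candidate distribution, Metropolis acceptance, and geometric cooling; all names and parameter values are illustrative:

```python
import math
import random

def simulated_annealing(f, x0, step=1.0, t0=5.0, cooling=0.99,
                        iters=5000, seed=1):
    """Generic SA: Gaussian candidate distribution, Metropolis acceptance
    function, geometric cooling schedule, fixed iteration budget."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)            # candidate distribution
        fc = f(cand)
        # Metropolis acceptance: always accept improvements, accept
        # deteriorations with probability exp(-(fc - fx) / t).
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                               # cooling schedule
    return best_x, best_f
```

Replacing the Gaussian candidate by a Langevin-type discretized diffusion step, as discussed in the chapter, changes only the candidate-generation line.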
Chapter 7. Stochastic Adaptive Search
Abstract
Large scale optimisation problems are often tackled using stochastic adaptive search algorithms, but the convergence of such methods to the global optimum is generally poorly understood. In recent years a variety of theoretical stochastic adaptive algorithms have been put forward and their favourable convergence properties confirmed analytically. Such research has two purposes: it offers some understanding of the convergence of stochastic adaptive methods while also providing motivation for the development of practical algorithms which approximate the ideal performance. This chapter summarises these developments.
Graham R. Wood, Zelda B. Zabinsky
Chapter 8. Implementation of Stochastic Adaptive Search with Hit-and-Run as a Generator
Abstract
The Hit-and-Run algorithm iteratively generates a sequence of points in a set by taking steps of random length in randomly generated directions. Hit-and-Run can be used as a sampling technique within a global optimization algorithm, in particular as a generator in the context of simulated annealing. The ultimate goal is to approximate the desirable properties of stochastic adaptive search, in particular the linear complexity of pure adaptive search. One such algorithm is the Hit-and-Run version of simulated annealing with temperature equal to zero, called Improving Hit-and-Run because only points improving the objective function are accepted. The Hit-and-Run generator coupled with a Metropolis acceptance criterion is called Hide-and-Seek, and is motivated by theoretical results from adaptive search. Other variations of Hit-and-Run, as applied to global optimization, are also described here.
Zelda B. Zabinsky, Graham R. Wood
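The zero-temperature variant described above, Improving Hit-and-Run, is short enough to sketch in full: draw a uniform direction on the sphere, pick a uniform point on the chord of the feasible box through the current point in that direction, and accept it only if it improves the objective. A Python sketch on a box constraint; names and parameters are illustrative:

```python
import math
import random

def improving_hit_and_run(f, x0, lo, hi, iters=2000, seed=2):
    """Improving Hit-and-Run on the box [lo, hi]^n: random direction,
    uniform step along the feasible chord, improving points only."""
    rng = random.Random(seed)
    n = len(x0)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(di * di for di in d))
        d = [di / norm for di in d]                # uniform direction on the sphere
        # Feasible chord through x in direction d: intersect coordinate-wise.
        tmin, tmax = -math.inf, math.inf
        for xi, di in zip(x, d):
            if abs(di) > 1e-12:
                a, b = (lo - xi) / di, (hi - xi) / di
                tmin, tmax = max(tmin, min(a, b)), min(tmax, max(a, b))
        t = rng.uniform(tmin, tmax)                # uniform point on the chord
        cand = [xi + t * di for xi, di in zip(x, d)]
        fc = f(cand)
        if fc < fx:                                # zero-temperature acceptance
            x, fx = cand, fc
    return x, fx
```

Replacing the improving-only acceptance test with a Metropolis test at positive temperature turns this sketch into the Hide-and-Seek pattern mentioned in the abstract.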
Chapter 9. Genetic Algorithms
Abstract
Evolutionary Algorithms are search algorithms based on the Darwinian metaphor of “Natural Selection”. Typically these algorithms maintain a finite memory, or “population” of individual solutions (points on the search landscape), each of which has a scalar value, or “fitness” attached to it, which in some way reflects the quality of the solution. The search proceeds via the iterative generation, evaluation and possible incorporation of new individuals based on the current population. A number of classes of Evolutionary Algorithms can (broadly speaking) be distinguished by the nature of the alphabet used to represent the search space and the specialisation of the various operators used, e.g. Genetic Algorithms (Holland, 1975: binary or finite discrete representations), Evolution Strategies (Rechenberg, 1973; Schwefel, 1981: real numbers), Evolutionary Programming (Fogel et al., 1966: real numbers), and Genetic Programming (Cramer, 1985; Koza, 1989: tree based representation of computer programs). Although not originally designed for function optimisation, genetic algorithms have been shown to demonstrate an impressive ability to locate optima in large, complex and noisy search spaces.
James E. Smith
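The generate-evaluate-incorporate cycle described in the abstract, specialized to a binary representation, can be sketched as a minimal Genetic Algorithm with tournament selection, one-point crossover, and bit-flip mutation (a Python sketch; names and parameter values are illustrative, and real GAs typically add elitism and more careful operator tuning):

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, gens=60,
                      p_mut=0.05, seed=3):
    """Minimal GA on bitstrings: binary tournament selection,
    one-point crossover, per-bit mutation; returns the fittest
    individual of the final population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    def tournament():
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randint(1, n_bits - 1)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g
                     for g in child]               # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

On the classic one-max problem (fitness = number of ones), this population-based search reliably climbs toward the all-ones string, illustrating the selection/variation loop common to all the Evolutionary Algorithm families listed above.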
Chapter 10. Dataflow Learning in Coupled Lattices: An Application to Artificial Neural Networks
Abstract
This chapter describes artificial neural networks (ANNs) as coupled lattices of dynamic nonlinear processing elements and studies ways to adapt their parameters. This view extends the conventional paradigm of static neural networks and shines light on the principles of parameter optimization, both for the static and dynamic cases. We show how gradient descent learning can be naturally implemented with local rules in coupled lattices. We review the present state of the art in neural network training and present recent results that take advantage of the distributed nature of coupled lattices for optimization.
Jose C. Principe, Curt Lefebvre, Craig L. Fancourt
Chapter 11. Taboo Search: An Approach to the Multiple-Minima Problem for Continuous Functions
Abstract
We describe an approach, based on Taboo (or “Tabu”) Search for discrete functions, for solving the multiple-minima problem of continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. The procedure is generally applicable, derivative-free, easy to implement, conceptually simpler than Simulated Annealing, and open to further improvement.
Djurdje Cvijović, Jacek Klinowski
Chapter 12. Recent Advances in the Direct Methods of X-Ray Crystallography
Abstract
The electron density function ρ(r) in a crystal determines its diffraction pattern, that is, both the magnitudes and phases of its X-ray diffraction maxima, and conversely. If, however, as is always the case, only magnitudes are available from the diffraction experiment, then the density function ρ(r) cannot be recovered. If one invokes prior structural knowledge, usually that the crystal is composed of discrete atoms of known atomic numbers, then the observed magnitudes are, in general, sufficient to determine the positions of the atoms, that is the crystal structure.
The intensities of a sufficient number of X-ray diffraction maxima determine the structure of a crystal. The available intensities usually exceed the number of parameters needed to describe the structure. From these intensities a set of numbers |E_H| can be derived, one corresponding to each intensity. However, the elucidation of the crystal structure also requires a knowledge of the complex numbers E_H = |E_H| exp(iφ_H), the normalized structure factors, of which only the magnitudes |E_H| can be determined from experiment. Thus, a “phase” φ_H, unobtainable from the diffraction experiment, must be assigned to each |E_H|, and the problem of determining the phases when only the magnitudes |E_H| are known is called “the phase problem.” Owing to the known atomicity of crystal structures and the redundancy of observed magnitudes |E_H|, the phase problem is solvable in principle.
Herbert A. Hauptman
Chapter 13. Deformation Methods of Global Optimization in Chemistry and Physics
Abstract
Deformation of the target function in global optimization has been a novel possibility for the last decade. The techniques based on the deformation turned out to be related to a variety of fundamental laws: the diffusion equation, time-dependent Schrödinger equations, Smoluchowski dynamics, the Bloch equation of canonical ensemble evolution with temperature, and the Gibbs free-energy principle. The progress indicator of global optimization in those methods takes different physical meanings: time, imaginary time or the inverse absolute temperature. Although the phenomena described are different, the resulting global optimization procedures have a remarkable similarity. In the case of the Gaussian Ansatz for the wave function or density distribution, the underlying differential equations of motion for the Gaussian position and width are of the same kind for all the phenomena. The original potential energy function is smoothed by a convolution with a Gaussian distribution, its center denoting the current position in space during the minimization. The Gaussian position moves according to the negative gradient of the smoothed potential energy function. The Gaussian width is position dependent through the curvature of the smoothed potential energy function, and evolves according to the following rule. For sufficiently positive curvatures (close to minima of the smoothed potential) the width decreases, thus leading to a smoothed potential approaching the original potential energy function, while for negative curvatures (close to maxima) the width increases, leading eventually to disappearance of humps of the original potential energy function. This allows for crossing barriers separating the energy basins. Some methods result in an additional term, which increases the width when the potential becomes flat. This may be described as a feature allowing the search to hunt for distant minima. Some deformation methods that are of non-convolutional character are also discussed.
Lucjan Piela
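The core smoothing-and-homotopy idea described above can be sketched numerically: convolve the potential with a Gaussian, run gradient descent on the smoothed surface, and gradually shrink the width back toward the original function. A Python sketch, assuming a one-dimensional potential; here the width follows a simple fixed geometric schedule rather than the curvature-driven evolution discussed in the chapter, and all names and parameters are illustrative:

```python
import math

def smoothed(f, x, s, half_width=4.0, n=81):
    """Gaussian-smoothed potential <f>(x) = E[f(x + z)], z ~ N(0, s^2),
    approximated by a normalized weighted sum over a grid on [-4s, 4s]."""
    if s <= 0:
        return f(x)
    h = 2 * half_width * s / (n - 1)
    num = den = 0.0
    for i in range(n):
        z = -half_width * s + i * h
        w = math.exp(-z * z / (2 * s * s))
        num += w * f(x + z)
        den += w
    return num / den

def deformation_descent(f, x, s0=2.0, shrink=0.8, stages=20, steps=100, lr=0.01):
    """Homotopy: gradient descent (numeric central difference) on the
    smoothed potential while the Gaussian width is reduced stage by stage."""
    s = s0
    for _ in range(stages):
        for _ in range(steps):
            eps = 1e-4
            g = (smoothed(f, x + eps, s) - smoothed(f, x - eps, s)) / (2 * eps)
            x -= lr * g
        s *= shrink                 # deform back toward the original potential
    return x
```

At large width the smoothed double well below is unimodal, so the descent escapes the shallow basin; as the width shrinks, the iterate tracks the surviving minimum into the global basin of the original potential.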
Chapter 14. Experimental Analysis of Algorithms
Abstract
This chapter surveys methodological issues that arise in experimental research on optimization algorithms. Guidelines are presented for selecting research problems, input classes, and program metrics; for ensuring correctness and reproducibility of the results; and for applying appropriate data analysis techniques and presenting conclusions.
Catherine C. McGeoch
Chapter 15. Global Optimization: Software, Test Problems, and Applications
Abstract
This chapter provides a concise review of the most prominent global optimization (GO) strategies currently available. This is followed by a discussion of GO software, test problems and several important types of applications, with additional pointers. The exposition is concentrated around topics related to continuous GO, although in certain aspects it is also pertinent to analogous issues in combinatorial optimization.
János D. Pintér
Backmatter
Metadata
Title
Handbook of Global Optimization
Edited by
Panos M. Pardalos
H. Edwin Romeijn
Copyright year
2002
Publisher
Springer US
Electronic ISBN
978-1-4757-5362-2
Print ISBN
978-1-4419-5221-9
DOI
https://doi.org/10.1007/978-1-4757-5362-2