
2003 | Book

Stochastic Adaptive Search for Global Optimization

Author: Zelda B. Zabinsky

Publisher: Springer US

Book series: Nonconvex Optimization and Its Applications


About this book

The field of global optimization has been developing at a rapid pace. There is a journal devoted to the topic, as well as many publications and notable books discussing various aspects of global optimization. This book is intended to complement these other publications with a focus on stochastic methods for global optimization. Stochastic methods, such as simulated annealing and genetic algorithms, are gaining in popularity among practitioners and engineers because they are relatively easy to program on a computer and may be applied to a broad class of global optimization problems. However, the theoretical performance of these stochastic methods is not well understood. In this book, an attempt is made to describe the theoretical properties of several stochastic adaptive search methods. Such a theoretical understanding may allow us to better predict algorithm performance and ultimately design new and improved algorithms.

This book consolidates a collection of papers on the analysis and development of stochastic adaptive search. The first chapter introduces random search algorithms. Chapters 2-5 describe the theoretical analysis of a progression of algorithms. A main result is that the expected number of iterations for pure adaptive search is linear in dimension for a class of Lipschitz global optimization problems. Chapter 6 discusses algorithms, based on the Hit-and-Run sampling method, that have been developed to approximate the ideal performance of pure adaptive search. The final chapter discusses several applications in engineering that use stochastic adaptive search methods.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
Global optimization refers to a mathematical program that seeks a maximum or minimum objective function value over a set of feasible solutions. The adjective “global” indicates that the optimization problem may be very general in nature; the objective function may be nonconvex, nondifferentiable, and possibly discontinuous over a continuous or discrete domain. A global optimization problem with continuous variables may contain several local optima or stationary points. The problem of designing algorithms that obtain global solutions is very difficult when there is no overriding structure indicating whether a local solution is indeed the global solution.
Zelda B. Zabinsky
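To make the difficulty concrete, here is a minimal Python sketch (an illustrative example, not from the book) of a one-dimensional objective with several local minima; a descent method started at the wrong point stalls at a local minimum, and no local test certifies global optimality.

```python
import numpy as np

# Hypothetical multimodal objective on [-3, 3]: the sine term creates
# several local minima around the convex quadratic trend, so a stationary
# point carries no certificate of being the global minimum.
def f(x):
    return x**2 + 10.0 * np.sin(3.0 * x)

xs = np.linspace(-3.0, 3.0, 10001)
print("grid estimate of the global minimizer:", xs[np.argmin(f(xs))])
```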
Chapter 2. Pure Random Search and Pure Adaptive Search
Abstract
A theoretical analysis of the performance of two stochastic methods is presented, pure random search (PRS) and pure adaptive search (PAS). These two random search algorithms can be viewed as extremes and are not intended to be practical algorithms, but rather are used to provide insight and bracket the performance of stochastic algorithms. Pure random search samples points from the domain independently, and the objective function has no impact on the technique of generating the next sample point. In contrast, pure adaptive search samples the next point from the subset of the domain with strictly superior objective function values. Pure random search is extreme in the sense that the iterates are completely independent and never use previous information to affect the search strategy, while pure adaptive search is extreme in an opposite sense because the iterates completely depend on each other, and the search strategy forces improvement by definition. These two extreme stochastic methods are analyzed and it is shown that, under certain conditions, the expected number of iterations of pure adaptive search is linear in dimension, while pure random search is exponential in dimension. The linearity result for pure adaptive search implies that adapting the search to sample improving points is very powerful.
Zelda B. Zabinsky
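The PRS/PAS contrast can be sketched in a few lines of Python. The sketch below assumes a box-constrained minimization problem, and the function names are illustrative. Note that the rejection loop used to realize PAS is only a conceptual device (it spends as many evaluations as PRS); true PAS assumes the improving level set can be sampled directly, which is exactly why it is an idealized benchmark rather than a practical algorithm.

```python
import numpy as np

def pure_random_search(f, lo, hi, n_samples, rng):
    """PRS: iterates are independent uniform samples over the box [lo, hi]."""
    best_x, best_y = None, np.inf
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

def pure_adaptive_search(f, lo, hi, n_improving, rng):
    """Conceptual PAS: each iterate is uniform on the current improving level
    set {x : f(x) < best_y}, realized here by rejection sampling."""
    best_x = rng.uniform(lo, hi)
    best_y = f(best_x)
    for _ in range(n_improving - 1):
        while True:
            x = rng.uniform(lo, hi)
            y = f(x)
            if y < best_y:              # keep only strictly improving points
                best_x, best_y = x, y
                break
    return best_x, best_y

# Example usage:
# rng = np.random.default_rng(0)
# pure_random_search(lambda x: np.sum(x**2), -np.ones(5), np.ones(5), 1000, rng)
```

Counting only the improving iterates of PAS (the outer loop) is what yields the linear-in-dimension result quoted in the abstract, while PRS needs exponentially many samples.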
Chapter 3. Hesitant Adaptive Search
Abstract
Hesitant adaptive search (HAS) extends the ideas of PAS by accommodating hesitation, or pausing, at objective function values for both continuous and discrete problems. At each iteration the HAS algorithm either generates a point in the improving region with a certain “bettering” probability, or it “hesitates” and stays at the current point. PAS is a special case of HAS, occurring when the bettering probability is equal to one. The HAS algorithm goes a step further than PAS in modeling stochastic algorithms, because it allows hesitation at each iterate, which may represent rejecting a sampled point. The bettering probability depends on the objective function value, so it can capture the realistic effect that improving points become harder to find as the objective function value gets smaller.
Zelda B. Zabinsky
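A minimal sketch of the HAS iteration follows; `bettering_prob` and `sample_improving` are hypothetical problem-specific inputs, not objects defined in the book.

```python
def hesitant_adaptive_search(f, x0, bettering_prob, sample_improving, n_iters, rng):
    """HAS sketch: with probability bettering_prob(y), draw a strictly
    improving point; otherwise hesitate at the current point.  Choosing
    bettering_prob(y) == 1 for all y recovers PAS."""
    x, y = x0, f(x0)
    for _ in range(n_iters):
        if rng.random() < bettering_prob(y):
            x = sample_improving(x, y, rng)   # a point with f(x_new) < y
            y = f(x)
        # else: hesitate -- the iterate is unchanged this step
    return x, y
```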
Chapter 4. Annealing Adaptive Search
Abstract
While pure adaptive search and hesitant adaptive search use an underlying sampling distribution that is restricted to sampling from improving sets, annealing adaptive search (AAS) always samples from the original feasible region but modifies the sampling distribution so that the probability of sampling in the improving level set is high. The record values of AAS were called adaptive search and analyzed by Romeijn and Smith [139]; they were shown to stochastically dominate PAS. Hence the number of improving points of AAS inherits the linear complexity of PAS. The number of sample points (including non-improving points) of AAS can be analyzed using a relationship to HAS. The additional samples needed before achieving an improving point may be interpreted as hesitating, and the link to HAS provides a bound on the expected number of sample points of AAS to achieve a specified level of accuracy. The name annealing adaptive search (suggested in [180]) is used because the algorithm samples according to a Boltzmann distribution where the temperature parameter is updated at each improving point. The algorithm is theoretical in the sense that it assumes points have an exact Boltzmann distribution, but the purpose of the analysis is to gain understanding of simulated annealing and other stochastic algorithms.
Zelda B. Zabinsky
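The AAS loop can be sketched on a finite candidate set standing in for the feasible region; exact Boltzmann sampling over a continuous region is precisely the theoretical assumption the chapter makes. In the sketch below, `points` and the NumPy array `f_vals` describe the candidate set, and `temp_of` is a hypothetical cooling rule mapping the current record value to a temperature; all names are illustrative assumptions.

```python
import numpy as np

def annealing_adaptive_search(points, f_vals, temp_of, n_samples, rng):
    """AAS sketch: every candidate is Boltzmann-distributed over the *whole*
    set, with probability proportional to exp(-f/T); the temperature is
    lowered each time a record (improving) value is found."""
    i = rng.integers(len(f_vals))           # initial point: uniform over the set
    best_i, best_y = i, f_vals[i]
    temperature = temp_of(best_y)
    for _ in range(n_samples - 1):
        w = np.exp(-(f_vals - f_vals.min()) / temperature)  # stabilized exp(-f/T)
        i = rng.choice(len(f_vals), p=w / w.sum())
        if f_vals[i] < best_y:              # record value: cool the distribution
            best_i, best_y = i, f_vals[i]
            temperature = temp_of(best_y)
    return points[best_i], best_y
```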
Chapter 5. Backtracking Adaptive Search
Abstract
Backtracking adaptive search is a generalization of hesitant adaptive search that not only allows hesitation, but allows the algorithm to backtrack by accepting points with worse objective function values. Backtracking adaptive search is similar to AAS because it allows sampling from the full set S. It is a generalization of PAS and HAS, which both restrict the sampled points to be in the improving region. Backtracking adaptive search differs from AAS in the specific sampling distribution used. The sampling distribution of AAS is a Boltzmann distribution with a temperature parameter that depends on the best objective function value sampled thus far. The sampling distribution of backtracking adaptive search depends on the current objective function value, instead of the best objective function value. It modifies the underlying sampling distribution based on this current value. Backtracking adaptive search is a step toward more realistic algorithms, like simulated annealing, because it allows the sampling distribution to change as the algorithm accepts improving and non-improving points.
Zelda B. Zabinsky
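The value process of backtracking adaptive search can be sketched as a Markov chain on objective function values. The transition functions below are hypothetical placeholders; the point of the sketch is that the move distribution is parameterized by the current value y, not the record value.

```python
def backtracking_adaptive_search(y0, p_improve, p_hesitate,
                                 sample_better, sample_worse, n_iters, rng):
    """BAS value-process sketch: from the current value y the chain improves
    with probability p_improve(y), hesitates with probability p_hesitate(y),
    and otherwise backtracks by accepting a strictly worse value."""
    y, best = y0, y0
    for _ in range(n_iters):
        u = rng.random()
        if u < p_improve(y):
            y = sample_better(y, rng)        # strictly improving value
        elif u < p_improve(y) + p_hesitate(y):
            pass                             # hesitate: stay at the current value
        else:
            y = sample_worse(y, rng)         # backtrack: accept a worse value
        best = min(best, y)
    return best
```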
Chapter 6. Hit-and-Run Based Algorithms
Abstract
The theoretical performance of stochastic adaptive search methods, as analyzed in previous chapters, is governed by the underlying sampling distribution of the global optimization algorithm. In particular, if one could sample uniformly in the improving level set of an optimization problem, as in pure adaptive search, then the expected number of iterations would be linear in dimension. The analyses of hesitant adaptive search and annealing adaptive search suggest that nonuniform sampling distributions may also provide polynomial performance as long as the sampling distribution has a high probability of generating an improving point. This chapter presents an approach to create a practical approximation of pure adaptive search by embedding a version of Hit-and-Run within the context of a simulated annealing algorithm. The ultimate goal is to develop an algorithm with polynomial complexity in the expected number of sample points.
Zelda B. Zabinsky
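Here is a sketch of one Hit-and-Run step on a box-shaped feasible region, followed by the Improving Hit-and-Run acceptance rule; a simulated annealing embedding in the spirit of this chapter would replace the strict-improvement test with a temperature-dependent Metropolis acceptance. The box geometry and function names are illustrative assumptions, and x is assumed to lie in the interior of the box.

```python
import numpy as np

def hit_and_run_step(x, lo, hi, rng):
    """One Hit-and-Run step: choose a uniform random direction, then a uniform
    point on the chord of the box [lo, hi] through x along that direction."""
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)                 # uniform direction on the sphere
    with np.errstate(divide="ignore"):     # zero components give +/- inf bounds
        t1 = (lo - x) / d
        t2 = (hi - x) / d
    t_min = np.max(np.minimum(t1, t2))     # chord endpoints along the line
    t_max = np.min(np.maximum(t1, t2))
    return x + rng.uniform(t_min, t_max) * d

def improving_hit_and_run(f, lo, hi, x0, n_iters, rng):
    """Improving Hit-and-Run sketch: generate Hit-and-Run candidates and keep
    only those that strictly improve the objective."""
    x, y = x0, f(x0)
    for _ in range(n_iters):
        cand = hit_and_run_step(x, lo, hi, rng)
        if f(cand) < y:
            x, y = cand, f(cand)
    return x, y
```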
Chapter 7. Engineering Design Applications
Abstract
Applications of global optimization problems often arise when optimizing complex systems. Examples of such global optimization problems appear in engineering design, where design problems are often highly nonlinear. The feasible region may be nonconvex and disconnected, and the functions may be nonconvex, nondifferentiable, and even discontinuous. In addition, the decision variables may include both discrete and continuous variables. Many engineers report success applying simulated annealing, genetic algorithms, evolutionary programming, and hybrid methods, as well as variations of Improving Hit-and-Run and Hide-and-Seek, to optimize complex engineering design problems.
Zelda B. Zabinsky
Backmatter
Metadata
Title
Stochastic Adaptive Search for Global Optimization
Author
Zelda B. Zabinsky
Copyright year
2003
Publisher
Springer US
Electronic ISBN
978-1-4419-9182-9
Print ISBN
978-1-4613-4826-9
DOI
https://doi.org/10.1007/978-1-4419-9182-9