
Derivative-Free and Blackbox Optimization

  • 2017
  • Book

About this book

This book is designed as a textbook, suitable for self-learning or for teaching an upper-year university course on derivative-free and blackbox optimization.

The book is split into 5 parts and is designed to be modular; any individual part depends only on the material in Part I. Part I discusses what is meant by Derivative-Free and Blackbox Optimization and provides background material and early basics, while Part II focuses on heuristic methods (Genetic Algorithms and Nelder-Mead). Part III presents direct search methods (Generalized Pattern Search and Mesh Adaptive Direct Search), and Part IV focuses on model-based methods (Simplex Gradient and Trust Region). Part V discusses dealing with constraints, using surrogates, and bi-objective optimization.

End-of-chapter exercises are included throughout, along with 15 end-of-chapter projects and over 40 figures. Benchmarking techniques are also presented in the appendix.

Table of contents

Frontmatter

Introduction and Background Material

Frontmatter
Chapter 1. Introduction: Tools and Challenges in Derivative-Free and Blackbox Optimization
Abstract
In this introductory chapter, we present a high-level description of optimization, blackbox optimization, and derivative-free optimization. We introduce some basic optimization notation used throughout this book, and some of the standard classifications of optimization problems. We end with three examples where blackbox optimization problems have arisen in practice. The first of these examples will be used throughout this book as a case study for how various optimization algorithms behave.
Charles Audet, Warren Hare
Chapter 2. Mathematical Background
Abstract
For a full appreciation of the material in this book, it is assumed that the reader has followed courses in Multivariate Calculus and Linear Algebra. This chapter contains definitions, notation, and results that, although fairly common in mathematics literature, are not necessarily covered in standard multivariate calculus and linear algebra courses. Not all of this material will be used immediately and not all of the material is required for every chapter; however, all of this material shows up repeatedly throughout this book. We begin with an important comment on notation.
Charles Audet, Warren Hare
Chapter 3. The Beginnings of DFO Algorithms
Abstract
In this chapter, we explore some naive approaches to solving BBO problems, and explain why they are not acceptable. This will lead to a better understanding of why DFO is preferred for real applications and provide some foundational material that is used throughout this book.
Charles Audet, Warren Hare

Popular Heuristic Methods

Frontmatter
Chapter 4. Genetic Algorithms
Abstract
Optimization algorithms for blackbox functions can be broadly split into two categories: heuristic and non-heuristic. A heuristic is any approach that, while supported by some argument of why it should succeed, does not include a guarantee of success. In the framework of blackbox optimization, we take this statement to mean that the algorithm does not employ a mathematically proven stopping criterion. Hence, derivative-free optimization is distinguished from heuristic blackbox optimization by the presence of a stopping test that provides some assurance of optimality.
Charles Audet, Warren Hare
Chapter 5. Nelder-Mead
Abstract
In 1965, John Nelder and Roger Mead published a short (6-page) note on a new method for unconstrained minimisation. The resulting algorithm has become one of the most widely applied and researched optimization algorithms in the world. Originally, Nelder and Mead titled their method the “Simplex Method”, as the method hinged around using function evaluations at the vertices of a simplex to seek a minimiser. Of course, in the world of optimization, the name “Simplex Method” was already in use – referring to Dantzig’s algorithm for solving Linear Programs. Thus, in order to avoid confusion, the method became known as the Nelder-Mead (NM) method.
Charles Audet, Warren Hare
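As an illustrative usage sketch (not drawn from the book), the Nelder-Mead method described above is available in SciPy and can be driven with nothing but function evaluations; the test function below is an arbitrary example.

```python
# Minimal sketch: running SciPy's Nelder-Mead implementation on a simple
# two-dimensional test function. Only function values are used; no derivatives.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Arbitrary smooth test function with minimiser at (1, -0.5).
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

result = minimize(f, x0=np.array([5.0, 5.0]), method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8})
print(result.x, result.fun)
```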

Direct Search Methods

Frontmatter
Chapter 6. Positive Bases and Nonsmooth Optimization
Abstract
Chapter 3 introduced a first practical DFO algorithm for unconstrained optimization, the coordinate search (CS) algorithm. While it was proven to achieve first-order convergence under some circumstances (see Theorem 3.4), it was also noted that the algorithm can fail on very simple nondifferentiable convex functions (see Example 3.3). The CS algorithm is an example of the subclass of DFO methods called direct search methods. Direct search methods work from an incumbent solution and examine a collection of trial points. If improvement is found, the incumbent solution is updated; if no improvement is found, a step size parameter is decreased and a new collection of trial points is examined.
Charles Audet, Warren Hare
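The direct search template described in this abstract can be sketched in a few lines; the code below is a generic coordinate search illustration under the success/failure rule above, not the book's exact pseudocode.

```python
# Sketch of coordinate search as a direct search method: poll the positive and
# negative coordinate directions around the incumbent; on success update the
# incumbent, on failure halve the step size.
import numpy as np

def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(x.size):
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = f(trial)
                if f_trial < fx:        # improvement: update the incumbent
                    x, fx, improved = trial, f_trial, True
        if not improved:                # no improvement: decrease the step size
            step *= 0.5
    return x, fx

# Example on a smooth convex test function.
x_best, f_best = coordinate_search(lambda z: (z[0] - 1) ** 2 + (z[1] + 2) ** 2,
                                   [0.0, 0.0])
print(x_best, f_best)
```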
Chapter 7. Generalised Pattern Search
Abstract
As mentioned in Chapter 6, the generalised pattern search (GPS) algorithm is an advancement on the CS algorithm first introduced in Chapter 3. The GPS algorithm follows the same basic logic as the CS algorithm, namely, it iteratively searches a collection of points seeking improvement over the incumbent solution. In order to increase the flexibility in how those points are selected, the GPS algorithm replaces the coordinate directions with a more general collection of vectors.
Charles Audet, Warren Hare
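To illustrate only the direction-set generalisation (a hedged sketch; the book's GPS algorithm additionally restricts trial points to a mesh), the coordinate directions in the loop above can be replaced by any positive spanning set, such as the minimal positive basis with n + 1 directions.

```python
# Sketch of a GPS-style poll step over an arbitrary positive spanning set D.
# Here D is the minimal positive basis {e_1, ..., e_n, -(e_1 + ... + e_n)};
# the success/failure step-size update is the same as in the sketch above.
import numpy as np

def minimal_positive_basis(n):
    return np.vstack([np.eye(n), -np.ones((1, n))])

def poll(f, x, fx, step, directions):
    # Evaluate f at x + step * d for each direction d; report an improving point if any.
    for d in directions:
        trial = x + step * d
        f_trial = f(trial)
        if f_trial < fx:
            return trial, f_trial, True
    return x, fx, False

# Usage inside the same shrink-on-failure loop as before:
#   x, fx, improved = poll(f, x, fx, step, minimal_positive_basis(x.size))
```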
Chapter 8. Mesh Adaptive Direct Search
Abstract
Chapter 3 proposed the coordinate search (CS) algorithm for unconstrained optimization. The algorithm works by exploring locally around the incumbent solution along the positive and negative coordinate directions. In Chapter 7, this algorithm was expanded to allow a broader use of search directions, resulting in the generalised pattern search (GPS) algorithm. However, the convergence analysis of the GPS algorithm falls a little short of what we would like. First, the main result states that if the objective function f is locally Lipschitz, then the generalised directional derivatives are nonnegative only for a finite set of directions. Second, and more importantly, the analysis only examined unconstrained optimization problems. In practice, there are very few problems in which the variables are free to take any values.
Charles Audet, Warren Hare

Model-Based Methods

Frontmatter
Chapter 9. Building Linear and Quadratic Models
Abstract
Throughout this book, and in all areas of optimization, it should always be remembered that whenever derivatives are available, they should be used. Derivatives, or more generally gradients, provide powerful information about a function. In unconstrained optimization, descent directions and first-order optimality hinge on these basic objects of multivariate calculus. Hessian matrices provide extremely powerful information regarding the local curvature of a function, and can be employed to build quadratically convergent algorithms.
Charles Audet, Warren Hare
Chapter 10. Model-Based Descent
Abstract
Model-based methods in DFO proceed from the idea that if it is possible to build a “good” model of the true objective function, then information from the model can be used to guide the optimization. In Chapter 9, we studied several methods for constructing model functions using only objective function evaluations. We also defined the term fully linear to mean that the optimiser has access to an accuracy parameter \(\Delta \) that can be used to drive the error in the model to 0 in a predictable way. We now turn our attention to how to employ model functions in unconstrained DFO algorithms.
Charles Audet, Warren Hare
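As a rough illustration of building a model from function evaluations alone (a generic least-squares linear fit over points sampled within radius \(\Delta \); the book's constructions and fully linear error bounds are more careful), one can recover a gradient estimate as follows.

```python
# Sketch: fit a linear model m(y) = alpha0 + g.(y - x) to function values
# sampled near x (within a radius delta) and read off g as a gradient estimate.
import numpy as np

def linear_model_gradient(f, x, delta, seed=0):
    rng = np.random.default_rng(seed)
    n = x.size
    # n + 1 random sample points in a box of radius delta around x, plus x itself.
    pts = np.vstack([x, x + delta * rng.uniform(-1.0, 1.0, size=(n + 1, n))])
    fvals = np.array([f(p) for p in pts])
    # Least-squares fit of [alpha0, g] against rows [1, p - x].
    A = np.hstack([np.ones((pts.shape[0], 1)), pts - x])
    coef, *_ = np.linalg.lstsq(A, fvals, rcond=None)
    return coef[1:]                      # the model gradient g

x = np.array([1.0, -2.0])
g = linear_model_gradient(lambda z: z[0] ** 2 + 3.0 * z[1] ** 2, x, delta=1e-3)
print(g)                                 # approximately [2, -12] for this test function
```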
Chapter 11. Model-Based Trust Region
Abstract
While the line-search-based method of Chapter 10 is quick and easy to understand, it does not exploit the full power of a model function. In essence, the model-based descent algorithm for unconstrained optimization only uses gradient approximations to confirm descent directions. This can be seen in the convergence analysis of Chapter 10, where only controllably accurate gradient approximations are used. In particular, convergence does not even require that the function values \(f(x^k)\) be estimated by a model.
Charles Audet, Warren Hare
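One way to see what "exploiting the full model" means is the classical Cauchy-point step: minimise the quadratic model along steepest descent, but cap the step at the trust-region radius. The sketch below is that textbook-generic step, not the book's algorithm.

```python
# Sketch of a Cauchy-point trust-region step for a quadratic model
# m(s) = f0 + g.s + 0.5 * s.H.s, restricted to ||s|| <= delta.
import numpy as np

def cauchy_step(g, H, delta):
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)          # model is stationary; no step
    gHg = g @ H @ g
    if gHg > 0:
        # Interior minimiser along -g, capped at the trust-region boundary.
        t = min(gnorm ** 2 / gHg, delta / gnorm)
    else:
        # Non-convex along -g: go to the boundary.
        t = delta / gnorm
    return -t * g
```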

Extensions and Refinements

Frontmatter
Chapter 12. Variables and Constraints
Abstract
Throughout this book, we have considered the general problem of minimising a multivariate objective function f over a constraint set \(\Omega \subseteq \mathbb{R}^{n}\). Let us now be more specific about the nature of the variables and of the constraint set.
Charles Audet, Warren Hare
Chapter 13. Optimization Using Surrogates and Models
Abstract
Section 1.4 enumerated some common features of target applications of BBO. A recurring difficulty in such applications originates from the fact that the evaluation of the objective and constraint functions is computationally expensive. In some situations, a surrogate optimization problem is available. That is, a problem that is considerably cheaper to evaluate and provides some insight on the true optimization problem. This chapter discusses ways of exploiting a surrogate problem as a copilot for an optimization algorithm.
Charles Audet, Warren Hare
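One simple way a cheap surrogate can act as a copilot (a hedged sketch of the general idea, not the book's surrogate framework) is to rank candidate points with the surrogate and spend expensive true evaluations only on the most promising ones.

```python
# Sketch: rank candidate points with a cheap surrogate, then evaluate the
# expensive true objective only at the k most promising candidates.
import numpy as np

def surrogate_filtered_poll(f_true, f_surrogate, candidates, k=2):
    candidates = np.asarray(candidates, dtype=float)
    order = np.argsort([f_surrogate(c) for c in candidates])
    best_point, best_value = None, np.inf
    for idx in order[:k]:                # only k expensive evaluations
        value = f_true(candidates[idx])
        if value < best_value:
            best_point, best_value = candidates[idx], value
    return best_point, best_value
```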
Chapter 14. Biobjective Optimization
Abstract
There are situations in which the optimization problem is driven by more than one objective function. Typically, these objectives are conflicting; for example, one may wish to maximise the solidity of a structure, while minimising its weight. In such a situation, one desires to take into account the relative tradeoffs between pairs of solutions.
Charles Audet, Warren Hare
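The pairwise tradeoff comparison mentioned above is usually formalised through Pareto dominance; a minimal sketch, assuming both objectives are to be minimised:

```python
# Pareto dominance for two (or more) objectives, all to be minimised:
# a dominates b if a is no worse in every objective and strictly better in at least one.
def dominates(fa, fb):
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

# Example with (weight, -solidity) pairs for candidate structures.
print(dominates((2.0, 5.0), (3.0, 6.0)))   # True: better in both objectives
print(dominates((2.0, 7.0), (3.0, 6.0)))   # False: a genuine tradeoff
```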
Backmatter
Title
Derivative-Free and Blackbox Optimization
Authors
Charles Audet
Warren Hare
Copyright year
2017
Electronic ISBN
978-3-319-68913-5
Print ISBN
978-3-319-68912-8
DOI
https://doi.org/10.1007/978-3-319-68913-5
