1995 | Book

Handbook of Global Optimization

Edited by: Reiner Horst, Panos M. Pardalos

Publisher: Springer US

Book series: Nonconvex Optimization and Its Applications

About this book

Global optimization is concerned with the computation and characterization of global optima of nonlinear functions. During the past three decades the field has grown at a rapid pace, and the number of publications on all aspects of global optimization has increased steadily. Many applications, as well as new theoretical, algorithmic, and computational contributions, have resulted.
The Handbook of Global Optimization is the first comprehensive book to cover recent developments in global optimization. Each contribution in the Handbook is essentially expository in nature, but scholarly in its treatment. The chapters cover optimality conditions, complexity results, concave minimization, DC programming, general quadratic programming, nonlinear complementarity, minimax problems, multiplicative programming, Lipschitz optimization, fractional programming, network problems, trajectory methods, homotopy methods, interval methods, and stochastic approaches.
The Handbook of Global Optimization is addressed to researchers in mathematical programming, as well as all scientists who use optimization methods to model and solve problems.

Table of Contents

Frontmatter
Conditions for Global Optimality
Abstract
Given an optimization problem we are interested in getting characterizations of its global solutions, i.e., necessary and sufficient conditions for a feasible point to be a global minimum (or maximum) of the objective function. We review here recent theoretical results in that direction, emphasizing those which have led to algorithms for global optimization.
J.-B. Hiriart-Urruty
Complexity Issues in Global Optimization: A Survey
Abstract
Complexity theory refers to the asymptotic analysis of problems and algorithms. How efficient is an algorithm for a particular optimization problem, as the number of variables gets large? Are there problems for which no efficient algorithm exists? These are the questions that complexity theory attempts to address. The theory originated in work by Hartmanis and Stearns (1965).
Stephen A. Vavasis
Concave Minimization: Theory, Applications and Algorithms
Abstract
The purpose of this chapter is to present the essential elements of the theory, applications, and solution algorithms of concave minimization. Concave minimization problems seek to globally minimize real-valued concave functions over closed convex sets. As in other global optimization problems, concave minimization problems generally have many local optima that are not global. Concave minimization problems are NP-hard; even seemingly simple cases can possess an exponential number of local minima. In spite of these difficulties, however, concave minimization problems are more tractable than general global optimization problems, because concave functions and concave minimization problems possess special mathematical properties. Concave minimization also has a surprisingly diverse range of direct and indirect applications. These special properties and diverse applications have motivated the construction of a rich and varied set of solution approaches. Three fundamental classes of approaches can be distinguished: enumerative, successive approximation, and successive partitioning (branch and bound). After a brief introduction, the chapter describes some of the most important special mathematical properties of concave functions and of concave minimization problems. It then describes several of the most important categories of direct and indirect applications of concave minimization. The chapter then presents each of the three fundamental solution approaches; for each approach, the essential elements are explained in a unified framework, and most of the well-known individual concave minimization algorithms that use the approach are explained and compared.
The scope of the presentation of the solution approaches and algorithms is limited to those that use deterministic (rather than stochastic) methods. The chapter concludes with some suggested topics for further research.
Harold P. Benson
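The extreme-point property underlying these approaches admits a tiny concrete illustration. The sketch below (illustrative only, not an algorithm from the chapter) minimizes a concave function over a box by enumerating all 2^n vertices, which also hints at the source of the NP-hardness:

```python
import itertools

def minimize_concave_over_box(f, lower, upper):
    """A concave function attains its minimum over a compact convex
    set at an extreme point; for a box it therefore suffices to check
    the 2^n vertices (exponential, as the NP-hardness suggests)."""
    n = len(lower)
    best_x, best_val = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        x = [upper[i] if b else lower[i] for i, b in enumerate(bits)]
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Concave objective over the box [-1, 2]^3; every vertex is a local
# minimum candidate, and the global minimum sits at (2, 2, 2).
f = lambda x: -sum(xi * xi for xi in x)
x_star, v = minimize_concave_over_box(f, [-1.0] * 3, [2.0] * 3)
```

The enumerative, successive approximation, and branch-and-bound methods surveyed in the chapter all aim to avoid exactly this exhaustive vertex enumeration.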
D.C. Optimization: Theory, Methods and Algorithms
Abstract
Optimization problems involving d.c. functions (differences of convex functions) and d.c. sets (differences of convex sets) occur quite frequently in operations research, economics, engineering design, and other fields. We present a review of the theory, methods, and algorithms for this class of global optimization problems that have been developed in recent years.
Hoang Tuy
Quadratic Optimization
Abstract
Quadratic optimization comprises one of the most important areas of nonlinear programming. Numerous problems in real-world applications, including problems in planning and scheduling, economies of scale, engineering design, and control, are naturally expressed as quadratic problems. Moreover, the quadratic problem is known to be NP-hard, which makes this one of the most interesting and challenging classes of optimization problems. In this chapter, we review various properties of the quadratic problem and discuss different techniques for solving various classes of quadratic problems. Some of the more successful algorithms for solving the special cases of bound-constrained and large-scale quadratic problems are considered. Examples of various applications of quadratic programming are presented, together with a summary of the available computational results for the algorithms that solve the various classes of problems.
Christodoulos A. Floudas, V. Visweswaran
Complementarity Problems
Abstract
This chapter presents a comprehensive treatment of the nonlinear complementarity problem and several related mathematical programs in finite dimensions. Topics discussed include existence theory, solution methods, sensitivity and stability analysis, and applications to equilibrium modeling and engineering problems. Some future research directions are suggested and an extensive list of references is given.
Jong-Shi Pang
Minimax and Its Applications
Abstract
Consider the problem min_{x∈X} max_{i∈I} f_i(x), where X is a convex set, I is a finite set of indices, and the f_i(x) are continuous concave functions of x. In this article, we study a characterization of the points x ∈ X at which the minimax value is achieved. We also study some applications of the characterization.
Ding-Zhu Du
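A minimal numerical illustration of min_{x∈X} max_{i∈I} f_i(x) (a grid sketch, not from the chapter; the two functions and the grid are assumptions for illustration):

```python
def minimax(fs, candidates):
    """Discrete sketch of min over x of max over i of f_i(x): evaluate
    the upper envelope max_i f_i at each candidate point and keep the
    smallest value (X is replaced here by a finite grid)."""
    best_x, best_val = None, float("inf")
    for x in candidates:
        val = max(f(x) for f in fs)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Two concave (here linear) functions on X = [0, 1]; the minimax point
# is where the active functions balance: f1(x) = f2(x) at x = 0.5.
fs = [lambda x: x, lambda x: 1.0 - x]
x_star, v = minimax(fs, [i / 100 for i in range(101)])
```

The balancing of active functions at the solution is exactly the kind of characterization the chapter studies.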
Multiplicative Programming Problems
Abstract
This chapter reviews recent algorithmic developments in multiplicative programming. The multiplicative programming problem is a class of minimization problems containing a product of several convex functions either in its objective or in its constraints. It has various practical applications in such areas as microeconomics, geometric optimization, multicriteria optimization and so on. A product of convex functions is in general not (quasi)convex, and hence the problem can have multiple local minima. However, some types of multiplicative problems can be solved in a practical sense. The types to be discussed in this chapter are minimization of a product of p convex functions over a convex set, minimization of a sum of p convex multiplicative functions, and minimization of a convex function subject to a constraint on a product of p convex functions. If p is less than four or five, it is shown that parametric simplex algorithms or global optimization algorithms work very well for these problems.
Hiroshi Konno, Takahito Kuno
Lipschitz Optimization
Abstract
Lipschitz optimization solves global optimization problems in which the objective function and constraint left-hand sides may be given by oracles (or explicitly) and have a bounded slope. The problems of finding an optimal solution, an ε-optimal one, all optimal solutions, and a small-volume enclosure of all optimal solutions within hyperrectangles (possibly containing only ε-optimal points) are investigated. In the univariate case, necessary conditions for finite convergence are studied, as well as bounds on the number of iterations and convergence of ε-optimal algorithms. Methods of Piyavskii, Evtushenko, Timonov, Schoen, Galperin, Shen and Zhu, and Hansen, Jaumard and Lu are discussed and compared experimentally. The same is done in the multivariate case for the algorithms of Piyavskii; Mladineo; Jaumard, Herrmann and Ribault; Pintér; Galperin; Meewella and Mayne; Wood; and Gourdin, Hansen and Jaumard.
Constrained Lipschitz optimization is then discussed, focusing on extensions of Piyavskii's algorithm and on methods which consider all constraints simultaneously, i.e., those of Thach and Tuy, as well as the recent method of Gourdin, Jaumard and MacGibbon. Heuristic algorithms, some of which involve estimation of the Lipschitz constant, are then briefly reviewed. A representative sample of the many existing and potential applications is discussed. An evaluation of the scope, strength and limitations of Lipschitz optimization completes the paper.
Pierre Hansen, Brigitte Jaumard
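As a flavor of the univariate methods compared above, here is a minimal sketch of a Piyavskii-type sawtooth-cover algorithm (the test function and the Lipschitz overestimate are illustrative assumptions):

```python
import math

def piyavskii(f, a, b, L, eps=1e-3, max_iter=2000):
    """Sketch of Piyavskii's algorithm: the Lipschitz bound
    f(x) >= f(x_i) - L*|x - x_i| gives, on each interval [x1, x2]
    between samples, an envelope minimum of (f1 + f2)/2 - L*(x2 - x1)/2
    at x = (x1 + x2)/2 + (f1 - f2)/(2L). Sample where the envelope is
    lowest until eps-optimality is certified."""
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(max_iter):
        best_x, best_f = min(pts, key=lambda p: p[1])
        bound, x_new = min(
            (0.5 * (f1 + f2) - 0.5 * L * (x2 - x1),
             0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L))
            for (x1, f1), (x2, f2) in zip(pts, pts[1:])
        )
        if best_f - bound <= eps:      # eps-optimality certificate
            return best_x, best_f
        pts.append((x_new, f(x_new)))
        pts.sort()
    return min(pts, key=lambda p: p[1])

# Illustrative multimodal function; |f'(x)| = |cos x + 3 cos 3x| <= 4,
# so L = 4 is a valid (safe) Lipschitz constant on [0, 10].
x_star, v = piyavskii(lambda x: math.sin(x) + math.sin(3.0 * x), 0.0, 10.0, L=4.0)
```

The eps-optimality test above is the kind of finite-convergence certificate whose iteration bounds the chapter analyzes.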
Fractional Programming
Abstract
An introduction to ratio optimization problems is provided which covers various applications as well as major theoretical and algorithmic developments. In addition to an extensive treatment of single-ratio fractional programming, three types of multi-ratio fractional programs are discussed: maximization of the smallest of several ratios, maximization of a sum of ratios and multi-objective fractional programs. Earlier as well as recent developments are discussed and open problems are identified. The article concludes with a comprehensive, up-to-date bibliography in fractional programming. Well over one thousand articles have appeared in more than thirty years of increasingly intensive research in fractional programming. The bibliography includes all references from the beginning until late 1993 to the extent they are known to the author at this time.
S. Schaible
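The single-ratio case admits a classical parametric method, Dinkelbach's algorithm. The sketch below illustrates it on a hypothetical one-dimensional example (the closed-form inner maximization is specific to this example):

```python
def dinkelbach(f, g, argmax_aux, lam=0.0, tol=1e-10, iters=100):
    """Dinkelbach's scheme for the single-ratio program max f(x)/g(x)
    with g > 0: repeatedly solve x_k = argmax f(x) - lam_k*g(x) and
    set lam_{k+1} = f(x_k)/g(x_k); lam_k converges to the optimal
    ratio."""
    for _ in range(iters):
        x = argmax_aux(lam)
        new_lam = f(x) / g(x)
        if abs(new_lam - lam) < tol:
            return x, new_lam
        lam = new_lam
    return x, lam

# Example (an assumption for illustration): maximize (x+1)/(x^2+1)
# over [0, 2]; the optimum is x = sqrt(2) - 1 with ratio (1+sqrt(2))/2.
f = lambda x: x + 1.0
g = lambda x: x * x + 1.0

def argmax_aux(lam):
    # Maximize the concave auxiliary (x + 1) - lam*(x^2 + 1) on [0, 2]
    # in closed form: unconstrained maximizer 1/(2*lam), clamped.
    if lam <= 0.0:
        return 2.0                     # auxiliary is increasing on [0, 2]
    return min(2.0, max(0.0, 1.0 / (2.0 * lam)))

x_opt, ratio = dinkelbach(f, g, argmax_aux)
```

Each iteration replaces the ratio by a parametric subproblem, which is the basic device behind much of the single-ratio theory surveyed in the chapter.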
Network Problems
Abstract
Network problems arise naturally in many application areas. For the case of nonconvex network optimization, it is often necessary to exploit the network structure to develop efficient algorithms. This paper summarizes the application areas, complexity, and solution techniques for popular network problems. Network formulations for numerous application areas are provided. A summary of complexity results for concave and indefinite problems, as well as for specific network structures is discussed. Classes of algorithms that have been used to solve network problems are summarized along with any reported performance results.
G. M. Guisewite
Trajectory Methods in Global Optimization
Abstract
We review the application of trajectory methods (not including homotopy methods) to global optimization problems. The main ideas and the most successful methods are described and directions of current and future research are indicated.
Immo Diener
Homotopy Methods
Abstract
This contribution discusses globally convergent methods for the solution of systems of nonlinear equations. The methods discussed are either based on piecewise linear approximations and pivoting steps or on differential topology and path following. Homotopy methods work by first solving a simple problem and then deforming this problem into the original complicated problem. During this deformation or homotopy we follow paths from the solutions of the simple problem to the solutions of the complicated problem. Concepts needed for understanding homotopies are explained and details of algorithms are given.
W. Forster
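The deformation idea can be sketched in a few lines for a scalar equation (an illustrative convex homotopy with a plain Newton corrector; not one of the piecewise-linear or differential-topology algorithms the chapter details):

```python
def newton(g, dg, x, tol=1e-12, iters=50):
    """Plain Newton iteration for a scalar equation g(x) = 0."""
    for _ in range(iters):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def homotopy_solve(f, df, x0, steps=100):
    """Follow the convex homotopy H(x, t) = (1 - t)*(x - x0) + t*f(x)
    from the trivially solved problem at t = 0 (solution x0) to the
    original problem f(x) = 0 at t = 1, warm-starting Newton at each
    step with the previous solution on the path."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        x = newton(lambda x, t=t: (1 - t) * (x - x0) + t * f(x),
                   lambda x, t=t: (1 - t) + t * df(x),
                   x)
    return x

# Illustrative example: the real root of x^3 - 2x - 5 = 0,
# approximately 2.0945515.
root = homotopy_solve(lambda x: x ** 3 - 2 * x - 5, lambda x: 3 * x ** 2 - 2, x0=1.0)
```

The marching in t is the "deformation" of the abstract; path following with predictor-corrector steps refines this crude scheme.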
Interval Methods
Abstract
An introduction to the interval arithmetic tools and basic methods that can be used to solve global optimization problems is presented. These tools are applicable to unconstrained and constrained problems, to nonsmooth optimization, and to problems over unbounded domains. We also emphasize the role of bisection, and attempts to find the right bisection strategy when solving problems computationally: almost all interval-based global optimization algorithms use branch-and-bound principles in which the problem domain is bisected iteratively, and research on bisection strategies has made significant progress during the last decade.
Helmut Ratschek, Jon Rokne
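A minimal sketch of the branch-and-bound principle with a natural interval extension (the objective and its extension are illustrative assumptions; a real implementation would use outward rounding for rigor):

```python
def f(x):
    return x * x - 2.0 * x

def F(lo, hi):
    """Natural interval extension of f over [lo, hi]: each subterm is
    evaluated in interval arithmetic, giving a guaranteed enclosure of
    the range of f on the box."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - 2.0 * hi, sq_hi - 2.0 * lo

def interval_minimize(a, b, tol=1e-4):
    """Interval branch and bound: bisect boxes, prune any box whose
    guaranteed lower bound exceeds the best point value seen so far,
    and return an enclosure [lower, upper] of the global minimum."""
    work = [(a, b)]
    upper = f(0.5 * (a + b))          # value at a sampled point
    lower = float("inf")
    while work:
        lo, hi = work.pop()
        flo = F(lo, hi)[0]
        if flo > upper:               # box cannot contain the minimum
            continue
        mid = 0.5 * (lo + hi)
        upper = min(upper, f(mid))
        if hi - lo > tol:
            work.append((lo, mid))
            work.append((mid, hi))
        else:
            lower = min(lower, flo)   # terminal box: record its bound
    return lower, upper

lo_star, hi_star = interval_minimize(-2.0, 3.0)   # true minimum: f(1) = -1
```

The iterative bisection of the domain here is exactly the step whose strategy (which box, which direction) the chapter's discussion of bisections addresses.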
Stochastic Methods
Abstract
As no algorithm can solve a general, smooth global optimization problem with certainty in finite time, stochastic methods are of eminent importance in global optimization. In this chapter we discuss three classes of stochastic methods: two-phase methods, random search methods and random function methods, as well as applicable stopping rules.
C. Guus E. Boender, H. Edwin Romeijn
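The simplest representative of these methods, Pure Random Search, can be sketched in a few lines (the test function is an illustrative assumption):

```python
import math
import random

def pure_random_search(f, bounds, n_samples=20000, seed=0):
    """Pure Random Search: sample the feasible box uniformly and keep
    the best point seen. It converges to the global minimum only in
    probability, never with certainty in finite time, which is why
    stopping rules matter for stochastic methods."""
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Multimodal test function with global minimum 0 at the origin.
f = lambda x: x[0] ** 2 + x[1] ** 2 + 1.0 - math.cos(3 * x[0]) * math.cos(3 * x[1])
x_star, v = pure_random_search(f, [(-3.0, 3.0), (-3.0, 3.0)])
```

Two-phase methods improve on this by following each promising sample with a local search; random function methods instead model f as a realization of a stochastic process.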
Backmatter
Metadata
Title
Handbook of Global Optimization
Edited by
Reiner Horst
Panos M. Pardalos
Copyright year
1995
Publisher
Springer US
Electronic ISBN
978-1-4615-2025-2
Print ISBN
978-1-4613-5838-1
DOI
https://doi.org/10.1007/978-1-4615-2025-2