
1994 | Book

Large Scale Optimization

State of the Art

Edited by: W. W. Hager, D. W. Hearn, P. M. Pardalos

Publisher: Springer US


About this book

On February 15-17, 1993, a conference on Large Scale Optimization, hosted by the Center for Applied Optimization, was held at the University of Florida. The conference was supported by the National Science Foundation, the U.S. Army Research Office, and the University of Florida, with endorsements from SIAM, MPS, ORSA and IMACS. Forty-one invited speakers presented papers on mathematical programming and optimal control topics with an emphasis on algorithm development, real world applications and numerical results.

Participants from Canada, Japan, Sweden, The Netherlands, Germany, Belgium, Greece, and Denmark gave the meeting an important international component. Attendees also included representatives from IBM, American Airlines, US Air, United Parcel Service, AT&T Bell Labs, Thinking Machines, Army High Performance Computing Research Center, and Argonne National Laboratory. In addition, the NSF sponsored attendance of thirteen graduate students from universities in the United States and abroad.

Accurate modeling of scientific problems often leads to the formulation of large-scale optimization problems involving thousands of continuous and/or discrete variables. Large scale optimization has seen a dramatic increase in activity in the past decade. This has been a natural consequence of new algorithmic developments and of the increased power of computers. For example, decomposition ideas proposed by G. Dantzig and P. Wolfe in the 1960s are now implementable in distributed processing systems, and today many optimization codes have been implemented on parallel machines.

Table of Contents

Frontmatter
Restarting Strategies for the DQA Algorithm
Abstract
A scenario-based decomposition algorithm is proposed for large stochastic programs. The subproblem clusters consisting of separable quadratic programs are solved by means of a nonlinear interior point algorithm. Critical implementation issues are analyzed, including restarting and alternative splitting strategies. The approach is suited to a distributed multicomputer such as a network of workstations. Testing with several large LPs (117,000 constraints and 276,000 variables) shows the efficiency of the concepts.
Adam J. Berger, John M. Mulvey, Andrzej Ruszczyński
Mathematical Equivalence of the Auction Algorithm for Assignment and the ∊-Relaxation (Preflow-Push) Method for Min Cost Flow
Abstract
It is well known that the linear minimum cost flow network problem can be converted to an equivalent assignment problem. Here we give a simple proof that when the auction algorithm is applied to this equivalent problem, one obtains the generic form of the ∊-relaxation method, and as a special case, the Goldberg-Tarjan preflow-push max-flow algorithm. The reverse equivalence is already known, that is, if we view the assignment problem as a special case of a minimum cost flow problem and we apply the ∊-relaxation method with some special rules for choosing the node to iterate on, we obtain the auction algorithm. Thus, the two methods are mathematically equivalent.
Dimitri P. Bertsekas
Preliminary Computational Experience with Modified Log-Barrier Functions for Large-Scale Nonlinear Programming
Abstract
The paper considers Polyak’s modified logarithmic barrier function for nonlinear programming. Comparisons are made to the classic logarithmic barrier function, and the advantages of the modified log-barrier method, including starting from nonfeasible starting points, inclusion of equality constraints, and better conditioning are discussed. Extensive computational results are included which demonstrate that the method is clearly superior to the classic method and holds definite promise as a viable method for large-scale nonlinear programming.
Marc G. Breitfeld, David F. Shanno
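To make the comparison concrete, here is a minimal sketch (not taken from the chapter) of the two barrier functions for constraints c_i(x) ≥ 0: the classic barrier f(x) − μ Σ log c_i(x) versus Polyak's modified barrier f(x) − μ Σ λ_i log(1 + c_i(x)/μ). The toy problem and multiplier value are illustrative assumptions.

```python
import math

def classic_barrier(f, cs, mu):
    # Classic log-barrier: B(x) = f(x) - mu * sum(log c_i(x)); needs c_i(x) > 0.
    def B(x):
        return f(x) - mu * sum(math.log(c(x)) for c in cs)
    return B

def modified_barrier(f, cs, lams, mu):
    # Polyak's modified barrier: M(x) = f(x) - mu * sum(lam_i * log(1 + c_i(x)/mu)).
    # Defined whenever c_i(x) > -mu, so slightly infeasible starts are allowed.
    def M(x):
        return f(x) - mu * sum(l * math.log(1.0 + c(x) / mu)
                               for l, c in zip(lams, cs))
    return M

# Toy problem (illustrative): minimize f(x) = x^2 subject to x >= 1.
f = lambda x: x * x
cs = [lambda x: x - 1.0]
B = classic_barrier(f, cs, mu=0.1)
M = modified_barrier(f, cs, lams=[2.0], mu=0.1)  # lam = 2 is the multiplier at x* = 1

print(M(0.95))          # finite even though x = 0.95 is infeasible
print(B(1.5), M(1.5))
print(M(1.0))           # with the exact multiplier, M(x*) = f(x*) = 1.0
```

Note that the modified barrier is defined on the enlarged region c_i(x) > −μ, which is what permits the nonfeasible starting points mentioned above, and with exact multipliers it coincides with f at the solution.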
A New Stochastic/Perturbation Method for Large-Scale Global Optimization and its Application to Water Cluster Problems
Abstract
We describe a class of new global optimization methods that has been designed to solve large, partially separable problems. The methods have been motivated by the consideration of problems from molecular chemistry, but should be applicable to other partially separable problems as well. They combine a first, stochastic phase that identifies an initial set of local minimizers, with a second, more deterministic phase that moves from low to even lower local minimizers and that accounts for most of the computational cost of the methods. Both phases make critical use of portions that vary only a small subset of the variables at once. Another important new feature of the methods is an expansion step that makes it easier to find new and structurally different local minimizers from current low minimizers. We give the results of the initial application of these methods to the problem of finding the minimum energy configuration of clusters of water molecules with up to 21 molecules (189 variables). These runs have led to improved minimizers, and interesting structures from the chemistry perspective.
Richard H. Byrd, Thomas Derby, Elizabeth Eskow, Klaas P. B. Oldenkamp, Robert B. Schnabel
Improving the Decomposition of Partially Separable Functions in the Context of Large-Scale Optimization: a First Approach
Abstract
This paper examines the question of modifying the decomposition of a partially separable function in order to improve computational efficiency of large-scale minimization algorithms using a conjugate-gradient inner iteration. The context and motivation are given and the application of a simple strategy discussed on examples extracted from the CUTE test problem collection.
Andrew R. Conn, Nick Gould, Philippe L. Toint
Gradient-Related Constrained Minimization Algorithms in Function Spaces: Convergence Properties and Computational Implications
Abstract
Good finite-dimensional approximations to projected gradient and conditional gradient iterates in feasible sets of L^p functions u(·): [0,1] → U are relatively easy to compute when U is a simple closed convex set in R^m (e.g., an orthant, box, simplex, ball, etc.). Much is also known about the convergence behavior of the underlying infinite-dimensional iterative processes in these circumstances. Several novel features of this behavior are examined here, and the associated computational implications are explored with analytical tools and numerical experiments. The conclusions reached are immediately applicable to constrained input continuous-time optimal control problems.
Joseph C. Dunn
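As a finite-dimensional illustration of the projected gradient iterates discussed above, the following sketch projects onto a box, one of the simple sets U mentioned (orthant, box, simplex, ball); the step size and test problem are assumptions, not from the chapter.

```python
def project_box(v, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n, computed componentwise.
    return [min(max(vi, lo), hi) for vi in v]

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=500):
    # Fixed-step projected gradient iteration x_{k+1} = P(x_k - step * grad(x_k)).
    x = list(x0)
    for _ in range(iters):
        x = project_box([xi - step * gi for xi, gi in zip(x, grad(x))], lo, hi)
    return x

# Illustrative problem: minimize ||x - c||^2 over the box [0, 1]^3.
# The solution is simply c clipped to the box: (0, 0.5, 1).
c = [-1.0, 0.5, 2.0]
grad = lambda x: [2.0 * (xi - ci) for xi, ci in zip(x, c)]
sol = projected_gradient(grad, [0.0, 0.0, 0.0], 0.0, 1.0)
print(sol)
```

The same two-step pattern (gradient step, then projection) is what the chapter's function-space iterates discretize.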
Some Reformulations and Applications of the Alternating Direction Method of Multipliers
Abstract
We consider the alternating direction method of multipliers decomposition algorithm for convex programming, as recently generalized by Eckstein and Bertsekas. We give some reformulations of the algorithm, and discuss several alternative means for deriving them. We then apply these reformulations to a number of optimization problems, such as minimum convex-cost transportation and multicommodity flow problems. The convex transportation version is closely related to a linear-cost transportation algorithm proposed earlier by Bertsekas and Tsitsiklis. Finally, we construct a simple data-parallel implementation of the convex-cost transportation algorithm for the CM-5 family of parallel computers, and give computational results. The method appears to converge quite quickly on sparse quadratic-cost transportation problems, even if they are very large; for example, we solve problems with over a million arcs in roughly 100 iterations, which equates to about 30 seconds of run time on a system with 256 processing nodes. Substantially better timings can probably be achieved with a more careful implementation.
Jonathan Eckstein, Masao Fukushima
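For readers unfamiliar with the alternating direction method of multipliers, a minimal scaled-form sketch on a one-dimensional lasso-type problem (an illustrative assumption, far simpler than the transportation problems treated in the chapter) shows the three alternating updates:

```python
def admm_lasso_1d(a, lam, rho=1.0, iters=100):
    # Scaled-form ADMM for: minimize 0.5*(x - a)^2 + lam*|z|  subject to x = z.
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x 0.5*(x - a)^2 + (rho/2)*(x - z + u)^2, in closed form.
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: soft-thresholding of x + u at lam/rho.
        v = x + u
        z = max(v - lam / rho, 0.0) if v >= 0.0 else min(v + lam / rho, 0.0)
        # u-update: scaled dual ascent on the residual x - z.
        u += x - z
    return z

# The 1-D problem has the closed-form answer sign(a) * max(|a| - lam, 0).
print(admm_lasso_1d(3.0, 1.0))   # converges to 2.0
```

Each subproblem here has a closed form; in the chapter's applications the x- and z-minimizations decompose over arcs and nodes, which is what makes the data-parallel CM-5 implementation natural.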
Experience with a Primal Presolve Algorithm
Abstract
Sometimes an optimization problem can be simplified to a form that is faster to solve. Indeed, sometimes it is convenient to state a problem in a way that admits some obvious simplifications, such as eliminating fixed variables and removing constraints that become redundant after simple bounds on the variables have been updated appropriately. Because of this convenience, the AMPL modeling system includes a “presolver” that attempts to simplify a problem before passing it to a solver. The current AMPL presolver carries out all the primal simplifications described by Brearley et al. in 1975. This paper describes AMPL’s presolver, discusses reconstruction of dual values for eliminated constraints, and presents some computational results.
Robert Fourer, David M. Gay
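A toy version of the kind of primal simplification described above, eliminating fixed variables and dropping bound-redundant constraints; this is a hypothetical, greatly simplified sketch, not AMPL's actual presolver:

```python
def presolve(c, cons, lb, ub):
    # Tiny primal-presolve sketch for: min c.x  s.t.  a.x <= b for each (a, b) in cons.
    # Two classic simplifications: substitute out fixed variables (lb == ub) and
    # drop constraints made redundant by the variable bounds.
    n = len(c)
    fixed = {j: lb[j] for j in range(n) if lb[j] == ub[j]}
    kept = []
    for a, b in cons:
        # Fold fixed variables into the right-hand side.
        b2 = b - sum(a[j] * v for j, v in fixed.items())
        a2 = {j: a[j] for j in range(n) if j not in fixed and a[j] != 0.0}
        # Largest possible activity of the remaining terms given the bounds.
        hi = sum(aj * (ub[j] if aj > 0 else lb[j]) for j, aj in a2.items())
        if hi <= b2:
            continue        # can never be violated: redundant, drop it
        kept.append((a2, b2))
    return fixed, kept

# x0 is fixed at 2; x1 ranges over [0, 1].
fixed, kept = presolve(
    c=[1.0, 1.0],
    cons=[([1.0, 1.0], 10.0),    # 2 + x1 <= 10 holds for every x1 in [0, 1]: dropped
          ([0.0, 1.0], 0.5)],    # x1 <= 0.5: binding, kept
    lb=[2.0, 0.0], ub=[2.0, 1.0])
print(fixed, kept)
```

A real presolver iterates such reductions to a fixed point and, as the paper discusses, must also reconstruct dual values for the constraints it eliminates.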
A Trust Region Method for Constrained Nonsmooth Equations
Abstract
In this paper, we develop and analyze the convergence of a fairly general trust region method for solving a system of nonsmooth equations subject to some linear constraints. The method is based on the existence of an iteration function for the nonsmooth equations and involves the solution of a sequence of subproblems defined by this function. A particular realization of the method leads to an arbitrary-norm trust region method. Applications of the latter method to the nonlinear complementarity and related problems are discussed. Sequential convergence of the method and its rate of convergence are established under certain regularity conditions similar to those used in the NE/SQP method [14] and its generalization [16]. Some computational results are reported.
Steven A. Gabriel, Jong-Shi Pang
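The general trust region mechanics the chapter builds on can be sketched for a smooth scalar problem (the chapter itself treats nonsmooth equations with linear constraints; this one-dimensional smooth version is purely illustrative):

```python
def trust_region_1d(f, g, h, x0, delta=1.0, iters=50):
    # Generic trust-region loop, scalar case: minimize the quadratic model
    # m(s) = f(x) + g(x)*s + 0.5*h(x)*s^2 over |s| <= delta, then accept or
    # reject the step and resize delta by the ratio of actual to predicted
    # reduction. Illustrates the framework, not the paper's method.
    x = x0
    for _ in range(iters):
        gx, hx = g(x), h(x)
        s = -gx / hx if hx > 0 else -delta * (1.0 if gx > 0 else -1.0)
        s = max(-delta, min(delta, s))       # clip model minimizer to the region
        pred = -(gx * s + 0.5 * hx * s * s)  # predicted reduction
        actual = f(x) - f(x + s)
        rho = actual / pred if pred > 0 else 0.0
        if rho > 0.75 and abs(s) == delta:
            delta *= 2.0                     # model trustworthy: expand region
        elif rho < 0.25:
            delta *= 0.25                    # model poor: shrink region
        if rho > 0.0:
            x = x + s                        # accept the step
    return x

# Minimize f(x) = x^4 - 2x^2 (local minima at x = +/-1) starting from x = 2.
xstar = trust_region_1d(lambda x: x**4 - 2.0*x**2,
                        lambda x: 4.0*x**3 - 4.0*x,
                        lambda x: 12.0*x**2 - 4.0, 2.0)
print(xstar)
```

The paper replaces the smooth quadratic model with subproblems defined by an iteration function for the nonsmooth system, but the accept/shrink/expand logic is the same.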
On the Complexity of a Column Generation Algorithm for Convex or Quasiconvex Feasibility Problems
Abstract
We analyze the convergence and the complexity of a potential reduction column generation algorithm for solving general convex or quasiconvex feasibility problems defined by a separation oracle. The oracle is called at the analytic center of the set given by the intersection of the linear inequalities which are the previous answers of the oracle. We show that the algorithm converges in finite time and is in fact a fully polynomial approximation algorithm, provided that the feasible region has a nonempty interior. This result is based on the works of Ye [22] and Nesterov [16].
Jean-Louis Goffin, Zhi-Quan Luo, Yinyu Ye
Identification of the Support of Nonsmoothness
Abstract
We consider a class of nonlinear equations in function spaces for which the nonlinearity can be split into smooth and nonsmooth parts. Such problems arise in optimal control problems for parabolic partial differential equations with bound constraints on the control. In many situations the nonsmooth part acts on functions having support in a set of small measure. If this small set can be well identified one can exploit this structure and apply Newton-like methods away from the small set.
In this paper we extend and simplify earlier work on optimal control problems by developing a simple approach for constructing the splitting into smooth and nonsmooth parts. The paper also extends earlier work on generalizing the classical projected Newton method to infinite-dimensional optimal control problems to the more general setting of splitting algorithms for nonlinear equations.
C. T. Kelley
On Very Large Scale Assignment Problems
Abstract
In this paper we present computational testing results on very large scale random assignment problems. We consider a fully dense assignment problem with 2n nodes. Some conjectured or derived properties regarding fully dense assignment problems, including the convergence of the optimal objective function value and the portion of nodes assigned with their kth best arc, have been verified for networks up to n = 100,000 in size. We also demonstrate the power of our approach in solving very large scale assignment problems by solving a one million node, one trillion arc random assignment problem.
Yusin Lee, James B. Orlin
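The auction algorithm referred to in this and the preceding Bertsekas chapter can be sketched in a few lines for small dense problems; the bidding/eviction loop below is a textbook version, not the authors' large-scale implementation:

```python
def auction_assignment(benefit, eps=0.01):
    # Textbook auction algorithm for the (maximization) assignment problem:
    # each unassigned person bids for its best-value object, raising that
    # object's price by (best - second best + eps) and evicting the owner.
    n = len(benefit)
    prices = [0.0] * n
    owner = [None] * n        # owner[j]: person currently holding object j
    assigned = [None] * n     # assigned[i]: object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=lambda j: values[j])
        best = values[j_best]
        second = max((v for j, v in enumerate(values) if j != j_best), default=best)
        prices[j_best] += best - second + eps
        if owner[j_best] is not None:          # evict the previous owner
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned

# With eps < 1/n the result is optimal for integer benefits.
b = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(auction_assignment(b))   # -> [0, 1, 2]
```

Solving million-node instances, as the chapter does, additionally requires sparse data structures and ε-scaling rather than a single fixed ε.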
Numerical Solution of Parabolic State Constrained Control Problems Using SQP- and Interior-Point-Methods
Abstract
In this paper we consider the problem of firing ceramic kilns according to a given predetermined reference profile. It can be written as an optimal control problem with a nonlinear parabolic differential equation and control input through the boundary. The objective is a quadratic performance criterion for the observed temperature inside the probe and the firing curve. In addition, restrictions on the maximal temperature are imposed, which leads to state constraints. The discretization of this problem gives an optimization problem with equality and inequality constraints. The numerical solution of this problem is carried out with an SQP method where the quadratic subproblems are solved by an interior point algorithm. The goal of this paper is not to obtain theoretical results but to investigate whether these types of methods can be used in the numerical solution of parabolic control problems.
Friedemann Leibfritz, Ekkehard W. Sachs
A Global Optimization Method For Weber’s Problem With Attraction And Repulsion
Abstract
Weber’s problem involves the optimal location of a single facility on a plane such that the weighted sum of the Euclidean distances of the facility from n given points attains its global minimum. Each point can have either an attractive or a repulsive effect on the location of the facility, depending on whether the corresponding weight is positive or negative, respectively. Because attractive contributions correspond to convex functions and repulsive contributions to concave ones, the total expression for the weighted sum of Euclidean distances is nonconvex. In this paper, two global optimization algorithms are proposed, one based on a concave and one on a concave + convex lower bounding operation. Both of these algorithms utilize efficient rectangular subdivision processes. The proposed approaches attain ε-convergence to the global minimum in a finite number of iterations. Computational results are presented for problems involving as many as n = 10,000 points.
Costas D. Maranas, Christodoulos A. Floudas
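For the attraction-only special case (all weights positive) Weber's problem is convex and the classical Weiszfeld fixed-point iteration applies; the sketch below contrasts that easy case with the nonconvex mixed-sign objective the chapter addresses. The test points are illustrative.

```python
import math

def weber_objective(x, y, points, weights):
    # Weighted sum of Euclidean distances; negative weights make this nonconvex.
    return sum(w * math.hypot(x - px, y - py) for (px, py), w in zip(points, weights))

def weiszfeld(points, weights, iters=200):
    # Weiszfeld fixed-point iteration for the attraction-only case (all w > 0),
    # started from the centroid. Not applicable once repulsive weights appear.
    x = sum(px for px, _ in points) / len(points)
    y = sum(py for _, py in points) / len(points)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < 1e-12:
                return px, py          # iterate landed on a data point
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return x, y

pts = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
x, y = weiszfeld(pts, [1.0, 1.0, 1.0])
print(x, y)   # the Fermat point of the triangle: (2, 2/sqrt(3)) ~ (2, 1.1547)
```

Once any weight is negative the objective gains concave terms, local iterations like this one can stall at non-global minima, and the rectangular subdivision methods of the chapter are needed for a global guarantee.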
Large-Scale Diversity Minimization via Parallel Genetic Algorithms
Abstract
Diversity minimization is a class of large-scale combinatorial optimization problems arising from the assignment of processors to database fragments in a parallel database environment. These problems may be formulated as nonconvex assignment problems, but they are very difficult to solve via standard optimization techniques because of their large size and because they have many alternative optima. The method described here utilizes theoretical results regarding the form of optimal solutions as the basis for the development of a high-level parallel genetic algorithm. In effect, the genetic operators serve to produce good starting points and neighborhoods for exploration by a heuristic that uses knowledge of the small number of alternatives for desirable processor assignment patterns. Computational results from a parallel implementation of the algorithm on a Connection Machine CM-5 are reported, including the computation of optimal solutions to a million-variable diversity minimization problem.
Robert R. Meyer, Jonathan Yackel
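A bare-bones (serial) genetic algorithm over processor-assignment strings conveys the flavor of the approach; the load-imbalance objective and all parameters below are illustrative assumptions, not the authors' diversity measure or their CM-5 implementation:

```python
import random

def assignment_ga(fitness, n_genes, n_vals, pop_size=40, gens=60, seed=0):
    # Minimal genetic algorithm: each individual assigns one of n_vals
    # "processors" to each of n_genes "fragments"; elitist selection,
    # one-point crossover, and pointwise mutation.
    rng = random.Random(seed)
    pop = [[rng.randrange(n_vals) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                      # ascending: lower is better
        elite = pop[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_genes)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # occasional mutation
                child[rng.randrange(n_genes)] = rng.randrange(n_vals)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy objective: penalize uneven processor loads (16 fragments on 4 processors).
def imbalance(assign):
    counts = [assign.count(p) for p in range(4)]
    return max(counts) - min(counts)

best = assignment_ga(imbalance, n_genes=16, n_vals=4)
```

In the chapter, the genetic operators instead generate starting points for a specialized heuristic that exploits the known structure of optimal processor assignment patterns, and the population is evolved in parallel.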
A Numerical Comparison of Barrier and Modified Barrier Methods For Large-Scale Bound-Constrained Optimization
Abstract
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
Stephen G. Nash, R. Polyak, Ariela Sofer
A Numerical Study of Some Data Association Problems Arising in Multitarget Tracking
Abstract
The central problem in multitarget/multisensor tracking is the data association problem of partitioning the observations into tracks and false alarms so that an accurate estimate of the true tracks can be recovered. This data association problem is formulated in this work as a multidimensional assignment problem. These NP-hard data association problems are large scale, have noisy objective functions, and must be solved in real-time. A class of Lagrangian relaxation algorithms has been developed to construct near-optimal solutions in real-time, and thus the purpose of this work is to demonstrate many of the salient features of tracking problems by using these algorithms to numerically investigate constant acceleration models observed by a radar in two dimensional space. This formulation includes gating, clustering, and optimization problems associated with filtering. Extensive numerical simulations are used to demonstrate the effectiveness and robustness of a class of Lagrangian relaxation algorithms for solving these problems to within the noise level in the problems.
Aubrey B. Poore, Nenad Rijavec
Identifying the Optimal Face of a Network Linear Program with a Globally Convergent Interior Point Method
Abstract
Based on recent convergence results for the affine scaling algorithm for linear programming, we investigate strategies to identify the optimal face of a minimum cost network flow problem. In the computational experiments described, one of the proposed optimality indicators is used to implement an early stopping criterion in DLNET, an implementation of the dual affine scaling algorithm for solving minimum cost network flow problems. We conclude from the experiments that the new indicator is far more robust than the one used in earlier versions of DLNET.
Mauricio G. C. Resende, Takashi Tsuchiya, Geraldo Veiga
Solution of Large Scale Stochastic Programs with Stochastic Decomposition Algorithms
Abstract
Stochastic Decomposition (SD) is a randomized version of Benders’ decomposition for the solution of two stage stochastic linear programs with recourse. It combines a recursive sampling scheme within a decomposition-coordination framework in which the algorithm alternates between a master program and a subprogram. The master program represents a piecewise linear approximation in which each cut is obtained by solving one linear subproblem, and then performing a series of updates based on previously generated outcomes. Using recursive updates, we devise an efficient computer implementation that allows us to address very large two stage stochastic programs with recourse. We report our computational experience with some very large stochastic programs that arise in aircraft fleet scheduling and telecommunications network planning.
Suvrajeet Sen, Jason Mai, Julia L. Higle
A Simple, Quadratically Convergent Interior Point Algorithm for Linear Programming and Convex Quadratic Programming
Abstract
An algorithm for linear programming (LP) and convex quadratic programming (CQP) is proposed, based on an interior point iteration introduced more than ten years ago by J. Herskovits for the solution of nonlinear programming problems. Herskovits’ iteration can be simplified significantly in the LP/CQP case, and quadratic convergence from any initial point can be achieved. Interestingly, the linear system solved at each iteration is identical to that of the primal-dual affine scaling scheme recently considered by Monteiro et al. independently of Herskovits’ work. The proposed algorithm, however, uses an iteratively selected step length, different for each component of the dual variable.
André L. Tits, Jian L. Zhou
On Two Algorithms for Nonconvex Nonsmooth Optimization Problems in Structural Mechanics
Abstract
The variational formulation of Structural Mechanics problems, when the material and boundary laws are assumed monotone, leads to variational inequalities (symmetric or not), that can be solved using modern convex minimization or nonlinear complementarity methods. In order to describe the gradual loss of strength, nonmonotone and possibly multivalued laws have been introduced recently, that give rise to hemivariational inequalities. Due to the lack of convexity and the nonsmoothness of the underlying (super) potentials those problems have generally nonunique solutions (stable or unstable). In the present paper a pair of methods for the decomposition of hemivariational inequalities to a finite number of variational inequalities is proposed. The first method is based on the decomposition of the contingent cone of the superpotential into convex constituents, while in the second the nonconvex superpotential is approximated locally by convex superpotentials. Both approaches lead to effective, reliable and versatile numerical algorithms for large scale hemivariational inequalities. Finally, numerical examples illustrate the properties and applicability of the methods.
M. Ap. Tzaferopoulos, E. S. Mistakidis, C. D. Bisbos, P. D. Panagiotopoulos
Metadata
Title
Large Scale Optimization
Edited by
W. W. Hager
D. W. Hearn
P. M. Pardalos
Copyright year
1994
Publisher
Springer US
Electronic ISBN
978-1-4613-3632-7
Print ISBN
978-1-4613-3634-1
DOI
https://doi.org/10.1007/978-1-4613-3632-7