
2006 | Book

Large-Scale Nonlinear Optimization


About this book

Large-Scale Nonlinear Optimization reviews and discusses recent advances in the development of methods and algorithms for nonlinear optimization and its applications, focusing on the large-dimensional case, the current forefront of much research.

The chapters of the book, authored by some of the most active and well-known researchers in nonlinear optimization, give an updated overview of the field from different and complementary standpoints, including theoretical analysis, algorithmic development, implementation issues and applications.

Table of Contents

Frontmatter
Fast Linear Algebra for Multiarc Trajectory Optimization
Summary
This paper presents methods for solving, in a fast and reliable way, the linear systems arising when an optimal control problem is solved by a Runge-Kutta discretization scheme combined with an interior-point algorithm. Our analysis holds for a multiarc problem, i.e., when several arcs, each associated with its own dynamics and integral cost, are linked by junction points, called nodes, with which junction conditions and a cost function are associated.
Our main result is that a sparse QR band factorization, combined with a specific elimination procedure for arcs and nodes, makes it possible to factorize the Jacobian of the discrete optimality system in a small number of operations. Combined with an “optimal” refinement procedure, this gives an efficient method that we illustrate on Goddard’s problem.
Nicolas Bérend, J. Frédéric Bonnans, Julien Laurent-Varin, Mounir Haddou, Christophe Talbot
Lagrange Multipliers with Optimal Sensitivity Properties in Constrained Optimization
Summary
We consider optimization problems with inequality and abstract set constraints, and we derive sensitivity properties of Lagrange multipliers under very weak conditions. In particular, we do not assume uniqueness of a Lagrange multiplier or continuity of the perturbation function. We show that the Lagrange multiplier of minimum norm defines the optimal rate of improvement of the cost per unit constraint violation.
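As background, a minimal sketch of the sensitivity relation involved, with notation assumed here rather than taken from the chapter: for the perturbed value function

p(u) = \inf\{\, f(x) : x \in X,\ g_j(x) \le u_j,\ j = 1,\dots,r \,\},

any Lagrange multiplier \mu gives, informally, p(u) \ge p(0) - \mu^{T} u, so the cost can improve by at most \|\mu\|\,\|u\| under a constraint violation of size \|u\|. The chapter shows, under weak conditions, that the multiplier \mu^{*} of minimum norm is the one whose norm equals the optimal rate of improvement of the cost per unit constraint violation.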
Dimitri P. Bertsekas
An O(n²) Algorithm for Isotonic Regression
Summary
We consider the problem of minimizing the distance from a given n-dimensional vector to a set defined by constraints of the form x_i ≤ x_j. Such constraints induce a partial order of the components x_i, which can be illustrated by an acyclic directed graph. This problem is also known as the isotonic regression (IR) problem. IR has important applications in statistics, operations research and signal processing, with most of them characterized by a very large value of n. For such large-scale problems, it is of great practical importance to develop algorithms whose complexity does not rise with n too rapidly. The existing optimization-based algorithms and statistical IR algorithms have either too high computational complexity or too low accuracy of the approximation to the optimal solution they generate. We introduce a new IR algorithm, which can be viewed as a generalization of the Pool-Adjacent-Violator (PAV) algorithm from completely to partially ordered data. Our algorithm combines both low computational complexity O(n²) and high accuracy. This allows us to obtain sufficiently accurate solutions to IR problems with thousands of observations.
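A minimal statement of the problem, with the data vector a and the edge set E of the order graph as notation assumed here:

\min_{x \in \mathbb{R}^n} \ \|x - a\|^{2} \quad \text{subject to} \quad x_i \le x_j \ \text{ for all } (i,j) \in E.

In the completely ordered case E = \{(1,2),(2,3),\dots,(n-1,n)\} the classical PAV algorithm solves this exactly; the algorithm of the chapter extends the idea to an arbitrary partial order given by an acyclic directed graph.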
Oleg Burdakov, Oleg Sysoev, Anders Grimvall, Mohamed Hussian
Knitro: An Integrated Package for Nonlinear Optimization
Summary
This paper describes Knitro 5.0, a C-package for nonlinear optimization that combines complementary approaches to nonlinear optimization to achieve robust performance over a wide range of application requirements. The package is designed for solving large-scale, smooth nonlinear programming problems, and it is also effective for the following special cases: unconstrained optimization, nonlinear systems of equations, least squares, and linear and quadratic programming. Various algorithmic options are available, including two interior methods and an active-set method. The package provides crossover techniques between algorithmic options as well as automatic selection of options and settings.
Richard H. Byrd, Jorge Nocedal, Richard A. Waltz
On implicit-factorization constraint preconditioners
Summary
Recently Dollar and Wathen [14] proposed a class of incomplete factorizations for saddle-point problems, based upon earlier work by Schilders [40]. In this paper, we generalize this class of preconditioners, and examine the spectral implications of our approach. Numerical tests indicate the efficacy of our preconditioners.
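For orientation, the saddle-point systems and constraint preconditioners in question have the generic form (notation assumed here, not taken from the paper)

K = \begin{pmatrix} H & A^{T} \\ A & 0 \end{pmatrix}, \qquad
P = \begin{pmatrix} G & A^{T} \\ A & 0 \end{pmatrix},

where A is the constraint Jacobian and G approximates the Hessian block H; keeping the constraint blocks exact places many eigenvalues of P^{-1}K at one. Implicit-factorization variants, roughly speaking, construct P from easily invertible factors so that applying P^{-1} never requires factorizing P itself.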
H. Sue Dollar, Nicholas I. M. Gould, Andrew J. Wathen
Optimal algorithms for large sparse quadratic programming problems with uniformly bounded spectrum
Summary
Recently proposed algorithms for the solution of large quadratic programming problems are reviewed. An important feature of these algorithms is their capability to find an approximate solution of convex equality and/or bound constrained quadratic programming problems with a uniformly bounded spectrum of the Hessian matrix in O(1) iterations. The theoretical results are presented and illustrated by numerical experiments.
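A minimal sketch of the problem class, with notation assumed here: the algorithms address problems of the form

\min_{x} \ \tfrac{1}{2}\, x^{T} A x - b^{T} x \quad \text{subject to} \quad Bx = c \ \text{ and/or } \ x \ge \ell,

with A symmetric positive (semi)definite, and the key assumption is that the nonzero part of the spectrum of A lies in a fixed interval independent of the problem size; under this uniform bound a prescribed relative accuracy is reached in a number of iterations bounded by a constant, which is the O(1) claim above.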
Zdeněk Dostál
Numerical methods for separating two polyhedra
Summary
The problem of finding a family of parallel hyperplanes that separates two disjoint nonempty polyhedra is examined. The polyhedra are given by systems of linear inequalities or by systems of linear equalities with nonnegative variables. Constructive algorithms for solving these problems are presented. The proposed approach is based on theorems of the alternative.
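A minimal formulation of the task, with notation assumed here rather than taken from the chapter: given disjoint nonempty polyhedra P = \{x : Ax \le a\} and Q = \{x : Bx \le b\}, find a nonzero vector w and scalars \alpha \le \beta such that

w^{T} x \le \alpha \ \text{ for all } x \in P \qquad \text{and} \qquad w^{T} y \ge \beta \ \text{ for all } y \in Q,

so that every hyperplane w^{T} x = \gamma with \alpha \le \gamma \le \beta separates the two sets. Theorems of the alternative (Farkas-type results) convert the existence of such (w, \alpha, \beta) into the solvability of an auxiliary system of linear relations, which is what the constructive algorithms exploit.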
Yury G. Evtushenko, Alexander I. Golikov, Saed Ketabchi
Exact penalty functions for generalized Nash problems
Summary
We propose the use of exact penalty functions for the solution of generalized Nash equilibrium problems (GNEPs). We show that this approach makes it possible to reduce the solution of a GNEP to that of a usual Nash problem. This paves the way to the development of numerical methods for the solution of GNEPs. We also introduce the notion of a generalized stationary point of a GNEP and argue that convergence to generalized stationary points is an appropriate aim for solution algorithms.
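For reference, a GNEP in standard notation (assumed here): each player \nu = 1, \dots, N controls the block x^{\nu} and solves

\min_{x^{\nu}} \ \theta_{\nu}(x^{\nu}, x^{-\nu}) \quad \text{subject to} \quad x^{\nu} \in X_{\nu}(x^{-\nu}),

where, unlike in a usual Nash problem, the feasible set X_{\nu}(x^{-\nu}) depends on the rivals’ strategies x^{-\nu}. The exact-penalty approach moves these coupling constraints into the players’ objectives, so that the penalized game has strategy sets that no longer depend on the other players.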
Francisco Facchinei, Jong-Shi Pang
Parametric Sensitivity Analysis for Optimal Boundary Control of a 3D Reaction-Diffusion System
Summary
A boundary optimal control problem for an instationary nonlinear reaction-diffusion equation system in three spatial dimensions is presented. The control is subject to pointwise control constraints and a penalized integral constraint. Under a coercivity condition on the Hessian of the Lagrange function, an optimal solution is shown to be a directionally differentiable function of perturbation parameters such as the reaction and diffusion constants or desired and initial states. The solution’s derivative, termed parametric sensitivity, is characterized as the solution of an auxiliary linear-quadratic optimal control problem. A numerical example illustrates the utility of parametric sensitivities which allow a quantitative and qualitative perturbation analysis of optimal solutions.
Roland Griesse, Stefan Volkwein
Projected Hessians for Preconditioning in One-Step One-Shot Design Optimization
Summary
One-shot optimization aims at attaining feasibility and optimality simultaneously, especially on problems where even the linearized constraint equations cannot be resolved economically. Here we consider a scenario where forming and factoring the active Jacobian is out of the question, as is the case, for example, when the constraints represent some discretization of the Navier-Stokes equations. Assuming that the ‘user’ provides us with a linearly converging solver that gradually restores feasibility after each change in the design variables, we derive a corresponding adjoint iteration and attach an optimization (sub)step.
The key question addressed is how the approximate reduced gradient generated by the adjoint iteration should be preconditioned in order to achieve overall convergence at a reasonable speed. An eigenvalue analysis yields necessary conditions on the preconditioning matrix, which are typically not satisfied by the familiar reduced Hessian. Some other projection of the Lagrangian Hessian appears more promising and is found to work very satisfactorily on a nonlinear test problem.
The analyzed approach is one-step in that the normal, dual and design variables are always updated simultaneously on the basis of one function evaluation and its adjoint. Multi-step variants are promising but remain to be investigated.
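A schematic of a one-step one-shot iteration of the kind described, with notation assumed here (y the state, \bar y the adjoint, u the design, G the user-supplied contractive solver, f the objective, B_k the preconditioner under discussion):

y_{k+1} = G(y_k, u_k), \qquad
\bar y_{k+1} = \nabla_{y} f(y_k, u_k) + G_{y}(y_k, u_k)^{T} \bar y_k, \qquad
u_{k+1} = u_k - B_k^{-1} \bigl( \nabla_{u} f(y_k, u_k) + G_{u}(y_k, u_k)^{T} \bar y_k \bigr).

All three blocks are updated from one evaluation of G and its adjoint; the eigenvalue analysis mentioned above concerns which choices of B_k make this coupled fixed-point iteration contract at a reasonable speed.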
Andreas Griewank
Conditions and parametric representations of approximate minimal elements of a set through scalarization
Summary
This work deals with approximate solutions of vector optimization problems. Such solutions frequently appear when an iterative algorithm is used to solve a vector optimization problem. We consider a concept of approximate efficiency introduced by Kutateladze and widely used in the literature to study this kind of solution. Working in the objective space, we give necessary and sufficient conditions for Kutateladze’s approximate elements of the image set through scalarization, in such a way that these points are approximate solutions of a scalar optimization problem. To obtain sufficient conditions we use monotone functions. A new concept is then introduced to describe the idea of a parametric representation of the approximate efficient set. Finally, through scalarization, characterizations and parametric representations of the set of approximate solutions of convex and nonconvex vector optimization problems are proved.
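One common formalization of Kutateladze’s notion, stated here only as background and not quoted from the chapter: given the ordering cone D, a direction q \in D \setminus \{0\} and a tolerance \varepsilon \ge 0, a point y_0 of the image set F is an approximate (\varepsilon q-)minimal element if

\bigl( y_0 - \varepsilon q - (D \setminus \{0\}) \bigr) \cap F = \emptyset,

i.e. no point of F improves on y_0 by more than \varepsilon q in the cone order; for \varepsilon = 0 this reduces to ordinary efficiency.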
César Gutiérrez, Bienvenido Jiménez, Vicente Novo
Efficient methods for large-scale unconstrained optimization
Summary
This contribution contains a description of efficient methods for large-scale unconstrained optimization, many of which have been developed recently by the authors. It covers limited-memory methods for general smooth optimization, variable-metric bundle methods for partially separable nonsmooth optimization, hybrid methods for sparse least squares, and methods for solving large-scale trust-region subproblems.
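As background for the limited-memory methods mentioned, a minimal sketch of the standard L-BFGS two-loop recursion; this is generic textbook material, assumed here rather than code by the chapter’s authors.

    import numpy as np

    def lbfgs_direction(grad, s_list, y_list):
        """Return the L-BFGS search direction -H*grad from the stored pairs
        s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (newest pairs last)."""
        rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
        q = grad.copy()
        alphas = []
        # first loop: newest pair to oldest
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            alpha = rho * s.dot(q)
            q -= alpha * y
            alphas.append(alpha)
        # scale by an inverse-Hessian estimate taken from the newest pair
        gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
        r = gamma * q
        # second loop: oldest pair to newest
        for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
            beta = rho * y.dot(r)
            r += (alpha - beta) * s
        return -r  # quasi-Newton descent direction

    # tiny usage example on f(x) = 0.5*||x||^2, whose gradient is x (so y_i = s_i)
    if __name__ == "__main__":
        s_list = [np.array([1.0, 0.0]), np.array([0.0, 0.5])]
        y_list = [np.array([1.0, 0.0]), np.array([0.0, 0.5])]
        print(lbfgs_direction(np.array([0.2, -0.4]), s_list, y_list))

A line search along the returned direction completes one iteration; memory and work grow with the number of stored pairs rather than with the problem dimension, which is what makes the approach suitable for large n.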
Ladislav Lukšan, Jan Vlček
A variational approach for minimum cost flow problems
Summary
We consider a variational model for traffic network problems, which generalizes the classic minimum cost flow problem. This model has the peculiarity of being formulated by means of a suitable variational inequality. The Lagrangean approach to the study of this variational inequality allows us to consider dual variables associated with the constraints of the feasible set, and to generalize the classic Bellman optimality conditions in order to obtain a stopping criterion for a gap function algorithm.
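For reference, the variational-inequality format alluded to, in notation assumed here: with K the set of feasible flows (flow conservation and capacity constraints) and C the cost operator, one seeks h \in K such that

\langle C(h),\ f - h \rangle \ \ge\ 0 \qquad \text{for all } f \in K.

When C(h) \equiv c is a constant cost vector this is exactly the optimality condition of the classic minimum cost flow problem over K, which is the sense in which the variational model generalizes it.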
Giandomenico Mastroeni
Multi-Objective Optimisation of Expensive Objective Functions with Variable Fidelity Models
Summary
In most real-life applications, the formulation of an optimisation problem involves many different objective functions, often coming from different disciplines or areas. In this context, the optimisation represents a meeting point for many specialists, each focused on their own requirements, that is, criteria, constraints and objective functions. Different disciplines may be involved, such as Computational Fluid Dynamics (CFD), structural analysis, etc.
Moreover, since different criteria are involved, Multi-Objective (MO) techniques must be adopted in order to control the improvement of all the objective functions. Furthermore, designers are not interested in marginal improvements of the starting design, and only Global Optimisation (GO) techniques are able to guarantee a wide and exhaustive exploration of the design space. In addition, high-fidelity models must be applied during the optimisation process in order to ensure the quality of the resulting design. This last requirement conflicts with the demands of GO algorithms, which usually require a large number of objective function evaluations in order to identify the global optimum. Moreover, the design team needs a solution in a short time, and the total time required by reliable solvers in conjunction with GO algorithms may be impractical if a single objective function evaluation takes hours or days, as is the case for CFD computations.
In this context, the only way to make the process feasible is to reduce strongly the number of calls to the high-fidelity models, adopting a cheaper model to substitute for the high-fidelity solver in most of the calls, without losing the accuracy of the high-fidelity model. This goal can be achieved by different strategies, all referring to the concept of a Variable Fidelity Model (VFM): solvers of different complexity (and cost) are applied together, in a framework in which the exchange of information between the models makes it possible to correct the evaluations of the low-fidelity one, which efficiently substitutes for the high-fidelity model.
Here an algorithm for the solution of optimum ship design problems is presented. The procedure, illustrated in the framework of multi-objective optimisation problems, makes use of high-fidelity, CPU-time-expensive computational models, such as a free-surface-capturing RANSE solver, coupled with analytical meta-models of the objective functions (low-fidelity).
The optimisation consists of global and local phases. In the global stage of the search, a few computationally expensive (high-fidelity) simulations are performed and surrogate models (metamodels) of the objective functions are produced (low-fidelity). After that, a large number of tentative designs, placed uniformly on the Feasible Solution Set (FSS), are evaluated with the low-fidelity model. The most promising designs are clustered, then locally minimized and eventually verified with high-fidelity simulations. The new exact values are used to enlarge the set of training points for the low-fidelity model, and repeated cycles of the algorithm are performed. A Decision Maker strategy is adopted to select the most promising designs.
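A schematic of the global/local loop just described, written as Python pseudocode; every name here (high_fidelity, fit_metamodel, sample_designs, cluster, local_minimize) is an illustrative placeholder, and for simplicity the sketch is single-objective, whereas the chapter handles several objectives and a Decision Maker strategy.

    def variable_fidelity_search(high_fidelity, sample_designs, fit_metamodel,
                                 cluster, local_minimize, n_cycles=5):
        """Surrogate-driven global search with high-fidelity verification
        of the most promising candidate designs."""
        training = [(d, high_fidelity(d)) for d in sample_designs(10)]   # few expensive runs
        archive = list(training)
        for _ in range(n_cycles):
            surrogate = fit_metamodel(training)               # cheap low-fidelity model
            candidates = sample_designs(1000)                 # spread over the feasible set
            ranked = sorted(candidates, key=surrogate)        # low-fidelity evaluations only
            for seed in cluster(ranked[:50]):                 # keep diverse promising designs
                refined = local_minimize(surrogate, seed)     # local phase on the surrogate
                value = high_fidelity(refined)                # verify with the expensive model
                training.append((refined, value))             # enlarge the training set
                archive.append((refined, value))
        return min(archive, key=lambda pair: pair[1])

The essential point mirrored from the chapter is that the expensive solver is called only for the initial sample and for the clustered, locally refined candidates, while the bulk of the exploration runs on the metamodel.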
Daniele Peri, Antonio Pinto, Emilio F. Campana
Towards the Numerical Solution of a Large Scale PDAE Constrained Optimization Problem Arising in Molten Carbonate Fuel Cell Modeling
Summary
Molten carbonate fuel cells (MCFCs) allow an efficient and environmentally friendly energy production by converting the chemical energy contained in the fuel gas by virtue of electrochemical reactions. Their dynamical behavior can be described by large-scale embedded systems of 1D or 2D nonlinear partial differential algebraic equations (PDAEs) of dimension up to 28. They are of mixed parabolic-hyperbolic type with integral terms on the right-hand side and with initial and nonlinear boundary conditions, the latter governed by a system of ordinary differential equations.
In this paper a new 2D model is presented, together with results of its numerical simulation. The numerical results show a good correspondence with the expected dynamical behavior of MCFCs. The ultimate goal is to optimize this large-scale nonlinear PDAE system in order to increase the efficiency and service life of MCFCs.
Hans Josef Pesch, Kati Sternberg, Kurt Chudej
The NEWUOA software for unconstrained optimization without derivatives
Summary
The NEWUOA software seeks the least value of a function F(x), x ∈ Rⁿ, when F(x) can be calculated for any vector of variables x. The algorithm is iterative, a quadratic model Q ≈ F being required at the beginning of each iteration, which is used in a trust region procedure for adjusting the variables. When Q is revised, the new Q interpolates F at m points, the value m = 2n + 1 being recommended. The remaining freedom in the new Q is taken up by minimizing the Frobenius norm of the change to ∇²Q. Only one interpolation point is altered on each iteration. Thus, except for occasional origin shifts, the amount of work per iteration is only of order (m+n)², which allows n to be quite large. Many questions were addressed during the development of NEWUOA, for the achievement of good accuracy and robustness. They include the choice of the initial quadratic model, the need to maintain enough linear independence in the interpolation conditions in the presence of computer rounding errors, and the stability of the updating of certain matrices that allow the fast revision of Q. Details are given of the techniques that answer all the questions that occurred. The software was tried on several test problems. Numerical results for nine of them are reported and discussed, in order to demonstrate the performance of the software for up to 160 variables.
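The least-Frobenius-norm update mentioned above can be written as the following subproblem (notation assumed here): the revised model Q₊ is the quadratic that solves

\min_{Q_{+}} \ \| \nabla^{2} Q_{+} - \nabla^{2} Q \|_{F} \quad \text{subject to} \quad Q_{+}(x_i) = F(x_i), \quad i = 1, \dots, m,

where x_1, \dots, x_m are the current interpolation points. Since m = 2n+1 is far smaller than the (n+1)(n+2)/2 coefficients of a general quadratic, the interpolation conditions leave freedom in Q₊, and it is this freedom that the Frobenius-norm term takes up.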
M. J. D. Powell
Metadata
Title
Large-Scale Nonlinear Optimization
edited by
G. Di Pillo
M. Roma
Copyright Year
2006
Publisher
Springer US
Electronic ISBN
978-0-387-30065-8
Print ISBN
978-0-387-30063-4
DOI
https://doi.org/10.1007/0-387-30065-1
