
About this book

This IMA Volume in Mathematics and its Applications, LARGE-SCALE OPTIMIZATION WITH APPLICATIONS, PART II: OPTIMAL DESIGN AND CONTROL, is one of three volumes based on the proceedings of the 1995 IMA three-week Summer Program on "Large-Scale Optimization with Applications to Inverse Problems, Optimal Control and Design, and Molecular and Structural Optimization." The other two related proceedings appeared as Volume 92: Large-Scale Optimization with Applications, Part I: Optimization in Inverse Problems and Design and Volume 94: Large-Scale Optimization with Applications, Part III: Molecular Structure and Optimization.

We would like to thank Lorenz T. Biegler, Thomas F. Coleman, Andrew R. Conn, and Fadil N. Santosa for their excellent work as organizers of the meetings and for editing the proceedings.

We also take this opportunity to thank the National Science Foundation (NSF), the Department of Energy (DOE), and the Alfred P. Sloan Foundation, whose support made the workshops possible.



Large-Scale Optimization with Applications, Part II: Optimal Design and Control

The Development of the SQP Algorithm for Nonlinear Programming

The paper traces the steps in the development of the SQP algorithm, describing approaches used to deal with issues such as infeasible QP subproblems and the need to ensure global and superlinear convergence. It then discusses the problems that arise in applying the SQP algorithm to large-scale problems, and finally describes a new algorithm which circumvents these problems.
R. W. H. Sargent
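The core step the paper traces can be illustrated with a minimal sketch (ours, not the paper's): for an equality-constrained problem, each SQP iteration solves a QP subproblem whose optimality conditions form a linear KKT system in the step p and the multipliers. The toy problem and all names below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: one equality-constrained SQP step.
# Toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
def sqp_step(x):
    g = 2.0 * x                        # gradient of the objective
    H = 2.0 * np.eye(2)                # exact Hessian of the Lagrangian
    A = np.array([[1.0, 1.0]])         # constraint Jacobian
    c = np.array([x[0] + x[1] - 1.0])  # constraint residual
    # KKT system of the QP subproblem:
    #   [H  A^T] [p      ]   [-g]
    #   [A   0 ] [-lambda] = [-c]
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    p, lam = sol[:2], -sol[2:]
    return x + p, lam

x, lam = sqp_step(np.array([2.0, -1.0]))
print(x)   # -> [0.5 0.5]: since the problem is a QP, one step is exact
```

Because the objective is quadratic and the constraint linear, the subproblem coincides with the original problem and a single step lands on the minimizer; on genuinely nonlinear problems the step is repeated and safeguarded, which is where the globalization issues discussed in the paper arise.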

Some Aspects of Sequential Quadratic Programming Methods

Sequential quadratic programming (SQP) methods using strictly convex QP subproblems are well established both theoretically and practically. They are currently the method of choice when solving small or medium-sized problems. Over the last fifteen years several implementations have been written to solve specific applications in which large problems arise. Recently there have been some general purpose implementations.
We focus on two aspects of SQP methods: the use of a nonconvex subproblem and the choice of merit function. SQP methods with nonconvex subproblems have been investigated theoretically only recently, although the use of such subproblems in practice goes back many years (see [BHV84]). Allowing a nonconvex subproblem enables second derivatives to be used in the approximation of the Lagrangian function. It also enables a different approach to be taken when using quasi-Newton approximations. We review a new approach that uses nonconvex subproblems and discuss the steps necessary to achieve a practical implementation. Merit functions provide a measure of how well a given point estimates the required solution. The choice of merit function is quite wide, and all are an artificial measure of “goodness”. It is our view that the efficiency of an SQP algorithm is significantly affected by the choice of merit function and of any parameters used in its definition.
Walter Murray
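One common choice among the merit functions discussed is the l1 penalty function phi(x) = f(x) + mu * ||c(x)||_1, where mu is the penalty parameter whose tuning the abstract alludes to. The following sketch (our illustration; the objective, constraint, and value of mu are assumptions) shows how such a function trades objective value against constraint violation.

```python
import numpy as np

# Sketch of the l1 merit function phi(x) = f(x) + mu * ||c(x)||_1,
# one classic choice; mu > 0 is the penalty parameter.
def l1_merit(f, c, mu):
    def phi(x):
        return f(x) + mu * np.sum(np.abs(c(x)))
    return phi

f = lambda x: x[0]**2 + x[1]**2               # objective
c = lambda x: np.array([x[0] + x[1] - 1.0])   # equality constraint c(x) = 0
phi = l1_merit(f, c, mu=10.0)

x_feasible = np.array([0.5, 0.5])
x_infeasible = np.array([0.0, 0.0])
print(phi(x_feasible))    # 0.5: constraint satisfied, no penalty
print(phi(x_infeasible))  # 10.0: lower objective but heavily penalized
```

With mu large enough, the merit function ranks the feasible point ahead of the infeasible one despite its larger objective value; choosing mu too large, however, can reject good steps, which is exactly the sensitivity the authors point to.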

Computing Sparse Hessian and Jacobian Approximations with Optimal Hereditary Properties

In nonlinear optimization it is often important to estimate large sparse Hessian or Jacobian matrices, to be used for example in a trust region method. We propose an algorithm for computing a matrix B with a given sparsity pattern from a bundle of the m most recent difference vectors
$$\Delta = \left[ \delta^{k-m+1} \ldots \delta^{k} \right], \qquad \Gamma = \left[ \gamma^{k-m+1} \ldots \gamma^{k} \right]$$
where $B$ should approximately map $\Delta$ into $\Gamma$. In this paper $B$ is chosen such that it satisfies the $m$ quasi-Newton conditions $B\Delta = \Gamma$ in the least squares sense.
We show that $B$ can always be computed by solving a positive semi-definite system of equations in the nonzero components of $B$. We give necessary and sufficient conditions under which this system is positive definite and indicate how $B$ can be computed efficiently using a conjugate gradient method.
In the case of unconstrained optimization we use the technique to determine a Hessian approximation which is used in a trust region method. Some numerical results are presented for a range of unconstrained test problems.
Roger Fletcher, Andreas Grothey, Sven Leyffer
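In the unsymmetric (Jacobian) case the least-squares fit described above decouples row by row: each row of B has a few allowed nonzeros, and those nonzeros solve a small dense least-squares system. The sketch below is our illustration of that idea, not the authors' algorithm; the tridiagonal pattern and the test data are assumptions.

```python
import numpy as np

# Sketch: fit a sparse Jacobian approximation B with a fixed sparsity
# pattern so that B @ D ~= G in the least-squares sense. Without a
# symmetry requirement the fit decouples into one small least-squares
# problem per row of B.
def sparse_lsq_update(pattern, D, G):
    B = np.zeros_like(pattern, dtype=float)
    for i in range(pattern.shape[0]):
        J = np.flatnonzero(pattern[i])        # allowed nonzeros in row i
        # the nonzeros solve  D[J, :].T @ B[i, J] ~= G[i, :]
        B[i, J] = np.linalg.lstsq(D[J, :].T, G[i, :], rcond=None)[0]
    return B

# Tridiagonal pattern; exact data generated from a known sparse matrix.
pattern = np.abs(np.subtract.outer(np.arange(4), np.arange(4))) <= 1
B_true = pattern * np.random.default_rng(0).normal(size=(4, 4))
D = np.random.default_rng(1).normal(size=(4, 4))   # 4 difference vectors
G = B_true @ D
B = sparse_lsq_update(pattern, D, G)
print(np.allclose(B, B_true))   # True: enough consistent data recovers B
```

Imposing symmetry (the Hessian case treated in the paper) couples the rows, which is why the authors instead solve one positive semi-definite system in all the nonzero components of B at once.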

Experience with a Sparse Nonlinear Programming Algorithm

Nonlinear programming problems arise naturally in data fitting applications, and when discretization techniques are applied to systems described by ordinary or partial differential equations. For applications of this type the number of variables and constraints may be large (i.e. 100 < n < 100000), and the corresponding Jacobian and Hessian matrices are very sparse (i.e. typically less than 1% of the elements are nonzero). For small problems with dense matrices one of the most successful numerical techniques is the sequential quadratic programming approach. However, when algorithms appropriate for dense applications are applied to many large sparse problems, the computational expense is dominated by the solution of the quadratic programming subproblem and the evaluation of the Hessian matrices. A method appropriate for solving large sparse nonlinear programming problems is described in [3]. A review of the original method is presented with special attention given to a number of enhancements that have been made to the original algorithm which improve robustness and extend its utility. Particular attention is given to the method for constructing a modified Hessian approximation and the treatment of defective QP subproblems.
J. T. Betts

Mixed-Integer Nonlinear Programming: A Survey of Algorithms and Applications

This paper presents an overview of mixed-integer nonlinear programming techniques by first providing a unified treatment of the Branch and Bound, Outer-Approximation, Generalized Benders and Extended Cutting Plane methods as applied to nonlinear discrete optimization problems that are expressed in algebraic form. The extension of these methods is also considered for logic-based representations. Finally, an overview of the applications in many areas of process engineering is presented.
Ignacio E. Grossmann, Zdravko Kravanja
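The first of the methods surveyed, Branch and Bound, can be sketched on a one-variable convex toy problem (our illustration, not from the survey): each node solves the continuous relaxation over its interval, integral relaxations become incumbents, and nodes whose relaxation bound cannot beat the incumbent are pruned.

```python
# Tiny Branch and Bound sketch on a one-variable convex MINLP:
# minimize (x - 2.6)^2 with x integer in [0, 5].
def relax(lo, hi, c=2.6):
    x = min(max(c, lo), hi)          # continuous minimizer clipped to the box
    return x, (x - c) ** 2

def branch_and_bound(lo, hi):
    best_x, best_val = None, float("inf")
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        x, val = relax(a, b)
        if val >= best_val:
            continue                  # bound: relaxation cannot beat incumbent
        if abs(x - round(x)) < 1e-9:
            best_x, best_val = round(x), val   # integral solution: new incumbent
        else:
            k = int(x)                # branch on x <= floor(x) and x >= ceil(x)
            stack += [(a, k), (k + 1, b)]
    return best_x, best_val

print(branch_and_bound(0, 5))        # x = 3, the nearest integer to 2.6
```

The other surveyed methods (Outer-Approximation, Generalized Benders, Extended Cutting Plane) replace the interval subdivision with master problems built from linearizations, but share this same bound-and-prune logic.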

A Multiplier-Free, Reduced Hessian Method for Process Optimization

Process optimization problems typically consist of large systems of algebraic equations with relatively few degrees of freedom. For these problems the equation system is generally constructed by linking individual process models together; solution of these models is frequently effected by calculation procedures that exploit their equation structure. This paper describes a tailored optimization strategy for these process models that is based on reduced Hessian Successive Quadratic Programming (SQP). In particular, this approach only requires Newton steps from the process models and their ‘sensitivities,’ and does not require the calculation of Lagrange multipliers for the equality constraints. It can also be extended to large-scale systems through the use of sparse matrix factorizations. The algorithm has the same superlinear and global properties as the reduced Hessian method developed in [4]. Here we summarize these properties and demonstrate the performance of the multiplier-free SQP method through numerical experiments.
Lorenz T. Biegler, Claudia Schmid, David Ternet

Deterministic Global Optimization in Design, Control, and Computational Chemistry

This paper presents an overview of the deterministic global optimization approaches and their applications in the areas of Process Design, Control, and Computational Chemistry. The focus is on (i) decomposition-based primal dual methods, (ii) methods for generalized geometric programming problems, and (iii) global optimization methods for general nonlinear programming problems. The classes of mathematical problems that are addressed range from indefinite quadratic programming to concave programs, to quadratically constrained problems, to polynomials, to general twice continuously differentiable nonlinear optimization problems. For the majority of the presented methods nondistributed global optimization approaches are discussed with the exception of decomposition-based methods where a distributed global optimization approach is presented.
Christodoulos A. Floudas

Optimization Problems in Model Predictive Control

This paper provides a review of the types of optimization problems that arise when implementing model predictive control, a feedback control method based on the on-line solution of open-loop optimal control problems. For linear, unconstrained processes, the issues are fully resolved; the controller is linear and the optimization reduces to a least squares problem that can be solved off line. For linear, constrained processes, the controller is nonlinear and the optimization is a convex quadratic program that must be solved on line. Maintaining feasibility of the optimization problem in the presence of the constraints is the remaining open issue. For nonlinear processes, the controller is nonlinear and the optimization is a nonlinear and non-convex problem. Because on-line solution of non-convex problems is difficult, alternatives to this optimization problem are presented. The available theory to establish robustness of the nonlinear controller to perturbations is summarized. Throughout the paper, we highlight the interplay between optimization theory and model predictive control theory.
Pierre O. M. Scokaert, James B. Rawlings
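The linear, unconstrained case mentioned above can be made concrete with a short sketch (ours; the model matrices, horizon, and unit weights are assumptions): condensing the open-loop predictions turns the finite-horizon problem into a least-squares problem in the input sequence U, so the optimal U is linear in the initial state and the controller can indeed be computed off line.

```python
import numpy as np

# Sketch of unconstrained linear MPC via condensing.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed discrete-time model
B = np.array([[0.0], [0.1]])
N = 10                                   # prediction horizon
n, m = B.shape

# Stacked predictions: X = Phi @ x0 + Gam @ U
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gam = np.zeros((N * n, N * m))
for r in range(N):
    for c in range(r + 1):
        Gam[r*n:(r+1)*n, c*m:(c+1)*m] = np.linalg.matrix_power(A, r - c) @ B

# With Q = I and R = I the cost ||Gam U + Phi x0||^2 + ||U||^2 is a
# ridge-type least-squares problem with a closed-form solution.
x0 = np.array([1.0, 0.0])
H = Gam.T @ Gam + np.eye(N * m)
U = np.linalg.solve(H, -Gam.T @ Phi @ x0)
print(U[0])   # first input; in receding horizon only this one is applied
```

Adding input or state inequality constraints turns this least-squares problem into the on-line convex QP discussed in the paper, and nonlinearity in the model destroys convexity altogether.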

Some Recent Developments in Computational Optimal Control

Three recently introduced approaches to computational optimal control are discussed. All three methods fall into the category of direct approaches, i.e. they are based on some type of problem discretization and require nonlinear programming techniques to determine the optimal solution within certain finite-dimensional parameter spaces.
The first method is called Trajectory Optimization via Differential Inclusion (TODI). This method can be viewed as a finite difference discretization of optimal control problems represented in differential inclusion format, and results in a problem formulation completely devoid of control parameters. The second method, dubbed Concatenated Approach to Trajectory Optimization (CATO) solves an iterative sequence of low-order, discretized subproblems on short, overlapping subarcs of the original trajectory. The technique, which can be paired with practically any direct optimization approach to perform the inner-loop iterations, is fast and requires very little computer memory. Furthermore, it is ideally suited for parallel processing. Finally, the third method, which requires even less computing power, is based on an intuitive dense-sparse discretization scheme, with nodes placed densely near the initial time to capture immediate dynamics, and nodes placed sparsely over the rest of the trajectory to capture general trends. This method, too, can be paired with a large variety of off-the-shelf direct optimization techniques.
An F-15 minimum time-to-climb problem is used to test and compare all three methods.
Hans Seywald, Renjith R. Kumar
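The dense-sparse idea behind the third method can be sketched with one hypothetical node-placement rule (ours, not the authors'): a geometric stretch clusters discretization nodes near the initial time, where the dynamics are fastest, and spreads them out over the remainder of the trajectory.

```python
import numpy as np

# Hypothetical dense-sparse node placement: geometrically growing gaps
# give dense nodes near t0 and sparse nodes toward tf.
def dense_sparse_nodes(t0, tf, n_nodes, stretch=1.3):
    steps = stretch ** np.arange(n_nodes - 1)   # gaps grow by a fixed ratio
    t = np.concatenate([[0.0], np.cumsum(steps)])
    return t0 + (tf - t0) * t / t[-1]           # rescale onto [t0, tf]

nodes = dense_sparse_nodes(0.0, 100.0, 12)
gaps = np.diff(nodes)
print(np.all(gaps[1:] > gaps[:-1]))   # True: spacing grows monotonically
```

Any direct transcription method can then be applied on this nonuniform grid, which is what makes the scheme compatible with off-the-shelf optimizers.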

Large-Scale Structural Design Optimization

The design of mechanical structures may be posed as an optimization problem where some performance measure, such as structural weight, is optimized subject to constraints on stresses, deformations, buckling load, and other response characteristics. Design variables are usually chosen to be design parameters such as thicknesses and other cross-sectional dimensions of structural members, the shape of the structural boundary, and similar properties. The response of the structure is almost always determined using a finite element approximation to the partial differential equations governing the response of the structure.
The paper describes how different optimization strategies may be necessary to solve the many possible structural design problems that arise in applications. A particular issue to be considered is whether or not the state variables, typically the deformations in the finite element model, should be eliminated using the equilibrium equations. A nonlinear structural model may require that the state variables be kept as independent variables, although the most common approach in structural optimization is to eliminate the state variables.
Ulf Torbjörn Ringertz

Large-Scale SQP Methods for Optimization of Navier-Stokes Flows

We consider the problem of optimal control of fluids governed by the steady Navier-Stokes equations. The control is effected by the suction or injection of fluid on portions of the boundary, and the objective function represents the rate at which energy is dissipated in the fluid. We show how reduced Hessian successive quadratic programming methods, which avoid converging the flow equations at each iteration, can be tailored to these problems. Both quasi-Newton and Newton variants are developed, and compared to the approach of eliminating the flow equations and variables, which is effectively the reduced gradient method. The examples demonstrate at least an order-of-magnitude reduction in time taken, allowing the optimal solution of realistic two-dimensional flow control problems in little time on a desktop workstation.
Omar Ghattas, Jai-Hyeong Bark

Numerical Optimal Control of Parabolic PDEs Using DASOPT

This paper gives a preliminary description of DASOPT, a software system for the optimal control of processes described by time-dependent partial differential equations (PDEs). DASOPT combines the use of efficient numerical methods for solving differential-algebraic equations (DAEs) with a package for large-scale optimization based on sequential quadratic programming (SQP). DASOPT is intended for the computation of the optimal control of time-dependent nonlinear systems of PDEs in two (and eventually three) spatial dimensions, including possible inequality constraints on the state variables. By the use of either finite-difference or finite-element approximations to the spatial derivatives, the PDEs are converted into a large system of ODEs or DAEs. Special techniques are needed in order to solve this very large optimal control problem. The use of DASOPT is illustrated by its application to a nonlinear parabolic PDE boundary control problem in two spatial dimensions. Computational results with and without bounds on the state variables are presented.
Linda Petzold, J. Ben Rosen, Philip E. Gill, Laurent O. Jay, Kihong Park
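The semi-discretization step described above (finite differences in space converting a PDE into a large ODE system) can be sketched for the simplest parabolic model problem; this is our illustration, not DASOPT code, and the grid size and time step are assumptions.

```python
import numpy as np

# Method-of-lines sketch: finite differences in space turn the 1-D heat
# equation u_t = u_xx with zero boundary values into dy/dt = L y.
nx = 50
h = 1.0 / (nx + 1)
main = -2.0 * np.ones(nx)
off = np.ones(nx - 1)
L = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

x = np.linspace(h, 1.0 - h, nx)
y = np.sin(np.pi * x)            # initial condition, an eigenvector of L

# One explicit Euler step of the resulting ODE system; this lowest mode
# decays at a rate close to -pi^2.
dt = 0.4 * h**2                  # within the explicit stability limit
y_new = y + dt * (L @ y)
print(float(np.max(np.abs(y_new))) < 1.0)   # True: the solution decays
```

In practice DASOPT couples such semi-discretized systems to an implicit DAE integrator rather than explicit Euler, and the resulting very large but structured ODE/DAE system is what the SQP-based optimizer must handle.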

The Promise (and Reality) of Multidisciplinary Design Optimization

Modern aerospace vehicle design requires the interaction of multiple disciplines, traditionally processed in a sequential order. Multidisciplinary optimization (MDO), a formal methodology for the integration of these disciplines, is evolving toward methods capable of replacing the traditional sequential methodology of aerospace vehicle design by concurrent algorithms, with both an overall gain in product performance and a decrease in design time. This paper discusses the obstacles to MDO, and presents a parallel MDO paradigm using variable-complexity modeling and multipoint response surface approximations for the particular instance of the design of a high speed civil transport (HSCT). This paradigm interleaves the disciplines at one level of complexity, and processes them hierarchically at another level of complexity, achieving parallelism within disciplines, rather than across disciplines. A master-slave paradigm manages a coarse grained parallelism of the analysis and optimization codes required by the disciplines showing reasonable speedups and efficiencies on an Intel Paragon.
Susan L. Burgee, Layne T. Watson

