
2017 | Book

Continuous Nonlinear Optimization for Engineering Applications in GAMS Technology


About this book

This book presents the theoretical details and computational performance of the algorithms used for solving continuous nonlinear optimization applications embedded in GAMS. Aimed at scientists and graduate students who use optimization methods to model and solve problems in mathematical programming, operations research, business, engineering, and industry, it enables readers with a background in nonlinear optimization and linear algebra to use GAMS technology and its important capabilities for modeling and solving complex, large-scale, continuous nonlinear optimization problems and applications.

Beginning with an overview of constrained nonlinear optimization methods, the book moves on to illustrate key aspects of mathematical modeling through modeling technologies based on algebraically oriented modeling languages. Next, the main feature of GAMS, an algebraically oriented language that allows for a high-level algebraic representation of mathematical optimization models, is introduced for modeling and solving continuous nonlinear optimization applications. More than 15 real nonlinear optimization applications in algebraic and GAMS representation are presented and used to illustrate the performance of the algorithms described in this book. Theoretical and computational results, methods, and techniques effective for solving nonlinear optimization problems are detailed through the algorithms MINOS, KNITRO, CONOPT, SNOPT, and IPOPT, which work in GAMS technology.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
This book is on nonlinear optimization in the GAMS technology. Continuous nonlinear optimization problems have a simple mathematical model and always refer to a system whose operation we want to optimize. First, the model contains an objective function which measures the performance or requirements of the system. Often, this function represents a profit, a time interval, a level, a form of energy, or a combination of different quantities which have a physical significance for the modeler. The objective function depends on some characteristics of the system, called variables or unknowns. The purpose of any optimization problem is to find the values of these variables that minimize (or maximize) the objective function, subject to some constraints the variables must satisfy. The constraints of an optimization problem may have different algebraic expressions. There are static and dynamic constraints, called functional constraints; the difference between these types of constraints comes from the structure of their Jacobian. Another very important type of constraint is the simple bounds on the variables. Both the objective function and the constraints may depend on some parameters with known values which represent the constructive characteristics of the system under optimization. The process of identifying the variables, the parameters, the objective function, and the constraints is known as modeling, one of the finest intellectual activities. It is worth saying that in this book, we assume that the variables can take real values and that the objective function and the constraints are smooth enough (at least twice differentiable) with known first-order derivatives. When the number of variables and the number of constraints are large, the optimization problem is quite challenging.
Neculai Andrei
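As a point of reference, the general problem described in this abstract can be stated in the following standard form (a generic formulation consistent with the later chapter abstracts, not a quotation from the book): \( \min f(x) \) subject to \( c_i(x)=0,\ i\in E, \) \( c_i(x)\ge 0,\ i\in I, \) and simple bounds \( l\le x\le u, \) where \( x\in{\mathbb{R}}^n \) and the functions \( f,c_i:{\mathbb{R}}^n\to\mathbb{R} \) are at least twice continuously differentiable.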
Chapter 2. Mathematical Modeling Using Algebraic Oriented Languages for Nonlinear Optimization
Abstract
In the last two decades, significant research activity has taken place in the area of local and global optimization, including many theoretical, computational, and software contributions. Access to this advanced optimization software requires increasingly sophisticated modeling tools. Algebraically oriented optimization modeling languages represent an important class of tools that facilitate the communication of optimization models to decision-making systems based on the optimization paradigm. In a wider context, algebraically oriented modeling tools are evolving toward fully integrated modeling and optimization management systems with access to databases, spreadsheets, and graphical user interfaces.
Neculai Andrei
Chapter 3. Introduction to GAMS Technology
Abstract
This chapter gives an introduction to the General Algebraic Modeling System (GAMS), a high-level algebraic modeling technology developed by the GAMS Development Corporation. It permits developing and solving linear and nonlinear optimization models. In GAMS, optimization models can be introduced in a way similar to how they are presented in research papers or books. GAMS has the ability to solve large-scale linear programming and integer linear programming problems, as well as to find local or global optima of nonlinear and mixed-integer programs of very different complexity.
Neculai Andrei
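To illustrate the algebraic style of model input described in this abstract, here is a minimal GAMS sketch of a small nonlinear program (a hypothetical toy model, not one of the book's applications):

* Toy NLP: minimize the squared distance to (1,2) over a disc of radius 2
Variables x1, x2, obj;
Equations objdef, circle;
objdef .. obj =e= sqr(x1 - 1) + sqr(x2 - 2);
circle .. sqr(x1) + sqr(x2) =l= 4;
x1.lo = 0; x2.lo = 0;
Model toy / all /;
Solve toy using nlp minimizing obj;

The Solve statement hands the model to whichever NLP solver is currently selected in GAMS.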
Chapter 4. Applications of Continuous Nonlinear Optimization
Abstract
Eighteen real continuous nonlinear optimization applications are presented in this chapter. These are used for numerical experiments and comparisons among the algorithms described in this book. For each application, the mathematical model, its GAMS representation, and the solution are given. GAMS is a standard technology for modeling and solving large-scale linear and nonlinear optimization applications. It is characterized by a very powerful language and a large number of different advanced optimization algorithms which are embedded in this technology. The syntax of the GAMS language is not too complicated, and practically all types of difficult nonlinear optimization applications can be represented and solved. Additionally, nonlinear optimal control applications can, by discretization, also be represented and solved in GAMS. Therefore, in this chapter, we include both continuous nonlinear optimization applications and some optimal control problems from different areas of activity. Some applications are from mechanical, electrical, and chemical engineering, heat transfer and fluid dynamics, and economic development. Others are from optimal control. The purpose is to present these applications in algebraic form and to see how they can all be represented and solved in GAMS. The solutions of these applications are determined by the optimization algorithms embedded in the GAMS technology (MINOS, CONOPT, KNITRO, SNOPT, IPOPT) as well as by some other packages described in this book (SPENBAR, DONLP, NLPQLP, filterSD) not embedded in GAMS.
Neculai Andrei
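For comparisons of the kind described in this abstract, the solver applied to a model can be chosen inside GAMS with the option statement; a minimal sketch (the model and variable names are hypothetical, the option syntax is standard GAMS):

* Solve with CONOPT, then re-solve with MINOS for comparison
option nlp = conopt;
Solve application using nlp minimizing obj;
option nlp = minos;
Solve application using nlp minimizing obj;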
Chapter 5. Optimality Conditions for Continuous Nonlinear Optimization
Abstract
The optimization problems considered in this book involve the minimization or maximization of a function of several real variables subject to one or more constraints. The constraints may be nonnegativity of variables, simple bounds on variables, or equalities and inequalities as functions of these variables. Such problems are known as continuous nonlinear constrained optimization or nonlinear programming problems.
Neculai Andrei
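For reference, the first-order Karush-Kuhn-Tucker (KKT) conditions for the problem \( \min f(x) \) subject to \( c_i(x)=0,\ i\in E, \) and \( c_i(x)\ge 0,\ i\in I, \) can be stated in their standard textbook form (not quoted from the chapter): under a suitable constraint qualification, a local solution \( x^{*} \) admits multipliers \( \lambda^{*} \) such that
\( \nabla f(x^{*})-\sum_{i\in E\cup I}\lambda_i^{*}\nabla c_i(x^{*})=0, \)
\( c_i(x^{*})=0,\ i\in E, \qquad c_i(x^{*})\ge 0,\ \lambda_i^{*}\ge 0,\ \lambda_i^{*}c_i(x^{*})=0,\ i\in I. \)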
Chapter 6. Simple Bound Constraints Optimization
Abstract
Simple bound constrained optimization is a class of nonlinear optimization problems with a special structure, found in many real practical applications. The mathematical model of these problems is as follows: \( \min f(x) \) subject to \( l\le x\le u, \) where \( f:{\mathbb{R}}^n\to\mathbb{R} \) is the objective function and \( l,u\in{\mathbb{R}}^n \) are the lower and upper bounds on the variables.
Neculai Andrei
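In GAMS, such simple bounds are attached directly to the variables through the .lo and .up suffixes; a minimal sketch (the set and variable names are hypothetical):

Set i / 1*5 /;
Variables x(i);
* Simple bounds on the variables: -1 <= x(i) <= 3
x.lo(i) = -1;
x.up(i) = 3;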
Chapter 7. Penalty and Augmented Lagrangian Methods
Abstract
This chapter introduces two very important concepts in constrained nonlinear optimization: the penalty and the augmented Lagrangian concepts. The idea is that both of these concepts replace the original problem by a sequence of sub-problems in which the constraints are expressed by terms added to the objective function. The penalty concept is implemented in two different methods. The quadratic penalty method adds to the objective function a multiple of the square of the violation of each constraint and solves a sequence of unconstrained optimization sub-problems. Simple and intuitive enough, this approach has some important deficiencies. The nonsmooth exact penalty method, on the other hand, solves a single unconstrained optimization problem. In this approach, a popular function is the \( \ell_1 \) penalty function. The problem with this method is that the nonsmoothness may create complications in numerical implementations. Finally, the second concept is the multiplier method, or the augmented Lagrangian method, which explicitly uses Lagrange multiplier estimates in order to avoid the ill-conditioning of the quadratic penalty method.
Neculai Andrei
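For equality constraints \( c_i(x)=0,\ i\in E, \) the three functions mentioned in this abstract have the following standard textbook forms (a sketch for orientation, not quoted from the chapter): the quadratic penalty function \( Q(x;\mu)=f(x)+\frac{\mu}{2}\sum_{i\in E}c_i^2(x), \) the nonsmooth \( \ell_1 \) exact penalty function \( p(x;\sigma)=f(x)+\sigma\sum_{i\in E}\left|c_i(x)\right|, \) and the augmented Lagrangian \( L_A(x,\lambda;\mu)=f(x)-\sum_{i\in E}\lambda_i c_i(x)+\frac{\mu}{2}\sum_{i\in E}c_i^2(x). \)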
Chapter 8. A Penalty-Barrier Algorithm: SPENBAR
Abstract
Let us consider the following general nonlinear optimization problem: \( \min f(x) \) subject to \( c_i(x)=0,\ i\in E, \) \( c_i(x)\ge 0,\ i\in I_c, \) \( x_j\ge l_j,\ j\in I_l, \) and \( x_j\le u_j,\ j\in I_u, \) where \( x\in{\mathbb{R}}^n \), \( E\triangleq\{1,\dots,m_e\} \), and \( I_c\triangleq\{1,\dots,m\} \). The functions \( c_i:{\mathbb{R}}^n\to\mathbb{R} \), \( i\in I_c\cup E, \) are assumed to be twice continuously differentiable on \( {\mathbb{R}}^n \), and \( I_l,I_u\subseteq\{1,\dots,n\} \). To simplify the presentation of the algorithm, the simple bounds on the variables are also denoted \( c_i(x) \). Define \( I_{sb} \) as the set of indices such that for all \( j\in I_l\cup I_u \) there is an \( i\in I_{sb} \) with the property that \( c_i(x)=x_j-l_j \) (for \( j\in I_l \)) or \( c_i(x)=u_j-x_j \) (for \( j\in I_u \)).
Neculai Andrei
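As a rough orientation to the penalty-barrier family to which SPENBAR belongs (a generic composite of this family, not necessarily the exact function used by SPENBAR), such methods minimize \( \Phi(x;\mu,\sigma)=f(x)-\mu\sum_{i\in I_c}\log c_i(x)+\frac{\sigma}{2}\sum_{i\in E}c_i^2(x), \) in which the logarithmic barrier keeps the inequalities strictly feasible while the penalty term drives the equalities to zero, with \( \mu\to 0 \) and \( \sigma\to\infty \) over the outer iterations.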
Chapter 9. Linearly Constrained Augmented Lagrangian: MINOS
Abstract
In this chapter we present one of the most respected algorithms and software packages for solving general nonlinear optimization problems, given by Murtagh and Saunders (1978, 1980, 1982, 1995). The main idea behind this method is to generate a step by minimizing the Lagrangian or the augmented Lagrangian subject to linearizations of the constraints.
Neculai Andrei
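Schematically, with the nonlinear constraints \( c(x)=0 \) linearized at the current point \( x_k \) as \( \bar c_k(x)=c(x_k)+J(x_k)(x-x_k), \) the major iteration solves a linearly constrained sub-problem of the well-known form (a sketch of the idea; the notation is illustrative, not MINOS's exact statement): \( \min_x\ f(x)-\lambda_k^T\left(c(x)-\bar c_k(x)\right)+\frac{\rho}{2}\left\|c(x)-\bar c_k(x)\right\|^2 \) subject to \( \bar c_k(x)=0 \) and the linear constraints and bounds.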
Chapter 10. Quadratic Programming
Abstract
One of the most important nonlinear optimization problems is quadratic programming, in which a quadratic objective function is minimized subject to linear equality and inequality constraints. Problems of this kind arise as sub-problems in many methods and as mathematical models of real applications from different areas of activity. We first consider equality-constrained quadratic programming, after which inequality-constrained quadratic programming is presented.
Neculai Andrei
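In standard form, the problem discussed in this chapter reads (a textbook statement): \( \min_x\ \frac{1}{2}x^TGx+c^Tx \) subject to \( a_i^Tx=b_i,\ i\in E, \) and \( a_i^Tx\ge b_i,\ i\in I, \) where \( G \) is a symmetric \( n\times n \) matrix and \( c,a_i\in{\mathbb{R}}^n \).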
Chapter 11. Sequential Quadratic Programming (SQP)
Abstract
SQP is an active-set method. In this chapter we consider both equality-constrained and inequality-constrained sequential quadratic programming.
Neculai Andrei
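At the current iterate \( x_k \), with the Lagrangian \( L(x,\lambda)=f(x)-\lambda^Tc(x) \), the SQP step is obtained from the quadratic sub-problem (stated here in its standard form, for orientation): \( \min_d\ \nabla f(x_k)^Td+\frac{1}{2}d^T\nabla_{xx}^2L(x_k,\lambda_k)d \) subject to \( c_i(x_k)+\nabla c_i(x_k)^Td=0,\ i\in E, \) and \( c_i(x_k)+\nabla c_i(x_k)^Td\ge 0,\ i\in I. \)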
Chapter 12. A SQP Method Using Only Equality-Constrained Sub-problems: DONLP
Abstract
Let us consider the general nonlinear optimization problem: \( \min f(x) \) subject to \( e(x)=0 \) and \( c(x)\ge 0, \) where the functions \( f:{\mathbb{R}}^n\to\mathbb{R} \), \( e:{\mathbb{R}}^n\to{\mathbb{R}}^{m_e} \), and \( c:{\mathbb{R}}^n\to{\mathbb{R}}^m \) are supposed to be twice continuously differentiable. Also, suppose that:
Neculai Andrei
Chapter 13. A SQP Algorithm with Successive Error Restoration: NLPQLP
Abstract
Let us consider the general nonlinear optimization problem with equality and inequality constraints, where it is assumed that the functions \( f:{\mathbb{R}}^n\to\mathbb{R} \) and \( c_i:{\mathbb{R}}^n\to\mathbb{R} \), \( i=1,\dots,m, \) are twice continuously differentiable. Also, assume that
Neculai Andrei
Chapter 14. Active-set Sequential Linear-Quadratic Programming: KNITRO/ACTIVE
Abstract
KNITRO represents one of the most elaborate algorithms (and Fortran packages) for solving general large-scale nonlinear optimization problems (Byrd et al. 2004). It is characterized by great flexibility and robustness, integrating two very powerful and complementary algorithmic approaches for nonlinear optimization: the active-set sequential linear-quadratic approach and the interior point approach. KNITRO includes a number of well-studied linear algebra algorithms, very carefully implemented, able to solve a large variety of nonlinear optimization problems including, as special cases, unconstrained optimization, systems of nonlinear equations, least-squares problems, and linear and nonlinear programming problems.
Neculai Andrei
Chapter 15. A SQP Algorithm for Large-Scale Constrained Optimization: SNOPT
Abstract
The algorithm described in this chapter, elaborated by Gill et al. (2002, 2005), is dedicated to solving nonlinear optimization problems of the following form: \( \min f(x) \) subject to \( l\le\left(x,\ c(x),\ Ax\right)\le u, \) that is, problems with lower and upper bounds imposed on the variables, on a vector \( c(x) \) of smooth nonlinear constraint functions, and on a set of linear constraints \( Ax \).
Neculai Andrei
Chapter 16. Generalized Reduced Gradient with Sequential Linearization: CONOPT
Abstract
This algorithm is a very interesting and profitable combination of the generalized reduced gradient method with sequential linear programming and sequential quadratic programming. All these algorithms are embedded in the generalized reduced gradient (GRG) scheme as described in Drud (1976, 1983, 1985, 1994, 1995, 1996, 2011).
Neculai Andrei
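The core GRG construction can be summarized as follows (a standard sketch of the reduced gradient idea, not CONOPT's exact formulas): for the constraints \( c(x)=0 \), partition the variables into basic variables \( x_B \) and nonbasic/superbasic variables \( x_N \), with Jacobian blocks \( B=\partial c/\partial x_B \) (nonsingular) and \( N=\partial c/\partial x_N \). Treating \( x_B \) as an implicit function of \( x_N \) on the constraint manifold, the iteration is driven by the reduced gradient \( r=\nabla_{x_N}f-N^TB^{-T}\nabla_{x_B}f. \)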
Chapter 17. Interior Point Methods
Abstract
This chapter presents one of the most powerful classes of methods for solving nonlinear optimization problems: interior point methods. They are related to barrier functions; the terms "interior point methods" and "barrier methods" have the same significance and may be used interchangeably. The idea is to keep the current points in the interior of the feasible region. A method for remaining in the interior of the feasible region is to add to the objective function a component which penalizes close approaches to the boundary. This method was first suggested by Frisch (1955) and developed in both theoretical and computational detail by Fiacco and McCormick (1964, 1966, 1968).
Neculai Andrei
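Concretely, for inequality constraints \( c_i(x)\ge 0,\ i\in I, \) the classical logarithmic barrier sub-problem takes the form (a textbook statement): \( \min_x\ f(x)-\mu\sum_{i\in I}\log c_i(x) \) with barrier parameter \( \mu>0 \). As \( \mu \) is driven to zero, the minimizers of the barrier function trace a path through the interior of the feasible region toward the solution of the original problem.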
Chapter 18. Filter Methods
Abstract
The filter methods, developed by Fletcher and Leyffer (2002) as a new technique for the globalization of nonlinear optimization algorithms, are described in this chapter. The idea is motivated by the aim of avoiding the need to choose penalty parameters in penalty functions or augmented Lagrangian functions and their variants. Let us consider the nonlinear optimization problem with inequality constraints \( \min f(x) \) subject to \( c_i(x)\le 0,\ i=1,\dots,m, \) (18.1) where the objective function \( f:{\mathbb{R}}^n\to\mathbb{R} \) and the functions \( c_i:{\mathbb{R}}^n\to\mathbb{R} \), \( i=1,\dots,m, \) defining the constraints are supposed to be twice continuously differentiable. The methods for solving this problem are based on the Newton method. Given an estimate \( x_k \) of the solution \( x^{*} \) of (18.1), a linear or quadratic approximation of (18.1) is solved, thus obtaining a new estimate \( x_{k+1} \) which we hope to be better than the previous one. Near a solution, this approach is guaranteed to be convergent. However, far away from the solution, the sequence \( \{x_k\} \) generated by the above procedure may not converge. In this situation, away from the solution, the idea is to use the Newton method again but considering penalty or merit functions. The penalty or merit functions are a combination of the objective function and a measure of constraint violation such as \( h(x)=\left\|c(x)^{+}\right\| \), where \( c(x)=[c_1(x),\dots,c_m(x)]^T \) and \( c_i^{+}=\max\{0,c_i\} \). A very well-known example is the \( \ell_1 \) exact penalty function \( p(x,\sigma)=f(x)+\sigma h(x) \), where \( \sigma>0 \) is the penalty parameter. If \( \sigma \) is sufficiently large, then this penalty function can be minimized in an iterative procedure to ensure progress to the solution.
Neculai Andrei
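The filter itself replaces the penalty function by a Pareto-type acceptance test on the pair \( (h(x),f(x)) \). In sketch form (a standard statement of the idea, omitting the envelope and sufficient-reduction refinements used in practice): a filter is a list \( \mathcal{F}=\{(h_j,f_j)\} \) of pairs taken from earlier iterates, none of which dominates another, and a trial point \( x^{+} \) is acceptable if for every \( (h_j,f_j)\in\mathcal{F} \) it satisfies \( h(x^{+})<h_j \) or \( f(x^{+})<f_j \), that is, if it improves the constraint violation or the objective function value relative to each filter entry.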
Chapter 19. Interior Point Sequential Linear-Quadratic Programming: KNITRO/INTERIOR
Abstract
In Chapter 14 the KNITRO/ACTIVE algorithm, based on the active-set sequential linear-quadratic programming method, was presented. In this chapter the KNITRO/INTERIOR algorithm is described, together with its numerical performance for solving large-scale general continuous nonlinear optimization problems. KNITRO/INTERIOR provides two procedures for computing the steps within the interior point approach. In the version INTERIOR-CG, each step is computed using a projected conjugate gradient iteration: it factors a projection matrix and uses the conjugate gradient method to approximately minimize a quadratic model of the barrier problem. In the version INTERIOR-DIRECT, the algorithm attempts to compute a new iterate by solving the primal-dual KKT system using direct linear algebra. In case this step cannot be guaranteed to be of good quality, or if negative curvature is detected, the new iterate is computed by the INTERIOR-CG algorithm. The KNITRO/INTERIOR-CG algorithm is described by Byrd et al. (1999), and its global convergence theory is presented by Byrd et al. (2000). The method implemented in the KNITRO/INTERIOR-DIRECT algorithm is described by Waltz et al. (2003).
Neculai Andrei
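The primal-dual KKT system mentioned in this abstract has, for the slack formulation of the barrier sub-problem \( \min\ f(x)-\mu\sum_i\log s_i \) subject to \( c(x)-s=0, \) the familiar Newton structure (a generic statement of the system, not KNITRO's exact implementation details):
\( \nabla_{xx}^2L\,d_x-J^Td_\lambda=-(\nabla f-J^T\lambda), \)
\( \Sigma\,d_s+d_\lambda=-(\lambda-\mu S^{-1}e), \)
\( J\,d_x-d_s=-(c-s), \)
where \( J \) is the constraint Jacobian, \( S=\operatorname{diag}(s) \), and \( \Sigma \) equals \( \mu S^{-2} \) in the primal variant or \( S^{-1}\Lambda \) in the primal-dual variant.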
Chapter 20. Interior Point Filter Line Search: IPOPT
Abstract
In this chapter, an implementation of the interior point filter line-search algorithm for large-scale nonlinear programming proposed by Wächter and Biegler (2005a, b) is presented. As is known, to allow convergence from poor starting points and to enforce progress to the solution, interior point methods have been developed both in trust-region and in line-search frameworks with exact penalty merit functions. For example, KNITRO uses the \( \ell_1 \) exact penalty function (Byrd, Hribar, & Nocedal, 1999; Byrd, Gilbert, & Nocedal, 2000). On the other hand, Fletcher and Leyffer (2002) proposed filter methods as an alternative to merit functions, as a tool for guaranteeing global convergence in algorithms for nonlinear optimization. The idea of the filter is that trial points are accepted if they improve the objective function value or reduce the constraint violation, instead of a combination of these two measures defined by a merit function. Even though filter methods involve different heuristics, they have been adapted to barrier methods in a number of ways. For example, Ulbrich et al. (2004) considered a trust-region filter method that accepts the trial step on the basis of the norm of the optimality conditions. Benson et al. (2002a) proposed several heuristics based on the idea of filter methods, for which improved efficiency is reported compared to their previous merit function approach. The global convergence of an interior point algorithm with a filter line search was analyzed by Wächter and Biegler (2001). Here, the assumptions made for the analysis of global convergence are less restrictive than those made for the line-search interior point methods for nonlinear programming developed, for example, by El-Bakry et al. (1996), Yamashita (1998), or Tits et al. (2003).
Neculai Andrei
Chapter 21. Numerical Studies: Comparisons
Abstract
As already seen, for solving nonlinear optimization applications we have a multitude of algorithms and codes integrated in GAMS which involve the theoretical concepts of the augmented Lagrangian, of sequential linear-quadratic or quadratic programming, of the generalized reduced gradient, and of interior point methods with line search, trust region, or filter, etc. All these algorithms work in conjunction with advanced linear algebra concepts, especially for solving positive definite or indefinite large-scale linear algebraic systems. The performance of optimization algorithms always depends crucially on the capabilities of the linear algebra algorithms, on the line search, on the filter, etc. On the other hand, for solving nonlinear optimization applications from different domains of activity, the present technology involves the usage of algebraically oriented languages. At present, plenty of modeling and optimization technologies are known (Andrei 2013b). The most used, both in academic tests and in real practical applications, are GAMS (Brooke et al. 1998; Rosenthal 2011), AMPL (Fourer et al. 2002), MPL (Kristjansson 1993), LINDO (Schrage 1997), TOMLAB (Holmström 1997), etc.
Neculai Andrei
Backmatter
Metadata
Title
Continuous Nonlinear Optimization for Engineering Applications in GAMS Technology
Written by
Neculai Andrei
Copyright Year
2017
Electronic ISBN
978-3-319-58356-3
Print ISBN
978-3-319-58355-6
DOI
https://doi.org/10.1007/978-3-319-58356-3