
2003 | Book

Lagrange-type Functions in Constrained Non-Convex Optimization

Authors: Alexander Rubinov, Xiaoqi Yang

Publisher: Springer US

Book series: Applied Optimization


About this book

Lagrange and penalty function methods provide a powerful approach, both as a theoretical tool and a computational vehicle, for the study of constrained optimization problems. However, for a nonconvex constrained optimization problem, the classical Lagrange primal-dual method may fail to find a minimum, as a zero duality gap is not always guaranteed. A large penalty parameter is, in general, required for classical quadratic penalty functions in order that minima of penalty problems be a good approximation to those of the original constrained optimization problems. It is well known that penalty functions with very large parameters present an obstacle to numerical implementation. Thus the question arises of how to generalize classical Lagrange and penalty functions in order to obtain an appropriate scheme for reducing constrained optimization problems to unconstrained ones that is suitable for sufficiently broad classes of optimization problems from both the theoretical and computational viewpoints.

Some approaches to such a scheme are studied in this book. One of them is as follows: an unconstrained problem is constructed, in which the objective function is a convolution of the objective and constraint functions of the original problem. While a linear convolution leads to the classical Lagrange function, different kinds of nonlinear convolutions lead to interesting generalizations. We shall call functions that appear as a convolution of the objective function and the constraint functions Lagrange-type functions.
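To make the convolution idea concrete, the classical Lagrange function is the linear convolution of f and g, while a max-type combination is one example of a nonlinear convolution. The display below is an illustrative sketch in our own notation, not necessarily the book's.

```latex
% Linear convolution: the classical Lagrange function
L(x,\lambda) \;=\; f(x) \;+\; \sum_{i=1}^{m} \lambda_i\, g_i(x), \qquad \lambda_i \ge 0 .

% One possible nonlinear convolution (max-type), shown only for illustration
L_{\max}(x,\lambda) \;=\; \max\bigl\{\, f(x),\ \lambda_1 g_1(x),\ \dots,\ \lambda_m g_m(x) \,\bigr\} .
```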

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
Consider the following mathematical programming problem with inequality constraints:

minimize f(x)  subject to  x ∈ X,  g_i(x) ≤ 0,  i = 1,…,m,   (1.1.1)

where f and g_i, i = 1,…,m, are real-valued functions defined on a metric space X. Let g(x) = (g_1(x),…,g_m(x)). We consider g as a map defined on X and mapping into ℝ^m. Denote the problem (1.1.1) by P(f, g).
Alexander Rubinov, Xiaoqi Yang
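As a purely illustrative sketch, a problem of the form P(f, g) can be encoded as an objective together with a vector-valued constraint map. The instance below (objective, constraints, and tolerance) is a hypothetical example of ours, not taken from the book.

```python
import numpy as np

# Hypothetical non-convex instance of P(f, g):
#   minimize f(x) subject to x in X = R^2 and g_i(x) <= 0, i = 1, 2.
def f(x):
    # non-convex objective
    return np.sin(x[0]) + x[1] ** 2

def g(x):
    # constraint map g : X -> R^m with m = 2
    return np.array([
        x[0] ** 2 + x[1] ** 2 - 4.0,  # stay inside a disk of radius 2
        1.0 - x[0],                   # require x_0 >= 1
    ])

def is_feasible(x, tol=1e-9):
    # x is feasible for P(f, g) exactly when every component of g(x) is <= 0
    return bool(np.all(g(x) <= tol))

x = np.array([1.5, 0.2])
print(is_feasible(x), f(x))
```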
Chapter 2. Abstract Convexity
Abstract
For two functions f and h defined on a set Z, the notation h ≤ f means that h(z) ≤ f(z) for all z ∈ Z.
Alexander Rubinov, Xiaoqi Yang
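For orientation, abstract convexity is commonly phrased through this pointwise order: a function is abstract convex with respect to a set H of elementary functions when it coincides with the upper envelope of its minorants from H. The display below is a standard formulation in our notation, not a quotation from the chapter.

```latex
f \ \text{is abstract convex with respect to } H
\quad\Longleftrightarrow\quad
f(z) \;=\; \sup\,\{\, h(z) \;:\; h \in H,\ h \le f \,\}
\quad \text{for all } z \in Z .
```

Taking H to be the set of affine functions on ℝ^n recovers the classical fact that a lower semicontinuous convex function is the supremum of its affine minorants.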
Chapter 3. Lagrange-Type Functions
Abstract
Consider the following problem P(f, g):

minimize f(x)  subject to  x ∈ X,  g_i(x) ≤ 0,  i = 1,…,m,   (3.1.1)

where X is a metric space, f is a real-valued function defined on X, and g maps X into ℝ^m, that is, g(x) = (g_1(x),…,g_m(x)), where the g_i are real-valued functions defined on X.
Alexander Rubinov, Xiaoqi Yang
Chapter 4. Penalty-Type Functions
Abstract
Recall that a relation ≥ defined on a set X is called a pre-order if (i) x ≥ x for all x ∈ X, and (ii) x ≥ y and y ≥ z imply x ≥ z. If x ≥ y and y ≥ x, then x and y are called equivalent elements. A pre-order relation is called complete if, for any two elements x and y, either x ≥ y or y ≥ x.
Alexander Rubinov, Xiaoqi Yang
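A concrete example, ours and purely for illustration: on X = ℝ^m one may compare points through their largest coordinate; the resulting relation is a complete pre-order under which distinct points can be equivalent.

```latex
x \succeq y \quad\Longleftrightarrow\quad \max_{1 \le i \le m} x_i \;\ge\; \max_{1 \le i \le m} y_i .
% Reflexive and transitive, hence a pre-order; any two points are comparable, hence complete.
% For m = 2, the points (1,0) and (0,1) are equivalent but not equal.
```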
Chapter 5. Augmented Lagrangians
Abstract
In the previous chapters we have studied duality relations by using Lagrange-type functions. A different approach is based on the notion of a dualizing parameterization function and the corresponding augmented Lagrangian, which is an augmented (nonlinear) version of the classical linear Lagrange function. An augmented Lagrangian generated by the so-called canonical dualizing parameterization can also be considered as a Lagrange-type function corresponding to a certain convolution function. However, augmented Lagrangians using a general dualizing parameterization function cannot be derived from convolution functions.
Alexander Rubinov, Xiaoqi Yang
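As a point of reference (a standard form from the literature, in our notation and with one common sign convention; not necessarily the book's exact definition), given a dualizing parameterization F(x, u) that recovers the original problem at u = 0 and an augmenting function σ with σ(0) = 0, an augmented Lagrangian can be written as

```latex
\ell(x, y, r) \;=\; \inf_{u}\,\bigl\{\, F(x,u) \;-\; \langle y, u \rangle \;+\; r\,\sigma(u) \,\bigr\} ,
```

with the classical quadratic augmented Lagrangian corresponding to the choice σ(u) = ½‖u‖².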
Chapter 6. Optimality conditions
Abstract
Nonsmooth analysis will play an important role in this chapter. Various calculus rules, such as the mean-value theorem, the chain rule, and the Taylor expansion, have been established; see [13, 24, 26, 23, 102, 125, 128]. In this chapter, we consider the convergence of the first-order and second-order necessary conditions obtained from Lagrange-type and augmented Lagrangian problems to those of constrained optimization problems. In the literature, various methods have been investigated. Arc methods and penalty methods were given in [87] and [5] for inequality constrained optimization problems under C^2 assumptions. Such an analysis for C^{1,1} optimization problems was given in [126]. A method that combines curvilinear paths and trust regions is given in [19] for an unconstrained optimization problem.
Alexander Rubinov, Xiaoqi Yang
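For orientation, in the smooth case the classical first-order necessary (Karush-Kuhn-Tucker) conditions that such schemes approximate read as follows, under a suitable constraint qualification; the statement below is the standard textbook form, not a result quoted from the chapter.

```latex
\nabla f(x^{*}) + \sum_{i=1}^{m} \lambda_i \,\nabla g_i(x^{*}) = 0, \qquad
\lambda_i \ge 0, \qquad \lambda_i\, g_i(x^{*}) = 0, \qquad g_i(x^{*}) \le 0, \qquad i = 1,\dots,m .
```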
Chapter 7. Appendix: Numerical experiments
Abstract
In this section we present results of numerical experiments from [118] that confirm that numerical methods based on small penalty parameters (see Subsection 4.3.9) can be successfully used for global constrained non-convex optimization.
Alexander Rubinov, Xiaoqi Yang
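As a rough, purely illustrative sketch of a penalty approach with a modest penalty parameter: the test problem, the penalty power k = 1/2, the parameter r, and the multistart solver below are all our own assumptions and are not the experiments reported in [118] or the method of Subsection 4.3.9.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative non-convex test problem (hypothetical, not from the book):
#   minimize f(x) subject to g_i(x) <= 0, i = 1, 2
f = lambda x: np.cos(3.0 * x[0]) + (x[1] - 1.0) ** 2
g = lambda x: np.array([x[0] + x[1] - 1.0, -x[0]])

def penalized(x, r=2.0, k=0.5):
    # Penalty function f(x) + r * (total constraint violation) ** k with k < 1;
    # such lower-order penalty terms are often associated with exact penalization
    # at comparatively small penalty parameters.
    violation = np.sum(np.maximum(0.0, g(x)))
    return f(x) + r * violation ** k

# Crude multistart derivative-free local search over the penalized problem
rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(-2.0, 2.0, size=2)
    res = minimize(penalized, x0, method="Nelder-Mead")  # handles the nonsmooth penalty
    if best is None or res.fun < best.fun:
        best = res

print("best point:", best.x, "violation:", np.maximum(0.0, g(best.x)))
```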
Backmatter
Metadata
Title
Lagrange-type Functions in Constrained Non-Convex Optimization
Authors
Alexander Rubinov
Xiaoqi Yang
Copyright year
2003
Publisher
Springer US
Electronic ISBN
978-1-4419-9172-0
Print ISBN
978-1-4613-4821-4
DOI
https://doi.org/10.1007/978-1-4419-9172-0