
1997 | Book

Smooth Nonlinear Optimization in Rⁿ

Author: Tamás Rapcsák

Publisher: Springer US

Book series: Nonconvex Optimization and Its Applications


About this book

Experience gained during a ten-year-long involvement in modelling, programming and application in nonlinear optimization helped me to arrive at the conclusion that in the interest of having successful applications and efficient software production, knowing the structure of the problem to be solved is indispensable. This is the reason why I have chosen the field in question as the sphere of my research. Since in applications, mainly from among the nonconvex optimization models, the differentiable ones proved to be the most efficient in modelling, especially in solving them with computers, I started to deal with the structure of smooth optimization problems. The book, which is a result of more than a decade of research, can be equally useful for researchers and students showing interest in the domain, since the elementary notions necessary for understanding the book constitute a part of the university curriculum. I intended to deal with the key questions of optimization theory, which endeavour, obviously, cannot bear all the marks of completeness. What I consider the most crucial point is the uniform, differential geometric treatment of various questions, which gives the reader the opportunity to learn the structure of a wide range of optimization problems. I am grateful to my family for affording me tranquil, productive circumstances. I express my gratitude to F.

Table of contents

Frontmatter
1. Introduction
Abstract
In Chapter 1, a brief introduction to the contents of the book on smooth nonlinear optimization is provided. Formal mathematical definitions are found in subsequent chapters.
Tamás Rapcsák
2. Nonlinear Optimization Problems
Abstract
After a brief historical survey of nonlinear optimization, classical nonlinear optimization problems, optimality conditions, convex optimization and separation theorems will be studied.
Tamás Rapcsák
3. Optimality Conditions
Abstract
In this part, the necessary and sufficient optimality conditions will be derived for smooth optimization problems.
Tamás Rapcsák
4. Geometric Background of Optimality Conditions
Abstract
Chapter 4 is devoted to investigating the geometric background of the optimality conditions studied in the preceding chapters and to showing some differential geometric aspects.
Tamás Rapcsák
5. Deduction of the Classical Optimality Conditions in Nonlinear Optimization
Abstract
In Chapter 5, the first-order and second-order necessary and sufficient optimality conditions will be deduced from the corresponding statements proved in the preceding chapters.
Tamás Rapcsák
6. Geodesic Convex Functions
Abstract
In this part, the local-global property (every local optimum is global) of a smooth nonlinear optimization problem is studied in certain nonconvex cases: when the set of feasible points is not convex (e.g., under equality constraints), when the problem functions are neither convex nor generalized convex in the classical sense, or when a Riemannian metric may be introduced, related either to a nonlinear coordinate representation (a nonlinear coordinate transformation) on the constraint manifold or to an improvement of the problem structure. The local-global property is connected with the concept of generalized convexity (invexity), which plays an important role in mathematical optimization theory.
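As a rough illustration of the central notion (standard definition, not quoted from the book): a function f defined on a Riemannian manifold M is geodesic convex if its restriction to every geodesic arc is convex, i.e.,
\[
f(\gamma(t)) \le (1-t)\, f(\gamma(0)) + t\, f(\gamma(1)), \qquad t \in [0,1],
\]
for every geodesic segment \(\gamma\) joining two points of the domain; on a geodesic convex set, every local minimizer of such a function is a global one.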
Tamás Rapcsák
7. On the Connectedness of the Solution Set to Complementarity Systems
Abstract
Complementarity systems are related to numerous important topics of nonlinear optimization. The interest in this problem is also motivated by its important and deep connections with nonlinear analysis and by its applications in areas such as engineering, structural mechanics, elasticity theory, lubrication theory, economics, calculus of variations, equilibrium theory of networks etc.
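For orientation (a standard formulation, not taken verbatim from the book), a nonlinear complementarity system asks for a point
\[
x \ge 0, \qquad F(x) \ge 0, \qquad x^{\mathsf T} F(x) = 0,
\]
where \(F:\mathbf{R}^n \to \mathbf{R}^n\) is a given (here assumed smooth) mapping; the solution set of such a system is the object whose connectedness is studied in this chapter.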
Tamás Rapcsák
8. Nonlinear Coordinate Representations
Abstract
Basic theoretical results of smooth optimization can be characterized in the image space (e.g., Giannessi, 1984, 1987, 1989) or in an invariant form by using Riemannian geometry and tensor calculus (e.g., Rapcsák, 1989, 1991/b, Rapcsák and Csendes, 1993). An advantage of the latter approach is that the results do not depend on nonlinear coordinate transformations. Thus, theoretical results and numerical representations can be separated. However, we want to emphasize the importance of good numerical representations of optimization problems, which may lead to more efficient methods.
Tamás Rapcsák
9. Tensors in Optimization
Abstract
Optimization problems can be formulated by using tensors, obtaining in this way the tensor field optimization problems introduced by Rapcsák (1990). In differential geometry, theoretical physics and several applications of mathematics, the concept of tensor proved to be instrumental. In optimization theory, a new class of methods, called tensor methods, was introduced for solving systems of nonlinear equations (Schnabel and Frank, 1984) and for unconstrained optimization using second derivatives (Schnabel and Chow, 1991). Tensor methods are general-purpose methods intended especially for problems where the Jacobian matrix at the solution is singular or ill-conditioned. The description of a linear optimization problem in tensor notation was proposed by Iri (1991) in order to study the integrability of vector and multivector fields associated with interior point methods. The most important feature of tensors is that their values do not change under regular nonlinear coordinate transformations, and thus, this notion seems to be useful for the characterization of structural properties that do not depend on regular nonlinear coordinate transformations. This motivated the idea of using this notion within the framework of nonlinear optimization.
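As a reminder of the transformation behaviour alluded to above (standard tensor calculus notation, not the book's): under a regular coordinate change \(x \mapsto x'(x)\), the components of, e.g., a (1,1)-tensor field transform as
\[
\tilde T^{i}_{\;j} = \frac{\partial x'^{i}}{\partial x^{k}} \, \frac{\partial x^{l}}{\partial x'^{j}} \, T^{k}_{\;l},
\]
so that, while the components change, the tensor itself, and hence any property expressed through it, does not depend on the chosen coordinates.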
Tamás Rapcsák
10. Geodesic Convexity on R₊ⁿ
Abstract
Many of the numerous articles published recently on interior point algorithms focused on vector fields, potential functions and the analysis of the potential reduction methods. Affine and projective scaling methods, based on the affine and projective vector fields, originate from affine and projective metrics. Riemannian geometry underlying interior point methods was investigated in Karmarkar (1990), the affine and projective scaling trajectories in Bayer and Lagarias (1989/a, 1989/b) and the integrability of vector and multivector fields associated with interior point methods in Iri (1991).
Tamás Rapcsák
11. Variable Metric Methods Along Geodesics
Abstract
This part of the book is devoted to the analysis of variable metric methods along geodesics. These methods are iterative, meaning that the algorithms generate a series of points, each point calculated on the basis of the points preceding it, and they are descent methods, which means that, as each new point is generated by the algorithm, the corresponding value of some function, evaluated at the most recent point, decreases. The theory of iterative algorithms can be divided into three parts. The first is concerned with the creation of the algorithms based on the structure of problems and the efficiency of computers. The second is the verification of whether a given algorithm generates a sequence converging to a solution. This aspect is referred to as global convergence analysis, since the question is whether an algorithm starting from an arbitrary initial point, be it far from the solutions, converges to a solution. Thus, an algorithm is said to be globally convergent if, for arbitrary starting points, the algorithm is guaranteed to generate a sequence of points converging to a solution. Many of the most important algorithms for solving nonlinear optimization problems are not globally convergent in their purest form, and thus occasionally generate sequences that either do not converge at all, or converge to points that are not solutions. The third is referred to as local convergence analysis and is concerned with the rate at which the generated sequence of points converges to a solution.
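A minimal sketch of the kind of iteration meant here (our notation; the precise schemes are given in the chapter): at the current iterate the method moves along the geodesic emanating in the direction of the negative gradient taken with respect to the current metric,
\[
x_{k+1} = \gamma_k(t_k), \qquad \gamma_k(0) = x_k, \quad \dot\gamma_k(0) = -\operatorname{grad}_{G_k} f(x_k),
\]
where \(G_k\) denotes the (variable) metric at \(x_k\) and \(t_k > 0\) is a step length chosen so that \(f(x_{k+1}) < f(x_k)\).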
Tamás Rapcsák
12. Polynomial Variable Metric Methods For Linear Optimization
Abstract
Since the elaboration of the framework by Karmarkar, many interior point algorithms have been proposed for linear optimization. Although these variants can be classified into main categories, e.g., (i) projective methods, (ii) “pure” affine-scaling methods, (iii) path-following methods, (iv) affine potential reduction methods, each variant needs a separate investigation of its convergence or polynomial complexity. Thus, there is a natural question: how should we analyze the behaviour of these algorithms? A good survey of interior point methods was published in Terlaky (ed., 1996).
Tamás Rapcsák
13. Special Function Classes
Abstract
In this Chapter, special function classes are studied. First, geodesic (strictly) quasiconvex functions, geodesic (strictly) pseudoconvex functions and differences of two geodesic convex functions are introduced, then convex transformable functions, involving range and domain transformations as well, are treated. These results were mainly published in Rapcsák (1994). In the last section, the structure of smooth pseudolinear functions is characterized, i.e., the explicit formulation of the gradients of smooth pseudolinear functions is given (Rapcsák, 1991). In the next Chapter, it will be shown that this result is the solution of the Fenchel problem of level sets in a special case.
Tamás Rapcsák
14. Fenchel’s Unsolved Problem of Level Sets
Abstract
It is well-known that a convex function has convex less-equal level sets. That the converse is not true was realized by de Finetti (1949). The problem of level sets, discussed first by Fenchel in 1953, is as follows: Under what conditions is the family of level sets of a convex function a nested family of closed convex sets? Fenchel (1953, 1956) gave necessary and sufficient conditions for the existence of a convex function with the prescribed level sets, and for the existence of a smooth convex function under the assumption that the given subsets are the level sets of a twice differentiable function. In the first case, seven conditions were deduced; while the first six are simple and intuitive, the seventh is rather complicated. This fact, and the additional assumption in the smooth case, according to which the given subsets are the level sets of a twice differentiable function, seem to be the motivation why Roberts and Varberg (1973, p. 271) stated anew the following problem of level sets: “What “nice” conditions on a nested family of convex sets will ensure that it is the family of level sets of a convex function?” In the sequel, the notions of convexifiability and concavifiability are used as synonyms, because if a function f is convex, then –f is concave.
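In standard notation (not specific to the book), the less-equal level sets in question are the sets
\[
L_\alpha(f) = \{\, x \in \mathbf{R}^n : f(x) \le \alpha \,\}, \qquad \alpha \in \mathbf{R},
\]
and the convexity of f immediately gives the convexity of every \(L_\alpha(f)\), while the converse, as noted above, fails in general.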
Tamás Rapcsák
15. An Improvement of the Lagrange Multiplier Rule for Smooth Optimization Problems
Abstract
The famous Lagrange multiplier rule was introduced in “Lagrange, J. L., Mécanique analytique I-II, Paris, 1788” for minimizing a function subject to equality constraints, and thus, for finding the stable equilibrium of a mechanical system. Since then, a great many applications of wide variety have been published in the fields of theory (e.g., physics, mathematics, operations research, management science etc.), methodology and practice without changing its scope from the point of view of mathematics. Prékopa (1980) stated that “This ingenious method was unable to attract mathematics students and teachers for a long time. The method of presentation used nowadays by many instructors is to prove the necessary condition first in the case of inequality constraints, give a geometric meaning to this, and refer to the case of equality constraints. This can make the students more enthusiastic about this theory.” Here, an improvement of the sufficiency part of the Lagrange multiplier rule is presented. That is to say, a result stronger than that of Lagrange, with respect to the sufficiency part, can be proved under the same type of conditions he used, based on a different, perhaps deeper, mathematical investigation. Moreover, the geometric meaning of the Lagrange theorem is brought into the limelight.
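For reference (the classical formulation in standard notation, not quoted from the book), the rule concerns problems of the form min f(x) subject to h_i(x) = 0, i = 1, ..., m, and its first-order necessary condition states that, at a local minimizer x* satisfying a suitable regularity assumption on the constraints, there exist multipliers \(\lambda_1, \dots, \lambda_m\) such that
\[
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla h_i(x^*) = 0, \qquad h_i(x^*) = 0, \quad i = 1, \dots, m.
\]
The chapter strengthens the sufficiency counterpart of this statement.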
Tamás Rapcsák
Backmatter
Metadata
Title
Smooth Nonlinear Optimization in Rⁿ
Author
Tamás Rapcsák
Copyright year
1997
Publisher
Springer US
Electronic ISBN
978-1-4615-6357-0
Print ISBN
978-1-4613-7920-1
DOI
https://doi.org/10.1007/978-1-4615-6357-0