2000 | Book

Convex Analysis and Nonlinear Optimization

Theory and Examples

Authors: Jonathan M. Borwein, Adrian S. Lewis

Publisher: Springer New York

Book Series: CMS Books in Mathematics

About this book

Optimization is a rich and thriving mathematical discipline. The theory underlying current computational optimization techniques grows ever more sophisticated. The powerful and elegant language of convex analysis unifies much of this theory. The aim of this book is to provide a concise, accessible account of convex analysis and its applications and extensions, for a broad audience. It can serve as a teaching text, at roughly the level of first-year graduate students. While the main body of the text is self-contained, each section concludes with an often extensive set of optional exercises. The new edition adds material on semismooth optimization, as well as several new proofs that will make this book even more self-contained.

Table of Contents

Frontmatter
Chapter 1. Background
Abstract
We begin by reviewing some of the fundamental algebraic, geometric and analytic ideas we use throughout the book. Our setting, for most of the book, is an arbitrary Euclidean space E, by which we mean a finite-dimensional vector space over the reals R, equipped with an inner product ⟨·,·⟩. We would lose no generality if we considered only the space Rⁿ of real (column) n-vectors (with its standard inner product), but a more abstract, coordinate-free notation is often more flexible and elegant.
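For concreteness, the standard inner product on Rⁿ referred to above is the familiar dot product:
$$ \langle x, y \rangle \;=\; \sum_{i=1}^{n} x_i y_i \qquad (x, y \in \mathbf{R}^n). $$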
Jonathan M. Borwein, Adrian S. Lewis
Chapter 2. Inequality Constraints
Abstract
Early in multivariate calculus we learn the significance of differentiability in finding minimizers. In this section we begin our study of the interplay between convexity and differentiability in optimality conditions.
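To preview that interplay with a one-line example (this is the book's First order sufficient condition (2.1.2)): if f is convex and differentiable, and x̄ is a critical point, ∇f(x̄) = 0, then the gradient inequality forces
$$ f(y) \;\ge\; f(\bar{x}) + \langle \nabla f(\bar{x}),\, y - \bar{x} \rangle \;=\; f(\bar{x}) \quad \text{for all } y, $$
so x̄ is automatically a global minimizer.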
Jonathan M. Borwein, Adrian S. Lewis
Chapter 3. Fenchel Duality
Abstract
We have already seen, in the First order sufficient condition (2.1.2), one benefit of convexity in optimization: critical points of convex functions are global minimizers. In this section we extend the types of functions we consider in two important ways (both extensions are illustrated by the example below):
(i) We do not require f to be differentiable.
(ii) We allow f to take the value +∞.
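A standard example exhibiting both extensions at once is the indicator function of a convex set C ⊂ E,
$$ \delta_C(x) \;=\; \begin{cases} 0, & x \in C, \\ +\infty, & x \notin C, \end{cases} $$
which is convex but neither everywhere finite nor everywhere differentiable.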
Jonathan M. Borwein, Adrian S. Lewis
Chapter 4. Convex Analysis
Abstract
We have already seen that linear functions are always continuous. More generally, a remarkable feature of convex functions on E is that they must be continuous on the interior of their domains. Part of the surprise is that an algebraic/geometric assumption (convexity) leads to a topological conclusion (continuity). It is this powerful fact that guarantees the usefulness of regularity conditions like A dom f ∩ cont g ≠ ∅ (3.3.9), which we studied in the previous section.
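Continuity can genuinely fail at the boundary of the domain. As a simple illustration of our own (not taken from the text), the function on E = R given by
$$ f(x) \;=\; \begin{cases} 1, & x = 0, \\ 0, & 0 < x \le 1, \\ +\infty, & \text{otherwise}, \end{cases} $$
is convex and continuous on the interior (0, 1) of its domain, yet discontinuous at the boundary point 0.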
Jonathan M. Borwein, Adrian S. Lewis
Chapter 5. Special Cases
Abstract
In our earlier section on theorems of the alternative (Section 2.2), we observed that finitely generated cones are closed. Remarkably, a finite linear-algebraic assumption leads to a topological conclusion. In this section we pursue the consequences of this type of assumption in convex analysis.
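Recall that a finitely generated cone is one of the form
$$ C \;=\; \Big\{\, \sum_{i=1}^{m} \lambda_i a_i \;:\; \lambda_1, \dots, \lambda_m \ge 0 \,\Big\} $$
for finitely many fixed vectors a_1, …, a_m in E.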
Jonathan M. Borwein, Adrian S. Lewis
Chapter 6. Nonsmooth Optimization
Abstract
From the perspective of optimization, the subdifferential ∂f(·) of a convex function f has many of the useful properties of the derivative. Some examples: it gives the necessary optimality condition 0 ∈ ∂f(x) when the point x is a (local) minimizer (Proposition 3.1.5); it reduces to {∇f(x)} when f is differentiable at x (Corollary 3.1.10); and it often satisfies certain calculus rules such as ∂(f + g)(x) = ∂f(x) + ∂g(x) (Theorem 3.3.5). For a variety of reasons, if the function f is not convex, this subdifferential is not a particularly helpful idea. This makes it very tempting to look for alternative definitions of the subdifferential for nonconvex functions. In this section we outline some examples; the most appropriate choice often depends on context.
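For instance, for the convex function f(x) = |x| on R,
$$ \partial f(x) \;=\; \begin{cases} \{-1\}, & x < 0, \\ [-1, 1], & x = 0, \\ \{+1\}, & x > 0, \end{cases} $$
so the condition 0 ∈ ∂f(0) detects the minimizer x = 0 even though f is not differentiable there.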
Jonathan M. Borwein, Adrian S. Lewis
Chapter 7. Karush-Kuhn-Tucker Theory
Abstract
Our main optimization models so far are inequality-constrained. A little thought shows our techniques are not useful for equality-constrained problems like
$$ \inf \{\, f(x) \;:\; h(x) = 0 \,\}. $$
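The classical remedy, written here for a single real-valued constraint h, is the Lagrange multiplier condition: under a regularity assumption such as ∇h(x̄) ≠ 0, a local minimizer x̄ admits a multiplier λ ∈ R with
$$ \nabla f(\bar{x}) + \lambda\, \nabla h(\bar{x}) \;=\; 0, \qquad h(\bar{x}) \;=\; 0. $$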
Jonathan M. Borwein, Adrian S. Lewis
Chapter 8. Fixed Points
Abstract
Many questions in optimization and analysis reduce to solving a nonlinear equation h(x) = 0, for some function h: E → E. Equivalently, if we define another map f = I − h (where I is the identity map), we seek a point x in E satisfying f(x) = x; we call x a fixed point of f.
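For example, when f is a contraction, meaning ‖f(x) − f(y)‖ ≤ γ‖x − y‖ for some γ < 1, the Banach contraction principle yields a unique fixed point x̄, obtained as the limit of the iteration
$$ x_{k+1} \;=\; f(x_k) \;=\; x_k - h(x_k), \qquad \|x_k - \bar{x}\| \;\le\; \gamma^k \|x_0 - \bar{x}\|. $$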
Jonathan M. Borwein, Adrian S. Lewis
Chapter 9. Postscript: Infinite Versus Finite Dimensions
Abstract
We have chosen to finish this book by indicating many of the ways in which finite dimensionality has played a critical role in the previous chapters. While our list is far from complete, it should help illuminate the places in which care is appropriate when “generalizing”. Many of our main results (on subgradients, variational principles, open mappings, Fenchel duality, metric regularity) immediately generalize to at least reflexive Banach spaces. When they do not, it is principally because the compactness properties and support properties of convex sets have become significantly more subtle. There are also many properties that characterize Hilbert space. The most striking is perhaps the deep result that a Banach space X is (isomorphic to) Hilbert space if and only if every closed vector subspace is complemented in X. Especially with respect to best approximation properties, it is Hilbert space that best captures the properties of Euclidean space.
Jonathan M. Borwein, Adrian S. Lewis
Chapter 10. List of Results and Notation
Jonathan M. Borwein, Adrian S. Lewis
Backmatter
Metadata
Title
Convex Analysis and Nonlinear Optimization
Authors
Jonathan M. Borwein
Adrian S. Lewis
Copyright Year
2000
Publisher
Springer New York
Electronic ISBN
978-1-4757-9859-3
Print ISBN
978-1-4757-9861-6
DOI
https://doi.org/10.1007/978-1-4757-9859-3