
About this book

This book is an abridged version of our two-volume opus Convex Analysis and Minimization Algorithms [18], about which we have received very positive feedback from users, readers, and lecturers ever since it was published by Springer-Verlag in 1993. Its pedagogical qualities were particularly appreciated, in combination with a rather advanced technical material. Now [18] has a dual but clearly defined nature:

- an introduction to the basic concepts in convex analysis,
- a study of convex minimization problems (with an emphasis on numerical algorithms),

and insists on their mutual interpenetration. It is our feeling that the above basic introduction is much needed in the scientific community. This is the motivation for the present edition, our intention being to create a tool useful to teach convex analysis. We have thus extracted from [18] its "backbone" devoted to convex analysis, namely Chaps. III-VI and X. Apart from some local improvements, the present text is mostly a copy of the corresponding chapters. The main difference is that we have deleted material deemed too advanced for an introduction, or too closely attached to numerical algorithms. Further, we have included exercises, whose degree of difficulty is indicated by 0, 1 or 2 stars. Finally, the index has been considerably enriched. Just as in [18], each chapter is presented as a "lesson", in the sense of our old masters, treating a given subject in its entirety.

Table of contents

Frontmatter

0. Introduction: Notation, Elementary Results

Abstract
We start this chapter by listing some basic concepts, which are or should be well known — but it is good sometimes to return to basics. This gives us the opportunity of making precise the system of notation used in this book. For example, some readers may have forgotten that “i.e.” means id est, the literal translation of “that is (to say)”. If we get closer to mathematics, S∖{x} denotes the set obtained by depriving a set S of a point x ∈ S. We also mention that, if f is a function, f⁻¹(y) is the inverse image of y, i.e. the set of all points x such that f(x) = y. When f is invertible, this set is the singleton {f⁻¹(y)}.
Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal
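As a toy illustration of the notation above (not part of the book's text, and restricted here to a finite domain so the set can be enumerated), the inverse image f⁻¹(y) can be computed directly:

```python
# Toy sketch: inverse image f^{-1}(y) of a point y under f,
# computed over a finite set of sample points.

def inverse_image(f, domain, y):
    """Return the set of all x in `domain` with f(x) == y."""
    return {x for x in domain if f(x) == y}

f = lambda x: x * x           # not injective, so f^{-1}(y) may contain several points
S = range(-3, 4)

print(inverse_image(f, S, 4))   # {-2, 2}: two preimages
print(inverse_image(f, S, 5))   # set(): the inverse image may be empty
```

When f is injective on the domain, each nonempty inverse image is a singleton, matching the remark in the abstract.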

A. Convex Sets

Abstract
Our working space is ℝⁿ. We recall that this space has the structure of a real vector space (its elements being called vectors), and also of an affine space (a set of points); the latter can be identified with the vector space ℝⁿ whenever an origin is specified. It is not always possible, nor even desirable, to distinguish vectors and points.
Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal

B. Convex Functions

Abstract
The study of convex functions goes together with that of convex sets; accordingly, this chapter and the previous one constitute the first serious steps into the world of convex analysis. This chapter has no pretension to exhaustivity; similarly to Chap. A, it has been kept minimal, containing what is necessary to comprehend the sequel.
Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal

C. Sublinearity and Support Functions

Abstract
In classical real analysis, the simplest functions are linear. In convex analysis, the next simplest convex functions (apart from the affine functions, widely used in §B.1.2) are the so-called sublinear functions. We give three motivations for their study.
Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal
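A standard example of a sublinear function (this example is classical, not quoted from the abstract) is the Euclidean norm: it is positively homogeneous and subadditive. A minimal numerical sketch checking both properties on random vectors:

```python
# Sketch: the Euclidean norm is sublinear, i.e.
#   positively homogeneous:  ||t x|| = t ||x||  for t >= 0, and
#   subadditive:             ||x + y|| <= ||x|| + ||y||.
import math
import random

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1) for _ in range(3)]
    y = [random.uniform(-1, 1) for _ in range(3)]
    t = random.uniform(0, 5)
    # positive homogeneity (up to floating-point tolerance)
    assert math.isclose(norm([t * xi for xi in x]), t * norm(x), abs_tol=1e-12)
    # subadditivity (triangle inequality)
    assert norm([xi + yi for xi, yi in zip(x, y)]) <= norm(x) + norm(y) + 1e-12

print("sublinearity checks passed")
```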

D. Subdifferentials of Finite Convex Functions

Abstract
We have mentioned in our preamble to Chap. C that sublinearity permits the approximation of convex functions to first order around a given point. In fact, we will show here that, if f : ℝⁿ → ℝ is convex and x ∈ ℝⁿ is fixed, then the function
$$ d \mapsto f'(x, d) := \mathop {\lim }\limits_{t \downarrow 0} \frac{{f(x + td) - f(x)}}{t} $$
exists and is finite sublinear. Furthermore, f′(x, ·) approximates f around x in the sense that
$$ f(x + h) = f(x) + f'(x, h) + o(\|h\|). $$
(0.1)
Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal
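The directional derivative f′(x, d) can be estimated numerically by a one-sided difference quotient with small t. The sketch below (an illustration of ours, not from the book) uses the convex, nonsmooth function f(x) = max(x₁, x₂); at the kink x = (0, 0) one checks by hand that f′(x, d) = max(d₁, d₂), which is indeed sublinear in d:

```python
# Sketch: estimating the one-sided directional derivative
#   f'(x, d) = lim_{t -> 0+} (f(x + t d) - f(x)) / t
# for the convex, nonsmooth function f(x) = max(x1, x2).

def f(x):
    return max(x)

def dir_deriv(f, x, d, t=1e-8):
    """One-sided difference quotient approximating f'(x, d)."""
    xt = [xi + t * di for xi, di in zip(x, d)]
    return (f(xt) - f(x)) / t

x = [0.0, 0.0]                       # kink of f: both components active
print(dir_deriv(f, x, [1.0, 0.0]))   # ≈ 1.0 = max(1, 0)
print(dir_deriv(f, x, [1.0, -1.0]))  # ≈ 1.0 = max(1, -1)
print(dir_deriv(f, x, [-1.0, -1.0])) # ≈ -1.0 = max(-1, -1)
```

Note that f′(x, ·) here is max(d₁, d₂) rather than a linear form, which is exactly why sublinear (not merely linear) functions are needed for first-order approximation of convex functions.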

E. Conjugacy in Convex Analysis

Abstract
In classical real analysis, the gradient of a differentiable function f : ℝⁿ → ℝ plays a key role - to say the least. Considering this gradient as a mapping x ↦ s(x) = ∇f(x) from (some subset X of) ℝⁿ to (some subset S of) ℝⁿ, an interesting object is then its inverse: to a given s ∈ S, associate the x ∈ X such that s = ∇f(x). This question may be meaningless - not all mappings are invertible! - but it could for example be considered locally, taking for X × S a neighborhood of some (x₀, s₀ = ∇f(x₀)), with ∇²f continuous and invertible at x₀ (use the local inverse theorem).
Jean-Baptiste Hiriart-Urruty, Claude Lemaréchal
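A one-dimensional sketch of this inversion (our illustration, with a choice of f not taken from the abstract): for f(x) = exp(x) the gradient map s(x) = f′(x) = exp(x) is strictly increasing, hence invertible, and its inverse on S = (0, +∞) is x(s) = log(s):

```python
# Sketch: inverting the gradient map of a smooth convex function.
# For f(x) = exp(x), the map s(x) = f'(x) = exp(x) is invertible,
# with inverse x(s) = log(s) on S = (0, +inf).
import math

def grad_f(x):
    return math.exp(x)       # f'(x) for f(x) = exp(x)

def inv_grad_f(s):
    return math.log(s)       # solves s = f'(x) for x, valid for s > 0

for x0 in (-1.0, 0.0, 2.5):
    s0 = grad_f(x0)
    assert math.isclose(inv_grad_f(s0), x0, abs_tol=1e-12)

print("gradient map inverted correctly on sample points")
```

Here f″(x) = exp(x) > 0 everywhere, so the local inverse theorem mentioned above applies at every point; conjugacy gives a global, purely convex-analytic way to perform such inversions.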

Backmatter

Further information