
2015 | Book

Convex Optimization in Normed Spaces

Theory, Methods and Examples


About this book

This work is intended to serve as a guide for graduate students and researchers who wish to get acquainted with the main theoretical and practical tools for the numerical minimization of convex functions on Hilbert spaces. It therefore contains the main tools that are necessary to conduct independent research on the topic. It is also a concise, easy-to-follow and self-contained textbook, which may be useful for any researcher working in related fields, as well as for teachers giving graduate-level courses on the topic. It contains a thorough review of the extant literature, including both classical and state-of-the-art references.

Table of Contents

Frontmatter
1. Basic Functional Analysis
Abstract
Functional and convex analysis are closely intertwined. In this chapter we recall the basic concepts and results from functional analysis and calculus that will be needed throughout this book. A first section is devoted to general normed spaces. We begin by establishing some of their main properties, with an emphasis on the linear functions between spaces. This leads us to bounded linear functionals and the topological dual. Second, we review the Hahn–Banach Separation Theorem, a very powerful tool with important consequences. It also illustrates the fact that the boundaries between functional and convex analysis can be rather fuzzy at times. Next, we discuss some relevant results concerning the weak topology, especially in terms of closedness and compactness. Finally, we include a subsection on differential calculus, which also provides an introduction to standard smooth optimization techniques. The second section deals with Hilbert spaces, and their very rich geometric structure, including the ideas of projection and orthogonality. We also revisit some of the general concepts from the first section (duality, reflexivity, weak convergence) in the light of this geometry.
Juan Peypouquet
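To make the geometric structure mentioned in this abstract concrete, the projection onto a closed convex set admits a variational characterization (a standard result, stated here for reference rather than quoted from the chapter): for a nonempty closed convex set $C \subset H$ and any $x \in H$, the projection $P_C x$ is the unique point of $C$ such that
\[
\langle x - P_C x,\; c - P_C x \rangle \le 0 \quad \text{for all } c \in C,
\]
or, equivalently, the unique minimizer of $c \mapsto \|x - c\|$ over $C$.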
2. Existence of Minimizers
Abstract
In this chapter, we present sufficient conditions for an extended real-valued function to have minimizers. After discussing the main concepts, we begin by addressing the existence issue in abstract Hausdorff spaces, under certain (one-sided) continuity and compactness hypotheses. We also present Ekeland’s Variational Principle, which provides the existence of approximate minimizers that are strict in some sense. Afterward, we study the minimization of convex functions in reflexive spaces, where the hypotheses are easier to verify. Although it is possible to focus directly on this setting, we prefer to take the longer path: the techniques used in the abstract framework remain useful for problems that do not fit exactly into the convex reflexive setting, but in which convexity and reflexivity still play an important role.
Juan Peypouquet
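For reference, a standard formulation of Ekeland’s Variational Principle (the chapter’s precise statement may differ) reads as follows: let $(X, d)$ be a complete metric space and let $f : X \to \mathbb{R} \cup \{+\infty\}$ be proper, lower semicontinuous and bounded below. Given $\varepsilon > 0$, a point $x_\varepsilon$ with $f(x_\varepsilon) \le \inf_X f + \varepsilon$, and $\lambda > 0$, there exists $\bar{x} \in X$ such that
\[
f(\bar{x}) \le f(x_\varepsilon), \qquad d(\bar{x}, x_\varepsilon) \le \lambda, \qquad f(z) > f(\bar{x}) - \frac{\varepsilon}{\lambda}\, d(z, \bar{x}) \quad \text{for all } z \ne \bar{x}.
\]
The last inequality is the sense in which the approximate minimizer $\bar{x}$ is strict.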
3. Convex Analysis and Subdifferential Calculus
Abstract
This chapter deals with several properties of convex functions, especially in connection with their regularity, on the one hand, and the characterization of their minimizers, on the other. We explore sufficient conditions for a convex function to be continuous, as well as several connections between convexity and differentiability. Next, we present the notion of subgradient, a generalization of the derivative for nondifferentiable convex functions that allows us to characterize their minimizers. After discussing conditions that guarantee the existence of subgradients, we present the basic (yet subtle) calculus rules, along with their remarkable consequences. Other important theoretical and practical tools, such as the Fenchel conjugate and Lagrange multipliers, are also studied; these are particularly useful for solving constrained problems.
Juan Peypouquet
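For orientation, the subdifferential and the resulting characterization of minimizers (Fermat’s rule) can be stated in standard form, with notation that may differ slightly from the chapter’s: for a proper convex function $f : X \to \mathbb{R} \cup \{+\infty\}$ on a normed space $X$ with dual $X^*$,
\[
\partial f(x) = \{\, x^* \in X^* : f(y) \ge f(x) + \langle x^*, y - x \rangle \ \text{for all } y \in X \,\},
\]
and $\hat{x}$ minimizes $f$ on $X$ if and only if $0 \in \partial f(\hat{x})$.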
4. Examples
Abstract
The tools presented in the previous chapters are useful, on the one hand, to prove that a wide variety of optimization problems have solutions and, on the other, to provide characterizations that allow us to determine those solutions. In this chapter, we present a short selection of problems to illustrate some of those tools. We begin by revisiting some results from functional analysis concerning the maximization of bounded linear functionals and the realization of the dual norm. Next, we discuss some problems in optimal control and the calculus of variations. Another standard application of these convex analysis techniques lies in the field of elliptic partial differential equations. We review the theorems of Stampacchia and Lax–Milgram, along with some variations of Poisson’s equation, including the obstacle problem and the p-Laplacian. We finish by commenting on a problem of data compression and restoration.
Juan Peypouquet
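As one standard instance of the variational approach to elliptic equations mentioned above (stated here in a common form, not necessarily the chapter’s), the weak solution $u \in H^1_0(\Omega)$ of Poisson’s equation $-\Delta u = f$ on a bounded open set $\Omega$, with $f \in L^2(\Omega)$, is the unique minimizer of the strictly convex, coercive energy
\[
J(v) = \frac{1}{2} \int_\Omega |\nabla v|^2 \, dx - \int_\Omega f v \, dx, \qquad v \in H^1_0(\Omega).
\]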
5. Problem-Solving Strategies
Abstract
Only in rare cases can problems in function spaces be solved analytically and exactly. On most occasions, it is necessary to apply computational methods to approximate the solutions. In this chapter, we discuss some of the basic general strategies that can be applied. First, we present several connections between optimization and discretization, along with their role in the problem-solving process. Next, we introduce the idea of iterative procedures and discuss some abstract tools for proving their convergence. Finally, we comment on some ideas that are useful to simplify or reduce problems, in order to make them tractable or solvable more efficiently.
Juan Peypouquet
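As a minimal illustration of the “discretize, then solve iteratively” strategy, the following sketch (our own illustration under simplifying assumptions, not code from the book) discretizes the one-dimensional Dirichlet energy for $-u'' = f$ on $(0,1)$ by finite differences and minimizes the resulting quadratic by gradient descent.

    import numpy as np

    # Discretize  min_u  (1/2) * int |u'|^2 - int f*u  on (0,1), u(0) = u(1) = 0,
    # with n interior grid points and centered finite differences.
    n, h = 99, 1.0 / 100
    x = np.linspace(h, 1.0 - h, n)
    f = np.sin(np.pi * x)                      # sample right-hand side

    # Discrete energy: J(u) = (1/2) u^T A u - b^T u, A = scaled 1D Laplacian.
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    b = h * f

    # Gradient descent with a fixed step 1/L, where L = 4/h bounds the largest
    # eigenvalue of A.
    u = np.zeros(n)
    step = h / 4.0
    for _ in range(20000):
        grad = A @ u - b                       # gradient of the quadratic energy
        u -= step * grad
        if np.linalg.norm(grad) < 1e-10:       # simple stopping criterion
            break

    # The discrete minimizer approximates the exact solution u(x) = sin(pi*x) / pi**2.
    print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))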
6. Keynote Iterative Methods
Abstract
In this chapter we give an introduction to the basic (sub)gradient-based methods for minimizing a convex function on a Hilbert space. We pay special attention to the proximal point algorithm and the gradient method, which are interpreted as time discretizations of the steepest descent differential inclusion. Moreover, these methods, along with some extensions and variants, are the building blocks for other, more sophisticated methods that exploit particular features of the problems, such as the structure of the feasible (or constraint) set. The choice of proximal- or gradient-type schemes depends strongly on the regularity of the objective function.
Throughout this chapter, H is a real Hilbert space.
Juan Peypouquet
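In standard notation (step sizes $\lambda_k > 0$; the chapter’s exact presentation may vary), the interpretation mentioned above reads as follows. The steepest descent dynamic is the differential inclusion
\[
-\dot{u}(t) \in \partial f(u(t)),
\]
whose explicit time discretization gives the gradient method (for differentiable $f$)
\[
x_{k+1} = x_k - \lambda_k \nabla f(x_k),
\]
while the implicit discretization $\frac{x_k - x_{k+1}}{\lambda_k} \in \partial f(x_{k+1})$ gives the proximal point algorithm
\[
x_{k+1} = \operatorname*{argmin}_{x \in H} \Big\{\, f(x) + \frac{1}{2\lambda_k} \|x - x_k\|^2 \,\Big\}.
\]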
Backmatter
Metadata
Title
Convex Optimization in Normed Spaces
Author
Juan Peypouquet
Copyright Year
2015
Electronic ISBN
978-3-319-13710-0
Print ISBN
978-3-319-13709-4
DOI
https://doi.org/10.1007/978-3-319-13710-0
