
2019 | Book

Convex and Stochastic Optimization


About this book

This textbook provides an introduction to convex duality for optimization problems in Banach spaces, integration theory, and their application to stochastic programming problems in a static or dynamic setting. It introduces and analyses the main algorithms for stochastic programs, with careful attention to the theoretical aspects.

The reader is shown how these tools can be applied to various fields, including approximation theory, semidefinite and second-order cone programming and linear decision rules.

This textbook is recommended for students, engineers and researchers who are willing to take a rigorous approach to the mathematics involved in the application of duality theory to optimization with uncertainty.

Table of Contents

Frontmatter
Chapter 1. A Convex Optimization Toolbox
Abstract
This chapter presents the duality theory for optimization problems, by both the minimax and perturbation approach, in a Banach space setting. Under some stability (qualification) hypotheses, it is shown that the dual problem has a nonempty and bounded set of solutions. This leads to the subdifferential calculus, which appears to be nothing but a partial subdifferential rule. Applications are provided to the infimal convolution, as well as recession and perspective functions. The relaxation of some nonconvex problems is analyzed thanks to the Shapley–Folkman theorem.
J. Frédéric Bonnans
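The Fenchel conjugate at the heart of this duality theory can be checked numerically. Below is a minimal sketch (not from the book; the grid and test function are illustrative) computing the discrete conjugate f*(y) = sup_x { xy − f(x) } of f(x) = x²/2, which is its own conjugate.

```python
import numpy as np

# Discretize f(x) = x^2 / 2 and compute its Fenchel conjugate
# f*(y) = sup_x { x*y - f(x) } by brute force over the grid.
x = np.linspace(-5.0, 5.0, 2001)
f = 0.5 * x**2

y = np.linspace(-2.0, 2.0, 9)
# For each slope y, take the maximum of the affine gap x*y - f(x).
f_star = (np.outer(y, x) - f).max(axis=1)
# For this f, the conjugate coincides with f itself: f*(y) = y^2 / 2.
```

Since the suprema are attained at x = y, which lies on the grid, the discrete conjugate here matches y²/2 exactly.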
Chapter 2. Semidefinite and Semi-infinite Programming
Abstract
This chapter discusses optimization problems in the cone of positive semidefinite matrices, and the duality theory for such ‘linear’ problems. We relate convex rotationally invariant matrix functions to convex functions of the spectrum; this allows us to compute the conjugate of the logarithmic barrier function and the dual of associated optimization problems. The semidefinite relaxation of problems with nonconvex quadratic cost and constraints is presented. Second-order cone optimization is shown to be a subclass of semidefinite programming. The second part of the chapter is devoted to semi-infinite programming and its dual in the space of measures with finite support, with application to Chebyshev approximation and to one-dimensional polynomial optimization.
J. Frédéric Bonnans
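The embedding of the second-order cone into the semidefinite cone rests on the classical fact that the "arrow" matrix [[t, xᵀ], [x, tI]] is positive semidefinite exactly when ‖x‖₂ ≤ t. A small numerical check (the vector below is randomly generated for illustration):

```python
import numpy as np

def arrow(t, x):
    """Arrow matrix [[t, x^T], [x, t*I]]; PSD iff ||x||_2 <= t."""
    n = len(x)
    M = t * np.eye(n + 1)
    M[0, 1:] = x
    M[1:, 0] = x
    return M

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
nrm = np.linalg.norm(x)

# Smallest eigenvalue is t - ||x||: nonnegative inside the cone, negative outside.
eig_inside = np.linalg.eigvalsh(arrow(nrm + 0.1, x))[0]
eig_outside = np.linalg.eigvalsh(arrow(nrm - 0.1, x))[0]
```

The eigenvalues of the arrow matrix are t ± ‖x‖ (plus t with multiplicity n − 1), which makes the equivalence immediate.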
Chapter 3. An Integration Toolbox
Abstract
This chapter gives a concise presentation of integration theory in a general measure space, including classical theorems on the limit of integrals. It gives an extension to the Bochner integrals, needed for measurable functions with values in a Banach space. Then it shows how to compute the conjugate and subdifferential of integral functionals, either in the convex case, based on convex integrand theory, or in the case of Carathéodory integrands. Then optimization problems with integral cost and constraint functions are analyzed using the Shapley–Folkman theorem.
J. Frédéric Bonnans
Chapter 4. Risk Measures
Abstract
Minimizing an expectation gives little control of the risk of a reward far from the expected value. It is therefore useful to design functionals whose minimization allows one to trade off risk against expected value. This chapter gives a concise introduction to the corresponding theory of risk measures. After an introduction to utility functions, monetary measures of risk are introduced and connected to their acceptance sets. Then deviation and semi-deviation measures, as well as the (conditional) value at risk, are discussed.
J. Frédéric Bonnans
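The conditional value at risk admits the Rockafellar–Uryasev variational representation CVaRₐ(X) = min_t { t + E[(X − t)₊]/(1 − α) }, whose minimizer is the value at risk itself. A minimal sketch (the Gaussian loss sample and grid are illustrative, not from the book) compares this formula with the direct tail average:

```python
import numpy as np

rng = np.random.default_rng(42)
losses = rng.standard_normal(20_000)   # toy loss sample
alpha = 0.95

# Rockafellar-Uryasev representation:
#   CVaR_alpha(X) = min_t  t + E[(X - t)_+] / (1 - alpha)
ts = np.linspace(0.0, 3.0, 301)
objective = [t + np.maximum(losses - t, 0.0).mean() / (1 - alpha) for t in ts]
cvar_ru = min(objective)

# Direct tail average beyond the empirical value at risk (alpha-quantile).
var = np.quantile(losses, alpha)
cvar_tail = losses[losses >= var].mean()
```

Both estimates agree up to sampling and grid error, illustrating why the variational form (a convex minimization) is the one used inside optimization models.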
Chapter 5. Sampling and Optimizing
Abstract
This chapter discusses what happens when, instead of minimizing an expectation, one minimizes the sample average approximation obtained from a sample of independent observations. The analysis relies on the theory of asymptotic laws (delta theorems) and its applications in stochastic programming. We extend the results to the case of expectation constraints.
J. Frédéric Bonnans
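The sample average approximation can be illustrated on the classical newsvendor problem (my choice of example, with made-up prices and a uniform demand): the true optimum is the (price − cost)/price quantile of demand, here the median q* = 50, and the SAA minimizer approaches it as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(0)
price, cost = 2.0, 1.0                 # illustrative sell price and unit cost

def saa_solution(n):
    """Minimize the sample-average newsvendor cost over a grid of order sizes."""
    demand = rng.uniform(0.0, 100.0, n)    # true demand: Uniform(0, 100)
    q = np.linspace(0.0, 100.0, 501)
    # Sample-average cost of ordering q units against the observed demands.
    avg_cost = cost * q - price * np.minimum(q[:, None], demand[None, :]).mean(axis=1)
    return q[np.argmin(avg_cost)]

q_small, q_large = saa_solution(100), saa_solution(10_000)
# q_large should be much closer to the true optimum q* = 50 than q_small.
```

The rate at which q_large concentrates around q* is exactly what the delta theorems of this chapter quantify.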
Chapter 6. Dynamic Stochastic Optimization
Abstract
Dynamic stochastic optimization problems have the following information constraint: each decision must be a function of the information available at the corresponding time. This can be expressed as a linear constraint involving conditional expectations. This chapter develops the corresponding theory for convex problems with full observation of the state. The resulting optimality system involves a backward costate equation, the control variable being a minimizer of a Hamiltonian function.
J. Frédéric Bonnans
Chapter 7. Markov Decision Processes
Abstract
This chapter considers the problem of minimizing the expectation of a reward for a controlled Markov chain process, either with a finite horizon, or an infinite one for which the reward has discounted values, including the cases of exit times and stopping decisions. The value and policy (Howard) iterations are compared. Extensions of these results are provided for problems with expectation constraints, partial observation, and for the ergodic case, which is in some sense the limit of large-horizon problems with undiscounted cost.
J. Frédéric Bonnans
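The two iterations compared in this chapter can be run side by side on a toy discounted MDP (the transition matrices and rewards below are made up for illustration); both converge to the same optimal value function.

```python
import numpy as np

# A tiny MDP: 2 states, 2 actions, discount 0.9 (illustrative data).
# P[a, s, :] = transition law under action a; r[a, s] = expected reward.
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.1, 0.9], [0.6, 0.4]]])
r = np.array([[1.0, 0.0],
              [0.5, 2.0]])

# Value iteration: V <- max_a (r_a + gamma * P_a V), a contraction of rate gamma.
V = np.zeros(2)
for _ in range(500):
    V = (r + gamma * P @ V).max(axis=0)

# Policy (Howard) iteration: exact policy evaluation + greedy improvement.
policy = np.zeros(2, dtype=int)
for _ in range(20):
    Pp = P[policy, np.arange(2)]            # transition matrix of current policy
    rp = r[policy, np.arange(2)]
    Vp = np.linalg.solve(np.eye(2) - gamma * Pp, rp)   # solve (I - gamma P)V = r
    policy = (r + gamma * P @ Vp).argmax(axis=0)
```

Policy iteration reaches the optimum in a handful of steps, while value iteration converges only geometrically at rate γ; this is the trade-off the chapter analyzes.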
Chapter 8. Algorithms
Abstract
In the case of convex dynamic stochastic optimization problems, the Bellman functions, being convex, can be approximated as finite suprema of affine functions. Starting with static and deterministic problems, it is shown how this leads to the effective stochastic dual dynamic programming (SDDP) algorithm. The second part of the chapter is devoted to the promising approach of linear decision rules, which allows one to obtain upper and lower bounds on the value functions of stochastic optimization problems.
J. Frédéric Bonnans
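The building block of SDDP, approximating a convex Bellman function from below by a maximum of affine cuts, can be sketched in a few lines on a deterministic one-dimensional example (my illustrative choice, f(x) = x² with tangent cuts at equally spaced trial points):

```python
import numpy as np

# Kelley-style cutting planes: approximate a convex "Bellman" function
# f(x) = x^2 from below by a finite supremum of affine cuts.
f = lambda x: x**2
df = lambda x: 2 * x

cuts = []   # list of (slope, intercept) pairs
def lower_model(x):
    return max((a * x + b for a, b in cuts), default=-np.inf)

for xk in np.linspace(-2.0, 2.0, 9):
    # Tangent cut at xk: f(xk) + f'(xk) (x - xk) <= f(x) by convexity.
    cuts.append((df(xk), f(xk) - df(xk) * xk))

grid = np.linspace(-2.0, 2.0, 401)
model_vals = np.array([lower_model(x) for x in grid])
gap = np.max(f(grid) - model_vals)   # worst-case gap between f and its model
```

Each cut is valid globally, so the model never overestimates f, and the gap shrinks as cuts accumulate; in SDDP the trial points come from forward simulations and the cuts from the backward (dual) pass.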
Chapter 9. Generalized Convexity and Transportation Theory
Abstract
This chapter first presents the generalization of convexity theory when replacing duality products with general coupling functions on arbitrary sets. The notions of Fenchel conjugates, cyclical monotonicity and duality of optimization problems have a natural extension to this setting, in which the augmented Lagrangian approach has a natural interpretation. Convex functions over measure spaces, constructed as Fenchel conjugates of integral functions of continuous functions, are shown to be sometimes equal to some integral of a function of their density. This is used in the presentation of optimal transportation theory over compact sets, and the associated penalized problems. The chapter ends with a discussion of the multi-transport setting.
J. Frédéric Bonnans
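In the discrete case, optimal transport reduces to a linear program over couplings with prescribed marginals. A minimal sketch (toy marginals and costs of my choosing, solved with SciPy's generic LP solver rather than any method from the book):

```python
import numpy as np
from scipy.optimize import linprog

# Discrete optimal transport: minimize <C, pi> over couplings pi >= 0
# whose row sums equal mu and column sums equal nu.
mu = np.array([0.4, 0.6])          # source distribution
nu = np.array([0.5, 0.5])          # target distribution
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])         # transport costs

m, n = C.shape
A_eq = np.zeros((m + n, m * n))    # marginal constraints on pi.ravel()
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0   # row sum i equals mu[i]
for j in range(n):
    A_eq[m + j, j::n] = 1.0            # column sum j equals nu[j]
b_eq = np.concatenate([mu, nu])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
# Here the cheapest plan keeps mass in place and moves the excess 0.1
# from the second source to the first target, for total cost 0.1.
```

The dual of this LP is precisely the Kantorovich dual problem in potentials, which is where the generalized conjugacy of the chapter enters.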
Backmatter
Metadata
Title
Convex and Stochastic Optimization
Author
J. Frédéric Bonnans
Copyright Year
2019
Electronic ISBN
978-3-030-14977-2
Print ISBN
978-3-030-14976-5
DOI
https://doi.org/10.1007/978-3-030-14977-2