
2000 | Book

Handbook of Semidefinite Programming

Theory, Algorithms, and Applications

Edited by: Henry Wolkowicz, Romesh Saigal, Lieven Vandenberghe

Publisher: Springer US

Book series: International Series in Operations Research & Management Science


About this book

Semidefinite programming (SDP) is one of the most exciting and active research areas in optimization. It has attracted, and continues to attract, researchers with very diverse backgrounds, including experts in convex programming, linear algebra, numerical optimization, combinatorial optimization, control theory, and statistics. This tremendous research activity has been prompted by the discovery of important applications in combinatorial optimization and control theory, the development of efficient interior-point algorithms for solving SDP problems, and the depth and elegance of the underlying optimization theory.
The Handbook of Semidefinite Programming offers an advanced and broad overview of the current state of the field. It contains nineteen chapters written by the leading experts on the subject. The chapters are organized in three parts: Theory, Algorithms, and Applications and Extensions.

Table of Contents

Frontmatter

Introduction

1. Introduction
Abstract
Semidefinite programming refers to optimization problems that can be expressed in the form
$$ \begin{gathered} \text{minimize } \left\langle C,X \right\rangle \hfill \\ \text{subject to } \left\langle A_i,X \right\rangle = b_i ,\quad i = 1,\ldots,m \hfill \\ X \succeq 0, \hfill \\ \end{gathered} $$
(1.1.1)
where the variable is X ∈ S^n, the space of real symmetric n×n matrices, and ⟨·,·⟩ is the trace inner product.
Henry Wolkowicz, Romesh Saigal, Lieven Vandenberghe
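The standard form (1.1.1) can be checked numerically. A minimal numpy sketch, with illustrative data not taken from the book, verifies a candidate X against the constraints, using the trace inner product ⟨C, X⟩ = trace(CX):

```python
import numpy as np

# Standard-form SDP data (illustrative values only):
# minimize <C, X>  subject to  <A_i, X> = b_i,  X PSD.
C = np.array([[2.0, 1.0], [1.0, 2.0]])
A1 = np.eye(2)                 # so <A1, X> = trace(X)
b1 = 1.0

def is_feasible(X, tol=1e-9):
    """Check X against the constraints of (1.1.1)."""
    symmetric = np.allclose(X, X.T, atol=tol)
    affine = abs(np.trace(A1 @ X) - b1) <= tol
    psd = np.linalg.eigvalsh(X).min() >= -tol   # PSD iff all eigenvalues >= 0
    return symmetric and affine and psd

X = 0.5 * np.eye(2)            # feasible: trace = 1, eigenvalues 0.5
print(is_feasible(X), np.trace(C @ X))   # True 2.0
```

Note that feasibility of a *given* X only needs a symmetric eigenvalue computation; solving the SDP itself requires an interior-point or related solver, as the Algorithms part of the book develops.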

Theory

Frontmatter
2. Convex Analysis on Symmetric Matrices
Abstract
This introductory chapter serves as a reference to keep the handbook self-contained in particular for non-experts. We state basic definitions, review concepts from linear algebra that relate to semi-definite optimization, and summarize special results from convex analysis for symmetric matrices. It is our intention to state each result in a simple language, and therefore we limit our attention to the case of real symmetric matrices. For each result references are given where a discussion can be found, and in many cases also interesting counterparts for complex or nonsymmetric matrices.
Florian Jarre
3. The Geometry of Semidefinite Programming
Abstract
Consider the primal-dual pair of optimization problems
$$ \begin{array}{llcll} & \operatorname{Min} \left\langle c,x \right\rangle & \qquad & & \operatorname{Max} \left\langle b,y \right\rangle \\ (P) & \text{s.t. } x \in K & & (D) & \text{s.t. } z \in K^* \\ & \phantom{\text{s.t. }} Ax = b & & & \phantom{\text{s.t. }} A^*y + z = c \\ \end{array} $$
where
  • X and Y are Euclidean spaces with dim X ≥ dim Y.
  • A : X → Y is a linear operator, assumed to be onto.
  • A* : Y → X is its adjoint.
  • K is a closed, convex, facially exposed cone in X.
  • K* := {z | 〈z,x〉 ≥ 0 ∀x ∈ K} is the dual of K, also a closed, convex, facially exposed cone.
Gábor Pataki
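Weak duality for this pair follows from ⟨c,x⟩ − ⟨b,y⟩ = ⟨c − A*y, x⟩ = ⟨z,x⟩ ≥ 0 for feasible x and z. A small numpy sketch with illustrative data, specializing K = K* to the nonnegative orthant (a closed, convex, facially exposed cone) and using the convention ⟨z,x⟩ ≥ 0 for the dual cone:

```python
import numpy as np

# Conic pair (P)-(D) with K = K* = R^n_+ (illustrative data only).
A = np.array([[1.0, 1.0, 0.0]])
b = np.array([1.0])
c = np.array([2.0, 3.0, 1.0])

x = np.array([1.0, 0.0, 0.5])   # primal feasible: Ax = b, x in K
y = np.array([1.5])             # a dual choice
z = c - A.T @ y                 # then A*y + z = c by construction

assert np.allclose(A @ x, b) and (x >= 0).all()   # x feasible for (P)
assert (z >= 0).all()                             # z in K*: (y, z) feasible for (D)

gap = c @ x - b @ y             # duality gap
print(gap, z @ x)               # equal, and nonnegative: weak duality
```

The identity gap = ⟨z,x⟩ is exactly the one-line weak duality proof; the geometry of when the gap closes (strong duality) is what the facial-exposedness assumption on K supports.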
4. Duality and Optimality Conditions
Abstract
Consider the optimization problem
$$ \begin{gathered} \operatorname{Min} f(x) \hfill \\ \text{subject to } G(x) \preceq 0,\quad x \in C, \hfill \\ \end{gathered} $$
(4.1.1)
where C is a closed convex cone in the Euclidean space ℝ^n, f : ℝ^n → ℝ, and G : ℝ^n → Y is a mapping from ℝ^n into the space Y := S^m of m × m symmetric matrices.
Alexander Shapiro, Katya Scheinberg
5. Self-Dual Embeddings
Abstract
Most semidefinite programming algorithms found in the literature require strictly feasible starting points (X^0 ≻ 0, S^0 ≻ 0) for the primal and dual problems respectively. So-called ‘big-M’ methods (see e.g. [807]) are often employed in practice to obtain feasible starting points.
Etienne de Klerk, Tamás Terlaky, Kees Roos
6. Robustness
Abstract
We consider a semidefinite programming problem (SDP) of the form
$$ \max\; b^T y \quad \text{subject to} \quad F(y) = F_0 + \sum\limits_{i = 1}^m y_i F_i \succeq 0, $$
(6.1.1)
where b ∈ ℝ^m is given, and F is an affine map from ℝ^m to S^n.
Aharon Ben-Tal, Laurent El Ghaoui, Arkadi Nemirovski
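Checking whether a given y satisfies the linear matrix inequality in (6.1.1) reduces to a symmetric eigenvalue computation. A minimal numpy sketch, with illustrative matrices not taken from the chapter:

```python
import numpy as np

# LMI of the form (6.1.1): F(y) = F0 + sum_i y_i F_i must be PSD.
# Matrices below are illustrative only (m = 1, n = 2).
F0 = np.eye(2)
Fs = [np.array([[0.0, 1.0], [1.0, 0.0]])]

def F(y):
    """Affine map y -> F0 + sum_i y_i F_i."""
    return F0 + sum(yi * Fi for yi, Fi in zip(y, Fs))

def lmi_holds(y, tol=1e-9):
    """True iff F(y) is positive semidefinite."""
    return np.linalg.eigvalsh(F(y)).min() >= -tol

print(lmi_holds([0.5]))   # True:  eigenvalues of F(y) are 0.5 and 1.5
print(lmi_holds([2.0]))   # False: smallest eigenvalue is 1 - 2 = -1
```

The feasible set {y : F(y) ⪰ 0} is convex, which is what makes robustness questions over LMI constraints tractable.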
7. Error Analysis
Abstract
We study a system of mixed linear, positive semidefinite (PSD) and second order cone (SOC) constraints:
$$ \left\{ \begin{gathered} x \in b + \mathcal{A}, \hfill \\ x \in K, \hfill \\ \end{gathered} \right. $$
(7.1.1)
where b is a given vector in ℝ^N, 𝒜 is a linear subspace of ℝ^N, and K ⊂ ℝ^N is a Cartesian product of second order cones and positive semidefinite cones.
Zhiquan Luo, Jos Sturm

Algorithms

Frontmatter
8. Symmetric Cones, Potential Reduction Methods and Word-by-Word Extensions
Abstract
This is the first of three chapters in this book dealing with polynomial time complexity of interior point algorithms for semidefinite programming (SDP). As such it deals with, in a sense, the easiest class of algorithms for which polynomial time convergence can be established. More precisely, we present a “recipe” whereby polynomial time convergence proofs in linear programming (LP) can be extended “word-by-word” to analogous proofs in SDP. Our presentation closely follows [21].
Farid Alizadeh, Stefan Schmieta
9. Potential Reduction and Primal-Dual Methods
Abstract
We start with an abstract definition of interior-point methods. These are methods for solving convex optimization problems by generating a sequence of points that lie in the relative interior of a convex set defined by the difficult constraints (see [797]). In this definition, we are thinking of a formulation of the underlying problem in which one has a maximal set of linear equality constraints and some convex set constraint. The fundamental idea of interior-point algorithms goes back at least to Frisch [257] and to Fiacco and McCormick [230]. However, almost all of the new interior-point algorithms have their foundations in Karmarkar’s seminal work [404]. Karmarkar’s work introduced many ingredients of the interior-point algorithms with polynomial iteration complexity that have been studied extensively since 1984. These algorithms possess polynomial bounds on the number of arithmetic operations required to solve a linear programming (LP) problem. In this chapter, we will be content with bounds on the number of iterations. To date, it is not known whether even the SDP feasibility problem belongs to the class of decision problems solvable in polynomial time (assuming rational data and using the “bit model” on a Turing machine).
Levent Tuncel
10. Path-Following Methods
Abstract
In this chapter we study interior-point primal-dual path-following algorithms for solving the semidefinite programming (SDP) problem. In contrast to linear programming, there are several ways one can define the Newton-type search directions used by these algorithms; we motivate and introduce several such directions and families of directions. Polynomial convergence results for short- and long-step path-following algorithms using the Monteiro and Zhang family of directions are derived in detail; similar results are only surveyed, without proofs, for the Kojima, Shindoh and Hara family and the Monteiro and Tsuchiya family.
Renato Monteiro, Michael Todd
11. Bundle Methods to Minimize the Maximum Eigenvalue Function
Abstract
In the last ten years the study of interior point methods has dominated algorithmic research in semidefinite programming. Only recently has interest in nonsmooth optimization methods revived, the impetus coming from two different directions. On the one hand, alternatives were sought for solving structured large scale semidefinite programs that were not amenable to current interior point codes [338]; on the other hand, new developments in the second order theory of nonsmooth convex optimization suggested specializing these theoretical techniques to semidefinite programming [597, 598]. We present these new methods under the common framework of bundle methods and survey the underlying theory as well as some implementation aspects. To illustrate the efficiency and potential of the algorithms we also present numerical results.
Christoph Helmberg, Francois Oustry

Applications and Extensions

Frontmatter
12. Combinatorial Optimization
Abstract
In this section we investigate various ways to derive semidefinite relaxations of combinatorial optimization problems. We start out with a generic way to obtain an SDP relaxation for problems in binary variables. The key step is to linearize quadratic functions in the original vector x ∈ ℝ n through a new n×n matrix X, see also Chapter 13.
Michel Goemans, Franz Rendl
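The linearization step mentioned in the abstract replaces a quadratic form in x ∈ {−1,+1}^n by a linear function of the lifted matrix X = xx^T; the SDP relaxation then keeps diag(X) = 1 and X ⪰ 0 while dropping the rank-one condition. A small numpy sketch of that lift, with an illustrative point and matrix:

```python
import numpy as np

# Lift a {-1,+1} point x to X = x x^T.  X satisfies diag(X) = 1,
# X PSD, rank(X) = 1; the SDP relaxation drops only the rank condition.
x = np.array([1.0, -1.0, 1.0])        # illustrative {-1,+1} point
X = np.outer(x, x)

assert np.allclose(np.diag(X), 1.0)             # diagonal of ones
assert np.linalg.eigvalsh(X).min() >= -1e-9     # PSD
assert np.linalg.matrix_rank(X) == 1            # rank-one before relaxing

# Any quadratic form x^T Q x becomes linear in X: <Q, X> = trace(Q X).
Q = np.arange(9.0).reshape(3, 3)
print(np.isclose(x @ Q @ x, np.trace(Q @ X)))   # True
```

Because the objective and constraints become linear in X, the relaxed problem is an SDP over the set of correlation-type matrices, which is the starting point for the approximation results surveyed in the chapter.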
13. Semidefinite Programming Relaxations of Nonconvex Quadratic Optimization
Abstract
Quadratically constrained quadratic programs, denoted Q2P, are an important modelling tool, e.g. for hard combinatorial optimization problems (Chapter 12) and for SQP methods in nonlinear programming (Chapter 20). These problems are too hard to solve in general. Therefore, relaxations such as the Lagrangian relaxation are used. The dual of the Lagrangian relaxation is the SDP relaxation. Thus SDP has enabled us to efficiently solve the Lagrangian relaxation and find good approximate solutions for these hard, possibly nonconvex, Q2P. This area has generated a lot of research recently, resulting in many strong and elegant theorems that describe the strength and performance of the bounds obtained from solving relaxations of these Q2P.
Yuri Nesterov, Henry Wolkowicz, Yinyu Ye
14. Semidefinite Programming in Systems and Control Theory
Abstract
It has long been recognized that SDP constraints, i.e., Linear Matrix Inequalities (LMIs), arise naturally and frequently in the analysis of the solution of finite-dimensional differential equations that model control systems.
Venkataramanan Balakrishnan, Fan Wang
15. Structural Design
Abstract
Consider a mechanical construction C with M < ∞ degrees of freedom; thus, virtual displacements of C are specified by vectors w ∈ ℝ^M. The potential energy capacitated by C under a displacement w is assumed to be a nonnegative quadratic function ½ wᵀQw of the displacement (“linearly elastic construction”), Q being a symmetric positive semidefinite matrix characterizing C; this matrix is assumed to depend linearly on the vector t of design variables: Q = Q(t).
Aharon Ben-Tal, Arkadi Nemirovski
16. Moment Problems and Semidefinite Optimization
Abstract
Problems involving moments of random variables arise naturally in many areas of mathematics, economics, and operations research. Let us give some examples that motivate the present paper.
Dimitris Bertsimas, Jay Sethuraman
17. Design of Experiments in Statistics
Abstract
Models and Information Matrix. Most of the results in experimental design theory are related to linear regression models:
$$ E\{ y|x\} = \eta ({\theta ^T},x) \quad \text{and} \quad \operatorname{Var}\{ y|x\} = \sigma^2(x), $$
(17.1.1)
where the observation y is a random variable, and E and Var stand for expectation and variance, respectively.
Valerii Fedorov, Jon Lee
18. Matrix Completion Problems
Abstract
In the matrix completion problem we are given a partial symmetric real matrix A with certain elements specified or fixed and the rest unspecified or free; and we are asked whether A can be completed to satisfy a given property (P) by assigning certain values to its free elements. In this chapter, we are interested in the following two completion problems: the positive semidefinite matrix completion problem, corresponding to (P) being the positive semi-definiteness (PSD) property; and the Euclidean distance matrix completion problem, corresponding to (P) being the Euclidean distance (ED) property. (We concentrate on the latter problem. A survey of the former is given in [373]. The relationships between the two are discussed in e.g. [465, 464, 380].)
Abdo Alfakih, Henry Wolkowicz
19. Eigenvalue Problems and Nonconvex Minimization
Abstract
Many semidefinite optimization problems are reformulations of problems that are initially derived from certain eigenvalue bounds. As examples we discuss the problems of minimizing the largest eigenvalue of a symmetric matrix, and, more generally, a weighted sum of the k largest eigenvalues of a symmetric matrix, or the largest generalized eigenvalue of a symmetric matrix pencil. Extending the theory in Chapter 1 of this book, we summarize some known theoretical results on eigenvalues of symmetric matrices. These results can be used to reformulate eigenvalue problems so that the problems involve smooth functions only, for the constraints as well as for the objective function, and so that convexity is preserved if possible.
Florian Jarre
20. Sequential, Quadratic Constrained, Quadratic Programming for General Nonlinear Programming
Abstract
A proven approach for unconstrained minimization of a function f(x), x ∈ ℝ^n, is to build and solve a quadratic model at a local estimate x^(k), i.e. to apply the trust region method. In this paper we propose a direct extension of this modeling approach to constrained minimization. A local quadratic model of both the objective function and the constraints is built. This model is too hard to solve, so it is relaxed using the Lagrangian dual, which is then solved by semidefinite programming techniques. The key ingredient in this approach is the equivalence between the Lagrangian and semidefinite relaxations.
Serge Kruk, Henry Wolkowicz
Backmatter
Metadata
Title
Handbook of Semidefinite Programming
Edited by
Henry Wolkowicz
Romesh Saigal
Lieven Vandenberghe
Copyright year
2000
Publisher
Springer US
Electronic ISBN
978-1-4615-4381-7
Print ISBN
978-1-4613-6970-7
DOI
https://doi.org/10.1007/978-1-4615-4381-7