About this book

Semidefinite and conic optimization is a major and thriving research area within the optimization community. Although semidefinite optimization has been studied (under different names) since at least the 1940s, its importance grew immensely during the 1990s after polynomial-time interior-point methods for linear optimization were extended to solve semidefinite optimization problems.

Since the beginning of the 21st century, not only has research into semidefinite and conic optimization continued unabated, but also a fruitful interaction has developed with algebraic geometry through the close connections between semidefinite matrices and polynomial optimization. This has brought about important new results and led to an even higher level of research activity.

This Handbook on Semidefinite, Conic and Polynomial Optimization provides the reader with a snapshot of the state-of-the-art in the growing and mutually enriching areas of semidefinite optimization, conic optimization, and polynomial optimization. It contains a compendium of the recent research activity that has taken place in these thrilling areas, and will appeal to doctoral students, young graduates, and experienced researchers alike.

The Handbook’s thirty-one chapters are organized into four parts:

- Theory, covering significant theoretical developments as well as the interactions between conic optimization and polynomial optimization;
- Algorithms, documenting the directions of current algorithmic development;
- Software, providing an overview of the state-of-the-art;
- Applications, dealing with the application areas where semidefinite and conic optimization have made a significant impact in recent years.

Table of Contents

Frontmatter

Chapter 1. Introduction to Semidefinite, Conic and Polynomial Optimization

Conic optimization refers to the problem of optimizing a linear function over the intersection of an affine space and a closed convex cone. We focus particularly on the special case where the cone is chosen as the cone of positive semidefinite matrices, for which the resulting optimization problem is called a semidefinite optimization problem.

Miguel F. Anjos, Jean B. Lasserre
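A minimal numerical sketch of the definition above (not from the Handbook; plain NumPy, no SDP solver): membership in the positive semidefinite cone is checked via the smallest eigenvalue, and a one-parameter toy SDP is solved by bisection. The problem instance and function names are illustrative only.

```python
import numpy as np

def is_psd(M, tol=1e-9):
    """A symmetric matrix lies in the PSD cone iff its smallest eigenvalue is >= 0."""
    return np.linalg.eigvalsh(M)[0] >= -tol

# Toy SDP: minimize x subject to [[x, 1], [1, x]] being positive semidefinite.
# The affine space is the set of matrices [[x, 1], [1, x]]; the eigenvalues are
# x - 1 and x + 1, so feasibility is exactly x >= 1 and the optimal value is 1.
def solve_toy_sdp(lo=-10.0, hi=10.0, iters=60):
    # Bisection on x: find the smallest x for which the matrix is PSD.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if is_psd(np.array([[mid, 1.0], [1.0, mid]])):
            hi = mid
        else:
            lo = mid
    return hi

x_star = solve_toy_sdp()
print(round(x_star, 6))  # close to 1.0
```

Real semidefinite optimization problems are of course solved by interior-point or first-order methods rather than bisection; the point here is only the cone-membership test that defines feasibility.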

Theory

Frontmatter

Chapter 2. The Approach of Moments for Polynomial Equations

In this chapter we present the moment based approach for computing all real solutions of a given system of polynomial equations. This approach builds upon a lifting method for constructing semidefinite relaxations of several nonconvex optimization problems, using sums of squares of polynomials and the dual theory of moments. A crucial ingredient is a semidefinite characterization of the real radical ideal, consisting of all polynomials with the same real zero set as the system of polynomials to be solved. Combining this characterization with ideas from commutative algebra, (numerical) linear algebra and semidefinite optimization yields a new class of real algebraic algorithms. This chapter sheds some light on the underlying theory and the link to polynomial optimization.

Monique Laurent, Philipp Rostalski

Chapter 3. Algebraic Degree in Semidefinite and Polynomial Optimization

In polynomial optimization problems, where the objective function and the constraints are described by multivariate polynomials, an optimizer is algebraic. Its coordinates are roots of univariate polynomials whose coefficients are polynomials in the input data. The number of complex critical points estimates the degree of this polynomial, and is called the algebraic degree. We give some background, tools and examples on how to compute this degree in a number of different problem classes.

Kristian Ranestad

Chapter 4. Semidefinite Representation of Convex Sets and Convex Hulls

Semidefinite programming (SDP) is one of the main advances in convex optimization theory and applications (Ben-Tal and Nemirovski: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization, SIAM, Philadelphia (2001); Nesterov and Nemirovskii: Interior-Point Polynomial Algorithms in Convex Programming. SIAM Studies in Applied Mathematics 13, SIAM, Philadelphia, PA (1994); Nemirovski: Advances in convex optimization: conic programming. Plenary lecture at the International Congress of Mathematicians (ICM), Madrid, Spain, 22–30 August 2006; Wolkowicz et al.: Handbook of Semidefinite Programming. Kluwer, Boston, MA (2000)). It has had a profound effect on combinatorial optimization, control theory and nonconvex optimization, as well as many other disciplines. There are effective numerical algorithms for solving problems presented in terms of linear matrix inequalities (LMIs). A fundamental problem in semidefinite programming concerns the range of its applicability: which convex sets S can be represented by using LMIs? This question has several natural variants; the main one is to represent S as the projection of a higher-dimensional set that is representable by LMIs; such an S is called SDP representable (SDr). Here we shall describe these variants and what is known about them. The chapter is a survey of the existing results of Helton and Nie (Mathematical Programming 122(1):21–64 (2010); SIAM Journal on Optimization 20(2):759–791 (2009)) and Lasserre (Mathematical Programming 120:457–477 (2009)) and related work.

J. William Helton, Jiawang Nie

Chapter 5. Convex Hulls of Algebraic Sets

This article describes a method to compute successive convex approximations of the convex hull of the solutions to a system of polynomial equations over the reals. The method relies on sums of squares of polynomials and the dual theory of moment matrices. The main feature of the technique is that all computations are done modulo the ideal generated by the polynomials defining the set to be convexified. This work was motivated by questions raised by Lovász concerning extensions of the theta body of a graph to arbitrary real algebraic varieties, and hence the relaxations described here are called theta bodies. The convexification process can be seen as an incarnation of Lasserre’s hierarchy of convex relaxations of a real semialgebraic set. When the defining ideal is real radical, the results become especially nice. We provide several examples of the method and discuss convergence issues. Finite convergence, especially after the first step of the method, can be described explicitly for finite point sets.

João Gouveia, Rekha Thomas

Chapter 6. Convex Relaxations and Integrality Gaps

We discuss the effectiveness of linear and semidefinite relaxations in approximating the optimum for combinatorial optimization problems. Various hierarchies of these relaxations, such as the ones defined by Lovász and Schrijver, Sherali and Adams, and Lasserre, generate increasingly strong linear and semidefinite programming relaxations starting from a basic one. We survey some positive applications of these hierarchies, where their use yields improved approximation algorithms. We also discuss known lower bounds on the integrality gaps of relaxations arising from these hierarchies, demonstrating limits on the applicability of such hierarchies for certain optimization problems.

Eden Chlamtac, Madhur Tulsiani

Chapter 7. Relaxations of Combinatorial Problems Via Association Schemes

In this chapter we describe a novel way of deriving semidefinite programming relaxations of a wide class of combinatorial optimization problems. Many combinatorial optimization problems may be viewed as finding an induced subgraph of a specific type of maximum weight in a given weighted graph. The relaxations we describe are motivated by concepts from algebraic combinatorics. In particular, we consider a matrix algebra that contains the adjacency matrix of the required subgraph, and formulate a convex relaxation of this algebra. Depending on the type of subgraph, this algebra may be the Bose–Mesner algebra of an association scheme, or, more generally, a coherent algebra. Thus we obtain new (and known) relaxations of the traveling salesman problem, maximum equipartition problems in graphs, the maximum stable set problem, etc.

Etienne de Klerk, Fernando M. de Oliveira Filho, Dmitrii V. Pasechnik

Chapter 8. Copositive Programming

This chapter provides an introduction to copositive programming, which is linear programming over the convex cone of copositive matrices. Researchers have shown that many NP-hard optimization problems can be represented as copositive programs, and this chapter recounts and extends these results.

Samuel Burer
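A small numerical illustration of the definition (not from the chapter; a NumPy sketch with an illustrative example matrix): a matrix A is copositive when x·Ax ≥ 0 for every entrywise-nonnegative x, and the copositive cone strictly contains the PSD cone.

```python
import numpy as np

# A is copositive if x.T @ A @ x >= 0 for every entrywise-nonnegative x.
# This A is copositive (x.T @ A @ x = 2*x1*x2 >= 0 when x >= 0) yet
# indefinite (eigenvalues +1 and -1), so it is NOT positive semidefinite:
# the copositive cone is strictly larger than the PSD cone.
A = np.array([[0.0, 1.0], [1.0, 0.0]])

def min_form_on_nonneg(A, samples=1000):
    """Minimize x.T @ A @ x over nonnegative unit vectors (2-D: sweep the first quadrant)."""
    thetas = np.linspace(0.0, np.pi / 2, samples)
    xs = np.stack([np.cos(thetas), np.sin(thetas)])  # columns are unit vectors >= 0
    return min(x @ A @ x for x in xs.T)

copositive_evidence = min_form_on_nonneg(A) >= 0.0
not_psd = np.linalg.eigvalsh(A)[0] < 0.0
print(copositive_evidence, not_psd)
```

The sweep only gives numerical evidence of copositivity, which is fitting: deciding copositivity exactly is co-NP-hard, which is precisely why NP-hard problems can be encoded as copositive programs.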

Chapter 9. Invariant Semidefinite Programs

This chapter provides the reader with the necessary background for dealing with semidefinite programs which have symmetry. The basic theory is given and it is illustrated in applications from coding theory, combinatorics, geometry, and polynomial optimization.

Christine Bachoc, Dion C. Gijswijt, Alexander Schrijver, Frank Vallentin

Chapter 10. A “Joint+Marginal” Approach in Optimization

We present the “joint+marginal” approach initially developed for polynomial optimization. In particular, it is shown that the optimal value (a function of the parameters) can be approximated in a strong sense by polynomials via solving a hierarchy of semidefinite programs whose size depends on the degree of the polynomial approximant. We also show how to exploit this approximation property in other contexts, e.g., to provide (a) an algorithm for robust optimization (where the parameter is the robust decision) and (b) an iterative algorithm for non-parametric optimization (where the parameter is the first coordinate x_1 of the variable, then x_2 after x_1 has been calculated, etc.).

Jean B. Lasserre
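To make the object of the approach concrete, the sketch below (not from the chapter) shows the optimal-value function J(y) = min_x f(x, y) being approximated by a polynomial in the parameter y. Lasserre computes such approximants via an SDP hierarchy; here we merely fit J on a grid by least squares to exhibit the target object, with an illustrative f chosen so that J(y) = y² exactly.

```python
import numpy as np

# The object of the "joint+marginal" approach: the optimal-value function
# J(y) = min_x f(x, y), approximated by a polynomial in the parameter y.
# With f(x, y) = (x - y)**2 + y**2 the minimizer is x = y, so J(y) = y**2
# exactly, and a degree-2 fit should recover the coefficients [1, 0, 0].
def f(x, y):
    return (x - y) ** 2 + y ** 2

ys = np.linspace(-1.0, 1.0, 41)
xs = np.linspace(-2.0, 2.0, 401)      # x-grid contains every y-grid point
J = np.array([min(f(x, y) for x in xs) for y in ys])  # grid minimization over x

coeffs = np.polyfit(ys, J, deg=2)     # highest-degree coefficient first
print(np.round(coeffs, 4))
```

The SDP hierarchy in the chapter produces such polynomial approximants with one-sided (lower-bounding) guarantees and without needing to evaluate J pointwise, which is what makes the approach usable inside robust and parametric optimization loops.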

Chapter 11. An Introduction to Formally Real Jordan Algebras and Their Applications in Optimization

In this chapter we study formally real Jordan algebras and their impact on certain convex optimization problems. We first show how common topics in convex optimization problems, such as complementarity and interior point algorithms, give rise to algebraic questions. Next we study the basic properties of formally real Jordan algebras including properties of their multiplication operator, quadratic representation, spectral properties and Peirce decomposition. Finally we show how this theory transparently unifies presentation and analysis of issues such as degeneracy and complementarity, and proofs of polynomial time convergence of interior point methods in linear, second order and semidefinite optimization problems.

F. Alizadeh

Chapter 12. Complementarity Problems Over Symmetric Cones: A Survey of Recent Developments in Several Aspects

The complementarity problem over a symmetric cone (which we call the Symmetric Cone Complementarity Problem, or the SCCP) has received much attention from researchers in the last decade. Many of the studies of the SCCP can be categorized into three research themes: interior point methods for the SCCP, merit or smoothing function methods for the SCCP, and various properties of the SCCP. In this chapter, we provide a brief survey of the recent developments on these three themes.

Akiko Yoshise

Chapter 13. Convexity and Semidefinite Programming in Dimension-Free Matrix Unknowns

One of the main applications of semidefinite programming lies in linear systems and control theory. Many problems in this subject, certainly the textbook classics, have matrices as variables, and the formulas naturally contain non-commutative polynomials in matrices. These polynomials depend only on the system layout and do not change with the size of the matrices involved, hence such problems are called “dimension-free”. Analyzing dimension-free problems has led to the recent development of a non-commutative (nc) real algebraic geometry (RAG) which, when combined with convexity, produces dimension-free semidefinite programming. This article surveys what is known about convexity in the non-commutative setting and nc SDP and includes a brief survey of nc RAG. Typically, the qualitative properties of the non-commutative case are much cleaner than those of their scalar counterparts – variables in $\mathbb{R}^{g}$. Indeed we describe how relaxation of scalar variables by matrix variables in several natural situations results in a beautiful structure.

J. William Helton, Igor Klep, Scott McCullough

Chapter 14. Positivity and Optimization: Beyond Polynomials

The recent progress in optimization theory, due to a novel synthesis of real algebraic geometry and moment problem techniques, can naturally be reduced to positivity certificates for polynomial functions defined on basic semi-algebraic sets. There are however classical problems of applied mathematics which require exact positivity criteria for non-polynomial functions, such as splines, wavelets, periodic or almost periodic functions. While we do not lack fine analysis results referring to the positivity of such functions, traditionally stated in terms of Fourier–Laplace transforms, the algebraic machinery of modern optimization theory based on polynomial algebra fails when applied to this more general context. A notorious example is the stability problem of differential equations with delays in the argument. In all these cases, the exact algebraic certificates must be complemented by approximation theory results. Without aiming at completeness, the present chapter offers a glimpse at a series of specific non-polynomial optimization problems, by identifying in every instance the specific results needed to run a robust algebraic relaxation scheme.

Jean B. Lasserre, Mihai Putinar

Algorithms

Frontmatter

Chapter 15. Self-Regular Interior-Point Methods for Semidefinite Optimization

Semidefinite optimization has an ever-growing family of crucial applications, and large neighborhood interior point methods (IPMs) are the method of choice to solve them. This chapter reviews the fundamental concepts and complexity results of Self-Regular (SR) IPMs for semidefinite optimization, which up to a log factor achieve the best polynomial complexity bound of small neighborhood IPMs. SR kernel functions are at the core of SR-IPMs. This chapter also reviews several non-SR kernel functions. IPMs based on these kernel functions enjoy similar iteration complexity bounds as SR-IPMs, though their complexity analysis requires additional tools.

Maziar Salahi, Tamás Terlaky

Chapter 16. Elementary Optimality Conditions for Nonlinear SDPs

An increasing number of recent applications rely on the solution of nonlinear semidefinite programs. First and second order optimality conditions for nonlinear programs are widely known today. This chapter generalizes these optimality conditions to nonlinear semidefinite programs, highlighting some parallels and some differences. It starts by discussing a constraint qualification for both programs. First order optimality conditions are presented for the case where this constraint qualification is satisfied. For the second order conditions, in addition, strict complementarity is assumed.

Florian Jarre

Chapter 17. Recent Progress in Interior-Point Methods: Cutting-Plane Algorithms and Warm Starts

We review some of the recent enhancements of interior-point methods for the improved solution of semidefinite relaxations in combinatorial optimization and binary quadratic programming. Central topics include general interior-point cutting-plane schemes, handling of linear inequalities, and several warm-starting strategies. A practical implementation and computational results are also discussed.

Alexander Engau

Chapter 18. Exploiting Sparsity in SDP Relaxation of Polynomial Optimization Problems

We present a survey on the sparse SDP relaxation proposed as a sparse variant of Lasserre’s SDP relaxation of polynomial optimization problems. We discuss the primal approach to derive the sparse SDP relaxation by exploiting the structured sparsity. In addition, numerical techniques used in the Matlab package SparsePOP for solving POPs are presented. We report numerical results on SparsePOP and the application of the sparse SDP relaxation to sensor network localization problems.

Sunyoung Kim, Masakazu Kojima

Chapter 19. Block Coordinate Descent Methods for Semidefinite Programming

We consider in this chapter block coordinate descent (BCD) methods for solving semidefinite programming (SDP) problems. These methods are based on sequentially minimizing the SDP problem’s objective function over blocks of variables corresponding to the elements of a single row (and column) of the positive semidefinite matrix X; hence, we will also refer to these methods as row-by-row (RBR) methods. Using properties of the (generalized) Schur complement with respect to the remaining fixed (n − 1)-dimensional principal submatrix of X, the positive semidefiniteness constraint on X reduces to a simple second-order cone constraint. It is well known that without certain safeguards, BCD methods cannot be guaranteed to converge in the presence of general constraints. Hence, to handle linear equality constraints, the methods that we describe here use an augmented Lagrangian approach. Since BCD methods are first-order methods, they are likely to work well only if each subproblem minimization can be performed very efficiently. Fortunately, this is the case for several important SDP problems, including the maxcut SDP relaxation and the minimum nuclear norm matrix completion problem, since closed-form solutions for the BCD subproblems that arise in these cases are available. We also describe how BCD can be applied to solve the sparse inverse covariance estimation problem by considering a dual formulation of this problem. The BCD approach is further generalized by using a rank-two update so that the coordinates can be changed in more than one row and column at each iteration. Finally, numerical results on the maxcut SDP relaxation and matrix completion problems are presented to demonstrate the robustness and efficiency of the BCD approach, especially if only moderately accurate solutions are desired.

Zaiwen Wen, Donald Goldfarb, Katya Scheinberg
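The Schur-complement fact the RBR reduction rests on can be checked numerically. The sketch below (not from the chapter; random instance, illustrative function names) verifies that with the leading principal submatrix B fixed and positive definite, positive semidefiniteness of the bordered matrix is equivalent to a constraint on the free row alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Schur-complement fact behind the row-by-row (RBR) method: with the last row
# and column of X free and the leading (n-1)-dimensional principal submatrix B
# fixed and positive definite,
#     X = [[B, y], [y.T, x]]  is PSD  <=>  x - y.T @ inv(B) @ y >= 0,
# a (rotated) second-order cone constraint in the free variables (y, x).
def psd_via_eigs(X, tol=1e-9):
    return np.linalg.eigvalsh(X)[0] >= -tol

def psd_via_schur(B, y, x, tol=1e-9):
    return x - y @ np.linalg.solve(B, y) >= -tol

n = 5
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)          # fixed positive definite submatrix
y = rng.standard_normal(n)

agree = all(
    psd_via_eigs(np.block([[B, y[:, None]], [y[None, :], np.array([[x]])]]))
    == psd_via_schur(B, y, x)
    for x in np.linspace(-2.0, 2.0, 81)
)
print(agree)
```

This is why each RBR subproblem is a small second-order cone program rather than an SDP, which is what makes the per-iteration cost low.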

Chapter 20. Projection Methods in Conic Optimization

There exist efficient algorithms to project a point onto the intersection of a convex cone and an affine subspace. These conic projections are in turn the workhorse of a range of algorithms in conic optimization, with a variety of applications in science, finance and engineering. This chapter reviews some of these algorithms, emphasizing the so-called regularization algorithms for linear conic optimization, and applications in polynomial optimization. This is a presentation of the material of several recent research articles; we aim here at clarifying the ideas, presenting them in a general framework, and pointing out important techniques.

Didier Henrion, Jérôme Malick

Chapter 21. SDP Relaxations for Non-Commutative Polynomial Optimization

We consider the problem of minimizing an arbitrary Hermitian polynomial p(X) in non-commutative variables X = (X_1, …, X_N), where the polynomial p(X) is evaluated over all states and bounded operators (X_1, …, X_N) satisfying a finite set of polynomial constraints. Problems of this type appear frequently in areas as diverse as quantum chemistry, condensed matter physics, and quantum information science; finding numerical tools to attack them is thus essential. In this chapter, we describe a hierarchy of semidefinite programming relaxations of this generic problem, which converges to the optimal solution in the asymptotic limit. Furthermore, we derive sufficient optimality conditions for each step of the hierarchy. Our method is related to recent results in non-commutative algebraic geometry and can be seen as a generalization to the non-commutative setting of well-known semidefinite programming hierarchies that have been introduced in scalar (i.e. commutative) polynomial optimization. After presenting our results, we discuss at the end of the chapter some open questions and possible directions for future research.

Miguel Navascués, Stefano Pironio, Antonio Acín

Chapter 22. Semidefinite Programming and Constraint Programming

Recently, semidefinite programming relaxations have been applied in constraint programming to take advantage of the high-quality bounds and precise heuristic guidance during the search for a solution. The purpose of this chapter is to present an overview of these developments, and to provide future research prospects.

Willem-Jan van Hoeve

Software

Frontmatter

Chapter 23. The State-of-the-Art in Conic Optimization Software

This work gives an overview of the major codes available for the solution of linear semidefinite (SDP) and second-order cone (SOCP) programs. Many of these codes also solve linear programs (LP). Some developments since the 7th DIMACS Challenge [10, 18] are pointed out as well as some currently under way. Instead of presenting performance tables, reference is made to the ongoing benchmark webpage [20] as well as other related efforts.

Hans D. Mittelmann

Chapter 24. Latest Developments in the SDPA Family for Solving Large-Scale SDPs

The main purpose of this chapter is to introduce the latest developments in SDPA and its family. SDPA is designed to solve large-scale SemiDefinite Programs (SDPs) faster, and over the course of 15 years of development it has been expanded into a high-performance-oriented software package. We hope that this introduction to the latest developments of the SDPA Family will be beneficial to readers who wish to understand the inside of state-of-the-art software packages for solving SDPs.

Makoto Yamashita, Katsuki Fujisawa, Mituhiro Fukuda, Kazuhiro Kobayashi, Kazuhide Nakata, Maho Nakata

Chapter 25. On the Implementation and Usage of SDPT3 – A Matlab Software Package for Semidefinite-Quadratic-Linear Programming, Version 4.0

This software is designed to solve primal and dual semidefinite-quadratic-linear conic programming problems (known as SQLP problems) whose constraint cone is a product of semidefinite cones, second-order cones, nonnegative orthants and Euclidean spaces, and whose objective function is the sum of linear functions and log-barrier terms associated with the constraint cones. This includes the special case of determinant maximization problems with linear matrix inequalities. It employs an infeasible primal-dual predictor-corrector path-following method, with either the HKM or the NT search direction. The basic code is written in Matlab, but key subroutines in C are incorporated via Mex files. Routines are provided to read in problems in either SDPA or SeDuMi format. Sparsity and block diagonal structure are exploited. We also exploit low-rank structures in the constraint matrices associated with the semidefinite blocks if such structures are explicitly given. To help users of our software, we also include some examples to illustrate the coding of problem data for our solver. Various techniques to improve the efficiency and robustness of the main solver are incorporated. For example, step-lengths associated with semidefinite cones are calculated via the Lanczos method. The current version also implements algorithms for solving a 3-parameter homogeneous self-dual model of the primal and dual SQLP problems. Routines are also provided to determine whether the primal and dual feasible regions of a given SQLP have empty interiors. Numerical experiments show that this general-purpose code can solve more than 80% of a total of about 430 test problems to an accuracy of at least 10^{-6} in relative duality gap and infeasibilities.

Kim-Chuan Toh, Michael J. Todd, Reha H. Tütüncü

Chapter 26. PENNON: Software for Linear and Nonlinear Matrix Inequalities

We present a collection of computer programs for the solution of linear and nonlinear semidefinite optimization problems. After briefly discussing the underlying algorithm, the generalized augmented Lagrangian method, we describe details of the specific programs for linear, bilinear and general nonlinear semidefinite optimization problems. For each of the programs we present typical application areas and examples.

Michal Kocvara, Michael Stingl

Applications

Frontmatter

Chapter 27. SDP Relaxations for Some Combinatorial Optimization Problems

In this chapter we present recent developments on solving various combinatorial optimization problems by using semidefinite programming (SDP). We present several SDP relaxations of the quadratic assignment problem and the traveling salesman problem. Further, we show the equivalence of several known SDP relaxations of the graph equipartition problem, and present recent results on the bandwidth problem.

Renata Sotirov

Chapter 28. Computational Approaches to Max-Cut

Max-Cut is one of the most studied combinatorial optimization problems because of its wide range of applications and because of its connections with other fields of discrete mathematics (see, e.g., the book by Deza and Laurent [10]). Like other interesting combinatorial optimization problems, Max-Cut is very simple to state.

Laura Palagi, Veronica Piccialli, Franz Rendl, Giovanni Rinaldi, Angelika Wiegele

Chapter 29. Global Approaches for Facility Layout and VLSI Floorplanning

This chapter provides an overview of conic optimization models for facility layout and VLSI floorplanning problems. We focus on two classes of problems to which conic optimization approaches have been successfully applied, namely the single-row facility layout problem, and fixed-outline floorplanning in VLSI circuit design. For the former, a close connection to the cut polytope has been exploited in positive semidefinite and integer programming approaches. In particular, the semidefinite optimization approaches can provide globally optimal solutions for instances with up to 40 facilities, and tight global bounds for instances with up to 100 facilities. For the floorplanning problem, a conic optimization model provided the first non-trivial lower bounds in the literature.

Miguel F. Anjos, Frauke Liers

Chapter 30. Euclidean Distance Matrices and Applications

Euclidean distance matrices, or EDMs, have been receiving increased attention for two main reasons. The first reason is that the many applications of EDMs, such as molecular conformation in bioinformatics, dimensionality reduction in machine learning and statistics, and especially the problem of wireless sensor network localization, have all become very active areas of research. The second reason for this increased interest is the close connection between EDMs and semidefinite matrices. Our recent ability to solve semidefinite programs efficiently means we can now also solve many problems involving EDMs efficiently. This chapter connects the classical approaches for EDMs with the more recent tools from semidefinite programming. We emphasize the application to sensor network localization.

Nathan Krislock, Henry Wolkowicz
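The EDM–Gram connection the chapter builds on can be demonstrated in a few lines (not from the chapter; a NumPy sketch of classical multidimensional scaling on random illustrative points): centering the squared-distance matrix yields a PSD Gram matrix whose factorization recovers the configuration up to translation and rotation.

```python
import numpy as np

rng = np.random.default_rng(1)

# EDM <-> Gram matrix connection: for points p_1..p_n, D[i, j] = ||p_i - p_j||**2,
# and with the centering matrix J = I - (1/n) * ones, the matrix
# B = -0.5 * J @ D @ J is the centered Gram matrix. B is positive semidefinite
# with rank at most the embedding dimension, and factoring it recovers the
# points up to translation and rotation (classical multidimensional scaling).
n, d = 8, 2
P = rng.standard_normal((n, d))                        # ground-truth points
G = P @ P.T
D = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G  # squared-distance EDM

J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D @ J                                   # centered Gram matrix

w, V = np.linalg.eigh(B)                               # eigenvalues ascending
Q = V[:, -d:] * np.sqrt(np.maximum(w[-d:], 0.0))       # recovered coordinates

# The recovered configuration reproduces every pairwise distance.
G2 = Q @ Q.T
D2 = np.diag(G2)[:, None] + np.diag(G2)[None, :] - 2 * G2
distances_match = np.allclose(D, D2, atol=1e-8)
print(distances_match)
```

Sensor network localization, the chapter's emphasized application, is the harder partial-information version of this: only some entries of D are observed, and the semidefinite constraint on B is what the SDP relaxations enforce.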

Chapter 31. Sparse PCA: Convex Relaxations, Algorithms and Applications

Given a sample covariance matrix, we examine the problem of maximizing the variance explained by a linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This is known as sparse principal component analysis and has a wide array of applications in machine learning and engineering. Unfortunately, this problem is also combinatorially hard and we discuss convex relaxation techniques that efficiently produce good approximate solutions. We then describe several algorithms solving these relaxations as well as greedy algorithms that iteratively improve the solution quality. Finally, we illustrate sparse PCA in several applications, ranging from senate voting and finance to news data.

Youwei Zhang, Alexandre d’Aspremont, Laurent El Ghaoui
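A minimal sketch of the greedy side of the story (not the chapter's convex relaxations; a NumPy forward-selection baseline on an illustrative covariance): grow a support set one variable at a time, each time keeping the variable that most increases the largest eigenvalue of the corresponding principal submatrix.

```python
import numpy as np

# Greedy forward selection for sparse PCA: restrict the covariance to a support
# of at most k variables and maximize the explained variance, i.e. the largest
# eigenvalue of the principal submatrix indexed by the support.
def greedy_sparse_pca(S, k):
    support, rest = [], list(range(S.shape[0]))
    for _ in range(k):
        best = max(
            rest,
            key=lambda j: np.linalg.eigvalsh(S[np.ix_(support + [j], support + [j])])[-1],
        )
        support.append(best)
        rest.remove(best)
    return sorted(support), np.linalg.eigvalsh(S[np.ix_(support, support)])[-1]

# Covariance with one strongly correlated 2-variable block: greedy selection
# should pick variables 0 and 1, explaining variance 3 + 2.5 = 5.5.
S = np.array([
    [3.0, 2.5, 0.0, 0.0],
    [2.5, 3.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.2],
    [0.0, 0.0, 0.2, 1.0],
])
support, var = greedy_sparse_pca(S, k=2)
print(support, round(var, 4))  # [0, 1] 5.5
```

Greedy selection of this kind is cheap but can be myopic; the semidefinite relaxations surveyed in the chapter trade extra computation for certificates and better solution quality.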

Backmatter
