
## About This Book

This Fourth Edition introduces the latest theory and applications in optimization. It emphasizes constrained optimization, beginning with a substantial treatment of linear programming and then proceeding to convex analysis, network flows, integer programming, quadratic programming, and convex optimization. Readers will discover a host of practical business applications as well as non-business applications.

Topics are clearly developed with many numerical examples worked out in detail. Specific examples and concrete algorithms precede more abstract topics. With its focus on solving practical problems, the book features free C programs to implement the major algorithms covered, including the two-phase simplex method, primal-dual simplex method, path-following interior-point method, and homogeneous self-dual methods. In addition, the author provides online Java applets that illustrate various pivot rules and variants of the simplex method, both for linear programming and for network flows. These C programs and Java tools can be found on the book's website. The website also includes new online instructional tools and exercises.

## Table of Contents

### Erratum

Robert J. Vanderbei

### Chapter 1. Introduction

Abstract
This book is mostly about a subject called Linear Programming. Before defining what we mean, in general, by a linear programming problem, let us describe a few practical real-world problems that serve to motivate and at least vaguely to define this subject.
Robert J. Vanderbei

### Chapter 2. The Simplex Method

Abstract
In this chapter we present the simplex method as it applies to linear programming problems in standard form.
Robert J. Vanderbei
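The mechanics the chapter develops can be previewed with a short sketch (not the book's C code): a bare-bones tableau simplex for problems of the form maximize $c^{T}x$ subject to $Ax \leq b$, $x \geq 0$, assuming $b \geq 0$ so that the all-slack basis is feasible and no phase-one procedure is needed.

```python
# Bare-bones tableau simplex: maximize c^T x  s.t.  A x <= b, x >= 0,
# assuming b >= 0 so the all-slack basis is a feasible starting point.
def simplex(c, A, b):
    m, n = len(A), len(c)
    # Tableau rows: constraints, then the objective; columns: x, slacks, rhs.
    T = [row[:] + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
         for i, row in enumerate(A)]
    T.append([-cj for cj in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))
    while True:
        # Entering variable: most negative objective-row coefficient.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= -1e-12:
            break  # optimal
        # Leaving variable: minimum ratio test over positive entries.
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-12]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, piv_row = min(ratios)
        # Pivot on (piv_row, piv_col).
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
        basis[piv_row] = piv_col
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# maximize 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6,  x >= 0
x, val = simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
```

This sketch omits everything the book treats carefully (cycling, numerical stability, efficient basis updates); it is only meant to make the pivoting loop concrete.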

### Chapter 3. Degeneracy

Abstract
In the previous chapter, we discussed what it means when the ratios computed to calculate the leaving variable are all nonpositive (the problem is unbounded). In this chapter, we take up the more delicate issue of what happens when some of the ratios are infinite (i.e., their denominators vanish).
Robert J. Vanderbei

### Chapter 4. Efficiency of the Simplex Method

Abstract
In the previous chapter, we saw that the simplex method (with appropriate pivoting rules to guarantee no cycling) will solve any linear programming problem for which an optimal solution exists. In this chapter, we investigate just how fast it will solve a problem of a given size.
Robert J. Vanderbei

### Chapter 5. Duality Theory

Abstract
Associated with every linear program is another called its dual. The dual of this dual linear program is the original linear program (which is then referred to as the primal linear program). Hence, linear programs come in primal/dual pairs. It turns out that every feasible solution for one of these two linear programs gives a bound on the optimal objective function value for the other. These ideas are important and form a subject called duality theory, which is the topic of this chapter.
Robert J. Vanderbei
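The bounding property mentioned above (weak duality) is easy to check numerically. The instance below is illustrative, not from the book: for the primal (maximize $c^{T}x$, $Ax \leq b$, $x \geq 0$) and its dual (minimize $b^{T}y$, $A^{T}y \geq c$, $y \geq 0$), any pair of feasible solutions satisfies $c^{T}x \leq b^{T}y$.

```python
# Weak duality on a made-up instance:
#   primal: maximize c^T x  s.t.  A x <= b, x >= 0
#   dual:   minimize b^T y  s.t.  A^T y >= c, y >= 0
c = [4.0, 3.0]
A = [[2.0, 1.0],
     [1.0, 2.0]]
b = [8.0, 8.0]

x = [2.0, 2.0]   # primal feasible: 2*2+2 = 6 <= 8 and 2+2*2 = 6 <= 8
y = [2.0, 1.0]   # dual feasible:   2*2+1 = 5 >= 4 and 2+2*1 = 4 >= 3

assert all(sum(A[i][j] * x[j] for j in range(2)) <= b[i] for i in range(2))
assert all(sum(A[i][j] * y[i] for i in range(2)) >= c[j] for j in range(2))

primal_obj = sum(ci * xi for ci, xi in zip(c, x))   # 14
dual_obj = sum(bi * yi for bi, yi in zip(b, y))     # 24
assert primal_obj <= dual_obj                       # weak duality holds
```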

### Chapter 6. The Simplex Method in Matrix Notation

Abstract
So far, we have avoided using matrix notation to present linear programming problems and the simplex method. In this chapter, we shall recast everything into matrix notation. At the same time, we will emphasize the close relations between the primal and the dual problems.
Robert J. Vanderbei

### Chapter 7. Sensitivity and Parametric Analyses

Abstract
In this chapter, we consider two related subjects. The first, called sensitivity analysis (or postoptimality analysis), addresses the following question: having found an optimal solution to a given linear programming problem, how much can we change the data and have the current partition into basic and nonbasic variables remain optimal? The second subject addresses situations in which one wishes to solve not just one linear program, but a whole family of problems parametrized by a single real variable.
Robert J. Vanderbei

### Chapter 8. Implementation Issues

Abstract
In the previous chapter, we rewrote the simplex method using matrix notation. This is the first step toward our aim of describing the simplex method as one would implement it as a computer program. In this chapter, we shall continue in this direction by addressing some important implementation issues.
Robert J. Vanderbei

### Chapter 9. Problems in General Form

Abstract
Up until now, we have always considered our problems to be given in standard form. However, for real-world problems it is often convenient to formulate problems in the following form:
$$\displaystyle{ \text{maximize } {c}^{T}x\quad \text{subject to } a \leq Ax \leq b,\ \ l \leq x \leq u. }$$
In this chapter, we shall show how to modify the simplex method to handle problems presented in this form.
Robert J. Vanderbei

### Chapter 10. Convex Analysis

Abstract
This book is mostly about linear programming. However, this subject, important as it is, is just a subset of a larger subject called convex analysis. In this chapter, we shall give a brief introduction to this broader subject. In particular, we shall prove a few of the fundamental results of convex analysis and see that their proofs depend on some of the theory of linear programming that we have already developed.
Robert J. Vanderbei

### Chapter 11. Game Theory

Abstract
In this chapter, we shall study if not the most practical then certainly an elegant application of linear programming. The subject is called game theory, and we shall focus on the simplest type of game, called the finite two-person zero-sum game, or just matrix game for short. Our primary goal shall be to prove the famous Minimax Theorem, which was first discovered and proved by John von Neumann in 1928. His original proof of this theorem was rather involved and depended on another beautiful theorem from mathematics, the Brouwer Fixed-Point Theorem. However, it eventually became clear that the solution of matrix games could be found by solving a certain linear programming problem and that the Minimax Theorem is just a fairly straightforward consequence of the Duality Theorem.
Robert J. Vanderbei
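The Minimax Theorem can be illustrated on the standard matching-pennies game (an illustration, not an example taken from the book): with optimal mixed strategies, the amount the row player can guarantee equals the amount the column player can limit losses to, and that common quantity is the game's value.

```python
# Matching pennies: payoff to the row player is A[i][j]. With mixed
# strategies p (row) and q (column), the Minimax Theorem says
#   max_p min_q p^T A q = min_q max_p p^T A q = value of the game.
A = [[1.0, -1.0],
     [-1.0, 1.0]]
p = [0.5, 0.5]   # row player's optimal mixed strategy
q = [0.5, 0.5]   # column player's optimal mixed strategy
value = 0.0

# p guarantees at least `value` against every column pure strategy...
row_guarantees = [sum(p[i] * A[i][j] for i in range(2)) for j in range(2)]
# ...and q concedes at most `value` against every row pure strategy.
col_concessions = [sum(A[i][j] * q[j] for j in range(2)) for i in range(2)]
assert min(row_guarantees) >= value and max(col_concessions) <= value
```

Checking a candidate pair of strategies against all pure strategies, as above, is exactly the certificate of optimality that the chapter's linear programming formulation produces.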

### Chapter 12. Regression

Abstract
In this chapter, we shall study an application of linear programming to an area of statistics called regression. As a specific example, we shall use size and iteration-count data collected from a standard suite of linear programming problems to derive a regression estimate of the number of iterations needed to solve problems of a given size.
Robert J. Vanderbei
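As a toy version of the chapter's idea (the chapter itself develops both least-squares and least-absolute-deviations regression, the latter via linear programming), one can fit a log-log line through iteration counts by the closed-form least-squares slope. The data below are made up purely for illustration.

```python
import math

# Hypothetical (size, iteration-count) data; with counts exactly proportional
# to size, the log-log regression slope should come out to 1.
sizes = [10, 100, 1000]
iters = [15, 150, 1500]

xs = [math.log(s) for s in sizes]
ys = [math.log(t) for t in iters]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
# Closed-form least-squares slope from the normal equations.
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
```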

### Chapter 13. Financial Applications

Abstract
In this chapter, we shall study some applications of linear programming to problems in quantitative finance.
Robert J. Vanderbei

### Chapter 14. Network Flow Problems

Abstract
Many linear programming problems can be viewed as a problem of minimizing the “transportation” cost of moving materials through a network to meet demands for material at various locations given sources of material at other locations. Such problems are called network flow problems. They form the most important special class of linear programming problems. Transportation, electric, and communication networks provide obvious examples of application areas. Less obvious, but just as important, are applications in facilities location, resource management, financial planning, and others.
Robert J. Vanderbei
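The defining constraint of a network flow problem is flow conservation at every node. A minimal check on a made-up four-node instance (node names and numbers are illustrative only):

```python
# Arc flows on arcs (tail, head); b[node] gives supply (+) or demand (-).
# Conservation requires: flow out of a node minus flow into it equals b[node].
arcs = {("s", "a"): 3.0, ("s", "b"): 2.0, ("a", "t"): 3.0, ("b", "t"): 2.0}
b = {"s": 5.0, "a": 0.0, "b": 0.0, "t": -5.0}

net = {v: 0.0 for v in b}
for (u, v), flow in arcs.items():
    net[u] += flow   # flow leaving u
    net[v] -= flow   # flow entering v
assert net == b      # conservation holds at every node
```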

### Chapter 15. Applications

Abstract
In this chapter, we discuss briefly the most important applications of network flow problems.
Robert J. Vanderbei

### Chapter 16. Structural Optimization

Abstract
This final chapter on network-type problems deals with finding the best design of a structure to support a specified load at a fixed set of points. The topology of the problem is described by a graph where each node represents a joint in the structure and each arc represents a potential member.
Robert J. Vanderbei

### Chapter 17. The Central Path

Abstract
In this chapter, we begin our study of an alternative to the simplex method for solving linear programming problems. The algorithm we are going to introduce is called a path-following method. It belongs to a class of methods called interior-point methods. The path-following method seems to be the simplest and most natural of all the methods in this class, so in this book we focus primarily on it.
Robert J. Vanderbei

### Chapter 18. A Path-Following Method

Abstract
In this chapter, we define an interior-point method for linear programming that is called a path-following method. Recall that for the simplex method we required a two-phase solution procedure. The path-following method is a one-phase method. This means that the method can begin from a point that is neither primal nor dual feasible and it will proceed from there directly to the optimal solution.
Robert J. Vanderbei

### Chapter 19. The KKT System

Abstract
The most time-consuming aspect of each iteration of the path-following method is solving the system of equations that defines the step direction vectors Δx, Δy, Δw, and Δz:
$$A\Delta x + \Delta w = \rho \tag{19.1}$$
$$A^{T}\Delta y - \Delta z = \sigma \tag{19.2}$$
$$Z\Delta x + X\Delta z = \mu e - XZe \tag{19.3}$$
$$W\Delta y + Y\Delta w = \mu e - YWe. \tag{19.4}$$
Robert J. Vanderbei
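For intuition about what solving this system involves, consider the one-dimensional case ($m = n = 1$, so $A$, $X$, $Y$, $Z$, $W$ are scalars). Eliminating $\Delta z$ and $\Delta w$ reduces the four equations to a 2×2 system in $(\Delta x, \Delta y)$, which can be solved by hand. All numeric values below are illustrative only.

```python
# One-dimensional instance of the KKT step equations (19.1)-(19.4).
a, x, y, z, w = 2.0, 1.0, 1.0, 0.5, 0.5   # current iterate: x, y, z, w > 0
mu, rho, sigma = 0.1, 0.2, 0.3

# Eliminate dz = a*dy - sigma and dw = rho - a*dx, leaving the 2x2 system:
#    z*dx + a*x*dy = mu - x*z + x*sigma
#   -a*y*dx + w*dy = mu - y*w - y*rho
r1 = mu - x * z + x * sigma
r2 = mu - y * w - y * rho
det = z * w + (a * x) * (a * y)            # determinant of the 2x2 system
dx = (r1 * w - a * x * r2) / det           # Cramer's rule
dy = (z * r2 + a * y * r1) / det
dz = a * dy - sigma
dw = rho - a * dx

# Verify all four original equations.
assert abs(a * dx + dw - rho) < 1e-12                # (19.1)
assert abs(a * dy - dz - sigma) < 1e-12              # (19.2)
assert abs(z * dx + x * dz - (mu - x * z)) < 1e-12   # (19.3)
assert abs(w * dy + y * dw - (mu - y * w)) < 1e-12   # (19.4)
```

In higher dimensions the same elimination produces the reduced KKT system and the normal equations discussed in the next chapter.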

### Chapter 20. Implementation Issues for Interior-Point Methods

Abstract
In this chapter, we discuss implementation issues that arise in connection with the path-following method. The most important issue is the efficient solution of the systems of equations discussed in the previous chapter. As we saw, there are basically three choices, involving either the reduced KKT matrix,
$$B = \begin{bmatrix} -{E}^{-2} & A \\ {A}^{T} & {D}^{-2} \end{bmatrix}, \tag{20.1}$$
or one of the two matrices associated with the normal equations:
$$A{D}^{2}{A}^{T} + {E}^{-2} \tag{20.2}$$
or
$${A}^{T}{E}^{2}A + {D}^{-2}. \tag{20.3}$$
(Here, $${E}^{-2} = {Y}^{-1}W$$ and $${D}^{-2} = {X}^{-1}Z$$.) In the previous chapter, we explained that dense columns/rows are bad for the normal equations and that therefore one might be better off solving the system involving the reduced KKT matrix. But there is also a reason one might prefer to work with one of the systems of normal equations. The reason is that these matrices are positive definite. We shall show in the first section that there are important advantages in working with positive definite matrices. In the second section, we shall consider the reduced KKT matrix and see to what extent the nice properties possessed by positive definite matrices carry over to these matrices.
Robert J. Vanderbei
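The positive-definiteness claim can be checked directly on a tiny instance. The sketch below uses illustrative values: at an interior point the diagonals of $D^{2}$ and $E^{-2}$ are strictly positive, so $AD^{2}A^{T} + E^{-2}$ is symmetric positive definite (verified here for the 2×2 case via Sylvester's criterion), which is what makes Cholesky factorization applicable.

```python
# M = A D^2 A^T + E^{-2}, with D^2 = diag(d) and E^{-2} = diag(e);
# all numbers are illustrative (d, e > 0 at an interior point).
A = [[1.0, 2.0],
     [0.0, 1.0]]
d = [1.0, 4.0]   # diagonal of D^2
e = [0.5, 0.5]   # diagonal of E^{-2}

M = [[sum(A[i][k] * d[k] * A[j][k] for k in range(2))
      + (e[i] if i == j else 0.0)
      for j in range(2)] for i in range(2)]

# Symmetric, and positive definite by Sylvester's criterion (2x2 case):
# both leading principal minors are strictly positive.
assert M[0][1] == M[1][0]
assert M[0][0] > 0
assert M[0][0] * M[1][1] - M[0][1] * M[1][0] > 0
```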

### Chapter 21. The Affine-Scaling Method

Abstract
In the previous chapter, we showed that the step direction for the path-following method can be decomposed into a linear combination of three directions: a direction toward optimality, a direction toward feasibility, and a direction toward centrality. It turns out that these directions, or minor variants of them, arise in all interior-point methods.
Robert J. Vanderbei

### Chapter 22. The Homogeneous Self-Dual Method

Abstract
In Chapter 18, we described and analyzed an interior-point method called the path-following algorithm. This algorithm is essentially what one implements in practice, but as we saw in the section on convergence analysis, it is not easy (and perhaps not possible) to give a complete proof that the method converges to an optimal solution. Even if convergence were completely established, the question would still remain as to how fast the convergence is. In this chapter, we shall present a similar algorithm for which a complete convergence analysis can be given.
Robert J. Vanderbei

### Chapter 23. Integer Programming

Abstract
Many real-world problems could be modeled as linear programs except that some or all of the variables are constrained to be integers. Such problems are called integer programming problems. One might think that these problems wouldn’t be much harder than linear programming problems. For example, we saw in Chapter 14 that for network flow problems with integer data, the simplex method automatically produces integer solutions. But that was just luck. In general, one can’t expect to get integer solutions; in fact, as we shall see in this chapter, integer programming problems turn out to be generally much harder to crack than linear ones.
Robert J. Vanderbei

### Chapter 24. Quadratic Programming

Abstract
In Chapter 23, we studied a generalization of the linear programming problem in which variables were constrained to take on integer values. In this chapter, we consider a generalization of a different kind. Namely, we shall study the class of problems that would be linear programs except that the objective function is permitted to include terms involving products of pairs of variables. Such terms are called quadratic terms, and the problems we shall study are called quadratic programming problems.
Robert J. Vanderbei

### Chapter 25. Convex Programming

Abstract
In the last chapter, we saw that small modifications to the primal–dual interior-point algorithm allow it to be applied to quadratic programming problems as long as the quadratic objective function is convex. In this chapter, we shall go further and allow the objective function to be a general (smooth) convex function. In addition, we shall allow the feasible region to be any convex set given by a finite collection of convex inequalities.
Robert J. Vanderbei

### Backmatter

Further Information