
## About This Book

I was pleasantly surprised when I was asked by Springer-Verlag to prepare a second edition of this volume on Linear Optimization and Extensions, which - not exactly contrary to my personal expectations - has apparently been accepted reasonably well by the global optimization community. My objective in putting this book together was originally - and still is - to detail the major algorithmic ideas in linear optimization that have evolved in the past fifty years or so and that have changed the historical optimization "landscape" in substantial ways - both theoretically and computationally. While I may have overlooked the importance of some very recent developments - the work by Farid Alizadeh which generalizes linear programming to "semi-definite" programming is perhaps a candidate for one of my omissions - I think that major new breakthroughs on those two fronts that interest me - theory and computation - have not occurred since this book was published originally. As a consequence I have restricted myself to a thorough re-working of the original manuscript with the goal of making it more readable. Of course, I have taken this opportunity to correct a few "Schönheitsfehler" (cosmetic blemishes) of the first edition and to add some illustrations. The index to this volume has been extended substantially - to permit a hurried reader a quicker glance at the wealth of topics that were already covered in the first edition. As was the case with the first edition, Dr.

## Table of Contents

### 1. Introduction

Abstract
When mathematics hit the front page of the New York Times you just know that it must have commercial value. Indeed, linear programming gets to be mentioned in the popular press not infrequently and the following verbatim quotation from a front page article of the New York Times of November 19, 1984, while not exactly of a recent vintage, sums up very well part of why this is so. Under the heading “Breakthrough in Problem Solving” one reads in the second paragraph:
“The discovery, which is to be formally published next month, is already circulating rapidly through the mathematics world. It has also set off a deluge of inquiries from brokerage houses, oil companies and airlines, industries with millions of dollars at stake in problems known as linear programming.”

### 2. The Linear Programming Problem

Abstract
The general linear programming problem has, e.g., the form
$$\begin{array}{ll}
\text{minimize} & \sum_{j=1}^{n} c_j x_j \\
\text{subject to} & \sum_{j=1}^{n} a_j^i x_j \leqslant b_i \quad \text{for } i = 1,\ldots,p \\
& \sum_{j=1}^{n} a_j^i x_j = b_i \quad \text{for } i = p+1,\ldots,m \\
& x_j \geqslant 0 \quad \text{for } j = 1,\ldots,q
\end{array}$$
where $q \leqslant n$; that is, some variables (or “decision variables” or “activities”) $x_j$ for $j = q+1,\ldots,n$ may be unrestricted in sign (“free variables”). Inequalities of the type
$$\sum_{j=1}^{n} a_j^i x_j \leqslant b_i$$
(2.1)
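As a small, self-contained illustration of this general form, one can check a candidate point against the three groups of constraints (inequalities, equalities, sign restrictions). The helper names and the tiny instance below are hypothetical, not from the book:

```python
def is_feasible(A, b, x, p, q, tol=1e-9):
    """Check x against the general form: constraint rows 0..p-1 are '<=',
    rows p..m-1 are '=', and components x_0..x_{q-1} must be nonnegative."""
    m = len(A)
    for i in range(m):
        lhs = sum(A[i][j] * x[j] for j in range(len(x)))
        if i < p:                       # inequality constraints
            if lhs > b[i] + tol:
                return False
        elif abs(lhs - b[i]) > tol:     # equality constraints
            return False
    return all(x[j] >= -tol for j in range(q))

def objective(c, x):
    """The linear objective sum_j c_j x_j."""
    return sum(cj * xj for cj, xj in zip(c, x))

# Tiny instance: n = 2 variables, p = 1 inequality, one equality, q = 2.
A = [[1.0, 1.0],     # x1 + x2 <= 10
     [1.0, -1.0]]    # x1 - x2  = 2
b = [10.0, 2.0]
c = [1.0, 2.0]
print(is_feasible(A, b, [2.0, 0.0], p=1, q=2))   # True: all constraints hold
print(objective(c, [2.0, 0.0]))                  # 2.0
```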

### 3. Basic Concepts

Abstract
Consider the linear programming problem in standard form
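The standard form in question is the usual one: the general form of Chapter 2 with equality constraints only and all variables nonnegative,

$$\min\ \{\, c^T x \;:\; Ax = b,\ x \geqslant 0 \,\}.$$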

### 4. Five Preliminaries

Abstract
We now state five (really, four) essential steps in the development of an algorithm for the resolution of linear programs in standard form.

### 5. Simplex Algorithms

Abstract
We are now ready to state an iterative procedure for the resolution of the linear programming problem (LP) in standard form with descriptive “input data” m, n, A, b and c.

### 6. Primal-Dual Pairs

Abstract
For every linear programming problem we have a “dual” linear programming problem. Whereas in the original or primal linear program the variables are associated with the columns of the constraint matrix, in the dual linear program the variables are associated with the rows of the constraint matrix. To bring out the symmetry of the construction of a primal-dual pair of linear programs we consider first the general case.
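The row/column swap can be made concrete for the symmetric case: from the primal min{c&#8239;ᵀx : Ax ≥ b, x ≥ 0} one obtains the dual max{bᵀy : Aᵀy ≤ c, y ≥ 0}, with one dual variable per primal row and one dual constraint per primal column. The helper below is an illustrative sketch in this convention, not the book's notation (the chapter treats the general case):

```python
def dual_of_symmetric_primal(c, A, b):
    """Given the symmetric primal  min c^T x  s.t.  A x >= b, x >= 0,
    return (b, A_T, c) describing the dual  max b^T y  s.t.  A^T y <= c, y >= 0.
    One dual variable per row of A, one dual constraint per column."""
    A_T = [list(col) for col in zip(*A)]
    return b, A_T, c

# Primal with 2 rows and 3 columns: the dual has 2 variables, 3 constraints.
c = [3.0, 4.0, 1.0]
A = [[1.0, 2.0, 0.0],
     [3.0, 1.0, 1.0]]
b = [5.0, 6.0]
b_dual, A_T, c_dual = dual_of_symmetric_primal(c, A, b)
print(A_T)    # [[1.0, 3.0], [2.0, 1.0], [0.0, 1.0]]
```

Transposing twice recovers the original matrix, which mirrors the fact that the dual of the dual is again the primal.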

### 7. Analytical Geometry

Abstract
In its abbreviated form ΑΓΕΩΜΕΤΡΗΤΟΣ ΜΗ ΕΙΣΙΤΩ Plato’s dictum — in capital Greek letters, of course — was chiseled into the portal to the Athenian Academy and you find it today in the seal of the American Mathematical Society. Have you ever thought about what academy means? Probably not; but you are in academe and so here is the story. While Socrates (c. 470–399 B.C.) taught like so many before him in public — on the Athenian marketplace or Ἀγορά — Plato founded the Academy near Athens around 387 B.C. The word academy or Ἀϰαδημία comes from its location in the grove of Academos, so named after a mythological Attic hero.

### 8. Projective Algorithms

Abstract
If the above quotation is Greek to you, you are quite right. In any case, counting the letters in each word separately you get 3.14159... which is a start on the number Pi (π) that was found in a sandbox of the Mediterranean well over 2,000 years ago and which got you all excited about mathematics in grade school already. And, of course, “God the Almighty always does geometry” is what it freely translates to.

### 9. Ellipsoid Algorithms

Abstract
Gaius Julius Caesar (101–44 B.C.) made military use of it when his legions conquered Gaul, Machiavelli coined it as a political doctrine for Florentine princes to improve his princes’ control over their subjects, and mathematicians employ it e.g. when they use binary search to locate the optimum of a p-unimodal function over a compact convex subset of ℝ¹. Divide and conquer, or more precisely divide and reign, is what Niccolò’s doctrine translates to. The compact convex subset of ℝ¹ is, of course, some finite interval of the real line and the basic idea of binary search consists in successively halving this interval — as we did in Chapter 7.5.3. Utilizing the p-unimodality (see below) of the function to be optimized, we then discard one half of the original interval forever and continue the search in the other half of the original interval — which is again a compact convex subset of ℝ¹. Consequently, the basic idea can be reapplied and we can iterate. Since the length of the left-over interval is half the length of the previous interval, the 1-dimensional “volume” of the remaining compact convex subset of ℝ¹ to be searched shrinks at a geometric rate and we obtain fast convergence of the iterative scheme.
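The interval-halving scheme can be sketched as follows. This is a minimal sketch that assumes, more restrictively than the p-unimodality the text works with, access to the sign of the derivative of a unimodal function; the function name is hypothetical:

```python
def bisect_minimize(f_prime, lo, hi, tol=1e-8):
    """Locate the minimizer of a unimodal function on [lo, hi] by
    successively halving the interval: the sign of the derivative at
    the midpoint tells us which half can be discarded forever."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f_prime(mid) > 0.0:   # function increasing: minimizer lies left
            hi = mid
        else:                    # function decreasing (or flat): go right
            lo = mid
    return (lo + hi) / 2.0

# f(x) = (x - 3)^2 on [0, 10]: after k halvings the interval has length 10 / 2^k.
x_star = bisect_minimize(lambda x: 2.0 * (x - 3.0), 0.0, 10.0)
print(round(x_star, 6))   # 3.0
```

The geometric shrinkage the text describes is visible in the loop invariant: each iteration exactly halves `hi - lo`, so roughly log₂(10/tol) iterations suffice.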