
Table of Contents

Frontmatter

Chapter 1. An Algorithm for Solving Linear Programming Problems in \(O(n^3L)\) Operations

Abstract
This chapter describes a short-step penalty function algorithm that solves linear programming problems in no more than \(O(n^{0.5}L)\) iterations. The total number of arithmetic operations is bounded by \(O(n^3L)\), carried out with the same precision as in Karmarkar’s algorithm. Each iteration updates a penalty multiplier and performs a Newton-Raphson iteration on the traditional logarithmic barrier function using approximated Hessian matrices. The resulting sequence follows the path of optimal solutions of the penalized functions, as in a predictor-corrector homotopy algorithm.
Clovis C. Gonzaga
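As a rough sketch of how an iteration bound of this kind arises (the barrier form below and the constant \(\theta\) are the generic ingredients of a short-step analysis, not the chapter’s exact statement), the iterates approximately minimize the penalized function
\[
f_\mu(x) = c^T x - \mu \sum_{j=1}^{n} \ln x_j , \qquad \mu_{k+1} = \Bigl(1 - \frac{\theta}{\sqrt{n}}\Bigr)\mu_k ,
\]
with one approximate Newton-Raphson step per update of the multiplier \(\mu\); since \(\mu_k \le e^{-k\theta/\sqrt{n}}\,\mu_0\), the multiplier drops below \(2^{-O(L)}\) after \(k = O(n^{0.5}L)\) iterations.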

Chapter 2. A Primal-Dual Interior Point Algorithm for Linear Programming

Abstract
This chapter presents an algorithm that works simultaneously on primal and dual linear programming problems and generates a sequence of pairs of their interior feasible solutions. Along the sequence generated, the duality gap converges to zero at least linearly with a global convergence ratio (1 − η/n); each iteration reduces the duality gap by at least the fraction η/n. Here n denotes the size of the problems and η a positive number depending on the initial interior feasible solutions of the problems. The algorithm is based on an application of the classical logarithmic barrier function method to primal and dual linear programs, which has recently been proposed and studied by Megiddo.
Masakazu Kojima, Shinji Mizuno, Akiko Yoshise
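The stated per-iteration reduction already gives an iteration bound. Writing \(\Delta_k\) for the duality gap at iteration \(k\) (notation introduced here only for illustration),
\[
\Delta_k \le \Bigl(1 - \frac{\eta}{n}\Bigr)^{k}\Delta_0 \le e^{-k\eta/n}\,\Delta_0 ,
\]
so the gap falls below a tolerance \(\varepsilon\) once \(k \ge (n/\eta)\ln(\Delta_0/\varepsilon)\), for example after \(O((n/\eta)L)\) iterations when \(\varepsilon = 2^{-O(L)}\).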

Chapter 3. An Extension of Karmarkar’s Algorithm and the Trust Region Method for Quadratic Programming

Abstract
An extension of Karmarkar’s algorithm and the trust region method is developed for solving quadratic programming problems. This extension is based on the affine scaling technique, followed by optimization over a trust ellipsoidal region. It generates a sequence of interior feasible points that converges to an optimal solution. The initial computational results reported here suggest the potential usefulness of this algorithm in practice.
Yinyu Ye
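One plausible way to write the per-iteration subproblem suggested by the abstract (the notation, the quadratic objective \(q(x) = c^T x + \tfrac12 x^T Q x\), and the radius \(\beta\) are illustrative assumptions, not the chapter’s exact formulation): at an interior feasible point \(x^k\) with scaling matrix \(X_k = \mathrm{diag}(x^k)\), the step \(d\) solves
\[
\min_{d}\; \nabla q(x^k)^T d + \tfrac12\, d^T Q d
\quad\text{subject to}\quad Ad = 0, \;\; \|X_k^{-1} d\| \le \beta < 1 ,
\]
after which \(x^{k+1} = x^k + d\) remains interior and feasible.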

Chapter 4. Approximate Projections in a Projective Method for the Linear Feasibility Problem

Abstract
The key issue in implementing a projective method is the projection operation. In order to cut down computations, several authors have suggested using approximations instead of the projection itself. However, these approximations are not directly compatible with the standard proofs of polynomial complexity. In this chapter we present a relaxation of the main convergence lemma which makes it possible to accommodate approximate projections. We propose several types of approximations that preserve polynomial complexity for the version of Karmarkar’s algorithm presented by de Ghellinck and Vial.
Jean-Philippe Vial
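As a minimal numerical sketch of the general idea of replacing the exact projection by an inexact solve (this is not one of the specific approximation schemes analyzed in the chapter; the data A, c and the truncated conjugate-gradient solve are toy assumptions):

import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))    # toy constraint matrix (assumption)
c = rng.standard_normal(50)

# Exact projection of c onto the null space of A: p = c - A^T (A A^T)^{-1} A c
y_exact = np.linalg.solve(A @ A.T, A @ c)
p_exact = c - A.T @ y_exact

# Approximate projection: solve (A A^T) y = A c only roughly,
# here by truncating the conjugate-gradient iteration.
y_cg, _info = cg(A @ A.T, A @ c, maxiter=5)
p_approx = c - A.T @ y_cg

# The residual A p measures how far each result is from the true null space.
print(np.linalg.norm(A @ p_exact), np.linalg.norm(A @ p_approx))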

Chapter 5. A Locally Well-Behaved Potential Function and a Simple Newton-Type Method for Finding the Center of a Polytope

Abstract
The center of a bounded full-dimensional polytope \(P = \{x : Ax \ge b\}\) is the unique point ω that maximizes the strictly concave potential function \(F(x) = \sum_{i=1}^{m} \ln(a_i^T x - b_i)\) over the interior of P. Let \(x^0\) be a point in the interior of P. We show that the first two terms in the power series of F(x) at \(x^0\) serve as a good approximation to F(x) in a suitable ellipsoid around \(x^0\), and that maximizing the first-order (linear) term in the power series over this ellipsoid increases F(x) by a fixed additive constant as long as \(x^0\) is not too close to the center ω.
Pravin M. Vaidya
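A minimal numerical sketch of a Newton-type step toward the center (this is a generic damped Newton iteration on F, not Vaidya’s ellipsoid-constrained step; the polytope data A, b and the starting point below are toy assumptions):

import numpy as np

# Triangle P = {x : x1 >= 0, x2 >= 0, x1 + x2 <= 3}, written as Ax >= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -3.0])
x = np.array([0.5, 0.5])                # interior starting point

for _ in range(10):
    s = A @ x - b                       # slacks a_i^T x - b_i, kept positive
    grad = A.T @ (1.0 / s)              # gradient of F(x) = sum_i ln(a_i^T x - b_i)
    H = A.T @ np.diag(1.0 / s**2) @ A   # minus the Hessian of F (positive definite)
    dx = np.linalg.solve(H, grad)       # Newton direction for maximizing F
    t = 1.0                             # damping keeps the iterate in the interior
    while np.any(A @ (x + t * dx) - b <= 0):
        t *= 0.5
    x = x + t * dx

print(x)                                # approaches the center, here (1, 1)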

Chapter 6. A Note on Comparing Simplex and Interior Methods for Linear Programming

Abstract
This chapter discusses some aspects of the computational comparison of the simplex method and the new interior point (barrier) methods for linear programming. In particular, we consider classes of problems with which the simplex method has traditionally had difficulty and present some computational results.
J. A. Tomlin

Chapter 7. Pricing Criteria in Linear Programming

Abstract
In this chapter we discuss gradient-based descent methods for solving a linear program. The definition of a local reduced model and selection of a suitable direction of descent provide the overall framework within which we discuss some existing techniques for linear programming and explore other new ones. In particular, we propose a null space affine (scaling) technique that is motivated by the approach of Karmarkar (1984) but builds more directly on the simplex method. We discuss algorithmic considerations based on this approach and issues of effective implementation, and we report the results of a simple, yet instructive, numerical experiment.
J. L. Nazareth
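For orientation only, one classical instance of such a scaled descent direction (the generic primal affine-scaling direction, not necessarily the null space variant developed in the chapter): for \(\min\{c^T x : Ax = b,\ x \ge 0\}\) at an interior point \(x\) with \(X = \mathrm{diag}(x)\),
\[
d = -X\,P_{AX}(Xc), \qquad P_{AX} = I - XA^T\bigl(AX^2A^T\bigr)^{-1}AX ,
\]
that is, steepest descent for the objective after the scaling \(x \mapsto X^{-1}x\), projected onto the null space of the scaled constraint matrix.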

Chapter 8. Pathways to the Optimal Set in Linear Programming

Abstract
This chapter presents continuous paths leading to the set of optimal solutions of a linear programming problem. These paths are derived from the weighted logarithmic barrier function. The defining equations are bilinear and have some nice primal-dual symmetry properties. Extensions to the general linear complementarity problem are indicated.
Nimrod Megiddo
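In standard-form notation (an illustrative restatement with weights \(w_i > 0\), not the chapter’s exact development), such a weighted-barrier path consists of the solutions \((x, y, s)\), parametrized by \(\mu > 0\), of
\[
Ax = b, \qquad A^T y + s = c, \qquad x_i s_i = \mu w_i \;\; (i = 1,\dots,n), \qquad x > 0,\; s > 0 ,
\]
which makes both the bilinearity and the primal-dual symmetry of the defining equations explicit; as \(\mu \to 0\) the path leads to the optimal sets of the primal and dual problems.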