
About this Book

This self-contained monograph presents the reader with an authoritative view of Continuous Optimization, an area of mathematical optimization that has experienced major developments over the past 40 years. The book contains results that have not yet been covered in a systematic way, as well as a summary of results on nonlinear rescaling (NR) theory and methods developed over the last several decades. It is aimed at graduate students in applied mathematics, computer science, and economics, as well as at researchers working in optimization and those applying optimization methods to real-life problems. Exercises throughout give graduate students and instructors the material for a two-semester course in Continuous Optimization.

The topical coverage includes interior point methods, self-concordance theory and related complexity issues, first- and second-order methods with accelerated convergence, and nonlinear rescaling (NR) theory and exterior point methods, to mention just a few. The book offers a unified approach to both interior and exterior point methods, with emphasis on the crucial role of duality. One of the book's main achievements is to show what makes exterior point methods numerically attractive, and why.

The book is composed of five parts. The first part contains the basics of calculus, convex analysis, and elements of unconstrained optimization, as well as classical results of linear and convex optimization. The second part contains the basics of self-concordance theory and interior point methods, including complexity results for LP, QP, QP with quadratic constraints, semidefinite programming, and conic programming. In the third part, the NR and Lagrangian transformation theories are considered and exterior point methods are described. Three important equilibrium-finding problems are considered in the fourth part. In the fifth and final part of the book, several important applications arising in economics, structural optimization, medicine, statistical learning theory, and more are detailed. Numerical results, obtained by solving a number of real-life and test problems, are also provided.

Table of Contents

Frontmatter

Chapter 1. Introduction

Abstract
The first steps in Optimization go back to ancient times, when several isoperimetric problems were solved.
Roman A. Polyak

Chapter 2. Elements of Calculus and Convex Analysis

Abstract
Along with elements of calculus, we consider some basic facts of convex analysis, including convex functions, convex sets, and their most important properties.
Roman A. Polyak

Chapter 3. Few Topics in Unconstrained Optimization

Abstract
Unconstrained optimization is, on the one hand, a classical topic in optimization, which goes back to the seventeenth century. On the other hand, it is a modern one, which has undergone substantial transformation in the last few decades.
Roman A. Polyak

Chapter 4. Optimization with Equality Constraints

Abstract
In 1797 Lagrange published in Paris his treatise on the Theory of Analytic Functions. In this fundamental book, he describes his method for solving constrained optimization problems.
Roman A. Polyak

Chapter 5. Basics in Linear and Convex Optimization

Abstract
In the first part of this chapter we cover basic LP facts, including LP duality, special LP structure, as well as two main tools for solving LP: simplex and interior point methods (IPMs).
Roman A. Polyak
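The abstract above mentions the two main tools for solving LP, the simplex method and interior point methods. As orientation only (not material from the book), here is a minimal log-barrier path-following sketch on a toy LP; the problem data, barrier schedule, and tolerances are all illustrative assumptions:

```python
import numpy as np

# Toy LP (data made up for illustration): min c^T x  s.t.  A x <= b,
# i.e. x1 >= 0, x2 >= 0, x1 + x2 <= 1; the optimum is the vertex (0, 1).
c = np.array([-1.0, -2.0])
A = np.array([[-1.0, 0.0],
              [0.0, -1.0],
              [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])

def phi(x, t):
    """Barrier objective t*c@x - sum(log(slacks)); +inf if infeasible."""
    s = b - A @ x
    return np.inf if np.any(s <= 0.0) else t * (c @ x) - np.log(s).sum()

x = np.array([0.25, 0.25])   # strictly feasible starting point
t = 1.0                      # barrier parameter; duality gap <= m/t
for _ in range(6):
    for _ in range(50):      # damped Newton steps on phi(., t)
        s = b - A @ x
        g = t * c + A.T @ (1.0 / s)
        H = A.T @ np.diag(1.0 / s ** 2) @ A
        dx = -np.linalg.solve(H, g)
        if -(g @ dx) / 2.0 < 1e-10:        # Newton decrement small: done
            break
        a = 1.0
        while phi(x + a * dx, t) > phi(x, t) + 0.25 * a * (g @ dx):
            a *= 0.5                       # backtrack: stay feasible, descend
        x = x + a * dx
    t *= 10.0                # tighten the barrier, follow the central path

print(x, c @ x)              # -> near (0, 1), objective near -2
```

Each outer pass tightens the barrier parameter t, so the iterates follow the central path toward the optimal vertex while remaining strictly feasible, which is the defining trait of interior point methods.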

Chapter 6. Self-Concordant Functions and IPM Complexity

Abstract
The introduction by N. Karmarkar of his projective transformation method for LP in 1984; the finding by P. Gill et al. (1986) of a connection between Karmarkar's method and Newton's log-barrier method; and the rediscovery of the affine scaling (AS) method.
Roman A. Polyak

Chapter 7. Nonlinear Rescaling: Theory and Methods

Abstract
The first results on Nonlinear Rescaling (NR) theory and methods were obtained in the early 1980s. The purpose was to find an alternative to SUMT free from its well-known shortcomings (see Chapter 5). Roughly speaking, for inequality-constrained optimization, NR is to SUMT as AL is to Courant's penalty method for ECO. Unfortunately, the NR results were published almost 10 years later.
Roman A. Polyak

Chapter 8. Realizations of the NR Principle

Abstract
We concentrate on three important realizations of the NR principle: the modified log-barrier function (MBF), the exterior distance function (EDF), and the log-sigmoid (LS) Lagrangian. The corresponding NR methods can be viewed as alternatives to both SUMT and IPMs (see Chapters 5 and 6).
Roman A. Polyak
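To make the MBF realization concrete (a sketch under our own assumptions, not the book's text or notation): for constraints c_i(x) >= 0, the MBF Lagrangian F(x, lam, k) = f(x) - (1/k) * sum_i lam_i * ln(1 + k*c_i(x)) is defined slightly outside the feasible set, and alternating its minimization in x with the multiplier rescaling lam_i <- lam_i / (1 + k*c_i(x)) gives a multiplier method. A hypothetical one-constraint toy instance:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy problem (made up): min f(x) = (x - 2)^2  s.t.  c(x) = 1 - x >= 0.
# KKT solution: x* = 1, multiplier lam* = 2.
f = lambda x: (x - 2.0) ** 2
c = lambda x: 1.0 - x

k = 5.0      # scaling parameter, kept FIXED (unlike a barrier parameter)
lam = 1.0    # initial multiplier
x = 0.0

for _ in range(30):
    # MBF Lagrangian F(x, lam, k) = f(x) - (lam/k) ln(1 + k c(x));
    # its domain c(x) > -1/k extends beyond the feasible set.
    F = lambda y: f(y) - (lam / k) * np.log(1.0 + k * c(y))
    x = minimize_scalar(F, bounds=(-10.0, 1.0 + 1.0 / k - 1e-9),
                        method="bounded", options={"xatol": 1e-12}).x
    # Multiplier rescaling; its fixed point recovers the KKT multiplier.
    lam = lam / (1.0 + k * c(x))

print(x, lam)   # -> approximately 1.0 and 2.0
```

Note that the early iterates satisfy x > 1, i.e. they are infeasible for the original problem: the method approaches the solution from the exterior, while the multipliers converge to the KKT multiplier with k held fixed.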

Chapter 9. Lagrangian Transformation and Interior Ellipsoid Methods

Abstract
The NR approach produced a number of multipliers methods, which are primal exterior and dual interior.
Roman A. Polyak

Chapter 10. Finding Nonlinear Equilibrium

Abstract
Optimization problems, finding a saddle point of a convex–concave function, matrix games, J. Nash's equilibrium of an n-person concave game, Walras–Wald equilibrium, nonlinear equilibrium (NE) for optimal resource allocation (ORA), and nonlinear input–output equilibrium (NIOE) are all particular cases of the general NE problem.
Roman A. Polyak

Chapter 11. Applications and Numerical Results

Abstract
In this chapter we describe several real-life applications and provide results obtained by solving truss topology design (TTD), intensity-modulated radiation therapy planning, support vector machine, non-negative least squares, and economic equilibrium problems. We also provide numerical results from testing both nonlinear and linear optimization problems. The results obtained strongly corroborate the theory.
Roman A. Polyak
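Among the applications listed above is non-negative least squares. For readers wanting a reference point for the problem itself, here it is on a small synthetic instance, solved with SciPy's classical Lawson–Hanson NNLS routine rather than the book's NR-based solvers; all data are made up for illustration:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic instance: min ||A x - b||_2  s.t.  x >= 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))                # well-conditioned tall matrix
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])   # sparse non-negative signal
b = A @ x_true                                  # noiseless observations

x_hat, rnorm = nnls(A, b)   # classical Lawson-Hanson active-set solver
print(x_hat, rnorm)         # x_hat recovers x_true; residual norm ~ 0
```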

Backmatter
