
Lagrangian Transformation and Interior Ellipsoid Methods in Convex Optimization

Journal of Optimization Theory and Applications

Abstract

The rediscovery of the affine scaling method in the late 1980s was one of the turning points that opened a new chapter in modern optimization: interior point methods (IPMs). Simultaneously and independently, the theory of exterior point methods for convex optimization arose. The two seemingly unconnected fields turned out to be intrinsically connected. The purpose of this paper is to show the connections between primal exterior point methods and dual IPMs. Our main tool is the Lagrangian transformation (LT), which, for inequality constrained optimization, retains the best features of the classical augmented Lagrangian. We show that the primal exterior LT method is equivalent to the dual interior ellipsoid method (IEM). Using this equivalence, we prove convergence, estimate the convergence rate, and establish the complexity bound for the IEM, assuming boundedness of both the primal and the dual optimal sets. We also show that applying the LT method with the modified barrier transformation to linear programming (LP) leads to Dikin’s affine scaling method for the dual LP.
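For orientation, the following is a minimal sketch of a transformation-based multiplier scheme of the kind the abstract refers to; the precise definition of the Lagrangian transformation, the admissible class of transformations \(\psi\), and the multiplier update used in the paper are given there and may differ in detail from this generic form. Assume the standard convex program \(\min\{f(x) : c_i(x) \ge 0,\ i=1,\dots,m\}\) and a smooth, increasing, strictly concave \(\psi\) with \(\psi(0)=0\) and \(\psi'(0)=1\); the modified barrier transformation \(\psi(t)=\ln(t+1)\) is one such choice.

\[
\mathcal{L}(x,\lambda,k) := f(x) - k^{-1}\sum_{i=1}^{m} \psi\bigl(k\lambda_i c_i(x)\bigr)
\]

A generic LT-type step alternates a primal minimization in \(x\) with a multiplier update driven by \(\psi'\):

\[
x^{s+1} \in \operatorname*{arg\,min}_{x}\ \mathcal{L}(x,\lambda^{s},k), \qquad
\lambda_i^{s+1} = \lambda_i^{s}\,\psi'\bigl(k\lambda_i^{s} c_i(x^{s+1})\bigr), \quad i=1,\dots,m.
\]

With this update, \(\nabla_x L(x^{s+1},\lambda^{s+1}) = \nabla_x \mathcal{L}(x^{s+1},\lambda^{s},k) = 0\) for the classical Lagrangian \(L(x,\lambda) = f(x) - \sum_{i=1}^{m}\lambda_i c_i(x)\), which is the usual consistency check for multiplier methods of this type.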


References

  1. Dikin, I.: Iterative solutions of linear and quadratic programming problems. Sov. Math. Dokl. 8, 674–675 (1967)

  2. Barnes, E.: A variation on Karmarkar’s algorithm for solving linear programming problems. Math. Program. 36, 174–182 (1986)

  3. Vanderbei, R., Meketon, M., Freedman, B.: A modification of Karmarkar’s linear programming algorithm. Algorithmica 1(4), 395–407 (1986)

  4. Karmarkar, N.: A new polynomial time algorithm for linear programming. Combinatorica 4, 373–395 (1984)

  5. Gill, P., Murray, W., Saunders, M., Tomlin, J., Wright, M.: On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method. Math. Program. 36, 183–209 (1986)

  6. Gonzaga, C.: An algorithm for solving linear programming problems in O(\(n^3L\)) operations. In: Megiddo, N. (ed.) Progress in Mathematical Programming, pp. 1–82. Springer, Berlin (1988)

  7. Renegar, J.: A polynomial-time algorithm, based on Newton’s method, for linear programming. Math. Program. 40, 59–93 (1988)

  8. Potra, F.A., Wright, S.J.: Interior-point methods. J. Comput. Appl. Math. 124, 281–302 (2000)

  9. Nesterov, Yu., Nemirovsky, A.: Interior Point Polynomial Algorithms in Convex Programming. SIAM, Philadelphia (1994)

  10. Nesterov, Yu.: Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, Norwell (2004)

  11. Auslender, A., Teboulle, M., Ben-Tiba, S.: Interior proximal and multiplier methods based on second-order homogeneous kernels. Math. Oper. Res. 24(3), 645–668 (1999)

  12. Bertsekas, D.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York (1982)

  13. Polyak, R.: Modified barrier functions (theory and methods). Math. Program. 54, 177–222 (1992)

  14. Polyak, R., Teboulle, M.: Nonlinear rescaling and proximal-like methods in convex optimization. Math. Program. 76, 265–284 (1997)

  15. Ben-Tal, A., Yuzefovich, B., Zibulevsky, M.: Penalty-barrier multipliers method for minimax and constrained smooth convex optimization. Research Report 9–92, Technion (1992)

  16. Ben-Tal, A., Zibulevsky, M.: Penalty-barrier methods for convex programming problems. SIAM J. Optim. 7, 347–366 (1997)

  17. Polyak, R.: Nonlinear rescaling vs. smoothing technique in convex optimization. Math. Program. 92, 197–235 (2002)

  18. Tseng, P., Bertsekas, D.: On the convergence of the exponential multiplier method for convex programming. Math. Program. 60, 1–19 (1993)

  19. Teboulle, M.: Entropic proximal mappings with applications to nonlinear programming. Math. Oper. Res. 17, 97–116 (1992)

  20. Griva, I., Polyak, R.: Primal–dual nonlinear rescaling method with dynamic scaling parameter update. Math. Program. Ser. A 106, 237–259 (2006)

  21. Kočvara, M., Stingl, M.: On the solution of large-scale SDP problems by the modified barrier method using iterative solvers. Math. Program. Ser. B 109(2–3), 413–444 (2007)

  22. Polyak, R.: Nonlinear rescaling as interior quadratic prox method in convex optimization. Comput. Optim. Appl. 35, 347–373 (2006)

  23. Polyak, R.: Lagrangian Transformation in Convex Optimization, Research Report-072003, pp. 1–23. Department of SEOR and Mathematical Sciences, GMU, Fairfax (2003)

  24. Matioli, L., Gonzaga, C.: A new family of penalties for augmented Lagrangian methods. Numer. Linear Algebra Appl. 15, 925–944 (2008)

  25. Fiacco, A., McCormick, G.: Nonlinear Programming: Sequential Unconstrained Minimization Techniques. SIAM, Philadelphia (1990). doi:10.1137/1.9781611971316

  26. Bregman, L.: The relaxation method for finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 7, 200–217 (1967)

  27. Censor, Y., Zenios, S.: The proximal minimization algorithm with d-functions. J. Optim. Theory Appl. 73, 451–464 (1992)

  28. Chen, G., Teboulle, M.: Convergence analysis of a proximal-like minimization algorithm using Bregman functions. SIAM J. Optim. 3(4), 538–543 (1993)

  29. Eckstein, J.: Nonlinear proximal point algorithms using Bregman functions, with applications to convex programming. Math. Oper. Res. 18(1), 202–226 (1993)

  30. Hestenes, M.R.: Multipliers and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)

  31. Polyak, B.: Introduction to Optimization. Optimization Software, New York (1987)

  32. Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, London (1969)

  33. Rockafellar, R.T.: A dual approach to solving nonlinear programming problems by unconstrained minimization. Math. Program. 5, 354–373 (1973)

  34. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)

  35. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97–116 (1976)

  36. Guler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)

  37. Polyak, R.: Primal–dual exterior point method for convex optimization. Optim. Methods Softw. 23(1), 141–160 (2008)


Acknowledgments

The research was supported by NSF Grant CCF–0836338.

Author information

Corresponding author

Correspondence to Roman Polyak.

Additional information

Communicated by Florian A. Potra.

In memory of Ilya Dikin.

Cite this article

Polyak, R. Lagrangian Transformation and Interior Ellipsoid Methods in Convex Optimization. J Optim Theory Appl 164, 966–992 (2015). https://doi.org/10.1007/s10957-014-0527-5
