
Non-euclidean restricted memory level method for large-scale convex optimization

Published in Mathematical Programming

Abstract.

We propose a new subgradient-type method for minimizing extremely large-scale nonsmooth convex functions over “simple” domains. The characteristic features of the method are (a) the possibility of adjusting the scheme to the geometry of the feasible set, thus allowing one to obtain (nearly) dimension-independent (and nearly optimal in the large-scale case) rate-of-convergence results for minimization of a convex Lipschitz continuous function over a Euclidean ball, a standard simplex, and a spectahedron (the set of positive semidefinite symmetric matrices, of given size, with unit trace); (b) flexible handling of accumulated information, allowing for a tradeoff between how fully this information is utilized and the per-iteration complexity. We present extensions of the scheme to the cases of minimizing non-Lipschitzian convex objectives, finding saddle points of convex-concave functions, and solving variational inequalities with monotone operators. Finally, we report encouraging numerical results of experiments with test problems of dimensions up to 66,000.
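To make the “adjusting the scheme to the geometry of the feasible set” idea concrete, the sketch below shows the simplest non-Euclidean subgradient scheme over the standard simplex: mirror descent with the entropy prox-function, whose multiplicative update yields the nearly dimension-independent rate the abstract alludes to. This is an illustration only, not the authors' restricted memory level method; the function names, the test objective, and the step-size rule are our own assumptions.

```python
import numpy as np

def entropy_md_step(x, g, t):
    """One mirror-descent step with the entropy prox-function on the simplex.

    The entropy mirror map turns the subgradient step into a multiplicative
    update followed by re-normalization onto the simplex.
    """
    y = x * np.exp(-t * g)
    return y / y.sum()

def minimize_over_simplex(f_subgrad, n, iters=500):
    """Minimize a convex Lipschitz function over the standard simplex.

    f_subgrad(x) must return (f(x), g) with g a subgradient of f at x.
    Illustrative helper, not part of the paper's scheme.
    """
    x = np.full(n, 1.0 / n)            # start at the barycenter
    best_x, best_f = x, np.inf
    for k in range(1, iters + 1):
        fval, g = f_subgrad(x)
        if fval < best_f:
            best_x, best_f = x, fval
        # O(sqrt(log(n)/k)) step size, scaled by the sup-norm of the
        # subgradient (the norm dual to the l1 geometry of the simplex).
        t = np.sqrt(2.0 * np.log(n)) / (np.sqrt(k) * max(np.abs(g).max(), 1e-12))
        x = entropy_md_step(x, g, t)
    return best_x, best_f

# Example: f(x) = ||A x - b||_inf, a nonsmooth convex objective.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
b = rng.standard_normal(20)

def f_subgrad(x):
    r = A @ x - b
    i = np.argmax(np.abs(r))
    return np.abs(r).max(), np.sign(r[i]) * A[i]

x_star, f_star = minimize_over_simplex(f_subgrad, 50)
```

The point of the non-Euclidean setup is visible in the step size: the dependence on the dimension enters only through sqrt(log n), in contrast with the Euclidean projected-subgradient method on the simplex.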



Author information


Correspondence to Aharon Ben-Tal.

Additional information

This research was supported by the Technion Fund for Promotion of Research.


Cite this article

Ben-Tal, A., Nemirovski, A. Non-euclidean restricted memory level method for large-scale convex optimization. Math. Program. 102, 407–456 (2005). https://doi.org/10.1007/s10107-004-0553-4

