
Global Convergence of a Memory Gradient Method for Unconstrained Optimization

Published in Computational Optimization and Applications.

Abstract

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method that generates a descent search direction for the objective function at every iteration. We show that our method converges globally to a solution whenever the step sizes satisfy the Wolfe conditions within a line search strategy. Our numerical results show that the proposed method is efficient on standard test problems, provided the parameter included in the method is chosen suitably.
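To make the scheme concrete, here is a minimal Python sketch of a generic memory gradient iteration d_k = -g_k + beta_k * d_{k-1} combined with a Wolfe line search. It is an illustration in the spirit of the abstract, not the authors' method: the damping rule for beta_k, the parameter gamma, and the name memory_gradient are assumptions made for this example, and the Wolfe conditions are checked by SciPy's line_search.

import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def memory_gradient(f, grad, x0, gamma=0.5, tol=1e-6, max_iter=1000):
    """Generic memory gradient sketch: d_k = -g_k + beta_k * d_{k-1}.

    beta_k is damped so that beta_k * |g_k . d_{k-1}| <= gamma * ||g_k||^2,
    which gives g_k . d_k <= -(1 - gamma) * ||g_k||^2 < 0 for 0 < gamma < 1:
    every search direction is a descent direction.  (Assumed damping rule,
    not necessarily the paper's exact choice of beta_k.)
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first iteration: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:          # gradient small enough: stop
            break
        # Step size satisfying the Wolfe conditions (SciPy's implementation).
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                     # line search failed: restart
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:
                break
        x = x + alpha * d
        g = grad(x)
        # Damped memory coefficient: bounds beta * |g . d| by gamma * ||g||^2
        # and the memory term ||beta * d|| by gamma * ||g||.
        beta = gamma * np.linalg.norm(g) / max(np.linalg.norm(d), 1e-12)
        d = -g + beta * d
    return x

# Usage example on the 2-D Rosenbrock function (minimizer at [1, 1]).
x_star = memory_gradient(rosen, rosen_der, np.zeros(2))

The damping keeps the memory term from overwhelming the steepest descent component; combined with the Wolfe conditions on the step size, this descent property is the kind of ingredient the paper uses to establish global convergence for its own choice of beta_k.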


References

  1. L. Adams and J.L. Nazareth (eds.), Linear and Nonlinear Conjugate Gradient-Related Methods, SIAM, 1996.

  2. M. Al-Baali, "Descent property and global convergence of the Fletcher-Reeves method with inexact line search," IMA Journal of Numerical Analysis, vol. 5, pp. 121–124, 1985.

  3. E.E. Cragg and A.V. Levy, “Study on a supermemory gradient method for the minimization of functions,” Journal of Optimization Theory and Applications, vol. 4, pp. 191–205, 1969.

  4. J.C. Gilbert and J. Nocedal, “Global convergence properties of conjugate gradient methods for optimization,” SIAM Journal on Optimization, vol. 2, pp. 21–42, 1992.

  5. L. Grippo, F. Lampariello, and S. Lucidi, “A truncated Newton method with nonmonotone line search for unconstrained optimization,” Journal of Optimization Theory and Applications, vol. 60, pp. 401–419, 1989.

  6. A. Miele and J.W. Cantrell, “Study on a memory gradient method for the minimization of functions,” Journal of Optimization Theory and Applications, vol. 3, pp. 459–470, 1969.

  7. J.J. Moré, B.S. Garbow, and K.E. Hillstrom, "Testing unconstrained optimization software," ACM Transactions on Mathematical Software, vol. 7, pp. 17–41, 1981.

  8. J.J. Moré and D.J. Thuente, "Line search algorithms with guaranteed sufficient decrease," ACM Transactions on Mathematical Software, vol. 20, pp. 286–307, 1994.

  9. L. Nazareth, “A conjugate direction algorithm without line searches,” Journal of Optimization Theory and Applications, vol. 23, pp. 373–387, 1977.

  10. J. Nocedal, "Updating quasi-Newton matrices with limited storage," Mathematics of Computation, vol. 35, pp. 773–782, 1980.

  11. J. Nocedal, http://www.ece.northwestern.edu/nocedal/software.html.

  12. J. Nocedal and S.J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer-Verlag, New York, 1999.

  13. J.Z. Zhang, N.Y. Deng, and L.H. Chen, “New quasi-Newton equation and related methods for unconstrained optimization,” Journal of Optimization Theory and Applications, vol. 102, pp. 147–167, 1999.

  14. J. Zhang and C. Xu, “Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations,” Journal of Computational and Applied Mathematics, vol. 137, pp. 269–278, 2001.

  15. G. Zoutendijk, "Nonlinear programming, computational methods," in Integer and Nonlinear Programming, J. Abadie (ed.), North-Holland, Amsterdam, pp. 37–86, 1970.

Author information

Correspondence to Yasushi Narushima.


Cite this article

Narushima, Y., Yabe, H. Global Convergence of a Memory Gradient Method for Unconstrained Optimization. Comput Optim Applic 35, 325–346 (2006). https://doi.org/10.1007/s10589-006-8719-z
