
A new supermemory gradient method for unconstrained optimization problems

  • Original Paper
  • Optimization Letters

Abstract

This paper presents a new supermemory gradient method for unconstrained optimization problems. It can be regarded as a combination of ODE-based methods, line search, and subspace techniques. The main characteristic of the method is that, at each iteration, a lower-dimensional system of linear equations is solved only once to obtain a trial step, thus avoiding the solution of a quadratic trust-region subproblem. Another is that, when a trial step is not accepted, the method generates an iterate whose step length satisfies the Armijo line search rule, thus avoiding re-solving the linear system of equations. Under reasonable assumptions, the method is proven to be globally convergent. Numerical results show the efficiency of the proposed method in practical computation.
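
The abstract gives the mechanics in prose only, so the following is a minimal runnable sketch of the kind of scheme it describes, not the authors' exact algorithm. Every concrete choice here is an assumption: the subspace is taken as the span of the current negative gradient and the last m accepted steps, the "lower-dimensional system of linear equations" is read as a projected-Newton system with a finite-difference projected Hessian, and acceptance is a sufficient-decrease test with Armijo backtracking as the fallback.

```python
import numpy as np

def supermemory_sketch(f, grad, x0, m=3, eta=1e-4, rho=0.5, tol=1e-6, max_iter=2000):
    """Illustrative supermemory gradient iteration with an Armijo fallback.

    All algorithmic details below are assumptions for illustration; they
    are not taken from the paper itself.
    """
    x = np.asarray(x0, dtype=float)
    steps = []                                   # memory of the last m accepted steps
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Low-dimensional subspace spanned by -g and the stored steps.
        V = np.column_stack([-g] + steps)
        # Projected approximate Hessian B ~ V^T H V: one forward-difference
        # gradient evaluation per subspace column.
        h = 1e-6
        B = np.column_stack([V.T @ (grad(x + h * v) - g) / h for v in V.T])
        B = 0.5 * (B + B.T)                      # symmetrize the estimate
        # The low-dimensional linear system: combination coefficients c of
        # the trial step d = V c, solved exactly once per iteration.
        k = B.shape[0]
        try:
            d = V @ np.linalg.solve(B + 1e-8 * np.eye(k), -V.T @ g)
        except np.linalg.LinAlgError:
            d = -g
        if g @ d >= 0:                           # safeguard: keep d a descent direction
            d = -g
        # Accept the full trial step if it gives sufficient decrease;
        # otherwise backtrack with the Armijo rule (no second solve).
        fx, gTd, t = f(x), g @ d, 1.0
        for _ in range(50):
            if f(x + t * d) <= fx + eta * t * gTd:
                break
            t *= rho
        s = t * d
        x = x + s
        steps = ([s] + steps)[:m]                # update the supermemory
    return x

# Hypothetical usage on the Rosenbrock test function (not from the paper).
f = lambda x: (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([
    -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
    200.0 * (x[1] - x[0] ** 2),
])
print(supermemory_sketch(f, grad, np.array([-1.2, 1.0])))  # ~ [1. 1.]
```

The design point the abstract emphasizes survives in the sketch: the small k-by-k system (k <= m+1) is solved once per iteration, and a rejected full trial step triggers only cheap backtracking along d, never a second linear solve.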



Author information

Corresponding author

Correspondence to Yi-gui Ou.

About this article

Cite this article

Ou, Yg., Wang, Gs. A new supermemory gradient method for unconstrained optimization problems. Optim Lett 6, 975–992 (2012). https://doi.org/10.1007/s11590-011-0328-9

