
Study on a supermemory gradient method for the minimization of functions

Contributed Papers, Journal of Optimization Theory and Applications

Abstract

A new accelerated gradient method for finding the minimum of a function f(x) whose variables are unconstrained is investigated. The new algorithm can be stated as follows:

$$\tilde{x} = x + \delta x, \qquad \delta x = -\alpha g(x) + \sum_{i=1}^{k} \beta_i \,\delta x_i$$

where x is an n-vector, g(x) is the gradient of the function f(x), δx is the change in the position vector for the iteration under consideration, and δx_i is the change in the position vector for the ith previous iteration. The quantities α and β_i are scalars chosen at each step so as to yield the greatest decrease in the function; the scalar k denotes the number of past iterations remembered.

For k = 1, the algorithm reduces to the memory gradient method of Ref. 2; it contains at most two undetermined multipliers to be optimized by a two-dimensional search. For k = n − 1, the algorithm contains at most n undetermined multipliers to be optimized by an n-dimensional search.
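To make the iteration concrete, the following Python sketch implements the update stated above. It is not the authors' implementation: the names (supermemory_gradient, f, grad, k) are illustrative, and the inner search over α and β_1, ..., β_k, which the paper performs as an exact multidimensional minimization, is delegated here to SciPy's general-purpose Nelder-Mead minimizer as a stand-in.

```python
import numpy as np
from scipy.optimize import minimize


def supermemory_gradient(f, grad, x0, k=2, max_iter=200, tol=1e-8):
    """Minimize f from x0, remembering the k most recent position changes.

    Each step takes the form
        dx = -alpha * g(x) + sum_i beta_i * dx_i,
    with the k+1 scalars (alpha, beta_1, ..., beta_k) chosen by an inner
    minimization of f over the combined directions.
    """
    x = np.asarray(x0, dtype=float)
    history = []  # the k most recent position changes, dx_1 first

    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break

        # Directions spanned by the step: -g plus the remembered dx_i.
        dirs = [-g] + history

        def phi(c):
            # f evaluated at x + c[0]*(-g) + sum_j c[j]*dx_j.
            step = sum(ci * d for ci, d in zip(c, dirs))
            return f(x + step)

        # Inner (k+1)-dimensional search; Nelder-Mead is a stand-in for
        # the exact search used in the paper.
        c0 = np.zeros(len(dirs))
        c0[0] = 1e-3  # small initial steplength along -g
        res = minimize(phi, c0, method="Nelder-Mead")

        dx = sum(ci * d for ci, d in zip(res.x, dirs))
        x = x + dx

        history.insert(0, dx)   # dx_1 is the most recent change
        history = history[:k]   # forget steps older than k iterations

    return x
```

With k = 1 the loop carries a single remembered step and reduces to a memory gradient iteration; setting k = n − 1 recovers the full n-dimensional inner search described above.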

Two nonquadratic test problems are considered. For both problems, the memory gradient method and the supermemory gradient method are compared with the Fletcher-Reeves method and the Fletcher-Powell-Davidon method. A comparison with quasilinearization is also presented.
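The paper's two test problems are not reproduced on this page; as an illustrative stand-in, the sketch above can be exercised on a standard nonquadratic function such as Rosenbrock's:

```python
# Illustrative only: not one of the paper's test problems.
def rosen(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2


def rosen_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])


x_star = supermemory_gradient(rosen, rosen_grad, x0=[-1.2, 1.0], k=2)
print(x_star)  # should approach the minimizer (1, 1)
```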


References

  1. Cragg, E. E., and Levy, A. V., Gradient Methods in Mathematical Programming, Part 3, Supermemory Gradient Method, Rice University, Aero-Astronautics Report No. 58, 1969.

  2. Miele, A., and Cantrell, J. W., Study on a Memory Gradient Method for the Minimization of Functions, Journal of Optimization Theory and Applications, Vol. 3, No. 6, 1969.

  3. Fletcher, R., and Reeves, C. M., Function Minimization by Conjugate Gradients, Computer Journal, Vol. 7, No. 2, 1964.

  4. Myers, G. E., Properties of the Conjugate Gradient and Davidon Methods, Journal of Optimization Theory and Applications, Vol. 2, No. 4, 1968.

  5. Pearson, J. D., On Variable Metric Methods of Minimization, Research Analysis Corporation, Technical Paper No. RAC-TP-302, 1968.

  6. Miele, A., Huang, H. Y., and Cantrell, J. W., Gradient Methods in Mathematical Programming, Part 1, Review of Previous Techniques, Rice University, Aero-Astronautics Report No. 55, 1969.

Additional information

Communicated by A. Miele

This research, supported by the Office of Scientific Research, Office of Aerospace Research, United States Air Force, Grant No. AF-AFOSR-828-67, is a condensation of the investigation described in Ref. 1. The authors are indebted to Professor Angelo Miele for stimulating discussions.


Cite this article

Cragg, E.E., Levy, A.V. Study on a supermemory gradient method for the minimization of functions. J Optim Theory Appl 4, 191–205 (1969). https://doi.org/10.1007/BF00930579

