Abstract
The gradient method for the symmetric positive definite linear system \(Ax=b\) is as follows:
\[x_{k+1} = x_{k} - \alpha_{k} g_{k},\]
where \(g_{k}=Ax_{k}-b\) is the residual of the system at \(x_{k}\) and \(\alpha_{k}\) is the stepsize. The stepsize \(\alpha_{k} = \frac{2}{\lambda_{1}+\lambda_{n}}\) is optimal in the sense that it minimizes the modulus \(\|I - \alpha A\|_{2}\), where \(\lambda_{1}\) and \(\lambda_{n}\) are the minimal and maximal eigenvalues of \(A\), respectively. Since \(\lambda_{1}\) and \(\lambda_{n}\) are unknown to users, the gradient method with the optimal stepsize is usually only of theoretical interest. In this paper, we propose a new stepsize formula that tends to the optimal stepsize as \(k \to \infty\). At the same time, the minimal and maximal eigenvalues \(\lambda_{1}\) and \(\lambda_{n}\) of \(A\), together with their corresponding eigenvectors, can be obtained.
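To illustrate the classical setting the abstract starts from (not the paper's new adaptive formula), the following sketch runs the fixed-stepsize gradient iteration \(x_{k+1} = x_{k} - \alpha g_{k}\) with \(\alpha = 2/(\lambda_{1}+\lambda_{n})\) on a small SPD system whose eigenvalues are known; the matrix and right-hand side are illustrative choices, not from the paper.

```python
import numpy as np

# Gradient iteration with the fixed "optimal" stepsize
# alpha = 2 / (lambda_min + lambda_max), which minimizes ||I - alpha*A||_2
# for a symmetric positive definite A.

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])          # SPD with eigenvalues 1 and 3
b = np.array([3.0, 2.0])            # exact solution is x* = (1, 2)

lam = np.linalg.eigvalsh(A)         # ascending eigenvalues
alpha = 2.0 / (lam[0] + lam[-1])    # = 0.5 for this A

x = np.zeros(2)
for k in range(60):
    g = A @ x - b                   # residual g_k = A x_k - b
    x = x - alpha * g               # fixed-stepsize gradient step

print(x)                            # converges to [1., 2.]
```

With this stepsize the error contracts by the factor \(\|I - \alpha A\|_{2} = (\lambda_{n}-\lambda_{1})/(\lambda_{n}+\lambda_{1})\) per step (here \(0.5\)); the paper's contribution is a stepsize that approaches this value without \(\lambda_{1}\) and \(\lambda_{n}\) being known in advance.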
Additional information
This research was initiated while the first author was visiting The Hong Kong Polytechnic University.
This author was supported by Chinese NSF grants (Nos. 40233029 and 101071104) and an innovation fund of the Chinese Academy of Sciences.
This author was supported by a grant from the Research Committee of the Hong Kong Polytechnic University (A-PC36).
Cite this article
Dai, Y.H., Yang, X.Q. A New Gradient Method with an Optimal Stepsize Property. Comput Optim Applic 33, 73–88 (2006). https://doi.org/10.1007/s10589-005-5959-2