Published in: Calcolo 2/2020

01-06-2020

New conjugate gradient algorithms based on self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno method

Author: Neculai Andrei

Abstract

Three new procedures for computing the scaling parameter in the self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno (BFGS) search direction with a parameter are presented. The first two are based on clustering the eigenvalues of the self-scaling memoryless BFGS iteration matrix, using either the determinant or the trace of this matrix. The third is based on minimizing the measure function of Byrd and Nocedal (SIAM J Numer Anal 26:727–739, 1989). For all three algorithms the sufficient descent condition is established. The stepsize is computed with the standard Wolfe line search, under which the global convergence of these algorithms is also established. On 80 unconstrained optimization test problems with different structures and complexities, the self-scaling memoryless algorithms based on the determinant, on the trace, or on minimizing the measure function perform better than CG_DESCENT (version 1.4) with Wolfe line search (Hager and Zhang in SIAM J Optim 16:170–192, 2005), the self-scaling memoryless BFGS algorithms with the scaling parameters proposed by Oren and Spedicato (Math Program 10:70–90, 1976) and by Oren and Luenberger (Manag Sci 20:845–862, 1974), L-BFGS (Liu and Nocedal in Math Program 45:503–528, 1989), and the standard BFGS. Among the three, the algorithm based on minimizing the measure function of Byrd and Nocedal is the slightly better performer, ahead of those based on the determinant or the trace of the iteration matrix.
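To make the construction behind these algorithms concrete, the following is a minimal sketch, assuming NumPy, of the self-scaling memoryless BFGS search direction in its standard Perry–Shanno form. The determinant-based, trace-based, and measure-function-based choices of the scaling parameter proposed in the paper are not reproduced here; the two scalings shown are the classical Oren–Spedicato and Oren–Luenberger parameters used as baselines in the experiments, and the function name `ssml_bfgs_direction` is an illustrative choice. For reference, the measure function of Byrd and Nocedal is φ(A) = tr(A) − ln det(A), which is minimized when all eigenvalues of A equal 1; this is what motivates clustering the eigenvalues of the iteration matrix.

```python
# Sketch of the self-scaling memoryless BFGS (Perry-Shanno) search direction.
# Illustrative reconstruction, not the paper's code: the determinant-, trace-,
# and measure-function-based scalings proposed in the paper are replaced by
# the classical Oren-Spedicato and Oren-Luenberger baseline parameters.
import numpy as np

def ssml_bfgs_direction(g, s, y, scaling="oren_spedicato"):
    """Return d = -H g, where H is the BFGS update of tau*I built from the
    latest step s = x_{k+1} - x_k and gradient change y = g_{k+1} - g_k."""
    ys = float(y @ s)
    if ys <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return -g                       # curvature condition fails: restart with steepest descent
    if scaling == "oren_spedicato":     # tau = y's / y'y  (Oren & Spedicato, 1976)
        tau = ys / float(y @ y)
    elif scaling == "oren_luenberger":  # tau = s's / y's  (Oren & Luenberger, 1974)
        tau = float(s @ s) / ys
    else:
        raise ValueError(f"unknown scaling: {scaling}")
    # H = tau*I - tau*(s y' + y s')/ys + (1 + tau*(y'y)/ys) * s s'/ys,
    # so d = -H g expands to the three-term direction below.
    sg, yg, yy = float(s @ g), float(y @ g), float(y @ y)
    return (-tau * g
            + (tau * yg / ys - (1.0 + tau * yy / ys) * sg / ys) * s
            + (tau * sg / ys) * y)

# Quick check of descent on random data: g'd must be negative, since H is
# positive definite whenever y's > 0 and tau > 0.
rng = np.random.default_rng(0)
g, s, y = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
if y @ s < 0:
    y = -y                              # force positive curvature for the demo
d = ssml_bfgs_direction(g, s, y)
print("g'd =", g @ d)
```

With the stepsize chosen by a standard Wolfe line search, i.e. f(x_k + α_k d_k) ≤ f(x_k) + ρ α_k g_k^T d_k and ∇f(x_k + α_k d_k)^T d_k ≥ σ g_k^T d_k with 0 < ρ < σ < 1, directions of this form are the setting in which the paper establishes the sufficient descent condition and global convergence.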
Literature
1. Al-Baali, M.: Numerical experience with a class of self-scaling quasi-Newton algorithms. J. Optim. Theory Appl. 96(3), 533–553 (1998)
2. Andrei, N.: Critica Raţiunii Algoritmilor de Optimizare fără Restricţii [Critique of the Reasoning of Unconstrained Optimization Algorithms]. Editura Academiei Române, Bucureşti (2009)
3. Andrei, N.: Acceleration of conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 213(2), 361–369 (2009)
5. Andrei, N.: An adaptive conjugate gradient algorithm for large-scale unconstrained optimization. J. Comput. Appl. Math. 292, 83–91 (2016)
6. Andrei, N.: Eigenvalues versus singular values study in conjugate gradient algorithms for large-scale unconstrained optimization. Optim. Methods Softw. 32(3), 534–551 (2017)
7. Andrei, N.: UOP—a collection of 80 unconstrained optimization test problems. Technical Report No. 7/2018, Research Institute for Informatics, Bucharest, Romania, November 17 (2018)
8. Axelsson, O., Lindskog, G.: On the rate of convergence of the preconditioned conjugate gradient method. Numer. Math. 48, 499–523 (1986)
9. Babaie-Kafaki, S.: On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae. J. Optim. Theory Appl. 167(1), 91–101 (2015)
11. Birgin, E.G., Martínez, J.M.: A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 43(2), 117–128 (2001)
12. Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995)
13. Broyden, C.G.: The convergence of a class of double-rank minimization algorithms. I. General considerations. J. Inst. Math. Appl. 6, 76–90 (1970)
14. Byrd, R., Nocedal, J.: A tool for the analysis of quasi-Newton methods with application to unconstrained minimization. SIAM J. Numer. Anal. 26, 727–739 (1989)
15. Dai, Y.H.: Convergence analysis of nonlinear conjugate gradient methods. In: Wang, Y., Yagola, A.G., Yang, C. (eds.) Optimization and Regularization for Computational Inverse Problems and Applications, pp. 157–181. Higher Education Press, Beijing, and Springer, Berlin (2010)
16. Dai, Y.H., Kou, C.X.: A nonlinear conjugate gradient algorithm with an optimal property and an improved Wolfe line search. SIAM J. Optim. 23(1), 296–320 (2013)
17. Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)
20. Fletcher, R.: A new approach to variable metric algorithms. Comput. J. 13, 317–322 (1970)
22. Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2, 21–42 (1992)
23. Goldfarb, D.: A family of variable-metric methods derived by variational means. Math. Comput. 24, 23–26 (1970)
24. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)
25. Hager, W.W., Zhang, H.: Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Trans. Math. Softw. 32(1), 113–137 (2006)
27. Hestenes, M.R., Stiefel, E.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49, 409–436 (1952)
28. Kaporin, I.E.: New convergence results and preconditioning strategies for the conjugate gradient method. Numer. Linear Algebra Appl. 1, 179–210 (1994)
31. Luenberger, D.G.: Introduction to Linear and Nonlinear Programming, 2nd edn. Addison-Wesley, Reading (1984)
35. Oren, S.S.: Self-scaling variable metric (SSVM) algorithms. Part II: Implementation and experiments. Manag. Sci. 20(5), 863–874 (1974)
36. Oren, S.S., Luenberger, D.G.: Self-scaling variable metric (SSVM) algorithms. Part I: Criteria and sufficient conditions for scaling a class of algorithms. Manag. Sci. 20, 845–862 (1974)
37. Perry, A.: A class of conjugate gradient algorithms with a two-step variable metric memory. Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University, Evanston, IL (1977)
38. Powell, M.J.D.: Nonconvex minimization calculations and the conjugate gradient method. In: Griffiths, D.F. (ed.) Numerical Analysis (Dundee, 1983). Lecture Notes in Mathematics, vol. 1066, pp. 122–141. Springer, Berlin (1984)
44. Zoutendijk, G.: Nonlinear programming, computational methods. In: Abadie, J. (ed.) Integer and Nonlinear Programming, pp. 38–86. North-Holland, Amsterdam (1970)
Metadata
Title
New conjugate gradient algorithms based on self-scaling memoryless Broyden–Fletcher–Goldfarb–Shanno method
Author
Neculai Andrei
Publication date
01-06-2020
Publisher
Springer International Publishing
Published in
Calcolo / Issue 2/2020
Print ISSN: 0008-0624
Electronic ISSN: 1126-5434
DOI
https://doi.org/10.1007/s10092-020-00365-7
