
2020 | Original Paper | Book Chapter

A Hessian Free Neural Networks Training Algorithm with Curvature Scaled Adaptive Momentum

Authors: Flora Sakketou, Nicholas Ampazis

Published in: Learning and Intelligent Optimization

Publisher: Springer International Publishing


Abstract

In this paper we propose an algorithm for training neural network architectures, called the Hessian Free algorithm with Curvature Scaled Adaptive Momentum (HF-CSAM). The algorithm's weight update rule is similar to SGD with momentum, but with two main differences arising from the formulation of the training task as a constrained optimization problem: (i) the momentum term is scaled with curvature information (in the form of the Hessian); (ii) the coefficients for the learning rate and the scaled momentum term are adaptively determined. The implementation requires minimal additional computation compared to a classical SGD-with-momentum iteration, since the Hessian itself is never formed: the algorithm needs only a Hessian-vector product, which can be computed exactly and very efficiently within any modern computational graph framework such as, for example, TensorFlow. We report experiments with different neural network architectures trained on standard neural network benchmarks, which demonstrate the efficiency of the proposed method.
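To make the abstract's central point concrete, the sketch below shows how a Hessian-vector product can be obtained in TensorFlow without ever forming the Hessian, via nested gradient tapes (double backprop). This is a minimal illustration, not the authors' implementation: hf_csam_step only mirrors the structural form of the update described above, and the fixed coefficients c1 and c2 are hypothetical placeholders for the adaptively determined learning-rate and momentum coefficients derived in the paper, which are not reproduced here.

```python
import tensorflow as tf

def hessian_vector_product(loss_fn, w, v):
    """Exact H.v via double backprop: differentiate the scalar g.v w.r.t. w."""
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            loss = loss_fn(w)
        g = inner.gradient(loss, w)   # gradient of the loss w.r.t. w
        gv = tf.reduce_sum(g * v)     # scalar inner product g.v
    return outer.gradient(gv, w)      # d(g.v)/dw = H v, no Hessian formed

def hf_csam_step(loss_fn, w, prev_dw, c1=0.01, c2=0.9):
    """Structural sketch: gradient step plus a curvature-scaled momentum term.

    c1 and c2 are illustrative constants only; in HF-CSAM both coefficients
    are determined adaptively at every iteration.
    """
    with tf.GradientTape() as tape:
        loss = loss_fn(w)
    g = tape.gradient(loss, w)
    h_dw = hessian_vector_product(loss_fn, w, prev_dw)  # momentum scaled by H
    dw = -c1 * g + c2 * h_dw
    w.assign_add(dw)
    return dw

# Toy usage on a quadratic loss (H = 2I), so H.v is easy to verify by hand.
w = tf.Variable([1.0, -2.0])
loss_fn = lambda x: tf.reduce_sum(x ** 2)
dw = tf.zeros_like(w)
for _ in range(5):
    dw = hf_csam_step(loss_fn, w, dw)
```

Note that, as the abstract states, each step adds only one extra backward pass over a plain SGD-with-momentum iteration: the Hessian-vector product reuses the computational graph rather than materializing the (quadratically large) Hessian.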


Metadata
Title
A Hessian Free Neural Networks Training Algorithm with Curvature Scaled Adaptive Momentum
Authors
Flora Sakketou
Nicholas Ampazis
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-38629-0_8