
2016 | OriginalPaper | Chapter

A Conjugate Gradient-Based Efficient Algorithm for Training Single-Hidden-Layer Neural Networks

Authors : Xiaoling Gong, Jian Wang, Yanjiang Wang, Jacek M. Zurada

Published in: Neural Information Processing

Publisher: Springer International Publishing


Abstract

Extreme Learning Machine (ELM) is a learning algorithm for single-hidden-layer neural networks (SHLNNs). It trains dramatically faster than typical back-propagation (BP) neural networks based on gradient descent. However, it requires many more hidden neurons than BP networks to achieve comparable classification accuracy, which in turn increases test time, an important consideration in practice. A learning algorithm for SHLNNs called USA has been proposed, which updates the weights by a gradient method within the ELM framework. In this paper, we employ the conjugate gradient method to train SHLNNs on the MNIST digit recognition problem. Experiments demonstrate better generalization with fewer hidden neurons than both the standard ELM and USA.
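The idea of combining the ELM least-squares step with gradient-based refinement of the hidden weights can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it assumes a sigmoid hidden layer, re-solves the output weights by pseudoinverse (the ELM step) at every evaluation, and refines the input-to-hidden weights with Fletcher-Reeves conjugate gradient plus a simple backtracking line search. The function name `cg_train_shlnn` and all hyperparameters are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cg_train_shlnn(X, T, n_hidden=20, n_iters=30, seed=0):
    """Sketch: refine input-to-hidden weights W by Fletcher-Reeves
    conjugate gradient, re-solving output weights beta by least
    squares (the ELM step) at every loss evaluation."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    W = 0.5 * rng.standard_normal((X.shape[1], n_hidden))

    def loss_and_grad(W):
        H = sigmoid(X @ W)            # hidden-layer activations
        beta = np.linalg.pinv(H) @ T  # ELM least-squares output weights
        E = H @ beta - T              # residual
        loss = 0.5 * np.sum(E ** 2) / n
        # chain rule through the sigmoid: gradient of the MSE w.r.t. W
        G = X.T @ ((E @ beta.T) * H * (1.0 - H)) / n
        return loss, G, beta

    loss, g, beta = loss_and_grad(W)
    d = -g                            # initial search direction
    history = [loss]
    for _ in range(n_iters):
        if np.sum(g * d) >= 0:        # not a descent direction: restart
            d = -g
        t = 1.0                       # backtracking line search along d
        while t > 1e-10:
            new_loss, g_new, beta = loss_and_grad(W + t * d)
            if new_loss < loss:
                break
            t *= 0.5
        if new_loss >= loss:          # no progress along d: stop
            break
        W = W + t * d
        fr = np.sum(g_new * g_new) / np.sum(g * g)  # Fletcher-Reeves beta
        d = -g_new + fr * d
        g, loss = g_new, new_loss
        history.append(loss)
    return W, beta, history
```

A full implementation would use a Wolfe-condition line search and periodic restarts; the backtracking step here merely guarantees a monotone decrease of the training loss in this toy setting.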


Metadata
Title
A Conjugate Gradient-Based Efficient Algorithm for Training Single-Hidden-Layer Neural Networks
Authors
Xiaoling Gong
Jian Wang
Yanjiang Wang
Jacek M. Zurada
Copyright Year
2016
DOI
https://doi.org/10.1007/978-3-319-46681-1_56
