Published in: Neural Processing Letters 3/2020

23.10.2020

Convergence of Batch Gradient Method Based on the Entropy Error Function for Feedforward Neural Networks

Authors: Yan Xiong, Xin Tong

Abstract

The gradient method is widely used for feedforward neural network training. Most studies so far have focused on the square error function. In this paper, a novel entropy error function is proposed for feedforward neural network training. The weak and strong convergence of the gradient method based on the entropy error function with batch input training patterns is rigorously proved. Numerical examples are given at the end of the paper to verify the effectiveness and correctness of the method. Compared with the square error function, our method provides both faster learning and better generalization on the given test problems.
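
To make the training scheme the abstract describes concrete, below is a minimal sketch of the batch gradient method for a one-hidden-layer feedforward network with a cross-entropy-style entropy error, i.e. the form common in the neural network literature. The paper's exact entropy error function and network architecture may differ, and the names here (batch_gradient_train, entropy_error) are illustrative, not taken from the paper. The full batch of patterns enters each weight update, matching the "batch input training patterns" setting of the convergence analysis.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation used at both layers."""
    return 1.0 / (1.0 + np.exp(-z))

def entropy_error(y, t, eps=1e-12):
    """Cross-entropy-style entropy error; eps guards against log(0)."""
    return -np.sum(t * np.log(y + eps) + (1.0 - t) * np.log(1.0 - y + eps))

def batch_gradient_train(X, T, n_hidden=8, eta=0.5, epochs=2000, seed=0):
    """One gradient step per epoch over the full batch of patterns."""
    rng = np.random.default_rng(seed)
    V = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # input -> hidden weights
    w = rng.normal(scale=0.5, size=n_hidden)                # hidden -> output weights
    for _ in range(epochs):
        H = sigmoid(X @ V)        # hidden outputs, shape (N, n_hidden)
        y = sigmoid(H @ w)        # network outputs, shape (N,)
        # With a sigmoid output and the entropy error above, the output
        # "delta" reduces to (y - t): the sigmoid derivative cancels,
        # one reason entropy-type errors can learn faster than the
        # squared error, whose delta carries an extra y*(1-y) factor.
        delta_out = y - T
        grad_w = H.T @ delta_out / len(X)                  # mean gradient wrt w
        delta_h = np.outer(delta_out, w) * H * (1.0 - H)   # back-propagated deltas
        grad_V = X.T @ delta_h / len(X)                    # mean gradient wrt V
        w -= eta * grad_w
        V -= eta * grad_V
    return V, w

# Usage on XOR: targets in {0, 1}, full batch every step.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])
V, w = batch_gradient_train(X, T)
print(entropy_error(sigmoid(sigmoid(X @ V) @ w), T))  # should be small after training
```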

Metadata
Title
Convergence of Batch Gradient Method Based on the Entropy Error Function for Feedforward Neural Networks
Authors
Yan Xiong
Xin Tong
Publication date
23.10.2020
Publisher
Springer US
Published in
Neural Processing Letters / Issue 3/2020
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-020-10374-w
