Published in: Neural Processing Letters 2/2015

01.10.2015

Online Training for Open Faulty RBF Networks

Authors: Yi Xiao, Ruibin Feng, Chi Sing Leung, Pui Fai Sum


Abstract

Recently, a batch mode learning algorithm, namely optimal open weight fault regularization (OOWFR), was developed for handling the open fault situation. This batch mode algorithm is optimal in terms of the Kullback–Leibler divergence. However, its main disadvantage is that it requires storing the entire input–output history, so memory consumption becomes a problem when the number of training samples is large. In this paper, we present an online version of the OOWFR algorithm. We consider two learning rate schemes, a fixed learning rate and an adaptive learning rate, and present the convergence conditions for both cases. Simulation results show that the proposed online mode learning algorithm outperforms other online mode learning algorithms, and that its performance is close to that of the batch mode OOWFR algorithm.
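
The abstract does not reproduce the update rule itself, so the sketch below is only illustrative: it assumes the standard expected-error objective for an RBF network whose hidden nodes are independently open (stuck at zero) with fault rate p, and a decaying learning rate of the Robbins–Monro type for the adaptive case. All function and variable names are hypothetical; the exact OOWFR recursion and its convergence conditions are the ones derived in the paper.

```python
# A minimal illustrative sketch, NOT the paper's algorithm: online training of
# a Gaussian RBF network under a multi-node open (stuck-at-zero) fault model.
# The objective, learning-rate schedule, and all names here are assumptions.
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian basis function outputs phi_j(x) for a scalar input x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def open_fault_sgd_step(w, phi, y, p, lr):
    """One stochastic-gradient step on the expected squared error when each
    hidden node is independently open with probability p:

        E[(y - sum_j b_j w_j phi_j)^2],   b_j ~ Bernoulli(1 - p),

    which expands to (y - (1-p) phi^T w)^2 + p(1-p) sum_j w_j^2 phi_j^2,
    i.e. a data term plus a fault-induced, weight-decay-like regularizer.
    (The constant factor 2 in the gradient is absorbed into lr.)"""
    err = y - (1.0 - p) * (phi @ w)
    grad = -(1.0 - p) * err * phi + p * (1.0 - p) * (phi ** 2) * w
    return w - lr * grad

# Toy usage: fit a noisy sine mapping online with a decaying learning rate
# eta_t = eta0 / t, so that sum(eta_t) diverges while sum(eta_t^2) converges,
# the classical conditions quoted for adaptive-learning-rate convergence.
rng = np.random.default_rng(0)
centers = np.linspace(-1.0, 1.0, 15)   # fixed, pre-chosen RBF centers
w = np.zeros_like(centers)
p, eta0 = 0.05, 0.5                    # assumed fault rate and base step size
for t in range(1, 5001):
    x = rng.uniform(-1.0, 1.0)
    y = np.sin(np.pi * x) + 0.1 * rng.standard_normal()
    phi = rbf_features(x, centers, width=0.2)
    w = open_fault_sgd_step(w, phi, y, p, lr=eta0 / t)
```

Under this assumed objective, expanding the expectation turns the fault model into a data-dependent weight-decay term, which is the usual route by which open-fault tolerance and explicit regularization coincide.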


Metadata
Title
Online Training for Open Faulty RBF Networks
Authors
Yi Xiao
Ruibin Feng
Chi Sing Leung
Pui Fai Sum
Publication date
01.10.2015
Publisher
Springer US
Published in
Neural Processing Letters / Issue 2/2015
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-014-9363-8
