2015 | OriginalPaper | Chapter

An FPGA-Based Multiple-Weight-and-Neuron-Fault Tolerant Digital Multilayer Perceptron (Full Version)

Authors : Tadayoshi Horita, Itsuo Takanami, Masakazu Akiba, Mina Terauchi, Tsuneo Kanno

Published in: Transactions on Computational Science XXV

Publisher: Springer Berlin Heidelberg

Abstract

A method for implementing a digital multilayer perceptron (DMLP) in an FPGA is proposed, where the DMLP is tolerant to simultaneous weight and neuron faults. It has been shown in [1] that a multilayer perceptron (MLP) that has been successfully trained using the deep learning method is tolerant to multiple weight and neuron faults, where the weight faults lie between the hidden and output layers and the neuron faults lie in the hidden layer. Using this fact, the set of weights of the trained MLP is installed in an FPGA to cope with these faults. Further, neuron faults in the output layer are detected or corrected using a SECDED code. The process works as follows. A generator developed by us automatically outputs a VHDL source file that describes the perceptron, using the set of weight values of the MLP trained by the deep learning method. The resulting VHDL file is input to Altera's logic design software Quartus II and then implemented in an FPGA. The process is applied to realizing fault-tolerant DMLPs for character recognition as concrete examples. The fault-tolerant perceptrons and the corresponding non-redundant ones are then compared not only in terms of reliability and fault rate but also in terms of hardware size, computing speed, and power consumption. The data show that the fault rate of the fault-tolerant perceptron is significantly lower than that of the corresponding non-redundant one.
This paper is the full version of [2].
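As a rough illustration of the SECDED error handling mentioned in the abstract (a minimal sketch in software, not the authors' FPGA circuit), the following Python code encodes 4 data bits with an extended Hamming(8,4) code, corrects any single-bit error, and flags double-bit errors. The function names and the 4-bit word size are illustrative assumptions only.

```python
# Minimal SECDED sketch: extended Hamming(8,4) code.
# Single-bit errors are corrected; double-bit errors are detected.

def secded_encode(data_bits):
    """Encode 4 data bits into an 8-bit extended Hamming codeword."""
    d = list(data_bits)                           # d[0..3]
    p1 = d[0] ^ d[1] ^ d[3]                       # even parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                       # even parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                       # even parity over positions 4,5,6,7
    code = [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7
    overall = 0
    for b in code:
        overall ^= b                              # overall parity bit for double-error detection
    return code + [overall]

def secded_decode(code):
    """Return (data_bits, status) where status is 'ok', 'corrected', or 'double_error'."""
    c = list(code)
    syndrome = 0
    for pos in range(1, 8):                       # recompute Hamming syndrome
        if c[pos - 1]:
            syndrome ^= pos
    overall = 0
    for b in c:
        overall ^= b                              # recompute overall parity
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:                            # odd parity -> single error, correctable
        if syndrome != 0:
            c[syndrome - 1] ^= 1                  # flip the erroneous bit
        else:
            c[7] ^= 1                             # error was in the overall parity bit itself
        status = 'corrected'
    else:                                         # nonzero syndrome, even parity -> two errors
        return None, 'double_error'
    data = [c[2], c[4], c[5], c[6]]               # data bits sit at positions 3, 5, 6, 7
    return data, status

# Example: flip one bit of an encoded word and recover the original data.
word = secded_encode([1, 0, 1, 1])
word[4] ^= 1
print(secded_decode(word))                        # ([1, 0, 1, 1], 'corrected')
```

In the paper's setting, this kind of coding is applied in hardware to the output-layer neurons; the sketch above only shows the encode/correct/detect logic itself.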

Literature
1.
Horita, T., Takanami, I., Mori, M.: Learning algorithms which make multilayer neural networks multiple-weight-and-neuron-fault tolerant. IEICE Trans. Inf. Syst. E91-D(4), 1168–1175 (2008)
2.
Horita, T., Takanami, I.: An FPGA-based multiple-weight-and-neuron-fault tolerant digital multilayer perceptron. Neurocomputing 99, 570–574 (2013)
3.
Phatak, D.S., Koren, I.: Complete and partial fault tolerance of feedforward neural nets. IEEE Trans. Neural Netw. 6(2), 446–456 (1995)
4.
Fahlman, S.E., et al.: Neural nets learning algorithms and benchmarks database. Maintained at the Computer Science Department, Carnegie Mellon University
5.
Nijhuis, J., Hoefflinger, B., van Schaik, A., Spaanenburg, L.: Limits to the fault-tolerance of a feedforward neural network with learning. In: Proceedings of the International Symposium on Fault-Tolerant Computing (FTCS), pp. 228–235 (1990)
6.
Tan, Y., Nanya, T.: A fault-tolerant multi-layer neural network model and its properties. IEICE Trans. D-I J76-D-I(7), 380–389 (1993). (in Japanese)
7.
Murray, A.F., Edwards, P.J.: Synaptic weight noise during multilayer perceptron training: fault tolerance and training improvement. IEEE Trans. Neural Netw. 4(4), 722–725 (1993)
8.
Hammadi, N.C., Ohmameuda, T., Kaneko, K., Ito, H.: Dynamic constructive fault tolerant algorithm for feedforward neural networks. IEICE Trans. Inf. Syst. E81-D(1), 115–123 (1998)
9.
Takase, H., Kita, H., Hayashi, T.: Weight minimization approach for fault tolerant multi-layer neural networks. In: Proceedings of the International Joint Conference on Neural Networks, pp. 2656–2660 (2001)
10.
Kamiura, N., Taniguchi, Y., Hata, Y., Matsui, N.: A learning algorithm with activation function manipulation for fault tolerant neural networks. IEICE Trans. Inf. Syst. E84-D(7), 899–905 (2001)
11.
Clay, R.D., Séquin, C.H.: Fault tolerance training improves generalization and robustness. In: Proceedings of the International Joint Conference on Neural Networks, pp. I-769–I-774 (1992)
12.
Cavalieri, S., Mirabella, O.: A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks. Neural Netw. 12(1), 91–106 (1999)
13.
Demidenko, S., Piuri, V.: Concurrent diagnosis in digital implementations of neural networks. Neurocomputing 48(1–4), 879–903 (2002)
14.
Zhanga, Y., Guoa, L., Yua, H., Zhao, K.: Fault tolerant control based on stochastic distributions via MLP neural networks. Neurocomputing 70(4–6), 867–874 (2007)
15.
Ho, K., Leung, C.S., Sum, J.: Training RBF network to tolerate single node faults. Neurocomputing 74(6), 1046–1052 (2011)
16.
Maka, S.K., Sum, P.F., Leung, C.S.: Regularizers for fault tolerant multilayer feedforward networks. Neurocomputing 74(11), 2028–2040 (2011)
17.
Takanami, I., Sato, M., Yang, Y.P.: A fault-value injection approach for multiple-weight-fault tolerance of MNNs. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN), p. III-515 (2000)
18.
Takanami, I., Oyama, Y.: A novel learning algorithm which makes multilayer neural networks multiple-weight-fault tolerant. IEICE Trans. Inf. Syst. E86-D(12), 2536–2543 (2003)
19.
Massengill, L.W., Mundie, D.B.: An analog neural hardware implementation using charge-injection multipliers and neuron-specific gain control. IEEE Trans. Neural Netw. 3(3), 354–362 (1992)
20.
Frye, R.C., Rietman, E.A., Wong, C.C.: Back-propagation learning and nonidealities in analog neural network hardware. IEEE Trans. Neural Netw. 2(1), 110–117 (1991)
21.
Holt, J.L., Hwang, J.N.: Finite precision error analysis of neural network hardware implementations. IEEE Trans. Comput. 42(3), 281–290 (1993)
22.
Mauduit, N., Duranton, M., Gobert, J., Sirat, J.A.: Lneuro 1.0: a piece of hardware LEGO for building neural network systems. IEEE Trans. Neural Netw. 3(3), 414–422 (1992)
23.
Murakawa, M., Yoshizawa, S., et al.: The GRD chip: genetic reconfiguration of DSPs for neural network processing. IEEE Trans. Comput. 48, 628–639 (1999)
24.
Brown, B.D., Card, H.C.: Stochastic neural computation I: computational elements. IEEE Trans. Comput. 50(9), 891–905 (2001)
25.
Card, H.C., McNeal, D.K., McLeod, R.D.: Competitive learning algorithms and neurocomputer architecture. IEEE Trans. Comput. 47(8), 847–858 (1998)
26.
Ninomiya, H., Asai, H.: Neural networks for digital sequential circuits. IEICE Trans. Fundam. E77-A(12), 2112–2115 (1994)
27.
Aihara, K., Fujita, O., Uchimura, K.: A sparse memory access architecture for digital neural network LSIs. IEICE Trans. Electron. E80-C(7), 996–1002 (1997)
28.
Morishita, T., Tamura, Y., et al.: A digital neural network coprocessor with a dynamically reconfigurable pipeline architecture. IEICE Trans. Electron. E76-C(7), 1191–1196 (1993)
29.
Fujita, M., Kobayashi, Y., et al.: Development and fabrication of digital neural network WSIs. IEICE Trans. Electron. E76-C(7), 1182–1190 (1993)
30.
Bettola, S., Piuri, V.: High performance fault-tolerant digital neural networks. IEEE Trans. Comput. 47(3), 357–363 (1998)
31.
Sugawara, E., Fukushi, M., Horiguchi, S.: Self reconfigurable multi-layer neural networks with genetic algorithms. IEICE Trans. Inf. Syst. E87-D(8), 2021–2028 (2004)
Metadata
Title
An FPGA-Based Multiple-Weight-and-Neuron-Fault Tolerant Digital Multilayer Perceptron (Full Version)
Authors
Tadayoshi Horita
Itsuo Takanami
Masakazu Akiba
Mina Terauchi
Tsuneo Kanno
Copyright Year
2015
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-662-47074-9_9