2019 | Original Paper | Book Chapter

12. Recurrent Neural Networks

Authors: Ke-Lin Du, M. N. S. Swamy

Published in: Neural Networks and Statistical Learning

Publisher: Springer London


Abstract

Recurrent networks are neural networks with backward (feedback) connections. They are dynamical systems with temporal state representations, and they are used in many temporal-processing models and applications. This chapter deals with recurrent networks and their learning.
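
To make the notion of a temporal state representation concrete, the following is a minimal illustrative sketch (not code from the chapter) of an Elman-style recurrent network in NumPy. The hidden state h_t is updated from the previous state h_{t-1} and the current input x_t, so the network behaves as a discrete-time dynamical system; all layer sizes and weight values here are hypothetical.

    import numpy as np

    # Minimal illustrative sketch of an Elman-style recurrent network
    # (hypothetical sizes and random weights; not code from the chapter).
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 3, 5, 2

    W_xh = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input-to-hidden weights
    W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # recurrent (backward) connections
    W_hy = rng.normal(scale=0.5, size=(n_out, n_hidden))     # hidden-to-output weights
    b_h = np.zeros(n_hidden)
    b_y = np.zeros(n_out)

    def rnn_step(x_t, h_prev):
        # State update of the discrete-time dynamical system:
        # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h),  y_t = W_hy h_t + b_y
        h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)
        y_t = W_hy @ h_t + b_y
        return h_t, y_t

    # Drive the network with a short toy input sequence; the hidden state
    # carries temporal context from one step to the next.
    h = np.zeros(n_hidden)
    for x_t in rng.normal(size=(4, n_in)):
        h, y = rnn_step(x_t, h)
        print(y)

Training such a network is the subject of the chapter, using algorithms such as backpropagation through time and real-time recurrent learning.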

Metadata
Title
Recurrent Neural Networks
Authors
Ke-Lin Du
M. N. S. Swamy
Copyright Year
2019
Publisher
Springer London
DOI
https://doi.org/10.1007/978-1-4471-7452-3_12