2012 | OriginalPaper | Chapter
Forecasting with Recurrent Neural Networks: 12 Tricks
Authors: Hans-Georg Zimmermann, Christoph Tietz, Ralph Grothmann
Published in: Neural Networks: Tricks of the Trade
Publisher: Springer Berlin Heidelberg
Recurrent neural networks (RNNs) are typically regarded as relatively simple architectures that come along with complicated learning algorithms. This paper takes a different view: we start from the fact that RNNs can model any high-dimensional, nonlinear dynamical system. Rather than focusing on learning algorithms, we concentrate on the design of network architectures. Unfolding in time is a well-known example of this modeling philosophy. Here, a temporal algorithm is transferred into an architectural framework such that learning can be performed by an extension of standard error backpropagation.
We introduce 12 tricks that not only provide deeper insights into the functioning of RNNs but also improve the identification of the underlying dynamical systems from data.
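The unfolding-in-time idea mentioned in the abstract can be sketched concretely: the recurrent state equation is replicated over a finite number of time steps into a feed-forward graph with shared weights, so ordinary error backpropagation yields the gradients (backpropagation through time). The following is a minimal illustrative sketch, not code from the chapter; all dimensions, the tanh nonlinearity, and the quadratic loss are assumptions made for the example.

```python
import numpy as np

# Sketch of "unfolding in time" for the state-space model
#   s_t = tanh(A s_{t-1} + B x_t),  y_t = C s_t
# The recurrence is unrolled for T steps into a feed-forward graph
# with shared weights A, B, C, so standard backpropagation over the
# unfolded graph (BPTT) computes the gradients.
# Dimensions and random data below are illustrative assumptions.

rng = np.random.default_rng(0)
n_state, n_in, n_out, T = 4, 3, 2, 5
A = rng.normal(scale=0.3, size=(n_state, n_state))  # state transition
B = rng.normal(scale=0.3, size=(n_state, n_in))     # input weights
C = rng.normal(scale=0.3, size=(n_out, n_state))    # output weights
xs = rng.normal(size=(T, n_in))                     # input sequence
ys = rng.normal(size=(T, n_out))                    # target sequence

def forward(A, B, C, xs):
    """Run the unfolded network; return all states and outputs."""
    s = np.zeros(n_state)
    states, outputs = [s], []
    for x in xs:
        s = np.tanh(A @ s + B @ x)
        states.append(s)
        outputs.append(C @ s)
    return states, outputs

def loss(A, B, C, xs, ys):
    """Sum of squared output errors over the unfolded sequence."""
    _, outs = forward(A, B, C, xs)
    return 0.5 * sum(np.sum((o - y) ** 2) for o, y in zip(outs, ys))

def bptt(A, B, C, xs, ys):
    """Standard backpropagation applied to the unfolded graph.

    Because the T copies of the network share A, B, C, their
    gradient contributions are accumulated across time steps.
    """
    states, outs = forward(A, B, C, xs)
    dA, dB, dC = np.zeros_like(A), np.zeros_like(B), np.zeros_like(C)
    ds_next = np.zeros(n_state)  # gradient flowing back through the state
    for t in reversed(range(T)):
        e = outs[t] - ys[t]                      # output error at step t
        dC += np.outer(e, states[t + 1])
        ds = C.T @ e + ds_next                   # total gradient w.r.t. s_t
        dpre = ds * (1.0 - states[t + 1] ** 2)   # derivative of tanh
        dA += np.outer(dpre, states[t])
        dB += np.outer(dpre, xs[t])
        ds_next = A.T @ dpre                     # pass back to s_{t-1}
    return dA, dB, dC
```

A finite-difference check on any single weight confirms that the accumulated shared-weight gradients match the derivative of the loss, which is the point of the architectural view: no dedicated temporal learning algorithm is needed beyond backpropagation on the unfolded graph.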