2009 | Original Paper | Book Chapter
Lag-Dependent Regularization for MLPs Applied to Financial Time Series Forecasting Tasks
Author: Andrew Skabar
Published in: Computational Science – ICCS 2009
Publisher: Springer Berlin Heidelberg
The application of multilayer perceptrons to forecasting the future value of a time series from its past (or lagged) values usually requires careful selection of the number of lags to use as inputs, and this number must typically be determined empirically. This paper proposes a regularization technique under which the influence that a lag has in determining the forecast value decreases exponentially with the lag, consistent with the intuitive notion that recent values should have more influence than less recent values in predicting future values. In principle, this allows an unlimited number of lagged inputs to be used. Empirical results show that the regularization technique yields superior performance on out-of-sample data compared with approaches that use a fixed number of inputs without lag-dependent regularization.
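The abstract does not give the exact form of the penalty, but the idea of influence decaying exponentially with the lag can be sketched as a weight-decay term whose coefficient grows exponentially with the lag index. The sketch below is an illustrative assumption, not the paper's actual formulation: `lag_dependent_penalty` applies a coefficient `base_lambda * growth**k` to the squared input weights for lag `k` (lag 0 being the most recent value), so weights attached to older lags are driven toward zero more strongly during training.

```python
import numpy as np

def lag_dependent_penalty(W_in, base_lambda=0.01, growth=2.0):
    """Lag-dependent regularization penalty on an MLP's input weights.

    W_in has shape (n_lags, n_hidden); row k holds the weights fanning
    out from the input unit for lag k (lag 0 = most recent value).
    The penalty coefficient grows exponentially with k, so the trained
    influence of a lag decays roughly exponentially with its age.
    Hypothetical parameter names; the paper's exact scheme may differ.
    """
    n_lags = W_in.shape[0]
    coeffs = base_lambda * growth ** np.arange(n_lags)   # lambda * g^k per lag
    return float(np.sum(coeffs[:, None] * W_in ** 2))

def penalty_gradient(W_in, base_lambda=0.01, growth=2.0):
    """Gradient of the penalty w.r.t. W_in, added to the loss gradient
    during backpropagation."""
    coeffs = base_lambda * growth ** np.arange(W_in.shape[0])
    return 2.0 * coeffs[:, None] * W_in
```

Under this kind of penalty, adding further (older) lag inputs costs little: their regularization coefficients are so large that their weights are pushed toward zero unless the data strongly supports them, which is what makes a very large number of input dimensions feasible in principle.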