
Reducing error in neural network time series forecasting

Article · Published in Neural Computing & Applications

Abstract

Neural network time series forecasting error comprises two components: autocorrelation error, arising from an imperfect model, and random noise, inherent in the data. Both are addressed here: the first with a two-stage training, growth-network neuron, the autocorrelation error (ACE) neuron; the second as a post-processing noise-filtering problem. These techniques are applied to forecasting the sunspot time series, with a comparison of stochastic, BFGS and conjugate gradient solvers.
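The decomposition in the abstract can be made concrete with a small sketch: a lag-1 autocorrelation of the forecast residuals exposes the model-induced (autocorrelation) error component, while a centered moving average stands in for a post-processing noise filter. Both helper functions below are illustrative assumptions for exposition only, not the paper's ACE-neuron training scheme or its actual filtering method.

```python
import numpy as np

def lag1_autocorrelation(residuals):
    """Lag-1 autocorrelation of forecast residuals: a value far from
    zero indicates serial structure the model missed (the autocorrelation
    error component); near zero, the remaining error is mostly noise."""
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    return float(np.dot(r[:-1], r[1:]) / np.dot(r, r))

def moving_average_filter(series, window=3):
    """Centered moving-average smoothing: an illustrative stand-in for
    a post-processing noise-filtering stage (not the paper's filter)."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(series, dtype=float), kernel, mode="same")

# Alternating-sign residuals carry strong (negative) serial structure,
# so the lag-1 autocorrelation is close to -1.
structured = [1.0, -1.0] * 100
print(lag1_autocorrelation(structured) < -0.9)  # True

# The filter flattens an oscillating series toward its local mean.
smoothed = moving_average_filter([0.0, 3.0, 0.0, 3.0, 0.0], window=3)
print(smoothed[2])  # 2.0
```

In this framing, a residual autocorrelation near zero after training suggests that further model refinement (the first technique) has little left to gain, and only noise filtering (the second) can reduce the remaining error.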





Cite this article

Azoff, E.M. Reducing error in neural network time series forecasting. Neural Comput & Applic 1, 240–247 (1993). https://doi.org/10.1007/BF02098741

