Published in: Neural Computing and Applications 10/2019

17-03-2018 | Original Article

A novel double incremental learning algorithm for time series prediction

Authors: Jinhua Li, Qun Dai, Rui Ye


Abstract

Incremental SVM, an extension of the support vector machine (SVM), handles a wide range of classification and regression problems and, like the incremental learning paradigm, is well suited to streaming data. This makes both techniques natural candidates for time series prediction (TSP). In this paper, the incremental learning paradigm is combined with incremental SVM to establish a novel TSP algorithm, termed the double incremental learning (DIL) algorithm for this reason. In the DIL algorithm, incremental SVM serves as the base learner, while incremental learning is realized by combining the existing base models with those generated on newly arrived data. A novel weight update rule, applied to the sample weights at each iteration, is proposed, and a classical method is employed to integrate the base models. Benefiting from the advantages of both incremental SVM and incremental learning, DIL achieves desirable prediction performance for TSP. Experimental results on six benchmark TSP datasets verify that DIL compares favorably with other existing state-of-the-art algorithms.
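The abstract describes an ensemble scheme: SVM base learners trained on successive chunks of the stream, per-sample weights updated each iteration, and base models integrated by a classical combination method. The paper's actual weight update rule and integration method are not given here, so the following is only a speculative sketch of that general idea, substituting an AdaBoost.R2-style weight update and a log-weighted average; `dil_sketch` and all its internals are hypothetical names, not the authors' implementation.

```python
# Speculative sketch of the double-incremental ensemble idea: SVM base
# learners fitted chunk by chunk, AdaBoost.R2-style sample-weight and
# model-weight computation (a stand-in for the paper's own update rule),
# and prediction by weighted averaging of all base models.
import numpy as np
from sklearn.svm import SVR

def dil_sketch(chunks):
    """chunks: list of (X, y) array pairs arriving as a stream."""
    models, votes = [], []
    for X, y in chunks:
        # In a full implementation the updated weights would carry over
        # between rounds; this sketch restarts uniformly on each chunk.
        w = np.full(len(y), 1.0 / len(y))
        model = SVR(kernel="rbf").fit(X, y, sample_weight=w * len(y))
        err = np.abs(model.predict(X) - y)
        rel = err / (err.max() + 1e-12)        # relative loss in [0, 1]
        eps = np.sum(w * rel)                  # weighted error of this model
        beta = eps / (1.0 - eps + 1e-12)
        w *= beta ** (1.0 - rel)               # shrink weights of easy samples
        w /= w.sum()
        models.append(model)
        votes.append(np.log(1.0 / (beta + 1e-12)))  # accurate models vote more
    def predict(X_new):
        preds = np.array([m.predict(X_new) for m in models])
        coef = np.array(votes)[:, None]
        return (coef * preds).sum(axis=0) / coef.sum()
    return predict
```

The weighted-average combiner here is one plausible reading of "a classical method of integrating base models"; weighted median (as in AdaBoost.R2) would be an equally defensible choice.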


Metadata
Title
A novel double incremental learning algorithm for time series prediction
Authors
Jinhua Li
Qun Dai
Rui Ye
Publication date
17-03-2018
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 10/2019
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-018-3434-0
