Published in: Neural Computing and Applications 7/2019

14.10.2017 | Original Article

Nonparametric kernel smoother on topology learning neural networks for incremental and ensemble regression

Authors: Jianhua Xiao, Zhiyang Xiang, Dong Wang, Zhu Xiao

Abstract

Incremental learning improves the space efficiency of machine learning algorithms, and ensemble learning combines different algorithms into more accurate ones. Parameter selection for incremental methods is difficult because no retraining is allowed, and the combination of incremental and ensemble learning has not been fully explored. In this paper, we propose a parameter-free regression framework that combines incremental and ensemble learning. First, topology learning neural networks such as the growing neural gas (GNG) and the self-organizing incremental neural network (SOINN) are employed to handle nonlinearity. Then, the vector quantizations produced by GNG and SOINN are transformed into a feed-forward neural network by an improved Nadaraya–Watson estimator, and a maximum likelihood process is devised for adaptive selection of the estimator's parameters. Finally, a weighted training strategy enables the topology learning regressors to be combined by AdaBoost. Experiments are carried out on five UCI datasets, together with an application study of short-term traffic flow prediction. The results show that the proposed method is comparable to mainstream incremental and non-incremental regression methods, and performs better in short-term traffic flow prediction.
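The paper's improved estimator and its maximum likelihood parameter selection are not reproduced here, but the classical Nadaraya–Watson estimator it builds on can be sketched as follows. This is a minimal one-dimensional Gaussian-kernel version; the function name and bandwidth value are illustrative, not from the paper:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=1.0):
    """Classical Nadaraya-Watson kernel regression (Gaussian kernel).

    Each prediction is a kernel-weighted average of the training
    targets, with weights decaying in the distance to the query point.
    """
    # Pairwise squared distances between query and training points
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))  # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)      # locally weighted average

# Noiseless sine data: the prediction at pi/2 should be close to sin(pi/2) = 1
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)
y_hat = nadaraya_watson(x, y, np.array([np.pi / 2]), bandwidth=0.2)
```

In the paper's setting, the training points are replaced by the prototype nodes of GNG or SOINN, so the estimator smooths over the learned topology rather than over the raw data.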


Metadata
Title
Nonparametric kernel smoother on topology learning neural networks for incremental and ensemble regression
Authors
Jianhua Xiao
Zhiyang Xiang
Dong Wang
Zhu Xiao
Publication date
14.10.2017
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 7/2019
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-017-3218-y
