Published in: Artificial Intelligence Review 8/2020

19.04.2020

Selective ensemble of uncertain extreme learning machine for pattern classification with missing features

By: Shibo Jing, Yidan Wang, Liming Yang


Abstract

Ensemble learning is an effective technique for improving performance and stability over single classifiers. This work proposes a selective ensemble classification strategy for missing-data classification, in which an uncertain extreme learning machine with probability constraints serves as the individual (base) classifier. Three selective ensemble frameworks are then developed to optimize ensemble margin distributions and aggregate the individual classifiers. The first two are robust ensemble frameworks built on the proposed loss functions. The third is a sparse ensemble classification framework with zero-norm regularization that automatically selects the required individual classifiers. Moreover, majority voting is applied to produce the ensemble classifier for missing-data classification. We establish several important properties of the proposed loss functions, including robustness, convexity, and Fisher consistency. To verify the validity of the proposed methods on missing data, numerical experiments are conducted on benchmark datasets with missing feature values: the missing features are first imputed with the expectation-maximization (EM) algorithm, and the experiments are then run on the imputed datasets. Under different probability lower bounds on classification accuracy and different proportions of missing values, the results show that the proposed ensemble methods achieve better or comparable generalization than traditional methods for missing-value data classification.
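The overall pipeline the abstract describes — impute missing feature values, train several base classifiers, and combine their predictions by majority voting — can be sketched as follows. This is an illustrative approximation, not the paper's method: the base learner is a plain extreme learning machine rather than the uncertain ELM with probability constraints, simple column-mean imputation stands in for the EM step, and no selective (margin-optimizing or zero-norm) pruning of the ensemble is performed. The names `ELMClassifier`, `impute_mean`, and `majority_vote` are hypothetical.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine for binary labels in {-1, +1}:
    a random fixed hidden layer, with output weights fit by ridge-regularized
    least squares."""

    def __init__(self, n_hidden=50, reg=1e-2, seed=None):
        self.n_hidden = n_hidden
        self.reg = reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random projection followed by a tanh nonlinearity.
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        d = X.shape[1]
        # Hidden-layer weights are drawn once and never trained.
        self.W = self.rng.normal(size=(d, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Closed-form ridge solution for the output weights.
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return np.sign(self._hidden(X) @ self.beta)

def impute_mean(X):
    """Column-mean imputation, standing in for the EM-based imputation."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

def majority_vote(classifiers, X):
    """Combine +/-1 predictions of all base classifiers by majority vote."""
    votes = np.sum([clf.predict(X) for clf in classifiers], axis=0)
    return np.sign(votes)
```

Using an odd number of base classifiers avoids tied votes; the diversity needed for the ensemble to help comes from each ELM drawing its own random hidden layer.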

Metadata
Title
Selective ensemble of uncertain extreme learning machine for pattern classification with missing features
Authors
Shibo Jing
Yidan Wang
Liming Yang
Publication date
19.04.2020
Publisher
Springer Netherlands
Published in
Artificial Intelligence Review / Issue 8/2020
Print ISSN: 0269-2821
Electronic ISSN: 1573-7462
DOI
https://doi.org/10.1007/s10462-020-09836-3
