Published in: Neural Processing Letters 2/2022

05.11.2021

Classification Algorithm Using Branches Importance

Authors: Youness Manzali, Mohamed Chahhou, Mohammed El Mohajir


Abstract

Ensemble methods have attracted wide attention: rather than building a single classifier, they construct a set of classifiers and classify new data points by taking a weighted vote of their predictions. Random Forest is one of the most popular and powerful ensemble methods, but it suffers from drawbacks such as poor interpretability and high time consumption in the prediction phase. In this paper, we introduce a new branch-classification algorithm, 'BrClssf', that classifies observations using branches instead of trees; these branches are extracted from a set of randomized trees. The novelty of the proposed method is that it classifies instances according to each branch's importance, which is defined by several criteria. The algorithm avoids the drawbacks of ensemble methods while remaining efficient. BrClssf was compared against state-of-the-art algorithms on 15 databases from the UCI Repository and Kaggle, and the results show that it performs well.
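The general idea sketched in the abstract, extracting root-to-leaf branches from randomized trees and letting each matching branch vote with a weight given by its importance, can be illustrated as follows. This is a minimal, hypothetical sketch, not the paper's implementation: the importance criterion used here (support times purity on the training data) is an illustrative assumption, since the paper's actual criteria are not given in this excerpt.

```python
# Illustrative branch-based classification: extract branches from a small
# randomized forest, score each branch, and classify by importance-weighted vote.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def extract_branches(estimator, X, y):
    """Return branches as (conditions, predicted class, importance).
    Each condition is (feature index, threshold, goes_left)."""
    t = estimator.tree_
    branches = []
    def walk(node, conds):
        if t.children_left[node] == -1:  # leaf node
            mask = np.ones(len(X), dtype=bool)
            for feat, thr, is_left in conds:
                mask &= (X[:, feat] <= thr) if is_left else (X[:, feat] > thr)
            if mask.sum() == 0:
                return
            label = int(np.argmax(np.bincount(y[mask])))
            purity = (y[mask] == label).mean()   # fraction of majority class
            support = mask.mean()                # fraction of data reaching leaf
            branches.append((conds, label, support * purity))
            return
        feat, thr = t.feature[node], t.threshold[node]
        walk(t.children_left[node], conds + [(feat, thr, True)])
        walk(t.children_right[node], conds + [(feat, thr, False)])
    walk(0, [])
    return branches

def predict(branches, x, n_classes):
    """Importance-weighted vote over all branches whose conditions x satisfies."""
    votes = np.zeros(n_classes)
    for conds, label, imp in branches:
        if all((x[f] <= t) if left else (x[f] > t) for f, t, left in conds):
            votes[label] += imp
    return int(np.argmax(votes))

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=5, max_depth=3, random_state=0).fit(X, y)
branches = [b for est in forest.estimators_ for b in extract_branches(est, X, y)]
preds = np.array([predict(branches, x, 3) for x in X])
acc = (preds == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Classifying with a flat list of scored branches avoids traversing every tree at prediction time, which is one way the drawbacks of ensemble methods mentioned above can be mitigated; the paper additionally ranks branches by its own importance criteria.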


Metadata
Title
Classification Algorithm Using Branches Importance
Authors
Youness Manzali
Mohamed Chahhou
Mohammed El Mohajir
Publication date
05.11.2021
Publisher
Springer US
Published in
Neural Processing Letters / Issue 2/2022
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-021-10664-x
