Published in: Neural Processing Letters 2/2022

05-11-2021

Classification Algorithm Using Branches Importance

Authors: Youness Manzali, Mohamed Chahhou, Mohammed El Mohajir

Abstract

Ensemble methods have attracted wide attention: rather than building a single classifier, they construct a set of classifiers and classify new data points by taking a weighted vote of their predictions. Random Forest is one of the most popular and powerful ensemble methods, but it suffers from drawbacks such as poor interpretability and a costly prediction phase. In this paper, we introduce a new branch-classification algorithm, 'BrClssf', which classifies observations using branches instead of trees; these branches are extracted from a set of randomized trees. The novelty of the proposed method is that it classifies instances according to branch importance, which is defined by several criteria. The algorithm avoids the drawbacks of ensemble methods while remaining efficient. BrClssf was compared with state-of-the-art algorithms, and results on 15 datasets from the UCI Repository and Kaggle show that it performs well.
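The abstract's idea — extract root-to-leaf branches from randomized trees, score each branch's importance, and classify by an importance-weighted vote of the branches matching an instance — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact importance criteria are not given in the abstract, so here importance is assumed to be branch accuracy weighted by coverage, and the branch representation (lists of threshold tests) is likewise a hypothetical choice.

```python
def branch_matches(branch, x):
    """A branch is a list of (feature_index, op, threshold) tests,
    i.e. the conjunction of split conditions along a root-to-leaf path."""
    return all(
        (x[f] <= t) if op == "<=" else (x[f] > t)
        for f, op, t in branch
    )

def branch_importance(branch, label, X, y):
    """Assumed importance criterion: accuracy on the samples the branch
    covers, weighted by the fraction of training samples covered."""
    covered = [i for i, x in enumerate(X) if branch_matches(branch, x)]
    if not covered:
        return 0.0
    correct = sum(1 for i in covered if y[i] == label)
    return (correct / len(covered)) * (len(covered) / len(X))

def classify(x, rules):
    """rules: list of (branch, label, importance) triples extracted from
    randomized trees; matching branches vote, weighted by importance."""
    votes = {}
    for branch, label, imp in rules:
        if branch_matches(branch, x):
            votes[label] = votes.get(label, 0.0) + imp
    return max(votes, key=votes.get) if votes else None
```

Because only the branches that match an instance are evaluated, prediction touches a handful of rules rather than every tree in a forest, which is the efficiency and interpretability argument the abstract makes.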


Metadata
Title
Classification Algorithm Using Branches Importance
Authors
Youness Manzali
Mohamed Chahhou
Mohammed El Mohajir
Publication date
05-11-2021
Publisher
Springer US
Published in
Neural Processing Letters / Issue 2/2022
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-021-10664-x
