Published in: Advances in Data Analysis and Classification 3/2020

02-03-2019 | Regular Article

Enhancing techniques for learning decision trees from imbalanced data

Authors: Ikram Chaabane, Radhouane Guermazi, Mohamed Hammami


Abstract

Several machine learning techniques assume that the classes under consideration contain approximately equal numbers of objects. In real-world applications, however, the class of interest is generally scarce. Most standard learning algorithms can still achieve high overall accuracy on imbalanced data, but accuracy on the minority class remains a real challenge. To deal with this issue, we introduce in this paper a novel adaptation of the decision tree algorithm to imbalanced data. We propose a new asymmetric entropy measure that shifts the point of maximum uncertainty to the a priori class distribution and uses it in the node-splitting process. Unlike most competing split criteria, which fix only the vector of maximum uncertainty, the proposed entropy has an adjustable concavity, so it can be tuned to better match the expectations of the system at hand. Experimental results across thirty-five data-sets with different class-imbalance ratios show significant improvements over various split criteria adapted to imbalanced situations. Furthermore, when combined with sampling strategies and ensemble-based methods, our entropy yields significant gains in minority-class prediction, along with good handling of the data difficulties related to the class imbalance problem.
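The full text is not reproduced on this page, but the splitting idea the abstract describes can be illustrated with a small sketch. The code below uses a classical asymmetric entropy in the style of Marcellin, Zighed and Ritschard: it is maximal when a node's class distribution equals a chosen reference distribution `w`, here set to the a priori class frequencies. This is an illustrative sketch only; the paper's own measure additionally has an adjustable concavity, which this formula does not reproduce, and the function names and two-class example are assumptions, not the authors' code.

```python
import numpy as np

def asymmetric_entropy(p, w):
    """Asymmetric entropy (Marcellin-style sketch, not the paper's exact
    measure): maximal when the class distribution p equals the reference
    distribution w, e.g. the a priori class frequencies."""
    p = np.asarray(p, dtype=float)
    w = np.asarray(w, dtype=float)
    return float(np.sum(p * (1.0 - p) / ((1.0 - 2.0 * w) * p + w ** 2)))

def split_gain(parent_p, children, w):
    """Entropy reduction achieved by a candidate split.

    children: list of (n_samples, class_distribution) pairs."""
    n_total = sum(n for n, _ in children)
    child_term = sum(n / n_total * asymmetric_entropy(p, w)
                     for n, p in children)
    return asymmetric_entropy(parent_p, w) - child_term

# Imbalanced problem: 90% majority, 10% minority.
# Reference distribution = a priori class frequencies.
w = np.array([0.9, 0.1])

# A node that matches the priors is maximally uncertain under this measure...
print(asymmetric_entropy([0.9, 0.1], w))   # ≈ 2.0
# ...while a balanced node, far from the priors, is considered less uncertain.
print(asymmetric_entropy([0.5, 0.5], w))   # ≈ 1.22
# A split that isolates a pure-majority child yields a positive gain.
print(split_gain([0.9, 0.1], [(80, [1.0, 0.0]), (20, [0.5, 0.5])], w))
```

With a symmetric entropy (Shannon or Gini) the same minority-matching node would look nearly pure and the tree would rarely split it further; re-centring the entropy's maximum on the priors makes such nodes look uncertain and worth splitting, which is the behaviour the abstract describes.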


Footnotes
1. The data-set Ids are in \(\{1,3,6{-}8,11{-}15,17,18,21{-}27,29,31{-}35\}.\)
2. The data-set Ids are in \(\{2,5,9,10,20\}.\)
3. The data-set Ids are in \(\{4,16,19,28\}.\)
4. The data-set Ids, in descending order of the percentage of borderline minority-class examples, are in \(\{31, 8, 33, 5, 32, 6, 35, 16, 28, 22, 1, 13, 25\}.\)
5. The data-set Ids, in descending order of the percentage of rare minority-class examples, are in \(\{29, 25, 22, 35, 33, 16, 5, 28, 6, 26\}.\)
6. The data-set Ids, in descending order of outlier presence, are in \(\{23, 34, 29, 22\}.\)
7. The data-set Ids are in \(\{27, 20, 19, 7, 3\}.\)
8. Referenced in Sect. 5.1.
 
Metadata
Title: Enhancing techniques for learning decision trees from imbalanced data
Authors: Ikram Chaabane, Radhouane Guermazi, Mohamed Hammami
Publication date: 02-03-2019
Publisher: Springer Berlin Heidelberg
Published in: Advances in Data Analysis and Classification / Issue 3/2020
Print ISSN: 1862-5347
Electronic ISSN: 1862-5355
DOI: https://doi.org/10.1007/s11634-019-00354-x
