Published in: Journal of Intelligent Information Systems 2/2013

01-04-2013

UniDis: a universal discretization technique

Authors: Yu Sang, Yingwei Jin, Keqiu Li, Heng Qi

Abstract

Discretization techniques have played an important role in machine learning and data mining, as most methods in these areas require that the training data set contain only discrete attributes. Data discretization unification (DDU), one of the state-of-the-art discretization techniques, trades off classification errors against the number of discretized intervals, and unifies existing discretization criteria. However, it suffers from two deficiencies. First, DDU is very inefficient: it searches over a large number of parameters to find good results, yet still does not guarantee an optimal solution. Second, DDU does not take into account the number of inconsistent records produced by discretization, which leads to unnecessary information loss. To overcome these deficiencies, this paper presents a Universal Discretization technique, namely UniDis. We first develop a non-parametric normalized discretization criterion which avoids the effect of the relatively large difference between classification errors and the number of discretized intervals on discretization results. In addition, we define a new entropy-based measure of inconsistency for multi-dimensional variables to effectively control information loss while producing a concise summarization of continuous variables. Finally, we propose a heuristic algorithm that guarantees better discretization based on the non-parametric normalized discretization criterion and the entropy-based inconsistency. Besides theoretical analysis, experimental results demonstrate that our approach is statistically comparable to DDU under a popular statistical test, and that it yields a discretization scheme which significantly improves classification accuracy over all other previously known discretization methods except DDU, as measured by running the J4.8 decision tree and Naive Bayes classifiers.
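The "inconsistent records" mentioned in the abstract are rows that become identical after discretization yet carry different class labels. The paper's exact entropy-based formulation is not reproduced in the abstract, so the sketch below is only an illustration of what such a measure can look like under one common definition: the size-weighted average of class-label entropy within each group of identical discretized records. All function and variable names here are hypothetical, not taken from the paper.

```python
from collections import Counter, defaultdict
import math

def entropy(counts):
    """Shannon entropy (in bits) of a class-label count distribution."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def inconsistency(records, labels):
    """Entropy-based inconsistency of a discretized data set.

    Groups records by their discretized attribute values; a group whose
    members disagree on the class label contributes its class entropy,
    weighted by group size. 0 means the discretization is perfectly
    consistent (no information about the class was lost).
    """
    groups = defaultdict(list)
    for rec, lab in zip(records, labels):
        groups[tuple(rec)].append(lab)
    n = len(labels)
    return sum(
        (len(labs) / n) * entropy(Counter(labs).values())
        for labs in groups.values()
    )

# Two of three records fall into the same discretized cell but disagree
# on the class, so 2/3 of the data sits in a 1-bit-entropy group:
records = [(0, 1), (0, 1), (1, 2)]
labels = ["a", "b", "a"]
print(inconsistency(records, labels))  # prints 0.6666666666666666
```

A discretization algorithm could use such a score as a penalty term: merging two intervals never decreases it, so it gives a natural brake against over-coarse discretizations.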


Appendix
Literature
Biba, M., Esposito, F., Ferilli, S., Mauro, N.D., Basile, T. (2007). Unsupervised discretization using kernel density estimation. In: Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI) (pp. 696–701).
Bondu, A., Boulle, M., Lemaire, V., Loiseau, S., Duval, B. (2008). A non-parametric semi-supervised discretization method. In: Proceedings of the Eighth IEEE International Conference on Data Mining (ICDM) (pp. 53–62).
Boulle, M. (2004). Khiops: a statistical discretization method of continuous attributes. Machine Learning, 55, 53–69.
Boulle, M. (2006). MODL: a Bayes optimal discretization method for continuous attributes. Machine Learning, 65, 131–165.
Ching, J.Y., Wong, A.K.C., Chan, K.C.C. (1995). Class-dependent discretization for inductive learning from continuous and mixed-mode data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(7), 641–651.
Cios, K.J., & Kurgan, L.A. (2007). CLIP4: hybrid inductive machine learning algorithm that generates inequality rules. Information Sciences, 177(17), 3592–3612.
Cover, T.M., & Thomas, J.A. (2006). Elements of information theory (2nd ed.). New York: Wiley.
Demsar, J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7, 1–30.
Dougherty, J., Kohavi, R., Sahami, M. (1995). Supervised and unsupervised discretization of continuous features. In: Proceedings of the Twelfth International Conference on Machine Learning (pp. 194–202).
Fayyad, U., & Irani, K. (1993). Multi-interval discretization of continuous-valued attributes for classification learning. In: Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (pp. 1022–1027). San Mateo, CA: Morgan Kaufmann.
Hand, D., Mannila, H., Smyth, P. (2001). Principles of data mining. MIT Press.
Hansen, M.H., & Yu, B. (2001). Model selection and the principle of minimum description length. Journal of the American Statistical Association, 96(545), 746–774.
Jin, R.M., Breitbart, Y., Muoh, C. (2007). Data discretization unification. In: Proceedings of the Seventh IEEE International Conference on Data Mining (ICDM) (pp. 183–192).
Jin, Y.W., & Qu, W.Y. (2009). Multi-dimension multi-objective fuzzy optimum dynamic programming method with complicated information based on a maximal-sum-rule of decision sequence priority. In: Eighth IEEE International Conference on Embedded Computing; IEEE International Conference on Scalable Computing and Communications (pp. 656–660). Dalian, China.
Kerber, R. (1992). ChiMerge: discretization of numeric attributes. In: Proceedings of the Ninth National Conference on Artificial Intelligence (pp. 123–128). AAAI Press.
Kurgan, L.A., & Cios, K.J. (2004). CAIM discretization algorithm. IEEE Transactions on Knowledge and Data Engineering, 16(2), 145–153.
Ling, C.X., & Zhang, H.J. (2002). The representational power of discrete Bayesian networks. Journal of Machine Learning Research, 3, 709–721.
Liu, L.L., Wong, A.K.C., Wang, Y. (2004). A global optimal algorithm for class-dependent discretization of continuous data. Intelligent Data Analysis, 8(2), 151–170.
Liu, H., Hussain, F., Tan, C.L., Dash, M. (2002). Discretization: an enabling technique. Journal of Data Mining and Knowledge Discovery, 6(4), 393–423.
Liu, H., & Setiono, R. (1997). Feature selection via discretization. IEEE Transactions on Knowledge and Data Engineering, 9(4), 642–645.
Mahady, H., Muhammad, A.C., Qu, W.Y., Lin, X.M. (2010). Efficient algorithms to monitor continuous constrained k nearest neighbor queries. In: Database Systems for Advanced Applications (pp. 233–249). Tsukuba, Japan.
Mussard, S., Seyte, F., Terraza, M. (2003). Decomposition of Gini and the generalized entropy inequality measures. Economic Bulletin, 4(7), 1–6.
Quinlan, J.R. (1986). Induction of decision trees. Machine Learning, 1, 81–106.
Quinlan, J.R. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann.
Roweis, S.T., & Saul, L.K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323–2326.
Schmidberger, G., & Frank, E. (2005). Unsupervised discretization using tree-based density estimation. In: Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) (pp. 240–251).
Su, C.T., & Hsu, J.H. (2005). An extended Chi2 algorithm for discretization of real value attributes. IEEE Transactions on Knowledge and Data Engineering, 17(3), 437–441.
Tay, E.H., & Shen, L. (2002). A modified Chi2 algorithm for discretization. IEEE Transactions on Knowledge and Data Engineering, 14(3), 666–670.
Tsai, C.J., Lee, C.I., Yang, W.P. (2008). A discretization algorithm based on class-attribute contingency coefficient. Information Sciences, 178, 714–731.
Wang, H.X., & Zaniolo, C. (2000). CMP: a fast decision tree classifier using multivariate predictions. In: 16th International Conference on Data Engineering (ICDE) (pp. 449–460).
Witten, I.H., & Frank, E. (2000). Data mining: Practical machine learning tools and techniques with Java implementations. San Francisco, CA: Morgan Kaufmann.
Zar, J.H. (1998). Biostatistical analysis (4th ed.). Englewood Cliffs, NJ: Prentice Hall.
Metadata
Title
UniDis: a universal discretization technique
Authors
Yu Sang
Yingwei Jin
Keqiu Li
Heng Qi
Publication date
01-04-2013
Publisher
Springer US
Published in
Journal of Intelligent Information Systems / Issue 2/2013
Print ISSN: 0925-9902
Electronic ISSN: 1573-7675
DOI
https://doi.org/10.1007/s10844-012-0228-1
