Published in: Progress in Artificial Intelligence 4/2017

17.05.2017 | Regular Paper

On the use of different base classifiers in multiclass problems

Authors: L. Morán-Fernández, V. Bolón-Canedo, A. Alonso-Betanzos

Abstract

Classification problems with more than two classes can be handled in different ways. The most common approach transforms the original multiclass problem into a series of binary subproblems that are solved individually. In this approach, should the same base classifier be used on all binary subproblems, or should each subproblem be tuned independently? To answer this question, we propose a method that selects a different base classifier for each subproblem of the one-versus-one strategy, making use of data complexity measures. The experimental results on 17 real-world datasets corroborate the adequacy of the method.
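To illustrate the idea, here is a minimal sketch (Python, assuming scikit-learn-style estimators) of one-versus-one decomposition in which each class pair gets its own base learner chosen from a small candidate pool. The pair_complexity proxy and the selection thresholds are hypothetical placeholders standing in for the data complexity measures used in the paper; they are not the authors' actual procedure.

```python
# Sketch: one-versus-one decomposition with a per-pair base classifier.
# The selection rule below is illustrative only.
from itertools import combinations
from collections import Counter

import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

CANDIDATES = {
    "naive_bayes": GaussianNB(),
    "tree": DecisionTreeClassifier(),
    "svm": SVC(),
}

def pair_complexity(X, y):
    """Hypothetical complexity proxy: a Fisher-discriminant-like ratio
    averaged over features; higher values suggest easier separation."""
    c0, c1 = np.unique(y)
    X0, X1 = X[y == c0], X[y == c1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return float(np.mean(num / den))

def choose_base(X, y):
    """Toy selection rule: easier pairs get a simpler model, harder pairs an SVM.
    Thresholds are illustrative, not taken from the paper."""
    score = pair_complexity(X, y)
    if score > 1.0:
        return clone(CANDIDATES["naive_bayes"])
    if score > 0.3:
        return clone(CANDIDATES["tree"])
    return clone(CANDIDATES["svm"])

def fit_ovo(X, y):
    """Train one binary classifier per class pair, each with its own base learner."""
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        clf = choose_base(X[mask], y[mask])
        models[(a, b)] = clf.fit(X[mask], y[mask])
    return models

def predict_ovo(models, X):
    """Aggregate the pairwise predictions by simple majority voting."""
    votes = np.array([m.predict(X) for m in models.values()])
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])
```

With k classes, fit_ovo trains k(k-1)/2 pairwise models, and each may end up with a different base learner depending on how complex its two-class subproblem appears.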

Metadata
Title
On the use of different base classifiers in multiclass problems
Authors
L. Morán-Fernández
V. Bolón-Canedo
A. Alonso-Betanzos
Publication date
17.05.2017
Publisher
Springer Berlin Heidelberg
Published in
Progress in Artificial Intelligence / Issue 4/2017
Print ISSN: 2192-6352
Electronic ISSN: 2192-6360
DOI
https://doi.org/10.1007/s13748-017-0126-4
