
Ensemble Learning


Synonyms

Classifier combination; Committee-based learning; Multiple classifier systems

Definition

Ensemble learning is a machine learning paradigm in which multiple learners are trained to solve the same problem. In contrast to ordinary machine learning approaches, which try to learn one hypothesis from the training data, ensemble methods construct a set of hypotheses and combine them for use.
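For illustration only (this sketch is not part of the original entry), the following Python snippet trains several heterogeneous base learners on the same problem and combines their hypotheses by majority voting. The synthetic dataset, choice of learners, and hyperparameters are assumptions made here; scikit-learn's VotingClassifier is used as one possible combiner.

  # Illustrative sketch: combine several hypotheses learned for the same task.
  from sklearn.datasets import make_classification
  from sklearn.model_selection import train_test_split
  from sklearn.linear_model import LogisticRegression
  from sklearn.tree import DecisionTreeClassifier
  from sklearn.neighbors import KNeighborsClassifier
  from sklearn.ensemble import VotingClassifier
  from sklearn.metrics import accuracy_score

  # Synthetic data; the dataset and all hyperparameters below are assumptions.
  X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
  X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

  base_learners = [
      ("lr", LogisticRegression(max_iter=1000)),
      ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
      ("knn", KNeighborsClassifier(n_neighbors=5)),
  ]

  # Each base learner yields one hypothesis; hard voting combines them.
  ensemble = VotingClassifier(estimators=base_learners, voting="hard")
  ensemble.fit(X_tr, y_tr)

  for name, clf in base_learners:
      clf.fit(X_tr, y_tr)
      print(name, accuracy_score(y_te, clf.predict(X_te)))
  print("ensemble", accuracy_score(y_te, ensemble.predict(X_te)))

On many problems the combined predictor matches or exceeds each individual learner, which is the practical motivation for the paradigm described above.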

Introduction

An ensemble contains a number of learners, which are usually called base learners. The generalization ability of an ensemble is usually much stronger than that of the base learners. Indeed, ensemble learning is appealing because it is able to boost weak learners, which are only slightly better than random guessing, into strong learners that can make very accurate predictions. For this reason, "base learners" are also referred to as "weak learners." It is noteworthy, however, that although most theoretical analyses work with weak learners, the base learners used in practice are not necessarily weak, since using...
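As a hedged illustration of this weak-to-strong effect (again, not part of the original entry), the sketch below compares a single decision stump, a learner typically only slightly better than random guessing, with an AdaBoost ensemble of such stumps. The dataset and the number of boosting rounds are illustrative assumptions; scikit-learn's AdaBoostClassifier uses a depth-1 stump as its default base learner.

  # Illustrative sketch: boosting a weak learner into a much stronger ensemble.
  from sklearn.datasets import make_classification
  from sklearn.model_selection import train_test_split
  from sklearn.tree import DecisionTreeClassifier
  from sklearn.ensemble import AdaBoostClassifier
  from sklearn.metrics import accuracy_score

  # Synthetic data; sizes and random seeds are assumptions for the example.
  X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                             random_state=1)
  X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

  # A single depth-1 stump: the "weak learner".
  stump = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)

  # AdaBoost combines many such stumps into a strong learner.
  boosted = AdaBoostClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

  print("single stump:", accuracy_score(y_te, stump.predict(X_te)))
  print("boosted ensemble:", accuracy_score(y_te, boosted.predict(X_te)))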






Copyright information

© 2015 Springer Science+Business Media New York

About this entry

Cite this entry

Zhou, ZH. (2015). Ensemble Learning. In: Li, S.Z., Jain, A.K. (eds) Encyclopedia of Biometrics. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7488-4_293

