
2017 | Original Paper | Book Chapter

17. Random Forests

Authors: Ronny Hänsch, Olaf Hellwich

Published in: Photogrammetrie und Fernerkundung

Publisher: Springer Berlin Heidelberg


Abstract

Random Forests and their variants are among the most successful methods in machine learning. Their simplicity, efficiency, robustness, accuracy, and generality have led both to manifold adaptations of the underlying concept and to many successful applications to a wide range of problems. This article aims to give an overview of Random Forests: it presents their origins, explains the basic concepts and potential extensions, discusses advantages and disadvantages, and mentions some of the most influential applications in the field of digital image analysis.
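To make the underlying concept mentioned in the abstract concrete, the following is a minimal, self-contained Python sketch of the three ingredients that define a Random Forest: bootstrap sampling of the training data (bagging), a random subset of features considered at every split, and majority voting over the trees. This toy implementation and all of its names (gini, fit_tree, fit_forest, ...) are our own illustrative assumptions and not the chapter's method; in practice a library implementation such as scikit-learn's RandomForestClassifier would be preferred.

```python
# Minimal sketch of a Random Forest classifier: bagging + random feature
# subsets per split + majority voting. Illustrative only, not optimized.
import random
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def fit_tree(X, y, n_sub_features, min_samples=2, depth=0, max_depth=8):
    """Grow one randomized decision tree: at every node only a random
    subset of the features is considered as split candidates."""
    if len(set(y)) == 1 or len(y) < min_samples or depth >= max_depth:
        return ("leaf", Counter(y).most_common(1)[0][0])
    candidates = random.sample(range(len(X[0])), n_sub_features)
    best = None  # (weighted impurity, feature index, threshold)
    for f in candidates:
        for t in {row[f] for row in X}:
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    if best is None:
        return ("leaf", Counter(y).most_common(1)[0][0])
    _, f, t = best
    li = [i for i, row in enumerate(X) if row[f] <= t]
    ri = [i for i, row in enumerate(X) if row[f] > t]
    return ("node", f, t,
            fit_tree([X[i] for i in li], [y[i] for i in li],
                     n_sub_features, min_samples, depth + 1, max_depth),
            fit_tree([X[i] for i in ri], [y[i] for i in ri],
                     n_sub_features, min_samples, depth + 1, max_depth))

def predict_tree(tree, x):
    if tree[0] == "leaf":
        return tree[1]
    _, f, t, left, right = tree
    return predict_tree(left if x[f] <= t else right, x)

def fit_forest(X, y, n_trees=25, n_sub_features=1):
    """Train each tree on a bootstrap sample of the data (bagging)."""
    forest, n = [], len(X)
    for _ in range(n_trees):
        idx = [random.randrange(n) for _ in range(n)]  # sample with replacement
        forest.append(fit_tree([X[i] for i in idx], [y[i] for i in idx], n_sub_features))
    return forest

def predict_forest(forest, x):
    """Majority vote over all trees."""
    return Counter(predict_tree(tree, x) for tree in forest).most_common(1)[0][0]

if __name__ == "__main__":
    # Toy 2-D problem: class 1 above the diagonal x0 + x1 = 1, class 0 below.
    random.seed(0)
    X = [[random.random(), random.random()] for _ in range(200)]
    y = [int(a + b > 1.0) for a, b in X]
    forest = fit_forest(X, y)
    print(predict_forest(forest, [0.9, 0.8]))  # point deep in the class-1 region
    print(predict_forest(forest, [0.1, 0.2]))  # point deep in the class-0 region
```

The per-node feature subsampling is what decorrelates the trees; without it, bagging alone tends to produce very similar trees and the ensemble gains little over a single tree.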

Footnotes
1. *Geobasisdaten: Land NRW, Bonn, 2111/2009
Metadata
Title
Random Forests
Authors
Ronny Hänsch
Olaf Hellwich
Copyright Year
2017
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-662-47094-7_46
