
09-05-2020 | Original Article

Flat random forest: a new ensemble learning method towards better training efficiency and adaptive model size to deep forest

Authors: Peng Liu, Xuekui Wang, Liangfei Yin, Bing Liu

Published in: International Journal of Machine Learning and Cybernetics | Issue 11/2020


Abstract

Known deficiencies of deep neural networks include poor training efficiency, weak parallelization capability, and an excessive number of hyper-parameters. To address these issues, researchers proposed deep forest, a tree-based deep learning model that achieves significant improvements but still suffers from poor training efficiency, an inflexible model size, and weak interpretability. This paper tackles these problems in a new way. First, deep forest is extended to a densely connected deep forest to enhance prediction accuracy. Second, to enable parallel training with an adaptive model size, the flat random forest is proposed, which balances the width and depth of the densely connected deep forest. Finally, two core algorithms are presented: one for forward computation of the output weights and one for updating the output weights. Experimental results show that, compared with deep forest, the proposed flat random forest achieves competitive prediction accuracy with higher training efficiency, fewer hyper-parameters, and an adaptive model size.
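The abstract only names the two core algorithms; their details are in the full text. As a rough illustration of the general idea it describes (a wide layer of independently trainable forests whose concatenated class-probability outputs are combined through output weights solved in closed form) here is a minimal sketch in Python. Everything in it is an assumption made for illustration, not the authors' published procedure: scikit-learn's RandomForestClassifier, the digits dataset, the ridge-regularized pseudoinverse W = (A^T A + lam*I)^-1 A^T Y, and the n_forests and lam settings are all hypothetical choices.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Illustrative data; the paper's experiments are not reproduced here.
    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    n_forests, lam = 4, 1e-2  # hypothetical flat-layer width and ridge penalty

    # The forests are independent of one another, so this loop parallelizes trivially.
    forests = [
        RandomForestClassifier(n_estimators=50, random_state=i).fit(X_tr, y_tr)
        for i in range(n_forests)
    ]

    # Concatenate each forest's class-probability outputs into one wide feature block A.
    A = np.hstack([f.predict_proba(X_tr) for f in forests])
    Y = np.eye(len(np.unique(y)))[y_tr]  # one-hot training targets

    # Forward output-weight computation (assumed closed form):
    # W = (A^T A + lam*I)^{-1} A^T Y.
    W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

    # Widening the model (adding a forest) only appends columns to A and
    # re-solves for W; the existing forests never need retraining.
    A_te = np.hstack([f.predict_proba(X_te) for f in forests])
    pred = (A_te @ W).argmax(axis=1)
    print(f"held-out accuracy: {(pred == y_te).mean():.3f}")

Under this reading, the width-depth trade-off mentioned in the abstract would correspond to choosing n_forests per layer versus stacking further layers of concatenated forest outputs, and "adaptive model size" to the fact that widening only requires re-solving the linear system for W.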

Metadata
Title
Flat random forest: a new ensemble learning method towards better training efficiency and adaptive model size to deep forest
Authors
Peng Liu
Xuekui Wang
Liangfei Yin
Bing Liu
Publication date
09-05-2020
Publisher
Springer Berlin Heidelberg
Published in
International Journal of Machine Learning and Cybernetics / Issue 11/2020
Print ISSN: 1868-8071
Electronic ISSN: 1868-808X
DOI
https://doi.org/10.1007/s13042-020-01136-0
