
2016 | Original Paper | Chapter

8. Supervised Learning Algorithms

Author: Shan Suthaharan

Published in: Machine Learning Models and Algorithms for Big Data Classification

Publisher: Springer US


Abstract

Supervised learning algorithms allow learning models to be trained efficiently so that they can deliver high classification accuracy. In general, supervised learning algorithms support the search for optimal values of the model parameters using large data sets, without overfitting the model. A careful, systematic design of the learning algorithm is therefore essential. The machine learning field suggests three phases for the design of a supervised learning algorithm: a training phase, a validation phase, and a testing phase. Accordingly, it recommends dividing the data set into three subsets to carry out these tasks, and it suggests defining or selecting suitable performance evaluation metrics to train, validate, and test the supervised learning models. The objectives of this chapter are therefore to discuss the three phases of a supervised learning algorithm and three performance evaluation metrics called domain division, classification accuracy, and oscillation characteristics. The chapter also introduces five new performance evaluation metrics, called delayed learning, sporadic learning, deteriorate learning, heedless learning, and stabilized learning, which can help measure classification accuracy under oscillation characteristics.
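The three-phase workflow described in the abstract can be made concrete with a short sketch. The code below is not taken from the chapter: the 60/20/20 split, the toy nearest-centroid classifier, and the fit_centroids, predict, and accuracy helpers are all illustrative assumptions, chosen only to show how a data set is divided into training, validation, and test subsets and how classification accuracy is measured in each phase.

# A minimal sketch of the three-phase supervised learning workflow:
# split the data into training, validation, and test subsets, fit a
# simple model in the training phase, check it on the validation set,
# and report accuracy on the held-out test set. All names and the
# 60/20/20 split are illustrative, not the chapter's own code.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data set: features X, labels y.
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal(2.5, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Shuffle, then divide the indices into the three subsets.
idx = rng.permutation(len(y))
train, val, test = np.split(idx, [int(0.6 * len(y)), int(0.8 * len(y))])

def fit_centroids(X, y):
    """Training phase: compute one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    """Assign each point to the class with the nearest centroid."""
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

def accuracy(y_true, y_pred):
    """Classification accuracy: fraction of correctly labeled points."""
    return float(np.mean(y_true == y_pred))

model = fit_centroids(X[train], y[train])              # training phase
val_acc = accuracy(y[val], predict(model, X[val]))     # validation phase
test_acc = accuracy(y[test], predict(model, X[test]))  # testing phase
print(f"validation accuracy={val_acc:.3f}, test accuracy={test_acc:.3f}")

In a real design, the validation accuracy would drive model or parameter selection across several candidates, and only the final chosen model would be evaluated once on the test subset.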


Metadata
Title: Supervised Learning Algorithms
Author: Shan Suthaharan
Copyright Year: 2016
Publisher: Springer US
DOI: https://doi.org/10.1007/978-1-4899-7641-3_8