The main objective of this chapter is to discuss various supervised learning models in detail. A supervised learning model provides a parametrized mapping that projects a data domain onto a response set, and thus helps extract knowledge (the known) from data (the unknown). In their simplest form, these models can be grouped into predictive models and classification models. First, the predictive models, such as standard regression, ridge regression, lasso regression, and elastic-net regression, are discussed in detail, with mathematical and visual interpretations using simple examples. Second, the classification models are discussed and grouped into three families: mathematical models, hierarchical models, and layered models. The mathematical models include logistic regression and the support vector machine; the hierarchical models include the decision tree and the random forest; and the layered models include deep learning. These classifiers are treated here only from the modeling point of view; their modeling and algorithms are covered in detail in separate chapters later in the book.
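As a rough illustration of the regularized predictive models named above, the sketch below fits ridge regression in closed form and the lasso and elastic net by coordinate descent on a small synthetic dataset. This is a minimal NumPy sketch under assumed penalty weights (`alpha`, `lam1`, `lam2` are illustrative choices, not values from the chapter), not the chapter's own implementation.

```python
import numpy as np

def ridge(X, y, alpha):
    """Closed-form ridge solution: w = (X'X + alpha*I)^{-1} X'y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

def soft_threshold(rho, lam):
    """Shrink rho toward zero by lam; exactly zero inside [-lam, lam]."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def elastic_net(X, y, lam1, lam2, n_iter=200):
    """Coordinate descent for (1/2)||y - Xw||^2 + lam1*||w||_1 + (lam2/2)*||w||_2^2.

    lam2 = 0 reduces to the lasso; lam1 = 0 approaches ridge regression.
    """
    w = np.zeros(X.shape[1])
    z = (X ** 2).sum(axis=0)                  # per-feature squared norms
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - X @ w + X[:, j] * w[j]    # partial residual excluding feature j
            rho = X[:, j] @ r
            w[j] = soft_threshold(rho, lam1) / (z[j] + lam2)
    return w

# Synthetic data with a sparse ground-truth coefficient vector (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([3.0, 0.0, -2.0, 0.0, 1.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

w_ridge = ridge(X, y, alpha=1.0)
w_lasso = elastic_net(X, y, lam1=5.0, lam2=0.0)
w_enet = elastic_net(X, y, lam1=5.0, lam2=1.0)
```

The ridge (L2) penalty shrinks all coefficients but keeps them nonzero, while the lasso's soft-thresholding step can drive the coefficients of irrelevant features exactly to zero; the elastic net mixes both effects, which is the trade-off the chapter develops for these models.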
- Supervised Learning Models
- Springer US
- Chapter 7