This chapter is concerned with estimating the performance of a classifier (of any kind). Three methods are described for estimating a classifier’s predictive accuracy. The first of these is to divide the data available into a training set used for generating the classifier and a test set used for evaluating its performance. The other methods are \(k\)-fold cross-validation and its extreme form \(N\)-fold (or leave-one-out) cross-validation.
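The train-and-test split and the two forms of cross-validation can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the book: the function names, the random shuffling and the `train_fn` interface (a function that takes training instances and labels and returns a classifier) are assumptions made for the example.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k roughly equal, disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validated_accuracy(data, labels, train_fn, k=10):
    """Estimate predictive accuracy by k-fold cross-validation.

    Each fold is held out once as the test set and the classifier is
    trained on the remaining k-1 folds.  With k == len(data) this is
    N-fold (leave-one-out) cross-validation.
    """
    folds = k_fold_indices(len(data), k)
    correct = 0
    for test_idx in folds:
        held_out = set(test_idx)
        train_idx = [i for i in range(len(data)) if i not in held_out]
        classify = train_fn([data[i] for i in train_idx],
                            [labels[i] for i in train_idx])
        correct += sum(classify(data[i]) == labels[i] for i in test_idx)
    return correct / len(data)
```

As a usage example, a (hypothetical) majority-class classifier `lambda xs, ys: (lambda x, m=max(set(ys), key=ys.count): m)` can be passed as `train_fn`; with `k` equal to the number of instances the loop reduces to leave-one-out estimation.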
A statistical measure of the accuracy of an estimate formed using any of these methods, known as the standard error, is introduced. Experiments to estimate the predictive accuracy of the classifiers generated for various datasets are described, including datasets with missing attribute values. Finally, a tabular way of presenting classifier performance information, called a confusion matrix, is introduced, together with the notion of true and false positive and negative classifications.
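The two measures named above can also be sketched directly. For an accuracy estimate \(p\) obtained from \(n\) test instances, the standard error is \(\sqrt{p(1-p)/n}\); a confusion matrix simply tabulates how often each actual class was predicted as each class, and for two classes its cells are the true/false positives and negatives. The code below is a minimal sketch under those standard definitions, not the book's own implementation.

```python
import math
from collections import Counter

def standard_error(p, n):
    """Standard error of an accuracy estimate p based on n test instances."""
    return math.sqrt(p * (1 - p) / n)

def confusion_matrix(actual, predicted):
    """Count (actual, predicted) class pairs.

    For a two-class problem with classes 'pos' and 'neg', the cell
    ('pos', 'pos') holds the true positives, ('pos', 'neg') the false
    negatives, ('neg', 'pos') the false positives and ('neg', 'neg')
    the true negatives.
    """
    return Counter(zip(actual, predicted))
```

For example, an estimated accuracy of 0.9 from 100 test instances has a standard error of \(\sqrt{0.9 \times 0.1 / 100} = 0.03\).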
- Estimating the Predictive Accuracy of a Classifier
- Prof. Max Bramer
- Copyright Year
- Springer London