ABSTRACT
Defect prediction models aim to identify error-prone parts of a software system as early as possible. Many such models have been proposed; their evaluation, however, remains an open question, as recent publications show.
An important aspect that is often ignored during evaluation is the effort reduction gained by using such models. Models are usually evaluated per module with performance measures from information retrieval, such as recall, precision, or the area under the ROC curve (AUC). These measures assume that the cost of additional quality assurance activities is the same for each module, which is not realistic in practice: the cost of unit testing or a code review, for example, is roughly proportional to the size of the module.
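As a minimal illustration of why such measures are size-blind, the sketch below computes module-level AUC with the rank-based Mann-Whitney formula; the function and data are hypothetical, not taken from the paper, and every module contributes equally to the score regardless of how large it is.

```python
# Hypothetical sketch: module-level AUC via the rank-based Mann-Whitney
# formula. Each module counts equally, no matter how many lines it has.

def auc(defective, scores):
    """AUC for binary labels `defective` (0/1) and predicted `scores`."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):                     # average ranks over tied scores
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    pos = [r for r, d in zip(ranks, defective) if d]
    n_pos, n_neg = len(pos), len(defective) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Made-up example: four modules' labels and predicted scores; size plays no role.
print(auc([1, 0, 1, 0], [0.9, 0.2, 0.8, 0.4]))  # 1.0: a perfect ranking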
In this paper, we investigate this discrepancy using optimal and trivial models. We describe a trivial model that takes into account only module size measured in lines of code, and compare it to five classification methods. The trivial model performs surprisingly well when evaluated using AUC. However, once an effort-sensitive performance measure is used, it becomes apparent that the trivial model is in fact the worst.
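The abstract does not spell out the effort-sensitive measure used in the paper; a plausible stand-in (an assumption here) is the fraction of defective modules caught when modules are inspected in ranked order until a fixed share of the total LOC has been spent. The sketch below contrasts a hypothetical classifier ranking with the trivial size-only ranking on made-up data.

```python
# Hedged sketch: a generic effort-aware measure -- the share of defective
# modules caught while inspecting modules in ranked order, stopping once the
# next module would exceed 20% of the total LOC. The paper's measure may differ.

def defects_found_at_loc_budget(modules, score, budget=0.20):
    """`modules` is a list of (loc, defective) pairs; `score(m)` ranks them."""
    total_loc = sum(loc for loc, _ in modules)
    total_def = sum(1 for _, d in modules if d)
    spent, found = 0, 0
    for loc, defective in sorted(modules, key=score, reverse=True):
        if spent + loc > budget * total_loc:
            break
        spent += loc
        found += defective
    return found / total_def

# Hypothetical data: (lines of code, is_defective)
modules = [(2000, 1), (1500, 0), (120, 1), (90, 1), (60, 0), (40, 1)]

# Trivial model: rank purely by size, largest module first.
trivial = defects_found_at_loc_budget(modules, score=lambda m: m[0])

# A made-up classifier ranking that prefers small, defect-prone modules.
predicted_prob = {2000: 0.3, 1500: 0.2, 120: 0.9, 90: 0.8, 60: 0.1, 40: 0.7}
clf = defects_found_at_loc_budget(modules, score=lambda m: predicted_prob[m[0]])

print(trivial, clf)  # 0.0 vs 0.75 on this invented data
```

With these invented numbers, the size-only ranking spends the entire 20% LOC budget on a single large module and finds no defects, while the classifier ranking finds three of four, illustrating how an effort-aware measure penalizes the trivial model.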