
Revisiting the evaluation of defect prediction models

Published: 18 May 2009

ABSTRACT

Defect prediction models aim to identify error-prone parts of a software system as early as possible. Many such models have been proposed; their evaluation, however, is still an open question, as recent publications show.

An important aspect often ignored during evaluation is the effort reduction gained by using such models. Models are usually evaluated per module by performance measures used in information retrieval, such as recall, precision, or the area under the ROC curve (AUC). These measures assume that the costs associated with additional quality assurance activities are the same for each module, which is not reasonable in practice. For example, costs for unit testing and code reviews are roughly proportional to the size of a module.
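To make the criticism concrete, here is a minimal sketch (not taken from the paper) of the module-level measures the abstract refers to. Recall, precision, and AUC each count every module exactly once, so a 100-line module and a 10,000-line module contribute identically to the score. All labels, scores, and the 0.5 threshold below are invented for illustration.

    # Sketch: per-module evaluation ignores module size.
    def precision_recall(y_true, y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    def auc(y_true, scores):
        # AUC as the probability that a defective module outranks a clean one.
        pos = [s for t, s in zip(y_true, scores) if t == 1]
        neg = [s for t, s in zip(y_true, scores) if t == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Invented data: defect labels and classifier scores for six modules.
    y_true = [1, 0, 1, 0, 0, 1]
    scores = [0.9, 0.2, 0.45, 0.6, 0.1, 0.7]
    y_pred = [1 if s >= 0.5 else 0 for s in scores]

    print(precision_recall(y_true, y_pred))  # each module counts once ...
    print(auc(y_true, scores))               # ... regardless of its size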

In this paper, we investigate this discrepancy using optimal and trivial models. We describe a trivial model that takes only the module size measured in lines of code into account, and compare it to five classification methods. The trivial model performs surprisingly well when evaluated using AUC. However, when an effort-sensitive performance measure is used, it becomes apparent that the trivial model is in fact the worst.
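The paper's actual effort-sensitive measure is not spelled out in this abstract, so the following sketch only illustrates the general idea with a generic LOC-budget proxy: inspect modules in the order a model ranks them and count the defects found before a fixed line-of-code budget is exhausted. The module data, the 20% budget, and the stop-when-the-next-module-no-longer-fits policy are all assumptions made for illustration.

    # Sketch: an effort-aware comparison of a trivial LOC-only ranking
    # against a hypothetical classifier's scores. Numbers are invented.
    def defects_found_within_budget(modules, score_key, loc_budget):
        # Inspect modules in descending score order; stop as soon as the
        # next module would exceed the inspection budget.
        spent, found = 0, 0
        for m in sorted(modules, key=lambda m: m[score_key], reverse=True):
            if spent + m["loc"] > loc_budget:
                break
            spent += m["loc"]
            found += m["defects"]
        return found

    modules = [
        {"loc": 2000, "defects": 1, "trivial": 2000, "classifier": 0.55},
        {"loc":  150, "defects": 2, "trivial":  150, "classifier": 0.90},
        {"loc":  120, "defects": 1, "trivial":  120, "classifier": 0.80},
        {"loc": 1500, "defects": 0, "trivial": 1500, "classifier": 0.20},
        {"loc":  100, "defects": 0, "trivial":  100, "classifier": 0.10},
    ]

    budget = int(0.2 * sum(m["loc"] for m in modules))  # review ~20% of the code
    print(defects_found_within_budget(modules, "trivial", budget))     # 0 defects
    print(defects_found_within_budget(modules, "classifier", budget))  # 3 of 4 defects

With these invented numbers the size-based ranking spends its entire budget pointing the reviewer at one module too large to inspect and catches nothing, while the classifier's ranking catches three of the four defects in small modules; this is the kind of inversion between AUC and effort-sensitive evaluation that the abstract describes.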


Published in

PROMISE '09: Proceedings of the 5th International Conference on Predictor Models in Software Engineering
May 2009, 268 pages
ISBN: 9781605586342
DOI: 10.1145/1540438
Copyright © 2009 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 64 of 125 submissions, 51%
