DOI: 10.1145/2499393.2499395

Training data selection for cross-project defect prediction

Published: 09 October 2013

ABSTRACT

Software defect prediction has been a popular research topic in recent years and is considered a means to optimize quality assurance activities. Defect prediction can be done in a within-project or a cross-project scenario. The within-project scenario produces high-quality results, but requires historical data of the project, which is often unavailable. For cross-project prediction, data availability is not an issue, as data from other projects is readily available, e.g., in repositories like PROMISE. However, the quality of cross-project prediction results is too low for practical use. Recent research has shown that selecting appropriate training data can improve the quality of cross-project defect predictions. In this paper, we propose distance-based strategies for the selection of training data based on the distributional characteristics of the available data. We evaluate the proposed strategies in a large case study with 44 data sets obtained from 14 open source projects. Our results show that our training data selection strategy significantly improves the success rate of cross-project defect predictions. However, the quality of the results still cannot compete with within-project defect prediction.
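The abstract does not spell out how the distance-based selection works. As a rough sketch of the general idea only, the Python snippet below describes each project's data by its distributional characteristics (here assumed to be the per-metric mean and standard deviation) and keeps the k candidate projects whose characteristic vectors lie nearest, in Euclidean distance, to the target project's. All function names, the choice of characteristics, and the default k are illustrative assumptions, not the paper's actual procedure.

    import numpy as np

    def characteristic_vector(X):
        """Distributional characteristics of a dataset: the per-metric mean
        and standard deviation, concatenated into a single vector."""
        return np.concatenate([X.mean(axis=0), X.std(axis=0)])

    def select_training_projects(target_X, candidates, k=3):
        """Rank candidate projects by the Euclidean distance between their
        characteristic vectors and the target's, and return the k nearest.

        target_X   -- metrics matrix (instances x metrics) of the target project
        candidates -- dict mapping project name -> metrics matrix
        k          -- number of candidate projects to keep (hypothetical default)
        """
        target_vec = characteristic_vector(target_X)
        dists = {name: np.linalg.norm(characteristic_vector(X) - target_vec)
                 for name, X in candidates.items()}
        return sorted(dists, key=dists.get)[:k]

In such a setup, the instances of the selected projects would then be pooled as training data for a standard classifier and evaluated on the target project; in practice the metric columns would likely need to be normalized first, since software metrics live on very different scales.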


Published in

PROMISE '13: Proceedings of the 9th International Conference on Predictive Models in Software Engineering
October 2013, 103 pages
ISBN: 9781450320160
DOI: 10.1145/2499393

        Copyright © 2013 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 9 October 2013


        Qualifiers

        • research-article

        Acceptance Rates

Overall Acceptance Rate: 64 of 125 submissions, 51%
