ABSTRACT
Software defect prediction has been a popular research topic in recent years and is considered a means to optimize quality assurance activities. Defect prediction can be performed in a within-project or a cross-project scenario. The within-project scenario yields high-quality results but requires historical data of the project, which is often unavailable. For cross-project prediction, data availability is not an issue, as data from other projects is readily available, e.g., in repositories like PROMISE. However, the quality of the resulting defect predictions is too low for practical use. Recent research has shown that selecting appropriate training data can improve the quality of cross-project defect predictions. In this paper, we propose distance-based strategies for the selection of training data based on distributional characteristics of the available data. We evaluate the proposed strategies in a large case study with 44 data sets obtained from 14 open source projects. Our results show that our training data selection strategies significantly improve the success rate of cross-project defect prediction. However, the quality of the results still cannot compete with within-project defect prediction.
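The abstract does not fix a concrete selection strategy, so the following is a minimal sketch of the general idea only: it assumes per-metric mean and standard deviation as the distributional characteristics and the Euclidean distance between these characteristic vectors as the selection criterion. The function names, the choice of k, and the synthetic data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def characteristic_vector(data):
    # Distributional characteristics of one data set: the per-metric mean
    # and standard deviation, concatenated into a single vector.
    return np.concatenate([data.mean(axis=0), data.std(axis=0)])

def select_training_data(target, candidates, k=3):
    # Rank candidate training sets by the Euclidean distance between their
    # characteristic vector and the target's, and keep the k nearest.
    target_vec = characteristic_vector(target)
    ranked = sorted(range(len(candidates)),
                    key=lambda i: np.linalg.norm(
                        characteristic_vector(candidates[i]) - target_vec))
    return [candidates[i] for i in ranked[:k]]

# Usage sketch: rows are instances (e.g., classes), columns are software
# metrics; the candidate sets stand in for data from other projects.
rng = np.random.default_rng(0)
target = rng.normal(size=(100, 20))
candidates = [rng.normal(loc=m, size=(100, 20)) for m in (0.1, 1.5, 0.2, 3.0)]
training_sets = select_training_data(target, candidates, k=2)
```

Note that distance metrics can behave counterintuitively in high-dimensional spaces (see the Aggarwal et al. and Beyer et al. entries in the references below), so the choice of metric for such a comparison is itself a design decision.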
REFERENCES
- C. Aggarwal, A. Hinneburg, and D. Keim. On the surprising behavior of distance metrics in high dimensional space. In Database Theory - ICDT 2001, volume 1973 of Lecture Notes in Computer Science, pages 420--434. Springer, 2001.
- V. Basili, L. Briand, and W. Melo. A validation of object-oriented design metrics as quality indicators. IEEE Transactions on Software Engineering, 22(10):751--761, 1996.
- N. Bettenburg, M. Nagappan, and A. Hassan. Think locally, act globally: Improving defect and effort prediction models. In Proceedings of the 9th IEEE Working Conference on Mining Software Repositories (MSR), pages 60--69, 2012.
- K. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. When is "nearest neighbor" meaningful? In Database Theory - ICDT 1999, volume 1540 of Lecture Notes in Computer Science, pages 217--235. Springer, 1999.
- A. E. Camargo Cruz and K. Ochimizu. Towards logistic regression models for predicting fault-prone code across software projects. In Proceedings of the 3rd International Symposium on Empirical Software Engineering and Measurement (ESEM), pages 460--463. IEEE Computer Society, 2009.
- G. Canfora, A. D. Lucia, M. D. Penta, R. Oliveto, A. Panichella, and S. Panichella. Multi-objective cross-project defect prediction. In Proceedings of the 6th IEEE International Conference on Software Testing, Verification and Validation (ICST), 2013.
- A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1--38, 1977.
- M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten. The WEKA data mining software: An update. ACM SIGKDD Explorations Newsletter, 11(1):10--18, 2009.
- Z. He, F. Shu, Y. Yang, M. Li, and Q. Wang. An investigation on the feasibility of cross-project defect prediction. Automated Software Engineering, 19:167--199, 2012.
- M. Jureczko and L. Madeyski. Towards identifying software project clusters with regard to defect prediction. In Proceedings of the 6th International Conference on Predictive Models in Software Engineering (PROMISE). ACM, 2010.
- T. Menzies, A. Butcher, A. Marcus, T. Zimmermann, and D. Cok. Local vs. global models for effort estimation and defect prediction. In Proceedings of the 26th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 343--351, 2011.
- T. Menzies, B. Caglayan, E. Kocaguneli, J. Krall, F. Peters, and B. Turhan. The PROMISE repository of empirical software engineering data, June 2012.
- T. Menzies, J. Greenwald, and A. Frank. Data mining static code attributes to learn defect predictors. IEEE Transactions on Software Engineering, 33(1):2--13, 2007.
- R. Moser, W. Pedrycz, and G. Succi. A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction. In Proceedings of the ACM/IEEE 30th International Conference on Software Engineering (ICSE), pages 181--190, 2008.
- N. Nagappan, T. Ball, and A. Zeller. Mining metrics to predict component failures. In Proceedings of the ACM/IEEE 28th International Conference on Software Engineering (ICSE), pages 452--461. ACM, 2006.
- T. Ostrand, E. Weyuker, and R. Bell. Predicting the location and number of faults in large software systems. IEEE Transactions on Software Engineering, 31(4):340--355, 2005.
- F. Rahman, D. Posnett, and P. Devanbu. Recalling the "imprecision" of cross-project defect prediction. In Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE). ACM, 2012.
- B. Turhan, T. Menzies, A. Bener, and J. Di Stefano. On the relative value of cross-company and within-company data for defect prediction. Empirical Software Engineering, 14:540--578, 2009.
- S. Watanabe, H. Kaiya, and K. Kaijiri. Adapting a fault prediction model to allow inter language reuse. In Proceedings of the 4th International Workshop on Predictor Models in Software Engineering (PROMISE), pages 19--24. ACM, 2008.
- E. Weyuker, T. Ostrand, and R. Bell. Comparing the effectiveness of several modeling methods for fault prediction. Empirical Software Engineering, 15(3):277--295, 2010.
- T. Zimmermann, N. Nagappan, H. Gall, E. Giger, and B. Murphy. Cross-project defect prediction: A large scale experiment on data vs. domain vs. process. In Proceedings of the 7th Joint Meeting of the European Software Engineering Conference (ESEC) and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE), pages 91--100. ACM, 2009.