ABSTRACT
Researchers have proposed a number of tools for automatic bug localization. Given a program and a description of a failure, such tools pinpoint the set of statements that are most likely to contain the bug. Evaluating bug localization tools is difficult because existing benchmarks are limited in the size of their subjects and the number of bugs. In this paper we present iBUGS, an approach that semi-automatically extracts benchmarks for bug localization from the history of a project. For ASPECTJ, we extracted 369 bugs; 223 of these had associated test cases. We demonstrate the relevance of our dataset with a case study on the bug localization tool AMPLE.
Index Terms
- Extraction of bug localization benchmarks from history