DOI: 10.1145/1321631.1321702 · ASE Conference Proceedings · Poster

Extraction of bug localization benchmarks from history

Published: 05 November 2007

ABSTRACT

Researchers have proposed a number of tools for automatic bug localization. Given a program and a description of its failure, such a tool pinpoints the set of statements most likely to contain the bug. Evaluating bug localization tools is difficult because existing benchmarks are limited in both the size of their subject programs and the number of bugs. In this paper we present iBUGS, an approach that semi-automatically extracts benchmarks for bug localization from the history of a project. For ASPECTJ, we extracted 369 bugs, 223 of which had associated test cases. We demonstrate the relevance of our dataset with a case study on the bug localization tool AMPLE.
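The core idea of extracting bug benchmarks from history is to link entries in the bug database to the version-control commits that fixed them, typically by scanning commit messages for bug-report identifiers. The sketch below illustrates this linking step in minimal form; the log data and the regular expression are illustrative assumptions, not the paper's actual extraction rules.

```python
import re

# Hypothetical (revision, commit message) pairs standing in for a real
# version-control log; real histories would come from CVS/SVN/Git.
LOG = [
    ("r1001", "Fix for bug 36236: incorrect advice precedence"),
    ("r1002", "Refactoring, no functional change"),
    ("r1003", "fixed Bug 87376 and added a regression test"),
]

# Assumed message pattern: the word "bug" followed by a numeric report ID.
BUG_REF = re.compile(r"\bbug\s*#?(\d+)\b", re.IGNORECASE)

def link_fixes(log):
    """Map bug-report IDs to the revisions whose messages mention them."""
    links = {}
    for rev, msg in log:
        for bug_id in BUG_REF.findall(msg):
            links.setdefault(int(bug_id), []).append(rev)
    return links

print(link_fixes(LOG))  # {36236: ['r1001'], 87376: ['r1003']}
```

In a full pipeline along these lines, each linked fix commit would then be diffed against its predecessor to obtain the faulty statements, and associated test cases would be located to make the bug reproducible.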


Published in

ASE '07: Proceedings of the 22nd IEEE/ACM International Conference on Automated Software Engineering
November 2007, 590 pages
ISBN: 9781595938824
DOI: 10.1145/1321631
Copyright © 2007 ACM

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 82 of 337 submissions (24%)
