DOI: 10.1145/1806799.1806812

Has the bug really been fixed?

Published: 01 May 2010

ABSTRACT

Software has bugs, and fixing those bugs pervades the software engineering process. Folklore holds that bug fixes are often buggy themselves, resulting in bad fixes that either fail to fix the bug or introduce new bugs. To confirm this folklore, we explored the bug databases of the Ant, AspectJ, and Rhino projects and found that bad fixes comprise as much as 9% of all bugs. Detecting and correcting bad fixes is thus important for improving the quality and reliability of software, yet no prior work has systematically considered this bad-fix problem, which this paper introduces and formalizes. In particular, the paper formalizes two criteria for determining whether a fix resolves a bug: coverage and disruption. The coverage of a fix measures the extent to which the fix correctly handles all inputs that may trigger the bug, while its disruption measures how far the fixed program deviates from the program's intended behavior. The paper also introduces a novel notion of distance-bounded weakest precondition as the basis for practical techniques that compute the coverage and disruption of a fix.
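A brief background note, since the abstract assumes familiarity with weakest preconditions: in Dijkstra's classical predicate-transformer semantics, wp(S, Q) is the weakest predicate guaranteeing that statement S terminates in a state satisfying postcondition Q. The standard rules are sketched below; the paper's distance-bounded variant, which limits how far the predicate is propagated, is not reproduced here.

```latex
% Classical weakest-precondition rules (Dijkstra, 1976);
% the paper's distance-bounded variant is NOT shown here.
wp(x := e,\; Q) \;=\; Q[e/x]
wp(S_1;\, S_2,\; Q) \;=\; wp(S_1,\, wp(S_2,\, Q))
wp(\mathtt{if}\ b\ \mathtt{then}\ S_1\ \mathtt{else}\ S_2,\; Q)
  \;=\; (b \Rightarrow wp(S_1, Q)) \,\wedge\, (\lnot b \Rightarrow wp(S_2, Q))
```

To make coverage and disruption concrete, here is a minimal Java sketch of a hypothetical buggy method and an equally hypothetical bad fix; neither the class nor the fix is taken from the paper or the studied projects.

```java
// Hypothetical sketch illustrating the paper's two criteria;
// the class, the bug, and the "fix" are invented for exposition.
public class PathUtil {

    // Original, buggy version: throws NullPointerException on null
    // and StringIndexOutOfBoundsException on the empty string.
    static String baseName(String path) {
        if (path.charAt(path.length() - 1) == '/') {   // crashes on null, ""
            path = path.substring(0, path.length() - 1);
        }
        return path.substring(path.lastIndexOf('/') + 1);
    }

    // A "bad fix" in the paper's sense:
    //  * incomplete coverage -- null is now handled, but the empty
    //    string still reaches charAt(-1) and crashes, so an input
    //    that triggers the bug remains;
    //  * disruption -- if the intended contract was to reject null
    //    loudly, silently returning "" deviates from the program's
    //    intended behavior.
    static String baseNameFixed(String path) {
        if (path == null) {
            return "";                                  // partial guard only
        }
        if (path.charAt(path.length() - 1) == '/') {    // still throws on ""
            path = path.substring(0, path.length() - 1);
        }
        return path.substring(path.lastIndexOf('/') + 1);
    }
}
```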

To validate our approach, we implemented Fixation, a prototype that automatically detects bad fixes for Java programs. When it detects a bad fix, Fixation returns either an input that still triggers the original bug or a report of a newly introduced bug; programmers can then use that bug-triggering input to refine or reformulate the fix. We manually extracted bad fixes from real-world projects and evaluated Fixation against them: it successfully detected the extracted bad fixes.
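As a usage sketch (again hypothetical: the paper does not prescribe a workflow, and JUnit 4 is an assumption), a bug-triggering input returned by a Fixation-style tool can be pinned down as a regression test before the fix is reformulated, so the refined fix is checked against exactly the input that defeated the previous one:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical regression test for the PathUtil sketch above:
// "" is the input a Fixation-style tool would have reported as
// still triggering the bug after the bad fix.
public class BadFixRegressionTest {

    @Test
    public void reportedInputNoLongerCrashes() {
        // Desired behavior once the fix is reformulated.
        assertEquals("", PathUtil.baseNameFixed(""));
    }
}
```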


Published in

ICSE '10: Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1
May 2010, 627 pages
ISBN: 9781605587196
DOI: 10.1145/1806799
Copyright © 2010 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall Acceptance Rate: 276 of 1,856 submissions, 15%

