DOI: 10.1145/2568225.2568269

Comparing static bug finders and statistical prediction

Published: 31 May 2014

ABSTRACT

The all-important goal of delivering better software at lower cost has led to a vital, enduring quest for ways to find and remove defects efficiently and accurately. To this end, two parallel lines of research have emerged over the years. Static analysis seeks to find defects using algorithms that process well-defined semantic abstractions of code. Statistical defect prediction uses historical data to estimate parameters of statistical formulae modeling the phenomena thought to govern defect occurrence, and to predict where defects are likely to occur. These two approaches have emerged from distinct intellectual traditions and have largely evolved independently, in “splendid isolation”. In this paper, we evaluate these two (largely) disparate approaches on a similar footing. We use historical defect data to appraise the two approaches, compare them, and seek synergies. We find that under some accounting principles they provide comparable benefits; we also find that, in some settings, the performance of certain static bug finders can be enhanced using information provided by statistical defect prediction.
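To make the second approach concrete, the sketch below shows the kind of statistical defect prediction model the abstract alludes to: a logistic regression fit to historical per-file metrics, used to rank files by predicted defect risk. This is a minimal illustration only, not the authors' actual models or data; the metric names (lines of code, recent churn, prior fix count), the synthetic labels, and the use of scikit-learn are all assumptions introduced here for exposition.

```python
# Illustrative sketch of statistical defect prediction (assumed setup, not the
# paper's models). A real study would mine historical metrics and defect labels
# from version control and issue trackers; here both are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_files = 500

# Hypothetical per-file metrics: size, recent churn, prior bug-fixing commits.
X = np.column_stack([
    rng.lognormal(5, 1, n_files),   # lines of code
    rng.poisson(10, n_files),       # recent commits touching the file
    rng.poisson(2, n_files),        # prior fix-inducing / fixing commits
])

# Synthetic "defective" labels, loosely correlated with churn and prior fixes.
logits = 0.05 * X[:, 1] + 0.4 * X[:, 2] - 2.0
y = (rng.random(n_files) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank files by predicted defect risk, as a prediction model would to guide inspection.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
```

In the comparison the abstract describes, a risk ranking of this kind would be weighed against the warnings produced by a static bug finder under a common accounting of inspection effort; the specifics of that accounting are given in the paper itself, not in this sketch.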


Published in:
ICSE 2014: Proceedings of the 36th International Conference on Software Engineering
May 2014, 1139 pages
ISBN: 9781450327565
DOI: 10.1145/2568225

      Copyright © 2014 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

