Research Article
DOI: 10.1145/1456362.1456370

Prioritizing software security fortification through code-level metrics

Published: 27 October 2008

ABSTRACT

Limited resources preclude software engineers from finding and fixing all vulnerabilities in a software system. We create predictive models to identify which components are likely to have the most security risk. Software engineers can use these models to make measurement-based risk management decisions and to prioritize software security fortification efforts, such as redesign and additional inspection and testing. We mined and analyzed data from a large commercial telecommunications software system containing over one million lines of code that had been deployed to the field for two years. Using recursive partitioning, we built attack-prone prediction models with the following code-level metrics: static analysis tool alert density, code churn, and count of source lines of code. One model identified 100% of the attack-prone components (40% of the total number of components) with an 8% false positive rate. As such, the model could be used to prioritize fortification efforts in the system.
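
The "recursive partitioning" the abstract refers to is classification-tree learning over per-component metrics. As a minimal sketch only, assuming scikit-learn's DecisionTreeClassifier as a stand-in for the paper's (unspecified) recursive-partitioning tool, and using synthetic data with hypothetical metric values rather than the study's proprietary dataset, the following Python fragment shows how such an attack-prone model could be trained and scored on the two figures the abstract quotes, recall and false positive rate:

    # Hedged sketch: a CART-style attack-prone classifier over the three
    # code-level metrics named in the abstract. All data below is synthetic;
    # the paper's dataset and model parameters are not reproduced here.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(42)
    n = 400  # hypothetical number of software components

    # Per-component metrics: static analysis alert density (alerts/KLOC),
    # code churn (changed lines), and count of source lines of code.
    alert_density = rng.gamma(shape=2.0, scale=1.5, size=n)
    churn = rng.gamma(shape=1.5, scale=200.0, size=n)
    sloc = rng.gamma(shape=2.0, scale=1500.0, size=n)
    X = np.column_stack([alert_density, churn, sloc])

    # Synthetic label standing in for field vulnerability reports: components
    # with high alert density and churn are more likely to be attack-prone.
    risk = 0.4 * alert_density + 0.002 * churn + rng.normal(0, 0.5, size=n)
    y = (risk > np.quantile(risk, 0.8)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    # A shallow tree keeps the partitioning interpretable for prioritization.
    model = DecisionTreeClassifier(max_depth=3, class_weight="balanced",
                                   random_state=0)
    model.fit(X_train, y_train)

    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    recall = tp / (tp + fn)  # fraction of attack-prone components identified
    fpr = fp / (fp + tn)     # false positive rate among benign components
    print(f"recall = {recall:.2f}, false positive rate = {fpr:.2f}")

On real data, the fitted tree's decision thresholds over alert density, churn, and SLOC would indicate which components to prioritize for fortification efforts such as redesign and additional inspection and testing.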

Published in

QoP '08: Proceedings of the 4th ACM workshop on Quality of protection
October 2008
84 pages
ISBN: 9781605583211
DOI: 10.1145/1456362

        Copyright © 2008 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States
