DOI: 10.1145/3133956.3134072

Research Article · Public Access

A Large-Scale Empirical Study of Security Patches

Frank Li and Vern Paxson

Published: 30 October 2017

ABSTRACT

Given how the "patching treadmill" plays a central role in enabling sites to counter emergent security concerns, it behooves the security community to understand the patch development process and the characteristics of the resulting fixes. Illumination of the nature of security patch development can inform us of shortcomings in existing remediation processes and provide insights for improving current practices. In this work we conduct a large-scale empirical study of security patches, investigating more than 4,000 bug fixes for over 3,000 vulnerabilities that affected a diverse set of 682 open-source software projects. For our analysis we draw upon the National Vulnerability Database, information scraped from relevant external references, affected software repositories, and their associated security fixes. Leveraging this diverse set of information, we conduct an analysis of various aspects of the patch development life cycle, including investigation into the duration of impact a vulnerability has on a code base, the timeliness of patch development, and the degree to which developers produce safe and reliable fixes. We then characterize the nature of security fixes in comparison to other non-security bug fixes, exploring the complexity of different types of patches and their impact on code bases.
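To make the described data linkage concrete, the following Python sketch illustrates the first step of mapping a vulnerability to its fix: query NVD for a CVE record and filter its external references for URLs that look like git commit links. This is an illustration only, not the authors' actual pipeline; the NVD 2.0 REST endpoint and the commit-URL regex heuristic are assumptions.

```python
import json
import re
import urllib.request

# Assumption: NVD's 2.0 REST API endpoint and response layout.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"
# Heuristic (assumption): reference URLs containing "commit" followed
# shortly by a hex hash usually point at a fixing commit in a web
# frontend such as GitHub, GitWeb, or cgit.
COMMIT_RE = re.compile(r"commit.{0,6}[0-9a-f]{7,40}", re.I)

def commit_references(cve_id: str) -> list[str]:
    """Return the external references of a CVE that look like commits."""
    with urllib.request.urlopen(f"{NVD_API}?cveId={cve_id}") as resp:
        record = json.load(resp)
    urls = []
    for vuln in record.get("vulnerabilities", []):
        for ref in vuln["cve"].get("references", []):
            url = ref.get("url", "")
            if COMMIT_RE.search(url):
                urls.append(url)
    return urls

if __name__ == "__main__":
    # Heartbleed: its NVD references typically include the OpenSSL
    # fix commit in a GitWeb frontend.
    for url in commit_references("CVE-2014-0160"):
        print(url)
```

From such commit URLs, the affected repository can be cloned and the named fix commit inspected directly.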

Among our findings we identify that: security patches have a lower footprint in code bases than non-security bug patches; a third of all security issues were introduced more than 3 years prior to remediation; attackers who monitor open-source repositories can often get a jump of weeks to months on targeting not-yet-patched systems prior to any public disclosure and patch distribution; nearly 5% of security fixes negatively impacted the associated software; and 7% failed to completely remedy the security hole they targeted.
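The "introduced more than 3 years prior to remediation" finding rests on dating when vulnerable code entered a repository. A standard heuristic for this is the SZZ approach: blame the lines a fix deletes to find the commits that last touched them. The sketch below, assuming a local git clone and a known non-merge fix commit, is a minimal illustration of that heuristic, not the paper's exact procedure.

```python
import re
import subprocess

def run_git(repo: str, *args: str) -> str:
    """Run a git command in `repo` and return its stdout."""
    return subprocess.run(
        ["git", "-C", repo, *args],
        check=True, capture_output=True, text=True,
    ).stdout

def blamed_commit_times(repo: str, fix_commit: str) -> list[int]:
    """SZZ-style heuristic: blame every line the fix deleted and
    collect the committer times of the blamed commits.
    Assumes `fix_commit` is a non-merge commit in a local clone."""
    diff = run_git(repo, "show", "--unified=0", "--format=", fix_commit)
    times, path = [], None
    for line in diff.splitlines():
        m = re.match(r"^--- a/(.+)$", line)
        if m:
            path = m.group(1)
            continue
        # Hunk header: @@ -old_start[,old_count] +new_start[,new_count] @@
        m = re.match(r"^@@ -(\d+)(?:,(\d+))? \+", line)
        if not (m and path):
            continue
        start, count = int(m.group(1)), int(m.group(2) or "1")
        if count == 0:
            continue  # pure insertion: no deleted lines to blame
        blame = run_git(
            repo, "blame", "--porcelain",
            "-L", f"{start},{start + count - 1}",
            f"{fix_commit}^", "--", path,
        )
        times += [int(l.split()[1]) for l in blame.splitlines()
                  if l.startswith("committer-time ")]
    return times

if __name__ == "__main__":
    # Hypothetical usage: repo path and commit hash are placeholders.
    # The oldest blamed commit bounds how long the flawed code existed.
    times = blamed_commit_times("/path/to/repo", "deadbeef")
    if times:
        print("earliest plausible introduction (unix time):", min(times))
```

The gap between that earliest blamed commit and the fix commit's date then serves as an estimate of the vulnerability's lifetime in the code base.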



Published in

CCS '17: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security
October 2017 · 2682 pages
ISBN: 9781450349468
DOI: 10.1145/3133956

              Copyright © 2017 ACM

              Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

CCS '17 Paper Acceptance Rate: 151 of 836 submissions, 18%
Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%
