
Comparing Vulnerability Severity and Exploits Using Case-Control Studies

Published: 15 August 2014

Abstract

(U.S.) Rule-based policies for mitigating software risk suggest using the CVSS score to measure the risk of an individual vulnerability and act accordingly. A key issue is whether the ‘danger’ score actually matches the risk of exploitation in the wild, and if and how such a score could be improved. To address this question, we propose using a case-control study methodology similar to the procedure used to link lung cancer and smoking in the 1950s. A case-control study allows the researcher to draw conclusions on the relation between some risk factor (e.g., smoking) and an effect (e.g., cancer) by looking backward at the cases (e.g., patients) and comparing them with controls (e.g., randomly selected patients with similar characteristics). The methodology allows us to quantify the risk reduction achievable by acting on the risk factor. We illustrate the methodology by using publicly available data on vulnerabilities, exploits, and exploits in the wild to (1) evaluate the performance of the current industry risk factor, the CVSS base score, and (2) determine whether it can be improved by considering additional factors such as the existence of a proof-of-concept exploit or of an exploit in the black markets. Our analysis reveals that (a) fixing a vulnerability just because it was assigned a high CVSS score is equivalent to randomly picking vulnerabilities to fix; (b) the existence of a proof-of-concept exploit is a significantly better risk factor; and (c) fixing in response to exploit presence in black markets yields the largest risk reduction.
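To make the case-control idea concrete, the sketch below shows one simple way to compare a candidate risk factor against exploitation-in-the-wild data using a 2x2 contingency table of cases (exploited vulnerabilities) and controls (non-exploited ones). The records, the choice of risk factor, and all variable names are illustrative assumptions, not the paper's actual dataset or analysis.

```python
# Minimal sketch (not the authors' actual analysis) of evaluating a vulnerability
# risk factor against exploitation-in-the-wild data with a 2x2 contingency table.
# The records below are made up for illustration only.

# Each record: (has_risk_factor, exploited_in_wild); has_risk_factor could stand
# for "CVSS base score >= 9" or "a proof-of-concept exploit exists".
vulns = [
    (True, True), (True, False), (False, False), (True, True),
    (False, False), (False, True), (True, False), (False, False),
]

# 2x2 contingency table: cases = exploited in the wild, controls = not exploited.
a = sum(1 for f, e in vulns if f and e)          # factor present, exploited
b = sum(1 for f, e in vulns if f and not e)      # factor present, not exploited
c = sum(1 for f, e in vulns if not f and e)      # factor absent, exploited
d = sum(1 for f, e in vulns if not f and not e)  # factor absent, not exploited

sensitivity = a / (a + c)   # share of exploited vulnerabilities flagged by the factor;
                            # under a "fix everything flagged" policy this is also the
                            # share of in-the-wild exploits that would have been prevented
specificity = d / (b + d)   # share of never-exploited vulnerabilities correctly left alone
odds_ratio = (a * d) / (b * c) if b * c else float("inf")  # strength of association

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} odds_ratio={odds_ratio:.2f}")
```

A factor with high sensitivity but very low specificity flags almost every vulnerability and therefore performs little better than picking vulnerabilities to fix at random, which is the pattern the abstract reports for the CVSS base score alone.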






          • Published in

            ACM Transactions on Information and System Security, Volume 17, Issue 1
            August 2014, 118 pages
            ISSN: 1094-9224
            EISSN: 1557-7406
            DOI: 10.1145/2660572
            • Editor: Gene Tsudik

          Copyright © 2014 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 15 August 2014
          • Accepted: 1 May 2014
          • Revised: 1 February 2014
          • Received: 1 September 2013


          Qualifiers

          • research-article
          • Research
          • Refereed
