Research article, CCS conference proceedings
DOI: 10.1145/2508859.2516665

Chucky: exposing missing checks in source code for vulnerability discovery

Published: 04 November 2013

ABSTRACT

Uncovering security vulnerabilities in software is key to operating secure systems. Unfortunately, only some security flaws can be detected automatically, and the vast majority of vulnerabilities are still identified through tedious auditing of source code. In this paper, we strive to improve this situation by accelerating the process of manual auditing. We introduce Chucky, a method to expose missing checks in source code. Many vulnerabilities result from insufficient input validation, and thus omitted or false checks provide valuable clues for finding security flaws. Our method proceeds by statically tainting source code and identifying anomalous or missing conditions linked to security-critical objects. In an empirical evaluation with five popular open-source projects, Chucky is able to accurately identify artificial and real missing checks, which ultimately enables us to uncover 12 previously unknown vulnerabilities in two of the projects (Pidgin and LibTIFF).
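The core idea described in the abstract — flagging a check as suspicious when most similar functions perform it but one omits it — can be sketched as a small anomaly-scoring routine. This is a hypothetical illustration, not the paper's implementation: the function names, the check strings, and the scoring (fraction of peer functions performing a check that a given function omits) are all assumptions for the example.

```python
# Hypothetical sketch of missing-check anomaly scoring, inspired by the
# approach the abstract describes. Input: for each function, the set of
# conditions it checks on a security-critical object. Output: for each
# function, the checks it omits, scored by how common they are among peers.
from collections import Counter


def anomaly_scores(functions):
    """functions: dict mapping function name -> set of observed checks.
    Returns {name: {omitted_check: score}}, where score in [0, 1] is the
    fraction of all functions that perform the check this one omits."""
    all_checks = set().union(*functions.values())
    freq = Counter()
    for checks in functions.values():
        freq.update(checks)
    n = len(functions)
    scores = {}
    for name, checks in functions.items():
        # Checks this function lacks, weighted by their prevalence:
        # a score near 1.0 means nearly every peer performs the check.
        scores[name] = {c: freq[c] / n for c in all_checks - checks}
    return scores


funcs = {
    "read_header":  {"ptr != NULL", "len < MAX"},
    "read_packet":  {"ptr != NULL", "len < MAX"},
    "read_payload": {"ptr != NULL"},  # omits the length check
}
print(anomaly_scores(funcs)["read_payload"])  # length check flagged, score 2/3
```

A realistic system would, as the abstract indicates, first taint the code statically to find which conditions actually guard a given security-critical object, and would compare each function only against its most similar neighbors rather than all functions.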


Published in

CCS '13: Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security
November 2013, 1530 pages
ISBN: 9781450324779
DOI: 10.1145/2508859

Copyright © 2013 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

              Publisher

              Association for Computing Machinery

              New York, NY, United States


Acceptance Rates

CCS '13 paper acceptance rate: 105 of 530 submissions (20%). Overall acceptance rate: 1,261 of 6,999 submissions (18%).
