DOI: 10.1145/2661685.2661687

An empirical investigation of socio-technical code review metrics and security vulnerabilities

Published: 17 November 2014

ABSTRACT

One of the guiding principles of open source software development is to use crowds of developers to keep a watchful eye on source code. Eric Raymond declared Linus' Law as "many eyes make all bugs shallow", with the socio-technical argument that high-quality open source software emerges when developers combine their collective experience and expertise to review code collaboratively. Vulnerabilities are a particularly nasty class of bugs that can be rare, difficult to reproduce, and require specialized skills to recognize. Does Linus' Law apply to vulnerabilities empirically? In this study, we analyzed 159,254 code reviews, 185,948 Git commits, and 667 post-release vulnerabilities in the Chromium browser project. We formulated, collected, and analyzed various metrics related to Linus' Law to explore the connection between collaborative reviews and vulnerabilities that were missed by the review process. Our statistical association results showed that source code files reviewed by more developers are, counter-intuitively, more likely to be vulnerable (even after accounting for file size). However, files are less likely to be vulnerable if they were reviewed by developers who had experience participating in prior vulnerability-fixing reviews. The results indicate that lack of security experience and lack of collaborator familiarity are key risk factors in considering Linus' Law with vulnerabilities.


Published in

SSE 2014: Proceedings of the 6th International Workshop on Social Software Engineering
November 2014, 48 pages
ISBN: 9781450332279
DOI: 10.1145/2661685

Copyright © 2014 ACM


Publisher

Association for Computing Machinery, New York, NY, United States
