Research article
DOI: 10.1145/3084226.3084247

Using Metrics to Track Code Review Performance

Published: 15 June 2017

ABSTRACT

During 2015, some members of the Xen Project Advisory Board became worried about the performance of their code review process. The Xen Project is a free, open source software project developing one of the most popular virtualization platforms in the industry. It uses a pre-commit peer review process, based on email messages, similar to that of the Linux kernel. They had observed a large increase over time in the number of messages related to code review, and were worried that this could signal problems with their code review process.

To address these concerns, we designed and conducted, with their continuous feedback, a detailed analysis aimed at finding such problems, if any. During the study, we dealt with the methodological challenges of analyzing Linux-style code review, and with the deeper issue of defining metrics that could uncover the problems they were worried about. To have a benchmark, we ran the same analysis on a similar project with very similar code review practices: the Linux Netdev (Netdev) project. As a result, we learned that the Xen Project had in fact experienced some problems, but that at the time of the analysis they were already under control. We also found that the Xen and Netdev projects behave quite differently with respect to code review performance, despite being very similar in many other respects.

In this paper we present the results of both analyses and propose a comprehensive, fully automated methodology for studying Linux-style code review. We also discuss the difficulty of finding meaningful metrics to track improvements or detect problems in this kind of code review.
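
To give a concrete sense of the kind of metric such a methodology tracks, the following minimal sketch (an illustration under our own assumptions, not the authors' actual tooling) measures the time from a patch being posted on a Linux-style review mailing list to its first reply, using Python's standard mailbox module. The archive file name, the "[PATCH" subject heuristic, and the decision to count any reply as review activity are assumptions made for this example only.

    # Illustrative sketch only (not the paper's actual tooling): estimate the time
    # from a patch being posted on a Linux-style review mailing list to its first
    # reply, given a local mbox archive of the list.
    import mailbox
    from email.utils import parsedate_to_datetime

    ARCHIVE_PATH = "xen-devel.mbox"  # hypothetical local archive of the list

    def message_date(msg):
        # Parse the Date header; skip messages whose date is missing, malformed,
        # or timezone-naive so that datetime arithmetic stays consistent.
        raw = msg["Date"]
        if raw is None:
            return None
        try:
            date = parsedate_to_datetime(raw)
        except (TypeError, ValueError):
            return None
        return date if date.tzinfo is not None else None

    posted = {}       # Message-ID of a patch -> datetime it was posted
    first_reply = {}  # Message-ID of a patch -> datetime of its earliest reply

    for msg in mailbox.mbox(ARCHIVE_PATH):
        subject = msg["Subject"] or ""
        date = message_date(msg)
        if date is None:
            continue
        if "[PATCH" in subject and not subject.lower().startswith("re:"):
            posted[msg["Message-ID"]] = date   # heuristic: a new patch posting
        elif msg["In-Reply-To"]:
            parent = msg["In-Reply-To"]        # heuristic: any reply counts as review activity
            if parent not in first_reply or date < first_reply[parent]:
                first_reply[parent] = date

    # Delay, in hours, between posting a patch and receiving its first reply.
    delays = sorted(
        (first_reply[mid] - sent).total_seconds() / 3600
        for mid, sent in posted.items()
        if mid in first_reply and first_reply[mid] > sent
    )
    if delays:
        print("patches with at least one reply:", len(delays))
        print("median hours to first reply: %.1f" % delays[len(delays) // 2])

A real analysis along the lines described in the paper would of course need to handle patch series, re-submissions, and review tags, but even this simple delay distribution illustrates the kind of signal a performance metric has to capture.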

Published in

EASE '17: Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering
June 2017
405 pages
ISBN: 9781450348041
DOI: 10.1145/3084226

    Copyright © 2017 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

Overall Acceptance Rate: 71 of 232 submissions, 31%
