ABSTRACT
During 2015, some members of the Xen Project Advisory Board became worried about the performance of their code review process. The Xen Project is a free, open source software project that develops one of the most popular virtualization platforms in the industry. It uses a pre-commit peer review process, based on email messages, similar to that of the Linux kernel. Board members had observed a large increase over time in the number of messages related to code review, and were worried that this could be a signal of problems in the process.
To address these concerns, we designed and conducted, with their continuous feedback, a detailed analysis aimed at finding such problems, if any existed. During the study, we dealt with the methodological difficulties of analyzing Linux-style code review, and with the deeper issue of finding metrics that could uncover the problems the Board was worried about. To have a benchmark, we ran the same analysis on a similar project with very similar code review practices: the Linux Netdev (Netdev) project. As a result, we learned that the Xen Project had in fact had some problems, but that at the time of the analysis they were already under control. We also found that the Xen and Netdev projects behaved quite differently with respect to code review performance, despite being so similar in many other respects.
In this paper we present the results of both analyses and propose a comprehensive, fully automated methodology for studying Linux-style code review. We also discuss the difficulty of obtaining meaningful metrics to track improvements or detect problems in this kind of code review.