ABSTRACT
Code review is an effective practice for finding defects, but because it is manually intensive it can slow down the continuous integration of changes. Our goal was to understand the factors that influence the time a change (a "diff" at Meta) spends in review. A developer survey showed that diff reviews start to feel slow after they have been waiting for around 24 hours. We built a review time prediction model to identify factors that may cause reviews to take longer, which we could use to predict the best time to nudge reviewers or to identify diff-related factors that we may need to address.
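As an illustrative sketch of the kind of predictor described above (not the paper's actual implementation; the feature names, data schema, and choice of gradient-boosted regression are all assumptions), such a model could be trained as follows:

# Illustrative sketch only: the feature set, schema, and model choice are
# assumptions, not Meta's actual implementation.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

diffs = pd.read_csv("diffs.csv")  # hypothetical: one row per published diff
features = diffs[["lines_changed", "files_touched", "day_of_week_published",
                  "reviewer_meeting_hours", "num_reviewers"]]
target = diffs["hours_in_review"]

model = GradientBoostingRegressor().fit(features, target)
predicted_hours = model.predict(features)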
The strongest feature in our time-in-review model was the day of the week, because diffs published near the weekend may have to wait until Monday for review. After removing time spent on weekends, the remaining features, including the size of the diff and the number of meetings the reviewers have, did not provide substantial predictive power, meaning we could not accurately predict how long a code review would take.
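For instance, discounting weekend hours from the time-in-review measure can be sketched as follows (a minimal illustration; the paper does not give its exact computation, and the hour-granularity walk is an assumption):

from datetime import datetime, timedelta

def business_hours_in_review(published: datetime, reviewed: datetime) -> float:
    """Hours between publish and review, skipping Saturdays and Sundays."""
    total = timedelta()
    cursor = published
    while cursor < reviewed:
        # Advance one hour at a time, counting only weekday hours.
        step = min(cursor + timedelta(hours=1), reviewed)
        if cursor.weekday() < 5:  # Monday=0 .. Friday=4
            total += step - cursor
        cursor = step
    return total.total_seconds() / 3600

Under this measure, a diff published Friday at 17:00 and reviewed Monday at 10:00 accrues 17 hours rather than 65, which removes the weekend effect that otherwise dominates the day-of-week feature.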
We contributed to the effort to reduce stale diffs by suggesting that diffs be nudged near the start of the workday, and that diffs published near the weekend be nudged earlier on Friday to avoid waiting the entire weekend. We use a nudging threshold rather than a model because we showed that TimeInReview cannot be accurately modelled. The NudgeBot has been rolled out to over 30,000 developers at Meta.
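A minimal sketch of such a threshold rule, using the 24-hour threshold from the survey above (the Friday cut-off value and the workday window are assumptions, not values from the paper):

from datetime import datetime

STALE_THRESHOLD_HOURS = 24   # from the developer survey
FRIDAY_THRESHOLD_HOURS = 4   # assumption: nudge sooner before the weekend
WORKDAY_START_HOUR = 9       # assumed start of the reviewers' workday

def should_nudge(published: datetime, now: datetime) -> bool:
    waited_hours = (now - published).total_seconds() / 3600
    # Diffs published on a Friday get a lower threshold so they are
    # nudged before the weekend rather than sitting until Monday.
    threshold = (FRIDAY_THRESHOLD_HOURS if published.weekday() == 4
                 else STALE_THRESHOLD_HOURS)
    # Only fire near the start of the workday, when reviewers plan their day.
    at_workday_start = WORKDAY_START_HOUR <= now.hour < WORKDAY_START_HOUR + 2
    return waited_hours >= threshold and at_workday_start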