
Understanding why we cannot model how long a code review will take: an industrial case study

Published: 09 November 2022

ABSTRACT

Code review is an effective practice for finding defects, but because it is manually intensive it can slow down the continuous integration of changes. Our goal was to understand the factors that influence the time a change, i.e., a diff at Meta, spends in review. A developer survey showed that diff reviews start to feel slow after they have been waiting for review for around 24 hours. We built a review-time prediction model to identify the factors that may cause reviews to take longer, which we could use to predict the best time to nudge reviewers or to identify diff-related factors that we may need to address.

The strongest feature in the time-in-review model we built was the day of the week, because diffs submitted near the weekend may have to wait until Monday for review. After removing time spent on weekends, the remaining features, including the size of the diff and the number of meetings on the reviewers' calendars, did not provide substantial predictive power, meaning we could not accurately predict how long a code review would take.
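
As a rough illustration of the kind of time-in-review model described above, the sketch below fits a standard gradient-boosted regressor on synthetic data with hypothetical features (day of week, diff size, reviewer meeting count). The data, feature names, and choice of model are assumptions for illustration only, not the paper's actual pipeline.

```python
# Illustrative sketch only: synthetic data, feature names, and model choice
# are assumptions, not Meta's actual review-time pipeline or dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features mirroring those named in the abstract: day of week the
# diff was published, diff size (lines changed), and reviewer meeting count.
day_of_week = rng.integers(0, 7, n)                     # 0 = Monday ... 6 = Sunday
diff_size = rng.lognormal(mean=4.0, sigma=1.0, size=n)  # lines changed
reviewer_meetings = rng.poisson(lam=3.0, size=n)        # meetings on reviewers' calendars

# Synthetic target: hours in review, dominated by a weekend effect for diffs
# published Friday-Sunday, plus large unexplained variance.
weekend_penalty = np.where(day_of_week >= 4, 48.0, 0.0)
time_in_review = (
    weekend_penalty
    + 0.01 * diff_size
    + 0.5 * reviewer_meetings
    + rng.exponential(scale=20.0, size=n)
)

X = np.column_stack([day_of_week, diff_size, reviewer_meetings])
X_train, X_test, y_train, y_test = train_test_split(X, time_in_review, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out diffs:", r2_score(y_test, model.predict(X_test)))
print("Feature importances [day_of_week, diff_size, reviewer_meetings]:",
      model.feature_importances_)
```

In this toy setup the day-of-week feature dominates the importances while the unexplained variance keeps the held-out fit modest, loosely mirroring the finding that, once weekend time is accounted for, the remaining features add little.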

We contributed to the effort to reduce stale diffs by suggesting that diffs be nudged near the start of the workday, and that diffs published near the weekend be nudged sooner on Friday to avoid waiting over the entire weekend. We use a nudging threshold rather than a model because we showed that TimeInReview cannot be accurately modelled. The NudgeBot has been rolled out to over 30k developers at Meta.
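
A minimal sketch of a threshold-based nudging rule along these lines is shown below. The 24-hour figure comes from the survey result above, but the business-hours accounting, the Friday cut-off, and all function names are hypothetical illustrations, not NudgeBot's actual logic.

```python
# Hypothetical illustration of a threshold-based nudging rule; the threshold
# handling, Friday cut-off, and function names below are assumptions, not
# NudgeBot's actual implementation.
from datetime import datetime, timedelta

NUDGE_THRESHOLD = timedelta(hours=24)   # reviews start to feel slow after ~24h
FRIDAY = 4                              # datetime.weekday(): Monday == 0

def business_time_waiting(published: datetime, now: datetime) -> timedelta:
    """Time waited for review, counting only weekday hours (weekends excluded)."""
    waited = timedelta()
    cursor = published
    while cursor < now:
        step = min(cursor + timedelta(hours=1), now)
        if cursor.weekday() < 5:        # Monday-Friday only
            waited += step - cursor
        cursor = step
    return waited

def should_nudge(published: datetime, now: datetime) -> bool:
    # Nudge sooner on Friday afternoons so a stale diff does not sit over the weekend.
    if now.weekday() == FRIDAY and now.hour >= 12:
        return business_time_waiting(published, now) >= NUDGE_THRESHOLD / 2
    return business_time_waiting(published, now) >= NUDGE_THRESHOLD

# Example: a diff published Thursday afternoon, checked Friday afternoon.
print(should_nudge(datetime(2022, 11, 3, 15), datetime(2022, 11, 4, 14)))
```

A fixed threshold keeps the nudging behaviour simple and predictable, which fits the paper's conclusion that TimeInReview cannot be accurately modelled.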


Published in

ESEC/FSE 2022: Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
November 2022, 1822 pages
ISBN: 9781450394130
DOI: 10.1145/3540250
Copyright © 2022 ACM


              Publisher

              Association for Computing Machinery

              New York, NY, United States



Acceptance Rates

Overall Acceptance Rate: 112 of 543 submissions, 21%
