DOI: 10.1145/3131151.3131161

Investigating the Effectiveness of Peer Code Review in Distributed Software Development

Published: 20 September 2017

ABSTRACT

Code review is a potential means of improving software quality. Its effectiveness depends on several factors, many of which have been investigated in the literature to identify the scenarios in which review adds quality to the final code. However, factors associated with distributed software development, which is becoming increasingly common, have been little explored, even though geographic distance can impose additional challenges on the reviewing process. In this paper, we therefore present the results of a quantitative study of the effectiveness of code review in a distributed software project involving 201 members, investigating factors that can potentially influence the outcomes of peer code review. Our results show that a high number of changed lines of code tends to increase review duration while reducing the number of messages, whereas the number of involved teams, locations, and participating reviewers generally improves reviewer contributions, but at a severe cost to duration.
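The outcome measures mentioned above (review duration, number of messages, and number of participating reviewers) can be derived from exported review records. The following is a minimal sketch of such a derivation, assuming a hypothetical Gerrit-style export; the record layout and the field names created, updated, messages, and reviewer are illustrative assumptions, not the instrumentation used in the study.

from datetime import datetime
from statistics import median

# Timestamp format assumed for the hypothetical export.
TS = "%Y-%m-%d %H:%M:%S"

def review_metrics(change):
    """Return (duration in hours, message count, distinct reviewers) for one review."""
    created = datetime.strptime(change["created"], TS)
    closed = datetime.strptime(change["updated"], TS)
    duration_h = (closed - created).total_seconds() / 3600.0
    messages = change.get("messages", [])
    reviewers = {m["reviewer"] for m in messages}
    return duration_h, len(messages), len(reviewers)

# Usage with two made-up review records.
changes = [
    {"created": "2017-01-10 09:00:00", "updated": "2017-01-12 17:30:00",
     "messages": [{"reviewer": "alice"}, {"reviewer": "bob"}]},
    {"created": "2017-02-01 08:00:00", "updated": "2017-02-01 10:00:00",
     "messages": [{"reviewer": "alice"}]},
]
print("median review duration (h):",
      median(review_metrics(c)[0] for c in changes))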


Published in

SBES '17: Proceedings of the XXXI Brazilian Symposium on Software Engineering
September 2017, 409 pages
ISBN: 9781450353267
DOI: 10.1145/3131151

Copyright © 2017 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers

• research-article
• Research
• Refereed limited

Acceptance Rates

SBES '17 Paper Acceptance Rate: 42 of 134 submissions, 31%
Overall Acceptance Rate: 147 of 427 submissions, 34%
