Information Needs in Contemporary Code Review

Published: 01 November 2018

Abstract

Contemporary code review is a widespread practice that software engineers use to maintain high software quality and share project knowledge. However, conducting a proper code review takes time, and developers often have limited time for review. In this paper, we investigate the information that reviewers need to conduct a proper code review, to better understand this process and how research and tool support can help developers become more effective and efficient reviewers. Previous work has provided evidence that a successful code review process is one in which reviewers and authors actively participate and collaborate. In these cases, the discussion threads saved by code review tools are a precious source of information that can later be exploited for research and practice. In this paper, we focus on this source of information as a way to gather reliable data on the aforementioned reviewers' needs. We manually analyze 900 code review comments from three large open-source projects and organize them into categories by means of a card sort. Our results highlight seven high-level information needs, such as knowing the uses of methods and variables declared or modified in the code under review. Based on these results, we suggest ways in which future code review tools can better support collaboration and the reviewing task.
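One of the information needs named above, knowing the uses of methods and variables touched by a change, could be served by tooling that surfaces call sites alongside the diff. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' tool): it does a plain lexical search for a symbol across a repository's Python files; the function name and search strategy are assumptions for illustration, and a real review tool would resolve names syntactically rather than by regular expression.

```python
import re
from pathlib import Path

def find_uses(symbol: str, root: str, pattern: str = "*.py") -> list[tuple[str, int, str]]:
    """Return (file, line number, line text) for each line referencing `symbol`.

    Lightweight lexical search meant to show the shape of the feature;
    it reports both the declaration and its uses.
    """
    word = re.compile(rf"\b{re.escape(symbol)}\b")
    hits = []
    for path in Path(root).rglob(pattern):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if word.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

A reviewer-facing tool could run such a lookup for every identifier declared or modified in the changeset and attach the results to the review, sparing reviewers a manual hunt through the codebase.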



Published in
Proceedings of the ACM on Human-Computer Interaction, Volume 2, Issue CSCW (November 2018), 4104 pages.
EISSN: 2573-0142
DOI: 10.1145/3290265
Copyright © 2018 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
