DOI: 10.1145/1531674.1531692
Research article

Two peers are better than one: aggregating peer reviews for computing assignments is surprisingly accurate

Published: 10 May 2009

ABSTRACT

Scientific peer review, open source software development, wikis, and other domains use distributed review to improve the quality of created content by providing feedback to the work's creator. Distributed review is typically used to assess or improve the quality of a work (e.g., an article), but it can also provide learning benefits to the participants in the review process. We developed an online review system for beginning computer programming students; it gathers multiple anonymous peer reviews to give students feedback on their programming work. We deployed the system in an introductory programming class and evaluated it in a controlled study. We find that peer reviews are accurate compared to an accepted evaluation standard, that students prefer reviews from other students with less experience than themselves, and that participating in a peer review process results in better learning outcomes.
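The abstract does not spell out how the multiple reviews are combined. As a minimal, purely illustrative sketch (the function name, rubric items, and the choice of the median are assumptions, not the authors' method), aggregating several anonymous peer reviews of one submission might look like the following in Python:

  # Hypothetical sketch, not the system described in the paper: one simple way
  # to aggregate several anonymous peer reviews of a single submission is to
  # take a robust summary (here, the median) of each rubric item's scores.
  from statistics import median
  from typing import Dict, List

  Review = Dict[str, int]  # rubric item -> score, e.g. {"correctness": 4}

  def aggregate_reviews(reviews: List[Review]) -> Dict[str, float]:
      """Combine peer reviews by taking the median score per rubric item."""
      items = {item for review in reviews for item in review}
      return {item: median(r[item] for r in reviews if item in r) for item in items}

  # Example: two peer reviews of the same programming assignment.
  peer_reviews = [{"correctness": 4, "style": 3}, {"correctness": 5, "style": 3}]
  print(aggregate_reviews(peer_reviews))  # e.g. {'correctness': 4.5, 'style': 3}

Taking a median rather than trusting a single reviewer is one plausible reading of the title's claim that two peers are better than one; the full paper should be consulted for the actual aggregation scheme and rubric used.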


Published in

      GROUP '09: Proceedings of the 2009 ACM International Conference on Supporting Group Work
      May 2009
      412 pages
ISBN: 9781605585000
DOI: 10.1145/1531674

      Copyright © 2009 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 10 May 2009


      Qualifiers

      • research-article

      Acceptance Rates

GROUP '09 paper acceptance rate: 40 of 110 submissions, 36%. Overall acceptance rate: 125 of 405 submissions, 31%.
