DOI: 10.1145/3197091.3197117

Research article

Crowdsourcing programming assignments with CrowdSorcerer

Published: 2 July 2018

ABSTRACT

Small, automatically assessed programming assignments are a frequently used resource for learning programming. Creating a sufficiently large number of such assignments is, however, time-consuming, and as a consequence it is not always possible to offer students large quantities of practice assignments. CrowdSorcerer is an embeddable open-source system that students and teachers alike can use for creating and evaluating small, automatically assessed programming assignments. While creating an assignment, students also write simple input-output tests and are thus gently introduced to the basics of testing. Students can also evaluate the assignments of others and provide feedback on them, which exposes them to code written by others early in their education. In this article we describe both the CrowdSorcerer system and our experiences of using it in a large undergraduate programming course. Moreover, we discuss the motivation for crowdsourcing course assignments and present some usage statistics.
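The abstract does not specify the exact format of the tests that students author, so the following is only a minimal, hypothetical sketch of what a simple input-output test could look like; it is not CrowdSorcerer's actual test format or API. The sketch assumes the assignment is a small Java program that reads from standard input and writes to standard output, and the names IoTestSketch and studentProgram are invented for illustration.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.PrintStream;
import java.util.Scanner;

// Hypothetical illustration of a simple input-output test: run a small
// student program against a fixed input and compare the captured output
// with the output the assignment author expects.
public class IoTestSketch {

    // Stand-in for a student-submitted solution: prints the sum of two integers.
    static void studentProgram() {
        Scanner scanner = new Scanner(System.in);
        int a = scanner.nextInt();
        int b = scanner.nextInt();
        System.out.println(a + b);
    }

    public static void main(String[] args) {
        String input = "2 3";    // input defined by the assignment author
        String expected = "5";   // output a correct solution should print

        // Redirect standard input and capture standard output.
        System.setIn(new ByteArrayInputStream(input.getBytes()));
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        System.setOut(new PrintStream(captured));

        studentProgram();

        // Restore standard output and report the result.
        System.setOut(new PrintStream(new FileOutputStream(FileDescriptor.out)));
        String actual = captured.toString().trim();
        System.out.println(actual.equals(expected)
                ? "PASS"
                : "FAIL: expected \"" + expected + "\" but got \"" + actual + "\"");
    }
}

In a setup like this, each input and expected-output pair written by the assignment author becomes one test case that an automatic assessor can run against submitted solutions.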


Published in

ITiCSE 2018: Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education
July 2018, 394 pages
ISBN: 9781450357074
DOI: 10.1145/3197091
          Copyright © 2018 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States
