ABSTRACT
Small automatically assessed programming assignments are a commonly used resource for learning programming. Creating a sufficiently large pool of such assignments is, however, time-consuming, and as a consequence, offering large quantities of practice assignments to students is not always possible. CrowdSorcerer is an embeddable open-source system that students and teachers alike can use for creating and evaluating small automatically assessed programming assignments. While creating programming assignments, the students also write simple input-output tests, and are thereby gently introduced to the basics of testing. Students can also evaluate the assignments of others and provide feedback on them, which exposes them to code written by others early in their education. In this article we describe the CrowdSorcerer system and our experiences in using it in a large undergraduate programming course. Moreover, we discuss the motivation for crowdsourcing course assignments and present some usage statistics.
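To give a flavour of the input-output tests mentioned above: a minimal sketch in Python, assuming a test is simply a pair of (stdin text, expected stdout text) checked against a solution. The function names, the "double the number" assignment, and the test format here are illustrative assumptions, not CrowdSorcerer's actual format.

```python
def run_io_test(solution, given_input, expected_output):
    """Check a student-authored input-output test: the solution, fed
    `given_input`, must produce `expected_output` (whitespace-trimmed)."""
    return solution(given_input).strip() == expected_output.strip()

def double_number(stdin_text):
    """A hypothetical model solution for a 'double the number' assignment."""
    return str(int(stdin_text) * 2)

# A student's test case for the assignment: input "21" should yield "42".
assert run_io_test(double_number, "21\n", "42")
```

In this spirit, a student creating an assignment supplies the description, a model solution, and a handful of such input-output pairs that the grader then runs against submissions.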