DOI: 10.1145/3437914.3437973
short-paper
Open Access

Analysis of an automatic grading system within first year Computer Science programming modules

Published: 07 January 2021

ABSTRACT

Reliable and pedagogically sound automated feedback and grading systems are highly coveted by educators. Automatic grading systems help ensure that student submissions to assignments are graded equitably and that feedback on the work is timely. Many of these systems check submissions by running them against test cases and comparing the outputs they produce, while others check submissions with unit tests.
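
As a rough illustration of these two common strategies (not the authors' implementation; the file name submission.py, the add() function, and the expected values are hypothetical), an output-based check runs the submission and compares its stdout against an expected string, whereas a unit-test style check imports the submission and asserts on its functions directly:

```python
import importlib
import subprocess

# Hypothetical output-based check: run the submission with fixed input and
# compare its stdout against the expected output for this test case.
def output_check(submission="submission.py", stdin_data="3 4\n", expected="7\n"):
    result = subprocess.run(
        ["python3", submission],
        input=stdin_data,
        capture_output=True,
        text=True,
        timeout=5,
    )
    return result.stdout == expected

# Hypothetical unit-test style check: import the submission as a module and
# assert directly on one of its functions (assumes submission.py defines add()).
def unit_check():
    module = importlib.import_module("submission")
    return module.add(3, 4) == 7
```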

The approach presented in this paper checks submissions against test cases but also analyses what the students actually wrote in their code. Assignment questions are constructed around the concepts that students are currently learning in lectures, and the patterns searched for in their submissions are based on these concepts. In this paper we show how to implement this approach effectively. We analyse the use of an automatic grading system within first year Computer Science programming modules and show that the system is straightforward to use and well suited to novice programmers, while providing automatic grading and feedback.
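
The paper does not give implementation details here, but a minimal sketch of the idea, assuming Python submissions, a hypothetical (stdin, expected_stdout) test-case format, and regular-expression concept patterns chosen by the lecturer, could combine the two checks like this:

```python
import re
import subprocess

# Hypothetical mapping from taught concepts to source-code patterns. If the
# week's lectures cover loops, the grader can require a loop construct in the
# submission rather than accepting any code that merely prints the right output.
CONCEPT_PATTERNS = {
    "for loop": re.compile(r"\bfor\b.+\bin\b"),
    "while loop": re.compile(r"\bwhile\b"),
}

def grade(submission_path, test_cases, required_concepts):
    """Combine output-based test cases with a search for taught concepts.

    test_cases: list of (stdin, expected_stdout) pairs (hypothetical format).
    required_concepts: keys of CONCEPT_PATTERNS that must appear in the code.
    """
    with open(submission_path) as f:
        source = f.read()

    # Run each test case and count how many produce the expected output.
    passed = 0
    for stdin_data, expected in test_cases:
        result = subprocess.run(
            ["python3", submission_path],
            input=stdin_data, capture_output=True, text=True, timeout=5,
        )
        if result.stdout == expected:
            passed += 1

    # Check that the submission actually uses the concepts being taught.
    missing = [c for c in required_concepts
               if not CONCEPT_PATTERNS[c].search(source)]
    feedback = [f"Expected use of a {c} based on this week's lectures"
                for c in missing]

    return passed / len(test_cases), feedback
```

A production grader would also need sandboxing, per-test timeouts, and more robust concept detection (for example AST analysis rather than regular expressions), but the sketch captures the idea of grading both the behaviour of the code and the constructs used to achieve it.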

Evaluation received from students, demonstrators and lecturers shows the system to be highly beneficial. In particular, it frees demonstrators to spend more time assisting students during labs, and it allows lecturers to provide instant feedback to students while tracking their progress and identifying gaps in their knowledge.


Published in

CEP '21: Proceedings of the 5th Conference on Computing Education Practice
January 2021, 39 pages
ISBN: 9781450389594
DOI: 10.1145/3437914

Copyright © 2021 Owner/Author. This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher: Association for Computing Machinery, New York, NY, United States


        Qualifiers

        • short-paper
        • Research
        • Refereed limited

        Acceptance Rates

Overall Acceptance Rate: 32 of 71 submissions, 45%
