Automatic test-based assessment of programming: A review

Published: 01 September 2005

Abstract

Systems that automatically assess student programming assignments have been designed and used for over forty years. Systems that objectively test and mark student programming work were developed almost as soon as programming assessment entered the computer science curriculum. This article reviews a number of influential automatic assessment systems, including descriptions of the earliest systems, and presents some of the most recent developments. The final sections explore directions that automated assessment systems may take, presenting current developments alongside important emerging e-learning specifications.

