Research article
DOI: 10.1145/2016911.2016920

Exploring programming assessment instruments: a classification scheme for examination questions

Published: 8 August 2011

ABSTRACT

This paper presents a classification scheme for investigating the characteristics of introductory programming examinations. We describe the process of developing the scheme, explain its categories, and present a sample of results from a pilot analysis of a set of CS1 exam papers. This study is part of a project investigating the nature and composition of formal examination instruments used in the summative assessment of introductory programming students, and the pedagogical intentions of the educators who construct them.


Published in

ICER '11: Proceedings of the Seventh International Workshop on Computing Education Research
August 2011, 156 pages
ISBN: 9781450308298
DOI: 10.1145/2016911
General Chair: Kate Sanders
Program Chairs: Michael E. Caspersen, Alison Clear

            Copyright © 2011 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 189 of 803 submissions, 24%
