DOI: 10.1145/2818314.2818320

Design and First Results of a Psychometric Test for Measuring Basic Programming Abilities

Published: 9 November 2015

ABSTRACT

We present the design of a test for measuring students' abilities in applying control structures. Validated test instruments are a valuable tool for evaluating teaching, both in research settings and in the classroom. Our test is based on item response theory (IRT), in particular the Rasch model, and comprises a set of items that all follow the same format and use a simple, artificial programming language. We field-tested and modified the instrument in four iterations, using only small samples together with special statistical methods instead of the large samples usually required for IRT models. After the fourth iteration, the test has reached a usable state. Based on the results, we were able to identify two misconceptions that occur very frequently in our test population: students in grades 7 to 10 at secondary schools.
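The Rasch model mentioned in the abstract relates a person's ability and an item's difficulty through a single logistic function: the probability of a correct answer depends only on the difference between the two parameters. As a minimal illustration (the function name and example values are ours, not from the paper), the model can be sketched as:

```python
import math

def rasch_probability(theta: float, beta: float) -> float:
    """Probability that a person with ability theta solves an item
    of difficulty beta correctly, under the Rasch (1PL) model:
    P(correct) = 1 / (1 + exp(-(theta - beta)))."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

# An able student facing an easy item succeeds with high probability:
print(round(rasch_probability(1.0, -1.0), 3))  # → 0.881

# When ability equals difficulty, the probability is exactly 0.5:
print(rasch_probability(0.0, 0.0))  # → 0.5
```

A useful consequence of this one-parameter form is that persons and items can be placed on the same latent scale, which is what allows a calibrated instrument to compare students across classrooms.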


Published in

WiPSCE '15: Proceedings of the Workshop in Primary and Secondary Computing Education
November 2015, 149 pages
ISBN: 9781450337533
DOI: 10.1145/2818314

          Copyright © 2015 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Qualifiers

          • research-article
          • Research
          • Refereed limited

          Acceptance Rates

Overall Acceptance Rate: 104 of 279 submissions, 37%
