DOI: 10.1145/3587102.3588834
Research article

Pseudocode vs. Compile-and-Run Prompts: Comparing Measures of Student Programming Ability in CS1 and CS2

Published: 30 June 2023

ABSTRACT

In college-level introductory computer science courses, students' programming ability is often evaluated through pseudocode responses to prompts. However, this does not necessarily reflect modern programming practice in industry and academia, where developers have access to compilers and can test snippets of code on the fly. As a result, pseudocode prompts may not capture the full range of student capabilities because they lack the support tools usually available when writing programs. An assessment environment in which students can write, compile, and run code could provide a more comfortable and familiar experience that more accurately captures their abilities. Prior work has found that student performance improves when digital assessments replace paper-based assessments for pseudocode prompts, but little work has examined the difference between digital pseudocode and compile-and-run assessment prompts. To investigate the impact of the assessment approach on student experience and performance, we conducted a study at a public university across two introductory programming classes (N=226). We found that students both preferred and performed better on typical programming assessment questions when they used a compile-and-run environment rather than a pseudocode environment. Our work suggests that compile-and-run assessments provide a more nuanced evaluation of student ability by more closely reflecting the environments in which programming is practiced, and it supports further work exploring how programming assessments are administered.
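
To make the contrast concrete, the sketch below shows a hypothetical CS1-style assessment item of the kind a compile-and-run environment supports: the student writes a small function and can immediately run quick checks before submitting, a feedback loop that a pseudocode prompt does not offer. Python and the specific task are illustrative assumptions; the abstract does not specify the language or prompts used in the study.

```python
# Hypothetical compile-and-run assessment item (illustrative only; the actual
# prompts and language used in the study are not given in the abstract).
def count_evens(values):
    """Return how many numbers in values are even."""
    count = 0
    for v in values:
        if v % 2 == 0:
            count += 1
    return count

if __name__ == "__main__":
    # Quick on-the-fly checks a student could run before submitting --
    # exactly the kind of feedback a pseudocode prompt does not provide.
    print(count_evens([1, 2, 3, 4]))  # expected: 2
    print(count_evens([]))            # expected: 0
    print(count_evens([7, 9, 11]))    # expected: 0
```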


Published in
ITiCSE 2023: Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1
June 2023, 694 pages
ISBN: 9798400701382
DOI: 10.1145/3587102
Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
