
2014 | Book

Computer Assisted Assessment. Research into E-Assessment

International Conference, CAA 2014, Zeist, The Netherlands, June 30 – July 1, 2014. Proceedings


About this book

This book constitutes the refereed proceedings of the International Conference on Computer Assisted Assessment, CAA 2014, held in Zeist, The Netherlands, in June/July 2014. The 16 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers address issues such as large-scale testing facilities in higher education; formative assessment for 21st century skills; future trends for technology-enhanced assessment; latest advancements of technologies; practical experiences.

Table of Contents

Frontmatter
Gauging Teachers’ Needs with Regard to Technology-Enhanced Formative Assessment (TEFA) of 21st Century Skills in the Classroom
Abstract
Several trends in society have led to calls for schools to integrate 21st Century Skills and technology-enhanced formative assessment (TEFA) into their curricula. Although frameworks have been defined at an international level, implementation of technology-enhanced formative assessment of 21st Century Skills at school level remains rare. This paper explores the underlying reasons for this hampered implementation by consulting and collaborating with teachers. It provides an overview of these reasons and proposes a collaborative professionalization approach to overcome the detected implementation barriers and challenges.
Ellen Rusman, Alejandra Martínez-Monés, Jo Boon, María Jesús Rodríguez-Triana, Sara Villagrá-Sobrino
Non Satis Scire: To Know Is Not Enough
e-Assessment of Student-Teachers’ Competence as New Teachers
Abstract
In teacher education programmes, text-based portfolios are generally used to assess student-teachers’ competence as new teachers. However, striking discrepancies are known to exist between the competencies reflected in a written portfolio and the competencies observed in actual classroom practice. Multiple assessments should therefore be used to provide a more valid assessment of student-teachers’ competence as new teachers, and technology can support such multiple and flexible forms of assessment. In a Research & Development project, four types of e-assessment were designed, implemented and evaluated in 27 interventions in 13 post-graduate teacher education programmes in the Netherlands. Teacher educators reported positive outcomes of the interventions in terms of new procedures, materials and tools. However, the implementation of the four types of e-assessment had no significant effects on the evaluations by either teacher educators or student-teachers. A possible explanation for this absence of effects might be teething problems in the interventions as implemented.
Wilfried Admiraal, Tanja Janssen, Jantina Huizenga, Frans Kranenburg, Ruurd Taconis, Alessandra Corda
The Emergence of Large-Scale Computer Assisted Summative Examination Facilities in Higher Education
Abstract
A case study is presented of VU University Amsterdam, where a dedicated large-scale CAA examination facility was established in which 385 students can take an exam concurrently. The case study describes the change factors and processes leading up to the institution’s decision to establish the facility, the start-up of the facility, the foreseen optimization of its use, threats to its sustainability and possible future developments. Comparisons are made with large-scale CAA practice at the University of Southampton in the UK. The conclusions are that specific coincidental circumstances may be needed to support the decision by senior management to establish such a facility. Long-term sustainability of the dedicated facility is expected to depend on the payment structure, the scheduling possibilities and the educational and assessment benefits that can be achieved. Hybrid models combining dedicated facilities and regular computer rooms for CAA seem likely to be adopted, thus balancing costs and benefits. The case shows that sustained effort in building up expertise and momentum is needed to create viable and sustainable CAA exam facilities.
Silvester Draaijer, Bill Warburton
Functional, Frustrating and Full of Potential: Learners’ Experiences of a Prototype for Automated Essay Feedback
Abstract
OpenEssayist is an automated feedback system designed to support university students as they write essays for assessment. A first generation prototype of this system was tested on a cohort of postgraduate distance learners at the UK Open University from September to December 2013. A case study approach was used to examine three participants’ experiences of the prototype. Findings from the case studies offered insight into how different users may perceive the usefulness, future potential and end-user of such a tool. This study has important implications for the next phase of development, when the role of OpenEssayist in supporting students’ learning will need to be more clearly understood.
Bethany Alden Rivers, Denise Whitelock, John T. E. Richardson, Debora Field, Stephen Pulman
Implementation of an Adaptive Training and Tracking Game in Statistics Teaching
Abstract
Statistics teaching in higher education faces a number of challenges. An adaptive training, tracking and teaching tool in a gaming environment aims to address problems inherent in statistics teaching. This paper discusses the implementation of this tool in a large first-year university programme and considers its uses and effects. It finds that such a tool leads students to practise statistics problems frequently and that the success rate of the statistics course may increase.
Caspar M. Groeneveld
Assessment of Collaborative Problem Solving Using Linear Equations on a Tangible Tabletop
Abstract
Using Tangible User Interfaces (TUIs) for assessing collaborative problem solving has only been marginally investigated in technology-based assessment. Our first empirical studies focused on light-weight performance measurements, usability, user experience, and gesture analysis to increase our understanding of how people interact with TUIs in an assessment context. In this paper we propose a new approach for assessing individual skills in collaborative problem solving by using the MicroDYN methodology with TUIs. These so-called MicroDYN items are of high quality and are designed to assess individual problem-solving skills; the items are based on linear structural equations. We describe how this approach, building on the knowledge gained in the previous studies, was applied to create an assessment item for a collaborative setting with children that implements a simplified model of climate change. Finally, we propose a series of research questions as well as a future empirical study.
Valérie Maquil, Eric Tobias, Samuel Greiff, Eric Ras
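The linear structural equations underlying MicroDYN-style items can be illustrated with a minimal simulation. The matrices and variable roles below are invented for illustration and are not taken from the paper; they only show the general form such items take.

```python
import numpy as np

# Hypothetical MicroDYN-style item: the test-taker manipulates input
# variables X, and hidden output variables Y evolve by linear structural
# equations Y(t+1) = A @ Y(t) + B @ X(t). Matrices are illustrative only.
A = np.array([[1.0, 0.0],   # each output carries over its own value
              [0.0, 1.0]])
B = np.array([[0.5, 0.0],   # input 0 affects output 0
              [0.2, 0.3]])  # both inputs affect output 1

def step(y, x):
    """One simulation tick: Y(t+1) = A @ Y(t) + B @ X(t)."""
    return A @ y + B @ x

# Exploration phase: set input 0 to 1.0 and observe how the outputs move.
y = np.zeros(2)
y = step(y, np.array([1.0, 0.0]))
```

Scoring then typically looks at how systematically the test-taker's interventions reveal, and later control, the hidden structure of A and B.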
Computer Assisted, Formative Assessment and Dispositional Learning Analytics in Learning Mathematics and Statistics
Abstract
Learning analytics seeks to enhance the learning process through systematic measurements of learning-related data and to provide informative feedback to learners and teachers, so as to support the regulation of learning. Track data from technology-enhanced learning systems constitute the main data source for learning analytics. This empirical contribution provides an application of Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics [1]: an infrastructure that combines learning dispositions data with data extracted from computer-assisted, formative assessments. In a large introductory quantitative-methods module based on the principles of blended learning, combining face-to-face problem-based learning sessions with e-tutorials, we investigate the predictive power of learning dispositions, outcomes of continuous formative assessments and other system-generated data in modelling student performance, and their potential to generate informative feedback. Using a dynamic, longitudinal perspective, computer-assisted formative assessments seem to be the best predictor of academic performance and for detecting underperforming students, while basic LMS data did not substantially predict learning.
Dirk T. Tempelaar, Bart Rienties, Bas Giesbers
Learning Analytics: From Theory to Practice – Data Support for Learning and Teaching
Abstract
Much has been written lately about the potential of Learning Analytics for improving learning and teaching. Nevertheless, most contributions to date concentrate on the abstract theoretical or algorithmic level, or deal with academic efficiencies such as teachers’ grading habits. This paper focuses on the value that Learning Analytics brings to pedagogic interventions and feedback for reflection. We first analyse what Learning Analytics has to offer in this respect, and then present a practical use case of applied Learning Analytics for didactic support in primary school arithmetic.
Wolfgang Greller, Martin Ebner, Martin Schön
Using Confidence as Feedback in Multi-sized Learning Environments
Abstract
This paper describes the use of existing confidence and performance data to provide feedback, first demonstrating the data’s fit to a simple linear model. The paper continues by showing how the model’s use as a benchmark provides feedback that allows current or future students to infer either the difficulty of a specific question or the degree of under- or over-confidence associated with it. Next, the paper introduces Confidence/Performance Indicators as graphical representations of this feedback and concludes with an evaluation of a trial use in an online setting. Findings support the efficacy of using the Indicators to provide feedback that encourages students in multi-sized learning environments to reflect upon and rethink their choices, with future work focusing on the effectiveness of Indicator use on performance.
Thomas L. Hench
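The benchmark idea can be sketched as follows. The data points and the residual threshold are invented for illustration and are not taken from the paper; the sketch only shows how residuals from a fitted linear model could flag under- or over-confidence on a question.

```python
import numpy as np

# Illustrative sketch: fit a simple linear model of per-question
# performance vs. mean confidence, then use residuals from that
# benchmark line to characterize each question.
confidence = np.array([0.55, 0.60, 0.70, 0.80, 0.90])   # mean confidence per question
performance = np.array([0.50, 0.62, 0.68, 0.79, 0.88])  # proportion answered correctly

slope, intercept = np.polyfit(confidence, performance, 1)
residuals = performance - (slope * confidence + intercept)

# Positive residual: performance above the benchmark line (under-confidence);
# negative residual: performance below it (over-confidence).
flags = ["under-confident" if r > 0.02
         else "over-confident" if r < -0.02
         else "calibrated"
         for r in residuals]
```

A graphical Indicator would then display each question's position relative to the benchmark line rather than the raw flag.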
A Review of Static Analysis Approaches for Programming Exercises
Abstract
Static source code analysis is a common feature in automated grading and tutoring systems for programming exercises. Different approaches and tools are used in this area, each with individual benefits and drawbacks that directly influence the quality of assessment feedback. In this paper, different principal approaches and tools for static analysis are presented, evaluated and compared regarding their usefulness in learning scenarios. The goal is to draw a connection between the technical outcomes of source code analysis and the didactical benefits that can be gained from it for programming education and feedback generation.
Michael Striewe, Michael Goedicke
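As a minimal illustration of the kind of static check such systems apply, the sketch below walks a submission's syntax tree and turns a rule violation into feedback. It uses Python's standard ast module and is not one of the tools reviewed in the paper.

```python
import ast

# Hypothetical student submission, held as a string as a grading
# system would receive it.
SUBMISSION = """
def mean(xs):
    return sum(xs) / len(xs)
"""

def check_docstrings(source):
    """Flag every function definition that lacks a docstring."""
    tree = ast.parse(source)
    issues = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
            issues.append(f"line {node.lineno}: function '{node.name}' has no docstring")
    return issues

feedback = check_docstrings(SUBMISSION)
```

Because the check operates on the syntax tree rather than on program output, it can generate feedback even for submissions that do not run, which is one didactical argument for static analysis made in this area.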
High Speed High Stakes Scoring Rule
Assessing the Performance of a New Scoring Rule for Digital Assessment
Abstract
In this paper we present the results of a three-year subsidized research project investigating the performance of a new scoring rule for digital assessment. The scoring rule incorporates response time and accuracy in an adaptive environment. The project aimed to assess the validity and reliability of the ability estimations generated with the new scoring rule, as well as whether the scoring rule was vulnerable to individual differences. Results show strong validity and reliability in several studies within different domains, e.g. math, statistics and chess. We found no individual differences in the performance of the HSHS scoring rule for risk-taking behaviour and performance anxiety, nor did we find any performance differences for gender.
Sharon Klinkenberg
Do Work Placement Tests Challenge Student Trainees to Learn?
Abstract
The study described in this article shows that embedding formative work placement tests in the student’s learning process facilitates the student’s development while on the work placement. The study measured the development of the student’s learning process by determining the extent to which students gained an understanding of their current and desired levels of knowledge, felt challenged to learn, and more deeply explored the specialism of their work placement department. The exchange of knowledge between the student trainee and the work supervisor was also measured. The E-Flow Nursing project was used as a case study. In this project, it was agreed that students were to include their test results in their personal activity plans, in line with recommendations from previous research into formative testing, which had revealed that formative testing can lead to positive developments in the learning process provided that it is embedded in that process.
Jelly Zuidersma, Elvira Coffetti
Digital Script Concordance Test for Clinical Reasoning
The Development of a Dutch Digital Script Concordance Test for Clinical Reasoning for Nursing Specialists
Abstract
The Master of Advanced Nursing Practice (MANP) programme in the Netherlands is the professional training for the nursing specialist. The field of MANP is in flux; taking independent medical action and prescribing medication are among the principal aspects of this. Consequently, clinical reasoning is an important part of the curriculum and makes great demands on the level of medical and nursing knowledge. At the moment the clinical reasoning capabilities of students are tested by means of two methods (assessment and case-history papers) that are frequently very labour-intensive for the teachers with regard to both developing questions and evaluation. The case-history papers also have low inter-assessor validity, which is undesirable, and this method of examination does not lend itself to digital testing. In addition, the field of work is involved in neither the development of the questions nor their validation.
Three Universities of Applied Sciences (Rotterdam, Fontys, Zuyd), along with the Learning Station Care Foundation, initiated this project. The question was whether a suitable question type exists with which to digitally assess the clinical reasoning capabilities of the trainee nursing specialist. The aim was to give the teacher and the trainee nursing specialist greater possibilities to support learning and to establish and pursue the desired level of knowledge. Based on a literature study it was jointly decided that the Script Concordance Test (SCT) question type could be used for this. The SCT originates in the English-language literature and has been in use for 15-20 years. The starting point is the generic knowledge (medical and nursing) of the MANP trainee that is necessary for clinical reasoning. As the MANP programme is practice-oriented, it has the added value that, in constructing SCT questions, experts (medical and nursing specialists) working in the field play an essential role in validating the questions. Accordingly, this project investigates whether there are digital test systems that can support this process and improve the quality of the tests.
In this project the SCT question type was digitized, and digital tests were developed for the complex practice of clinical reasoning in the MANP programme. The SCT question type was included in the system of the Learning Station Care Foundation especially for this project. The conclusion is that digital training and testing with the SCT type offers new possibilities for education and retraining. It must be noted that constructing this question type is labour-intensive and recruiting experts for the validation process is time-consuming. An expected result of the project is that the question type supports the learning process of clinical reasoning; teachers are enthusiastic about the various possibilities. The SCT question type can make an important contribution to the development and maintenance of clinical reasoning skills in (trainee) nursing specialists.
Christof Peeters, Wil de Groot-Bolluijt, Robbert Gobbens, Marcel van Brunschot
Practical Implementation of Innovative Image Testing
Abstract
The testing of image interpretation skills within the profession of radiology (often paper-and-pencil) lags behind practice. To increase the authenticity of the assessment of image interpretation skills, the Dutch national progress test for medical specialists training to become radiologists has been digitized using the program VQuest. This program makes it possible to administer a test with 2D and 3D images, in which images can be viewed and processed as they can in practice. During implementation, the entire assessment cycle from test design to assessment analysis and evaluation was run through twice. Apart from some small suggested improvements, both trainee specialists and organizational members were satisfied with the digitized assessment. Amongst other things, the trainee specialists feel that this application of digital testing is more consistent with the situation in practice than the conventional testing method.
Corinne Tipker-Vos, Kim de Crom, Anouk van der Gijp, Cécile Ravesloot, M. van der Schaaf, Christian Mol, Mario Maas, Jan van Schaik, Koen Vincken
Where Is My Time? Identifying Productive Time of Lifelong Learners for Effective Feedback Services
Abstract
Lifelong learners are confronted with a broad range of activities they have to manage every day. In most cases they have to combine learning, working, family life and leisure activities throughout the day. Hence, the learning activities of lifelong learners are frequently disrupted; the difficulty of finding a suitable time slot to learn during the day has been identified as the most frequent cause. In this scenario mobile technologies play an important role, since they can keep track of the most suitable moments to accomplish specific learning activities in context. Sampling learning preferences on mobile devices is a key instrument for lifelong learners to become aware of which learning task suits which context, to set realistic goals, and to set aside time to learn on a regular basis. The contribution of this manuscript is twofold: first, a classification framework for modelling lifelong learners’ preferences is presented based on a literature review; second, a mobile application for experience sampling is piloted, aiming to identify lifelong learners’ preferences regarding when, how and where learning activities can be integrated.
Bernardo Tabuenca, Marco Kalz, Dirk Börner, Stefaan Ternier, Marcus Specht
Tangible-Based Assessment of Collaborative Problem Solving
Abstract
Using Tangible User Interfaces (TUIs) for assessing collaborative problem solving has only been marginally investigated in technology-based assessment. Our first empirical studies focused on light-weight performance measurements, usability, user experience, and gesture analysis to increase our understanding of how people interact with TUIs in an assessment context. In this paper we present three of those studies: a windmill scenario where users can learn about the dynamics of energy generation using wind power; a traffic simulator educating the audience on the impact of different traffic parameters on traffic fluidity; and a simple climate change scenario allowing children to comprehend the relation between their family’s behaviour and its effect on CO2 levels. For each scenario the paper presents the assessment methodology used and the observed learning outcomes.
Eric Tobias, Valérie Maquil, Eric Ras
Backmatter
Metadata
Title
Computer Assisted Assessment. Research into E-Assessment
Editors
Marco Kalz
Eric Ras
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-08657-6
Print ISBN
978-3-319-08656-9
DOI
https://doi.org/10.1007/978-3-319-08657-6
