Elsevier

Computers & Education

Volume 58, Issue 4, May 2012, Pages 1247-1259

Measuring integration of information and communication technology in education: An item response modeling approach

https://doi.org/10.1016/j.compedu.2011.12.015

Abstract

This research describes the development and validation of an instrument to measure the integration of Information and Communication Technology (ICT) in education. After a review of the literature on definitions of ICT integration in education, the classical test theory and item response modeling approaches to questionnaire development and validation are compared. Following the latter approach, a construct on integration of ICT is developed, items are generated, and an outcome space of Likert-type answer categories is defined. The resulting questionnaire was administered to 933 teacher educators. In this study the collected data are tested for fit to the Rasch model of measurement. It is concluded that the instrument can be used for fundamental measurement of perceived use of ICT for teaching and support of student learning in the reference population, allowing for identification of stages of innovation in ICT integration. We reflect on the critical value of the item response modeling approach and the Rasch measurement model for measuring integration of ICT in education and discuss some limitations of the study.

Highlights

► We follow an Item Response Modeling approach to develop and validate a questionnaire.
► This results in an instrument for measurement of educators' perceived use of ICT.
► The instrument allows for identification of stages of innovation of ICT integration.
► The approach is essential for valid and reliable benchmarking.

Introduction

Information and Communication Technology (ICT) is high on the education reform agenda of developed and developing countries. Policies for education reform are built around the premise and promise of effective ICT integration (Richards, 2004). Even though much is expected from the integration of ICT in education, little research can be found on measuring effective integration of ICT in teaching practice or on the added value of ICT for teaching and learning in general. Studies and measurement tools to investigate the ICT integration level of higher education institutions are scarce (Akbulut, 2009). Cox (2008) recommends identifying how to monitor the specific ICT types being used and which data collection methods can provide the most reliable and robust results. Proctor, Watson, and Finger (2003) argue that unless more sophisticated notions of defining ICT curriculum integration are developed, researchers run the risk of promulgating severely restrictive ways of measuring it. In what follows we therefore give an overview of how integration of ICT in education is defined in the research literature, after which we reflect on how we can measure it for our own research purposes.

Plomp, Anderson, Law, and Quale (2003) differentiate between learning about ICT, learning with ICT, and learning through ICT. Zhang (2007) distinguishes between an approach in which ICT is seen as the object of education, with the purpose of learning about ICT and becoming technically skilled; an approach in which ICT is used to strengthen expositive teaching; and an approach that strives for innovative teaching practice, harnessing the full potential of ICT. Capability theory likewise refers to the potential of ICT for educational change and understands ICT as a tool to reach an end (Alampay, 2006). Another relevant categorization of the use of ICT in education is that of Maddux and Johnson (2005), who differentiate between ICT applications of type I and type II: type I applications are those educational applications that simply make it easier, quicker, or more convenient to continue teaching or learning in traditional ways; type II applications are those educational applications that make available new and better ways of teaching or learning. Others see the potential of ICT not only to innovate teaching practice but also to change the curriculum. Bull, Bell, and Kajder (in Jamieson-Proctor, Watson, Finger, Grimbeek, & Burnett, 2007) identify two approaches to the use of technology: employing the technology to deliver the existing content more efficiently, or, alternatively, employing the innovation to re-conceptualize aspects of the existing curriculum. Gareis and Hüsing (2009) argue that the transformational potential of ICT is rooted in its effect in terms of empowerment of users, by opening up new, more effective ways of achieving goals rather than simply making existing structures and processes more efficient.

In education, most agree that the purpose of technology integration is to achieve learning goals and enhance learning, not to use fancy technology tools (Liu and Velasquez in Jamieson-Proctor et al., 2007). It is argued that what counts is not the ICT type but its implementation process (Tubin, 2006). Bowes (2003) argues that effective use of ICT in classroom practice depends on teachers explicitly addressing the question of in what way, if at all, the use of ICT can add value, given a student learning outcome. ICT ideally supports both teachers' professionalism and students' ability to become independent learners. This means using ICT to enhance inquiry and data-based decisions, the freedom to make mistakes, the opportunity to work with experts outside school, and assuming responsibility for the outcomes (Tubin, 2006).

In much research on the integration of ICT in education, different stages or phases are identified. It has also been suggested to analyze ICT-based innovations on a continuum ranging from the assimilation level through the transition level and up to the transformation level (Mioduser, Nachmias, Tubin, & Forkosh-Baruch, 2003). UNESCO identifies four categories or stages of development concerning ICT use in education: emerging, applying, infusing and transforming (UNESCO, 2005). At the transforming stage of ICT-mediated teaching and learning pedagogies, students' thinking processes are supported by ICT (SEAMO, 2010). The pedagogies adopted by educators at this stage are situated in the constructivist paradigm, where learning is perceived as an active construction and reconstruction of knowledge, and teaching is a process of guiding and facilitating students in the process of knowledge construction, individually and collaboratively (SEAMO, 2010; Steffe & Gale, 1995). Mills and Tincher (2003) formulated and validated a developmental model for technology integration, based on the stages, standards and indicators of their technology professional development initiative. They organized the standards into phases to reflect a developmental approach "from novice technology facilitators who use technology as a tool for the delivery of instruction to expert technology integrators who are being the technology – augmenting student learning with technology" (Mills & Tincher, 2003).

In the context of a capacity building program for teacher educators on ICT integration in education, we aim to measure teacher educators' use of ICT in education over the course of the three-year program. We seek to assess the use of ICT for teaching and support of student learning, ranging from more traditional to more innovative approaches, reflecting the stages of development concerning ICT use in education described in Section 1.1. The capacity building program involves around 1000 teacher educators, who also participate in panel research throughout the program. We opt to develop and validate a self-report questionnaire instrument that can be administered at different points in time during the capacity building program.

Cox (2008) argues that the ways in which ICT has evolved have influenced the focus and scope of research. A large element of the current research agenda is to measure the uptake of ICT in schools by teachers, pupils, types of computers and so on (Cox, 2008). In the last two decades, researchers have also recognized the need to investigate the effects of ICT on students' generic and specific skills and knowledge and the effects of group and collaborative learning, taking account of human–computer interfaces, the changing nature of the knowledge presented and the role of the teacher. The most robust evidence of ICT use enhancing students' learning comes from studies that focused on specific uses of ICT and clearly identified the range and type of ICT use (Cox and Abbot in Marshall & Cox, 2008). Typically, research conducted within a behaviorist perspective will use quantitative methods and questionnaires, designed to provide evidence at a point in time of program practices, features and outcomes (Marshall & Cox, 2008). Christensen and Knezek (2008) argue that competencies, defined in terms of behaviors, can reasonably be assessed by observation as well as by self-report.

Different questionnaire instruments to measure the use of ICT for teaching and learning have been developed and tested (Christensen & Knezek, 2008) following the principles of classical test theory. Validation in classical test theory mostly focuses on models at the test-score level and links test scores to true scores. However, both person parameters (i.e., true scores) and item parameters (i.e., item difficulty and item discrimination) depend on the test and the respondent sample, respectively; these dependencies can limit the utility of person and item statistics in practical test development work and complicate analyses (Hambleton & Jones, 1993). Advantages of classical test theory models are that they are based on relatively weak assumptions (i.e., easy to meet in real test data), are well known, and have a long track record. Modern test theories are considered superior to classical test theory because they make stronger assumptions and provide stronger findings. A good test theory or model, such as an item response theory model, can provide a frame of reference for test design work and can specify the precise relationships among test items and ability scores (Hambleton & Jones, 1993). In scale development, both the traditional statistics and item response theory models such as the Rasch model can enhance the measurement capacity of a scale (Cavanagh, Romanoski, Giddings, Harris, & Dellar, 2003a).
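
To make the contrast concrete, the two frameworks can be summarized by two illustrative equations; these are generic textbook formulations, not the specific parameterization analyzed later in this study:

```latex
% Classical test theory: an observed test score X is modeled as a latent
% true score T plus random measurement error E.
X = T + E

% Item response theory (two-parameter logistic model): the probability that
% person n endorses item i depends on the person's latent trait \theta_n,
% the item difficulty b_i, and the item discrimination a_i.
P(X_{ni} = 1 \mid \theta_n) =
  \frac{\exp\!\big(a_i(\theta_n - b_i)\big)}{1 + \exp\!\big(a_i(\theta_n - b_i)\big)}
```

In the item response formulation the person parameter and the item parameters are placed on a common scale, which is what allows item and person statistics to be interpreted independently of the particular test form and sample.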

To validate a self-report questionnaire instrument and to measure integration of ICT in teaching and learning, allowing for identification of stages of innovation of ICT integration, we follow in this study the Rasch measurement model and methodology. The Rasch measurement model and methodology is a modern test theory that involves rigorous and extensive analysis of the data and provides additional psychometric information that cannot be obtained through the classical test theory approach. Items on the use of ICT have so far rarely been ordered from "easy" to "hard" by calibration against the distribution of educators' perceptions of ICT use. Usually, questionnaire scales on the use of ICT for teaching and support of student learning are not constructed with the items selected to fit a measurement model and form a one-dimensional scale in which the items can be said to be affected by one dominant trait. Classical test theory tries to have all items of "similar difficulty" and does not have a conceptual measurement design in the preparation of the items (Cavanagh et al., 2003a; Cavanagh, Romanoski, Giddings, Harris, & Dellar, 2003b). Following an item response modeling approach, data are tested for fit to the Rasch model, which allows for a detailed examination of the internal construct validity of the scale, including properties such as reliability and ordering of categories. When data fit the Rasch model, the requirements of 'fundamental measurement', as defined by Bond and Fox (2007), are met: measurements allow an order of ranking, calculations such as addition and subtraction are possible, and calibration of the items is independent of the respondents and vice versa (objectivity).
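
For Likert-type answer categories, testing data for fit to the Rasch model typically means fitting a polytomous Rasch formulation. As an illustration, one commonly used variant is Andrich's rating scale model (the exact model specification used in this study is not reproduced in this preview):

```latex
% Probability that person n chooses category k (k = 0..K) on item i:
% \theta_n is the person measure, \delta_i the item difficulty, and
% \tau_j the j-th category threshold, shared across all items (\tau_0 \equiv 0).
P(X_{ni} = k) =
  \frac{\exp\sum_{j=0}^{k}\big(\theta_n - \delta_i - \tau_j\big)}
       {\sum_{m=0}^{K}\exp\sum_{j=0}^{m}\big(\theta_n - \delta_i - \tau_j\big)}
```

When observed response patterns are consistent with such a model, items can be ordered from "easy" to "hard" on the same logit scale as the person measures, which underpins the fundamental-measurement properties listed above.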

Section snippets

Development of an instrument measuring the use of ICT for teaching and support of student learning

In what follows we describe the development of an instrument to measure integration of ICT in teacher education, tested in the context of Vietnam. The development was carried out before the start of the capacity building program and took place in different building blocks, as prescribed by Wilson (2005). In the first building block a comprehensive literature study on integration of ICT in education informed the development of a construct map on integration of ICT in teaching and learning. This

Research objective: validation of the measurement instrument

In Section 2 of this research paper, we reported on the development of the self-report questionnaire. In what follows, we aim to validate the developed measurement instrument, following the principles of the Rasch measurement model and methodology.

Our research aims to apply a scale development and validation process that can:

  1. Produce a scale to measure teacher educators' self-reported use of ICT for teaching and support of student learning;

  2. Produce a scale with item difficulties and measures of

Data collection

Data collection for this validation study took place in the beginning of 2010. The questionnaire was presented to all teacher educators working in the five teacher education institutions participating in the capacity building program. The five provincial institutions are from different regions in the north and center of Vietnam and were selected by the Ministry of Education and Training of Vietnam for participation in the program. Of a total population of 1021 teacher educators, 933 completed the

A Wright map on use of ICT for teaching and support of student learning

Factor analysis on the items of both sets (extraction method: PCA) retains two factors. Nevertheless, all items load higher on the first retained factor, with factor loadings from 0.498 to 0.848. For our research on integration of ICT in teacher education, we combine the item sets on teacher educators' perceived use of ICT for teaching and support of student learning. In theory, both sets are mutually influential and collectively comprise a single, one-dimensional scale. Semantically, the items
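
As a minimal sketch of how such a principal component check could be reproduced, the code below assumes the Likert responses are available as a respondents-by-items matrix; the file name and column layout are hypothetical, and this is not the software used by the authors:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical file: one row per respondent, one column per Likert item
responses = pd.read_csv("ict_use_items.csv")

# Standardize items so the PCA is effectively run on the correlation matrix
scaled = StandardScaler().fit_transform(responses.values)

pca = PCA()
pca.fit(scaled)

# Kaiser criterion: components with eigenvalue > 1 are retained
eigenvalues = pca.explained_variance_
n_retained = int(np.sum(eigenvalues > 1))
print(f"Components with eigenvalue > 1: {n_retained}")

# Loadings of each item on the first component
# (loading = eigenvector element * sqrt(eigenvalue))
loadings_first = pca.components_[0] * np.sqrt(eigenvalues[0])
for item, loading in zip(responses.columns, loadings_first):
    print(f"{item}: {loading:.3f}")
```

The eigenvalue-greater-than-one rule and the size of the first-component loadings give a quick indication of whether the combined item set behaves as one dominant dimension, as reported above.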

Conclusions

Testing data for fit to the Rasch model has so far rarely been done for questionnaire scales on the use of ICT for teaching and support of student learning. Complementing classical test theory, the item response modeling approach can add value to the measurement capacity of a scale. It can enhance the development of measurement scales in the field of integration of ICT in education. Most theoretical models for technology integration are based on stages, standards and indicators. Analysis of ICT-based

Discussion

The question remains whether a model fits the data well enough to be useful in guiding the measurement process. Statistical evidence and also judgment play important roles in answering this question (Hambleton & Jones, 1993). The statistical evidence provided in our research led us to the conclusion that with the developed measurement instrument we can fundamentally measure the use of ICT for teaching and support of student learning in teacher education. On the other hand, when the Rasch model

References (41)

  • Bowes, J. (2003). The emerging repertoire demanded of teachers of the future: surviving the transition. Paper...
  • Cavanagh, R., Romanoski, J., Giddings, G., Harris, M., & Dellar, G. (2003a). Application of the Rasch model and...
  • Cavanagh, R., Romanoski, J., Giddings, G., Harris, M., & Dellar, G. (2003b). Development of a Rasch model scale to...
  • R. Christensen et al.

    Self-report measures and findings for information technology attitudes and competencies

  • L. Cohen et al.

    Research methods in education

    (2007)
  • M.J. Cox

    Researching IT in education

  • K. Gareis et al.

    Measuring transformational use of ICTs at regional level

  • R.K. Hambleton et al.

    An NCME instructional module on comparison of classical test theory and item response theory and their applications to test development

    Educational Measurement: Issues and Practice

    (1993)
  • R. Jamieson-Proctor et al.

    Measuring the use of information and communication technologies (ICTs) in the classroom

    Computers in the Schools

    (2007)
  • J.M. Linacre

    Understanding Rasch measurement: optimizing rating scale category effectiveness

    Journal of Applied Measurement

    (2002)