Factors influencing the utilisation of a school self-evaluation instrument
Introduction
School self-evaluation is receiving increased attention in educational research around the world. Although school effectiveness research has provided insights into the characteristics of high-performing schools, this research neither answers the question of causality nor clarifies how underperforming schools can be improved (Coe & Fitz-Gibbon, 1998). Moreover, centrally developed, general school improvement strategies prove not to work in many schools, as schools differ, for example, in the causes of their underperformance and in their policy-making capacities (e.g. Fullan, 1998; McLaughlin, 1998). In other words, school improvement proves to be rather context-dependent. As school staff know their particular school context best, i.e. what is feasible and what is not in their situation, they are likely to be in the best position to say in which areas they would like to improve and then to try to accomplish these improvements. A school self-evaluation system is valuable from this perspective: it can monitor schools thoroughly and provide timely, high-quality school performance feedback to serve as a basis for school improvement (Coe & Visscher, 2002a).
Based on definitions by Scheerens, Glas and Thomas (2003) and Van Petegem (2001), school self-evaluation is defined by Schildkamp (2007) as “a procedure involving systematic information gathering initiated by the school itself and intended to assess the functioning of the school and the attainment of its educational goals for purposes of supporting decision-making and learning and for fostering school improvement as a whole” (p. 4).
Dutch schools enjoy considerable autonomy. Since 1917 they have been free to choose the religious, ideological and pedagogical principles on which they base their education, as well as to decide how they organise their teaching activities (Ministerie van Onderwijs, Cultuur & Wetenschappen, 1999). Since August 1998, the Dutch “Quality Law” has prescribed that schools are responsible for the quality of the education they provide and for pursuing policies that ensure school improvement. The law also prescribes that all schools must develop a quality assurance system.
From September 1, 2002, when the new law on the Supervision of Education went into effect, the new role of the Inspectorate was also laid down by law. For schools and governing bodies, the most important stipulations relate to the extension of the competencies of the Inspectorate and to the so-called ‘principle of proportionality’. The latter means that the supervision of schools starts from the results of school self-evaluations, provided these meet the requirements set by the Inspectorate (Inspectie van het Onderwijs, 2002; Ministerie van Onderwijs et al., 2000–2002; Renkema, 2002).
More than 70 different Dutch instruments for school self-evaluation are now available (The Standing International Conference of Central & General Inspectorates of Education, 2003). However, studies have pointed to technical weaknesses in these instruments, such as a lack of attention to their reliability and validity (Cremers-van Wees et al., 1996; Hendriks, 2000). ZEBO (the acronym stands for self-evaluation in primary schools) was developed in response to this situation.
ZEBO is an instrument for measuring school process indicators (reflecting processes at the classroom and the school level), with school effectiveness research as its conceptual background. Thirteen variables that school effectiveness research had shown to be associated with relatively high value-added achievement were selected for the development of ZEBO (Scheerens & Bosker, 1997).
ZEBO is made up of four questionnaires: one for school managers, one for teachers, one for students in grade 3, and one for students in grades 4–8. ZEBO measures school process variables by questioning different groups of respondents in the same school on the same topics. Students are asked to judge the nature of instruction in their class regarding the extent of: structured education, adaptive education, classroom climate and learning time. School leaders are asked to judge the features of the school in terms of co-operation and consultation, student care, the working environment, educational leadership, staff development, and agreement on school goals. Teachers judge instruction in the classroom, as well as the educational organisation at the school level (Hendriks, 2001; Hendriks & Bosker, 2003).
An important feature of the ZEBO feedback is the comparison of a particular school's school and classroom scores to the national averages: the feedback includes norm-referenced tables (in percentiles) based on the actual performance of a representative reference group of Dutch primary schools. Furthermore, school reports compare teachers’ scores with those of the school management, and classroom reports compare teachers’ scores with those of students (Hendriks, 2001; Hendriks & Bosker, 2003).
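The norm-referenced comparison can be sketched as a percentile-rank computation: a school's scale score is located within the distribution of a reference group. The scale, scores and function name below are illustrative assumptions, not taken from the actual instrument.

```python
# Sketch of a norm-referenced percentile comparison, assuming a school's
# mean scale score is placed within a reference distribution of schools.
from bisect import bisect_left

def percentile_rank(score: float, reference_scores: list[float]) -> float:
    """Percentage of reference schools scoring below the given score."""
    ordered = sorted(reference_scores)
    return 100.0 * bisect_left(ordered, score) / len(ordered)

# Illustrative reference group of school means on a 1-4 scale.
reference = [2.1, 2.4, 2.5, 2.7, 2.8, 2.9, 3.0, 3.1, 3.3, 3.6]
print(percentile_rank(3.05, reference))  # 70.0
```

A school scoring 3.05 here would thus be reported at the 70th percentile of the reference group.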
ZEBO provides schools with performance feedback. The consistently positive effect of evaluating student achievement on educational effectiveness (Scheerens & Bosker, 1997), and the central place of the feedback mechanism in control theory and other scientific disciplines, point to the important role of feedback (Coe, 1998). Kluger and DeNisi (1996), in their meta-analysis of about 100 years of feedback research, did indeed find that feedback interventions improved performance overall (p. 40). However, they also found that feedback effects were detrimental in one third of all cases. It is therefore worthwhile to gain insight into the conditions under which feedback works. One reason why performance feedback may have no effect is that the feedback is not always (fully) made use of: new, valuable information often proves to be an insufficient precondition for triggering improvement-oriented behaviour (Coe & Visscher, 2002b; Weiss, 1998). Weiss (1998), drawing on her research on the utilisation of evaluation results, points to possible reasons for this: use can break down because the target users may not receive the evaluation results, may not understand or believe them, may not know what to do about them, or may not have the authority to use them. Motivation and commitment to improve are required for utilisation, and in many cases resources and social support too.
This means that the introduction of school self-evaluation systems does not necessarily lead to the development of actions to improve school performance.
Coe and Visscher (2002b) conclude that, although the justifications for using School Performance Feedback Systems (SPFSs) such as ZEBO are plausible and thousands of schools have voluntarily implemented them, the rational response to our ignorance of the conditions promoting effective SPFS use (Van Petegem & Vanhoof, 2004) must be to conduct solid evaluations. Schildkamp (2007) conducted a longitudinal study into the use of the Dutch school self-evaluation instrument ZEBO. The primary goal of this article is to provide insight into the factors that are decisive for the use of the ZEBO self-evaluation results. In other words, our central research question is:
Which factors influence the use of self-evaluation results obtained from the Dutch school self-evaluation instrument ZEBO?
Section snippets
Theoretical framework
Visscher (2002) has developed a theoretical framework for studying the use of SPFSs such as ZEBO. Based on a review of the literature on educational innovation, he identifies three groups of factors that are expected to influence the use of an SPFS:
- the implementation process features,
- the SPFS characteristics,
- the school organisational characteristics.
Visscher's general theoretical framework was contextualised to the specific nature of ZEBO and the way in which it was introduced into participating
Sample
A purposive sample of primary schools was drawn. All 312 schools in the district of the school advisory service “Expertis” were asked to participate in the study. Seventy-nine Dutch primary schools were willing to participate. The sample was representative of the Netherlands with regard to the composition of schools' pupil populations (F = 0.26, p = 0.61) in terms of the socio-economic status of the pupils' parents; however, as the schools in the sample on average had a smaller size, the
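The representativeness check reported above (F = 0.26, p = 0.61) rests on a one-way ANOVA comparing group means. A minimal sketch of the F-statistic computation follows; the SES scores and function name are purely illustrative assumptions, not the study's data.

```python
# Minimal sketch of a one-way ANOVA F-statistic, of the kind used to test
# whether the sample's pupil-population composition (e.g. parental SES)
# differs from that of a reference group. Data below are illustrative.
def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

sample_ses = [1.0, 2.0, 3.0]    # illustrative SES scores, sampled schools
reference_ses = [2.0, 3.0, 4.0]  # illustrative SES scores, reference group
print(one_way_anova_f([sample_ses, reference_ses]))  # 1.5
```

A small F (and a p-value well above 0.05, as in the study) indicates no detectable difference between sample and reference group.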
Results of the questionnaires: principals
Table 4 displays the results of the stepwise regression analyses conducted with either the conceptual use of ZEBO or the instrumental use of ZEBO as the dependent variable (for the data collected in 2003, 2004 and 2006).
The conceptual and/or instrumental use of ZEBO by principals was influenced by the degree to which:
- the ZEBO output fits with the needs of the user (in 2003 and 2004);
- the goal of using ZEBO is clear (in 2006);
- ZEBO use will lead to quality improvement (in 2004 and
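The stepwise regression analyses behind these results can be sketched as greedy forward selection on R². The predictor names, data and stopping threshold below are illustrative assumptions, not those of the study.

```python
# Sketch of forward stepwise regression: predictors (e.g. fit with user
# needs, goal clarity) are added one at a time as long as they improve the
# fit on the dependent variable (e.g. instrumental use of ZEBO).
import numpy as np

def forward_stepwise(X, y, names, min_gain=0.01):
    """Repeatedly add the predictor that most improves R-squared;
    stop when the best remaining gain falls below min_gain."""
    def r_squared(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())

    selected, best_r2, remaining = [], 0.0, list(range(X.shape[1]))
    while remaining:
        new_r2, best = max((r_squared(selected + [c]), c) for c in remaining)
        if new_r2 - best_r2 < min_gain:
            break
        selected.append(best)
        remaining.remove(best)
        best_r2 = new_r2
    return [names[c] for c in selected], best_r2

# Illustrative: "use" depends only on goal clarity, so only that predictor
# should survive the selection. Noiseless data keeps the sketch deterministic.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))   # columns: fits_needs, goal_clarity, expected_gain
y = 2.0 * X[:, 1]
print(forward_stepwise(X, y, ["fits_needs", "goal_clarity", "expected_gain"]))
```

In the study itself the selected predictors differ per year and per respondent group, as the bullet list above shows.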
Conclusions
Both quantitative (based on the questionnaires) and qualitative (based on the interviews) data were used to assess the explanatory power of the theoretical framework. The results show that teachers and principals differ to some extent regarding the variables influencing their use of the self-evaluation results.
Firstly, the use of ZEBO by teachers is especially influenced by implementation process features, whereas this is only minimally so for the principals. A possible explanation for this may
References (40)
- What makes an evaluation useful? Reflections from experience in large organizations. American Journal of Evaluation (2003).
- School self-evaluation and school improvement: A critique of values and procedures. Studies in Educational Evaluation (2004).
- Looking for a balance between internal and external evaluation of school quality: Evaluation of the SVI model. Journal of Education Policy (2008).
- Coe, R. (1998). Feedback, value added and teachers’ attitudes: Models, theories and experiments. Unpublished PhD...
- Introduction.
- Drawing up the balance sheet for School Performance Feedback Systems.
- School effectiveness research: Criticisms and recommendations. Oxford Review of Education (1998).
- Current empirical research on evaluation utilization. Review of Educational Research (1986).
- Instrumenten voor zelfevaluatie: inventarisatie en beschrijving [Instruments for self-evaluation: inventory and description] (1996).
- Evaluating school self-evaluation (2001).
- Toegepaste data-analyse. Technieken voor niet-experimenteel onderzoek in de sociale wetenschappen [Applied data analysis: techniques for non-experimental research in the social sciences].
- Discovering statistics using SPSS for Windows.
- The meaning of educational change: A quarter of a century of learning.
- Schools and innovations. Conditions fostering the implementation of educational innovations.
- Systemen voor kwaliteitszorg in het primair en secundair onderwijs [Systems for quality assurance in primary and secondary education].
- Kwaliteitszorg Voortgezet Onderwijs: Instrumenten en Organisaties [Quality assurance in secondary education: instruments and organisations].
- ZEBO instrument voor zelfevaluatie in het basisonderwijs. Handleiding bij een geautomatiseerd hulpmiddel voor kwaliteitszorg in basisscholen [ZEBO instrument for self-evaluation in primary education: manual for a computerised tool for quality assurance in primary schools].
- Kwaliteitszorg in het primair onderwijs. BOPO-project (412-02-010). Deelstudie 1: Peiling 2003/2004 [Quality assurance in primary education. BOPO project (412-02-010). Substudy 1: survey 2003/2004].
- Onderwijsverslag over het jaar 2001 [Education report for the year 2001].