Published in: Educational Assessment, Evaluation and Accountability 3/2018

11-07-2018

Pointing teachers in the wrong direction: understanding Louisiana elementary teachers’ use of Compass high-stakes teacher evaluation data

Author: Timothy G. Ford



Abstract

Spurred by Race to the Top, efforts to improve teacher evaluation systems have provided states with an opportunity to get teacher evaluation right. Although a core reform area of Race to the Top was the use of teacher evaluation to provide ongoing, meaningful feedback for instructional decision making, we still know relatively little about how states’ responses in this area have changed teachers’ use of these sources of data for instructional improvement. Self-determination theory (SDT) and the concept of functional significance were utilized as a lens for understanding and explaining patterns of use (or non-use) of Compass-generated evaluation data by teachers over a period of 3 years in a diverse sample of Louisiana elementary schools. The analysis revealed that the majority of teachers exhibited either controlled or amotivated functional orientations to Compass-generated information, which resulted in low or superficial use for improvement. Perceptions of the validity/utility of teacher evaluation data were critical determinants of use and were multifaceted: in some cases, teachers had concerns about how state and district assessments would harm vulnerable students, while others questioned the credibility and/or fairness of the feedback. These perceptions were compounded by (a) evaluators’ lack of experience in evaluating teachers with more specialized roles in the school, such as special education teachers; (b) a lack of support in terms of training on Compass and its processes; and (c) a lack of teacher autonomy in selecting appropriate assessments and targets for Student Learning Target growth.


Appendix
Footnotes
1
This estimate is the product of Dynarski’s (2016) estimate of a principal’s hourly salary ($45/h), the number of U.S. K-12 teachers (3.1 million), the average number of hours spent per evaluation, and the typical number of observations in a given year (two).
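The footnote’s estimate is a simple product of four factors. A minimal sketch of the arithmetic; note that the hours-per-evaluation figure below is a placeholder, since the footnote does not state its value:

```python
# Back-of-envelope national cost of principal-led observations,
# following the structure of footnote 1 (Dynarski 2016).
principal_hourly_rate = 45       # dollars/hour (Dynarski's estimate)
num_teachers = 3_100_000         # U.S. K-12 teachers
observations_per_year = 2        # typical formal observations per teacher
hours_per_evaluation = 5         # placeholder value, not given in the text

annual_cost = (principal_hourly_rate * num_teachers
               * observations_per_year * hours_per_evaluation)
print(f"${annual_cost:,}")  # total scales linearly with each factor
```

Whatever the true hours-per-evaluation figure, the estimate moves proportionally with it, which is the footnote’s point: observation-based evaluation is costly at national scale.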
 
2
All the descriptions of the Compass system discussed in this section are as they were during the study period of 2011–2015. Since this time, Compass has again changed to reflect adjustments to assessment policy as well as teacher evaluation policy.
 
3
Teachers who receive a “highly effective” rating in a given year are only required to have one formal observation the following year.
 
4
The Compass teacher evaluation rubric utilizes only 5 of the 22 components and 20 of the 76 elements of the full Danielson Framework for Teaching.
 
5
While not clearly specified in the policy, in most cases in our sample the same evaluator observed both lessons conducted by the teacher. Assignment of evaluators was ultimately up to each building principal.
 
6
While there is no available data on how many teachers have been dismissed during the Compass era due to ineffective ratings, aggregate results from the Louisiana Department of Education (2013, 2014, 2015b, 2016) report that around 4% of teachers were rated “ineffective” in 2012–2013, 2% in 2013–2014, and less than 1% in 2014–2015 and 2015–2016.
 
7
It is important to mention that both the Standards for Educational and Psychological Testing (AERA, APA, NCME, 2014), and the Joint Committee on Standards for Educational Evaluation (JCSEE 2009) define and delineate issues of evaluation related to clarity (JCSEE), credibility (JCSEE), and fairness (AERA et al.). The operational definitions of these terms in this paper share some overlap but also differ somewhat from theirs, as will be discussed as each term is defined below.
 
8
In the JCSEE, one aspect of clarity that aligns with the definition used in this paper is accuracy standard A2, “defined expectations.” Another aspect, however, which was not a focus of our definition per se, is the necessity for clarity on how the assessments/evaluation tools are aligned with the expectations (JCSEE standard A1).
 
9
The concept of credibility does not relate in any direct way to the JCSEE standards, but might nevertheless be an overall judgment rendered by an evaluatee of the process based on several of these standards. None of these standards are specifically referenced in this study.
 
10
This aspect of fairness is only one part of the Standards for Educational and Psychological Testing framework. Other aspects of fairness concern the degree of measurement bias as well as influences of test-taking contexts, which were not as present in the literature on teacher evaluation.
 
11
Our use of the concept of utility refers most specifically to the JCSEE standards of utility related to evaluator qualifications and functional reporting (Standards U3 and U5). The other utility standards were not as salient in the teacher evaluation literature.
 
12
The three added interviews in the equation (37 + 32 + 32) + 3 = 104 refer to three of the five teachers lost after the first wave whom we were able to track down and interview one final time, mainly to learn why they left. For this reason, they are not included in the second-wave teacher sample counts; their interviews were instead added separately to the total.
 
13
The final sample of principal interviews was 20, and there were two instructional coaches interviewed in the third wave.
 
14
Thirty interview transcripts across the three waves (about one third of the total) were randomly selected for the purpose of checking inter-rater reliability.
 
15
Inter-rater reliability was calculated via the proportion agreement method (Campbell et al. 2013), which takes the sum of the number of coding agreements and disagreements for a given code divided by the total number of codings of the lowest submitter (the coder with the fewest instances of the code).
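Taking the footnote’s description as the number of coding agreements over the total codings of the coder who applied the code fewer times (the “lowest submitter”), the calculation can be sketched as follows; the function name and set-based representation are illustrative, not from Campbell et al. (2013):

```python
def proportion_agreement(coder_a, coder_b):
    """Proportion-agreement inter-rater reliability for one code.

    coder_a, coder_b: sets of text-segment IDs that each coder
    tagged with the code. Agreements are segments both coders
    tagged; the denominator is the coding count of the coder who
    applied the code fewer times (the "lowest submitter").
    """
    agreements = len(coder_a & coder_b)
    lowest = min(len(coder_a), len(coder_b))
    return agreements / lowest if lowest else 0.0

# Example: coder B applied the code 10 times, agreeing with
# coder A on 8 of those segments.
a = {0, 1, 2, 3, 4, 5, 6, 7, 20, 21, 22, 23}   # 12 codings
b = {0, 1, 2, 3, 4, 5, 6, 7, 30, 31}           # 10 codings
print(proportion_agreement(a, b))  # → 0.8
```

Using the lowest submitter as the denominator keeps the statistic in [0, 1] and avoids penalizing a coder simply for applying the code more sparingly.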
 
Literature
Adams, C. M., Ford, T. G., Forsyth, P. B., Ware, J. K., Barnes, L. B., Khojasteh, J., Mwavita, M., Olsen, J. J., & Lepine, J. A. (2017). Next generation school accountability: A vision for improvement under ESSA. Palo Alto, CA: Learning Policy Institute.
American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME]. (2014). Standards for educational and psychological testing. Washington, D.C.: American Educational Research Association.
Amrein-Beardsley, A., & Collins, C. (2012). The SAS education value-added assessment system (SAS-EVAAS) in the Houston Independent School District (HISD): Intended and unintended consequences. Educational Policy Analysis Archives, 20(12). Retrieved from http://epaa.asu.edu/ojs/article/view/1096.
Beaver, J. K., & Weinbaum, E. H. (2015). State test data and school improvement efforts. Educational Policy, 29(3), 478–503.
Blase, J., & Blase, J. (1999). Principals’ instructional leadership and teacher development: Teachers’ perspectives. Educational Administration Quarterly, 35(3), 349–378.
Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas accountability system. American Educational Research Journal, 42(2), 231–268.
Campbell, J. L., Quincy, C., Osserman, J., & Pedersen, O. K. (2013). Coding in-depth semistructured interviews: Problems of unitization and intercoder reliability and agreement. Sociological Methods & Research, 42(3), 294–320.
Chow, A. P. Y., Wong, E. K. P., Yeung, A. S., & Mo, K. W. (2002). Teachers’ perceptions of appraiser–appraisee relationships. Journal of Personnel Evaluation in Education, 16(2), 85–101.
Cosner, S. (2011). Teacher learning, instructional considerations and principal communication: Lessons from a longitudinal study of collaborative data use by teachers. Educational Management Administration & Leadership, 39(5), 568–589.
Curry, K. A., Mwavita, M., Holter, A., & Harris, E. (2016). Getting assessment right at the classroom level: Using formative assessment for decision making. Educational Assessment, Evaluation and Accountability, 28(1), 89–104.
Darling-Hammond, L. (2013). Getting teacher evaluation right: What really matters for effectiveness and improvement. New York, NY: Teachers College Press.
Darling-Hammond, L. (2014). One piece of the whole: Teacher evaluation as part of a comprehensive system for teaching and learning. American Educator, 38(1), 4–13.
Darling-Hammond, L., Amrein-Beardsley, A., Haertel, E., & Rothstein, J. (2012). Evaluating teacher evaluation. Phi Delta Kappan, 93(6), 8–15.
Datnow, A., & Hubbard, L. (2015). Teachers’ use of assessment data to inform instruction: Lessons from the past and prospects for the future. Teachers College Record, 117(4).
Datnow, A., & Park, V. (2014). Data-driven leadership. San Francisco: Jossey-Bass.
Datnow, A., Greene, J. C., & Gannon-Slater, N. (2017). Data use for equity: Implications for teaching, leadership, and policy. Journal of Educational Administration, 55(4), 354–360.
Deci, E. L., & Ryan, R. M. (2000). The “what” and the “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.
Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 125, 627–668.
Delvaux, E., Vanhoof, J., Tuytens, M., Vekeman, E., Devos, G., & Van Petegem, P. (2013). How may teacher evaluation have an impact on professional development? A multilevel analysis. Teaching and Teacher Education, 36, 1–11.
Denzin, N. K. (2001). Interpretive interactionism (2nd ed.). Thousand Oaks, CA: Sage.
Doherty, K. M., & Jacobs, S. (2015). State of the states 2015: Evaluating teaching, leading, and learning. Washington, DC: National Council on Teacher Quality.
Eccles, J. S., Adler, T. F., Futterman, R., Goff, S. B., Kaczala, C. M., Meece, J. L., & Midgley, C. (1983). Expectancies, values, and academic behaviors. In J. T. Spence (Ed.), Achievement and achievement motivation (pp. 75–146). San Francisco, CA: W. H. Freeman.
Farley-Ripple, E. N., & Buttram, J. L. (2014). Developing collaborative data use through professional learning communities: Early lessons from Delaware. Studies in Educational Evaluation, 42, 41–53.
Farrell, C. C. (2015). Designing school systems to encourage data use and instructional improvement: A comparison of school districts and charter management organizations. Educational Administration Quarterly, 51(3), 438–471.
Farrell, C. C., & Marsh, J. A. (2016a). Metrics matter: How properties and perceptions of data shape teachers’ instructional responses. Educational Administration Quarterly, 52(3), 423–462.
Farrell, C. C., & Marsh, J. A. (2016b). Contributing conditions: A qualitative comparative analysis of teachers’ instructional responses to data. Teaching and Teacher Education, 60, 398–412.
Ford, T. G., Van Sickle, M. E., & Fazio-Brunson, M. (2016). The role of “informational significance” in shaping Louisiana elementary teachers’ use of high-stakes teacher evaluation data for instructional decision making. In K. K. Hewitt & A. Amrein-Beardsley (Eds.), Student growth measures in policy and practice: Intended and unintended consequences of high-stakes teacher evaluations (pp. 117–135). New York: Palgrave Macmillan.
Ford, T. G., Van Sickle, M. E., Clark, L. V., Fazio-Brunson, M., & Schween, D. C. (2017). Teacher self-efficacy, professional commitment and high-stakes teacher evaluation (HSTE) policy in Louisiana. Educational Policy, 31(2), 202–248.
Glover, T. A., Reddy, L. A., Kettler, R. J., Kurz, A., & Lekwa, A. J. (2016). Improving high-stakes decisions via formative assessment, professional development, and comprehensive educator evaluation: The school system improvement project. Teachers College Record, 118(14), 1–26.
Grissom, J. A., & Youngs, P. A. (2016). Improving teacher evaluation systems: Making the most of multiple measures. New York: Teachers College Press.
Hallinger, P., Heck, R. H., & Murphy, J. (2014). Teacher evaluation and school improvement: An analysis of the evidence. Educational Assessment, Evaluation and Accountability, 26(1), 5–28.
Harris, D. N., & Herrington, C. D. (Eds.). (2015). Value added meets the schools: The effects of using test-based teacher evaluation on the work of teachers and leaders [special issue]. Educational Researcher, 44(2), 71–141.
Hewitt, K., & Amrein-Beardsley, A. (2016). Introduction: The use of growth measures for educator accountability at the intersection of policy and practice. In K. Hewitt & A. Amrein-Beardsley (Eds.), Student growth measures in policy and practice: Intended and unintended consequences of high-stakes teacher evaluations (pp. 1–25). New York: Palgrave Macmillan.
Honig, M. I., & Venkateswaran, N. (2012). School–central office relationships in evidence use: Understanding evidence use as a systems problem. American Journal of Education, 118(2), 199–222.
Huguet, A., Farrell, C. C., & Marsh, J. A. (2017). Light touch, heavy hand: Principals and data-use PLCs. Journal of Educational Administration, 55(4), 376–389.
Ikemoto, G. S., & Marsh, J. A. (2007). Cutting through the “data-driven” mantra: Different conceptions of data-driven decision making. Yearbook of the National Society for the Study of Education, 106(1), 105–131.
Jiang, J. Y., Sporte, S. E., & Luppescu, S. (2015). Teacher perspectives on evaluation reform: Chicago’s REACH students. Educational Researcher, 44, 105–116.
Jones, N. D. (2016). Special education teacher evaluation: An examination of critical issues and recommendations for practice. In J. A. Grissom & P. Youngs (Eds.), Improving teacher evaluation systems: Making the most of multiple measures (pp. 63–76). New York: Teachers College Press.
Kelly, K. O., Ang, S. Y. A., Chong, W. L., & Hu, W. S. (2008). Teacher appraisal and its outcomes in Singapore primary schools. Journal of Educational Administration, 46(1), 39–54.
Kerr, K. A., Marsh, J. A., Ikemoto, G. S., Darilek, H., & Barney, H. (2006). Strategies to promote data use for instructional improvement: Actions, outcomes and lessons from three urban districts. American Journal of Education, 112, 496–520.
Kraft, M. A., & Gilmour, A. F. (2017). Revisiting the widget effect: Teacher evaluation reforms and the distribution of teacher effectiveness. Educational Researcher, 46(5), 234–249.
Lavigne, A. L., & Good, T. L. (2014). Teacher and student evaluation: Moving beyond the failure of school reform. New York: Routledge.
Lavigne, A. L., & Good, T. L. (2015). Improving teaching through observation and feedback: Beyond state and federal mandates. New York: Routledge.
Lipsky, M. (2010). Street-level bureaucracy: Dilemmas of the individual in public service (2nd ed.). Thousand Oaks, CA: Russell Sage Foundation.
Little, J. W. (2012). Understanding data use practice among teachers: The contribution of micro-process studies. American Journal of Education, 118(2), 143–166.
Longo-Schmid, J. (2016). Teachers’ voices: Where policy meets practice. In K. Kappler Hewitt & A. Amrein-Beardsley (Eds.), Student growth measures in policy and practice (pp. 49–71). New York: Palgrave Macmillan.
Lortie, D. (1975). Schoolteacher: A sociological analysis. Chicago: University of Chicago Press.
Louisiana Department of Education. (2012). Compass: Louisiana’s path to excellence—Teacher evaluation guidebook. Baton Rouge, LA: Author.
Louisiana House Bill 1033. (2010). Evaluation and Assessment Programs.
Mandinach, E. B., Honey, M., Light, D., & Brunner, C. (2008). A conceptual framework for data driven decision making. In E. B. Mandinach & M. Honey (Eds.), Data-driven school improvement: Linking data and learning (pp. 13–31). New York: Teachers College Press.
Marques, J. F., & McCall, C. (2005). The application of interrater reliability as a solidification instrument in a phenomenological study. The Qualitative Report, 10(3), 439–462.
Marsh, J. A. (2012). Interventions promoting educators’ use of data: Research insights and gaps. Teachers College Record, 114(11), 1–48.
Marsh, J. A., & Farrell, C. C. (2015). How leaders can support teachers with data-driven decision making: A framework for understanding capacity building. Educational Management Administration & Leadership, 43(2), 269–289.
McLaughlin, M. W. (1987). Learning from experience: Lessons from policy implementation. Educational Evaluation and Policy Analysis, 9, 171–178.
Milanowski, A. T., & Heneman, H. G. (2001). Assessment of teacher reactions to a standards-based teacher evaluation system: A pilot study. Journal of Personnel Evaluation in Education, 15(3), 193–212.
Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook (3rd ed.). Thousand Oaks, CA: Sage.
Niemiec, C. P., & Ryan, R. M. (2009). Autonomy, competence, and relatedness in the classroom: Applying self-determination theory to educational practice. Theory and Research in Education, 7, 133–144.
Papay, J. P. (2011). Different tests, different answers: The stability of teacher value-added estimates across outcome measures. American Educational Research Journal, 48, 163–193.
Park, V., Daly, A. J., & Guerra, A. W. (2013). Strategic framing: How leaders craft the meaning of data use for equity and learning. Educational Policy, 27(4), 645–675.
Reddy, L. A., Dudek, C. M., Peters, S., Alperin, A., Kettler, R. J., & Kurz, A. (2018). Teachers’ and school administrators’ attitudes and beliefs of teacher evaluation: A preliminary investigation of high poverty school districts. Educational Assessment, Evaluation, and Accountability, 30, 47–70.
Rice, J. K., & Malen, B. (2016). When theoretical models meet school realities: Educator responses to student growth measures in an incentive pay program. In K. Kappler Hewitt & A. Amrein-Beardsley (Eds.), Student growth measures in policy and practice (pp. 29–47). New York: Palgrave Macmillan.
Rosenholtz, S. J. (1991). Teachers’ workplace: The social organization of schools. New York: Teachers College Press.
Ryan, R. M., & Brown, K. W. (2005). Legislating competence: The motivational impact of high-stakes testing as an educational reform. In C. Dweck & A. Elliot (Eds.), Handbook of competence and motivation (pp. 354–372). New York: Guilford Press.
Ryan, R. M., & Deci, E. L. (2002). An overview of self-determination theory: An organismic dialectical perspective. In E. L. Deci & R. M. Ryan (Eds.), Handbook of self-determination research (pp. 3–33). Rochester, NY: University of Rochester Press.
Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. New York: Guilford Press.
go back to reference Schildkamp, K., & Visscher, A. (2010). The use of performance feedback in school improvement in Louisiana. Teaching and Teacher Education, 26(7), 1389–1403.CrossRef Schildkamp, K., & Visscher, A. (2010). The use of performance feedback in school improvement in Louisiana. Teaching and Teacher Education, 26(7), 1389–1403.CrossRef
go back to reference Schildkamp, K., Poortman, C., Luyten, H., & Ebbeler, J. (2017). Factors promoting and hindering data-based decision making in schools. School Effectiveness and School Improvement, 28(2), 242–258.CrossRef Schildkamp, K., Poortman, C., Luyten, H., & Ebbeler, J. (2017). Factors promoting and hindering data-based decision making in schools. School Effectiveness and School Improvement, 28(2), 242–258.CrossRef
go back to reference Schneider, A., & Ingram, H. (1990). Behavioral assumptions of policy tools. The Journal of Politics, 52(2), 510–529.CrossRef Schneider, A., & Ingram, H. (1990). Behavioral assumptions of policy tools. The Journal of Politics, 52(2), 510–529.CrossRef
go back to reference Skrla, L., Scheurich, J. J., Garcia, J., & Nolly, G. (2004). Equity audits: A practical leadership tool for developing equitable and excellent schools. Educational Administration Quarterly, 40(1), 133–161.CrossRef Skrla, L., Scheurich, J. J., Garcia, J., & Nolly, G. (2004). Equity audits: A practical leadership tool for developing equitable and excellent schools. Educational Administration Quarterly, 40(1), 133–161.CrossRef
go back to reference Sun, M., Mutcheson, R. B., & Kim, J. (2016). Teachers' use of evaluation for instructional improvement and school supports for such use. In J. A. Grissom & P. Youngs (Eds.), Improving teacher evaluation systems: Making the most of multiple measures (pp. 169–183). New York: Teachers College Press. Sun, M., Mutcheson, R. B., & Kim, J. (2016). Teachers' use of evaluation for instructional improvement and school supports for such use. In J. A. Grissom & P. Youngs (Eds.), Improving teacher evaluation systems: Making the most of multiple measures (pp. 169–183). New York: Teachers College Press.
go back to reference The Joint Committee on Standards for Educational Evaluation [JCSEE]. (2009). The personnel evaluation standards: How to assess systems for evaluating educators. Thousand Oaks, CA: Corwin. The Joint Committee on Standards for Educational Evaluation [JCSEE]. (2009). The personnel evaluation standards: How to assess systems for evaluating educators. Thousand Oaks, CA: Corwin.
go back to reference The New Teacher Project. (2010). Teacher evaluation 2.0. New York: Author. The New Teacher Project. (2010). Teacher evaluation 2.0. New York: Author.
go back to reference Tuytens, M., & Devos, G. (2011). Stimulating professional learning through teacher evaluation: An impossible task for the school leader? Teaching and Teacher Education, 27(5), 891–899.CrossRef Tuytens, M., & Devos, G. (2011). Stimulating professional learning through teacher evaluation: An impossible task for the school leader? Teaching and Teacher Education, 27(5), 891–899.CrossRef
go back to reference Van Gasse, R., Vanlommel, K., Vanhoof, J., & Van Petegem, P. (2017). The impact of collaboration on teachers’ individual data use. School Effectiveness and School Improvement, 28, 1–16. Van Gasse, R., Vanlommel, K., Vanhoof, J., & Van Petegem, P. (2017). The impact of collaboration on teachers’ individual data use. School Effectiveness and School Improvement, 28, 1–16.
go back to reference Vansteenkiste, M., Lens, W., De Witte, H., & Feather, N. T. (2005). Understanding unemployed people’s job search behavior, unemployment experience and well-being. A comparison of expectancy-value theory and self-determination theory. British Journal of Social Psychology, 44, 269–287.CrossRef Vansteenkiste, M., Lens, W., De Witte, H., & Feather, N. T. (2005). Understanding unemployed people’s job search behavior, unemployment experience and well-being. A comparison of expectancy-value theory and self-determination theory. British Journal of Social Psychology, 44, 269–287.CrossRef
go back to reference Vansteenkiste, M., Lens, W., & Deci, E. L. (2006). Intrinsic versus extrinsic goal contents in self-determination theory: Another look at the quality of academic motivation. Educational Psychologist, 41(1), 19–31.CrossRef Vansteenkiste, M., Lens, W., & Deci, E. L. (2006). Intrinsic versus extrinsic goal contents in self-determination theory: Another look at the quality of academic motivation. Educational Psychologist, 41(1), 19–31.CrossRef
go back to reference Watt, H. M. G., & Richardson, P. W. (2014). Why people choose teaching as a career: An expectancy-value approach to understanding teacher motivation. In P. W. Richardson, S. A. Karabenick, & H. M. G. Watt (Eds.), Teacher motivation: theory and practice (pp. 3–19). London: Routledge. Watt, H. M. G., & Richardson, P. W. (2014). Why people choose teaching as a career: An expectancy-value approach to understanding teacher motivation. In P. W. Richardson, S. A. Karabenick, & H. M. G. Watt (Eds.), Teacher motivation: theory and practice (pp. 3–19). London: Routledge.
go back to reference Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). In ) (Ed.), The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. Brooklyn, NY: The New Teacher Project. Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). In ) (Ed.), The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. Brooklyn, NY: The New Teacher Project.
go back to reference Wigfield, A., & Eccles, J. (1992). The development of achievement task values: A theoretical analysis. Developmental Review, 12, 265–310.CrossRef Wigfield, A., & Eccles, J. (1992). The development of achievement task values: A theoretical analysis. Developmental Review, 12, 265–310.CrossRef
go back to reference Yin, R. K. (2017). Case study research and applications: Design and methods (5th ed.). Thousand Oaks, CA: SAGE publications. Yin, R. K. (2017). Case study research and applications: Design and methods (5th ed.). Thousand Oaks, CA: SAGE publications.
go back to reference Young, V. M. (2006). Teachers’ use of data: Loose coupling, agenda setting, and team norms. American Journal of Education, 112, 521–548. Young, V. M. (2006). Teachers’ use of data: Loose coupling, agenda setting, and team norms. American Journal of Education, 112, 521–548.
Metadata
Title: Pointing teachers in the wrong direction: understanding Louisiana elementary teachers’ use of Compass high-stakes teacher evaluation data
Author: Timothy G. Ford
Publication date: 11-07-2018
Publisher: Springer Netherlands
Published in: Educational Assessment, Evaluation and Accountability, Issue 3/2018
Print ISSN: 1874-8597
Electronic ISSN: 1874-8600
DOI: https://doi.org/10.1007/s11092-018-9280-x
