Examining Rater Accuracy and Consistency with a Special Education Observation Protocol

Evelyn S. Johnson, Yuzhu Zheng, Angela R. Crawford, Laura A. Moylan

Research output: Contribution to journal › Article › peer-review


Abstract

Research indicates that instructional aspects of teacher performance are the most difficult to reach consensus on, significantly limiting teacher observation as a way to systematically improve instructional practice. Understanding the rationales that raters provide as they evaluate teacher performance with an observation protocol offers one way to better understand the training efforts required to improve rater accuracy. The purpose of this study was to examine the accuracy of raters evaluating special education teachers’ implementation of evidence-based math instruction. A mixed-methods approach was used to investigate: 1) the consistency of the raters’ application of the scoring criteria when evaluating teachers’ lessons, 2) the agreement of raters’ scores on two lessons with those assigned by expert raters, and 3) the raters’ understanding and application of the scoring criteria through a think-aloud process. The results show that raters had difficulty understanding some of the high-inference items in the rubric and applying them accurately and consistently across lessons. Implications for rater training are discussed.

Original language: American English
Journal: Studies in Educational Evaluation
State: Published - 1 Mar 2020

Keywords

  • feedback
  • rater accuracy
  • rater consistency
  • special education
  • teacher observation

EGS Disciplines

  • Special Education and Teaching
