Variance and Reliability in Special Educator Observation Rubrics

Angela R. Crawford, Evelyn S. Johnson, Laura A. Moylan, Yuzhu Zheng

Research output: Contribution to journal › Article › peer-review


Abstract

This study describes the development and initial psychometric evaluation of the Recognizing Effective Special Education Teachers (RESET) observation instrument. The study uses generalizability theory to compare two versions of a rubric for evaluating special education teachers' implementation of explicit instruction: one with general descriptors of performance levels and one with item-specific descriptors. Eight raters (four per version of the rubric) viewed and scored videos of explicit instruction in intervention settings. The data from each rubric were analyzed with a four-facet, crossed, mixed-model design to estimate variance components and reliability indices. Results show smaller unwanted variance components and higher reliability indices for the rubric with item-specific descriptors of performance levels. Contributions to the fields of intervention and teacher evaluation are discussed.
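
The abstract does not report the analysis details, but as a rough illustration of the kind of variance-component estimation that a generalizability-theory study involves, the sketch below fits a simple crossed teacher × rater × item mixed model to simulated scores. The data, facet names, and model structure are hypothetical; they are not the RESET data or the authors' four-facet design.

```python
# A minimal sketch, assuming a simple crossed teacher x rater x item design
# with simulated scores; illustrative only, not the authors' analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_teachers, n_raters, n_items = 10, 4, 8

# Simulate main effects for each facet plus residual noise.
t_eff = rng.normal(0, 1.0, n_teachers)   # teachers: object of measurement
r_eff = rng.normal(0, 0.3, n_raters)     # rater severity
i_eff = rng.normal(0, 0.5, n_items)      # item difficulty
rows = [
    dict(teacher=t, rater=r, item=i,
         score=3 + t_eff[t] + r_eff[r] + i_eff[i] + rng.normal(0, 0.7))
    for t in range(n_teachers)
    for r in range(n_raters)
    for i in range(n_items)
]
df = pd.DataFrame(rows)

# Crossed random effects via variance components on a single dummy group.
df["all_obs"] = 1
vc = {"teacher": "0 + C(teacher)",
      "rater":   "0 + C(rater)",
      "item":    "0 + C(item)"}
fit = smf.mixedlm("score ~ 1", df, groups="all_obs", vc_formula=vc).fit()

# The summary lists the estimated variance component for each facet and the
# residual; ratios of these feed standard G-theory reliability coefficients,
# e.g. sigma^2_teacher / (sigma^2_teacher + averaged error variances).
print(fit.summary())
```

In a G-theory framework, a rubric that yields a larger share of variance attributable to teachers, and smaller shares attributable to raters and their interactions, supports higher generalizability (reliability) coefficients, which is the comparison the study makes between the two rubric versions.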

Original language: American English
Journal: Early and Special Education Faculty Publications and Presentations
State: Published - 1 Dec 2019

Keywords

  • explicit instruction
  • generalizability theory
  • observation systems
  • special education teacher evaluation

EGS Disciplines

  • Educational Assessment, Evaluation, and Research
  • Special Education and Teaching

