Generalizability of Student Writing across Multiple Tasks: A Challenge for Authentic Assessment

John D. Hathcoat and Jeremy D. Penn

Critics of standardized testing have recommended replacing standardized tests with more authentic assessment measures, such as classroom assignments, projects, or portfolios rated by a panel of raters using common rubrics. Little research has examined the consistency of scores across multiple authentic assignments or the implications of this source of error for the generalizability of assessment results. This study provides a framework for conceptualizing measurement error when using authentic assessments and investigates the extent to which student writing performance generalizes across multiple tasks. A generalizability study found that 77% of error variance may be attributable to differences within people across multiple writing assignments. Decision studies indicated that substantial improvements in reliability may be gained by increasing the number of assignments rather than the number of raters. Judgments about relative student performance may therefore require closer scrutiny of task characteristics as a source of measurement error.
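To illustrate why adding assignments pays off more than adding raters when person-by-task variance dominates error, the sketch below computes a generalizability coefficient for relative decisions under a fully crossed person × task × rater design. The variance components and the function name are hypothetical placeholders, not the study's estimates; they are chosen only so that the person-by-task component makes up 77% of single-observation error variance, echoing the figure reported above.

```python
# Illustrative decision (D) study sketch for a fully crossed
# person x task x rater design. All variance components below are
# hypothetical; they are set so the person x task component is 77%
# of single-task, single-rater error variance (0.77 / 1.00).

def g_coefficient(var_p, var_pt, var_pr, var_ptr_e, n_tasks, n_raters):
    """Generalizability coefficient for relative decisions:
    universe-score variance divided by itself plus averaged relative error."""
    rel_error = (var_pt / n_tasks
                 + var_pr / n_raters
                 + var_ptr_e / (n_tasks * n_raters))
    return var_p / (var_p + rel_error)

# Hypothetical components: person, person x task, person x rater, residual.
var_p, var_pt, var_pr, var_ptr_e = 0.30, 0.77, 0.05, 0.18

for n_tasks, n_raters in [(1, 1), (1, 3), (3, 1), (4, 2)]:
    g = g_coefficient(var_p, var_pt, var_pr, var_ptr_e, n_tasks, n_raters)
    print(f"tasks={n_tasks}, raters={n_raters}: G = {g:.2f}")
```

Under these assumed components, tripling raters barely moves the coefficient, while tripling tasks raises it markedly, because only the task-related error terms shrink with additional assignments.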


