Stumping e-rater: challenging the validity of automated essay scoring
Abstract
For this study, various parties were invited to “challenge” e-rater—an automated essay scorer that relies on natural language processing techniques—by composing essays in response to Graduate Record Examinations (GRE®) Writing Assessment prompts with the intention of undermining its scoring capability. Specifically, using detailed information about e-rater's approach to essay scoring, writers tried to “trick” the computer-based system into assigning scores that were higher or lower than deserved. E-rater's automated scores on these “problem essays” were compared with scores given by two trained, human readers, and the difference between the scores constituted the standard for judging the extent to which e-rater was fooled. Challengers were differentially successful in writing problematic essays. As a whole, they were more successful in tricking e-rater into assigning scores that were too high than in duping e-rater into awarding scores that were too low. The study provides information on ways in which e-rater, and perhaps other automated essay scoring systems, may fail to provide accurate evaluations, if used as the sole method of scoring in high-stakes assessments. The results suggest possible avenues for improving automated scoring methods.
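The abstract does not spell out exactly how the automated and human scores were compared, only that the difference between them served as the standard for judging whether e-rater was fooled. The sketch below is a minimal illustration of such a discrepancy check, assuming the automated score is compared against the mean of the two human readers' scores; the essay records, field names, and flagging threshold are all invented for illustration and are not taken from the study.

```python
from statistics import mean

# Hypothetical essay records: e-rater's automated score and two human reader
# scores on the GRE Writing Assessment's 0-6 scale. All values are invented
# for illustration; the study's actual data are not reproduced here.
essays = [
    {"id": "A", "e_rater": 5, "human": (3, 3)},   # possibly scored too high
    {"id": "B", "e_rater": 2, "human": (4, 5)},   # possibly scored too low
    {"id": "C", "e_rater": 4, "human": (4, 4)},   # agreement
]

# A simple discrepancy measure: automated score minus the mean of the two
# human scores. A positive value would suggest e-rater was "tricked" into
# over-scoring, a negative value into under-scoring. The threshold of 1.0
# is an assumption, not the criterion used in the study.
THRESHOLD = 1.0

for essay in essays:
    discrepancy = essay["e_rater"] - mean(essay["human"])
    if discrepancy >= THRESHOLD:
        verdict = "over-scored by e-rater"
    elif discrepancy <= -THRESHOLD:
        verdict = "under-scored by e-rater"
    else:
        verdict = "within agreement"
    print(f"essay {essay['id']}: discrepancy {discrepancy:+.1f} ({verdict})")
```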
Keywords: Writing assessment, Graduate Record Examinations (GRE), Validity, Automated scoring, Essay scoring, Computer-assisted
Article history: Available online 20 November 2001.
DOI: https://doi.org/10.1016/S0747-5632(01)00052-8