
Apr 13 2012 - 05:18 PM
Trends in Ed: One small step for student writing evaluation software
Researchers at the University of Akron have just released a very timely study claiming that automated essay scoring (AES) software, across a variety of commercial brands, can rate student essays (or extended-response writing items) with accuracy and consistency comparable to human evaluators. The sample consisted of 22,000 short essays from students in grades 7-9, split between traditional writing assignments and responses to specific prompts, and scored against a variety of rubrics depending on the essay type. Of course, there are still caveats. As Inside Higher Ed points out, AES is likely much easier to fool than a human grader, particularly in a scenario with no human oversight of the grading process. Computers remain fairly "rigid" thinkers compared to humans, and their evaluation algorithms are likely less robust. Still, this looks like an important step in the growing computerization of the education world.
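
The headline claim of human-comparable scoring is typically quantified with an inter-rater agreement statistic such as quadratic weighted kappa, which penalizes large disagreements between two raters' scores more heavily than near-misses. The study's own code isn't public, so what follows is only a minimal sketch of how that metric works; the score range and sample data are invented for illustration.

    # Sketch of quadratic weighted kappa (QWK), the agreement statistic
    # commonly used to compare machine scores against human scores.
    # Assumes both raters assign integer scores on a shared rubric scale.

    def quadratic_weighted_kappa(human, machine, min_score, max_score):
        """Return QWK between two equal-length lists of integer scores."""
        n = max_score - min_score + 1
        # Observed confusion matrix: rows = human score, cols = machine score.
        observed = [[0] * n for _ in range(n)]
        for h, m in zip(human, machine):
            observed[h - min_score][m - min_score] += 1
        total = len(human)
        # Marginal histograms for each rater.
        hist_h = [sum(row) for row in observed]
        hist_m = [sum(observed[i][j] for i in range(n)) for j in range(n)]
        numer = denom = 0.0
        for i in range(n):
            for j in range(n):
                weight = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic penalty
                expected = hist_h[i] * hist_m[j] / total  # chance agreement
                numer += weight * observed[i][j]
                denom += weight * expected
        return 1.0 - numer / denom

    # Hypothetical scores on a 1-4 rubric: 1.0 means perfect agreement,
    # values near 0.0 mean agreement no better than chance.
    humans  = [2, 3, 4, 4, 1, 2, 3]
    machine = [2, 3, 4, 3, 1, 2, 4]
    print(quadratic_weighted_kappa(humans, machine, 1, 4))

A vendor's software "matching" human raters, in this framework, means its QWK against a human scorer is about as high as the QWK between two independent human scorers on the same essays.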
Posted in: Trends in Ed, New Learning Times | By: Fred Rossoff | 1282 Reads