Politico reports this morning:

PARCC says many states with Common Core-based assessments will use automated scoring for student essays this year. A spokesman says that in these states, about two-thirds of all student essays will be scored automatically, while one-third will be human-scored. As in the past, a spokesman said about 10 percent of all responses will be randomly selected to receive a second score as part of a general check. States can still opt to have all essays hand-scored.

This is another reason to opt out of state testing.

Do you think that PARCC is unaware of the studies by Les Perelman at MIT that show the inadequacy of computer grading of essays?

Here is an excerpt from an article about Professor Perelman by Steve Kolowich of the Chronicle of Higher Education:

“Les Perelman, a former director of undergraduate writing at the Massachusetts Institute of Technology, sits in his wife’s office and reads aloud from his latest essay.

“Privateness has not been and undoubtedly never will be lauded, precarious, and decent,” he reads. “Humankind will always subjugate privateness.”

“Not exactly E.B. White. Then again, Mr. Perelman wrote the essay in less than one second, using the Basic Automatic B.S. Essay Language Generator, or Babel, a new piece of weaponry in his continuing war on automated essay-grading software.

“The Babel generator, which Mr. Perelman built with a team of students from MIT and Harvard University, can generate essays from scratch using as many as three keywords.

“For this essay, Mr. Perelman has entered only one keyword: “privacy.” With the click of a button, the program produced a string of bloated sentences that, though grammatically correct and structurally sound, have no coherent meaning. Not to humans, anyway. But Mr. Perelman is not trying to impress humans. He is trying to fool machines.

“Software vs. Software

“Critics of automated essay scoring are a small but lively band, and Mr. Perelman is perhaps the most theatrical. He has claimed to be able to guess, from across a room, the scores awarded to SAT essays, judging solely on the basis of length. (It’s a skill he happily demonstrated to a New York Times reporter in 2005.) In presentations, he likes to show how the Gettysburg Address would have scored poorly on the SAT writing test. (That test is graded by human readers, but Mr. Perelman says the rubric is so rigid, and time so short, that they may as well be robots.)

“In 2012 he published an essay that employed an obscenity (used as a technical term) 46 times, including in the title.

“Mr. Perelman’s fundamental problem with essay-grading automatons, he explains, is that they “are not measuring any of the real constructs that have to do with writing.” They cannot read meaning, and they cannot check facts. More to the point, they cannot tell gibberish from lucid writing.”