In the brave new world of Common Core, all tests will be delivered online and graded by computers. This is supposed to be faster and cheaper than paying teachers, or even low-skill hourly workers, to read student essays.

But counting on machines to grade student work is a truly bad idea. We know that computers can’t recognize wit, humor, or irony. We know that many potentially great writers with unconventional styles would be declared failures (E.E. Cummings comes immediately to mind).

But it is worse than that. Computers can’t tell the difference between reasonable prose and bloated nonsense. Les Perelman, former director of undergraduate writing at MIT, created a machine called BABEL with the help of a team of students.

He was interviewed by Steve Kolowich of The Chronicle of Higher Education, who wrote:

“Les Perelman, a former director of undergraduate writing at the Massachusetts Institute of Technology, sits in his wife’s office and reads aloud from his latest essay.

“Privateness has not been and undoubtedly never will be lauded, precarious, and decent,” he reads. “Humankind will always subjugate privateness.”

“Not exactly E.B. White. Then again, Mr. Perelman wrote the essay in less than one second, using the Basic Automatic B.S. Essay Language Generator, or Babel, a new piece of weaponry in his continuing war on automated essay-grading software.

“The Babel generator, which Mr. Perelman built with a team of students from MIT and Harvard University, can generate essays from scratch using as many as three keywords.

“For this essay, Mr. Perelman has entered only one keyword: ‘privacy.’ With the click of a button, the program produced a string of bloated sentences that, though grammatically correct and structurally sound, have no coherent meaning. Not to humans, anyway. But Mr. Perelman is not trying to impress humans. He is trying to fool machines.

“Software vs. Software

“Critics of automated essay scoring are a small but lively band, and Mr. Perelman is perhaps the most theatrical. He has claimed to be able to guess, from across a room, the scores awarded to SAT essays, judging solely on the basis of length. (It’s a skill he happily demonstrated to a New York Times reporter in 2005.) In presentations, he likes to show how the Gettysburg Address would have scored poorly on the SAT writing test. (That test is graded by human readers, but Mr. Perelman says the rubric is so rigid, and time so short, that they may as well be robots.)

“In 2012 he published an essay that employed an obscenity (used as a technical term) 46 times, including in the title.

“Mr. Perelman’s fundamental problem with essay-grading automatons, he explains, is that they “are not measuring any of the real constructs that have to do with writing.” They cannot read meaning, and they cannot check facts. More to the point, they cannot tell gibberish from lucid writing.”
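The keyword-to-gibberish trick the article describes can be sketched in a few lines. This is a hypothetical toy illustration, not Perelman’s actual Babel code: grammatical sentence templates are filled with a keyword and stock academic filler, yielding prose that parses cleanly but means nothing.

```python
import random

# Toy sketch of keyword-driven gibberish (hypothetical; not the real Babel).
# Each template is grammatically sound; the filler words make it meaningless.
TEMPLATES = [
    "{kw} has not been, and undoubtedly never will be, {adj1} or {adj2}.",
    "Humankind will always {verb} {kw}, despite the {noun} of the modern epoch.",
    "The {noun} of {kw} lies in its capacity to {verb} even the most {adj1} of assumptions.",
]
ADJECTIVES = ["lauded", "precarious", "decent", "axiomatic"]
VERBS = ["subjugate", "interrogate", "problematize"]
NOUNS = ["quandary", "paradigm", "zeitgeist"]

def gibberish_essay(keyword, sentences=3, seed=None):
    """Return grammatically plausible but meaningless prose about `keyword`."""
    rng = random.Random(seed)
    parts = []
    for _ in range(sentences):
        template = rng.choice(TEMPLATES)
        parts.append(template.format(
            kw=keyword,
            adj1=rng.choice(ADJECTIVES),
            adj2=rng.choice(ADJECTIVES),
            verb=rng.choice(VERBS),
            noun=rng.choice(NOUNS),
        ))
    return " ".join(parts)

print(gibberish_essay("privateness", seed=1))
```

A grader that checks only grammar, vocabulary, and structure has no way to penalize output like this, which is exactly the weakness Perelman is exploiting.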

The rest of the article reviews projects in which professors claim to have perfected machines that are as reliable at judging student essays as human graders.

I’m with Perelman. If I write something, I have a reader or an audience in mind. I am writing for you, not for a machine. I want you to understand what I am thinking. The best writing, I believe, is created by people writing to and for other people, not by writers aiming to meet the technical specifications that satisfy a computer program.