The New York Times recently reported on the introduction of software that is able to grade student essays and give instant feedback. It is currently being used in a number of universities; many others are likely to follow suit.
The student submits an essay and instantly receives a graded response from a computer. The student can then revise in hopes of improving the grade.
The software inevitably will be adopted for use in schools as well as colleges and universities.
Actually the Educational Testing Service already has an essay grader that can grade 16,000 essays in 20 seconds. Michael Winerip wrote about this in another article in the New York Times, back when he had a regular education column.
In both articles, the chief critic of machine grading of essays is Les Perelman of MIT, who teaches writing. He says that it is easy to game the system, to prep for it; he also says that the system cannot identify good writing. It does not like short sentences or short paragraphs. Worse, as he says about the ETS system, it cannot tell truth from falsehood:
“He tells students not to waste time worrying about whether their facts are accurate, since pretty much any fact will do as long as it is incorporated into a well-structured sentence. ‘E-Rater doesn’t care if you say the War of 1812 started in 1945,’ he said.”
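Perelman’s point is easy to demonstrate with a toy sketch. The feature set below is invented for illustration (E-Rater’s actual features are not public), but any scorer built purely on surface features behaves this way: a true claim and a false claim look identical to it.

```python
# Toy illustration, not E-Rater's actual algorithm: a purely
# surface-level feature extractor cannot tell truth from falsehood.

def surface_features(sentence: str) -> dict:
    words = sentence.replace(",", "").replace(".", "").split()
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "comma_count": sentence.count(","),
        "has_transition": any(w.lower() in {"however", "therefore", "moreover"}
                              for w in words),
    }

true_claim = "The War of 1812 started in 1812."
false_claim = "The War of 1812 started in 1945."

# Identical features in, so an identical score out:
assert surface_features(true_claim) == surface_features(false_claim)
```

Any weighting of these features necessarily gives both sentences the same grade; the falsehood is simply invisible at this level of analysis.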
Brave New World, indeed.
Used Pearson’s SuccessNetPlus this year to satisfy a data requirement for admin. It graded the essays and gave them wildly inaccurate scores. My best writers, whose essays were filled with strong voice and engaging content, scored equal to or less than mind-numbing 5-paragraph essays. The data was useless because the scores were absurd.
Jim Davis: just a recasting of what has been going on in the standardized testing industry for some time, except that absurd results are churned out faster.
Read Todd Farley’s MAKING THE GRADES: MY MISADVENTURES IN THE STANDARDIZED TESTING INDUSTRY (2009). You will (unfortunately) find your observations repeated throughout his small tome, fleshed out with examples that will probably make you weep or make your skin crawl.
Or both.
Truly disgusting… who will grade the level of insight and voice, as Jim says above?
Why don’t we just get computers to write our books for us? Someday will computers feel our emotions for us? Experience our struggles, our joys?
Eh…no.
The programs take the place of an editor, but they cannot engage in conversation about a topic. They can’t see the muddy parts of our humanity… the muddy parts that writing sifts through.
While I have no qualm with editing grammar, it is only a fraction of what makes a writer. Yet grammar and editing have all of the curb appeal in our communities.
We have one associated with our textbooks in our school. While the computer can polish a lot of sentences that say absolutely nothing, students struggle knowing what the corrections refer to–it needs a human to interpret and guide. Also, some of our kids have figured out how to make it give them the number they want.
Run as fast as you can in the opposite direction.
“Often they come from very prestigious institutions where, in fact, they do a much better job of providing feedback than a machine ever could,” Dr. Shermis said. “There seems to be a lack of appreciation of what is actually going on in the real world.”
So, in the “real world” where classes are ridiculously large, computers need to score essays, even if they are far less accurate than a real live teacher, because that will just have to be good enough for the general populace while software manufacturers rake in the big bucks? Oh, now I get it!
Excerpt from the NY Times story:
[Two start-ups, Coursera and Udacity, recently founded by Stanford faculty members to create “massive open online courses,” or MOOCs, are also committed to automated assessment systems because of the value of instant feedback.
“It allows students to get immediate feedback on their work, so that learning turns into a game, with students naturally gravitating toward resubmitting the work until they get it right,” said Daphne Koller, a computer scientist and a founder of Coursera.]
Khan Academy is a MOOC. Look for more in elementary & secondary school.
Blurred lines. Privacy concerns as well as COPYRIGHT, branding & IP.
Get in front of this one. Kids need teachers. Especially if they spend a lot of time online. They will lose the ability to think & speak face to face in real time. It’s an important skill that you can’t get from a MOOC unless the MOOC is on learning how to speak face to face in real time.
MOOC ALERT
http://paper.li/EducationNY/1359400814
Only the multiple choice quizzes are graded automatically, as far as I know. The essays and written responses are graded through peer review, which has its own set of issues, but the companies claim that the peer scores correlate well with the scores the professor would give.
The MOOCs are extremely valuable in my opinion. I’ve learned an immense amount because I went in purely for the knowledge and not for a degree. I’d never have a chance to take a class put together by some of these really great professors without it.
This is completely different from machines grading K12 written essays on high-stakes tests (or at any time, for that matter).
I took Coursera classes and had online discussions with people from all over the world, most of whom were thrilled with the opportunity. You should give it a try; it is awesome, especially for the cost, which is nothing.
My public high school uses an online site called turnitin.com, which will score essays based on a rubric you upload or a Common Core rubric, which is already available. Installing my own rubric has proven impossible, so I grade the essays on my own. The Common Core rubrics do not account for course content, only ELA standards. I simply use the website as a repository, eliminating those enormous stacks of paper. The site checks for common grammatical and usage errors, but I score for that and content, then assign the grade. As a history teacher, it takes over some of the more tedious parts of my job, while leaving me to determine the score based on my own criteria. I like it, except it doesn’t catch all errors, and marks some things as errors that obviously are not (such as the student’s name). All in all, it’s worked well for us this year, but that’s because we still score our students’ work. I don’t want a computer grading essays any more than I want a computer writing them.
In my area, we use a program called My Access to do the same thing. If the kids do the grammar and spelling corrections, then I grade for content. It’s all on-line so it saves a lot of paper. It keeps me from having to read the word “Constitution” misspelled three different ways in the same essay, so it’s useful for basic formatting.
Our state grades the State Writing test electronically. We have to change our entire bell schedule for two days so that the kids have the 90 minutes required to do the essay, because it all has to be done in one sitting. My son got a perfect score on the essay last year. He has terrible grammar and struggles with organization, but he has an enormous vocabulary, and that’s what I think got him the perfect score. The computer program is impressed with big words.
Wouldn’t a simple Spellcheck option, which most WP programs have integrated now anyway, also save you from having to read multiple spellings of the same word?
(And I can only imagine what such a program would do to all my sentence-fragment parenthetical asides! LOL)
Yes, spell check would help, but for whatever reasons the kids ignore those red lines. Having a “score” seems to help. But I wouldn’t let the computer grade the essays. I need to see their writing myself.
I can’t wait for our first training on this! We will learn the “buzz words” and how to make sure they are incorporated into our daily writing. It will start in Kindergarten and we will proudly display our “writing” on the wall all the way up. I see the writing on the wall. Or should I say the typing.
Artificial intelligence systems have not, of course, evolved to a point where they can actually read with something like understanding. I haven’t been able to find any information about the algorithms used for these grading systems, but I suspect that they look at matters like sentence length, paragraph length, word length, word frequency, appearance of key words from the prompt, number of commas, and number of transition words from a list (simple stuff), and that they have correlated these to a high degree with scores from human graders on a bunch of tests.
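My guess can be made concrete with a little sketch. Everything here is hypothetical: the features and the weights are invented, standing in for coefficients that a real system would presumably fit by regressing surface features against human graders’ scores.

```python
import re

# Hypothetical surface features of the sort such systems might count.
def features(essay: str) -> list[float]:
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = essay.split()
    transitions = {"however", "therefore", "moreover", "furthermore"}
    return [
        len(words) / max(len(sentences), 1),              # avg sentence length
        sum(len(w) for w in words) / max(len(words), 1),  # avg word length (crude)
        float(essay.count(",")),                          # comma count
        float(sum(w.lower().strip(",.") in transitions for w in words)),
    ]

# Invented weights: a real system would fit these against human scores.
WEIGHTS = [0.1, 0.5, 0.2, 0.8]
INTERCEPT = 1.0

def machine_score(essay: str) -> float:
    return INTERCEPT + sum(w * f for w, f in zip(WEIGHTS, features(essay)))
```

A scorer like this rewards longer sentences, longer words, commas, and stock transition words no matter what the essay actually says, which is exactly the gaming opportunity Perelman describes.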
This is not grading. It substitutes correlation for understanding, and like the standards themselves, it encourages formulaic writing. But formulas are the enemy of good writing, as every decent writing teacher knows. We don’t want a nation of robot students incapable of producing anything but a five-paragraph theme. The rise and fall of the Dow Jones used to be said to be highly correlated with the lengths of skirts. But even if that were the case, it would have been a mistake to make financial decisions based upon trends in skirt length. Both could be related to a third causative factor, or to a set of causative factors that, though historically coupled, became uncoupled for one reason or another.
The same sort of problem one finds with these grading systems is to be found with the ubiquitous readability measures, which also look at matters like word frequency and sentence and word length. Consider Noam Chomsky’s famous sentence
Colorless green ideas sleep furiously.
This would be rated very readable by most readability systems. But it is utter nonsense. Or consider this phrase from Dylan Thomas:
the twelve triangles of the cherub wind
The phrase has one low-frequency word, cherub, but it would still get a high readability score. But it’s not an easily readable phrase, of course. It’s extraordinarily perplexing. Understanding it depends upon one’s remembering that old maps used to picture cherubim on them, blowing in various directions and forming “triangles” of wind, with the mouth of the cherub at the apex. And that’s just the beginning. One then has to relate this obscure reference to the rest of the context of the poem, and no two scholars or readers agree upon exactly how to do that.
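For the record, the standard Flesch Reading Ease formula uses nothing but words per sentence and syllables per word. Fed hand-counted numbers for the Thomas phrase (treated as a single one-sentence unit, syllables counted by hand, since automatic syllable counting is itself unreliable), it comes out around 79 on a 0–100 scale, i.e., “fairly easy”:

```python
# Flesch Reading Ease:
#   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# Higher scores mean "easier" text.

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# "the twelve triangles of the cherub wind":
# 7 words, 10 syllables, treated as one sentence.
score = flesch_reading_ease(words=7, sentences=1, syllables=10)
print(round(score, 1))  # → 78.9
```

The formula has no access to allusion or sense; the extraordinary difficulty of the phrase is invisible to it.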
Or think of something like Emily Dickinson’s “I Heard a Fly Buzz When I Died” or Ralph Waldo Emerson’s “Brahma.” Both poems use fairly short phrases or sentences and absolutely common words. However, both are very, very complex conceptually. Reading either is NOT a simple undertaking.
If these grading systems are adopted generally, then perhaps we should start instructing students to write essays that begin like this:
Do colorless green ideas sleep furiously? Not if [key word from prompt] excoriate marjoram. As will be shown below, there are three persuasive reasons to lollygag the effusiveness of peripatetic [key verb from prompt with suffix to make it nominative]. First, intransigent evenings exfoliate in carburetors, round about the mulberry orientation. Second, . . .
In the note above, I meant, of course, “high readability score,” not “low readability score.”
ETS researchers working on their e-rater program have a lot of published work and a pretty extensive bibliography of the research used in the e-rater here: http://www.ets.org/research/topics/as_nlp/bibliography/
College Board employees also turn out a lot of “research” to show that College Board products (the PSAT, SAT, AccuPlacer, and Advanced Placement, for example) are better than sliced pepperoni…but independent research just does not confirm the College Board findings.
Thanks very much for the link. I do want to say that I fully support work on such systems. I just have my doubts about using them when the stakes are really high, and I worry about their leading to the teaching of awful, formulaic writing. I write and edit for a living. Whenever I load a word-processing program, I turn off the grammar checker first thing because it’s generally terrible. It flags a lot of stuff that is perfectly grammatical. Well-crafted language is complex, nuanced, and unpredictable. Years ago I was judging at a high-school speech tournament. One of the students had done a dramatic interpretation in which she performed one of e.e. cummings’s poetic deconstructions as a piece of scat singing. The performance was brilliant. What a perfect analogue to what cummings was doing in the poem! I rated this student’s performance first. The other judges didn’t get it, at all, and rated her dead last.
It’s a MADhouse! In our related arts classes, we are encouraged to have our students occasionally engage in writing activities. We were given sample writing prompts to use as guidelines.
Well, every short writing piece my students submit is identically interchangeable with all the others, with the exception of specific and personalized nouns and adjectives. There is no individualized structure, no real element of true personalization. These templates for writing are as bad as Mad Libs. So THIS is what passes for “writing” these days?
No, they should not. I agree with the comments above. A computer can look for key words and criteria. But that is not writing. I honestly believe they use the same type of program to evaluate resumes and cover letters, overlooking the best and brightest.
I was a writing teacher for 17 years. How in the world can one TEACH writing if she never reads student work? It’s the ultimate formative assessment! It informs what you do next! Ah well, every reader of this blog knows that. How do we tell the corps who are making all the decisions?
What a joke. Who says you have to write in exactly the manner the computer is programmed to want to see? This was always my battle. They wanted me to write their way and I wanted to write my way. I write my way. If this is what happens where will the creative writing come from? It can’t happen. Spell check makes mistakes. I have caught it many times. That is why I have 5 dictionaries. Three regular including unabridged and two law dictionaries. Many times I have to go to the unabridged. It is all soooo regimented and non-creative. Do we want clones (Cylons)?
Yes, George, they DO want clones. This is part of the reform/corporate take over/1% movement. Now our students will be told their creative, wonderful writing is NOT wonderful but, instead, is wrong, such as Jim Davis describes above. In order to attain high scores on writing assessments, students WILL have to conform to the computer-graded standards, and teachers will be under-the-gun to teach them to write that way–or else.
This is actually a perfect fit for the reformers, because they do not understand learning, teaching, child development, or statistics, and now we see they also do not understand writing!
I posted about this here, if interested: http://atthechalkface.com/2013/04/05/your-very-existence-is-an-act-of-rebellion/
very, very moving. thanks for posting this, Barbara!
This is not entirely new news. The Criterion online essay grader was used at my former school in Florida for years. I realized its impact when students in my journalism class inserted false info and data into their writing. When questioned, the students told me only their “voice” and conventions mattered. Ack. The school’s writing scores on the FCAT were always high, though.
I will let my computer read this and provide a response…..see if it can recognize a bad idea when it reads one…..
I constantly hear about standardized tests being about bubbling in score sheets when people protest standardized testing, but I wonder about what states are actually doing. NYS, when it developed state tests for standards-based education even before NCLB, developed short-answer and essay questions that were standardized too. They are terrible: based on points and “ANCHORED” by instructions telling people how to score them. The essays for English are silly and very awkward, not the way most people would ever write an essay, except that NYS needed some kind of essay that didn’t dictate what the reading was and needed some standard score. They are awful. Boring and uninteresting to write, and boring to read and score. But they are uniform enough to be scored in a standard way. I can’t believe other states aren’t doing this too, but protesters find it easier to mention the multiple-choice format. If we ignore these types of tests too, we may end up with even more tests leaning this way, and it isn’t pretty. The point of writing is not just to impart information, but to engage the person who is reading the paper and interest them in the subject or literature being discussed. Maybe now that computers are being used, educators can look more deeply into using standards for education at all.
lelling: PLEASE READ Todd Farley’s book–as recommended above by Krazy TA. AND read the Todd interview & his HuffPost article as I recommended below.
Computers may do a bad job now, but artificial intelligence is growing by leaps and bounds. I would be very surprised if computers didn’t make most of us teachers obsolete within a couple decades. Typesetters, travel agents, bank tellers, family farmers…teachers are next. Technology routinely makes whole categories of workers obsolete.
The field of artificial intelligence is stalled. What we are seeing with Google Translate, Watson, etc. is an admission of failure and a decision to use brute-force statistics to “correlate” items rather than understand them. That is why Google translations are sometimes so nonsensical. The danger is that the powers that be are mistaking this number crunching for true machine thinking.
Agreed, Scott F. It’s one thing to automate relatively standardized jobs, like clerical jobs and even manual jobs that are very routine and can require actions that can be performed by machines, but it’s another to automate jobs that require original thought and nuanced interpretation. The idea that HAL is just around the corner is long dead.
As I noted elsewhere, my biggest fear is that we’ll redefine (i.e., dumb-down) our definitions of intellectual work to accommodate the more limited capacities of computers and machines. We can’t allow “education” to be defined as “education that can be provided by a computer”.
Jim Davis, Krazy TA & all readers: also–read Diane’s post “An Interview with Todd Farley” (December 27, 2012) and the link, within, to his June 8, 2012 Huffington Post Education article, “Lies, Damn Lies and Statistics, or What’s Really Up With Automated Scoring.” You’ll receive a real education about computerized essay scoring.
I also would add something far more sinister: The transformation of education from an activity focused on the development of our intellects–the core of our humanness and therefore humanity–into a normalizing of our behavior according to our wanna-be corporate masters.
The goal of writing is to express one’s thoughts in a way that enables others (i.e., humans) to at least grasp, if not intimately “share”, those thoughts. Writing, like speech, is a fundamental human activity that is critical for civilized societies to exist. To teach writing, then, is to help a student develop their skills to understand, organize, and express their thoughts. Traditionally, this was done by teaching the Trivium (logic, grammar, and rhetoric) to enable students to think clearly, express themselves accurately, and argue convincingly.
Since the goal of writing is to extend one’s internal understanding to another through written expression, students need skilled and experienced human colleagues to hone their skills. At best, a computer can only offer a program that executes some sort of pattern analysis and scoring according to a correlative model that is based on someone’s (or some group’s) view of what “good” writing looks like. In other words, since computers don’t “think” (a minor point that seems lost on most journalists, business types, and teachers), the computer can only try to find correlative similarities between submitted writings and some aggregate model of writing. To the computer, the submitted writing is just a string of encoded electrical impulses that are manipulated according to some algorithm to produce an encoded output that is displayed in a form that we can perceive. But to call that “understanding” is a travesty.
In fact that defeats the very purpose of writing, since there can be no hope of establishing an understanding–and hence communication–between the writer and the reviewer. Computers don’t understand anything; hence any output from the computer can’t convey any understanding of the subject essay. The teacher who can only read the computer’s assessment thus can’t understand the writing. How then, can this serve the purpose of training and coaching good writers?
So, if the idea of computer grading actually defeats the purpose of writing, and if the very programs that do the “grading” are likely to be at least as biased if not more biased than human graders, then what’s the point? Well, profits for companies that make the software are important. But there’s much more at stake. Replacing writing teachers is just another nail in the coffin of humanities education. Education will become still more like technical training. Worse, as you noted Cathy, by forcing students to “write to the model”, the owners of the code, like Bill Gates and Rupert Murdoch, will actively mold the mind of the young to their own models of humanity. And we’re talking about some pretty withered human beings here.
Time for home schooling.
M&S: I so agree with you as to your analysis of the situation (see my response to George, above, at 2:18 AM). You’re exactly on point here. We are seeing a reverse revolution, that of the 1%: a throwback to an America with no unions and worker drones, but even worse. Now there is control of the press (look at dictatorships; Haiti comes to mind; the Koch bros. expressing interest in buying the Chicago Tribune & L.A. Times), of schools, colleges & universities, of media outlets (Walton, CNN), and this latest: our writing skills, not to mention HOW we write. Cursive died a slow death long ago, thanks to test-prep time, no time to teach, and, last but not least, technology (everyone types & texts; adolescents can’t even spell anymore for all the text shortcuts).
And we’re losing the ability to think carefully and develop the patience to learn and master new subjects; everything now has to be fast, exciting, and superficial.
well said!!!
As an English teacher, as much as this would help my workload, I would not be doing my job. The student’s job is to write the essay, and mine is to read, respond, correct, and assess the essay (or any other writing). Besides, the reading of a student’s writing is a personal connection to the student. There is an agreement: the student writes it, I read it.
This is just another example of the business world thinking they know what’s best for education. “Better” = faster, more “accurate”, more predictable, less expensive.
What sounded all too familiar to me, as a public school teacher, was the part in the article stating that the program will free the teacher to do “other things”. AKA: paperwork, I would bet. Gotta be accountable, after all. Pre-K through Doctorates.
Bush had a task force investigating the standardization of testing for all the colleges and universities in the USA. Bet he loves this.
Since when did Gates, Murdoch, Bloomberg, Broad, etc become such experts in the field of education?
Throw the bums out. Please.
😦
Hmmmm! So all along we are the prototype for the robots who will inherit the Earth.
The human develops and inputs the information for those inevitable nonhuman characters, and we will obey. Starting to feel like it!! The problem, of course, is that no matter how human-like a robot can be produced, it can NOT be human. The pain felt by the very right-brained human, who is filled with creative wit and many artistic talents, and whose thinking adapts to logic and brings color to the black-and-white world of the very left-brained thinker… all the children will become slaves to the metallic nanny and teacher from the moment the metallic nurse or doctor holds them.
I need to go out and look at a rainbow because my computer is starting to growl at me!!!
Just sayin!!!!
When I worked for Arlington County Government, it paid for an outside consultant to reevaluate every position in the County. The department that did best on the first cut was the Library, because the Librarians researched the company, found examples of what they had done elsewhere, and wrote their job descriptions to match what appeared to be the rubrics the company was using.
I was in data processing. On first cut we had not done well. But there was an appeal process. As it happens, our director’s significant other (I do not remember if they had already married by then) was one of the top library people. But now we needed more than their rubrics, because it had to be an oral presentation. Out of the 40 or so people in the department, I was selected to do the presentation, both because I was in the middle of the position ladder we were trying to get reclassified, and because I was considered the most persuasive advocate.
They had one person with a checklist, and some note paper. Less than five minutes into my 20 minute presentation he simply put his pencil down. Apparently the checklist was to be factors that uprated the job and factors that downrated the job. But my presentation would not let him do that. Further, my presentation was demonstrating a skill and knowledge level that they had not accounted for in evaluating our positions. The end result of the appeal was that our salaries were set almost 20% higher than on the first pass.
The point of this story is not to brag, although I am more than proud of the success I achieved. The point is that there was an appeal process to a human being, and it was possible to make a difference. Also, that on the first pass the human evaluators were acting much as computer programs do, looking for key information, as the librarians had been able to demonstrate with their success.
What would be fun is to feed some classic works into these grading programs. Since they are not looking for fiction, novels and short stories probably would not suffice. But how about part of The Making of the President by Theodore White, or a portion of a Pulitzer Prize-winning news article? Maybe a summary chapter from an award-winning biography. How would that work?
I have served as a Reader of the free response questions for the AP US Government and Politics Exam. It is intense and time-consuming. We are using a pre-established rubric. Once we are trained, if there is any doubt about how to score a particular response we ask someone else what they think. We do not have to account for grammar, usage, or spelling: on a test, without time to plan and revise, that might be inappropriate unless it is a test ON grammar, usage and spelling. We were looking for specific content and the ability to apply it in response to the prompt/question. One time we gave full credit to a political cartoon that clearly had all the necessary information organized in an effective manner.
As to voice and structure? Let me start with the latter. God, I hate to think what our literature would be like if all of my favorite authors had been drilled and killed in the five-paragraph format. Clearly there should be more than one way of structuring an answer. That in our assessments we depend so much upon selective response items (multiple choice) forcing convergent thinking with no opportunity to explain one’s reasoning is bad enough. Let us not in our assessment continue to drain out the imagination and creativity that can be so important in learning by forcing everyone to write in a similar fashion so that the computer can understand it.
Let’s also note another thing. I write very differently when I compose on a computer and when I write by hand. As it happens, I type very quickly, so I am not disadvantaged for time in writing on a computer. But my thinking patterns change. Given my druthers, for things that are very important, I still prefer to do at least some of the writing process on paper, because I still find it a deeper kind of thinking.
That’s me.
I suspect I am not alone.
For whatever the words of a now-retired social studies teacher are worth.
Worth a great deal and beautifully put! Thank you!
I am a special education advocate, and I am so fed up with the trend and unraveling of our education system that I hurt inside! My LD children are often extremely creative and need the extra time, patience, and accommodations in order to achieve and blossom. I fear for their future and the loss to society of their gifts.
“Worse, as he says about the ETS system, it cannot tell truth from falsehood:”
It sounds like these computer graders are designed to grade our politicians’ speeches!
ROFLMAO!
We use a computer to score writing prompts. The computer scores are very inflated compared to our rubric scores. I have kids print out the computer essay with the computer score, and then I read and score so students know the “real” score. The computer does assign them game-like activities for each of the six traits of writing. That part is fine. But the actual scores seem tied to some formula that does not align with actual writing scores.
Considering how awful computer translators are, I would not want a computer to grade anything I wrote. Computers miss the finer points of language, tone, word choice, even some facts.
The biggest loss might be the missed opportunity for teachers to work together and assess their students’ writing. Through this process, a lot of learning about writing instruction can occur.
Puny earthling teachers, you cannot resist the power of Neoliberaltron.
With our infinite power, we will soon reform you all into Soylent Green.
hee hee
Don’t laugh, Bob, we’re halfway there!
Scientific research in natural language processing is extraordinarily valuable for many purposes. It can help us, for example, to do mundane tasks like spell checking. It can provide us with rough (very rough) translators.
But the whole point of writing is for one human being to communicate to another human being. There HAS TO BE another human on the other end of the communication, or the whole undertaking is moot.
One more thing: what is most valuable in any spoken or written communication is what is most unpredictable. How I would love to conjure up the shade of Allen Ginsberg and get him to write a response to one of these damned inane standardized test prompts! The result would be a delight to read, I am certain, and it would affirm our humanity. And I am equally certain that it wouldn’t receive a passing score.
into, not in
Rules for writing are made to be broken. Every good writing teacher knows this and teaches it, like a mantra, to his or her students. I used to tell my students, quoting Robert Frost, “No surprise in the writer, no surprise in the reader.” What happens when a computerized grading system encounters an error made for effect? Like this. It punishes that tiny bit of creativity. But more to the point, what happens when it encounters rebellion, when it encounters the student who writes the brilliant piece intended, satirically, as an attack on the inane writing prompt (“Write about a time when you overcame a challenge. Write about three ways to be more productive”)? Well, it punishes that rebellion. What it will reward is formulaic writing, conformity.
In his moving, ironically idiosyncratic book The Western Canon, Harold Bloom says that the one thing all great writers have in common is that they are STRANGE. They don’t give you what you are looking for, what you expect.
It would be amusing, indeed, to have Rumi or Blake or Whitman or Ginsberg write in response to one of these inane standardized test writing prompts and to see what a computerized grading system makes of that.
Robert, Thank you for writing this. With my students, I have used the first part of the Robert Frost quote “No tears in the writer, no tears in the reader”. When an 8th grader bares his soul about being in a fight, or shares her pain about the death of a grandparent, or takes a risk and talks about crying when a beloved pet is taken away…that is precious and sacred and sometimes, as you say, strange. I want to issue a warning…Keep your machine grades and rubrics and 5 paragraph formula standards away from my students!
If you want little robots to spew out five-paragraph themes (like those encouraged by state writing tests, the new Common Core State Standards [sic], and these electronic grading systems), then why not just skip the students altogether and replace them with robots with built-in theme-writing software? Isn’t trying to robotize students just a silly stop-gap measure? The theme and story writing software has actually gotten very, very good, as some of you doubtless know. Certainly, it’s much better than any student will ever be at spewing out efficient, thoroughly machine-predictable writing.
I didn’t read your comment before posting mine, but it looks like we are on the same page. I am reminded that the great writers of history (as well as great visual artists) were often rejected by the pundits of their own time. Muhammad, for example, was criticized by the people of His day but He set the standard for writing in Arabic ever afterward.
Conventional writing techniques and formats are all quite necessary. It is important to teach children the foundations of writing before letting them develop their own signature writing style. As said in this article, the computer grades against sentence-structure mistakes. If children had a better foundation in sentence structure, they would have much better grades.
Interesting debate about computer grading. What will we have next: computer-graded artwork and poetry? Florida discontinued short and extended answers in its standardized tests because it cost too much to pay low wages to uncertified evaluators to check the answers. The projected format for the new PARCC tests based on the Common Core, coming in 2015, is supposed to be essay-based. Students are to write two essays, one to a picture prompt and another to a piece of literature, which supposedly will assess their higher-order thinking skills in addition to basic knowledge. Who wants to bet that the test-makers plan to computerize the grading on these exams? I don’t disagree with computers looking at conventions (spelling, grammar, sentence structure, organization), but the content should be read by an actual human being; otherwise we are training a generation of imitative robots.