If you add the scores on standardized tests for five years in a row, can you tell who the best and worst teachers are?
No.
But that’s the theory behind value-added assessment.
The idea is that an “effective” teacher raises test scores every year. The computer predicts what the test scores are supposed to be, and the teacher who meets the target is great, while the one who doesn’t is ineffective and should be shunned or banished.
But study after study shows that value-added assessment is rife with error. As this paper from the National Academy of Education and the American Educational Research Association shows, value-added assessment is unstable, inaccurate and unreliable. Teachers who get high ratings one year may get low ratings the next year. Teachers are misidentified. Data are missing. The scores say more about which students were in the classroom than the teachers’ “quality” and ability to teach well.
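The instability problem is easy to see in a toy simulation (this is an illustrative sketch, not any state's actual VAM formula; all numbers are assumptions). If a teacher's true effect on scores is small relative to year-to-year classroom noise, the teachers flagged as "ineffective" one year largely turn over the next:

```python
# Toy VAM instability sketch (assumed, illustrative numbers): each teacher
# has a small true effect, but each year's measured rating is dominated by
# noise from which students happened to be in the room.
import random

random.seed(1)
n_teachers = 100

# True teacher effects: small spread (sd = 2 score points, assumed)
true_effect = [random.gauss(0, 2) for _ in range(n_teachers)]

def yearly_rating(effect):
    # Classroom noise (student composition, test error) swamps the effect
    return effect + random.gauss(0, 6)

year1 = [yearly_rating(e) for e in true_effect]
year2 = [yearly_rating(e) for e in true_effect]

# Teachers rated in the bottom quartile in year 1...
cut1 = sorted(year1)[n_teachers // 4]
bottom_y1 = {i for i in range(n_teachers) if year1[i] <= cut1}

# ...and the bottom quartile in year 2
cut2 = sorted(year2)[n_teachers // 4]
bottom_y2 = {i for i in range(n_teachers) if year2[i] <= cut2}

stayed = len(bottom_y1 & bottom_y2)
print(f"{stayed} of {len(bottom_y1)} 'ineffective' teachers "
      f"stayed in the bottom quartile the next year")
```

Under these assumed noise levels, most of the teachers labeled "ineffective" in year one are no longer in the bottom group in year two — which is exactly the pattern the studies report.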
Teachers of the gifted are in trouble because the students are so close to the ceiling that it is very difficult to “make” them get higher scores.
Teachers of special education are in trouble because their students have many problems taking a standardized assessment. A teacher wrote me last year to tell me that her students would cry, hide under their desks, and react with rage; one tore up the test and ate the paper.
Teachers of English language learners are in trouble because many of their students don’t know how to read English.
A superintendent in Connecticut wrote me to say that his state department of education is pushing the Gates’ MET approach. I urged him to read Jesse Rothstein’s critique. In fact, the MET study won the National Education Policy Center’s Bunkum award for research that reached a conclusion that was the opposite of its own evidence.
For a fast and accurate summary of what research says about value-added assessment, read this article by Linda Darling-Hammond.
VAM is junk science. Bunk science.
Just another club with which to knock teachers, wielded by those who could never last five minutes in a classroom.
Putting the absolute stupidity of this idea aside, I’ve always thought that teachers should suggest a better approach: getting a percentage of their students’ incomes after high school graduation. The logic is that teachers who add value will produce students who make more money; therefore, to align incentives properly, the teachers should get a cut of the incomes of their graduates.
Just think how much Steve Jobs’s estate owes California’s teachers! 🙂
For published version of VAM critique: http://lancasterteachers.com/Evaluating%20Teacher%20Evaluation%20-%20PDK.pdf
Thank you Mel. I can use this.
I am reading articles on the Chicago teachers’ strike and none of them mentions this solid research evidence against VAM. It is critically important to get this out. Because it is so easy to sell the idea that objection to evaluation is just an excuse of bad teachers, or of unions protecting bad teachers, this must get into the news stories. How to do this? I don’t see Weingarten being quoted saying this. Maybe the reporters just don’t want to report it.
Just as Clinton may have won the election for Obama with a passionate comparison of facts with policies, getting the relevant evidence before the public is vitally important. I hope that among your readers are those who talk to reporters and can set them straight.
Educators can give the facts…the problem is the reporter reporting them or the newspaper printing them.
Thanks Diane.
Districts in New York are scrambling to buy web-based assessments that claim to have their own form of ‘junk science’ to rate teacher effectiveness.
In addition, New York now mandates that a numerical value be determined for every task a teacher does. The big winner here seems to be Charlotte Danielson with her magical rubric that will pinpoint who is effective and who is not.
The junk is spreading.
I don’t know anything about Charlotte Danielson. I see she is in charge of Chicago too. What is her background?
http://www.iobservation.com/danielson-collection/Biography/
Did you notice that the first and third paragraphs are exactly the same?
It’s suspicious when you can’t find out if she has a BOD, can’t find any research that supports her work, any real details about her background, or research that supports her framework. I find that very troubling.
Is she guilty of Google bombing? http://en.wikipedia.org/wiki/Google_bomb It seems that when you try to search for her BOD or background, all you get are pages of her links. Smells fishy to me.
The Hawaii State Teachers Association (HSTA) has posted the results of their survey about the “Educator Effectiveness System” here:
http://contractforthefuture.org/blog/2013/1/10/ees-pilot-program-survey-results
Note: Unlike the other 49 U.S. states, Hawaii’s public education is a statewide bureaucracy. There are no “school districts” as there are in the rest of the U.S. The HSTA speaks for all teachers in Hawaii.
As you will see, the Charlotte Danielson Teacher Evaluation Model is in full force in Hawaii. As an example of the statewide oligarchy that exists in the Department of Education, less than 1% of Hawaii’s teachers had a say in adopting the Danielson model for evaluations. And who is Charlotte Danielson anyway? As of September, Diane Ravitch, noted education scholar and author, did not know who she was. See the comments on her blog here:
Does Danielson’s philosophical model meet the standards of scientific research and validity that we should expect from a system with such high-stakes outcomes (in other words, one so extreme it could affect someone’s career)?
I have been looking for any type of valid research that supports the Danielson model. Does anyone out there know of such a study? So far, from what I’ve found, it seems like she’s a talking head philosopher who has managed to market herself very well. She makes some very good observations and suggestions for overriding philosophies regarding teacher evaluations, but to hold any teacher’s career on the line based on one woman’s philosophy is outrageous. Even Ms. Danielson does not seem to support this approach.
What I have found in my search instead, thanks to Diane Ravitch’s invaluable blog site, is a scholarly report based on academic research that raises some important concerns about implementing VAM (Value Added Modeling) in teacher and school evaluations. Please read this:
Click to access v17n17.pdf
For information about Charlotte Danielson, here is a page from her web site:
http://www.danielsongroup.org/article.aspx?type=news&page=YouDontKnowCharlotte
I tried to find the research behind “research based” and have found absolutely nothing… Has anyone found where the domains came from or their 22 components?
Actually, a lot of districts, including the one my kids attend, are looking at Danielson and related systems for evaluating teachers. Whatever the downside, the main point is that these systems focus primarily on what teachers *do* in the classroom – their practice, in other words. VAM models focus exclusively on the results – kids’ test scores. But the two things are related only as mediated by a whole host of factors, like family income, parental education level, and so on. Without taking those factors into account, VAM scores are meaningless. Yet I have never heard of a system that collects that kind of data – and it would be very hard to do in any case.

The test-score craze is blind to the quality of teacher practice and puts all the emphasis on factors that are largely determined by household characteristics. (And as to growth, if poverty affects a student’s learning, don’t you think it might affect their ability to “grow” – demonstrate score increases – as well? VAM may account for different starting levels, but it implicitly assumes that all students can increase “achievement” at the same rate regardless of their circumstances.)
This is one flavor of the “poverty doesn’t matter” school of thought. It’s not that we should presume low-income students can’t do well; it’s that we can’t expect them to hurdle a higher bar while they have weights on their feet. The basic assumption behind all this nonsense is that any “good” teacher should be able to get every single student to “grow” by the same amount during a year, and that if they are not “growing,” the only possible reason is that the teacher is not working hard enough.
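That equal-growth assumption can be sketched in a few lines (a hypothetical illustration with assumed numbers, not any district's actual formula). Two teachers contribute identically, but out-of-school factors drag down measured growth in one classroom, so a simple gain-score rating blames the teacher:

```python
# Toy gain-score sketch (assumed numbers): same true teaching contribution
# in both classrooms, but out-of-school factors reduce measured growth in
# the high-poverty classroom, flipping the teacher's rating.
teacher_contribution = 5.0  # identical true effect, in score points (assumed)

# Out-of-school drag on growth (illustrative values)
growth_drag = {"affluent school": 0.0, "high-poverty school": 3.0}

EFFECTIVE_CUTOFF = 4.0  # hypothetical rating threshold

for school, drag in growth_drag.items():
    measured_gain = teacher_contribution - drag
    label = "effective" if measured_gain >= EFFECTIVE_CUTOFF else "ineffective"
    print(f"{school}: measured gain {measured_gain:.1f} -> rated {label}")
```

Same teaching, different labels — the rating is measuring the students' circumstances, not the teacher.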
Conveniently, this puts the onus on teachers (and helps discredit their unions) while absolving us all from any responsibility to take on the challenge of child poverty in this country, or to expend resources so that schools can better serve at-risk students. Because the latter would mean spending money and raising taxes – and heaven knows such “socialistic” solutions are just nonsense.
As an aside – Michigan last year enshrined VAM measures in legislation that would determine a teacher’s performance evaluation (which in turn determines whether they keep their job, as tenure was gutted at the same time). VAM scores must make up at least 50% of a teacher’s evaluation. Other factors can be considered, but there is no minimum share for them. We’re still waiting for a Governor’s Council to recommend a worthwhile evaluation system that can square that circle.
Do we need high-quality teaching? Yes. Do we need to do a better job serving at-risk student populations and narrowing gaps? Yes. Will VAM help us get there? Absolutely not.
In NYS, districts are putting numerical values on the Danielson Frameworks in order to create data for use in evaluations. Danielson quickly picked up on this potential financial windfall and is now partnering with companies like Teachscape to create new tools that will magically generate data. It’s not about teaching; it’s about data and profits.
It is problematic. We are using the NYSUT rubric, and it may be even worse. It is back to the old checklist days and mixes supervision/coaching with evaluation. This will not improve teaching.
I had a huge disagreement with my district’s admins over the proper use of the rubric. They incorrectly think they can use their walk-thrus to collect data for APPR. Imagine stopping in a classroom for 30 seconds and thinking you can use that? Talk about a morale buster.
Last night on the PBS NewsHour, Randi Weingarten was interviewed and did not make the argument that VAM is a bad way to evaluate teachers – that the union is for good evaluation methods, not bad ones that don’t reflect the teacher’s benefit to the children. I’d like to know what gives here. Does she accept VAM in some form? What form? Is this a key issue in the Chicago strike or not?
What is actually happening?
The AFT has accepted some uses of VAM.
From the research I have seen, it is loaded with inaccuracy and is highly unstable.
It is an issue in the CTU/CPS strike. Emanuel wants VAM to have a large weight–eventually up to 50%.
If the VAM methods are as bad as the Linda Darling-Hammond critiques say—and I tend to trust her—then I don’t understand why the AFT would accept it. Does anyone have a link to their arguments? This division between her and the Chicago teachers is really damaging. I saw a bit of the Alex Wagner show on MSNBC today; they were discussing the strike, and they were all saying that the Democrats standing up to the union was a good thing. There was no awareness of these critiques. Mind you, these are down-the-line liberals. This is a real problem.
Thanks for the discussion on VAM above. There was another op-ed in the New York Times yesterday dissing the Chicago teachers and quoting the Chetty study published by the National Bureau of Economic Research, extolling value-added. It was frighteningly convincing if one doesn’t know the research.
I am a music teacher educator in Michigan, and the VAMs mandated by the state are truly wreaking havoc, as Norton suggested above. Since there is no standardized test for music (as there isn’t for most school subjects, actually), school districts get to decide which test scores to use for music teachers. So, many of them are choosing English Language Arts scores. But according to No Child Left Behind, music teachers are not “highly qualified” to teach ELA (they are not certified to do so). Shouldn’t this be illegal? By the way, the Michigan law is explicit that teacher evaluation protocols are NOT subject to collective bargaining.
I’m stunned that Diane is unfamiliar with Danielson’s work. Danielson is being called a “Guru.”
Here is a link to a conversation with Danielson about her evaluation protocol:
http://blogs.edweek.org/edweek/rick_hess_straight_up/2011/06/straight_up_conversation_teacher_eval_guru_charlotte_danielson.html
My district is adopting the Danielson model for teacher evaluation next year. The AZ legislature, controlled by ALEC, has mandated that 30% of teacher evals will be based on state test scores. Principals too will be rated by test scores.
To me, the bigger work that needs to be done is educating people that American public education is not failing. I have a highlighted copy of Diane’s book on my Nook, and I use it often to debunk the right-wing talking points that have impugned our professional integrity. I also still use “The Manufactured Crisis,” by David Berliner, written in 1995. Reformists have skewed the facts, and I am ready to correct them.
Next, we have to take back state legislatures from the regressives in time for the next redistricting in 2020. As long as state legislatures are controlling this conversation with no input from educators, we are at their mercy.
Let’s return our profession and our professionals to the status we deserve.
Just one more high school chem/phys teacher giving you a big thumbs up on what you’ve written. Thank you.
I thought that the behavioral approach was out and constructivism was in, yet we are putting teachers in a position in which they have to force their students to learn specific aspects of specific subjects, i.e., standards.
I just went through a Danielson evaluation that was forced upon us by our new superintendent with minimal orientation for the teachers. It was absolutely subjective and used as a “gotcha” form of harassment. We were completely on our own to figure it out, and if we got it wrong, oh well, too bad… you get a Needs Improvement. This is the biggest pile of manure ever dumped on teachers.