As is well known, the U.S. Department of Education zealously believes, like Michelle Rhee, that low test scores are caused by "bad" teachers. The way to find these ineffective teachers, the theory goes, is to see whose students get higher scores and whose don't. That's known as value-added measurement (VAM), and the DOE used Race to the Top to persuade or bribe most states to use it to discover who should be terminated.

As we also know, things have not worked out too well: some Teachers of the Year were fired; some teachers got a bonus one year, then got fired the next. In many states, teachers are rated by the scores of students they never taught. The overall effect of VAM has been demoralization, even among those with high scores, because they know the ratings are arbitrary.

For some reason, teachers don't like to "win" at the expense of their colleagues, and they can spot a phony deal a mile away.

But the U.S. DOE won't give up, so it released a research brief attempting to show that VAM does work!

But Audrey Amrein-Beardsley deconstructs the brief and shows that it is a mix of ho-hum, old-hat, and wrong-headed assumptions.

It's true (but not new) that disadvantaged students have less access to the best teachers (e.g., NBCT status, advanced degrees, expertise in content areas), although, as Beardsley notes, the brief doesn't suggest that such things matter.

It is true that "Students' access to effective teaching varies across districts. There is indeed a lot of variation in terms of teacher quality across districts, thanks largely to local (and historical) educational policies (e.g., district and school zoning, charter and magnet schools, open enrollment, vouchers and other choice policies promoting public school privatization), all of which continue to perpetuate these problems."

She writes:

“What is most relevant here, though, and in particular for readers of this blog, is that the authors of this brief used misinformed approaches when writing this brief and advancing their findings. That is, they used VAMs to examine the extent to which disadvantaged students receive “less effective teaching” by defining “less effective teaching” using only VAM estimates as the indicators of effectiveness, and as relatively compared to other teachers across the schools and districts in which they found that such grave disparities exist. All the while, not once did they mention how these disparities very likely biased the relative estimates on which they based their main findings.

Most importantly, they blindly agreed to a largely unchecked and largely false assumption that the teachers caused the relatively low growth in scores rather than the low growth being caused by the bias inherent in the VAMs being used to estimate the relative levels of “effective teaching” across teachers. This is the bias that across VAMs is still, it seems weekly, becoming more apparent and of increasing concern.”
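The bias Amrein-Beardsley describes can be seen in a toy simulation (mine, not anything from the brief): suppose every teacher is equally effective by construction, but one is assigned mostly disadvantaged students whose score growth is depressed by out-of-school factors. A naive VAM that ranks teachers by their students' average growth will still rate that teacher "less effective." The teacher names, rates, and effect sizes below are all made-up assumptions for illustration.

```python
# Toy VAM-bias sketch: identical teachers, different student assignments.
import random
from statistics import mean

random.seed(0)
TRUE_TEACHER_EFFECT = 0.0  # every teacher is identical by construction

def simulate_growth(disadvantaged: bool) -> float:
    # Score growth = teacher effect + out-of-school penalty + noise.
    # The -5.0 penalty is an arbitrary illustrative number.
    penalty = -5.0 if disadvantaged else 0.0
    return TRUE_TEACHER_EFFECT + penalty + random.gauss(0, 2)

# Teacher A serves 90% disadvantaged students; Teacher B serves 10%.
roster_a = [simulate_growth(random.random() < 0.9) for _ in range(200)]
roster_b = [simulate_growth(random.random() < 0.1) for _ in range(200)]

# A naive VAM: rank teachers by mean student growth.
vam_a, vam_b = mean(roster_a), mean(roster_b)
print(f"Naive VAM, Teacher A: {vam_a:.2f}")
print(f"Naive VAM, Teacher B: {vam_b:.2f}")
# Teacher A scores several points lower despite the teachers being
# identical: the gap reflects who was assigned, not who taught better.
```

The gap between the two "effectiveness" scores is entirely an artifact of student assignment, which is exactly the unchecked assumption the quoted passage objects to.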

VAM in the real world is Junque Science.