A new report prepared by Andy Porter, dean of the Graduate School of Education at the University of Pennsylvania, and Morgan Polikoff of the University of Southern California cautions against value-added measurement, the practice of basing teacher evaluations on test scores, because this method has “a weak to nonexistent link with teacher performance.”

Why are at least 30 states using this flawed measure? Because Arne Duncan made it a requirement of eligibility for Race to the Top and for state waivers. Despite the lack of evidence, or even negative evidence, states have passed laws tying as much as 50% of a teacher’s evaluation to test scores.

“Morgan Polikoff and Andrew Porter, two education experts, analyzed the relationships between ‘value-added model’ (VAM) measures of teacher performance and the content or quality of teachers’ instruction by evaluating data from 327 fourth- and eighth-grade math and English teachers in six school districts. The weak relationships made them question whether the data would be useful in evaluating teachers or improving classroom instruction, the report says.”

The article quoted the recent American Statistical Association report. How many more articles and reports will it take before D.C. pays attention?

“In April, the American Statistical Association issued a statement criticizing the use of value-added models, saying teachers account for between 1 and 14 percent of the variability in student test scores.

“Ranking teachers by their VAM scores can have unintended consequences that reduce quality,” the statement said. “This is not saying that teachers have little effect on students, but that variation among teachers accounts for a small part of the variation in scores. The majority of the variation in test scores is attributable to factors outside of the teacher’s control such as student and family background, poverty, curriculum, and unmeasured influences.”