Last week, Kevin Huffman and John Ayers resigned. Huffman was state commissioner of education in Tennessee, where he employed every possible strategy to make testing the centerpiece of education policy. Ayers was director of the Cowen Institute at Tulane University in New Orleans, which was deeply embarrassed when it released, and then rescinded, a “research” report claiming amazing gains in the charter schools of New Orleans. Both men were big boosters of using student test scores to judge the quality and effectiveness of teachers, a methodology known as value-added modeling, or VAM.

Audrey Amrein-Beardsley, one of the nation’s leading researchers on teacher evaluation, sees the two resignations as evidence that the VAM-mania is failing and claiming victims. There is as yet no evidence that VAM improves teaching, raises student achievement, or correctly identifies teachers’ strengths and weaknesses. As its critics have consistently pointed out, VAM results depend on many factors outside the teacher’s control and may vary for many different reasons: a teacher may get a high VAM rating one year and a low one the next, and ratings may change if a different test is used. Yet those who stubbornly believe that everything that matters can be measured with precision can’t let go of their data-driven mindset.

The lesson: proceed with caution before embracing a methodology that has no record of success and that inevitably places far too much weight on standardized tests.