The American Statistical Association released a brief report on value-added assessment that was devastating to its advocates.

ASA said it was not taking sides, but then set out some caveats that left VAM with no credibility.

Can a school district judge a teacher's quality by the test scores of that teacher's students?

ASA wrote this:

“• VAMs are generally based on standardized test scores, and do not directly measure potential teacher contributions toward other student outcomes.

• VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.

• Under some conditions, VAM scores and rankings can change substantially when a different model or test is used, and a thorough analysis should be undertaken to evaluate the sensitivity of estimates to different models.

• VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs, or schools. Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality.”

Now, if teachers account for only 1% to 14% of the variability in test scores; if the majority of opportunities for quality improvement are found in the system, not in individuals; and if VAM rankings “can have unintended consequences that reduce quality,” then it is hard to read this statement as anything other than a warning about the danger of relying on VAM to rank teachers.

But our intrepid team of Harvard economists is unfazed!

What do Chetty, Friedman, and Rockoff say about the ASA statement? Do they modify their conclusions? No. Does it weaken their arguments in favor of VAM? Apparently not. They agree with all of the ASA's cautions but remain stubbornly attached to their original conclusion that a “high-value added (top 5%) rather than an average teacher for a single grade raises a student’s lifetime earnings by more than $50,000.” How is that teacher identified? By the ability to raise test scores. So, once again, we are offered the speculation that one tippy-top fourth-grade teacher boosts a student’s lifetime earnings, even though the ASA says that teachers account for “about 1% to 14% of the variability in test scores…”