Bruce Baker chides state policymakers for claiming that they are relying on MET, the Gates Foundation's Measures of Effective Teaching project.

In his previous post, Baker delves into the ways that state officials are misusing value-added measurement (VAM) and student growth percentiles (SGP). I asked Bruce if he would clarify the difference and he responded as follows (go to the link to see the video):

The key difference is explained in the previous post – which I think needs more attention:

With value-added modeling, which does attempt to parse statistically the relationship between a student being assigned to teacher X and that student's achievement growth, controlling for various characteristics of the student and the student's peer group, there still exists a substantial possibility of random-error-based misclassification of the teacher, or of remaining bias in the teacher's classification (something we didn't catch in the model affected that teacher's estimate). And there's little way of knowing what's what.

With student growth percentiles, there is no attempt to parse statistically the relationship between a student being assigned to a particular teacher and the teacher's supposed responsibility for that student's change in test score percentile rank among her peers.

The quick summary is that value-added models attempt, I would argue unsuccessfully, to parse the influence of the teacher on student test score growth, whereas growth percentile models make no effort to isolate a teacher effect. It's entirely about the relative reshuffling of students, aggregated to the teacher level.

As I say in the video:

One approach tries (VAM) and the other one doesn't (SGP). One doesn't work (VAM) and the other is completely wrong for the purpose to begin with (SGP).
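
To make Baker's distinction concrete, here is a toy sketch of the value-added side: a covariate-adjusted regression with teacher indicators, where the teacher coefficients are the "value-added" estimates. The simulated data, the variable names, and the particular covariates are my own illustration, not any state's actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate one row per student: assigned teacher, classroom, prior
# score, a demographic flag, and a peer-group mean prior score.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "teacher": rng.choice(list("ABCD"), size=n),
    "classroom": rng.choice([1, 2, 3], size=n),
    "prior_score": rng.normal(50, 10, size=n),
    "frl": rng.integers(0, 2, size=n),  # hypothetical poverty indicator
})
df["peer_mean_prior"] = (
    df.groupby(["teacher", "classroom"])["prior_score"].transform("mean")
)

# Build a current-year score with a small "true" teacher effect plus a
# lot of noise -- the noise is the source of the random-error
# misclassification Baker describes.
true_effect = {"A": 0.0, "B": 1.5, "C": -1.0, "D": 0.5}
df["score"] = (
    df["prior_score"] + df["teacher"].map(true_effect)
    + rng.normal(0, 5, size=n)
)

# Covariate-adjusted regression with teacher fixed effects; the
# C(teacher) coefficients are the value-added estimates.
vam = smf.ols(
    "score ~ prior_score + frl + peer_mean_prior + C(teacher)", data=df
).fit()
print(vam.params.filter(like="teacher"))
```

Rerun this with a different random seed and the estimated teacher coefficients shuffle noticeably, even though the "true" effects never change; that instability is exactly the random-error misclassification problem.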
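
And a matching sketch of the growth-percentile side. Actual SGPs are estimated with quantile regression conditional on students' prior scores; binning students by prior-score decile below is a crude stand-in I'm using purely for illustration. The point to notice is that the teacher never enters the calculation at all until the final aggregation step.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "teacher": rng.choice(list("ABCD"), size=n),
    "prior_score": rng.normal(50, 10, size=n),
})
df["score"] = df["prior_score"] + rng.normal(0, 5, size=n)

# Group students with similar prior scores (decile bins here, as a
# stand-in for the quantile regression real SGPs use), then
# percentile-rank each student's current score within the group.
df["prior_bin"] = pd.qcut(df["prior_score"], q=10, labels=False)
df["sgp"] = df.groupby("prior_bin")["score"].rank(pct=True).mul(100)

# The teacher appears only as an aggregation key: the median SGP of a
# teacher's students is a pure reshuffling of ranks, not a causal model.
print(df.groupby("teacher")["sgp"].median().round(1))
```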