I earlier reported that the latest data show that 97% of teachers in Pittsburgh received ratings of either “distinguished” or “advanced.” Similar findings have emerged elsewhere, which makes me wonder why it was necessary to spend billions of dollars to create these new evaluation systems, which are often incomprehensible. But Kipp Dawson, a Pittsburgh teacher, wrote a comment warning that the evaluation system is flawed and riddled with unreliable elements, like VAM. Don’t be fooled, Dawson says. The Pittsburgh evaluation system was created with the lure of Gates money. It attempts to quantify the unmeasurable.

Dawson writes:

I am a Pittsburgh teacher and an activist in the Pittsburgh Federation of Teachers (AFT). Let’s not let ourselves get pulled into the trap of applauding the results of a wholly flawed system. OK, so this round the numbers look better than the “reformers” thought they would. BUT the “multiple measures” on which they are based are bogus. And it was a trap, not a step forward, when our union let itself be pulled (via Gates money) into becoming apologists for an “evaluation” system made up of elements that this column has helped to expose as NOT OK for “evaluating” teachers, for deciding which of us is an “effective” teacher, which of us should have jobs, and who should be terminated.

A reminder: VAM, a major one of these “multiple measures” and a major part of this “evaluation” system, is now widely rejected as an evaluation tool by professionals in the field, and by the AFT.

The Danielson rubrics, another major one of these multiple measures, have gone through many permutations and reincarnations in Pittsburgh and turned into the opposite of what they were at the beginning of this process: presented to us as a tool to help teachers get a window into our practice, they are now a set of numbers to which our practice boils down, numbers used to judge and label us. And “objective”? In today’s world, where administrators have to justify their “findings” in a system that relies so heavily on test scores? What do you think . . .

Then there’s Tripod, the third big measure in Pittsburgh, in which students from the age of 5 (yes, really) through high school “rate” their teachers. That could be useful to us for insight, but is it really a way to decide who is and who is not an “effective” teacher?

To say nothing of the fact that many teachers teach subjects and/or students that can’t be boiled down in these ways, so they are “evaluated” on the basis of other people’s “scores,” over which they have even less control.

Really, now.

So, yes, these numbers look better than they did last year, in a “practice run.” But is this whole thing OK? Should we be celebrating, as though we have found the answer to figuring out who is and who is not an “effective” teacher?

This is a trap. Let’s not fall into it.