It turns out that Houston has been awarding test-based bonuses for years. It turns out that tying bonuses to test scores has not been good for teachers or students. It turns out that the ratings jump around from year to year: they are inaccurate, unreliable, and unstable. Value-added assessment, as everyone recognizes, creates massive pressure to raise scores on standardized tests of questionable value. The more pressure, the less reliable the scores. The more pressure, the more teaching to the test and the more cheating. (http://nepc.colorado.edu/blog/houston-you-have-problem)
Value-added assessment is inherently incapable of producing better education because it does not measure better education. It only measures test scores. Higher test scores are a byproduct of better education. If you aim for the scores, you miss the target. The target is deeper understanding, greater knowledge, more thoughtful writing, more careful observation, a greater love of learning. The very act of measuring destroys the target instead of bringing it closer.
I know test results are the wrong way to evaluate teachers. But what is the alternative? I do not trust administrative evaluation. My child attended the Bronx HS of Science, and I am sure you have read of the controversial evaluation methods being employed there. In fact, in my experience with two children who received their entire education in New York public schools, I have found countless wonderful teachers wrestling with unhelpful administrators. This has no doubt been aggravated by the Bloomberg/Klein school reforms, but it is unclear how to move forward.
Dennis,
I understand there are certainly administrators who are ineffective. But a good system (and I don’t think the NYC DOE is a good system) promotes good supervisors who were master teachers and who know how to evaluate holistically.
If there are lousy administrators, we must ask ourselves why. The DOE has turned its back on teaching and allows people to supervise after a mere few years in the classroom (!), because it believes in the cult of management. As a result, the evaluation process is very canned, built around checklists: the supervisor cannot simply observe what’s going on in the classroom, but must vigilantly tick the correct box when the teacher completes certain tasks. It becomes an inorganic process by design, created by a bureaucracy that doesn’t trust its supervisors to be up to the task.
There are much better ways of doing this! Nevertheless, as flawed as the current evaluation system is, I would much, much rather have these observation-reports-by-numbers count toward my end-of-year rating than test scores, which vary wildly from year to year and are extremely flawed tools. High-stakes testing is not an antidote to problematic observation reports; the scores are far less reliable. A human being, even one chained to a template with boxes to check off, still has to see me several times a year, and has a better sense of my competence than any standardized test could.