Larry Lee reports on pending legislation intended to create a state framework for evaluating teachers.
He cites an analysis of Alabama’s proposed legislation by Audrey Amrein-Beardsley. She says that it is apparent that Alabama’s lawmakers did not inform themselves about research on teacher evaluation measures.
Amrein-Beardsley writes:
Nothing is written about ongoing research and evaluation of the state system, which is absolutely necessary to ensure the system is working as intended, especially before any consequential decisions are made (e.g., school bonuses, denial of tenure, teacher termination, or termination due to a reduction in force).
To measure growth, the state is set to use student performance data on state tests, as well as data derived from the ACT Aspire examination, the American College Test (ACT), and "any number of measures from the department developed list of preapproved options for governing boards to utilize to measure student achievement growth." As mentioned in my prior post about Alabama, this is precisely what has the whole state of New Mexico wrapped up in, and quasi-losing, its ongoing lawsuit. While providing districts with menus of off-the-shelf and other assessment options might make sense to policymakers, any self-respecting researcher should know why this is entirely inappropriate.
Clearly the state does not understand the current issues with the reliability, or consistency, of value-added/growth estimates, or rather the lack thereof, which prevents consistent classification of teachers over time. Rather, what is consistently evident across all growth models is that estimates are highly inconsistent from year to year, which will likely thwart what the bill treats here as a theoretically simple proposition.
Unless the state plans to "artificially conflate" scores, manufacturing and forcing the often unreliable growth data to fit or correlate with teachers' observational data (two observations per year are to be required) and/or survey data (student surveys are to be used for teachers of students in grades three and above), such consistency is, thus far, impossible to achieve without deliberate manipulation.
In short, Alabama legislators are considering a measure that is very likely invalid and unreliable. They really should do some more homework and go back to the drawing board.

I have little confidence that this will make a difference, but I think Audrey Amrein-Beardsley is performing a public service by speaking out on the issue. I hope that she can leverage some VIP contacts in the legislature before they act. These three interrelated measures are a legacy of the Gates-funded Measures of Effective Teaching (MET) study: $64 million to Harvard for a study so deeply flawed it could not have survived peer review.
Student surveys can present their own problems. A graduating senior came back to my classroom the week before graduation to apologize for the survey he had filled out six years earlier. As he put it, “I was a mad little pr%&* who wanted to get even with you! You made me do homework. You made me retake tests until I learned. You insisted I do my own work. You busted my a$$ with your d#&@ed requirements until I thought about murdering you. And now, I finally realized that you were the only person in my entire life who actually cared about me and I’m sorry I wrote what I wrote.” Had his “survey” been part of my evaluation all those years earlier, who knows what the repercussions would have been. They sure as heck wouldn’t be asking him about his elementary school teacher as a part of his graduation survey, would they?
And given the current love affair policymakers have with numerical data, I am sure they will reduce those student surveys to a single numerical rating. So use a wildly inappropriate tool for rating teachers, and squeeze out any useful information a teacher might use to inform their practice. Take all humanity out of education in the name of efficient accountability.