The Education Policy Analysis Archives is releasing a series of articles about VAM that you should read.
Here are links to the first three. There are more on the way, including a dissection of the much-overhyped Raj Chetty et al. analysis that made the front page of the New York Times and was cited by President Obama in his State of the Union address last year.
Education Policy Analysis Archives has just published the introduction and two articles of EPAA/AAPE's special issue, Value-Added: What America's Policymakers Need to Know and Understand, guest edited by Dr. Audrey Amrein-Beardsley with assistant editors Dr. Clarin Collins, Dr. Sarah Polasky, and Ed Sloat.
http://epaa.asu.edu/ojs/article/view/1311

Baker, B. D., Oluwole, J., & Green, P. C., III. (2013). The legal consequences of mandating high stakes decisions based on low quality information: Teacher evaluation in the race-to-the-top era. Education Policy Analysis Archives, 21(5). Retrieved [date], from http://epaa.asu.edu/ojs/article/view/1298

Pullin, D. (2013). Legal issues in the use of student test scores and value-added models (VAM) to determine educational quality. Education Policy Analysis Archives, 21(6). Retrieved [date], from http://epaa.asu.edu/ojs/article/view/1160
I have blogged about these excellent reports here: http://radicalscholarship.wordpress.com/2013/01/28/nfl-again-a-harbinger-for-failed-education-reform/
The Day I Met VAM…
Shuffling through the pressing crowd at the convention center, I felt very much out of place, but at the same time a bit privileged. I, along with another teacher from the district, had been invited to attend a regional meeting for administrators for the unveiling of some incredible evaluation tool that would change everything when it came to assessment.
That was nearly a decade ago, but I still remember sitting at the table and listening to the presenter challenge our misguided understanding of success. He said it didn't matter whether 100% of our students passed the state test; that meant nothing and was no cause for pride. What really mattered was whether we showed value added to each student. That was the mark of true success, and his program, using a complex statistical formula, was just what we needed. Not only would it deliver reports on every learning objective imaginable, it would also predict the level at which each student should be performing year to year if teachers were doing their jobs.
As you might guess, the man's presentation was a huge success, and most schools in the region signed on to his program touting this new tool called the Value-Added Measure (VAM). To everyone responsible for gathering data, this was a godsend. But for everyone responsible for producing the added value for each student, it was a nightmare.
On the surface, this kind of evaluation doesn’t seem too much to ask of teachers, and I suppose that would be true if the “complex statistical formula” measuring and predicting student performance met the two criteria of testing: validity and reliability.
Validity is the degree to which a test or program measures what it is supposed to measure. In other words, if it is measuring a student’s progress based on in-school instruction, then the tool must have some way of removing possible out-of-school influences that could affect performance, such as parents getting a divorce, a move to a new home, a break-up with a boyfriend, illness, financial difficulties, etc. The problem, of course, is that there is no way to remove any of these variables from students’ lives. They don’t leave them at the door when they step into the school.
The second requirement for standardized measurement is reliability: the extent to which the instrument being used produces the same results on repeated trials. In other words, test results, when administered under like conditions, must produce like results time and time again. Only when reliability is established can one know that the test is measuring true progress in learning and not just providing a snapshot of performance on a single day.
Companies that produce these VAM programs know that their product must deliver validity and reliability. One such company, TAP, even addresses the requirements in its explanation of VAM on its website:
Value-added analysis is a statistical technique that uses student achievement data over time to measure the learning gains students make. This methodology offers a way to estimate the impact schools and teachers have on student learning isolated from other contributing factors such as family characteristics and socioeconomic background. In other words, value-added analysis provides a way to measure the effect a school or teacher has on student academic performance over the course of a school year or another period of time.
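In its simplest form, the kind of estimate TAP describes can be sketched in a few lines of code. This is a toy illustration only, not TAP's or any vendor's actual model, and every score below is invented: predict each student's current score from the prior year's score, then credit the leftover residual to the teacher.

```python
import numpy as np

# Hypothetical data: prior-year and current-year test scores for nine
# students, three per teacher. All numbers are invented for illustration.
prior   = np.array([40., 50., 60., 45., 55., 65., 42., 52., 62.])
current = np.array([44., 53., 66., 46., 58., 70., 43., 55., 65.])
teacher = np.array(["A", "A", "A", "B", "B", "B", "C", "C", "C"])

# Predict each current score from the prior score with a least-squares line.
slope, intercept = np.polyfit(prior, current, 1)
predicted = intercept + slope * prior

# The residual -- actual minus predicted -- is what a simple value-added
# model attributes to the teacher (along with everything it cannot see).
residual = current - predicted

for t in ["A", "B", "C"]:
    effect = residual[teacher == t].mean()
    print(f"Teacher {t}: estimated value-added = {effect:+.2f}")
```

Notice that anything the model cannot observe still lands in that residual, attributed to the teacher, which is the heart of the validity objection.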
TAP attempts to address validity and reliability here, but reality leaves VAM short on both requirements. Even if the programmers can allow for students' socioeconomic status and family demographics, there is no way they can remove all the variables that come attached to different students. And reliability, which depends on the stability of validity, cannot be established if the variables are always changing, as variables always do with children.
In short, the programmers of VAM tools have developed wares much like the snake oil traveling medicine men used to cook up. Because everyone is so desperate for a cure, they are far too ready to buy the "solution." The bottom line is that education has no shortcuts. It is messy. It is tumultuous: one day is never like another, no student experiences the same day twice, and teachers juggle strategies to meet each child's needs. These program designers think they have created something complex, but their formulas do not come close to the complexity of the human beings in every classroom every single day. There is no measurement for such complexity. Period.
Lisamyers.org