Bruce Lederman is representing his wife, Sheri Lederman, a fourth grade teacher in Great Neck, New York, in a legal challenge to New York State’s teacher evaluation system. Several readers asked to see the court papers, and I will post some of the affidavits from nationally recognized experts in a day or two. For now, here is Bruce Lederman’s explanation of the theory behind the legal claim on behalf of Sheri Lederman. The New York State Education Department sought to have the case dismissed without a hearing, but the state Supreme Court allowed the case to proceed. Oral arguments will be held on August 12 at 10 a.m. in Albany, in Judge McDonough’s courtroom at 10 Eagle Street. If you are interested, please attend.

Bruce Lederman writes:

Diane:

Several of your readers have asked for an explanation of the legal theories behind the Lederman v. King lawsuit. I am attaching the reply memorandum of law, which explains in detail the evidence and expert opinions in the case, as well as the legal arguments at issue. I am also attaching reply affidavits, both expert and factual, from Aaron Pallas (Columbia), Linda Darling-Hammond (Stanford), Audrey Amrein-Beardsley (ASU), Sean Patrick Corcoran (NYU), Jesse Rothstein (Berkeley), Carol Burris, Sharon Fougner, and myself (my affidavit includes an important email exchange with Professor John Friedman, co-author of the widely cited Chetty, Friedman & Rockoff studies).

To summarize for your readers, we are proceeding on three legal theories. First, we seek to have Sheri’s Growth Score Rating of 1 out of 20 points declared null and void under New York law on the grounds that it is “arbitrary and capricious.” Under New York law, any action by a State agency (in this case the Department of Education) can be challenged as “arbitrary and capricious,” which the courts generally define as irrational and unreasonable based upon the facts. Second, we assert that the New York Growth Model (a VAM program) actually violates New York law because it does not measure growth as defined in Education Law §3012-c(2)(i), is not transparent and available to teachers before the beginning of the school year as required by Education Law §3012-c(2)(j)(1), and does not allow all teachers to earn all points as required by Education Law §3012-c(2)(j)(2). Third, we argue that if Sheri is not allowed to have the individual facts of her case reviewed, and is instead rated by a computer program whose results cannot be reviewed by a human being based upon real-life facts, then she has been denied due process of law in violation of the Constitution. We ask, rhetorically: is this 2001: A Space Odyssey, where the computer is always right and common sense has gone out the window?

One specific thing we are challenging is that she received a growth score of 14 out of 20 in 2012-13 and a growth score of 1 out of 20 in 2013-14, even though the proficiency of her students (i.e., students whose scores meet or exceed state standards) was virtually identical, and there is no rational explanation for such wild swings in scores from year to year. Her case also illustrates the ceiling effect that arises when teaching high-performing students. For one student, she received a failing student growth percentile (SGP) of 27 out of 100 because the student got 60 out of 60 questions right on his 3rd grade test and 64 out of 66 questions right on his 4th grade test while in Sheri’s class. Even though the student scored in the 98th percentile, the teacher was rated in the 27th percentile because a child got two questions wrong. Is that rational?
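
For readers who want to see the mechanics, here is a minimal Python sketch of how a percentile-based growth measure can fail a near-ceiling student. This is not New York’s actual model, and the peer score distribution below is invented; it is chosen only to show how a student who answers 64 of 66 questions correctly can still land near the 27th percentile when his peer group also aced the prior test.

```python
# Minimal sketch of a student growth percentile (SGP) near the test ceiling.
# NOT New York's actual model: the peer distribution below is hypothetical,
# invented only to show how a 64/66 result can land near the 27th percentile.

def student_growth_percentile(score, peer_scores):
    """Percentile rank of `score` among peers with the same prior score."""
    below = sum(s < score for s in peer_scores)
    ties = sum(s == score for s in peer_scores)
    return 100 * (below + 0.5 * ties) / len(peer_scores)

# Hypothetical peer group: students who, like Sheri's student, got a
# perfect 60/60 on the 3rd grade test. Most also nearly ace 4th grade.
peers = [66] * 35 + [65] * 31 + [64] * 14 + [63] * 20

print(student_growth_percentile(64, peers))  # 27.0 -- a "failing" growth score
```

Near the ceiling there is almost no room left above a student’s prior score, so a question or two of ordinary measurement noise decides the entire percentile.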

The reason New York’s Growth Model does not comply with the law is that the law directs the Department of Education to measure the change in student achievement between two points in time. New York’s Growth Model does not do this; instead of measuring growth, it creates what we are calling a “survivor-type” competition in which the computer predicts what children should do and grades teachers on a bell curve according to whether their students met the computer’s predictions. There are many problems with this, most notably that the computer is comparing apples and oranges. The fact that a child scored 300 on a 3rd grade math test and 295 on a 4th grade math test does not prove that the child did not learn a substantial amount in 4th grade. This is explained very well by Professor Aaron Pallas in his reply affidavit, which I highly recommend reading. Sheri and I believe all our experts provide important information, and I encourage people to read their affidavits.
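
To illustrate the “survivor-type” competition, here is a toy Python sketch of the generic predict-then-rank-on-a-curve idea, with every number made up (New York’s actual algorithm is far more elaborate and, as we argue, opaque). The point is simply that once teachers are ranked against one another on how their students compare to a prediction, some teachers must land at the bottom of the curve even when every class is generated by exactly the same process.

```python
# Toy sketch of a predict-then-rank "growth" model. All numbers are made up;
# this is the generic VAM idea, not New York's actual algorithm. It shows
# that ranking on a curve forces some teachers below "expected" even when
# every class learns by the same process and only test noise differs.
import random
import statistics

random.seed(1)

teachers = {}
for t in range(10):
    prior = [random.randint(280, 320) for _ in range(25)]  # 3rd grade scores
    current = [p + random.gauss(0, 10) for p in prior]     # 4th grade scores
    teachers[f"teacher_{t}"] = (prior, current)

# Crude "prediction": expect each child to match last year's score.
# (Real models regress on prior scores; the ranking logic is the same.)
effects = {
    name: statistics.mean(c - p for p, c in zip(prior, current))
    for name, (prior, current) in teachers.items()
}

# Rank on the curve: half the identical teachers come out "below expected."
for name, effect in sorted(effects.items(), key=lambda kv: kv[1]):
    print(f"{name}: mean residual {effect:+.1f}")
```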

Another significant set of facts is a series of statistics compiled by Dr. Carol Burris. Dr. Burris found wild swings in teacher ratings between 2012-13 and 2013-14 that make absolutely no sense. For example, Scarsdale, a district which is generally highly regarded, went from 0% ineffective teachers and 13% highly effective teachers to 19% ineffective teachers and 0% highly effective teachers in one year. Something is obviously wrong. There are additional examples in Dr. Burris’ reply affidavit which your readers may find interesting.

Finally, a very important issue is New York State’s defense, which claims that academic studies recommend the use of VAM-type programs for these types of high-stakes teacher evaluations. All of our experts do a great job of explaining that no study suggests that VAM-type programs can accurately rate teachers in individual cases. Professor Sean Patrick Corcoran of NYU explains that studies have found VAM to be unbiased, not that it is accurate, and that New York’s Education Department is confusing the two in its position in our case. Professor Corcoran offers a simple example: if you throw darts at a dartboard and always miss, but miss as much to the left as to the right, and as much to the top as to the bottom, you are not biased, but you are also neither precise nor accurate.

I also had an interesting email exchange with Professor John Friedman, co-author of the widely discussed Chetty, Friedman & Rockoff studies, in which he readily acknowledged that his studies only say that VAM-type scores tend to be accurate “on average,” which he explained means over the lifetime of a teacher. He suggested thinking of VAM scores as a kind of lifetime batting average in baseball. Professor Friedman specifically said that VAM scores can be too high or too low in any given year, and that they may be wrong because a particular student had a bad day when the test was taken. Following this logic (which comes from one of the leading VAM researchers), rating teachers based upon VAM-generated scores is like rating a baseball player based upon a single randomly chosen at-bat.
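
Professor Corcoran’s dartboard point is easy to check numerically. The short Python sketch below is ours, with made-up numbers, and models no real evaluation system: the average of thousands of throws (a career, in Professor Friedman’s batting-average analogy) lands almost exactly on the truth, while any single throw (one year’s score, or one at-bat) is routinely far off.

```python
# Unbiased is not the same as accurate: errors that average to zero can
# still be large on any single draw. Made-up numbers; no real VAM modeled.
import random
import statistics

random.seed(1)

true_effect = 0.0                                    # the teacher's true value
throws = [true_effect + random.gauss(0, 1.0) for _ in range(10_000)]

print(f"average of 10,000 throws:  {statistics.mean(throws):+.3f}")  # ~0.00
print(f"one randomly chosen throw: {throws[0]:+.3f}")                # often far off
print(f"typical miss (std dev):    {statistics.stdev(throws):.3f}")  # ~1.00
```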

Oral argument is scheduled for August 12, 2015, and we are optimistic that the Judge will recognize that something is terribly wrong with New York’s Growth Model and with the rating of 1 out of 20 points given to Sheri. We believe we have established that New York’s Growth Model (which the State paid a contractor $3.48 million to develop) is a statistical black box that no rational person could find fair or accurate.

                  We thank all those who have supported us.