Mike Petrilli of the right-leaning Thomas B. Fordham Institute thinks that policymakers are wrong to judge schools by proficiency rates. In a thoughtful article called “The Problem with Proficiency,” he argues that it makes more sense to grade schools by whether their students show “growth.”
He offers the example of a hypothetical school where proficiency rates (passing rates on state tests) are very low but the gains students make each year are impressive:
Our school—let’s call it Jefferson—serves a high-poverty population of middle and high school students. Eighty-nine percent of them are eligible for a free or reduced-price lunch; 100 percent are African American or Hispanic. And on the most recent state assessment, less than a third of its students were proficient in reading or math. In some grades, fewer than 10 percent were proficient as gauged by current state standards.
But, he adds, at the same school “every year Jefferson students gain two and a half times as much in math and five times as much in English as the average school in New York City’s relatively high-performing charter sector. Its gains over time are on par or better than those of uber-high performing charters like KIPP Lynn and Geoffrey Canada’s Promise Academy.”
Now, how would you rate this school?
Gary Rubinstein recognized that Mike Petrilli was responding to the poor showing of many charter schools in New York City on the recent Common Core tests. He wrote a post called “Petrilli’s Desperate Attempt to Save Democracy Prep’s Reputation.”
Matt Di Carlo has often pointed out the problems inherent in grading schools by changes in proficiency rates. In his most recent article, he argued:
In general, it is not a good idea to present average student performance trends in terms of proficiency rates, rather than average scores, but it is an even worse idea to use proficiency rates to measure changes in achievement gaps.
Put simply, proficiency rates have a legitimate role to play in summarizing testing data, but the rates are very sensitive to the selection of cut score, and they provide a very limited, often distorted portrayal of student performance, particularly when viewed over time. There are many ways to illustrate this distortion, but among the more vivid is the fact, which we’ve shown in previous posts, that average scores and proficiency rates often move in different directions. In other words, at the school-level, it is frequently the case that the performance of the typical student — i.e., the average score — increases while the proficiency rate decreases, or vice-versa.
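To see how a mean can rise while a proficiency rate falls, here is a minimal numerical sketch of my own (the scores and the cut score are hypothetical, not drawn from Di Carlo's data): most students improve, but one slips just below the cut line, so the rate drops even as the average climbs.

```python
# Hypothetical illustration: a school's average score rises
# while its proficiency rate falls, because the rate depends
# entirely on how many students clear the cut score.

CUT_SCORE = 70  # assumed state proficiency cut score

year1 = [55, 65, 71, 72, 90]
year2 = [68, 69, 69, 75, 95]  # four of five students improved,
                              # but one slipped just below the cut

def summarize(scores):
    mean = sum(scores) / len(scores)
    rate = sum(s >= CUT_SCORE for s in scores) / len(scores)
    return mean, rate

m1, r1 = summarize(year1)
m2, r2 = summarize(year2)

print(f"Year 1: mean {m1:.1f}, proficient {r1:.0%}")  # mean 70.6, proficient 60%
print(f"Year 2: mean {m2:.1f}, proficient {r2:.0%}")  # mean 75.2, proficient 40%
```

The typical student here gained almost five points, yet a school graded on proficiency alone would appear to be getting worse, which is exactly the distortion Di Carlo describes.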
Critics of the New Orleans “miracle,” on the other hand, have frequently complained that charter champions keep talking about student test score “growth” in the Recovery School District but refuse to admit that the RSD is one of the lowest-performing districts in the state of Louisiana.
Petrilli’s article provoked an extended online discussion among about 50 think tank denizens and policy wonks in D.C. and beyond, who went back and forth about what accountability should look like, how to measure it, etc.
For my part, I find myself alienated from the conversation because I see less and less value in our multibillion-dollar investment in testing and accountability.
This was my contribution to the online discussion, which many of the participants, no doubt, thought came from Mars:
