For many of us who believe in the importance of public education, the Obama administration was a great disappointment. The President is a man of great dignity, but he gave the Department of Education to the Gates Foundation, the Broad Foundation, and John Podesta’s Center for American Progress. Race to the Top prioritized truly dreadful policies that closed schools, evaluated teachers by test scores (because Bill Gates liked the idea, not because of any evidence that it was right to do so), encouraged states to open more privately managed charter schools, and made data and test scores the heart and soul of education. President Obama may have been great in many other policy arenas, but on education, his Race to the Top was (in my view) a flop. With the support of the Obama administration, the public became familiar with the claim that school choice advances civil rights, despite clear evidence that school choice accelerates segregation by race, religion, and social class.

Now the U.S. Department of Education has commissioned a study to evaluate Race to the Top. Peter Greene here reviews this study by two of our leading research institutes that asks and answers the question: Did Race to the Top Work?

Of course, your reading of the study depends on what “work” means.

Did RTTT succeed in getting most states to authorize charter schools or increase the number of charter schools in the state? The answer is yes.

Did it incentivize most states to adopt a test-based evaluation of their public school teachers? Well, yes, it did.

Did it encourage states to close schools and fire teachers and principals when test scores were low? Yes indeed.

Did RTTT make high-stakes testing the central way of measuring American education and the ultimate goal of education? Yes.

Voila! It “worked.”

But Peter wonders whether implementation of bad ideas is really the best way to define “works.”

He begins:

Did Race To The Top Work?

Not only is this a real question, but the Department of Education, hand in hand with Mathematica Policy Research and American Institutes for Research, just released a 267-page answer of sorts. Race to the Top: Implementation and Relationship to Student Outcomes is a monstrous creature, and while this is usually the part where I say I’ve read it so you don’t have to, I must confess that I’ve only kind of skimmed it. But what better way to spend a Saturday morning than reviewing this spirited inquiry into whether or not a multi-billion-dollar government program was successful in hitting the wrong target (aka getting higher scores on narrow, poorly designed standardized reading and math tests).

Before We Begin

So let’s check a couple of our pre-reading biases before we walk through this door. I’ve already shown you one of mine– my belief that Big Standardized Test scores are not a useful, effective or accurate measure of student achievement or school effectiveness, so this is all much ado about not so much nothing as the wrong thing.

We should also note the players involved. The USED, through its subsidiary group, the Institute of Education Sciences, is setting out to answer a highly loaded question: “Did we just waste almost a decade and a giant mountain of taxpayer money on a program that we created and backed, or were we right all along?” The department has set out to answer a question, and it has a huge stake in the answer.

So that’s why they used independent research groups to help, right? Wellll….. Mathematica has been around for years, and works in many fields researching policy and programs; they have been a go-to group for reformsters with policies to peddle. AIR sounds like a policy research group, but in fact they are in the test manufacturing business, managing the SBA (the BS Test that isn’t PARCC). Both have gotten their share of Gates money, and AIR in particular has a vested interest in test-based policies.

[As someone who worked in the U.S. Department of Education many moons ago, I know that the folks who get millions to evaluate federal government programs tend not to be overly critical, or they might deal themselves out of future contracts for evaluations. There are many insider groups in D.C. that live for government grants, known as Beltway Bandits.]

He continues:

And right up front, the study lets us know some of the hardest truth it has to deliver. Well, hard if you’re an RTT-loving reformster. For some of us, the truth may not be so much “hard” as “obvious years ago.”

The relationship between RTT and student outcomes was not clear. Trends in student outcomes could be interpreted as providing evidence of a positive effect of RTT, a negative effect of RTT, or no effect of RTT.

Bottom line: the folks who created the study– who were, as I noted above, motivated to find “success”– didn’t find that Race to the Top accomplished much of anything. Again, from the executive summary:

In sum, it is not clear whether the RTT grants influenced the policies and practices used by states or whether they improved student outcomes. RTT states differed from other states prior to receiving the grants, and other changes taking place at the same time as RTT reforms may also have affected student outcomes. Therefore, differences between RTT states and other states may be due to these other factors and not to RTT. Furthermore, readers should use caution when interpreting the results because the findings are based on self-reported use of policies and practices.

Hmm. Well, that doesn’t bode well for the upcoming 200 pages.

Peter then proceeds in his jolly and inimitable fashion to evaluate the evaluation. And he does it for free!

Did it misdirect the goals of American education? Did it cause a national teacher shortage? Did it demoralize experienced teachers and cause an exodus of talented teachers? Did it help grow the charter movement? Did the charter movement sap resources from public schools? Those questions were not part of the “scope of work.”