Patrick Wolf, the “independent” evaluator of the Milwaukee voucher program, remains incensed that the National Education Policy Center did not notice that he had dropped the attrition rate of students in the Milwaukee voucher program from 75% to 56%.

I, the humble historian, still wonder what is so impressive about Milwaukee, one of the lowest-performing cities on NAEP. If the voucher students get the same test scores as kids in public schools, and Milwaukee is low-performing, what does that say about the efficacy of vouchers? And, I dunno, but 56% still looks like a huge attrition rate. If all those students dropped out from the voucher schools, what does it say about vouchers? But what do I know? I am far, far below Dr. Wolf.

Kevin Welner, the director of the National Education Policy Center, responds here, including a comment by Casey Cobb, who wrote the original review of the Wolf analysis for NEPC.

Welner writes:

“In a new post on Education Next, Patrick Wolf asserts that the “dust up” emerging from his first post was “avoidable.” But he never addresses the main way it could have been avoided: if he had been honest with his readers the first time around, instead of implying ignorance or wrongdoing as a cheap way to score some points against Diane Ravitch and (to a lesser extent) NEPC.

Wolf’s new post includes no statement to the effect that he made a mistake in leading readers to believe that Casey Cobb (and therefore NEPC and Diane Ravitch) had concocted the 75% figure out of thin air. Nor, obviously, does he apologize for misleading his readers or for his baseless attacks on Cobb, NEPC, and Ravitch.

The only mistake he acknowledges was “in the form of the initial 75% program attrition figure.” That is, my colleagues incorrectly put the 75% figure in our report, and we then corrected it to 56%. That’s of course true, but what’s odd is that he doesn’t address the core issue. When he wrote the first Education Next post, why was he not honest with his readers? Why did he attack based on an argument that is, at best, misleading?

Here’s what an apology looks like, honestly and straightforwardly admitting a mistake: “Dr. Wolf, you are correct that NEPC’s editing process failed to notice the discrepancy between the 75% figure and the 56% figure. We apologize, and we will correct our postings to accurately note that the SCDP report was changed during the writing of our review.” This morning, I also reached out to our expert reviewer, Casey Cobb, who pored over his notes and drafts, forensically piecing together whatever information he could find – and he responded to me with an honest note that he’s allowing me to post below. In my view, Casey and NEPC both made mistakes in this process, but those mistakes were small and understandable – readers can decide for themselves.

But more importantly, those small mistakes in no way change or mitigate what – without some explanation – looks like a deceitful attack. Reading Wolf’s new blog, I’m left to believe that he considered Cobb and NEPC (and Ravitch) fair game since we failed to figure out the miraculous transformation from 75% to 56%. But his initial point didn’t argue that. It didn’t say, “We originally published the report using an incorrect figure of 75% attrition, and it seems that Cobb and NEPC were confused by the switch,” or even “Those credulous fools used the 75% that we’d originally posted instead of noticing that we’d changed that figure to 56%. Haha – joke’s on them.” Instead, he hid his knowledge of the source of that 75% figure and attacked us as being incompetent or deceptive.

If Wolf were driving down the road and swerved to intentionally hit a person who had slipped off the sidewalk, I dare say he couldn’t excuse his behavior by arguing that the pedestrian had wrongly left the sidewalk. Did I mention that he was responsible for greasing down the sidewalk the previous day?

Here’s the note sent to me by Casey Cobb:

It would be helpful in future reports if the SCDP and Patrick Wolf flag such corrections and identify to readers what was “updated and corrected” when they post corrections. The apparently erroneous 75% attrition figure that mysteriously turned to 56% wasn’t explicitly identified in the “corrected” report.

Yes, in my review of the originally released report, which does not indicate “draft,” or “work in progress,” or anything of that sort, I referenced what Wolf and his colleagues wrote about the 75% attrition from their sample. Then, during the process of editing and creating many drafts of my review, this “updated and corrected” report was posted on the SCDP website.

The updated report’s face page still indicates a publication date of “February 2012,” but three pages in reads, “Updated and Corrected March 8, 2012.” The website on which the report can be found still says, “Posted by UArk Dept. of Ed. Reform – February 1, 2012 – DER Publications, MPCP – Final Reports, SCDP, SCDP Milwaukee Evaluation” (http://www.uaedreform.org/updated-student-attainment-and-the-milwaukee-parental-choice-program-final-follow-up-analysis/).

I have no recollection or record of how the 56% number was captured and added. I can only surmise that I downloaded this newer version during the latter stages of my review, perhaps while traveling and using a different computer (and so needing a new copy). I obviously didn’t notice anything different, and I continued on with my review citing the new 56% figure (which, again, contained no footnote or indication of the change from 75% – that would have been helpful to those of us who are memory-challenged). Yes, this was a mistake; I certainly missed the inconsistency in the final edits of my review.

The NEPC review needs only one correction. In the current version, the Summary of the Review and Section III make reference to the 75% sample attrition rate; in Section V, the reference is made to the “updated and corrected” 56% rate. I will take responsibility for indicating via footnote that the updated figure in Wolf’s revised report is now “56%.” An explanation can be added to indicate that the review took place during a period in which Report #30 changed and that an updated version of Report #30 was posted on March 8th without indicating what items were corrected. Hopefully, if changes to SCDP reports occur following their original posting in the future, authors will provide information on what was actually updated.

Patrick Wolf, in what can only be perceived as a patronizing tone, suggests that, “While Casey Cobb is correcting his review of our report, he should also revise his charge on page 4 that, ‘Curiously, it [meaning the report] fails to state how many program-switchers there were, when they switched and in which direction, and how many graduated.’” But then he goes on to admit my statement was, in fact, true; the report does not address it. Yet he still laments that I didn’t refer readers to a journal article identified in the report as forthcoming (that is, not yet published) that speaks to this very issue not addressed in Report #30. Now we are all free to read it.

Patrick Wolf is right that the 75% vs. 56% attrition rate is not the main issue here, although either number raises questions about construct validity (what the treatment actually was). Other, more substantive issues remain unaddressed by him and his colleagues. For one, they should explain why they rely so heavily upon a .10 level of statistical significance when the industry standard is .05. And why is this brushed off in favor of grand summary statements of the program’s success? Perhaps adherence to commonly accepted scientific standards gave way to a desire to promote voucher-like programs.

Finally, I want to add my voice to the call for Wolf to release the Milwaukee data that he and his colleagues used to generate their claims, so that other researchers may analyze it dispassionately – attrition rates and all.”