I posted this morning about a very poorly designed “evaluation” of North Carolina’s voucher program, whose purpose seems to have been to satisfy wealthy school-choice advocacy groups.

This additional review confirms that impression and shows how poorly constructed the evaluation was and how misleading its PR. This kind of research mars the professional reputation of those associated with it, though it may land them grants from the Walton Foundation.

“The report’s primary flaw is that it has no external validity. That is, the students tested as part of this study are different from the average Opportunity Scholarship student. As a result, there’s no reason to think that the untested Opportunity Scholarship students would similarly outperform their public school counterparts. As the Charlotte Observer‘s Ann Doss Helms noted, just over half of the voucher schools that participated in the study were Catholic, while only 10 percent of all schools receiving Opportunity Scholarship vouchers are Catholic. Additionally, the report only looked at students who were recruited and volunteered to take a test. These students are different from the average voucher student.

“Because of these differences, you can’t use the report to make claims about the average voucher student or the impact of the voucher program overall. The effects highlighted by the researchers only apply to the 89 Opportunity Scholarship students (in the researchers’ preferred comparison) who volunteered to be tested, representing just 1.6 percent of the 5,624 Opportunity Scholarship students in the 2016-17 school year. The report tells us nothing about the other 98.4 percent of Opportunity Scholarship students.”
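The percentages in the passage above follow directly from the counts it cites; a minimal arithmetic check (figures as quoted in the post, rounding is mine):

```python
# Sample-size arithmetic from the review: 89 tested students out of
# 5,624 total Opportunity Scholarship students in the 2016-17 school year.
tested = 89
total = 5624

share_tested = tested / total * 100      # percent of voucher students tested
share_untested = 100 - share_tested      # percent the report says nothing about

print(f"Tested:   {share_tested:.1f}%")    # 1.6%
print(f"Untested: {share_untested:.1f}%")  # 98.4%
```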

One of the researchers told the local newspaper that the study she and her colleagues conducted provides valuable insights but does not mean the average scholarship recipient is outperforming peers who stayed in public schools.

What? Useless! What a sham!

And voucher boosters are singing its praises.

“Regardless, the report’s biggest weakness remains that the results – even if accurately measured – tell us nothing about the program as a whole. Because the report only examines the test results of a small, non-representative sample of students who volunteered to participate, these results don’t tell us whether the average scholarship recipient is outperforming peers who stayed in public schools. The report (though not the press release) makes this flaw clear. To the extent the report tells us anything about student performance of voucher students, it only tells us about the 1.6 percent of voucher students recruited to take part in the study.

“In short, this report is not an evaluation in the common understanding of the word. Despite the report’s publicity, it does nothing to tell us whether the Opportunity Scholarship is helping or hurting its students. And the roll-out, coordinated with right-wing advocacy groups, has done more to misinform than to inform the public.”

The researchers might give some thought to their professional standards and ethics. They have embarrassed their profession.