Carol Burris is the Executive Director of the Network for Public Education. She was a much-honored high school principal in New York State, following many years in the classroom. She earned her doctorate from Teachers College, Columbia University.

I think it is always wise to pay attention to the funders of any study, especially when the funders have a strong point of view about the outcome. Just as we are wary when the tobacco industry releases a study that “proves” the safety of tobacco use, or the pharma industry funds a study claiming that opioids are not addictive, we should be wary of any study funded by the major sponsors of the charter school movement. “Follow the money” is a principle that should never be ignored.

Burris writes here about the new national CREDO study of charter schools, which was uncritically covered by Education Week and other publications that simply quoted the press release.

She writes:

Last week the Center for Research on Education Outcomes (CREDO) released its third National Study on charter schools. The report was funded by two nonprofits that wholeheartedly support charter schools and generously fund them—the Walton Family Foundation and The City Fund. The City Fund, which was started and funded by pro-charter billionaires John Arnold and Reed Hastings, exists to turn public school districts into “portfolio” districts of charter schools and charter-like public schools. 

Commenting on the report, Margaret “Macke” Raymond, founder and director of CREDO, told Ed Week’s Libby Stanford that the results were “remarkable.” Stanford claimed that “charters have drastically improved, producing better reading and math scores than traditional public schools.”

However, neither of those claims describes the reality of what the report found, as I will explain.  

Let’s begin with what CREDO uses as its measure of achievement. In all of its reports, CREDO uses “days of learning” to express differences in student achievement between charter schools and district public schools. That measure produces dramatic bar graphs, allowing CREDO to disguise the trivial achievement effects those “days of learning” represent.

The overall state math score increase that CREDO attributes to attending a charter school is “six days of learning.” But what does that mean in the standard measures most researchers use, such as standard deviations or effect sizes?

According to CREDO, 5.78 days of learning translate to a difference of only 0.01 standard deviations. That means the six “days of learning” gained in math translate to an effect of about 0.0104 standard deviations. Does that sound tiny? It is. For comparison, the negative impact on math scores of receiving a voucher in Louisiana was estimated at 0.4 standard deviations, a magnitude more than 36 times greater.
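To make the unit conversion concrete, here is a minimal arithmetic sketch using only the figures cited above (CREDO’s 5.78-days-per-0.01-SD factor and the 0.4 SD Louisiana voucher estimate):

```python
# CREDO's own conversion factor: 5.78 "days of learning" = 0.01 standard deviations.
SD_PER_DAY = 0.01 / 5.78

math_gain_sd = 6.0 * SD_PER_DAY            # CREDO's reported math advantage
print(f"6 days of learning = {math_gain_sd:.4f} SD")         # ~0.0104 SD

voucher_effect_sd = 0.4                    # Louisiana voucher impact on math
print(f"ratio = {voucher_effect_sd / math_gain_sd:.1f}x")    # ~38.5x, i.e. "more than 36 times"
```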

After CREDO released its second national charter study in 2013, the National Education Policy Center (NEPC) reviewed it. You can find that critical review here, accompanied by a publication release titled CREDO’s Significantly Insignificant Findings.

As the authors of the review (Andrew Maul and Abby McClelland) note, a difference of 0.01 standard deviations (which the 2023 math gain only slightly exceeds) means that “only a quarter of a hundredth of a percent (0.000025) of the variation” in the test scores could be explained by the type of school (charter or public) that the child attended.
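For readers who want to reproduce NEPC’s figure, the standard conversion from a two-group effect size d to a correlation r is r = d / √(d² + 4); squaring r gives the share of variation explained. A quick sketch:

```python
import math

d = 0.01                          # effect size in standard deviation units
r = d / math.sqrt(d**2 + 4)       # standard d-to-r conversion for two equal groups
print(f"share of variation explained: r^2 = {r**2:.6f}")   # ~0.000025
# i.e., about 0.0025% of score variation: a quarter of a hundredth of a percent
```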

Put another way, if a student who originally scored at the 50th percentile on a standardized test gains six days of math learning, she moves to the 50.4th percentile. It’s as if she stood on a sheet of loose-leaf paper to look taller; that’s how small the real difference is.

But what about the reported reading-score increase of 16 days? Sixteen CREDO days amount to only 0.028 standard deviations. Now we are increasing height by standing on two and a half sheets of loose-leaf paper.
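Both percentile claims are easy to verify with the cumulative distribution function of a standard normal, assuming test scores are roughly normally distributed; a short sketch:

```python
from statistics import NormalDist

norm = NormalDist()   # standard normal: mean 0, standard deviation 1

for label, gain_sd in [("math, 6 days", 0.0104), ("reading, 16 days", 0.028)]:
    pct = norm.cdf(gain_sd) * 100
    print(f"{label}: 50th -> {pct:.1f}th percentile")
# math: ~50.4th; reading: ~51.1th
```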

According to CREDO, those increases are statistically significant. Shouldn’t that count? As the NEPC reviewers state in their summary, “with a very large sample size, nearly any effect will be statistically significant, but in practical terms these effects are so small as to be regarded, without hyperbole, as trivial.”

To put all of this in a broader perspective, Maul and McClelland point out, “[Eric] Hanushek has described an effect size of 0.20 standard deviations for Tennessee’s class size reform as ‘relatively small’ considering the nature of the intervention.” Hanushek is married to Macke Raymond, who found the much, much, much slighter results of her organization’s study to be “remarkable.”

Using CREDO’s conversion, in order to achieve 0.20 standard deviations of change, the difference would have to be 115.6 days of learning. 
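That 115.6 figure is just CREDO’s conversion factor run in reverse; a one-line check:

```python
DAYS_PER_SD = 5.78 / 0.01              # CREDO "days of learning" per 1.0 SD
print(f"{0.20 * DAYS_PER_SD:.1f}")     # 115.6 days to reach a 0.20 SD effect
```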

The only place in the report where there was a difference of more than 100 days was in online charter school students’ math results. Compared with the public school students included in the study, online charter students lost the equivalent of 124 days of learning in math. They may have something there.

CREDO Methodology

To draw its conclusions, CREDO matches charter students with what it calls “virtual twins” from public schools. But not all public schools were included, nor were all charter schools. The only public schools included were those in 29 states, New York City, and the District of Columbia (for some odd reason, CREDO counts NYC and DC as states, referring to “31 states” throughout the report) that met its definition of “feeder schools.”

According to page 35 of the report, in 2017-2018 there were 69,706 open public schools in their included “states,” and of those, fewer than half (34,792) were “feeder schools.” That same year, the NCES Common Core of Data reported 91,326 non-charter public schools, 86,315 of which were in states that had charter schools.

From the chart, then, we can estimate that only about 38% of public schools and 94.5% of charter schools were included in the study, at least during the 2017-18 school year.
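The 38% estimate follows directly from the counts above; the 94.5% charter figure comes from the report’s own chart and is not recomputed here.

```python
feeder_schools = 34_792    # CREDO's included "feeder" public schools (report, p. 35)
nces_public    = 91_326    # all non-charter public schools, NCES CCD 2017-18

print(f"public schools included: {feeder_schools / nces_public:.1%}")   # ~38.1%
```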

What, then, is a feeder school? The report claims that it is the public school the student would have attended if she were not in the charter school. But that is an inaccurate description. In the methodology report, CREDO explains how they identify feeder schools: “We identify all students at a given charter school who were enrolled in a TPS [traditional public school] during the previous year. We identify these TPS as ‘feeder schools’ for each charter school. Each charter school has a unique feeder school list for each year of data.”
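In code, the rule CREDO describes amounts to something like the sketch below. The table and column names (enrollment, student_id, year, school_id, is_charter) are hypothetical, chosen only to illustrate the logic, not taken from CREDO’s data files.

```python
import pandas as pd

def feeder_schools(enrollment: pd.DataFrame, charter_id, year: int) -> set:
    """TPS attended in year-1 by students enrolled at charter_id in year."""
    # Students at the charter school in the given year (hypothetical columns)
    now_at_charter = enrollment.loc[(enrollment.school_id == charter_id) &
                                    (enrollment.year == year), "student_id"]
    # Their non-charter schools from the previous year become "feeders"
    prior_tps = enrollment[(enrollment.year == year - 1) &
                           (~enrollment.is_charter) &
                           (enrollment.student_id.isin(now_at_charter))]
    return set(prior_tps.school_id)
```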

While I understand why researchers want to use feeder schools for comparison, the approach produces an inherent bias in the sample. Feeder schools are, by definition, schools from which parents pulled their children in order to enroll them in a charter school. They are not, as the report claims, “the school the student would have attended.” If a child starts in a charter school, her local school would not be a feeder school unless some parent was so dissatisfied with it that they were willing to pull their child out and place her in a charter, which may even be miles away in a neighborhood with very different demographics.

Virtual Twins 

In 2013, Maul and McClelland also explained the virtual-twin method along with the problems inherent in its use. 

“The larger issue with the use of any matching-based technique is that it depends on the premise that the matching variables account for all relevant differences between students; that is, once students are matched on the aforementioned seven variables [gender, ethnicity, English proficiency status, eligibility for subsidized meals, special education status, grade level, and a prior-year standardized test score within a tenth of a standard deviation], the only remaining meaningful difference between students is their school type. Thus, for example, one must believe that there are no remaining systematic differences in the extent to which parents are engaged with their children (despite the fact that parents of charter school students are necessarily sufficiently engaged with their children’s education to actively select a charter school), [and] that eligibility for subsidized meals is a sufficient proxy for poverty when taken together with the other background characteristics.”
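As a rough illustration of what matching on those seven variables implies, here is a hedged sketch. The column names are hypothetical, and CREDO’s actual procedure, which combines matched records into a single “virtual” comparison score, is more involved than this.

```python
import pandas as pd

# Six categorical match variables (illustrative names), plus a prior score
MATCH_VARS = ["gender", "ethnicity", "ell_status",
              "subsidized_meals", "sped_status", "grade"]

def virtual_twins(charter_student: pd.Series,
                  tps_students: pd.DataFrame) -> pd.DataFrame:
    """TPS students identical on the six traits above, with a prior-year
    test score (in SD units) within 0.1 SD; an empty result = unmatched."""
    pool = tps_students
    for var in MATCH_VARS:
        pool = pool[pool[var] == charter_student[var]]
    close = (pool.prior_score_sd - charter_student.prior_score_sd).abs() <= 0.1
    return pool[close]
```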

In addition to the above, special education students are not a monolith. Research has consistently shown that charters not only enroll fewer special education students than public schools but also enroll fewer students with the more challenging disabilities that impact learning. English language learners, who are at different stages of language acquisition, are not a monolith either. A few years ago, Wagma Mommandi and Kevin Welner filled an entire book (“School’s Choice”) with illustrations of how charter schools shape their enrollment, often in ways that the virtual-twin approach would not control for. Therefore, even the included categories are rough proxies.

Virtual twinning (the “Virtual Control Record,” or VCR, method) also creates an additional problem: large shares of charter school students go “unmatched” and are therefore excluded from the results. Again, I quote the 2013 NEPC review.

“Even more troubling, the VCR technique found a match for only 85% of charter students. There is evidence that the excluded 15% are, in fact, significantly different from the included students in that their average score is 0.43 standard deviations lower than the average of the included students; additionally, members of some demographic subgroups such as English Language Learners were much less likely to have virtual matches.”

That was in 2013. In this new report, the problem is worse. The overall match rate dropped further to 81.2%. English-language learners had a match rate of 74.9%; multi-racial students had a rate of 58.1%; and the match rate for Native American students was only 38%. 

And in some states, match rates were terrible. In New York, only 43.9% of charter school ELL students had a match, and 51% of special education students were matched. In the three categories that are most likely to affect educational outcomes—poverty, disability, and non-proficiency in English—New York rates were well below the average match rate for each category, which might at least partially explain the state’s above-average results.  

The study itself notes, in a footnote, “Low match rates require a degree of caution in interpreting the national pooled findings as they may not fairly represent the learning of the student groups involved.”

Do Charters Cherry-Pick and Push Low-scoring Students Out?

Perhaps the most incredible claim in the study, however, was its “proof” that charters do not cherry-pick or skim and, in fact, teach students who start out as lower achievers.

Here is CREDO’s methodology, from page 41, for making that claim.

“We compare students who initially enrolled in a TPS and took at least one achievement test before transferring to a charter school to their peers who enroll in the TPS. We can observe the distribution of charter students’ test scores across deciles of achievement and do the same for students in the feeder TPS.”

That may measure something, but not whether charter schools cherry-pick. First, it ignores potential differences among the majority of charter students, who never enrolled in a public school. Second, it compares the scores of students whose parents withdrew them from the public school with those of a presumably more satisfied parent population. A withdrawal is far more likely to occur when a student is doing poorly than when she is doing well.

Given the CREDO dataset, it would have been relatively easy to explore the question of whether or not charter schools push lower-achieving students out, but that question was not explored. 

Findings Regarding Charter Management Organizations

Although I did not review the portion of the study that compared student achievement between standalone charters and charter management organizations (CMOs), I noticed that the CMOs of four of the thirty-one states were not included. One of those is Ohio, a state in which the vast majority of charters (78%) are run by CMOs, with for-profits outnumbering nonprofits two to one.

CREDO used the same capricious definition as the National Alliance for Public Charter Schools: a CMO must control three or more schools to be included, which excludes many of the low-performing, for-profit-run schools that NPE identified in our report, Chartered for Profit II. And while it lists the online operator K12 as a CMO, the equally low-performing Connections Academy, run by Pearson, was absent from the list.

Conclusion

My review found that the issues raised in NEPC’s 2013 review remain unaddressed in the newly released study, and new issues have emerged. Hopefully, those who are far more skilled in this type of regression analysis than I am will do a more comprehensive review. But given the bias introduced by the matching methods and the additional biases created by charters’ shaping of their own enrollment, it is easy to see how the 0.0104 and 0.028 SD findings could be masking actual charter effects that are negative and at least as large in magnitude.

Moreover, based on the trivial topline increases combined with the serious methodological issues, I think it is safe to say that despite the billions of tax dollars spent on growing charter schools, overall charter student achievement is about the same as that of students in CREDO’s feeder schools, and no conclusions can be drawn regarding the majority of public schools. As for the billionaire funders who financed a report that no doubt cost millions to produce: they got what they paid for. And reporters covering the report have thus far failed to ask the challenging questions that their readers deserve.