Archives for category: National Education Policy Center

Early in her tenure as Secretary of Education, Betsy DeVos admitted that she is not a “numbers person.” She is also not a research person. The research shows that none of her favorite reforms improve education, but that never deters her. When the U.S. Department of Education’s study of the D.C. voucher program showed that students actually lost ground compared to their public school peers, she didn’t care. Nonetheless, she did recently cite a study from the Urban Institute claiming that the Florida tax credit program (vouchers) produced higher enrollments in college.

William Mathis, research director of the National Education Policy Center and Vice-Chair of the Vermont Board of Education, took a closer look and found that the study does not prove what she thinks it does, and that it offers no support for vouchers because of the confounding variable of selection effects. Someone at the Department should explain to her what a “variable” is and what “selection effects” are.

Do Private Schools Increase College Enrollments for Poor Children?

A Closer Look at the Urban Institute’s Florida Claims

William J. Mathis

A review of:

Chingos, Matthew M. and Kuehn, Daniel (September 2017). The Effects of Statewide Private School Choice on College Enrollment and Graduation: Evidence from the Florida Tax Credit Scholarship Program. Urban Institute. 52 pp.

The Urban Institute reports that low-income students who attended a private school in pre-collegiate grades on a Florida tax credit scholarship (“neovouchers”) enrolled in community colleges at higher rates than traditional public school students. Using language such as the “impact of” and “had substantial positive impacts,” the findings are presented as causal. This purported effect was not found by the study’s authors at four-year institutions or in the awarding of degrees – just in matriculation to community colleges.

Nevertheless, for school choice advocates, this report was hailed as good news on the heels of recent negative statewide school voucher reports coming out of Louisiana, Indiana, DC and Ohio. While community colleges are non-selective, most would agree that increased community college attendance is a good thing.

That said, a closer look indicates there is less to this latest report than first meets the eye. The primary problem—selection effects—is obliquely acknowledged by the report’s authors but is far too critical to push to the background.

There are at least three important differences that likely exist between the voucher group and the non-voucher group.

• Motivation, Effort, and Seeking Out Education Options – The very act of opting to enroll in a private school signals a very significant difference between the groups. Such an action requires considerable effort on the part of parents and students in selecting, applying, and transporting the child to the private school. These private school parents demonstrate, almost by definition, a higher involvement in their child’s education. Logically, these families would also be more likely to seek out community college options.

• Finances – While the program is available only to less affluent families, private schools can charge an amount higher than the $6,000 maximum available through the neovoucher. (Currently, eligibility rules require that the student’s household income not exceed 260 percent of the federal poverty level). Parents who can arrange or pay these supplemental tuition and fees to attend a private school represent the upper economic end of this means-tested group.

• Admissions – Private schools can continue their usual admissions policies, which may exclude children with special needs or deny admission on the basis of other characteristics. We cannot know the specific differences this introduces between the treatment and comparison groups, but we can be reasonably certain that these differences exist.

The study is based on “matching” private school students with traditional public school students and then comparing the two groups. While this is a common technique in voucher research, trouble arises when trying to pair up each student with her doppelganger from the other camp. As the authors acknowledge, “the quality of any matching can vary” (p. 12). While the researchers did an admirable job of matching, the entire process runs the risk of omitting the very important and determinative variables described above.

The study’s regression analysis also attempts to control for differences among students. In theory, an absolutely inclusive model can “confirm” a theory, and thus the researcher can claim a causal effect. But that’s a slippery slope. Regression is simply multiple correlation – and despite the many causal inferences in the report, correlation is not causation. This is particularly true in this case, where selection effects are so strong.
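To see how a selection effect can masquerade as a causal “impact,” consider a minimal simulation. The numbers here are invented purely for illustration; they are not from the Urban Institute study. An unobserved trait, family motivation, drives both voucher take-up and college enrollment, producing an enrollment gap even though the voucher itself does nothing:

```python
# Toy simulation of a selection effect (hypothetical numbers, not from the study).
# Family "motivation" is unobserved; it raises both the chance of using a
# voucher and the chance of enrolling in community college. The voucher
# itself has zero true effect, yet a naive group comparison shows a gap.
import random

random.seed(0)
n = 100_000
voucher_enroll, public_enroll = [], []
for _ in range(n):
    motivation = random.random()                      # unobserved confounder
    uses_voucher = random.random() < 0.2 + 0.6 * motivation
    # True model: enrollment depends only on motivation, NOT on the voucher.
    enrolls = random.random() < 0.3 + 0.4 * motivation
    (voucher_enroll if uses_voucher else public_enroll).append(enrolls)

rate_v = sum(voucher_enroll) / len(voucher_enroll)
rate_p = sum(public_enroll) / len(public_enroll)
print(f"voucher group:  {rate_v:.3f}")
print(f"public group:   {rate_p:.3f}")
print(f"spurious 'effect': {rate_v - rate_p:+.3f}")  # nonzero despite no causal effect
```

Any regression that omits the motivation variable would attribute this gap to the voucher, which is precisely the worry raised above.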

In summary, it is the selection effects that primarily limit the study. A reasonable interpretation of the data is simply that the difference between the groups in their community college enrollment rates is due primarily to different characteristics of families and students. In any case, the claim that private schools cause higher community-college attendance rates—let alone higher college attendance in general—is a reach too far.

The National Education Policy Center specializes in reviewing think tank reports, few of which are peer-reviewed. Many think tanks are advocacy organizations that use pseudo-scholarship to promote policy goals.

NEPC’s latest review gives a thumbs down to a report that advises on ways to eliminate democratic control of public schools. None of its so-called “reforms” have worked in practice, and the goal itself is unworthy:

BOULDER, CO (June 13, 2017) – A recent report offers a how-to guide for reform advocates interested in removing communities’ democratic control over their schools. The report explains how these reformers can influence states to use the Every Student Succeeds Act (ESSA) Title I school improvement funds to support a specific set of reforms: charter schools, state-initiated turnarounds, and appointment of an individual with full authority over districts or schools.

Leveraging ESSA to Support Quality-School Growth was reviewed by Gail L. Sunderman of the University of Maryland.

While the report acknowledges that there is limited research evidence on the effectiveness of these reforms as school improvement strategies, it uses a few exceptional cases to explain how advocates seeking to influence the development of state ESSA plans can nevertheless push them forward.

As Sunderman’s review explains, the report omits research that would shed light on the models, and it fails to take into account the opportunity costs of pursuing one set of policies over another. It also relies on test score outcomes as the sole measure of success, thus ignoring other impacts these strategies may have on students and their local communities or the local school systems where they occur. Finally, and as noted above, support for the effectiveness of these approaches is simply too limited to present them as promising school improvement strategies.

For these reasons, concludes Sunderman, policymakers, educators and state education administrators should be wary of relying on this report to guide them as they develop their state improvement plans and consider potential strategies for assisting low-performing schools and districts.

Find the review by Gail L. Sunderman at:
http://nepc.colorado.edu/thinktank/review-ESSA-accountability

Find Leveraging ESSA to Support Quality-School Growth, by Nelson Smith and Brandon Wright, published by the Thomas B. Fordham Institute and Education Cities, at:
https://edex.s3-us-west-2.amazonaws.com/publication/pdfs/03.30 – Leveraging ESSA To Support Quality-School Growth_0.pdf

We can always count on researchers at the National Education Policy Center to review reports issued by think tanks and advocacy groups, some of which are the same.

This review analyzes claims about Milwaukee’s voucher schools. It is funny to describe them as successful, since Milwaukee is really the poster city for the failure of school choice. It has had vouchers and charters since 1990 and is near the very bottom of the NAEP tests for urban districts, barely ahead of sad Detroit, another city afflicted by charters. Both cities demonstrate that school choice does not fix the problems of urban education or urban students and families.

Find Documents:

Press Release: http://nepc.info/node/8612
NEPC Review: http://nepc.colorado.edu/thinktank/review-milwaukee-vouchers
Report Reviewed: http://www.will-law.org/wp-content/uploads/2017/03/apples.pdf

Contact:
William J. Mathis: (802) 383-0058, wmathis@sover.net
Benjamin Shear: (303) 492-8583, benjamin.shear@colorado.edu

Learn More:

NEPC Resources on Accountability and Testing
NEPC Resources on Charter Schools
NEPC Resources on School Choice
NEPC Resources on Vouchers

BOULDER, CO (April 25, 2017) – A recent report from the Wisconsin Institute for Law and Liberty attempts to compare student test score performance for the 2015-16 school year across Wisconsin’s public schools, charter schools, and private schools participating in one of the state’s voucher programs. Though it highlights important patterns in student test score performance, the report’s limited analyses fail to provide answers as to the relative effectiveness of school choice policies.

Apples to Apples: The Definitive Look at School Test Scores in Milwaukee and Wisconsin was reviewed by Benjamin Shear of the University of Colorado Boulder.

Comparing a single year’s test scores across school sectors that serve different student populations is inherently problematic. A fundamental difficulty in isolating the score variation attributable to school differences is that the analyses must adequately control for dissimilar characteristics among the students enrolled in the different schools. The report uses linear regression models with school-level characteristics to attempt to adjust for these differences and make what the authors claim are “apples to apples” comparisons. Based on these analyses, the report concludes that choice and charter schools in Wisconsin are more effective than traditional public schools.

Unfortunately, the limited nature of the available data undermines any such causal conclusions. The small number of school-level variables included in the regression models cannot control for important confounding variables, most notably prior student achievement. Further, the use of aggregate percent-proficient metrics masks variation in performance across grade levels and makes the results sensitive to the (arbitrary) location of the proficiency cut scores. The report’s description of methods and results also includes some troubling inconsistencies. For example, the report attempts to use a methodology known as “fixed effects” to analyze test score data in districts outside Milwaukee, but such a methodology is not possible with the data described in the report.
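The cut-score sensitivity is easy to demonstrate with a toy example (hypothetical scores, not Wisconsin data): two schools with identical mean scores can trade places in a percent-proficient ranking depending solely on where the proficiency threshold is set.

```python
# Hypothetical scores for two schools with the same mean (50).
# Which school looks better on "percent proficient" depends entirely
# on where the proficiency cut score is placed.
school_a = [48, 49, 50, 51, 52]   # scores tightly clustered
school_b = [40, 45, 50, 55, 60]   # same mean, widely spread

def pct_proficient(scores, cut):
    return 100 * sum(s >= cut for s in scores) / len(scores)

for cut in (47, 53):
    a, b = pct_proficient(school_a, cut), pct_proficient(school_b, cut)
    print(f"cut={cut}: A={a:.0f}% proficient, B={b:.0f}% proficient")
# A low cut (47) favors School A; a high cut (53) favors School B,
# even though both schools have exactly the same average score.
```

This is why a metric built on an arbitrary threshold is a shaky basis for ranking school sectors.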

Thus, concludes Professor Shear, while the report does present important descriptive statistics about test score performance in Wisconsin, it wrongly claims to provide answers for those interested in determining which schools or school choice policies in Wisconsin are most effective.

Find the review by Benjamin Shear at:

http://nepc.colorado.edu/thinktank/review-milwaukee-vouchers

Find Apples to Apples: The Definitive Look at School Test Scores in Milwaukee and Wisconsin, by Will Flanders, published by the Wisconsin Institute for Law and Liberty, at:

http://www.will-law.org/wp-content/uploads/2017/03/apples.pdf

The National Council on Teacher Quality issued a report calling for higher admission standards for entrants into teaching, specifically higher SAT and ACT scores. This report was reviewed on behalf of the National Education Policy Center. It is interesting and strange that so many people think that scores on the SAT or ACT have remarkable predictive powers. The cardinal rule of psychometrics is that a test should be used only for the purpose for which it was designed. These tests were designed to gauge likely success in college, and even there, multiple studies have concluded that a student’s four-year grade-point average is a more reliable predictor than either the SAT or ACT. Why would anyone think they predict good teachers? NCTQ should turn its attention to making the teaching profession more fulfilling and rewarding. At a time of teacher shortages, raising the bar will only exacerbate the shortage.

The NCTQ is Gates-funded and endorses value-added models (VAM) to rate teachers, so it starts with a strong bias toward standardized testing.

NEPC says:

BOULDER, CO (March 23, 2017) – A recent report from the National Council on Teacher Quality (NCTQ) advocates for a higher bar for entry into teacher preparation programs. The NCTQ report suggests, based on a review of GPA and SAT/ACT requirements at 221 institutions in 25 states, that boosting entry requirements would significantly improve teacher quality in the U.S. It argues that this higher bar should be set by states, by the Council for the Accreditation of Educator Preparation (CAEP), and by the higher-education institutions themselves.

However, the report’s foundational claims are poorly supported, making its recommendations highly problematic.

The report, Within Our Grasp: Achieving Higher Admissions Standards in Teacher Prep, was reviewed by a group of scholars and practitioners who are members of Project TEER (Teacher Education and Education Reform). The team was led by Marilyn Cochran-Smith, the Cawthorne Professor of Teacher Education for Urban Schools at Boston College, along with Megina Baker, Wen-Chia Chang, M. Beatriz Fernández, & Elizabeth Stringer Keefe. The review is published by the Think Twice Think Tank Review Project at the National Education Policy Center, housed at University of Colorado Boulder’s School of Education.

The reviewers explain that the report does not provide the needed supports for its assertions or recommendations. It makes multiple unsupported and unfounded claims about the impact on teacher diversity of raising admissions requirements for teacher candidates, about public perceptions of teaching and teacher education, and about attracting more academically able teacher candidates.

Each claim is based on one or two cherry-picked citations while ignoring the substantial body of research that either provides conflicting evidence or shows that the issues are much more complex and nuanced than the report suggests. Ultimately, the reviewers conclude, the report offers little guidance for policymakers or institutions.

Find the review by Marilyn Cochran-Smith, Megina Baker, Wen-Chia Chang, M. Beatriz Fernández, & Elizabeth Stringer Keefe at:
http://nepc.colorado.edu/thinktank/review-admissions

Find Within Our Grasp: Achieving Higher Admissions Standards in Teacher Prep, by Kate Walsh, Nithya Joseph, & Autumn Lewis, published by the National Council on Teacher Quality, at:
http://www.nctq.org/dmsView/Admissions_Yearbook_Report

The National Education Policy Center recently published its 18th annual report on schoolhouse commercialism. When these reports began, the focus was usually the intrusion of advertising and other selling of products via textbooks, videos, and other means of communication.

Now the commercialism is different: when children are online, corporations are watching them and mining their data.

Faith Boninger and Alex Molnar’s report is called “Learning to Be Watched: Surveillance Culture at School.”

They summarize it thus:

“Schools now routinely direct children online to do their schoolwork, thereby exposing them to tracking of their online behavior and subsequent targeted marketing. This is part of the evolution of how marketing companies use digital marketing, ensuring that children and adolescents are constantly connected and available to them. Moreover, because digital technologies enable extensive personalization, they amplify opportunities for marketers to control what children see in the private world of their digital devices as well as what they see in public spaces. This year’s annual report on schoolhouse commercialism trends considers how schools facilitate the work of digital marketers and examines the consequent threats to children’s privacy, their physical and psychological well-being, and the integrity of the education they receive. Constant digital surveillance and marketing at school combine to normalize for children the unquestioned role that corporations play in their education and in their lives more generally.”

Key Takeaway: 18th Annual Report on Schoolhouse Commercialism Trends explores the use of digital marketing in schools

The “Department of Education Reform” at the University of Arkansas published a study touting the stupendous results of “no excuses” charter schools, where students are subjected to strict discipline and intense test prep.

The National Education Policy Center engaged Professor Jeanette Powers of Arizona State University to review the study, and she criticized it strongly. Subsequently, the study was revised and then reviewed again.

Professor Powers still finds the claims to be inflated.

“The primary (and repeated) claim of the report is that “No Excuses” charter schools can close the achievement gap. Powers explains that the underlying research that this report relies upon only supports the more limited and appropriate claim that the subset of No Excuses charter schools have done relatively well in raising the test scores of the students who participate in school lotteries and then attended the schools. The claim that these schools can close the achievement gap is supported by nothing other than an arithmetic extrapolation of evidence that comes with clear limitations.

“A common and well-recognized problem in charter school research is “selection effects.” That is, parents who choose “No Excuses” schools may be more educated, more engaged in the school-selection process, and differ in other significant ways from those parents who did not choose such a school. This would logically be a major concern for oversubscribed “No Excuses” schools, but the findings cannot be generalized to all parents.

“Over-subscribed schools that conduct lotteries for student admission are, one would assume, different from less popular schools. Nevertheless, Cheng et al. imply that the findings can be generalized to all No Excuses charter schools.

“The prominent and oversubscribed “No Excuses” schools are often supported by extensive outside resources. Offering an extended school day, for example, may not be financially feasible for other schools, and the scaling-up costs of doing so are not addressed. A charter that takes the No-Excuses approach yet lacks the additional resources should not be assumed to show the same results.

“The sample of schools included in the studies Cheng et al. analyzed is largely drawn from major urban areas in the Northeast and is small, particularly at the high school level.”

Find Powers’ original review and follow-up review of the “No Excuses” charter report here.

The original Arkansas report is currently available at the following url:
http://www.uaedreform.org/no-excuses-charter-schools-a-meta-analysis-of-the-experimental-evidence-on-student-achievement

The republished version of the Arkansas report is currently available at the following url:

Click to access OP226.pdf

All day long, I have posted about the free-market reform of the schools in New Orleans. I have done so because the mainstream media has been touting the success of privatization for almost ten years. States and districts have declared their intention to copy the New Orleans model, believing it was a great success. I just heard a CNN news report stating that the elimination of public schools was controversial, but test scores are up, and the city is investing in its children’s futures. The same report said that 50% of black men are unemployed and 50% of black children live in poverty.

As this report from the National Education Policy Center shows, the test score gains have disproportionately benefited the most advantaged students.

The rhetoric of corporate reform is always about “saving poor black kids.” In New Orleans, they have not yet been saved.

The National Education Policy Center regularly reviews research findings, in effect, acting as an independent peer review board.

In this case, its reviewer challenges the latest CREDO report on urban charters:

Is It Time to Stop the CREDO-Worship?

New review explains CREDO charter school research flaws, raises concerns about misunderstandings of effect sizes

Contact:

William J. Mathis, (802) 383-0058, wmathis@sover.net

Andrew Maul, (805) 893-7770, amaul@education.ucsb.edu

URL for this press release: http://tinyurl.com/mbse6m7

BOULDER, CO (April 27, 2015) — A recent report contends charter schools generally helped students increase reading and math scores and that urban charters had an even stronger positive effect. But a new review released today questions the strong reliance that has been placed on this and similar reports.

Andrew Maul reviewed Urban Charter School Study Report on 41 Regions 2015 for the Think Twice think tank review project. The review is published by the National Education Policy Center, housed at the University of Colorado Boulder School of Education. Maul, an assistant professor in the Graduate School of Education at the University of California-Santa Barbara, focuses his research on measurement theory, validity, and research design.

Urban Charter School Study Report on 41 Regions 2015 was produced and published by the Center for Research on Education Outcomes (CREDO) at Stanford University’s Hoover Institution. It is a follow-up report to CREDO’s 2013 National Charter School Study.

The new report analyzes the differences in student performance at charter schools and traditional public schools in 41 urban areas in 22 states. Researchers sought to establish whether being in an urban charter school, as opposed to a non-urban one, had a different effect on reading and math scores, and if so, why. The report found a small positive effect of being in a charter school overall on both math and reading scores, and a slightly stronger effect in urban environments.

Maul’s review, however, explains “significant reasons to exercise caution.”

For its analysis, CREDO again used its own, unusual research technique that attempts to simulate a controlled experiment: constructing “virtual twins” for each charter student. The “twins” were derived by averaging the performance of up to seven other students, chosen to match the charter students by demographics, poverty and special education status, grade level, and a prior year’s standardized test score.
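As a rough sketch of the matching idea described above (our illustration with hypothetical field names, not CREDO’s actual procedure), a “twin” score is simply an average over public-school students who match on observable traits:

```python
# Toy sketch of the "virtual twin" idea (an illustration, not CREDO's code):
# a charter student's comparison score is the average score of up to seven
# public-school students matched on observable characteristics.
def virtual_twin_score(charter_student, public_pool, max_matches=7):
    """Average the scores of matched public-school students."""
    matches = [
        s for s in public_pool
        if s["demo"] == charter_student["demo"]
        and s["grade"] == charter_student["grade"]
        and abs(s["prior_score"] - charter_student["prior_score"]) <= 1
    ][:max_matches]
    if not matches:
        return None  # no twin can be built -> student dropped from the analysis
    return sum(s["score"] for s in matches) / len(matches)

charter = {"demo": "low_income", "grade": 5, "prior_score": 50}
pool = [
    {"demo": "low_income", "grade": 5, "prior_score": 50, "score": 52},
    {"demo": "low_income", "grade": 5, "prior_score": 51, "score": 48},
    {"demo": "high_income", "grade": 5, "prior_score": 50, "score": 60},  # not matched
]
print(virtual_twin_score(charter, pool))  # averages the two matched students -> 50.0
```

The sketch also makes the critique concrete: matching can only use observed variables, so whatever led a family to choose a charter in the first place is never controlled for, and students with no adequate match are simply dropped.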

Maul points out that the technique isn’t adequately documented. He adds: “It remains unclear and puzzling why the researchers use this approach rather than the more accepted approach of propensity score matching.” The CREDO technique, he warns, might not adequately control for differences between families who select a charter school and those who do not.

CREDO also fails to justify choices such as the estimation of growth and the use of “days of learning” as a metric.

But regardless of concerns over methodology, Maul points out, “the actual effect sizes reported are very small, explaining well under a tenth of one percent of the variance in test scores.” The effect size reported, for example, may simply reflect the researchers’ exclusion of some lower-scoring students from their analysis.
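Maul’s variance figure can be translated into more familiar effect-size units with a little arithmetic (a back-of-the-envelope calculation for illustration, not a figure from the review):

```python
# If a charter "effect" explains one tenth of one percent of test-score
# variance, how big is it in standard-deviation units?
import math

r_squared = 0.001                  # variance explained: 0.1%
r = math.sqrt(r_squared)           # point-biserial correlation, about 0.032
# With two equal-sized groups, Cohen's d relates to r by d = 2r / sqrt(1 - r^2).
d = 2 * r / math.sqrt(1 - r_squared)
print(f"correlation r is about {r:.3f}")
print(f"Cohen's d is about {d:.3f} standard deviations")
```

By Cohen’s conventions, a value near 0.06 standard deviations falls far below even the usual threshold for a “small” effect (0.2), which is the heart of Maul’s objection to calling it “substantial.”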

“To call such an effect ‘substantial’ strains credulity,” Maul concludes. Overall, the report fails to provide compelling evidence that charter schools are more effective than traditional public schools, whether or not they are located in urban districts.

Find Andrew Maul’s review on the NEPC website at:
http://nepc.colorado.edu/thinktank/review-urban-charter-school

Find CREDO’s Urban Charter School Study Report on 41 Regions 2015 on the web at:
http://urbancharters.stanford.edu/index.php

The Think Twice think tank review project (http://thinktankreview.org) of the National Education Policy Center (NEPC) provides the public, policymakers, and the press with timely, academically sound reviews of selected publications. NEPC is housed at the University of Colorado Boulder School of Education. The Think Twice think tank review project is made possible in part by support provided by the Great Lakes Center for Education Research and Practice.

The mission of the National Education Policy Center is to produce and disseminate high-quality, peer-reviewed research to inform education policy discussions. We are guided by the belief that the democratic governance of public education is strengthened when policies are based on sound evidence.

For more information on the NEPC, please visit http://nepc.colorado.edu/.

This review is also found on the GLC website at http://www.greatlakescenter.org/.


Our mailing address is:
National Education Policy Center
School of Education, 249 UCB
University of Colorado
Boulder, CO 80309-0249

For all other communication with NEPC, write to nepc@colorado.edu.


The National Education Policy Center regularly reviews reports from think tanks and advocacy groups. In this report, its scholars review an effort by charter school advocates to defend charter schools against critics. The conclusion: charters promote privatization and segregation.

“National Charter School Report Misleading and Superficial, Review Finds”

Contact:
Gary Miron, (269) 599-7965, gary.miron@wmich.edu
Daniel Quinn, (517) 203-2940, dquinn@greatlakescenter.org

EAST LANSING, Mich. (Feb. 23, 2015) — A report from the National Alliance for Public Charter Schools (NAPCS) attempted to “separate fact from fiction” about charter schools. The report addressed 21 “myths” regarding charter schools, each of which it quickly rejected. However, an academic review finds that the report perpetuated its own myths and fictions about charter schools rather than adding to the discourse surrounding school choice.

The report, Separating Fact and Fiction: What You Need to Know about Charter Schools, was assembled by NAPCS with no author identified. Gary Miron, Western Michigan University, William J. Mathis, University of Colorado Boulder, and Kevin G. Welner, University of Colorado Boulder, reviewed the report for the Think Twice think tank review project of the National Education Policy Center (NEPC) with funding from the Great Lakes Center for Education Research and Practice.

Succinctly, the original report addressed various claims about charter schools in such areas as financial equality of charter schools, lower teacher qualifications, student selection demographics, academic outcomes, segregation, and innovation.

Yet, the reviewers found that the report’s main purpose appears to be the “repetition or ‘spinning’ of claims voiced by advocacy groups and think tanks that promote privatization and school choice.” Furthermore, the reviewers found that it relied almost exclusively on advocacy documents rather than more careful and balanced empirical research, and that it provides only a superficial examination of any “criticisms” regarding charter schools.

The review is organized in a format that lists each of the criticisms identified, and then provides a short commentary based on the extant research literature. Where the original document overlooked research evidence, the reviewers provide readers with a valuable tool to examine charter school criticisms.

Additionally, the reviewers find that the report fails to redirect the sector toward its original ideals: “Charter schools were originally designed to be a new form of public school. They were supposed to be small, locally run, innovative and highly accountable. They were supposed to be open to all and were expected to provide new freedoms to teachers to creatively innovate and serve their communities.”

Instead, the reviewers point out the most disappointing non-myth that comes out of the research: “In reality, the main outcomes of charter schools have been to promote privatization and accelerated the stratification and re-segregation of schools.”

The reviewers conclude that this report is unlikely to be of any use to “the discerning policy-maker” and that it fails to engage the important underlying issues.



Find Separating Fact and Fiction on the web:
http://www.publiccharters.org/publications/separating-fact-fiction-public-charter-schools/
Think Twice, a project of the National Education Policy Center, provides the public, policymakers and the press with timely, academically sound reviews of selected publications. The project is made possible by funding from the Great Lakes Center for Education Research and Practice.
The review can also be found on the NEPC website:
http://nepc.colorado.edu

Professor Francesca Lopez of the University of Arizona responded to Betts and Tang’s critique of her post on the website of the National Education Policy Center.

She writes:

In September, the National Education Policy Center (NEPC) published a think-tank review I wrote on a report entitled “A Meta-Analysis of the Literature on the Effect of Charter Schools on Student Achievement,” authored by Betts and Tang and published by the Center on Reinventing Public Education. My review examined the report, and I took the approach that a person reading the report and the review together would be in a better position to understand the report’s strengths and weaknesses than if that person read the report in isolation. While my review includes praise for some elements of the report, there is no question that it also points out flawed assumptions and other areas of weakness in the analyses and in the presentation of those analyses. The authors of the report subsequently wrote a prolonged rebuttal claiming that I misrepresented their analysis and essentially rejecting my criticisms.

 

The rebuttal takes up 13 pages, which is considerably longer than my review. Yet these pages are largely repetitive and can be addressed relatively briefly. In the absence of sound evidence to counter the issues raised in my review, the rebuttal resorts to lengthy explanations that obscure, misrepresent, or altogether evade my critiques. What seems to most strike readers I’ve spoken with is the rebuttal’s insulting and condescending tone and wording. The next most striking element is the immoderately recurrent use of the term “misleading,” which is somehow repeated no fewer than 50 times in the rebuttal.

 

Below, I respond to each so-labeled “misleading statement” the report’s authors claim I made in my review—all 26 of them. Overall, my responses make two primary points:

 

• The report’s authors repeatedly obscure the fact that they exaggerate their findings. In their original report, they present objective evidence of mixed findings but then extrapolate their inferences to support charter schools. Just because the authors are accurate in some of their descriptions and statements does not negate the fact that they are misleading in their conclusions.

 

 The authors seem to contend that they should be above criticism if they can label their approaches as grounded in “gold standards,” “standard practice,” or “fairly standard practice.” When practices are problematic, they should not be upheld simply because someone else is doing it. My task as a reviewer was to help readers understand the strengths and weaknesses of the CRPE report. Part of that task was to attend to salient threats to validity and to caution readers when the authors include statements that outrun their evidence.

 

One other preliminary point, before turning to specific responses to the rebuttal’s long list. The authors allege that I insinuated that, because of methodological issues inherent in social science, social scientists should stop research altogether. This is absurd on its face, but I am happy to provide clarification here: social scientists who ignore details that introduce egregious validity threats (e.g., that generalizing from charter schools that are oversubscribed will introduce bias that favors charter schools), and who draw inferences from their analyses that have societal implications despite their claims of being neutral, should act more responsibly. If unwilling or unable to do so, then it would indeed be beneficial if they stopped producing research.

 

What follows is a point-by-point response to the authors’ rebuttal. For each point, I briefly summarize those contentions, but readers are encouraged to read the full 13 pages. The three documents – the original review, the rebuttal, and this response – are available at http://nepc.colorado.edu/thinktank/review-meta-analysis-effect-charter. The underlying report is available at http://www.crpe.org/publications/meta-analysis-literature-effect-charter-schools-student-achievement.

 

#1. The authors claim that my statement, “This report attempts to examine whether charter schools have a positive effect on student achievement,” is misleading because: “In statistics we test whether we can maintain the hypothesis of no effect of charter schools. We are equally interested in finding positive or negative results.” It is true that it is the null hypothesis that is tested. It is also true that the report attempts to examine whether charter schools have a positive effect on student achievement.

 

Moreover, it is telling that when the null hypothesis is not rejected and no assertion regarding directionality can be made, the authors still make statements alluding to directionality (see the next “misleading statement”).

 

#2. The authors object to my pointing out when they claim positive effects when their own results show those “effects” to not be statistically significant. There is no question that the report includes statements that are written in clear and non-misleading ways. Other statements are more problematic. Just because the authors are accurate in some of their descriptions does not negate my assertion that they make “[c]laims of positive effects when they are not statistically significant.” They tested whether a time trend was significant; it was not. They then go on to say it is a positive trend in the original report, and they do it again in their rebuttal: “We estimate a positive trend but it is not statistically significant.” This sentence is misleading. As the authors themselves claim in the first rebuttal above, “In statistics we test whether we can maintain the hypothesis of no effect.” This is called null hypothesis statistical testing (NHST). In NHST, if we reject the null hypothesis, we can say it was positive/negative, higher/lower, etc. If we fail to reject the null hypothesis (what they misleadingly call “maintain”), we cannot describe it in the direction that was tested because the test told us there isn’t sufficient support to do that. The authors were unable to reject the null hypothesis, but they call it positive anyway. Including the caveat that it is not significant does not somehow lift them above criticism. Or, to put this in the tone and wording of the authors’ reply, they seem “incapable” of understanding this fundamental flaw in their original report and in their rebuttal. There is extensive literature on NHST. I am astonished they are “seemingly unaware” of it.
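The NHST logic at issue can be made concrete in a few lines of code. This is a minimal sketch, assuming a normal approximation and using hypothetical numbers of my own invention (not figures from the CRPE report): when a two-sided test fails to reject the null, the estimate cannot be distinguished from zero, and no directional label is warranted.

```python
import math

def two_sided_p(estimate, std_error):
    """Two-sided p-value for H0: effect = 0, via a normal approximation."""
    z = estimate / std_error
    # Standard normal CDF expressed with the error function (stdlib only)
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - cdf)

# Illustrative trend estimate whose confidence interval straddles zero
estimate, se = 0.04, 0.05
p = two_sided_p(estimate, se)

if p < 0.05:
    direction = "positive" if estimate > 0 else "negative"
    print(f"Reject H0: effect is significantly {direction} (p = {p:.3f})")
else:
    # Failing to reject H0 means the data cannot distinguish this
    # estimate from zero -- calling the trend "positive" is unwarranted.
    print(f"Fail to reject H0 (p = {p:.3f}): no directional claim licensed")
```

With these illustrative numbers the test fails to reject, which is precisely the situation in which describing the estimate as “positive” outruns the evidence.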

 

#3. My review pointed out that the report shows a “reliance on simple vote-counts from a selected sample of studies,” and the authors rebut this by claiming my statement “insinuates incorrectly that we did not include certain studies arbitrarily.” In fact, my review listed the different methods used in the report, and it does use vote counting in a section, with selected studies. My review doesn’t state or imply that they were arbitrary, but they were indeed selected.

 

#4. The authors also object to my assertion that the report includes an “unwarranted extrapolation of the available evidence to assert the effectiveness of charter schools.” While my review was clear in stating that the authors were cautious in stating limitations, I also pointed to specific places and evidence showing unwarranted extrapolation. The reply does not rebut the evidence I provided for my assertion of extrapolation.

 

#5. My review points out that the report “… finds charters are serving students well, particularly in math. This conclusion is overstated; the actual results are not positive in reading and are not significant in high school math; for elementary and middle school math, effect sizes are very small…” The authors contend that their overall presentation of results is not misleading and that I was wrong (in fact, that I “cherry picked” results and “crossed the line between a dispassionate scientific analysis and an impassioned opinion piece”) by pointing out where the authors’ presentation suggested pro-charter results that were unwarranted. Once again, just because the authors are accurate in some of their descriptions does not negate my assertion that the authors’ conclusions are overstated. I provided examples to support my statement that appear to get lost in the authors’ conclusions. They do not rebut my examples, but instead call it “cherry picking.” I find it telling that the authors can repeatedly characterize their uneven results as showing that charters “are serving students well” but if I point to problems with that characterization it is somehow I, not them, who have “crossed the line between a dispassionate scientific analysis and an impassioned opinion piece.”

 

#6. I state in my review that the report includes “lottery-based studies, considering them akin to random assignment, but lotteries only exist in charter schools that are much more popular than the comparison public schools from which students are drawn. This limits the study’s usefulness in broad comparisons of all charters versus public schools.” The rebuttal states, “lottery-based studies are not ‘akin’ to random assignment. They are random assignment studies.” The authors are factually wrong. Lottery-based charter assignments are not random assignment in the sense of, e.g., random assignment pharmaceutical studies. I detail why this is so in my review, and I would urge the authors to become familiar with the key reason lottery-based charters are not random assignment: weights are allowed. The authors provided no evidence that the schools in the study did not use weights, thus the distinct possibility exists that various students do not have the same chance of being admitted, and are therefore, not randomly assigned. The authors claim charter schools with lotteries are not more popular than their public school counterparts. Public schools do not turn away students because seats are filled; their assertion that charters do not need to be more popular than their public school counterparts is unsubstantiated. Parents choose a given charter school for a reason – oftentimes because the neighborhood school and other charter school options are less attractive. But beyond that, external validity (generalizing these findings to the broader population of charter schools) requires that over-enrolled charters be representative of charters that aren’t over-enrolled. That the authors test for differences does not negate the issues with their erroneous assumptions and flatly incorrect statements about lottery-based studies.

 

#7. The authors took issue with my critique that their statement, “One conclusion that has come into sharper focus since our prior literature review three years ago is that charter schools in most grade spans are outperforming traditional public schools in boosting math achievement” is an overstatement of their findings. In their rebuttal, they list an increase in the number of significant findings (which is not surprising given the larger sample size), and claim effect sizes were larger without considering confidence intervals around the reported effects. In addition to that, the authors take issue with my critique of their use of the word “positive” in terms of their non-significant trend results, which I have already addressed in #2.

 

#8. The authors take issue with my finding that their statement, “…we demonstrated that on average charter schools are serving students well, particularly in math” (p. 36) is an overstatement. I explained why this is an overstatement in detail in my review.

 

#9. The authors argue, “Lopez cites a partial sentence from our conclusion in support of her contention that we overstate the case, and yet it is she who overstates.” The full sentence that I quoted reads, “But there is stronger evidence of outperformance than underperformance, especially in math.” I quoted that full sentence, sans the “[b]ut.” They refer to this as “chopping this sentence in half,” and they attempt to defend this argument by presenting this sentence plus the one preceding it. In either case, they fail to support their contention that they did not overstate their findings. Had the authors just written the preceding sentence (“The overall tenor of our results is that charter schools are in some cases outperforming traditional public schools in terms of students’ reading and math achievement, and in other cases performing similarly or worse”), I would not have raised an issue. To continue with “But there is stronger evidence of outperformance than underperformance, especially in math” is an ideologically grounded overstatement.

 

#10. The authors claim, “Lopez seriously distorts our work by comparing results from one set of analyses with our conclusions from another section, creating an apples and oranges problem.” The section the authors are alluding to reported results of the meta-analysis. I pointed out examples of their consistent exaggeration. The authors address neither the issue I raise nor the support I offer for my assertion that they overstate findings. Instead, they conclusively claim I am “creating an apples and oranges problem.”

 

#11. The authors state, “Lopez claims that most of the results are not significant for subgroups.” They claim I neglected to report that a smaller sample contributed to the non-significance, but they missed the point. The fact that there are “far fewer studies by individual race/ethnicity (for the race/ethnicity models virtually none for studies focused on elementary schools alone, middle schools alone, or high schools) or other subgroups” is a serious limitation. The authors claim that “This in no way contradicts the findings from the much broader literature that pools all students.” However, race/ethnicity is an important omission because of the evidence of the segregative effects of charter schools. I was clear in my review in identifying my concern: the authors’ repeated contentions about the supposed effectiveness of charter schools, regardless of the caution they maintained in other sections of their report.

 

#12. The authors argue, “The claim by Lopez that most of the effects are insignificant in the subgroup analyses is incomplete in a way that misleads. She fails to mention that we conduct several separate analyses in this section, one for race/ethnicity, one for urban school settings, one for special education and one for English Learners.” Once again, the authors miss the point, as I explain in #11. The authors call my numerous examples that discredit their claims “cherry picking.” The points I raise, however, are made precisely to temper the claims made by the authors. If cherry-picking results in a full basket, perhaps there are too many cherries to be picked.

 

#13. The authors take issue that I temper their bold claims by stating that the effects they found are “modest.” To support their rebuttal, they explain what an effect of .167 translates to in percentiles, which I argued against in my review in detail. (The authors chose to use the middle school number of .167 over the other effect sizes, ranging from .023 to .10; it was the full range of results that I called “modest.”) Given their reuse of percentiles to make a point, it appears the authors may not have a clear understanding of percentiles: they are not interval-level units. An effect of .167 is not large given that it may be negligible when confidence intervals are included. That it translates into a 7 percentile “gain” when percentiles are not interval level units (and confidence bands are not reported) is a continued attempt to mislead by the authors. I detail the issues with the ways the authors present percentiles in my review. (This issue is revisited in #25, below.)
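To see why percentiles are not interval-level units, consider how the same standardized shift translates into very different percentile “gains” depending on a student’s starting point. This is a minimal sketch using only Python’s standard library; the 0.167 figure is borrowed purely for arithmetic illustration and implies nothing about the reliability of the underlying estimate.

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

def percentile_gain(start_pct, effect_sd):
    """Percentile points gained when a student at start_pct shifts up by effect_sd SDs."""
    z = nd.inv_cdf(start_pct / 100)        # starting position in SD units
    new_pct = nd.cdf(z + effect_sd) * 100  # position after the shift
    return new_pct - start_pct

# The same 0.167 SD shift buys shrinking percentile gains as the
# starting percentile rises -- percentile points are not equal intervals.
for start in (50, 75, 90, 99):
    print(f"start at {start}th percentile -> gain {percentile_gain(start, 0.167):.1f} points")
```

The roughly 7-point figure holds only for a student starting at the median; the same effect buys far fewer percentile points near the top of the distribution, which is exactly why reporting percentile “gains” without that context can mislead.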

 

#14. The authors next take issue with the fact that I cite different components of their report that were “9 pages apart.” I synthesized the lengthy report (the authors call it “conflating”), and once again, the authors attempt to claim that my point-by-point account of the limitations of their report is misleading. Indeed, according to the authors, I am “incapable of understanding” a “distinction” they make. In their original 68-page report, they make many “distinctions.” They appear “incapable of understanding” that the issue I raise concerning these “distinctions” is that they were recurring themes in their report.

 

#15. The authors next find issue with the following statement: “The authors conclude that ‘charter schools appear to be serving students well, and better in math than in reading’ (p. 47) even though the report finds ‘…that a substantial portion of studies that combine elementary and middle school students do find significantly negative results in both reading and math – 35 percent of reading estimates are significantly negative, and 40 percent of math estimates are significantly negative (p. 47)’.” This is one of the places where I point out that the report overstates conclusions notwithstanding their own clear findings that should give them caution. In their rebuttal, the authors argue that I (in a “badly written paragraph”) “[insinuate] that [they] exaggerate the positive overall math effect while downplaying the percentage of studies that show negative results.” If I understand their argument correctly, they are upset that I connected the two passages with “even though the report finds” instead of their wording: “The caveat here is”. But my point is exactly that the caveat should have reined in the broader conclusion. They attempt to rebut my claim by elaborating on the sentence, yet they fail to address my critique. The authors’ rebuttal includes, “Wouldn’t one think that if our goal had been to overstate the positive effects of charter schools we would never have chosen to list the result that is the least favorable to charter schools in the text above?” I maintain the critique from my review: despite the evidence that is least favorable to charter schools, the authors claim overall positive effects for charter schools, obscuring the various results they reported. Again, just because they are clear sometimes does not mean they do not continuously obscure the very facts they reported.

 

#16. The authors take issue with the fact that my review included two sentences of commentary on a companion CRPE document that was presented by CRPE as a summary of the Betts & Tang report. As is standard with all NEPC publications, I included an endnote that included the full citation of the summary document, clearly showing an author (“Denice, P.”) other than Betts & Tang. Whether Betts & Tang contributed to, approved, or had nothing to do with the summary document, I did not and do not know.

 

#17. The next issue the authors have is that I critiqued their presentation and conclusions based on the small body of literature they included in their section entitled “Outcomes apart from achievement.” The issue I raise with the extrapolation of findings can be found in detail in the review. The sentence from the CRPE report that seems to be the focus here reads as follows: “This literature is obviously very small, but both papers find evidence that charter school attendance is associated with better noncognitive outcomes.” To make such generalizations based on two papers (neither of which was apparently peer reviewed) is hardly an examination of the evidence that should be disseminated in a report entitled “A Meta-Analysis of the Literature on the Effect of Charter Schools on Student Achievement.” The point of the meta-analysis document is to bring together and analyze the research base concerning charter schools. The authors claim that because they are explicit in stating that the body of literature is small, their claim is not an overstatement. As I have mentioned before, clear caveats notwithstanding, making assertions about the effects of charter schools on such limited evidence is indeed an overstatement. We are now seeing more and more politicians who offer statements like, “I’m not a scientist and haven’t read the research, but climate change is clearly a hoax.” The caveats do little to transform the ultimate assertion into a responsible statement.

 

Go to the link to read the rest of Professor Lopez’s response to Betts and Tang, covering the 26 points they raised.