Archives for category: Research

New Zealand is one of the few nations, perhaps the only one, that has abandoned national standards.

As Professor Martin Thrupp explains here, scholars and researchers helped to expose the flaws of national standards.

The national standards were driven by political, not educational, purposes. The ruling party pushed them and couldn’t stop pushing them, ignoring all criticism.

Thrupp’s book, co-edited with Bob Lingard, Meg Maguire, and David Hursh, “The Search for Better Educational Standards: A Cautionary Tale,” teaches us that concerted efforts by educators, scholars, and parents can roll back ruinous education policy.

He writes:

“The National-led Government had become fully invested in the National Standards policy. When it was first announced in 2007, it was National’s big idea for education – the ‘cornerstone’ of its education policy. Over the 10 years that followed, the Government had dismissed all criticisms. Any late turning back would be a sign of weakness, and instead the National party wanted to plough on with this truly awful project that had already become a world-class example of how not to make education policy….

“Despite the National-led Government’s adherence to the National Standards, researchers and academics certainly pushed back against the policy…In fact, researchers and academics did a great deal in this space! A particular highlight for me was the 2012 open letter signed by over 100 education academics against the public release of the National Standards data. But there were countless other instances of academics and researchers opposing the National Standards, either publicly or more behind the scenes. Opinion pieces, articles, TV debates, radio, public meetings, meetings behind closed doors – and all the rest of it. Chapter 8 of A Cautionary Tale, about the politics of research, gives numerous examples.

“A number of us also did empirical research that helped to explain how the National Standards were a problem (see A Cautionary Tale, especially chapters 3, 5 and 7). And, of course, New Zealand researchers are part of international networks that are working on the same concerns about high-stakes assessment in other countries (see A Cautionary Tale, especially chapters 2 and 10). Note to Cullen: without doubt, some of the best work in this area is coming from Australian academics.

“It is true that some researchers and academics chose to support the National-led Government’s National Standards policies (A Cautionary Tale, chapter 8). This happened for various reasons that may have included the researchers’ educational views, their political beliefs, the political pressures that were upon them or their organisations, and the advantages that came with supporting the policy. It may have also involved a judgement that it was better to be ‘inside the tent’ and have influence than be on the outside.

“But this range of viewpoints among researchers and academics is no different than was seen within the teaching profession and amongst principals, where National Standards also had supporters. Indeed, a central problem that the new Labour-led Government will have to grapple with, having removed the National Standards policy, is doing away with the data-driven disposition amongst teachers and principals that grew along with the policy under the previous Government.

“Looking ahead

“Even though most teachers and principals did not like the impact of the National Standards policy, after a decade of its influence New Zealand primary schools are now marinated in the thinking, language, and expectations of the National Standards. This has also had wider impacts, for instance on early childhood education. It will all take a little while to undo.

“It’s great, though, that New Zealand primary schools will now be able to spend less time shoring up judgements about children – judgements that have often been pointless or harmful – and instead spend more time making learning relevant and interesting for each child. Removing National Standards should also allow teachers to be less burdened, contributing to making teaching a more attractive career again.”

Jack Hassard wrote about the use of social media to spread fake news. Facebook, Twitter, and Google have become facilitators of fake news.

We know it is there. What can we do about it?

This is a very good analysis by a group of scholars at the Stanford History Education Group about civic reasoning, which explains how to avoid being hoaxed by fake news.

The questions that must always be present in any discussion are: How do you know? Who said so? What is the source? How reliable is the source? Can you confirm this information elsewhere? What counts as reliable evidence?

Many people use Wikipedia as a reliable source, but Wikipedia is crowdsourced and is not authoritative. I recall some years back when I gave a lecture in North Carolina that was named in honor of a distinguished senator of the state. The Wikipedia entry said he was a Communist, as were members of his staff. This was obviously the work of a troll. But it might not be obvious to a student researching a paper.

They write:

“Fake news is certainly a problem. Sadly, however, it’s not our biggest. Fact-checking organizations like Snopes and PolitiFact can help us detect canards invented by enterprising Macedonian teenagers, but the Internet is filled with content that defies labels like “fake” or “real.” Determining who’s behind information and whether it’s worthy of our trust is more complex than a true/false dichotomy.

“For every social issue, there are websites that blast half-true headlines, manipulate data, and advance partisan agendas. Some of these sites are transparent about who runs them and whom they represent. Others conceal their backing, portraying themselves as grassroots efforts when, in reality, they’re front groups for commercial or political interests. This doesn’t necessarily mean their information is false. But citizens trying to make decisions about, say, genetically modified foods should know whether a biotechnology company is behind the information they’re reading. Understanding where information comes from and who’s responsible for it are essential in making judgments of credibility.

“The Internet dominates young people’s lives. According to one study, teenagers spend nearly nine hours a day online. With optimism, trepidation, and, at times, annoyance, we’ve witnessed young people’s digital dexterity and astonishing screen stamina. Today’s students are more likely to learn about the world through social media than through traditional sources like print newspapers. It’s critical that students know how to evaluate the content that flashes on their screens.

“Unfortunately, our research at the Stanford History Education Group demonstrates they don’t.* Between January 2015 and June 2016, we administered 56 tasks to students across 12 states. (To see sample items, go to http://sheg.stanford.edu.) We collected and analyzed 7,804 student responses. Our sites for field-testing included middle and high schools in inner-city Los Angeles and suburban schools outside of Minneapolis. We also administered tasks to college-level students at six different universities that ranged from Stanford University, a school that rejects 94 percent of its applicants, to large state universities that admit the majority of students who apply.

“When thousands of students respond to dozens of tasks, we can expect many variations. That was certainly the case in our experience. However, at each level—middle school, high school, and college—these variations paled in comparison to a stunning and dismaying consistency. Overall, young people’s ability to reason about information on the Internet can be summed up in two words: needs improvement.

“Our “digital natives”† may be able to flit between Facebook and Twitter while simultaneously uploading a selfie to Instagram and texting a friend. But when it comes to evaluating information that flows through social media channels, they’re easily duped. Our exercises were not designed to assign letter grades or make hairsplitting distinctions between “good” and “better.” Rather, at each level, we sought to establish a reasonable bar that was within reach of middle school, high school, or college students. At each level, students fell far below the bar.”

They offer specific examples of hoaxes to show how easily people are duped.

They conclude:

“The senior fact checker at a national publication told us what she tells her staff: “The greatest enemy of fact checking is hubris”—that is, having excessive trust in one’s ability to accurately pass judgment on an unfamiliar website. Even on seemingly innocuous topics, the fact checker says to herself, “This seems official; it may be or may not be. I’d better check.”

“The strategies we recommend here are ways to fend off hubris. They remind us that our eyes deceive, and that we, too, can fall prey to professional-looking graphics, strings of academic references, and the allure of “.org” domains. Our approach does not turn students into cynics. It does the opposite: it provides them with a dose of humility. It helps them understand that they are fallible.

“The web is a sophisticated place, and all of us are susceptible to being taken in. Like hikers using a compass to make their way through the wilderness, we need a few powerful and flexible strategies for getting our bearings, gaining a sense of where we’ve landed, and deciding how to move forward through treacherous online terrain. Rather than having students slog through strings of questions about easily manipulated features, we should be teaching them that the World Wide Web is, in the words of web-literacy expert Mike Caulfield, “a web, and the way to establish authority and truth on the web is to use the web-like properties of it.” This is what professional fact checkers do.

“It’s what we should be teaching our students to do as well.”

Bruce Baker at Rutgers University is one of the most eminent scholars of school finance in the nation.

In this post, he remembers the days when states insisted upon rigorous research to understand funding equity and inequity.

That kind of research, on which he cut his teeth, died, and he knows why.

“These were the very types of analyses needed to inform state school finance policies and to advance the art and science of evaluating educational reforms for their potential to improve equity, productivity and efficiency. But these efforts largely disappeared over the next decade. More disconcerting, these efforts were replaced by far less rigorous, often purely speculative policy papers, free of any substantive empirical analysis and devoid of any conceptual frameworks.

“This shift was largely brought about under the leadership of Arne Duncan. Kevin Welner of the University of Colorado and I explained first in a report for the National Education Policy Center and subsequently in shorter form in the journal Educational Researcher, that Secretary Duncan had begun to give lip service to improving educational productivity and efficiency, but accompanied that lip service with wholly insufficient resources. Kevin Welner and I explained that:

“the materials provided on the Department’s website as guiding resources present poorly supported policy advisement. The materials listed and recommendations expressed within those materials repeatedly fail to provide substantive analyses of the cost effectiveness or efficiency of public schools, of practices within public schools, of broader policies pertaining to public schools, or of resource allocation strategies.”

“Among other issues, the materials provided on the web site failed to acknowledge even the existence of the relevant conceptual frameworks and rigorous empirical methods which had risen to prominence in state supported and federally documented research in the years prior.”

John King, then the state commissioner in New York, quickly followed Duncan’s lead. The top researchers sat in the audience while Duncan’s favorites presented misleading graphs.

Thus did the field die.

Russ Walsh writes here about the difference between “belief” and “knowledge.”

He writes:

I imagine that most of those who read this blog accept climate change and the human impact on climate change as settled science. We’ve seen the evidence; we’ve heard from the experts and we have reached an informed conclusion. This is a good thing and one that most Americans not in the White House or in denial for economic and political reasons also accept. It is not a matter of believing or disbelieving climate science; it is a matter of rigorous academic inquiry.

Now I would ask all teachers and teacher leaders to apply the same academic rigor to instructional practice. That is, we must base our instructional decisions on what we know works – on research.

Unfortunately, as I have talked to teachers over the years about instructional practice, I have heard a lot of faith-based language.

“I don’t believe in homework.”

“I believe in phonics.”

“I don’t believe in teaching to the test.”

“I believe in independent reading.”

“I believe in using round robin and popcorn reading.”

For about 2,000 years doctors “believed” that blood-letting was an effective treatment for a wide variety of ailments. Today, I would bet if you encountered a doctor who recommended blood-letting for your flu symptoms, you would run, not walk, out the office door screaming. Science, and mounting numbers of dead patients, caught up with blood-letting. So, as professionals, we need to hold ourselves to the same standards. We need to follow the science and stop talking about our beliefs and start talking about the scientific research behind our instructional decision making.

What do you do when the research is inconclusive or when research findings conflict?

Russ has some advice for you.

Rachel M. Cohen writes in The Atlantic about a new study by Jesse Rothstein, showing that education is important but it is not the key to economic and social mobility.

She writes:

“A new working paper authored by the UC Berkeley economist Jesse Rothstein builds on that research, in part by zeroing in on one of those five factors: schools. The idea that school quality would be an important element for intergenerational mobility—essentially a child’s likelihood that they will one day outearn their parents—seems intuitive: Leaders regularly stress that the best way to rise up the income ladder is to go to school, where one can learn the skills they need to succeed in a competitive, global economy. “In the 21st century, the best anti-poverty program around is a world-class education,” Barack Obama declared in his 2010 State of the Union address. Improving “skills and schools” is a benchmark of Republican House Speaker Paul Ryan’s poverty-fighting agenda.

“Indeed, this bipartisan education-and-poverty consensus has guided research and political efforts for decades. Broadly speaking, the idea is that if more kids graduate from high school, and achieve higher scores on standardized tests, then more young people are likely to go to college, and, in turn, land jobs that can secure them spots in the middle class.

“Rothstein’s new work complicates this narrative. Using data from several national surveys, Rothstein sought to scrutinize Chetty’s team’s work—looking to further test their hypothesis that the quality of a child’s education has a significant impact on her ability to advance out of the social class into which she was born.

“Rothstein, however, found little evidence to support that premise. Instead, he found that differences in local labor markets—for example, how similar industries can vary across different communities—and marriage patterns, such as higher concentrations of single-parent households, seemed to make much more of a difference than school quality. He concludes that factors like higher minimum wages, the presence and strength of labor unions, and clear career pathways within local industries are likely to play more important roles in facilitating a poor child’s ability to rise up the economic ladder when they reach adulthood….

“Jose Vilson, a New York City math teacher, says educators have known for years that out-of-school factors like access to food and healthcare are usually bigger determinants for societal success than in-school factors. He adds that while he tries his best to adhere to his various professional duties and expectations, he also recognizes that “maybe not everyone agrees on what it means to be successful” in life….

“Rothstein is quick to say that his new findings do not mean that Americans should do away with investments in school improvement, or even that education is unrelated to improving opportunity. Certainly the more that people can read, write, compute, think, and innovate, the better off society and liberal democracy would be. “It will still be good for us if we can figure out how to educate people more and better,” he says. “It might help the labor market, our civic society, our culture.” But Americans should be more clear, he says, about why they are investing in school improvement. His research suggests that doing so in order to boost a child’s chances to outearn their parents is unlikely to be successful. According to Rothstein, education systems just don’t go very far in explaining the differences between high- and low-opportunity areas.”

Union membership is another factor that explains whether children can escape poverty. But unions are under siege, and that route has been nearly closed off by the joint efforts of ALEC, the Koch brothers, the Walton family, and other billionaires who want to pull the ladder up behind them and claim that school choice will solve the economic disparity that benefits them.

Early in her tenure as Secretary of Education, Betsy DeVos admitted that she is not a “numbers person.” She is also not a research person. The research shows that none of her favorite reforms improve education. But that never deters her. When the U.S. Department of Education study of the D.C. voucher program showed that the students actually lost ground as compared to their public school peers, she didn’t care. Nonetheless, she did recently cite a study from the Urban Institute claiming that the Florida tax credit program (vouchers) produced higher enrollments in college.

William Mathis, research director of the National Education Policy Center and Vice-Chair of the Vermont Board of Education, took a closer look at the study and found that it does not prove what she thinks it does and offers no support for vouchers, because of the confounding variable of selection effects. Someone at the Department should explain to her what a “variable” is and what “selection effects” are.

Do Private Schools increase College Enrollments for Poor Children?

A Closer Look at the Urban Institute’s Florida Claims

William J. Mathis

A review of:

Chingos, Matthew M. and Kuehn, Daniel (September 2017). The Effects of Statewide Private School Choice on College Enrollment and Graduation: Evidence from the Florida Tax Credit Scholarship Program. Urban Institute. 52 pp.

The Urban Institute reports that low-income students who attended a private school in pre-collegiate grades on a Florida tax credit scholarship (“neovouchers”) had higher percentage enrollments in community colleges than traditional public school students. Using language such as the “impact of” and “had substantial positive impacts,” the findings are presented as causal. This purported effect was not found by the study’s authors in four-year institutions or in the awarding of degrees – just in matriculation to community colleges.

Nevertheless, school choice advocates hailed this report as good news on the heels of recent negative statewide school voucher reports coming out of Louisiana, Indiana, DC, and Ohio. While community colleges are non-selective, most would agree that increased community college attendance is a good thing.

That said, a closer look indicates there is less to this latest report than first meets the eye. The primary problem—selection effects—is obliquely acknowledged by the report’s authors but is far too critical to push to the background.

There are at least three important differences that likely exist between the voucher group and the non-voucher group.

• Motivation, Effort, and Seeking Out Education Options – The very act of opting to enroll in a private school signals a very significant difference between the groups. Such an action requires considerable effort on the part of parents and students in selecting, applying, and transporting the child to the private school. These private school parents demonstrate, almost by definition, a higher involvement in their child’s education. Logically, these families would also be more likely to seek out community college options.

• Finances – While the program is available only to less affluent families, private schools can charge an amount higher than the $6,000 maximum available through the neovoucher. (Currently, eligibility rules require that the student’s household income not exceed 260 percent of the federal poverty level). Parents who can arrange or pay these supplemental tuition and fees to attend a private school represent the upper economic end of this means-tested group.

• Admissions – Private schools can continue their usual admissions policies, which may exclude children with special needs or deny admission on the basis of other characteristics. We cannot know the specific differences this introduces between the treatment and comparison groups, but we can be reasonably certain that these differences exist.

The study is based on “matching” private school students with traditional public school students and then comparing the two groups. While matching is a common technique in voucher research, trouble arises when trying to pair up each student with her doppelganger from the other camp. As the authors acknowledge, “the quality of any matching can vary” (p. 12). While the researchers did an admirable job of matching, the entire process runs the risk of omitting the very important and determinative variables described above.

The study’s regression analysis also attempts to control for differences among students. In theory, an absolutely inclusive model can “confirm” a theory, and thus the researcher can claim a causal effect. But that’s a slippery slope. Regression is simply multiple correlation – and despite many inferences in the report, that is not causation. This is particularly true in this case, where selection effects are so strong.
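Selection bias of this kind is easy to demonstrate with a toy simulation. The sketch below is my own illustration, not part of Mathis’s review; the variable names and numbers are invented assumptions. It gives the “voucher” zero causal effect, lets only more-motivated families opt in, and still produces a large naive group difference:

```python
# Hypothetical sketch: selection on an unobserved trait produces a spurious
# "treatment effect" even when the treatment itself does nothing.
import random
import statistics

random.seed(42)

N = 20_000
students = []
for _ in range(N):
    motivation = random.gauss(0, 1)    # unobserved family motivation
    takes_voucher = motivation > 0     # selection: motivated families opt in
    # Outcome depends only on motivation plus noise;
    # the voucher has ZERO causal effect.
    outcome = 0.5 * motivation + random.gauss(0, 1)
    students.append((takes_voucher, outcome))

voucher = [y for t, y in students if t]
public = [y for t, y in students if not t]

naive_gap = statistics.mean(voucher) - statistics.mean(public)
print(f"naive voucher 'effect': {naive_gap:.2f} sd")
```

A naive comparison of the two groups shows a gap of nearly a full standard deviation, all of it driven by who chose the “voucher” rather than by anything the voucher did. Matching and regression can only remove this bias for variables the researcher actually observes.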

In summary, it is the selection effects that primarily limit the study. A reasonable interpretation of the data is simply that the difference between the groups in their enrollment rates at community college is primarily due to different characteristics of families and students. In any case, the claim that private schools cause higher community-college attendance rates—let alone higher college attendance in general—is a reach too far.

The National Education Policy Center reviewed CREDO’s latest report on ranking charter organizations and found it wanting.

CREDO Report Fails to Build Upon Prior Research in Creating Charter School Classification System

Key Review Takeaway: Report overstates its findings, ignores relevant literature, and fails to address known methodological issues, suggesting an agenda other than sound policymaking.

NEPC Review: http://nepc.colorado.edu/thinktank/review-CMOs

Report Reviewed: https://credo.stanford.edu/pdfs/CMO FINAL.pdf

Contact:
William J. Mathis: (802) 383-0058, wmathis@sover.net
Gary Miron: (269) 599-7965, gary.miron@wmich.edu

Learn More:

NEPC Resources on Charter Management Organizations

BOULDER, CO (September 7, 2017) – Charter Management Organizations 2017, written by James Woodworth, Margaret Raymond, Chunping Han, Yohannes Negassi, W. Payton Richardson, and Will Snow, and released by the Center for Research on Education Outcomes (CREDO), assessed the impact of different types of charter school-operating organizations on student outcomes in 24 states, plus New York City and Washington, D.C. The study finds that students in charter schools display slightly greater gains in performance than their peers in traditional public schools, especially students in charter schools operated by certain types of organizations.

Gary Miron and Christopher Shank of Western Michigan University reviewed the report and found CREDO’s distinctions between organization types to be arbitrary and unsupported by other research in the field. This raises concerns about the practical utility of the CREDO findings.

In addition, Miron and Shank contend that CREDO researchers made several dubious methodological decisions that threaten the validity of the study. A number of these problems have been raised in reviews of prior CREDO studies. Specifically, CREDO studies have been criticized for:

Over-interpreting small effect sizes;

Failing to justify the statistical assumptions underlying the group comparisons made;

Not taking into account or acknowledging the large body of charter school research beyond CREDO’s own work;

Ignoring the limitations inherent in the research approach they have taken, or at least failing to clearly communicate limitations to readers.

These problems have not only gone unaddressed in Charter Management Organizations 2017, but have been compounded by the CREDO researchers’ confusing and illogical charter organization classification system. As a result, the reviewers conclude that the report is of limited value. Policymakers should interpret the report’s general findings about charter school effectiveness with extreme caution, but might find CREDO’s work useful as a tool to understand how specific charter school management organizations perform relative to their peers.

Find the review, by Gary Miron and Christopher Shank, at:
http://nepc.colorado.edu/thinktank/review-CMOs

Find Charter Management Organizations 2017, by James Woodworth, Margaret Raymond, Chunping Han, Yohannes Negassi, W. Payton Richardson, and Will Snow, published by CREDO, at:
https://credo.stanford.edu/pdfs/CMO FINAL.pdf

The National Education Policy Center (NEPC) Think Twice Think Tank Review Project (http://thinktankreview.org) provides the public, policymakers, and the press with timely, academically sound reviews of selected publications. The project is made possible in part by support provided by the Great Lakes Center for Education Research and Practice.

The National Education Policy Center (NEPC), housed at the University of Colorado Boulder School of Education, produces and disseminates high-quality, peer-reviewed research to inform education policy discussions. Visit us at: http://nepc.colorado.edu

I am reposting this because the original omitted the link to the article. I went to the car repair shop and the computer repair shop today and wrote this post while pausing in a coffee shop between repairs. Carol Burris’s article links to the original study, which has the ironic title “In Pursuit of the Common Good: The Spillover Effects of Charter Schools on Public School Students of New York City.” Ironic, since charter schools have nothing to do with the common good.

Recently, a study was released that made the absurd claim that public schools make academic gains when a charter opens close to them or is co-located in their building. To those of us who have seen co-located charters take away rooms previously used for the arts, dance, science, or resource rooms for students with disabilities, the finding seemed bizarre, as did the contention that draining away the best students from neighborhood public schools was a good thing for the losing school.

The rightwing DeVos-funded media eagerly reported this “finding,” without digging deeper. Why should they? It propagated a myth they wanted to believe.

The author of this highly politicized study is Sarah Cordes of Temple University.

Carol Burris, executive director of the Network for Public Education and a former principal, is a highly skilled researcher. She reviewed Cordes’ findings and determined they were vastly overstated. Her review of Cordes’ study was peer-reviewed by some of the nation’s most distinguished researchers.

Burris writes:

“Cordes attempted to measure the effects of competition from a charter school on the achievement, attendance and grade retention of students in nearby New York City public schools. In addition, she sought to identify the cause of any effects she might find.”

She did not take into account the high levels of mobility among New York City public school students, especially the most disadvantaged.

But worse, her findings are statistically small as compared to other interventions:

“Upon completing her analysis, Cordes concludes that “the introduction of charter schools within one mile of a TPS increases the performance of TPS students on the order of 0.02 standard deviations (sds) in both math and English Language Arts (ELA).”

“To put that effect size in perspective, if you lower class size, you find the effect on achievement to be ten times greater (.20) than being enrolled in a school within one mile of a charter school. Reading programs that focus on processing strategies have an effect size of nearly .60. And direct math instruction (effect size .61) with strong teacher feedback (effect size .75) has strong benefits for math achievement. With a .02 effect size, the effect of being enrolled in a school located near a charter school is akin to increasing your height by standing on a few sheets of paper.”
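To get a feel for just how small 0.02 standard deviations is, effect sizes can be converted into percentile shifts for the median student. The quick calculation below is my own illustration, not Burris’s, and it assumes normally distributed scores; the labels are shorthand for the interventions she mentions:

```python
# Convert an effect size d into the percentile a median (50th-percentile)
# student would reach after a d-standard-deviation gain, assuming
# normally distributed scores (an illustrative assumption).
from statistics import NormalDist

effects = [
    ("near-charter 'effect'", 0.02),
    ("class-size reduction", 0.20),
    ("direct math instruction", 0.61),
]

for label, d in effects:
    pct = NormalDist().cdf(d) * 100
    print(f"{label}: d = {d:.2f} -> {pct:.1f}th percentile")
```

The 0.02 effect moves the median student from the 50th to roughly the 50.8th percentile, while the 0.61 effect moves them to about the 73rd. Framed this way, the standing-on-paper analogy is apt.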

Burris noted that what really mattered was money:

“Although it appears that Cordes found very small achievement gains in a public school if a charter is located within a half mile, that correlation does not tell us why those gains occurred. To answer that question, Cordes looked at an array of factors — demographics, school spending, and parent and teacher survey data about school culture and climate.

“There was only ONE standout factor that rose to the commonly accepted level of statistical significance — money.”

Burris concludes that journalists need to check other sources before believing “studies” and “reports” that make counter-intuitive claims:

“The bottom line is that Sarah Cordes found what every researcher before her found — “competition” from charters has little to no effect on student achievement in traditional public schools. It also found that when it comes to learning, money matters as evidenced by increased spending, especially in co-located schools.

“Most reporters generally lack advanced skills in research methods and statistics. They depend on abstracts and press releases, not having the expertise to look with a critical eye themselves. But it does not take a lot of expertise to see the problems with this particular study.”

Sarah Cordes’ “study” will serve the purposes of Trump and DeVos and others who are trying to destroy the common good. Surely, that was not her intention. Perhaps her dissertation advisors at New York University could have helped her develop a sounder statistical analysis. It seems obvious that the public schools that have been closed to make way for charters received no benefit at all–and they are not included in the study.

Steven Singer wrote a great post about a study by corporate reformers proving that they are wrong. Will they care that one of their favorite tactics is a failure? Of course not.

https://gadflyonthewallblog.wordpress.com/2017/08/26/study-closing-schools-doesnt-increase-test-scores/

Open the link to read it all and to see the links he cites.

He writes:

“You might be tempted to file this under ‘No Shit, Sherlock.’

“But a new study found that closing schools where students achieve low test scores doesn’t end up helping them learn. Moreover, such closures disproportionately affect students of color.

“What’s surprising, however, is who conducted the study – corporate education reform cheerleaders, the Center for Research on Education Outcomes (CREDO).

“Like their 2013 study that found little evidence charter schools outperform traditional public schools, this year’s research found little evidence for another key plank in the school privatization platform.

“These are the same folks who have suggested for at least a decade that THE solution to low test scores was to simply close struggling public schools, replace them with charter schools and voilà.

“But now their own research says “no voilà.” Not to the charter part. Not to the school closing part. Not to any single part of their own backward agenda.

“Stanford-based CREDO is funded by the Hoover Institution, the Walton Foundation and testing giant Pearson, among others. They have close ties to the KIPP charter school network and privatization propaganda organizations like the Center for Education Reform.

“If THEY can’t find evidence to support these policies, no one can!

“After funding one of the largest studies of school closures ever conducted, looking at data from 26 states from 2003 to 2013, they could find zero support that closing struggling schools increases student test scores.

“The best they could do was find no evidence that it hurt.

“But this is because they defined student achievement solely by raw standardized scores. No other measure – not student grades, not graduation rates, attendance, support networks, community involvement, not even improvement on those same assessments – nothing else was even considered.

“Perhaps this is due to the plethora of studies showing that school closures negatively impact students in these ways. Closing schools crushes the entire community economically and socially. It affects students well beyond academic achievement.”

Mark Weber, aka the blogger Jersey Jazzman, is getting his doctorate in research and statistics while teaching in a public school in New Jersey. He is a sharp critic of shoddy research, especially when it comes to the fantastical claims made on behalf of charter schools.

In his latest post, he asks why CREDO, the charter-evaluating institute at Stanford University run by Macke Raymond, continues to use a metric that has never been validated.

Journalists who have little expertise in evaluating research claims eagerly take up the claim that School X produces an additional “number of days of learning.”

It happened most recently in Texas, where charter schools finally managed to match the test scores of public schools (you know, those “failing schools” that charter schools are supposed to rescue).

He shows how the Texas study reports gains in “days of learning,” which are then touted as “substantial” improvement. But, as JJ shows, the gains are actually very small and might more accurately be described as “tiny.”

He writes:

Stanley Pogrow published a paper earlier this year that didn’t get much attention, and that’s too bad. Because he quite rightly points out that it’s much more credible to describe results like the ones reported here as “small” than as substantial. 0.03 standard deviations is tiny: plug it in here and you’ll see it translates into moving from the 50th to the 51st percentile (the most generous possible interpretation when converting to percentiles).

I have been working on something more formal than a blog post to delve into this issue. I’ve decided to publish an excerpt now because, frankly, I am tired of seeing “days of learning” conversions reported in the press and in research — both peer-reviewed and not — as if there was no debate about their validity.

The fact is that many people who know what they are talking about have a problem with how CREDO and others use “days of learning,” and it’s well past time that the researchers who make this conversion justify it.
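Weber’s percentile arithmetic is easy to verify. As a minimal sketch, assuming test scores follow a normal distribution (an assumption of this illustration, not something the post specifies), an effect size of 0.03 standard deviations moves a student starting at the median to only about the 51st percentile:

```python
# Minimal check of the effect-size-to-percentile conversion,
# assuming scores are normally distributed (illustrative assumption).
from statistics import NormalDist

effect_size = 0.03  # reported gain, in standard deviations

# A median student (50th percentile) who gains 0.03 SD lands at the
# percentile given by the standard normal CDF evaluated at 0.03.
new_percentile = NormalDist().cdf(effect_size) * 100
print(f"{new_percentile:.1f}")  # roughly 51.2
```

In other words, the same result can honestly be reported as “days of learning” or as a one-percentile move; only the second framing makes the tiny size of the effect obvious.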

Jersey Jazzman calls on Macke Raymond and the staff at CREDO to justify their use of this measurement. The “days of learning” metric, he says, inflates the actual changes.

The concept of days of learning, he says, is based on the work of economist Erik Hanushek of the Hoover Institution (Stanford). It may be coincidental that he is Macke Raymond’s husband. They are both very smart people. I hope they respond to Mark Weber’s challenge.