Archives for category: Research

Russ Walsh writes here about the difference between “belief” and “knowledge.”

He writes:

I imagine that most of those who read this blog accept climate change and the human impact on climate change as settled science. We’ve seen the evidence; we’ve heard from the experts and we have reached an informed conclusion. This is a good thing and one that most Americans not in the White House or in denial for economic and political reasons also accept. It is not a matter of believing or disbelieving climate science; it is a matter of rigorous academic inquiry.

Now I would ask all teachers and teacher leaders to apply the same academic rigor to instructional practice. That is, we must base our instructional decisions on what we know works – on research.

Unfortunately, as I have talked to teachers over the years about instructional practice, I have heard a lot of faith-based language.

“I don’t believe in homework.”

“I believe in phonics.”

“I don’t believe in teaching to the test.”

“I believe in independent reading.”

“I believe in using round robin and popcorn reading.”

For about 2,000 years doctors “believed” that blood-letting was an effective treatment for a wide variety of ailments. Today, I would bet if you encountered a doctor who recommended blood-letting for your flu symptoms, you would run, not walk, out the office door screaming. Science, and mounting numbers of dead patients, caught up with blood-letting. So, as professionals, we need to hold ourselves to the same standards. We need to follow the science and stop talking about our beliefs and start talking about the scientific research behind our instructional decision making.

What do you do when the research is inconclusive or when research findings conflict?

Russ has some advice for you.

Rachel M. Cohen writes in The Atlantic about a new study by Jesse Rothstein, showing that education is important but it is not the key to economic and social mobility.

She writes:

“A new working paper authored by the UC Berkeley economist Jesse Rothstein builds on that research, in part by zeroing in on one of those five factors: schools. The idea that school quality would be an important element for intergenerational mobility—essentially a child’s likelihood that they will one day outearn their parents—seems intuitive: Leaders regularly stress that the best way to rise up the income ladder is to go to school, where one can learn the skills they need to succeed in a competitive, global economy. “In the 21st century, the best anti-poverty program around is a world-class education,” Barack Obama declared in his 2010 State of the Union address. Improving “skills and schools” is a benchmark of Republican House Speaker Paul Ryan’s poverty-fighting agenda.

“Indeed, this bipartisan education-and-poverty consensus has guided research and political efforts for decades. Broadly speaking, the idea is that if more kids graduate from high school, and achieve higher scores on standardized tests, then more young people are likely to go to college, and, in turn, land jobs that can secure them spots in the middle class.

“Rothstein’s new work complicates this narrative. Using data from several national surveys, Rothstein sought to scrutinize Chetty’s team’s work—looking to further test their hypothesis that the quality of a child’s education has a significant impact on her ability to advance out of the social class into which she was born.

“Rothstein, however, found little evidence to support that premise. Instead, he found that differences in local labor markets—for example, how similar industries can vary across different communities—and marriage patterns, such as higher concentrations of single-parent households, seemed to make much more of a difference than school quality. He concludes that factors like higher minimum wages, the presence and strength of labor unions, and clear career pathways within local industries are likely to play more important roles in facilitating a poor child’s ability to rise up the economic ladder when they reach adulthood….

“Jose Vilson, a New York City math teacher, says educators have known for years that out-of-school factors like access to food and healthcare are usually bigger determinants for societal success than in-school factors. He adds that while he tries his best to adhere to his various professional duties and expectations, he also recognizes that “maybe not everyone agrees on what it means to be successful” in life….

“Rothstein is quick to say that his new findings do not mean that Americans should do away with investments in school improvement, or even that education is unrelated to improving opportunity. Certainly the more that people can read, write, compute, think, and innovate, the better off society and liberal democracy would be. “It will still be good for us if we can figure out how to educate people more and better,” he says. “It might help the labor market, our civic society, our culture.” But Americans should be more clear, he says, about why they are investing in school improvement. His research suggests that doing so in order to boost a child’s chances to outearn their parents is unlikely to be successful. According to Rothstein, education systems just don’t go very far in explaining the differences between high- and low-opportunity areas.”

Union membership is another factor that explains whether children can escape poverty. But unions are under siege, and that route has been nearly closed off by the joint efforts of ALEC, the Koch brothers, the Walton family, and other billionaires who want to pull the ladder up behind them and claim that school choice will solve the economic disparity that benefits them.

Early in her tenure as Secretary of Education, Betsy DeVos admitted that she is not a “numbers person.” She is also not a research person. The research shows that none of her favorite reforms improve education. But that never deters her. When the U.S. Department of Education study of the D.C. voucher program showed that the students actually lost ground as compared to their public school peers, she didn’t care. Nonetheless, she did recently cite a study from the Urban Institute claiming that the Florida tax credit program (vouchers) produced higher enrollments in college.

William Mathis, research director of the National Education Policy Center and Vice-Chair of the Vermont Board of Education, took a closer look and found that the study does not prove what she thinks it does: it offers no support for vouchers because of the confounding variable of selection effects. Someone at the Department should explain to her what a “variable” is and what “selection effects” are.

Do Private Schools Increase College Enrollments for Poor Children?

A Closer Look at the Urban Institute’s Florida Claims

William J. Mathis

A review of:

Chingos, Matthew M. and Kuehn, Daniel (September 2017). The Effects of Statewide Private School Choice on College Enrollment and Graduation: Evidence from the Florida Tax Credit Scholarship Program. Urban Institute. 52 pp.

The Urban Institute reports that low-income students who attended a private school in pre-collegiate grades on a Florida tax credit scholarship (“neovouchers”) had higher percentage enrollments in community colleges than traditional public school students. Using language such as the “impact of” and “had substantial positive impacts,” the findings are presented as causal. This purported effect was not found by the study’s authors in four-year institutions or in the awarding of degrees – just in matriculation to community colleges.

Nevertheless, school choice advocates hailed this report as good news on the heels of recent negative statewide school voucher reports coming out of Louisiana, Indiana, D.C., and Ohio. And while community colleges are non-selective, most would agree that increased community college attendance is a good thing.

That said, a closer look indicates there is less to this latest report than first meets the eye. The primary problem—selection effects—is obliquely acknowledged by the report’s authors but is far too critical to push to the background.

There are at least three important differences that likely exist between the voucher group and the non-voucher group.

• Motivation, Effort, and Seeking Out Education Options – The very act of opting to enroll in a private school signals a very significant difference between the groups. Such an action requires considerable effort on the part of parents and students in selecting, applying, and transporting the child to the private school. These private school parents demonstrate, almost by definition, a higher involvement in their child’s education. Logically, these families would also be more likely to seek out community college options.

• Finances – While the program is available only to less affluent families, private schools can charge an amount higher than the $6,000 maximum available through the neovoucher. (Currently, eligibility rules require that the student’s household income not exceed 260 percent of the federal poverty level). Parents who can arrange or pay these supplemental tuition and fees to attend a private school represent the upper economic end of this means-tested group.

• Admissions – Private schools can continue their usual admissions policies, which may exclude children with special needs or deny admission on the basis of other characteristics. We cannot know the specific differences this introduces between the treatment and comparison groups, but we can be reasonably certain that these differences exist.

The study is based on “matching” private school students with traditional public school students and then comparing the two groups. While matching is a common technique in voucher research, troubles arise when trying to pair up each student with her doppelganger from the other camp. As the authors acknowledge, “the quality of any matching can vary” (p. 12). While the researchers did an admirable job of matching, the entire process runs the risk of omitting very important, potentially determinative variables, as described above.
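To see why matching cannot rescue a comparison from selection effects, here is a minimal sketch in Python. The data and variable names are entirely hypothetical, invented for illustration; this is not the study’s actual procedure. It simulates families who self-select into a “treatment” partly on an unobserved trait (say, motivation), then matches each treated student to the most similar control on the observed variable only:

```python
# Minimal sketch, with hypothetical data: matching on what you can
# observe does not balance what you cannot. True treatment effect is 0.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

observed = rng.normal(size=n)    # e.g., prior test score (measured)
unobserved = rng.normal(size=n)  # e.g., family motivation (not measured)

# Self-selection: higher unobserved motivation -> more likely to opt in.
treated = rng.random(n) < 1 / (1 + np.exp(-unobserved))

# Outcome depends on both traits but NOT on treatment itself.
outcome = 0.5 * observed + 0.5 * unobserved + rng.normal(size=n)

# Nearest-neighbor matching on the OBSERVED variable only.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
dist = np.abs(observed[t_idx][:, None] - observed[c_idx][None, :])
matched = c_idx[dist.argmin(axis=1)]

gap = outcome[t_idx].mean() - outcome[matched].mean()
print(f"Apparent 'voucher effect' after matching: {gap:.2f}")  # well above 0
```

Even with well-matched observed scores, the treated group still looks better, because the trait that drove enrollment also drives the outcome. That is the selection effect Mathis describes.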

The study’s regression analysis also attempts to control for differences among students. In theory, an absolutely inclusive model can “confirm” a theory, and thus the researcher can claim a causal effect. But that’s a slippery slope. Regression is simply multiple correlation – and despite many inferences in the report, that is not causation. This is particularly true in this case, where selection effects are so strong.
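The same caveat applies to regression. The sketch below, continuing the hypothetical simulation above, runs an ordinary least-squares regression of the outcome on treatment plus the observed control; the omitted selection-related variable still produces a spurious “effect”:

```python
# Minimal sketch, hypothetical data: regression "controls" only adjust
# for the variables you include. The omitted selection trait biases the
# treatment coefficient even though the true effect is 0.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

observed = rng.normal(size=n)
unobserved = rng.normal(size=n)
treated = (rng.random(n) < 1 / (1 + np.exp(-unobserved))).astype(float)
outcome = 0.5 * observed + 0.5 * unobserved + rng.normal(size=n)

# OLS of outcome on [intercept, treated, observed]; unobserved is omitted.
X = np.column_stack([np.ones(n), treated, observed])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"Regression 'effect' of treatment: {coef[1]:.2f}")  # biased above 0
```

Correlation dressed up as regression is still correlation; no amount of model fitting identifies a causal effect when selection is this strong.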

In summary, it is the selection effects that primarily limit the study. A reasonable interpretation of the data is simply that the difference between the groups in their enrollment rates at community college is primarily due to different characteristics of families and students. In any case, the claim of private schools causing higher community-college attendance rates—let alone high college attendance in general—is a reach too far.

The National Education Policy Center reviewed CREDO’s latest report on ranking charter organizations and found it wanting.

CREDO Report Fails to Build Upon Prior Research in Creating Charter School Classification System

Key Review Takeaway: Report overstates its findings, ignores relevant literature, and fails to address known methodological issues, suggesting an agenda other than sound policymaking.

NEPC Review: http://nepc.colorado.edu/thinktank/review-CMOs

Report Reviewed: https://credo.stanford.edu/pdfs/CMO FINAL.pdf

Contact:
William J. Mathis: (802) 383-0058, wmathis@sover.net
Gary Miron: (269) 599-7965, gary.miron@wmich.edu

Learn More:

NEPC Resources on Charter Management Organizations

BOULDER, CO (September 7, 2017) – Charter Management Organizations 2017, written by James Woodworth, Margaret Raymond, Chunping Han, Yohannes Negassi, W. Payton Richardson, and Will Snow, and released by the Center for Research on Education Outcomes (CREDO), assessed the impact of different types of charter school-operating organizations on student outcomes in 24 states, plus New York City and Washington, D.C. The study finds that students in charter schools display slightly greater gains in performance than their peers in traditional public schools, especially students in charter schools operated by certain types of organizations.

Gary Miron and Christopher Shank of Western Michigan University reviewed the report and found CREDO’s distinctions between organization types to be arbitrary and unsupported by other research in the field. This raises concerns about the practical utility of the CREDO findings.

In addition, Miron and Shank contend that CREDO researchers made several dubious methodological decisions that threaten the validity of the study. A number of these problems have been raised in reviews of prior CREDO studies. Specifically, CREDO studies have been criticized for:

Over-interpreting small effect sizes;

Failing to justify the statistical assumptions underlying the group comparisons made;

Not taking into account or acknowledging the large body of charter school research beyond CREDO’s own work;

Ignoring the limitations inherent in the research approach they have taken, or at least failing to clearly communicate limitations to readers.

These problems have not only gone unaddressed in Charter Management Organizations 2017, but have been compounded by the CREDO researchers’ confusing and illogical charter organization classification system. As a result, the reviewers conclude that the report is of limited value. Policymakers should interpret the report’s general findings about charter school effectiveness with extreme caution, but might find CREDO’s work useful as a tool to understand how specific charter school management organizations perform relative to their peers.

Find the review, by Gary Miron and Christopher Shank, at:
http://nepc.colorado.edu/thinktank/review-CMOs

Find Charter Management Organizations 2017, by James Woodworth, Margaret Raymond, Chunping Han, Yohannes Negassi, W. Payton Richardson, and Will Snow, published by CREDO, at:
https://credo.stanford.edu/pdfs/CMO FINAL.pdf

The National Education Policy Center (NEPC) Think Twice Think Tank Review Project (http://thinktankreview.org) provides the public, policymakers, and the press with timely, academically sound reviews of selected publications. The project is made possible in part by support provided by the Great Lakes Center for Education Research and Practice.

The National Education Policy Center (NEPC), housed at the University of Colorado Boulder School of Education, produces and disseminates high-quality, peer-reviewed research to inform education policy discussions. Visit us at: http://nepc.colorado.edu

I am reposting this because the original omitted the link to the article. I went to the car repair shop and the computer repair shop today, and wrote this post while pausing in a coffee shop between repairs. Carol Burris’s article links to the original study, which has the ironic title “In Pursuit of the Common Good: The Spillover Effects of Charter Schools on Public School Students of New York City.” Ironic, since charter schools have nothing to do with the common good.

Recently, a study was released that made the absurd claim that public schools make academic gains when a charter opens close to them or is co-located in their building. To those of us who have seen co-located charters take away rooms previously used for the arts, dance, science, or resource rooms for students with disabilities, the finding seemed bizarre, as did the contention that draining away the best students from neighborhood public schools was a good thing for the losing school.

The rightwing DeVos-funded media eagerly reported this “finding,” without digging deeper. Why should they? It propagated a myth they wanted to believe.

The author of this highly politicized study is Sarah Cordes of Temple University.

Carol Burris, executive director of the Network for Public Education and a former principal, is a highly skilled researcher. She reviewed Cordes’ findings and determined they were vastly overstated. Her review of Cordes’ study was peer-reviewed by some of the nation’s most distinguished researchers.

Burris writes:

“Cordes attempted to measure the effects of competition from a charter school on the achievement, attendance and grade retention of students in nearby New York City public schools. In addition, she sought to identify the cause of any effects she might find.”

She did not take into account the high levels of mobility among New York City public school students, especially the most disadvantaged.

But worse, the effects she found are very small compared with those of other interventions:

“Upon completing her analysis, Cordes concludes that “the introduction of charter schools within one mile of a TPS increases the performance of TPS students on the order of 0.02 standard deviations (sds) in both math and English Language Arts (ELA).”

“To put that effect size in perspective, if you lower class size, you find the effect on achievement to be ten times greater (.20) than being enrolled in a school within one mile of a charter school. Reading programs that focus on processing strategies have an effect size of nearly .60. And direct math instruction (effect size .61) with strong teacher feedback (effect size .75) has strong benefits for math achievement[2]. With a .02 effect size, the effect of being enrolled in a school located near a charter school is akin to increasing your height by standing on a few sheets of paper.”
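To put all of the effect sizes Burris cites on one scale, here is a minimal sketch that converts each to the percentile a median student would move to, assuming normally distributed scores (the labels are my paraphrases of the interventions quoted above):

```python
# Convert effect sizes (in standard deviations) to percentile movement
# for a median student, assuming normally distributed scores.
from scipy.stats import norm

effects = {
    "school near a charter (Cordes)": 0.02,
    "smaller class sizes": 0.20,
    "reading strategy programs": 0.60,
    "direct math instruction": 0.61,
    "strong teacher feedback": 0.75,
}

for label, d in effects.items():
    pct = norm.cdf(d) * 100  # percentile of the shifted median student
    print(f"{label:32s} {d:.2f} sd -> {pct:.1f}th percentile")
```

The charter-proximity effect moves the median student less than one percentile point; the classroom interventions move her roughly 8 to 27 points.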

Burris noted that what really mattered was money:

“Although it appears that Cordes found very small achievement gains in a public school if a charter is located within a half mile, that correlation does not tell us why those gains occurred. To answer that question, Cordes looked at an array of factors — demographics, school spending, and parent and teacher survey data about school culture and climate.

There was only ONE standout factor that rose to the commonly accepted level of statistical significance — money.”

Burris concludes that journalists need to check other sources before believing “studies” and “reports” that make counter-intuitive claims:

“The bottom line is that Sarah Cordes found what every researcher before her found — “competition” from charters has little to no effect on student achievement in traditional public schools. It also found that when it comes to learning, money matters as evidenced by increased spending, especially in co-located schools.

“Most reporters generally lack advanced skills in research methods and statistics. They depend on abstracts and press releases, not having the expertise to look with a critical eye themselves. But it does not take a lot of expertise to see the problems with this particular study.”

Sarah Cordes’ “study” will serve the purposes of Trump and DeVos and others who are trying to destroy the common good. Surely, that was not her intention. Perhaps her dissertation advisors at New York University could have helped her develop a sounder statistical analysis. It seems obvious that the public schools that have been closed to make way for charters received no benefit at all–and they are not included in the study.

Steven Singer wrote a great post about a study by corporate reformers proving that they are wrong. Will they care that one of their favorite tactics is a failure? Of course not.

https://gadflyonthewallblog.wordpress.com/2017/08/26/study-closing-schools-doesnt-increase-test-scores/

Open the link to read it all and to see the links he cites.

He writes:

“You might be tempted to file this under ‘No Shit, Sherlock.’

“But a new study found that closing schools where students achieve low test scores doesn’t end up helping them learn. Moreover, such closures disproportionately affect students of color.

“What’s surprising, however, is who conducted the study – corporate education reform cheerleaders, the Center for Research on Education Outcomes (CREDO).

“Like their 2013 study that found little evidence charter schools outperform traditional public schools, this year’s research found little evidence for another key plank in the school privatization platform.

“These are the same folks who have suggested for at least a decade that THE solution to low test scores was to simply close struggling public schools, replace them with charter schools and voilà.

“But now their own research says “no voilà.” Not to the charter part. Not to the school closing part. Not to any single part of their own backward agenda.

“Stanford-based CREDO is funded by the Hoover Institution, the Walton Foundation and testing giant Pearson, among others. They have close ties to the KIPP charter school network and privatization propaganda organizations like the Center for Education Reform.

“If THEY can’t find evidence to support these policies, no one can!

“After funding one of the largest studies of school closures ever conducted, looking at data from 26 states from 2003 to 2013, they could find zero support that closing struggling schools increases student test scores.

“The best they could do was find no evidence that it hurt.

“But this is because they defined student achievement solely by raw standardized scores. No other measure – not student grades, not graduation rates, attendance, support networks, community involvement, not even improvement on those same assessments – nothing else was even considered.

“Perhaps this is due to the plethora of studies showing that school closures negatively impact students in these ways. Closing schools crushes the entire community economically and socially. It affects students well beyond academic achievement.”

Mark Weber, aka the blogger Jersey Jazzman, is getting his doctorate in research and statistics while teaching in a public school in New Jersey. He is a sharp critic of shoddy research, especially when it comes to the fantastical claims made on behalf of charter schools.

In his latest post, he asks why CREDO, the charter-evaluating institute at Stanford University run by Macke Raymond, continues to use a metric that has never been validated.

Journalists who have little expertise in evaluating research claims eagerly take up the claim that School X produces an additional “number of days of learning.”

It happened most recently in Texas, where charter schools finally managed to match the test scores of public schools (you know, those “failing schools” that charter schools are supposed to rescue).

He shows how the Texas study reports gains in “days of learning,” which are then translated into claims of “substantial” improvement. But, as JJ shows, the gains are actually very small and might more accurately be described as “tiny.”

He writes:

Stanley Pogrow published a paper earlier this year that didn’t get much attention, and that’s too bad. Because he quite rightly points out that it’s much more credible to describe results like the ones reported here as “small” than as substantial. 0.03 standard deviations is tiny: plug it in here and you’ll see it translates into moving from the 50th to the 51st percentile (the most generous possible interpretation when converting to percentiles).

I have been working on something more formal than a blog post to delve into this issue. I’ve decided to publish an excerpt now because, frankly, I am tired of seeing “days of learning” conversions reported in the press and in research — both peer-reviewed and not — as if there was no debate about their validity.

The fact is that many people who know what they are talking about have a problem with how CREDO and others use “days of learning,” and it’s well past time that the researchers who make this conversion justify it.
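As a quick check on how the same number can sound tiny or substantial, here is a minimal sketch. It assumes normally distributed scores and the frequently cited rule of thumb of roughly 0.25 standard deviations per 180-day school year; that rule is a stand-in for the kind of conversion behind “days of learning” tables, an assumption here, not CREDO’s published procedure:

```python
# One small effect size, two descriptions. Assumes normal scores and a
# hypothetical conversion of 0.25 sd per 180-day school year (a stand-in
# for "days of learning" tables, not CREDO's exact method).
from scipy.stats import norm

effect_sd = 0.03  # the gain discussed above

percentile = norm.cdf(effect_sd) * 100
days = effect_sd * (180 / 0.25)

print(f"{effect_sd} sd: 50th -> {percentile:.1f}th percentile")  # ~51st
print(f"{effect_sd} sd: about {days:.0f} 'days of learning'")    # sounds big
```

A one-percentile nudge and “22 days of learning” are the same result; only the packaging differs.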

Jersey Jazzman calls on Macke Raymond and the staff at CREDO to justify their use of this measurement. The “days of learning” metric, he says, inflates the actual changes.

The concept of days of learning, he says, is based on the work of economist Erik Hanushek of the Hoover Institution at Stanford. It may be coincidental that Hanushek is Macke Raymond’s husband. They are both very smart people. I hope they respond to Mark Weber’s challenge.

We have had an interesting conversation on the blog about the value of AP courses. It was tied to Jay Mathews’ use of AP courses to rank the quality of high schools: the more AP courses, the better the high school.

I have made two points clear. One: when I was in high school in the 1950s, there were no AP courses, and my children graduated from high school without ever taking an AP course; thus, I have no personal experience with AP. Two: I strongly object to the College Board marketing AP courses on the spurious grounds that they promote equity and civil rights. The College Board is making millions by doing so. It should be as honest as those selling cars, beer, and cigarettes.

Our friend and reader “Democracy” posted this comment:

“It sure is interesting that the pro-AP commenters on this thread do not – and cannot – cite any solid evidence that Advanced Placement is any more than hype. To be sure – there are good AP teachers. Also to be sure – as a program – AP just is NOT what the proponents claim. Far from it.

Much of the AP hype that exists in the U.S. can be traced to Jay Mathews, who slobbers unabashedly over the “merits” of AP. Mathews not only misrepresents the research on AP but also publishes the yearly Challenge Index, which purportedly lists the “best” high schools in America based solely on how many AP tests they give.

Jay Mathews writes that one of the reasons his high school rankings “have drawn such attention” is that “only half of students who go to college get to take college-level courses in high school.” What he does NOT say is that another main reason his rankings draw scrutiny is that they are phony; they are without merit. Sadly, far too many parents, educators, and school board members have bought into the “challenge” index that Mathews sells.

The Challenge Index is – and always has been – a phony list that doesn’t do much except to laud AP courses and tests. The Index is based on Jay Mathews’ equally dubious assumption that AP is inherently “better” than other high school classes in which students are encouraged and taught to think critically.

As more students take AP – many more are doing so…they’ve been told that it is “rigor” and it’s college-level – more are failing the tests. In 2010, for example, 43 percent of AP test scores were a 1 or 2. The Kool-Aid drinkers argue that “even students who score poorly in A.P. were better off.” Mathews says this too. But it’s flat-out wrong.

The basis for their claim is a College Board-funded study in Texas. But a more robust study (Dougherty & Mellor, 2010) of AP course and test-takers found that “students – particularly low-income students and students of color – who failed an AP exam were no more likely to graduate from college than were students who did not take an AP exam.” Other studies that have tried to tease out the effects of AP while controlling for demographic variables find that “the impact of the AP program on various measures of college success was found to be negligible.”

More colleges and universities are either refusing to accept AP test scores for credit or limiting credit to a score of 5 on an AP test. The reason is that they find most students awarded credit for AP courses are generally not well prepared.

Former Stanford School of Education Dean Deborah Stipek wrote in 2002 that AP courses were nothing more than “test preparation courses,” and they too often “contradict everything we know about engaging instruction.” The National Research Council, in a study of math and science AP courses and tests agreed, writing that “existing programs for advanced study [AP] are frequently inconsistent with the results of the research on cognition and learning.” And a four-year study at the University of California found that while AP is increasingly an “admissions criterion,” there is no evidence that the number of AP courses taken in high school has any relationship to performance in college.

In The ToolBox Revisited (2006), Clifford Adelman scolded those who had misrepresented his original ToolBox research by citing the importance of AP “in explaining bachelor’s degree completion.” Adelman said, “To put it gently, this is a misreading.” Moreover, in statistically analyzing the factors contributing to the earning of a bachelor’s degree, Adelman found that Advanced Placement did not reach the “threshold level of significance.”

College Board executives often say that if high schools implement AP courses and encourage more students to take them, then (1) more students will be motivated to go to college and (2) high school graduation rates will increase. Researchers Kristin Klopfenstein and Kathleen Thomas “conclude that there is no evidence to back up these claims.”

In fact, the unintended consequences of pushing more AP may lead to just the reverse. As a 2010 book on AP points out, “research…suggests that many of the efforts to push the program into more schools — a push that has been financed with many millions in state and federal funds — may be paying for poorly-prepared students to fail courses they shouldn’t be taking in the first place…not only is money being misspent, but the push may be skewing the decisions of low-income high schools that make adjustments to bring the program in — while being unable to afford improvements in other programs.”

Do some students “benefit” from taking AP courses and tests? Sure. But the students who benefit the most are those already positioned to succeed: “students who are well-prepared to do college work and come from the socioeconomic groups that do the best in college are going to do well in college.”

So, why do students take AP? Because they’ve been told to. Because they’re “trying to look good” to colleges in the “increasingly high-stakes college admission process,” and because, increasingly, “high schools give extra weight to AP courses when calculating grade-point averages, so it can boost a student’s class rank.” It’s become a rather depraved stupid circle.

One student who got caught up in the AP hype cycle – taking 3 AP courses as a junior and 5 as a senior – and got credit for only one AP course in college, reflected on his AP experience. He said nothing about “rigor” or “trying to be educated” or the quality of instruction, but remarked “if i didn’t take AP classes, it’s likely I wouldn’t have gotten accepted into the college I’m attending next year…If your high school offers them, you pretty much need to take them if you want to get into a competitive school. Or else, the admissions board will be concerned that you didn’t take on a “rigorous course load.” AP is a scam to get money, but there’s no way around it. In my opinion, high schools should get rid of them…”

Jay Mathews calls AP tests “incorruptible.” But what do students actually learn from taking these “rigorous” AP tests?

For many, not much. One student remarked, after taking the World History AP test, “dear jesus… I had hoped to never see “DBQ” ever again, after AP world history… so much hate… so much hate.” And another added, “I was pretty fond of the DBQ’s, actually, because you didn’t really have to know anything about the subject, you could just make it all up after reading the documents.” Another AP student related how the “high achievers” in his school approached AP tests:

“The majority of high-achieving kids in my buddies’ and my AP classes couldn’t have given less of a crap. They showed up for most of the classes, sure, and they did their best to keep up with the grades because they didn’t want their GPAs to drop, but when it came time to take the tests, they drew pictures on the AP Calc, answered just ‘C’ on the AP World History, and would finish sections of the AP Chem in, like, 5 minutes. I had one buddy who took an hour-and-a-half bathroom break during World History. The cops were almost called. They thought he was missing.”

An AP reader (grader), one of those “experts” cited by Mathews notes this: “I read AP exams in the past. Most memorable was an exam book with $5 taped to the page inside and the essay just said ‘please, have mercy.’ But I also got an angry breakup letter, a drawing of some astronauts, all kinds of random stuff. I can’t really remember it all… I read so many essays in such compressed time periods that it all blurs together when I try to remember.”

Dartmouth no longer gives credit for AP test scores. It found that 90 percent of those who scored a 5 on the AP psychology test failed a Dartmouth Intro to Psych exam. A 2006 MIT faculty report noted “there is ‘a growing body of research’ that students who earn top AP scores and place out of institute introductory courses end up having ‘difficulty’ when taking the next course.” Mathews called this an isolated study. But two years prior, Harvard “conducted a study that found students who are allowed to skip introductory courses because they have passed a supposedly equivalent AP course do worse in subsequent courses than students who took the introductory courses at Harvard” (Seebach, 2004).

When Dartmouth announced its new AP policy, Mathews ranted and whined that “The Dartmouth College faculty, without considering any research, has voted to deny college credit for AP.” Yet it is Jay who continually ignores and diminishes research that shows that Advanced Placement is not what it is hyped up to be.

In his rant, Mathews again linked to a 2009 column of his extolling the virtues of the book “Do What Works” by Tom Luce and Lee Thompson. In “Do What Works,” Luce and Thompson accepted at face value the inaccuracies spewed in “A Nation At Risk” (the Sandia Report undermined virtually everything in it). They wrote that “accountability” systems should be based on rewards and punishments, and that such systems provide a “promising framework, and federal legislation [NCLB] promotes this approach.” Luce and Thompson called NCLB’s 100 percent proficiency requirement “bold and valuable” and “laudable” and “significant” and “clearly in sight.” Most knowledgeable people called it stupid and impossible.

Luce and Thompson wrote that “data clearly points to an effective means” to increase AP participation: “provide monetary rewards for students, teachers, and principals.” This flies in the face of almost all contemporary research on motivation and learning.

As I’ve noted before, College Board-funded research is more than simply suspect. The College Board continues to perpetrate the fraud that the SAT actually measures something important other than family income. It doesn’t. Shoe size would work just as well.

[For an enlightening read on the SAT, see: http://www.theatlantic.com/magazine/archive/2005/11/the-best-class-money-can-buy/4307/]

The College Board produced a “study” purporting to show that PSAT scores predicted AP test scores. A seemingly innocuous statement, however, undermined its validity. The authors noted that “the students included in this study are of somewhat higher ability than…test-takers” in the population to which they generalized. That “somewhat higher ability” actually meant students in the sample were a full standard deviation above the 9th and 10th graders who took the PSAT. Even then, the basic conclusion of the “study” was that students who scored well on the PSAT had about a 50-50 chance of getting a “3,” the equivalent of a C-, on an AP test.

A new (2013) study from Stanford notes that “increasingly, universities seem to be moving away from awarding credit for AP courses.” The study pointed out that “the impact of the AP program on various measures of college success was found to be negligible.” And it adds this: “definitive claims about the AP program and its impact on students and schools are difficult to substantiate.” But you wouldn’t know that by reading Jay Mathews or listening to the College Board, which derives more than half of its income from AP.

What the College Board doesn’t like to admit is that it sells “hundreds of thousands of student profiles to schools; they also offer software and consulting services that can be used to set crude wealth and test-score cutoffs, to target or eliminate students before they apply…That students are rejected on the basis of income is one of the most closely held secrets in admissions.” Clearly, College Board-produced AP courses and tests are not an “incorruptible standard.” Far from it.

The College Board routinely coughs up “research studies” to show that their test products are valid and reliable. The problem is that independent, peer-reviewed research doesn’t back them up. The SAT and PSAT are shams. Colleges often use PSAT scores as a basis for sending solicitation letters to prospective students. However, as a former admissions officer noted, “The overwhelming majority of students receiving these mailings will not be admitted in the end.” But the College Board rakes in cash from the tests, and colleges keep all that application money.

Some say – and it sure does look that way – that the College Board, in essence, has turned the admissions process “into a profit-making opportunity.”

Mathews complains about colleges that no longer award AP credit. He asks (wink), “Why drop credit for all AP subjects without any research?” Yet again and again he discounts all the research.

Let’s do a quick research review.

A 2002 National Research Council study of AP courses and tests was an intense two-year, 563-page detailed content analysis. The main study committee comprised 20 members who are not only experts in their fields but also top-notch researchers. Most also write on effective teaching and learning. Even more experts were involved on content panels for each discipline (biology, chemistry, physics, math), plus NRC staff. Mathews didn’t like the fact that the researchers concluded that AP courses and tests were a “mile wide and an inch deep” and did not comport with well-established, research-based principles of learning. He dismissed that study as the cranky “opinion of a few college professors.”

The main finding of a 2004 Geiser and Santelices study was that “the best predictor of both first- and second-year college grades” is unweighted high school grade point average, and a high school grade point average “weighted with a full bonus point for AP…is invariably the worst predictor of college performance.” And yet – as commenters noted here – high schools add on the bonus. The state of Virginia requires it.

Klopfenstein and Thomas (2005) found that AP students are “…generally no more likely than non-AP students to return to school for a second year or to have higher first semester grades.” Moreover, they write that “close inspection of the [College Board] studies cited reveals that the existing evidence regarding the benefits of AP experience is questionable,” and “AP courses are not a necessary component of a rigorous curriculum.” In other words, there’s no need for the AP imprimatur to have thoughtful, inquiry-oriented learning.

Phillip Sadler said in 2009 that his research found “students who took and passed an A.P. science exam did about one-third of a letter grade better than their classmates with similar backgrounds who did not take an A.P. course.” Sadler also wrote in the 2010 book “AP: A Critical Examination” that “Students see AP courses on their transcripts as the ticket ensuring entry into the college of their choice,” yet, “there is a shortage of evidence about the efficacy, cost, and value of these programs.” Sadly, AP was written into No Child Left Behind and Race to the Top and it is very much a mainstay of corporate-style education “reform,” touted by the likes of ExxonMobil and the US Chamber of Commerce.

For years, Mathews misrepresented Clifford Adelman’s 1999 ToolBox. As Klopfenstein and Thomas wrote in 2005, “it is inappropriate to extrapolate about the effectiveness of the AP Program based on Adelman’s work alone.” In the 2006 ToolBox Revisited, Adelman issued his own rebuke:

“With the exception of Klopfenstein and Thomas (2005), a spate of recent reports and commentaries on the Advanced Placement program claim that the original ToolBox demonstrated the unique power of AP course work in explaining bachelor’s degree completion. To put it gently, this is a misreading.”

The book AP: A Critical Examination (2010) lays out the research that makes clear AP has become “the juggernaut of American high school education,” but “the research evidence on its value is minimal.” It is the academic equivalent of DARE (Drug Abuse Resistance Education). DARE cranks out “research” that shows its “effectiveness,” yet those studies fail to withstand independent scrutiny. DARE operates in more than 80 percent of U.S. school districts, and it has received hundreds of millions of dollars in subsidies. However, the General Accounting Office found in 2003 that “the six long-term evaluations of the DARE elementary school curriculum that we reviewed found no significant differences in illicit drug use between students who received DARE in the fifth or sixth grade (the intervention group) and students who did not (the control group).”

AP may work well for some students, especially those who are already “college-bound to begin with” (Klopfenstein and Thomas, 2010). As Geiser (2007) notes, “systematic differences in student motivation, academic preparation, family background and high-school quality account for much of the observed difference in college outcomes between AP and non-AP students.” College Board-funded studies do not control well for these student characteristics (even the College Board concedes that “interest and motivation” are keys to “success in any course”). Klopfenstein and Thomas (2010) find that when these demographic characteristics are controlled for, the claims made for AP disappear.

I’m left wondering about this wonder school where “several hundred freshmen” take “both AP World and AP Psychology” and where, by graduation, students routinely “knock out the first 30 or more college credits.”

Where is this school, and what is its name? Jay Mathews will surely be interested.”

Voucher advocates have protected D.C.’s voucher program, known as “Opportunity Scholarships,” since it was created in 2004, despite the lack of strong evidence of its benefits. Evaluations have found little or no improvement in test scores, and this new evaluation shows negative effects on test scores in the elementary grades for students who enrolled in voucher schools. That echoes studies in Louisiana, Indiana, and Ohio, where voucher students lost ground compared with peers who were offered vouchers but stayed in public schools.

In the past, the D.C. evaluation team was led by Patrick Wolf of the University of Arkansas, the high temple of school choice. The team for this new study was led by Mark Dynarski of Pemberton Research and a group of Westat researchers. Dynarski, you may recall, wrote a paper for the Brookings Institution calling attention to the negative impact of vouchers in Louisiana and Indiana. Previous evaluations showed higher graduation rates in voucher schools, but also, as is now customary in voucher schools, high rates of attrition: among the students who do not leave and return to public schools, the graduation rate is higher.

The Washington Post reports:

Students in the nation’s only federally funded school voucher initiative performed worse on standardized tests within a year after entering D.C. private schools than peers who did not participate, according to a new federal analysis that comes as President Trump is seeking to pour billions of dollars into expanding the private school scholarships nationwide.

The study, released Thursday by the Education Department’s research division, follows several other recent studies of state-funded vouchers in Louisiana, Indiana and Ohio that suggested negative effects on student achievement. Critics are seizing on this data as they try to counter Trump’s push to direct public dollars to private schools.

Vouchers, deeply controversial among supporters of public education, are direct government subsidies parents can use as scholarships for private schools. These payments can cover all or part of the annual tuition bills, depending on the school.

Education Secretary Betsy DeVos has long argued that vouchers help poor children escape from failing public schools. But Sen. Patty Murray (Wash.), the top Democrat on the Senate Education Committee, said that DeVos should heed the department’s Institute of Education Sciences. Given the new findings, Murray said, “it’s time for her to finally abandon her reckless plans to privatize public schools across the country.”

DeVos defended the D.C. program, saying it is part of an expansive school-choice market in the nation’s capital that includes a robust public charter school sector.

“When school choice policies are fully implemented, there should not be differences in achievement among the various types of schools,” she said in a statement. She added that the study found that parents “overwhelmingly support” the voucher program “and that, at the same time, these schools need to improve upon how they serve some of D.C.’s most vulnerable students.”

DeVos’ statement suggests that neither vouchers nor charters will ever outperform public schools. The goal of choice is choice itself: not better academic achievement, not better education, not “saving poor kids from failing schools,” but choice for its own sake.

We can always count on researchers at the National Education Policy Center to review reports issued by think tanks and advocacy groups, some of which are the same.

This review analyzes claims about Milwaukee’s voucher schools. It is funny to describe them as successful, since Milwaukee is really the poster city for the failure of school choice. It has had vouchers and charters since 1990, and it scores near the very bottom on the NAEP assessments of urban districts, barely ahead of sad Detroit, another city afflicted by charters. Both cities demonstrate that school choice does not fix the problems of urban education or of urban students and families.

Find Documents:

Press Release: http://nepc.info/node/8612
NEPC Review: http://nepc.colorado.edu/thinktank/review-milwaukee-vouchers
Report Reviewed: http://www.will-law.org/wp-content/uploads/2017/03/apples.pdf

Contact:
William J. Mathis: (802) 383-0058, wmathis@sover.net
Benjamin Shear: (303) 492-8583, benjamin.shear@colorado.edu

Learn More:

NEPC Resources on Accountability and Testing
NEPC Resources on Charter Schools
NEPC Resources on School Choice
NEPC Resources on Vouchers

BOULDER, CO (April 25, 2017) – A recent report from the Wisconsin Institute for Law and Liberty attempts to compare student test score performance for the 2015-16 school year across Wisconsin’s public schools, charter schools, and private schools participating in one of the state’s voucher programs. Though it highlights important patterns in student test score performance, the report’s limited analyses fail to provide answers as to the relative effectiveness of school choice policies.

Apples to Apples: The Definitive Look at School Test Scores in Milwaukee and Wisconsin was reviewed by Benjamin Shear of the University of Colorado Boulder.

Comparing a single year’s test scores across school sectors that serve different student populations is inherently problematic. One fundamental problem of isolating variations in scores that might be attributed to school differences is that the analyses must adequately control for dissimilar student characteristics among those enrolled in the different schools. The report uses linear regression models that use school-level characteristics to attempt to adjust for these differences and make what the authors claim are “apples to apples” comparisons. Based on these analyses, the report concludes that choice and charter schools in Wisconsin are more effective than traditional public schools.

Unfortunately, the limited nature of the available data undermines any such causal conclusions. The small number of school-level variables included in the regression models cannot control for important confounding variables, most notably prior student achievement. Further, the use of aggregate percent-proficient metrics masks variation in performance across grade levels and makes the results sensitive to the (arbitrary) location of the proficiency cut scores. The report’s description of methods and results also includes some troubling inconsistencies. For example, the report attempts to use a methodology known as “fixed effects” to analyze test score data in districts outside Milwaukee, but such a methodology is not possible with the data described in the report.
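To see why the location of the cut score matters so much, here is a minimal sketch with hypothetical numbers: two sectors whose score distributions differ by a constant 0.1 standard deviations, compared on percent proficient under three different cut scores:

```python
# Hypothetical illustration: a constant 0.1 sd difference between two
# sectors yields very different percent-proficient "gaps" depending on
# where the proficiency cut score happens to sit.
from scipy.stats import norm

shift = 0.1  # constant mean difference between sectors, in sd units

for cut in (-1.5, 0.0, 1.5):  # low, middle, and high cut scores
    pct_a = (1 - norm.cdf(cut)) * 100          # sector A percent proficient
    pct_b = (1 - norm.cdf(cut - shift)) * 100  # sector B percent proficient
    print(f"cut = {cut:+.1f} sd: gap = {pct_b - pct_a:.1f} percentage points")
```

The underlying difference never changes, yet the headline gap more than triples when the cut score sits near the middle of the distribution, which is why percent-proficient comparisons are a shaky basis for ranking school sectors.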

Thus, concludes Professor Shear, while the report does present important descriptive statistics about test score performance in Wisconsin, it wrongly claims to provide answers for those interested in determining which schools or school choice policies in Wisconsin are most effective.

Find the review by Benjamin Shear at:

http://nepc.colorado.edu/thinktank/review-milwaukee-vouchers

Find Apples to Apples: The Definitive Look at School Test Scores in Milwaukee and Wisconsin, by Will Flanders, published by the Wisconsin Institute for Law and Liberty, at:

http://www.will-law.org/wp-content/uploads/2017/03/apples.pdf