Archives for category: Research

John Thompson, historian and retired teacher in Oklahoma, reviews SLAYING GOLIATH in two parts.

This is part one. 

He begins:

Diane Ravitch’s Slaying Goliath: The Passionate Resistance to Privatization and the Fight to Save America’s Public Schools is the history of the rise and fall of corporate school reform, but it is much more. It isn’t that surprising that a scholar like Ravitch, like so many researchers and practitioners, predicted over a decade ago how data-driven, competition-driven “reform” would fail. Technocratic “reformers,” whom Ravitch calls “Goliath,” started with a dubious hunch, that social engineering a “better teacher” could overcome poverty and inequality, and then ignored the science that explained why evaluating teachers based on test-score growth would backfire.

It may be surprising that Ravitch, an academic who had once worked in the Education Department of President George H. W. Bush and served on the board of the conservative Fordham Foundation, became the leader of the grassroots uprising of parents, students, and educators that she dubs the “Resistance.” But it soon became clear why Ravitch inspired and guided the Resistance. In contrast to Goliath, which “ignored decades of research” and assumed the worst of its opponents, Ravitch respected and listened to practitioners and patrons.

One big surprise, which is explained in Slaying Goliath, is how Ravitch presciently understood why output-driven, charter-driven reform devolved into “privatization.” She had firsthand experience with the hubris of the Billionaire Boys Club, understanding the danger of their desire to hurriedly “scale up” transformational change. And being an accomplished scholar, she had insights into how top-down technocrats’ embrace of behaviorism in the tradition of Edward Thorndike and B.F. Skinner led to their commitment to “rigidly prescribed conditioning via punishments and rewards.”

Ravitch was among the first experts to fully grasp how, “Behaviorists, and the Disrupters who mimic them today, lack appreciation for the value of divergent thinking, and the creative potential of variety. And they emphatically discount mere ‘feelings.’”

Ravitch witnessed how corporate reformers “admire disruptive innovation, because high-tech businesses do it, so it must be good.” Rather than take the time to heed the wisdom of those who had no choice but to become Resisters, Goliath’s contempt for those in the classroom drove an evolution from “creative destruction” to “Corporate Disruption.”

Disrupters were in such a rush that they used children as “guinea pigs in experiments whose negative results are clear.” But they “never admit failure,” and remain oblivious to the fact that “The outcome of disruption was disruption, not better education.”  And these billionaires not only continue to “fund a hobby injurious to the common good.” They’ve ramped up their assault on public education and its defenders, perpetuating a “direct assault on democracy.”

Ravitch predicts, “Historians will look back and wonder why so many wealthy people spent so much money in a vain attempt to disrupt and privatize public education and why they ignored income inequality and wealth inequality that were eating away at the vitals of American society.”

Thompson goes on to tell some of the important events in which I was a participant, such as the decision within the first Bush administration to trash the infamous Sandia Report, which disputed the dire findings of “A Nation at Risk.” And my discussions with Albert Shanker about what charter schools should be in the American system: he saw them as part of a school district, operating with the approval of their peers as collaborators, as R&D labs, not as competitors for funding and students. And my 2011 meeting at the Obama White House, when the top officials asked what I thought of Common Core and I urged them to launch field trials; they rejected the idea out of hand.

And Thompson quickly understood that, unlike the Disrupters, who wanted to reinvent and disrupt the public schools, I listened to practitioners. I assumed they knew far more than I, and I was right about that. I understood the negative effects of NCLB and the Race to the Top because I saw them through the eyes of those who had to implement shoddy ideas.

Thompson concludes:

Ravitch observes that, in contrast to the Resistance, “So long as billionaires, hedge fund managers, and their allies are handing out money, there will be people lined up to take it. But their transactions cannot be confused with a social movement.” Moreover, “The most important lesson of the past few decades is that ‘Reform’ doesn’t mean reform. It means mass demoralization, chaos, and turmoil. Disruption does not produce better education.”

I’ll conclude this post with Ravitch’s words on the two dogmas that the Disruption movement relied on:

First, the benefits of standardization, and second, the power of markets. Their blind adherence to these principles has been disastrous in education. These principles don’t work in schools for the same reasons they don’t work for families, churches, and other institutions that function primarily on the basis of human interactions, not profits and losses.

The two most distinguished education researchers in the nation are Gene V. Glass and David C. Berliner, both of whom have held the highest positions in their profession and are universally admired for their careful research and long history of defending the highest standards in the research community.

Together they wrote an essay-review of my book SLAYING GOLIATH.

The review can also be accessed here.

They found the book to be fair-minded and unbiased. And they liked it a lot!

They did some genealogical research about me and my family.

They refer to this blog as “the most influential communications medium in the history of public education.”

They describe the book as “the efforts of a historian to find the facts and follow where they lead.”

They write, “We sincerely thank Ravitch for her careful documentation of the greed, anti-democratic actions, and just plain stupidity displayed by so many of our nation’s leading political and business leaders who attempted to fix education….

“In the following, we provide a flavor of the book by brief examples from each chapter. We hope that this whets the appetite for a full reading by anyone concerned with the attacks on public education by those whom Ravitch calls the Goliaths. With her slingshot and stone, she joins a noble battle to preserve this uniquely American invention, which Horace Mann called the greatest invention of mankind….”

I think you will enjoy their insights, as when they indict Common Core as Bill Gates’ biggest folly, concluding that his love for standardization causes him to confuse schooling with DOS, the Microsoft operating system. They say that the “philanthro-capitalists” believe that schools should be run like businesses, like their own businesses. “They ignore the fact that the vast majority of businesses fail. They are incredulous when their schools fail.”

Glass and Berliner have written a valuable review (they are not entirely uncritical, as they still call me to account for the sins of my years on the other side).

I hope you will read it in its entirety.

I am immensely gratified to receive this careful and thoughtful review by two of the nation’s most respected scholars.

Gary Rubinstein has a deep aversion to hypocrisy, hype, and propaganda.

He read a widely publicized report saying “research shows” that graduates of KIPP have higher college completion rates than their peers.

But then he discovered that the research shows no significant difference between KIPP students and their peers in college completion rates. 

His post debunks Richard Whitmire’s erroneous claim that KIPP students finish college at a rate three to five times greater than students who went to public schools. It is also a valuable lesson in reading and interpreting research findings or claims that “research shows.”

He begins:

The way reformers misuse data follows a very simple and predictable plan: First they get some skewed data, then pick a ‘researcher’ to interpret the skewed data. The ‘researcher’ then writes a report which gets touted in The74, EduPost, and eventually even makes it into more mainstream publications like USA Today and The Wall Street Journal. Since the report is filled with nonsense and half-truths, within a few weeks the truth comes out and the report is discredited, but not before the damage is done and the spin has made it into folklore. When this happens, the reformers will ‘move the goalposts,’ get some more skewed data, and start the process over again.

An example of this is the July 2017 report by Richard Whitmire called ‘The Alumni.’ Whitmire has written books about both KIPP and Michelle Rhee, so I think you get the idea of what his point of view is. In this poorly researched project he concludes that “Data Show Charter School Students Graduating From College at Three to Five Times National Average.”

This was probably the easiest report I ever debunked. The biggest flaw was that, for most of the charter schools, they were only counting the percent of graduating seniors who persisted in college and then comparing that percent to the overall percent of all low-income students — an apples-to-oranges comparison. Whitmire acknowledges this in another post about the methodology, in which he says that only KIPP counts students who leave the school before they graduate, and that their numbers are much lower, but still at 38%, which is at least triple the expected graduation rate for low-income students.

A second flaw, and this one is very difficult to compensate for, is that charter school students are not a random sample of all students, since many families choose not to apply to them. So you get a biased sample even if you count all the students who got into the charter school and not just the ones who made it to graduation. And even though I and others have discredited his report, it is something that still gets quoted in the mainstream media.
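The apples-to-oranges flaw described above can be made concrete with a toy calculation. The numbers below are invented for illustration; they do not come from Whitmire’s report or from the Mathematica study:

```python
# Hypothetical cohort illustrating the "survivorship" flaw: reporting
# college completion as a share of charter *graduates* instead of a
# share of everyone who *entered* the school. All numbers are invented.

entering_students = 100   # students who start at the charter school
charter_graduates = 60    # students who stay through 12th grade
college_finishers = 30    # of those graduates, how many finish college

# Rate a report might tout: finishers as a share of charter graduates.
rate_of_graduates = college_finishers / charter_graduates   # 0.50

# Honest rate: finishers as a share of everyone who entered.
rate_of_entrants = college_finishers / entering_students    # 0.30

print(f"{rate_of_graduates:.0%} vs. {rate_of_entrants:.0%}")  # 50% vs. 30%
```

The gap between the two rates grows with the school’s attrition, which is exactly why comparing the first number to a completion rate computed over all low-income students inflates the apparent advantage.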

Just recently, however, I learned of a report generated by Mathematica and funded by the John Arnold Foundation.  I think that Mathematica is a very reputable company and even though reformers often hire them to produce reports, sometimes those reports reach conclusions that reformers were not expecting.

In this case, the report, called “Long-Term Impacts of KIPP Middle Schools on College Enrollment and Early College Persistence,” reached a result that completely contradicts Whitmire’s claim that “Charter School Students Graduating From College at Three to Five Times National Average.”

Read on to see just how overblown the KIPP myth about the college success of their students is.

Here’s the relevant summary of what they found:

[Screenshot: summary table of findings from the Mathematica report]


A group of scholars collaborated on a paper published by the National Bureau of Economic Research that studies how teachers affect student height. It is a wonderful and humorous takedown of the Raj Chetty et al. thesis that the effect of a single teacher in the early grades may determine a student’s future lifetime earnings, her likelihood of graduating from college and living in a higher-SES neighborhood, and even her chances of avoiding teen pregnancy.

When the Chetty study was announced in 2011, a front-page article in the New York Times said:

WASHINGTON — Elementary- and middle-school teachers who help raise their students’ standardized-test scores seem to have a wide-ranging, lasting positive effect on those students’ lives beyond academics, including lower teenage-pregnancy rates and greater college matriculation and adult earnings, according to a new study that tracked 2.5 million students over 20 years.

The paper, by Raj Chetty and John N. Friedman of Harvard and Jonah E. Rockoff of Columbia, all economists, examines a larger number of students over a longer period of time with more in-depth data than many earlier studies, allowing for a deeper look at how much the quality of individual teachers matters over the long term.

“That test scores help you get more education, and that more education has an earnings effect — that makes sense to a lot of people,” said Robert H. Meyer, director of the Value-Added Research Center at the University of Wisconsin-Madison, which studies teacher measurement but was not involved in this study. “This study skips the stages, and shows differences in teachers mean differences in earnings.”

The study, which the economics professors have presented to colleagues in more than a dozen seminars over the past year and plan to submit to a journal, is the largest look yet at the controversial “value-added ratings,” which measure the impact individual teachers have on student test scores. It is likely to influence the roiling national debates about the importance of quality teachers and how best to measure that quality.

Many school districts, including those in Washington and Houston, have begun to use value-added metrics to influence decisions on hiring, pay and even firing….

Replacing a poor teacher with an average one would raise a single classroom’s lifetime earnings by about $266,000, the economists estimate. Multiply that by a career’s worth of classrooms.

“If you leave a low value-added teacher in your school for 10 years, rather than replacing him with an average teacher, you are hypothetically talking about $2.5 million in lost income,” said Professor Friedman, one of the coauthors…

The authors argue that school districts should use value-added measures in evaluations, and to remove the lowest performers, despite the disruption and uncertainty involved.

“The message is to fire people sooner rather than later,” Professor Friedman said.

Professor Chetty acknowledged, “Of course there are going to be mistakes — teachers who get fired who do not deserve to get fired.” But he said that using value-added scores would lead to fewer mistakes, not more.

President Obama hailed the Chetty study in his 2012 State of the Union address.

Value-added teacher evaluation, that is, basing the evaluation of teachers on the rise or fall of their students’ test scores, was a central feature of Arne Duncan’s Race to the Top when it was unveiled in 2010. States had to agree to adopt it if they wanted to be eligible for Race to the Top funding.

When the Los Angeles Times published a value-added ranking of thousands of teachers, teachers said the rankings were filled with error, but Duncan said those who complained were afraid to learn the truth. In Florida, teacher evaluations may be based on the rise or fall of the scores of students the teachers never taught, in subjects they never taught. (About 70% of teachers do not teach subjects that are tested annually to provide fodder for these ratings.) When this nutty process was challenged in court by Florida teachers, the judge ruled that the practice might be unfair but was not unconstitutional.

The fundamental claim of VAM (value-added modeling or measurement) has been repeatedly challenged, most notably by economist Moshe Adler. When put into law, as it was in most states, it was found to be useless, because only tiny percentages of teachers were identified as ineffective, and even the validity of the ratings of that 1-3% was dubious. The use of VAM was frozen by a judge in New Mexico, then tossed out earlier this year by a new Democratic governor. It was banned by a judge in Houston.  A large experiment funded by the Gates Foundation intended to demonstrate the value of VAM produced negative results.

Now comes economic research to test the validity of linking teacher evaluation and student height.


Marianne Bitler, Sean Corcoran, Thurston Domina, and Emily Penner wrote:

NBER Working Paper No. 26480
Issued in November 2019
NBER Program(s): Program on Children; Economics of Education Program

Estimates of teacher “value-added” suggest teachers vary substantially in their ability to promote student learning. Prompted by this finding, many states and school districts have adopted value-added measures as indicators of teacher job performance. In this paper, we conduct a new test of the validity of value-added models. Using administrative student data from New York City, we apply commonly estimated value-added models to an outcome teachers cannot plausibly affect: student height. We find the standard deviation of teacher effects on height is nearly as large as that for math and reading achievement, raising obvious questions about validity. Subsequent analysis finds these “effects” are largely spurious variation (noise), rather than bias resulting from sorting on unobserved factors related to achievement. Given the difficulty of differentiating signal from noise in real-world teacher effect estimates, this paper serves as a cautionary tale for their use in practice.
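The statistical mechanism the abstract describes, noise in class-level averages masquerading as teacher “effects,” can be sketched with a toy simulation. This is only an illustration of the general point, not the authors’ model, which uses real New York City data and richer controls:

```python
import random
import statistics

random.seed(42)

# Toy illustration: estimate per-teacher "effects" on an outcome
# (height) that teachers cannot plausibly influence.
NUM_TEACHERS = 100
CLASS_SIZE = 25

# Every student's height is drawn from the same distribution,
# independent of which teacher they have (mean 150 cm, sd 7 cm).
classes = [[random.gauss(150, 7) for _ in range(CLASS_SIZE)]
           for _ in range(NUM_TEACHERS)]

grand_mean = statistics.mean(h for c in classes for h in c)

# Naive "value-added": each teacher's class mean minus the grand mean.
effects = [statistics.mean(c) - grand_mean for c in classes]

# Even with zero true teacher effect, the estimated effects spread out;
# their standard deviation is roughly sd / sqrt(class size) = 7/5 = 1.4 cm.
print(round(statistics.stdev(effects), 2))
```

Rank the teachers by these estimates and you get what looks like a real distribution of teacher quality, produced entirely by sampling noise — which is why the authors call their result a cautionary tale.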


Help the Network for Public Education stay strong for America’s public school students, teachers, and schools!

Be generous!

We depend on you!


Today the National Education Policy Center released its annual review of research on virtual charter schools. The bottom line was not good.

The title of the report is “Virtual Schools in the U.S. 2019.” It was double-blind peer-reviewed.

The authors write:

The number of virtual schools in the U.S. continues to grow.

In 2017-18, 501 full-time virtual schools enrolled 297,712 students, and 300 blended schools enrolled 132,960. Enrollments in virtual schools increased by more than 2,000 students between 2016-17 and 2017-18, and enrollments in blended learning schools increased by over 16,000 during this same time period. Virtual schools enrolled substantially fewer minority students and fewer low-income students compared to national public school enrollment.

Virtual schools operated by for-profit EMOs were more than four times as large as other virtual schools, enrolling an average of 1,345 students. In contrast, those operated by nonprofit EMOs enrolled an average of 344 students, and independent virtual schools (not affiliated with an EMO) enrolled an average of 320 students.

Among virtual schools, far more district-operated schools achieved acceptable state school performance ratings (56.7% acceptable) than charter-operated schools (40.8%). More schools without EMO involvement (i.e., independent) performed well (59.3% acceptable ratings), compared with 50% acceptable ratings for schools operated by nonprofit EMOs, and only 29.8% acceptable ratings for schools operated by for-profit EMOs. The pattern among blended learning schools was similar, with the highest performance by district schools and the lowest performance by the subgroup of schools operated by for-profit EMOs.

Given the overwhelming evidence of poor performance by full-time virtual and blended learning schools, it is recommended that policymakers:

• Slow or stop the growth in the number of virtual and blended schools and the size of their enrollments until the reasons for their relatively poor performance have been identified and addressed.

• Implement measures that require virtual and blended schools to reduce their student-to-teacher ratios.

• Enforce sanctions for virtual and blended schools that perform inadequately.

• Sponsor research on virtual and blended learning “programs” and classroom innovations within traditional public schools and districts.

There is much more in the report that deserves your attention, especially regarding the current infatuation with blended learning.

I suggest you read it for yourself.


Here is the citation:

Molnar, A. (Ed.), Miron, G., Elgeberi, N., Barbour, M.K., Huerta, L., Shafer, S.R., & Rice, J.K. (2019). Virtual Schools in the U.S. 2019. Boulder, CO: National Education Policy Center. Retrieved [date] from http://nepc.colorado.edu/publication/virtual-schools-annual-2019


Jersey Jazzman, aka Mark Weber, is a teacher in New Jersey who took the time to earn a Ph.D. so he could decipher the studies and research used to make decisions about schools.

In this post, he explains to the media how to cover charter schools.

He noticed that Senator Bernie Sanders’ proposal to ban for-profit charter schools unleashed a wave of commentary about charter schools. Many people have no idea what they are. They don’t know that they are privately managed but publicly funded and that most charter schools operate with little or no oversight. It’s a sweet deal to get public money with no one checking the books.

He writes:

I can’t say I’m surprised, but it looks like Bernie Sanders’ latest policy speech on education – where, among other things, he calls for a ban on for-profit charter schools and other charter school reforms — has generated a lot of fair to poor journalism that purports to explain what charters are and how they perform.

Predictably, the worst of the bunch is from Jon Chait, who cheerleads for charters often without adhering to basic standards of transparency. Chait’s latest piece is so overblown that even a casual reader with no background in charter schools will recognize it for the screed that it is, so I won’t waste time rebutting it.

There are, however, plenty of other pieces about Sanders’ proposals that take a much more measured tone… and yet still get some charter school basics wrong. I’m going to hold off on citing specific examples and instead hope (against hope) that maybe I can get through to some of the journalists who want to get the story of charters right.

The first warning is not to accept the claims that CREDO makes, especially not its assertion that it can measure “days of learning.” It can’t.

Second point: don’t accept the assertion that “charter schools are public schools.” They get public money, but not everything that gets public money is “public.” Like Harvard and Boeing.

Third point: do charter schools strip funding from public schools? JJ is not sure, but Gordon Lafer is. See his study here on the fiscal drain that charters impose on public schools.

Fourth point: the “best” charter sectors get their gains through increased resources, peer effects, and a test-prep curriculum — and not through “charteriness.”

Read the rest for yourself. JJ is always worth reading.


Matt Barnum reports that new research from Louisiana shows that the negative effects of vouchers persist over time. 

There used to be a belief that the negative effects were temporary, but apparently the voucher students do not bounce back, as voucher proponents hoped.

New research on a closely watched school voucher program finds that it hurts students’ math test scores — and that those scores don’t bounce back, even years later.

That’s the grim conclusion of the latest study, released Tuesday, looking at Louisiana students who used a voucher to attend a private school. It echoes research out of Indiana, Ohio, and Washington, D.C. showing that vouchers reduce students’ math test scores and keep them down for two years or more.

Together, they rebut some initial research suggesting that the declines in test scores would be short-lived, diminishing a common talking point for voucher proponents.

“While the early research was somewhat mixed … it is striking how consistent these recent results are,” said Joe Waddington, a University of Kentucky professor who has studied Indiana’s voucher program. “We’ve started to see persistent negative effects of receiving a voucher on student math achievement.”

The state’s voucher program also didn’t improve students’ chances of enrolling in college.

The results may influence local and national debates. Secretary of Education Betsy DeVos is working to drum up support for a proposed federal tax credit program that could help parents pay private-school tuition, and Tennessee lawmakers are debating whether to create a voucher-like program of their own.

If history is a guide, Betsy DeVos will dismiss the research, as will Tennessee Governor Bill Lee. They want vouchers regardless of their impact on students.


Peter Greene writes here about an exceptionally silly “study” that Betsy DeVos is using to drum up fading public support for charter schools.

The study, by choice advocates Patrick Wolf and Corey DeAngelis, attempts to measure “success” by return on investment, converting taxpayer dollars into NAEP scores.

Sounds crazy, no?

Greene writes:

This particular paper comes out of something called the School Choice Demonstration Project, which studies the effects of school choice.

A Good Investment: The Updated Productivity of Public Charter Schools in Eight U.S. Cities pretends to measure school productivity, focusing on eight cities: Houston, San Antonio, New York City, Washington, DC, Atlanta, Indianapolis, Boston, and Denver. In fact, the paper actually uses the corporate term ROI — return on investment.

We could dig down to the details here, look at details of methodology, break down the eight cities, examine the grade levels represented, consider their use of Investopedia for a definition of ROI. But that’s not really necessary, because they use two methods for computing ROI– one is rather ridiculous, and the other is exceptionally ridiculous.

The one thing you can say for this method of computing ROI is that it’s simple. Here’s the formula, plucked directly from their paper so that you won’t think I’m making up crazy shit:

Cost Effectiveness = Achievement Scores ÷ Per-Pupil Revenue

The achievement scores here are the results from the NAEP reading and math, and I suppose we could say that’s better than the PARCC or state-bought Big Standardized Test, but it really doesn’t matter because the whole idea is nuts.

It assumes that the only return we should look for on an investment in schools is an NAEP score. Is that a good assumption? When someone says, “I want my education tax dollars to be well spent,” do we understand them to mean that they want to see high standardized test scores — and nothing else? Not even a measure of students improving on that test. The paper literally breaks this down into NAEP points per $1,000. Is that the whole point of a school?
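To see how the formula behaves, here is a minimal sketch with invented numbers. Nothing below comes from the Wolf and DeAngelis paper; the scores and revenue figures are hypothetical:

```python
# Hypothetical illustration of the paper's "cost effectiveness" formula:
# NAEP points per $1,000 of per-pupil revenue. All figures are invented.

def cost_effectiveness(naep_score: float, per_pupil_revenue: float) -> float:
    """Return NAEP points per $1,000 of per-pupil revenue."""
    return naep_score / (per_pupil_revenue / 1000)

# Two hypothetical schools with identical test scores but different funding.
print(round(cost_effectiveness(250, 15000), 2))  # 16.67 points per $1,000
print(round(cost_effectiveness(250, 10000), 2))  # 25.0 points per $1,000
```

By this metric, the lower-funded school looks 50 percent more “productive” even though its students posted exactly the same score, which is the absurdity Greene is pointing at.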

It gets worse, and Greene explains why.

I am reminded of a fad in the 1920s to compute the dollar value of different subjects. The curriculum experts of the day calculated that teaching Latin was a total waste of time because it was expensive and produced no return on investment.

The whole thing called “education” got left out of the calculus.


I recently posted Leonie Haimson’s critique of the program called “Teach to One.”

John Pane, one of the authors of the RAND evaluation, wrote to say that he did not agree with Leonie’s characterization. I told him that I would publish his letter and Leonie’s response.

He wrote this letter:

On March 4, 2018 you published this blog entry, “Leonie Haimson: Reality Vs. Hype in “Teach to One” Program,” excerpting from Leonie Haimson’s blog. Your excerpt included this paragraph about my own research (with colleagues) and my public statements:

“The most recent RAND analysis of schools that used personalized learning programs that received funding through the Next Generation Learning initiative, which have included both Summit and Teach to One, concluded there were small and mostly insignificant gains in achievement at these schools, and their students were more likely to feel alienated and unsafe compared to matched students at similar schools. The overall results caused John Pane, the lead RAND researcher, to say to Ed Week that ‘the evidence base [for these schools] is very weak at this point.’”

This paragraph by Haimson has numerous false and misleading statements. Here I summarize my critique, excerpting the original paragraph:

“The most recent RAND analysis of schools that used personalized learning programs that received funding through the Next Generation Learning initiative, which have included both Summit and Teach to One, …”

None of the schools in our sample reported using Teach to One (TtO) among the 194 education technology products they mentioned. Our sample includes schools in the Next Generation Learning Challenges (NGLC) wave IIIa and wave IV programs, a subset of all the NGLC initiatives. Haimson points to blog posts by NGLC about Summit and TtO, but that does not mean our study included them.

“…included both Summit and Teach to One, concluded there were small and mostly insignificant gains in achievement at these schools, …”

Our conclusions were about the whole sample of schools, and did not single out any particular schools as is implied by juxtaposing “Summit and Teach to One” with “these schools.” Our concluding remarks related to achievement did not say “small and mostly insignificant.” What we actually said was, “Students in NGLC schools experienced positive achievement effects in mathematics and reading, although the effects were only statistically significant in mathematics. On average, students overcame gaps relative to national norms after two years in NGLC schools. Students at all levels of achievement relative to grade-level norms appeared to benefit. Results varied widely across schools and appeared strongest in the middle grades.” 

“… and their students were more likely to feel alienated and unsafe compared to matched students at similar schools”

This was not a conclusion of our report. In a supplemental appendix we did compare results from our sample (again, the whole sample of schools in the study, none of which reported using TtO) to a national sample. Our method did not use “matched students at similar schools.” Given data limitations, we were able to make the student samples similar (through weighting) only on grade level, gender, and broad classifications of geographic locale (e.g., urban vs. suburban). Even after weighting, we suspect the high-minority, high-poverty schools in the NGLC sample may be located in more distressed communities than the national survey counterparts, and that this could be related to feelings of safety. Indeed, fewer NGLC students (78 vs. 82 percent) agreed that “I feel safe in this school,” but this small difference cannot be attributed to personalized learning and has no direct relevance to TtO. None of our survey items or reports used the word “alienated.” Possibly related, 77 percent of NGLC students agreed that “at least one adult in this school knows me well” and “I feel good about being in this school,” 76 percent agreed that “I care about this school” and 72 percent agreed “I am an important part of my school community.”

“The overall results caused John Pane, the lead RAND researcher, to say to Ed Week that ‘the evidence base [for these schools] is very weak at this point.’”

This EdWeek article clearly states that it is about “what K-12 educators and policymakers need to know about the research on personalized learning” broadly. Quoting accurately, “RAND has found some positive results, including modest achievement gains in some of the Gates-funded personalized-learning schools. But overall, ‘the evidence base is very weak at this point,’ Pane said.” There is no justification for Haimson to insert “[for these schools]” into my quoted remark. It appears as though Haimson is attempting to give a misleading impression that I was specifically talking about Summit and TtO rather than the entire body of personalized learning research.

I find it very unfortunate that you accepted Haimson’s claims without fact checking, and increased their visibility and attention through your own platform.

I am requesting that you please issue a correction in a way that previous readers of your March 4 post will likely notice. You may include this letter if you wish.

With regards,

John Pane

RAND Corporation

I forwarded John Pane’s letter to Leonie Haimson. She responded as follows:

Hi John – the RAND report was only a small part of my post on TTO, which is here; I counted one short paragraph out of nearly one hundred.

Nevertheless, Diane: Please go ahead and print John’s letter in full, and I will link to the letter in my blog. It is unfortunate that the specific online program names were left out of the RAND evaluation. I had wrongly assumed that TTO was included, since it is one of the most heavily funded and promoted of the Next Generation Learning Challenge “personalized learning” programs, by Gates and others.

I would also like to point out that the following survey stats John includes from the NGLC schools omit the results from the comparison schools, as cited in the appendix of the RAND report:

Possibly related, 77 percent of NGLC students agreed that “at least one adult in this school knows me well” [compared to 86% of the national sample] and “I feel good about being in this school,” [vs. 89% of the national sample] 76 percent agreed that “I care about this school” [vs. 87% of the national sample] and 72 percent agreed “I am an important part of my school community.” [compared to 79% of the national sample.]

[Bar graph comparing NGLC and national sample survey responses]

In addition, the students at the personalized learning schools were more likely to say that “their classes do not keep their attention, and they get bored” compared to the national sample (30% vs. 23%). Only 35% of students at the NGLC schools said that “learning is enjoyable,” compared to 45% of the national sample. With results like these, it is very difficult to see support for the claim that students at personalized learning schools are more engaged in their coursework, feel more connected, and have more agency, as is often claimed.

Now we know that TTO students aren’t included in these surveys but there is no reason to assume that the responses would be significantly different until and unless New Classrooms releases their own survey results.  And we do have the results from Mountain View school, which showed a 413% increase in the number of students who said they hated math as a result.

Nor does John’s response address the larger question of how difficult it is to use MAP scores to evaluate these programs, especially ones that aren’t disaggregated by race or economic status, which also calls into question the conclusions of the MarGrady report. One might expect that with all the data NWEA has, they would have done that by now; any thoughts on that, John?

Finally, it is extremely unfortunate that Gates, Zuckerberg, etc. haven’t bothered to commission any truly randomized small-scale evaluation of Summit, TTO, or any of the other PL programs they have so heavily funded and promoted before expanding their reach and subjecting hundreds of thousands of students to them. Summit has rejected any independent evaluation of its results. One can only speculate why.


Thanks,

Leonie Haimson