Archives for category: Research


Researchers at Teachers College, Columbia University, are conducting research about the Opt Out Movement.

Please consider participating in their survey if you are interested in the efforts of parents to keep their children from taking state tests as a protest against the overuse and misuse of standardized testing.

The survey was designed by two professors: Oren Pizmony-Levy and Nancy Green Saraisky.

For further information, you can contact:

Oren Pizmony-Levy, PhD
Assistant Professor of International and Comparative Education
Department of International and Transcultural Studies
Teachers College, Columbia University
525 West 120th Street
370 Grace Dodge Hall
Box 55
New York, NY 10027

Tel (office): 212-678-3180

Email: pizmony-levy@tc.columbia.edu
Website: http://orenpizmonylevy.com/


Since the passage of No Child Left Behind, test scores have been defined by federal law as the goal of education. Schools and teachers that “produce” higher scores are good; schools and teachers that don’t are “bad” and likely to suffer termination. The assumption is that higher test scores produce better life outcomes, and that is that.

In late 2016, Jay P. Greene produced a short and brilliant paper that challenged that assumption. I have fallen into the habit of asking myself whether the young people who are superstars in many non-academic fields had high scores and guessing they did not. Fortunately, it is only in schools where students get branded with numbers like Jean Valjean in “Les Misérables.” Outside school, they can dazzle the world as athletes, musicians, inventors, or mechanics, without a brand.

Greene writes:

“If increasing test scores is a good indicator of improving later life outcomes, we should see roughly the same direction and magnitude in changes of scores and later outcomes in most rigorously identified studies. We do not. I’m not saying we never see a connection between changing test scores and changing later life outcomes (e.g. Chetty, et al); I’m just saying that we do not regularly see that relationship. For an indicator to be reliable, it should yield accurate predictions nearly all, or at least most, of the time.

“To illustrate the un-reliability of test score changes, I’m going to focus on rigorously identified research on school choice programs where we have later life outcomes. We could find plenty of examples of disconnect from other policy interventions, such as pre-school programs, but I am focusing on school choice because I know this literature best. The fact that we can find a disconnect between test score changes and later life outcomes in any literature, let alone in several, should undermine our confidence in test scores as a reliable indicator.

“I should also emphasize that by looking at rigorous research I am rigging things in favor of test scores. If we explored the most common use of test scores — examining the level of proficiency — there are no credible researchers who believe that is a reliable indicator of school or program quality. Even measures of growth in test scores or VAM are not rigorously identified indicators of school or program quality as they do not reveal what the growth would have been in the absence of that school or program. So, I think almost every credible researcher would agree that the vast majority of ways in which test scores are used by policymakers, regulators, portfolio managers, foundation officials, and other policy elites cannot be reliable indicators of the ability of schools or programs to improve later life outcomes.”

I would add that Chetty et al did not establish a causal relationship between teacher VAM and later life outcomes, only a correlation. The claim that my fourth grade teacher “caused” me not to become pregnant a decade later strains credulity. At least mine.

Greene’s essay includes an excellent reading list of studies showing high test scores but no change in high school graduation rate or college attendance.

The Milwaukee and D.C. voucher studies that show a gain in high school graduation rate should note the high attrition rate from these programs, which inflates the graduation rate.

Imagine saying to a governor: “I have a policy intervention that will raise test scores but will have little or no effect on life outcomes.” Would they jump at the offer? Based on the political activity of the past 15 years, the answer is yes.

Overall, however, this is a seminal essay from a prominent school-choice scholar.

In his budget proposal for 2019, Trump will ask for dramatic cuts to research on clean energy.

He prefers fossil fuels. He likes nuclear plants too.

Nothing beats “clean coal.”

http://wapo.st/2DQ6FJU


Mark Weber, aka Jersey Jazzman, worked with Bruce Baker at Rutgers University to review the progress of the “reforms” (aka privatization and disruption) in Newark. This post is the first in a series that will summarize their findings.


The National Education Policy Center published a lengthy report written by Dr. Bruce Baker and myself that looks closely at school “reform” in Newark. I wrote a short piece about our report at NJ Spotlight that summarizes our findings. We’ve also got a deep dive into the data for our report at the NJ Education Policy website.

You might be wondering why anyone outside of New Jersey, let alone Newark, should care about what we found. Let me give you a little background before I try to answer that question…

In 2010, Mark Zuckerberg, the CEO and founder of Facebook, went on The Oprah Winfrey Show and announced that he was giving $100 million in a challenge grant toward the improvement of Newark’s schools. Within the next couple of years, Newark had a new superintendent, Cami Anderson. Anderson attempted to implement a series of “reforms” that were supposed to improve student achievement within the city’s entire publicly financed school system.

In the time following the Zuckerberg donation, Newark has often been cited by “reformers” as a proof point. It has a large and growing charter school sector, it implemented a teacher contract with merit pay, it has a universal enrollment system, it “renewed” public district schools by churning school leadership, it implemented Common Core early (allegedly), and so on.

So when research was released this fall that purported to show that students had made “educationally meaningful improvements” in student outcomes, “reformers” both in and out of New Jersey saw it as a vindication. Charter schools are not only good — they don’t harm public schools, because they “do more with less.” Disruption in urban schools is good, because the intractable bureaucracies in these districts need to be shredded. Teachers unions are impeding student learning because we don’t reward the best teachers and get rid of the worst…

And so on. If Newark’s student outcomes have improved, it must be because these and other received truths of the “reformers” are correct.

But what if the data — including the research recently cited by Newark’s “reformers” — don’t show Newark has improved? What if other factors account for charter school “successes”? What if the test score gains in the district, relative to other, similar districts, aren’t unique or educationally meaningful? What if all the “reforms” supposedly implemented in Newark weren’t actually put into place? What if the chaos and strife that have dogged Newark’s schools during this “reform” period haven’t been worth it?

What if Newark, NJ isn’t an example of “reform” leading to success, but is instead a cautionary tale?

These are the questions we set out to tackle. And in the next series of posts here, I am going to lay out, in great detail, exactly what we found, and explain what the Newark “reform” experiment is actually telling us about the future of American education.

Bruce Baker and Mark Weber have assembled a full report about charters in Newark.

There are successes and failures and much in-between.

Before accepting the assurances of reformers about Newark, read this.

Mercedes Schneider discusses a study that was reported in Education Week. The study concluded that teachers from alternative certification programs such as Teach for America get students to produce test scores that are “marginally” better than those of traditionally trained teachers.

Mercedes thought this was a dumb study, although she didn’t use that word. Producing higher scores, even “marginally” higher scores, is not a good measure of teaching. Getting higher scores from students is not, she writes, the same as providing a high-quality, well-rounded education.

“The fact that the JCFS meta-analysis finds that teachers trained via alt cert programs have students with slightly higher test scores than those trained in traditional teacher prep programs does not surprise me.

“What does surprise me is that the JCFS researchers not only fail to question the validity of measuring teacher job performance using student tests; they promote the idea as a means to gather useful data.

“It also surprises me that the JCFS researchers do not question the degree to which student test scores represent authentic learning. They do comment on “student achievement in the U.S.” as “still below average, in comparison to the rest of the world,” but they do not carry that thought further and question how it is that the US continues to be a major world power despite those “still below average” international test scores….

“There is a reason that no national testing company would dare include with its student achievement tests a statement supporting the usage of these tests to gauge teacher effectiveness: Measuring teachers using student tests is not a valid use of such tests, and no testing company wants to be held liable for this invalid practice.

“Certainly the pressure is on traditional teacher training programs to focus on the outcome of teachers-in-training “raising” student test scores and to use those test score outcomes as purported evidence that the teacher-in-training is “effective.” May they never reach the ultimate cheapening of pedagogy and reduce teacher education to nothing more than test-score-raising.

“Are teacher alt cert programs little more than spindly, test-score-raising drive-thrus lacking in lasting pedagogical substance? There’s an issue worthy of research investigation.

“What price will America pay for its shortsighted, shallow love of high test scores? Also worthy of investigation– more so than that of the ever-increasing test score.”

A new study of teacher evaluation finds that the more that value-added test scores count, the lower teachers are rated.

“Among the most important findings from this new study: When value-added scores are incorporated into evaluations, the ratings tend to go down. And the more weight a system puts on value-added scoring, the lower the scores are likely to be, the study showed.

“That’s because value-added scores tend to be relative measures, explained Kraft. Value added is generally “designed to just compare you to your peers,” he said. “Everybody can’t be good with value added.”

“On the other hand, with classroom observation scores, everyone can be excellent, he said.

“And that leads to a huge caveat in all of this: As it stands, despite the variation in systems, almost all teachers across the country continue to get positive ratings.

“That’s largely because observation scores make up the meat of most teacher evaluation systems, said Kraft. And as we know from previous research, principals tend to rate their teachers highly.”

New Zealand is one of the few nations, perhaps the only one, to abandon national standards.

As Professor Martin Thrupp explains here, scholars and researchers helped to expose the flaws of national standards.

The national standards were driven by political, not educational, purposes. The ruling party pushed them and couldn’t stop pushing them, ignoring all criticism.

Thrupp’s book, co-edited with Bob Lingard, Meg Maguire, and David Hursh, “The Search for Better Educational Standards: A Cautionary Tale” teaches us that concerted efforts by educators, scholars, and parents can roll back ruinous education policy.

He writes:

“The National-led Government had become fully invested in the National Standards policy. When it was first announced in 2007, it was National’s big idea for education – the ‘cornerstone’ of its education policy. Over the 10 years that followed, the Government had dismissed all criticisms. Any late turning back would be a sign of weakness, and instead the National party wanted to plough on with this truly awful project that had already become a world-class example of how not to make education policy….

“Despite the National-led Government’s adherence to the National Standards, researchers and academics certainly pushed back against the policy…In fact, researchers and academics did a great deal in this space! A particular highlight for me was the 2012 open letter signed by over 100 education academics against the public release of the National Standards data. But there were countless other instances of academics and researchers opposing the National Standards, either publicly or more behind the scenes. Opinion pieces, articles, TV debates, radio, public meetings, meetings behind closed doors – and all the rest of it. Chapter 8 of A Cautionary Tale, about the politics of research, gives numerous examples.

“A number of us also did empirical research that helped to explain how the National Standards were a problem (see A Cautionary Tale, especially chapters 3, 5 and 7). And, of course, New Zealand researchers are part of international networks that are working on the same concerns about high-stakes assessment in other countries (see A Cautionary Tale, especially chapters 2 and 10). Note to Cullen: without doubt, some of the best work in this area is coming from Australian academics.

“It is true that some researchers and academics chose to support the National-led Government’s National Standards policies (A Cautionary Tale, chapter 8). This happened for various reasons that may have included the researchers’ educational views, their political beliefs, the political pressures that were upon them or their organisations, and the advantages that came with supporting the policy. It may have also involved a judgement that it was better to be ‘inside the tent’ and have influence than be on the outside.

“But this range of viewpoints among researchers and academics is no different than was seen within the teaching profession and amongst principals, where National Standards also had supporters. Indeed, a central problem that the new Labour-led Government will have to grapple with, having removed the National Standards policy, is doing away with the data-driven disposition amongst teachers and principals that grew along with the policy under the previous Government.

“Looking ahead

“Even though most teachers and principals did not like the impact of the National Standards policy, after a decade of its influence New Zealand primary schools are now marinated in the thinking, language, and expectations of the National Standards. This has also had wider impacts, for instance on early childhood education. It will all take a little while to undo.

“It’s great, though, that New Zealand primary schools will now be able to spend less time shoring up judgements about children – judgements that have often been pointless or harmful – and instead spend more time making learning relevant and interesting for each child. Removing National Standards should also allow teachers to be less burdened, contributing to making teaching a more attractive career again.”

Jack Hassard wrote about the use of social media to spread fake news. Facebook, Twitter, and Google have become facilitators of fake news.

We know it is there. What can we do about it?

This is a very good analysis by a group of scholars at the Stanford History Education Group about civic reasoning, which explains how to avoid being hoaxed by fake news.

The questions that must always be present in any discussion are: How do you know? Who said so? What is the source? How reliable is the source? Can you confirm this information elsewhere? What counts as reliable evidence?

Many people use Wikipedia as a reliable source, but Wikipedia is crowdsourced and is not authoritative. I recall giving a lecture in North Carolina some years back that was named in honor of a distinguished senator from that state. The Wikipedia entry said he was a Communist, as were members of his staff. This was obviously the work of a troll. But it might not be obvious to a student researching a paper.

They write:

“Fake news is certainly a problem. Sadly, however, it’s not our biggest. Fact-checking organizations like Snopes and PolitiFact can help us detect canards invented by enterprising Macedonian teenagers,3 but the Internet is filled with content that defies labels like “fake” or “real.” Determining who’s behind information and whether it’s worthy of our trust is more complex than a true/false dichotomy.

“For every social issue, there are websites that blast half-true headlines, manipulate data, and advance partisan agendas. Some of these sites are transparent about who runs them and whom they represent. Others conceal their backing, portraying themselves as grassroots efforts when, in reality, they’re front groups for commercial or political interests. This doesn’t necessarily mean their information is false. But citizens trying to make decisions about, say, genetically modified foods should know whether a biotechnology company is behind the information they’re reading. Understanding where information comes from and who’s responsible for it are essential in making judgments of credibility.

“The Internet dominates young people’s lives. According to one study, teenagers spend nearly nine hours a day online.4 With optimism, trepidation, and, at times, annoyance, we’ve witnessed young people’s digital dexterity and astonishing screen stamina. Today’s students are more likely to learn about the world through social media than through traditional sources like print newspapers.5 It’s critical that students know how to evaluate the content that flashes on their screens.

“Unfortunately, our research at the Stanford History Education Group demonstrates they don’t.* Between January 2015 and June 2016, we administered 56 tasks to students across 12 states. (To see sample items, go to http://sheg.stanford.edu.) We collected and analyzed 7,804 student responses. Our sites for field-testing included middle and high schools in inner-city Los Angeles and suburban schools outside of Minneapolis. We also administered tasks to college-level students at six different universities that ranged from Stanford University, a school that rejects 94 percent of its applicants, to large state universities that admit the majority of students who apply.

“When thousands of students respond to dozens of tasks, we can expect many variations. That was certainly the case in our experience. However, at each level—middle school, high school, and college—these variations paled in comparison to a stunning and dismaying consistency. Overall, young people’s ability to reason about information on the Internet can be summed up in two words: needs improvement.

“Our “digital natives”† may be able to flit between Facebook and Twitter while simultaneously uploading a selfie to Instagram and texting a friend. But when it comes to evaluating information that flows through social media channels, they’re easily duped. Our exercises were not designed to assign letter grades or make hairsplitting distinctions between “good” and “better.” Rather, at each level, we sought to establish a reasonable bar that was within reach of middle school, high school, or college students. At each level, students fell far below the bar.”

They offer specific examples of hoaxes to show how easily people are duped.

They conclude:

“The senior fact checker at a national publication told us what she tells her staff: “The greatest enemy of fact checking is hubris”—that is, having excessive trust in one’s ability to accurately pass judgment on an unfamiliar website. Even on seemingly innocuous topics, the fact checker says to herself, “This seems official; it may be or may not be. I’d better check.”

“The strategies we recommend here are ways to fend off hubris. They remind us that our eyes deceive, and that we, too, can fall prey to professional-looking graphics, strings of academic references, and the allure of “.org” domains. Our approach does not turn students into cynics. It does the opposite: it provides them with a dose of humility. It helps them understand that they are fallible.

“The web is a sophisticated place, and all of us are susceptible to being taken in. Like hikers using a compass to make their way through the wilderness, we need a few powerful and flexible strategies for getting our bearings, gaining a sense of where we’ve landed, and deciding how to move forward through treacherous online terrain. Rather than having students slog through strings of questions about easily manipulated features, we should be teaching them that the World Wide Web is, in the words of web-literacy expert Mike Caulfield, “a web, and the way to establish authority and truth on the web is to use the web-like properties of it.”13 This is what professional fact checkers do.

“It’s what we should be teaching our students to do as well.”

Bruce Baker at Rutgers University is one of the most eminent scholars of school finance in the nation.

In this post, he remembers the days when states insisted upon rigorous research to understand funding equity and inequity.

That kind of research, on which he cut his teeth, died, and he knows why.

“These were the very types of analyses needed to inform state school finance policies and to advance the art and science of evaluating educational reforms for their potential to improve equity, productivity and efficiency. But these efforts largely disappeared over the next decade. More disconcerting, these efforts were replaced by far less rigorous, often purely speculative policy papers, free of any substantive empirical analysis and devoid of any conceptual frameworks.

“This shift was largely brought about under the leadership of Arne Duncan. Kevin Welner of the University of Colorado and I explained first in a report for the National Education Policy Center and subsequently in shorter form in the journal Educational Researcher, that Secretary Duncan had begun to give lip service to improving educational productivity and efficiency, but accompanied that lip service with wholly insufficient resources. Kevin Welner and I explained that:

“the materials provided on the Department’s website as guiding resources present poorly supported policy advisement. The materials listed and recommendations expressed within those materials repeatedly fail to provide substantive analyses of the cost effectiveness or efficiency of public schools, of practices within public schools, of broader policies pertaining to public schools, or of resource allocation strategies.” [ix]

“Among other issues, the materials provided on the web site failed to acknowledge even the existence of the relevant conceptual frameworks and rigorous empirical methods which had risen to prominence in state supported and federally documented research in the years prior.”

John King, then the state commissioner in New York, quickly followed Duncan’s lead. The top researchers sat in the audience while Duncan’s favorites presented misleading graphs.

Thus did the field die.