Archives for category: National Education Policy Center

I posted a summary of Professor Francesca López’s review of a charter school meta-analysis published by the Center on Reinventing Public Education at the University of Washington. In my introduction, I referred to CRPE as a “leading proponent of charter schools.” Robin Lake wrote me to challenge that characterization. I associate CRPE, which receives extensive funding from the Gates Foundation, with the idea of portfolio districts, in which struggling public schools are replaced by a portfolio of privately managed schools. I invited Robin to send me any CRPE publications critical of charter schools, and I will post about them when I receive them.

Dr. López writes:

Recently, I wrote a think tank review for NEPC of a CRPE report that was summarized on Diane Ravitch’s blog. I was contacted by Adam Gish, an English teacher at Garfield High School in Seattle, Washington, who had read the blog post and then asked CRPE’s Robin Lake for her opinion of the NEPC review. Mr. Gish sent me the exchange containing Dr. Lake’s response, and he gave me permission to publish it in the hope that a public exchange could prompt a larger dialogue.

Here’s what Dr. Lake wrote:

“I patently disagree with the review. It seems to present statements out of important context and ignores what the authors say. For example the authors say that the time trend is positive but not statistically significant but the review cites the authors as having called the trend significant. That’s either a misunderstanding of basic statistical analysis or an intentional misrepresentation. There are numerous other inaccuracies and misinterpretations.

Julian Betts is one of the most cautious, rigorous, and respected analysts I know. That’s why we chose him to do this review. His analysis made minimal and evenhanded conclusions and was peer reviewed by one of the best statisticians in the country.

I really don’t see any legitimate critique here.

Hope this helps you know my view.

Best,

Robin”

Mr. Gish, in his note to me, asked “what [my] rebuttal would be,” so I would like to offer it here. It is Dr. Lake who is incorrect; nowhere in the NEPC review did I “cite the authors as having called the trend significant.” What I do point out is the authors’ claim that there is a positive trend, a misleading claim since they also (as I explain on pages 3 and 4 of the review) reported non-significant findings. To use Dr. Lake’s phrasing, this is “basic statistical analysis”: one cannot call a trend “positive” or “negative” when it is not statistically significant. The point of the trend analysis was to determine whether the trend was positive or negative. Because the estimate was not significant, the analysis could not make that determination, and calling the trend “not significant” while at the same time calling it “positive” is inaccurate and misleading.

Dr. Lake did not offer sufficient detail for a more elaborate rebuttal, but I welcome a discussion regarding what her perceived “numerous other inaccuracies and misinterpretations” might be.

Francesca López
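
To make the statistical point concrete, here is a minimal sketch in Python. The years and effect sizes below are made-up numbers, not figures from the Betts and Tang report; the point is only that a slope estimate can be positive while the data remain entirely consistent with no trend at all.

```python
# Minimal sketch with hypothetical numbers: a positive slope estimate
# is not evidence of a "positive trend" unless it is statistically
# significant.
import numpy as np
from scipy import stats

years = np.array([2001, 2003, 2005, 2007, 2009, 2011])
effects = np.array([0.02, -0.01, 0.04, 0.01, 0.05, 0.03])  # hypothetical effect sizes

result = stats.linregress(years, effects)
print(f"slope = {result.slope:.4f}, p-value = {result.pvalue:.3f}")
# Here the slope is positive but the p-value is well above 0.05, so the
# data cannot distinguish this "trend" from a flat or negative one; that
# is the objection to calling a non-significant trend positive.
```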

The National Education Policy Center produces a valuable series reviewing think tank reports. In this latest one, Professor Francesca López of the University of Arizona takes a close look at a meta-analysis of charter school studies published by the Center on Reinventing Public Education at the University of Washington. It is useful to know that the Center is a leading proponent of charter schools. What would be truly shocking would be if it published a review critical of charter schools.

Here is a summary of Professor Lopez’s findings, as well as links to the original report and her review.

“The report was published in August by the Center on Reinventing Public Education at the University of Washington. The report, by Julian R. Betts and Y. Emily Tang, draws on data from 52 studies to conclude that charters benefited students, particularly in math.

“This conclusion is overstated,” writes López in her review. The actual results, she points out, were not positive in reading, not significant for high school math, and yielded only very small effect sizes for elementary and middle school math.

“The reviewer also explains that the authors wrongly equate studies of students chosen for charter schools in a lottery with studies that rely on random assignment. Because schools that use lotteries do so because they’re particularly popular, those studies aren’t appropriate for making broad comparisons between charter and traditional public schools, López writes.

“The review identifies other flaws as well, including the report’s assertion of a positive trend in the effects of charter schools, even though the data show no change in those effects; its exaggeration of the magnitude of some effects; and its claim of positive effects even when they are not statistically significant. Taken together, she says, those flaws “render the report of little value for informing policy and practice.”

“The report does a solid job describing the methodological limitations of the studies reviewed, then seemingly forgets those limits in the analysis,” López concludes.

“Find Francesca López’s review on the NEPC website at:
http://nepc.colorado.edu/thinktank/review-meta-analysis-effect-charter.

“Find A Meta-Analysis of the Literature on the Effect of Charter Schools on Student Achievement, by Julian R. Betts and Y. Emily Tang and published by the Center on Reinventing Public Education, on the web at:
http://www.crpe.org/publications/meta-analysis-literature-effect-charter-schools-student-achievement.”

A new report from the National Education Policy Center reviews the “wait lists” that charter advocacy groups regularly publicize and finds them to be vastly inflated.

Charter advocacy groups claim that nearly one million students are wait-listed for admission, but they themselves acknowledge that the actual number may be about 400,000. NEPC authors Kevin Welner and Gary Miron say that even this number is an overstatement. Many students apply to multiple charter schools and get double or triple counted. Sometimes, after students have enrolled in a charter school or a public school, their names remain on the “wait lists” of other charters.
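
A small illustration of the double-counting problem, with hypothetical students and schools (none of these figures come from the NEPC report):

```python
# Minimal sketch: summing each charter school's wait list overcounts
# students who apply to several schools at once. All data are made up.
per_school_wait_lists = {
    "Charter A": ["ana", "ben", "cruz"],
    "Charter B": ["ben", "cruz", "dina"],
    "Charter C": ["cruz", "eli"],
}

total_entries = sum(len(names) for names in per_school_wait_lists.values())
unique_students = len(set().union(*per_school_wait_lists.values()))

print(f"wait-list entries: {total_entries}")    # 8 entries claimed
print(f"distinct students: {unique_students}")  # only 5 actual students
```

The same logic, at scale, is how one million “entries” can shrink toward 400,000 students once duplicates are removed.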

They enumerate other reasons to doubt the “wait list” claims. Essentially, the claims are marketing devices, intended to persuade legislators of a huge, unsatisfied demand for more privately managed schools, funded but not supervised or regulated by the public.

The National Education Policy Center has released the names of the winners of its annual Bunkum Awards, which recognize the most glaring exemplars of bunkum, hokum, spin, and hype in the world of education research. Included in the link is a YouTube video in which the distinguished researcher David Berliner announces the winners. Be it noted that the Brookings Institution, once esteemed for the quality of its research, was awarded the Grand Prize for “the shoddiest educational research of 2013.” Note also that the director of Brookings’ education research program is Grover Whitehurst, former director of the U.S. Department of Education’s Institute of Education Sciences in the administration of President George W. Bush.

Please go to the link to find the links for all the winners of the Bunkum awards.

Here is the NEPC press release.

Bunkum Awards 2013

This marks our eighth year of handing out the Bunkum Awards, recognizing the lowlights in educational research over the past year. As long as the bunk keeps flowing, the awards will keep coming. It’s the least we can do. This year’s deserving awardees join a pantheon of divine purveyors of weak data, shoddy analyses, and overblown recommendations from years past. Congratulations, we guess—to whatever extent congratulations are due.

2013 Bunkum Honorees:

The ‘Do You Believe in Miracles?’ Award
To Public Agenda for Failure Is Not an Option

The “Do You Believe in Miracles?” Award goes to the Public Agenda Foundation for Failure Is Not an Option: How Principals, Teachers, Students and Parents from Ohio’s High-Achieving, High-Poverty Schools Explain Their Success.

A particularly egregious disservice is done by reports designed to convince readers that the need for investment in disadvantaged communities can be ignored. In this increasingly common mythology, students’ substandard outcomes are blamed on teachers and schools that don’t follow the miracle-laden path of exceptional schools.

Early in 2013, we sent a report of this genre out for review. The authors of this report, from Public Agenda, identified nine Ohio schools where “failure is not an option.” The report’s basic claim was that certain school-based policies and programs can by themselves overcome the impact of poverty on student performance. Among the earth-shaking recommendations were: “Engage teachers,” “Leverage a great reputation,” “Be careful about burnout,” and “Celebrate success.”

While these seem like good practices and have indeed been pursued since the time when the report’s authors were in kindergarten, it’s hard to see how they will lead to miracles. Miracles are hard to come by and even harder to sustain. In fact, notwithstanding the report’s title, four of the nine selected schools had poverty rates at the state average and thus were not particularly high-poverty schools.

It may be easy to laugh at the idea that the recommended approaches will somehow overcome the effects of unemployment, bad health care, substandard living conditions and the like, but that idea also outrageously neglects the fundamental social needs and problems of neighborhoods, families and children. The truth that these reports hide is that school failure will almost always prevail in a society that will not invest in disadvantaged communities and the families who live there.

The ‘We’re Pretty Sure We Could Have Done More with $45 Million’ Award
To Gates Foundation for Two Culminating Reports from the MET Project

The “We’re Pretty Sure We Could Have Done More with $45 Million” Award goes to the Gates Foundation and its Measures of Effective Teaching Project.

We think it important to recognize whenever so little is produced at such great cost. The MET researchers gathered a huge database reporting on thousands of teachers in six cities. Part of the study’s purpose was to assess teacher evaluation methods using randomly assigned students. Unfortunately, the students did not remain randomly assigned, and some teachers and students did not even participate. This had deleterious effects on the study: limitations that somehow got overlooked in the infinite retelling and exaggeration of the findings.

When the MET researchers studied the separate and combined effects of teacher observations, value-added test scores, and student surveys, they found correlations so weak that no common attribute or characteristic of teacher quality could be found. Even with 45 million dollars and a crackerjack team of researchers, they could not define an “effective teacher.” In fact, none of the three types of performance measures captured much of the variation in teachers’ impacts on conceptually demanding tests. But that didn’t stop the Gates folks, in a reprise of their 2011 Bunkum-winning ways, from announcing that they’d found a way to measure effective teaching, nor did it deter the federal government from strong-arming states into adopting policies that tie teacher evaluation to measures of students’ growth.

The ‘It’s Just Not Fair to Expect PowerPoints to Be Based on Evidence’ Award
To Achievement School District and Recovery School District for Building the Possible: The Achievement School District’s Presentation in Milwaukee & The Recovery School District’s Presentation in Milwaukee

The “It’s Just Not Fair to Expect PowerPoints to Be Based on Evidence” Award goes to Elliot Smalley of Tennessee’s Achievement School District and Patrick Dobard of the Louisiana Recovery School District.

For years, Jeb Bush’s “Florida Miracle” was unmatched as the most bountiful wellspring of misleading education reform information. But Florida and Jeb have now been overtaken by the Louisiana Recovery School District, which serves the nation as the premier incubator of spurious claims about education reform and, in particular, the performance of “recovery school districts,” takeovers, portfolio districts, and charter schools.

Superintendent Patrick Dobard has taken his suitcase of PowerPoints on the road, touting the Recovery School District’s performance. Nothing has stood in his way. Not the dramatic post-Katrina change in student composition. Not the manipulation of student achievement standards in ways that inflate performance outcomes. Not the unique influx of major funds from foundations, the federal government and billionaires. And not the unaccounted-for effects of a plethora of other relevant factors.

But Dobard is not alone. Elliot Smalley, the chief of staff for the Achievement School District in Memphis, flexed his PowerPoints to show his school district’s “Level 5 Growth.” This certainly sounds impressive—substantially more impressive than, say, Level 3 Growth. But this growth scale is unfortunately not explained in the PowerPoint itself. What we can say is that a particular school picked by Smalley to demonstrate the district’s positive reform effects may not have been a good choice, since the overall reading and math scores at that school went down. Picky researchers might also argue that more than seven schools should be studied for more than two years before shouting “Hosannah!”

As was the case with the Florida Miracle, the Bunkum Award here is not for the policy itself—serious researchers are very interested in understanding the reform processes and outcomes in these places. Rather, the Bunkum is found in the slick sales jobs being perpetrated with only a veneer of evidence and little substance backing the claims made.

The ‘Look Mom! I Gave Myself an ‘A’ on My Report Card!’ Award
Second Runner-up: To StudentsFirst for State Policy Report Card

First Runner-up: To American Legislative Exchange Council for Report Card on American Education: Ranking State K-12 Performance, Progress, and Reform

Grand Prize Winner: To Brookings Institution for The Education Choice and Competition Index

and for School Choice and School Performance in the New York City Public Schools

Back in the old days, when people thought they had a good idea, they would go to the trouble of carefully explaining the notion, pointing to evidence that it worked to accomplish desired goals, demonstrating that it was cost effective, and even applying the scientific method! But that was then, and this is now. And some of the coolest kids have apparently decided to take a bit of a shortcut: They simply announce that all their ideas are fantastic, and then decorate them in a way that suggests an evidence-based judgment. Witness the fact that we are now swimming in an ocean of report cards and grades whereby A’s are reserved for those who adopt the unproven ideas of the cool kids. Those who resist adopting these unproven ideas incur the wrath of the F-grade.

It’s apparently quite a fun little game. The challenge is to create a grading system that reflects the unsubstantiated policy biases of the rater while getting as many people as possible to believe that it’s legitimately based on social science. The author of the rating scheme that dupes the most policy makers wins!

This year, there are three winners of the “Look Mom! I gave myself an ‘A’ on my report card!” award, including our Grand Prize Winner for 2013!

Second Runner-up goes to StudentsFirst, which came up with 24 measures based on the organization’s advocacy for school choice, test-based accountability and governance changes. Unfortunately, the think tank’s “State Policy Report Card” never quite gets around to justifying these measures with research evidence linking them to desired student outcomes. Apparently, they are grounded in revealed truth unseen or unseeable to lesser mortals. Evidence, though, has never been a requirement for these report card grades. And naturally the award-winning states embrace the raters’ subjective values. In a delightful exposé, our reviewers demonstrated that the 50 states received dramatically different grades from a variety of recent report cards: a given state often received a grade of “A” on one group’s list and an “F” on another group’s list.

First Runner-up goes to the American Legislative Exchange Council (ALEC), which almost took the top honors as the most shameful of a bad lot. What makes the ALEC report card particularly laughable is the Emperor’s-clothes claim that its grades are “research-based.” Yes, evidence-based or research-based report card grades would be most welcome, but all ALEC offers is a compilation of cherry-picked contentions from other advocacy think tanks. Thus, what is put forth as scientifically based school choice research is actually selective quotations from school-choice advocacy organizations such as Fordham, Friedman and the Alliance for School Choice. Similarly, the report’s claims about the benefits of alternate teacher certification in attracting higher quality candidates are based on only one paper showing higher value-added scores. Unfortunately, that paper was unpublished—and the report’s reference section led to a dead link.

This year’s Grand Prize Winner is the Brookings Institution and its Brown Center on Education Policy. Brookings has worked hard over the years to build a reputation for sound policy work. But, at least in terms of its education work, it is well on its way to trashing that standing with an onslaught of publications such as its breathtakingly fatuous choice and competition rating scale, which can best be described as political drivel. The scale is based on 13 indicators that favor a deregulated, scaled-up school choice system, and the indicators are devoid of any empirical foundation suggesting these attributes might produce better education.

Since the mere construction of this jaundiced and unsupported scale would leave us all feeling shortchanged, Brookings has also obliged its audience by applying its index to provide an “evaluation” of New York City’s choice system. Where an informative literature review would conventionally be presented, the authors of this NYC report touchingly extol the virtues of school choice. They then claim that “gains” in NYC were due to school choice while presenting absolutely nothing to support this causal claim. And, following from this claim and from their exquisite choice and competition rating scale, they offer the expected recommendations. They almost literally give themselves an “A.”

Seldom do we see such a confluence of self-assured hubris and unsupported assertions. It’s hard to find words that capture this spectacular display except to say, “Congratulations, Brookings! You just won the Bunkum Grand Prize for the shoddiest educational research of 2013.”

The Tweed insider who sends occasional reports to this blog is still anonymous. Still too dangerous to step out in the open. Wouldn’t it be swell if the Department of Education actually had a research department, instead of a hyperactive public relations department?

The insider here reviews the report on charter schools by the NYC Independent Budget Office. The report covered only the early grades, not the middle grades or high school years.

He/she writes:

Charter schools often seem to be at the center of the national debate on education. So much so, in fact, that when Mayor Bill de Blasio promised to review charter school policy in New York City, Eric Cantor, the Republican House Majority Leader in the United States Congress, went on the attack. Cantor claimed that de Blasio would “devastate the growth of education opportunity” and threatened to hold committee hearings about the city’s policies. To say the least, it is unusual for the House Majority Leader of the federal government to threaten a city mayor who has been in office for less than 10 days. What could explain Cantor’s conniptions?

Data in a report released by the New York City Independent Budget Office the day after Cantor made his threats might answer our question. The report revealed that charter schools in New York City manage to get rid of students with lower test scores, special education students, and students who are often absent.

Here are some of the relevant quotes from the report:

“The results are revealing. Among students in charter schools, those who remained in their kindergarten schools through third grade had higher average scale scores in both reading (English Language Arts) and mathematics in third grade compared with those who had left for another New York City public school.”

“Only 20 percent of students classified as requiring special education who started kindergarten in charter schools remained in the same school after three years, with the vast majority transferring to another New York City public school (see Table 5). The corresponding persistence rate for students in nearby traditional public schools is 50 percent.”

“Absenteeism is an even greater predictor of turnover for students in charter schools, compared with its predictive power for students in nearby traditional public schools.”

It appears that Cantor and other self-proclaimed education reformers fear that transparency about the charter sector will reveal that it is an empire of cards. It is evident that, rather than truly providing students with a better education, charter schools as a sector are just playing parlor tricks, getting rid of students who bring down their scores (and sending those students to the local public schools, of course). No Child Left Behind and Race to the Top have managed to turn education into a set of accounting gimmicks.

Another facet of the education debate revealed by the publication of this data is the extent to which spin, rather than fact, is allowed to dominate the media. The report is now being spun by the New York Times as “addressing a common criticism of New York City charter schools, a study… said that in general their students were not, in fact, more likely to transfer out than their counterparts in traditional public schools.”

In fact, the study provides evidence that charter schools in New York City are deliberately selecting which students they keep. They keep, at a higher rate than local public schools, only those students who bring up their test scores. And they kick out students who bring down their test scores. This gets to the core mission of public education. Are schools meant to serve all students or only students who produce good metrics for the schools they attend? The charter school sector and its advocates seem to believe their only moral obligation is to serve students who do school well. Students who don’t do school well are selectively encouraged and badgered to leave or are told they are not a “good fit.” Public schools, on the other hand, still believe that education should be open to all kids and that society has an obligation to provide for every single child.

In a fascinating twist, this report follows a paper released in September by two conservative think tanks claiming that the charter sector in New York City does not discriminate against students with special needs. They alleged that charter schools have fewer special education students because fewer “choose” to apply and because charter schools are less likely to classify students as needing special education services, “preferring instead to use their autonomy to intervene.” This paper was trumpeted by the media and treated as though it were a genuinely objective analysis, despite the fact that its methodology had been thoroughly debunked by the National Education Policy Center. With the data in the Independent Budget Office report, we now have evidence that the charter sector’s preferred intervention is to selectively attrite students who would benefit from additional supports instead of actually trying to succor them. As long as the media accepts the “findings” of clearly biased think tanks funded by conservative groups as relevant to education policy, we will not be able to have an honest national conversation about what works for children.

Where do we go next? The push for greater transparency within the charter sector must continue. Charters must be subject to the same reporting requirements as public schools. Complete data must be made public so that researchers can analyze what is truly going on. At the same time the role of charters in education policy must be minimized. Charters continue to take up bandwidth that should be devoted to discussions about how to make all schools for all kids better and better. It is by now abundantly clear that the charter sector as a whole has little to contribute to this conversation.

The National Education Policy Center urges caution when reading the CREDO study of charter schools in New Orleans. Governor Bobby Jindal has already cited the CREDO study as evidence for the success of privatization.

NEPC says not so fast. In addition to technical issues in the study, the critics make the following observations:

“Even setting aside these concerns, the effect sizes reported for New Orleans—let alone for the state as a whole—are not impressive in terms of absolute magnitude. Differences of 0.12 standard deviations in reading and 0.14 in mathematics indicate that less than one half of one percent of the variation in test scores is explainable by membership in a charter school.

“The study’s methods raise concerns that the findings could easily be misinterpreted to inflate pro-charter conclusions. In context, there’s little to crow about: the results from Louisiana and New Orleans are not much different from the uninspiring national results; the results for the state’s suburban charter schools showed negative gain scores (somewhat less growth in charters than in the comparison schools); and the small positive results reported for New Orleans are confounded by the devastating aftermath of a unique disaster.”
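
The arithmetic behind that “less than one half of one percent” claim is worth seeing. A standard conversion from a standardized mean difference (Cohen’s d, for two groups of roughly equal size) to the share of variance explained is r² = d²/(d² + 4); the excerpt does not show its calculation, so treat the sketch below as an illustration of the usual approximation rather than NEPC’s own work.

```python
# Convert a standardized mean difference (Cohen's d) into the share of
# test-score variance it explains: r^2 = d^2 / (d^2 + 4). This is the
# standard approximation for two groups of roughly equal size.
for subject, d in [("reading", 0.12), ("mathematics", 0.14)]:
    r_squared = d**2 / (d**2 + 4)
    print(f"{subject}: d = {d:.2f} -> variance explained = {r_squared:.2%}")
# reading: 0.36%; mathematics: 0.49%; both under half of one percent,
# consistent with the reviewers' statement.
```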

An even more serious challenge to the study was posed by New Orleans-based “Research on Reforms,” which complained that the Louisiana Department of Education will not release student data to independent research organizations.

It wrote: “As long as the Louisiana Department of Education can determine to whom to release student records for research purposes, the reports produced thereof, such as the CREDO report, are nothing more than biased evaluations.”

“The Department of Education (DOE) maintains that it has the discretion to release de-identified student-level records to selected researchers, and that it has the discretion to deny the same student records to other researchers. And, for the past few years, that is what the DOE has done. CREDO received the student records, and, Research on Reforms, Inc., who submitted a public records request for the same student records, was denied. As long as the DOE gets to select its evaluators, i.e., its researchers, the impact of the state-takeover and the charter school movement will never be objectively evaluated.

“Specifically, the Department of Education (DOE) released de-identified student-level records to CREDO for the school years 2008-09, 2009-10, and 2010-11 and denied the student level records for the same school years to Research on Reforms, Inc. (ROR). Thus, ROR sued the DOE in October 2012 for violation of Louisiana’s Public Records Act. The matter is now in Civil District Court.”

The National Education Policy Center is an invaluable resource. It keeps tabs on the half-baked research that pours forth from advocacy groups pretending to be think tanks.

Its latest report reviews ALEC’s “report card” on the states.

You will not be surprised to learn that the states with the highest scores are those with vouchers, charters, and unregulated home schooling.

Its ratings are similar to those of Michelle Rhee. The “best” states are not the ones with the best education, but the ones that match ALEC’s ideology. The highest marks go to states that are abandoning public education for a free-market model of private providers.

Read here to see the illustrated version of the Wolf attack on me and NEPC.

What do we need to protect us from future Wolf attacks? Garlic? A mirror?

Maybe just common sense and concern for the commonweal.

But what do I know. I am but a humble blogger with a doctorate in history, not a statistician.

Dr. Gene Glass is a distinguished scholar with a long career in educational research and statistics.

He recently co-authored a critical review of virtual charter schools, published by the National Education Policy Center.

In response, an operative from Jeb Bush’s so-called “Foundation for Educational Excellence” created a website in Dr. Glass’s name, ridiculing him and impugning his integrity by implying that he was bought with teacher union money. The smear site is http://geneglass.org/.

Because the corporate reformers are motivated by money, they assume everyone else is. They can’t understand that some people work from ideals higher than Mammon.

None of Dr. Glass’s critics acknowledged that CREDO studied charter schools in Pennsylvania and found that the worst student academic performance was in virtual charter schools. But no one from Jeb Bush’s shop created a website to ridicule CREDO, because it is funded by the Walton Foundation and led by researcher Margaret (Macke) Raymond, who is on the faculty at Stanford and affiliated with the conservative, free-market Hoover Institution.

This is Gene Glass’s bio (Wikipedia):

“Gene V Glass (born June 19, 1940) is an American statistician and researcher working in educational psychology and the social sciences. He coined the term “meta-analysis” and illustrated its first use in his Presidential address to the American Educational Research Association in San Francisco in April, 1976. The most extensive illustration of the technique was to the literature on psychotherapy outcome studies, published in 1980 by Johns Hopkins University Press under the title Benefits of Psychotherapy by Mary Lee Smith, Gene V Glass, and Thomas I. Miller. Gene V Glass is a Regents’ Professor Emeritus at Arizona State University in both the educational leadership and policy studies and psychology in education divisions, having retired in 2010 from the Mary Lou Fulton Institute and Graduate School of Education. Currently he is a Senior Researcher at the National Education Policy Center and a Research Professor in the School of Education at the University of Colorado Boulder. He is an elected member of the National Academy of Education.”

Kevin Welner is director of the National Education Policy Center at the University of Colorado in Boulder.

If you open the link to this article, you can find Welner’s links to research and contrary views on the issue.

SEPTEMBER 24, 2012 8:09 PM

Teacher evaluation and Seamus
By Kevin Welner
Since it’s campaign season, I figured it might be fun to respond to this question using an extended metaphor, with teacher evaluation policy playing the role of Gov. Romney’s Irish Setter, Seamus, and policy makers (including Pres. Obama’s EdSec Arne Duncan) playing the role of Gov. Romney.
In reading on, please remember that I’m trapped here in a “swing state,” subjected to a barrage of distorted photos of candidates overlaid with announcers’ voices portending our collective doom should we vote for the other guy. So bear with me for a bit; hopefully this will resonate even with the non-brain-addled in the non-swing states.
The Seamus story is well-known, at least to regular readers of Gail Collins’ column in the New York Times. The Romneys went on a family vacation, which included a 12-hour drive to Canada (Lake Huron). Seamus, the family dog, was put in his crate and strapped to the roof of the station wagon. The trip was carefully planned, down to specified rest stops. But Seamus fouled up the plans a bit when he expressed his displeasure in liquid fecal form, thus soiling himself and his surroundings. So Mitt Romney had to stop and hose down the dog, crate and car. They all then continued on their way. Seamus survived and, according to Gov. Romney, he “loves fresh air” and continued to like car rides, even up there in his crate.
In writing this, I can’t help but note that this all took place in the summer of 1983—the same year as “A Nation at Risk.” Coincidence?? (I’ve really got to get away from these campaign commercials…)
So how is teacher evaluation akin to Seamus? Just as the Romney family and Seamus needed to get to Canada one way or the other, we can all agree that we need good systems of teacher evaluation. The question is how we get there. Our “reformer” friends have come up with an efficient plan: use statistical growth models based on students’ test scores. Let’s strap teacher evaluation to the kids’ tests! What could go wrong?
Plenty, it turns out. This option comes with many serious weaknesses and unintended consequences. The research tells us that “lawmakers should be wary of approaches based in large part on test scores: the error in the measurements is large—which results in many teachers being incorrectly labeled as effective or ineffective; relevant test scores are not available for the students taught by most teachers, given that only certain grade levels and subject areas are tested; and the incentives created by high-stakes use of test scores drive undesirable teaching practices such as curriculum narrowing and teaching to the test.”
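To see how large measurement error translates into mislabeled teachers, here is a minimal simulation sketch. The numbers are illustrative assumptions (a true-effect spread of 0.10 and year-to-year noise of 0.20, in student-test standard deviation units), not estimates from the research quoted above.

```python
# Minimal simulation: when year-to-year noise swamps true differences
# between teachers, a bottom-decile cutoff flags many teachers who are
# not truly low-performing. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_effect = rng.normal(0.0, 0.10, n)   # assumed spread of real teacher effects
noise = rng.normal(0.0, 0.20, n)         # assumed measurement error in one year
observed = true_effect + noise           # one year's value-added estimate

flagged = observed < np.percentile(observed, 10)          # labeled "ineffective"
truly_low = true_effect < np.percentile(true_effect, 10)  # actually bottom decile

wrongly_flagged = np.mean(flagged & ~truly_low) / np.mean(flagged)
print(f"share of flagged teachers not truly in the bottom decile: {wrongly_flagged:.0%}")
```

Under these assumed numbers, a majority of the teachers flagged as “ineffective” are not in the true bottom decile, which is exactly the “incorrectly labeled” problem the research describes.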
But since nobody can come up with an alternative that is as efficient in generating concrete numerical rankings, we stumble (or drive) forward. Even when the brown muck starts to drip down the windows, we merely perform a quick clean-up and continue on our way.
Gov. Romney’s car trip was well-planned and was executed with an unyielding emphasis on efficiency. And at the end of the day, he and his family made their way to Lake Huron. But, notwithstanding Gov. Romney’s protestations to the contrary, it seems unlikely that Seamus or any other dog in that situation would come back wanting more. Yes, the careful planning and efficiency of the trip were remarkable, but there are less stressful and unpleasant ways for a dog to make that 12-hour trip—ways that aren’t as likely to lead to undesirable, unintended consequences.
This, lord help me, is what I’m thinking about when I consider the current push for more effective teacher evaluation systems. My conclusion is we should indeed go on that trip. But let’s invite our teachers and their evaluation systems inside the station wagon, and let’s plan the trip with a complete understanding of how best to get from Point A to Point B.
Last week, the NEPC published a 3-page brief explaining the importance of balanced evaluation approaches that include all stakeholders in decision-making about evaluation systems. Not easy. Maybe not even efficient. But we won’t have to stop mid-way through to get out the hose.