Archives for category: Teacher Evaluation

Anthony Cody notes a rift among allies. Mark Naison wrote critically about PAR (Peer Assistance and Review).

Writing from his experience, Cody explains how PAR works.

The bottom line is this: If you could choose how to be evaluated as a teacher, would you rather be judged by the rise or fall of your students’ test scores (VAM); by the principal, acting alone; or by a committee of peers and administrators whose first obligation is to help you?

Paul Karrer, who teaches in Castroville, California, writes a scorching review of what is laughingly called “reform.”

He begins:

“Arne Duncan and his patron President Barack Obama have gotten themselves in a bit of an educational bind. Big news came out of the White House on Aug. 21 but a lot of America missed it. It seems a collision course of: 1. sunsetting of the year 2014 and the imbecilic impossible fatwa of No Child Left Behind (the obscenity of schools held accountable for testing without a morsel of input for poverty); and 2. a large push by teacher unions to dethrone he of the basketball — Sir Arne Duncan.”

So Duncan made his statement about testing “sucking the oxygen” out of teaching, a typical Duncanism in which he denounces the very policies he promotes and still enforces.

Says Karrer of Duncan’s fancy step:

“Is it a complete flip flop? No, it is a little greasy middle-of-the-road weaseling meant to gain favor from Obama’s once-upon-a-time education supporters and to patch the rebellious hemorrhaging of his pet bamboozle Race To The Top and its ugly stepsister Common Core. Ever since Obama initiated his slash and burn policy regarding public education with pro-privatization, the green light to pro-charter corporations, his relationship with publishing-testing companies, and his knee in the groin and knife in the backs of teachers with rigorous evaluations based on kids’ test scores, he’s been trusted about as much as a pedophile at a playground by those who once-upon-a-halo included him in their sacred prayers.”

Karrer says time is running out for the Age of Test and Punish. More and more people are speaking up, the public is catching on to the failure of test, test, test, and the momentum is growing.

Levi Cavener, a special education teacher in Idaho, learned that Idaho will give the Common Core test (SBAC) to tenth graders even though it includes eleventh-grade content. He writes:

“However, I was shocked during this exchange when the Director told me that the decision was due to the fact the state was worried students wouldn’t take the test seriously, and they didn’t want their data set tainted…because, you know, then the results wouldn’t be valid.

“Here is the Director’s response to my question of the logic in giving 10th graders the SBAC instead of 11th graders:

“Grade 11 is optional this year as your juniors have already met graduation requirements with the old ISATs and might not take the new tests seriously if they were used for accountability.”

Well, that’s convenient. I’m glad the State Department can cherry-pick which students will take the SBAC “seriously” and which will not; I’m sure they will give that same privilege to teachers…oh..err…I guess not.

See, here’s why my jaw was left open: The Director of Assessment admitted, rightfully and logically, that if students won’t take the test seriously, then there is no point in assessing them because the data will be invalid. And, if that’s true, let’s not assess those kidos because it would be a total waste of time and resources, not to mention the fact that the data would be completely invalid.

Thus, it would be logical to conclude that if the data is not accurate, then the SDE surely wouldn’t want to tie those scores to something as significant as a teacher’s livelihood.

Oh wait…they want to do exactly that? Shucks!

According to the Idaho State Department of Education’s recent Tiered Licensure recommendations, SBAC data will be tied directly to a teacher’s certification, employment, and compensation.

Yet, if the Dept. of Ed admits SBAC data isn’t accurate, then what in the world are they doing insisting that the data be tied to a teacher’s certification, employment, and compensation?

The insistence on tying evaluations to admittedly invalid data is like tying a fortune cookie to real-world events. I don’t know about you, but my lucky numbers haven’t hit the lottery; what a scam!”

The test is more than eight hours long.

Writes Levi, “Isn’t it logical to conclude that at some point kidos decide they would rather go out to recess than read a difficult text passage closely or spend more time editing a written response? When the kido makes that decision, do we hold the teacher responsible for the invalid data?”

And what about special education kids? “Let’s compound that scenario for special education teachers who work with a population of students qualifying for a special education eligibility under categories of Attention Deficit Hyperactivity Disorders, Emotional Disturbances, and Autism Spectrum diagnosis.

“Yup, I’m sure these students will always take the multi-day SBAC with the utmost earnestness; it’s not like the very behaviors they demonstrated to qualify for special education services to begin with would impede their ability to complete the SBAC with total validity of the results?”

Peter Greene here evaluates a report by two analysts at Bellwether Education, a DC think tank, about how teachers should be evaluated. His post is a model of how to demolish the musings of people far removed from the classroom about how things ought to work.

He begins by situating its sponsor:

“I am fascinated by the concept of think tank papers, because they are so fancy in presentation, but so fanceless in content. I mean, heck– all I need to do is give myself a slick name and put any one of these blog posts into a fancy pdf format with some professional looking graphic swoops, and I would be releasing a paper every day.

“Bellwether Education, a thinky tank with connections to the standards-loving side of the conservative reformster world, has just released a paper on the state of teacher evaluation in the US. “Teacher Evaluation in an Era of Rapid Change: From ‘Unsatisfactory’ to ‘Needs Improvement.'” (Ha! I see what you did there.) Will you be surprised to discover that the research was funded by the Bill and Melinda Gates Foundation?”

He reviews what they describe as current trends and pulls each one apart.

Here is an example of a current trend and Greene’s response:

“3) Districts still don’t factor student growth into teacher evals

“Here we find the technocrat blind faith in data rearing its eyeless head again.

The authors say: “While raw student achievement metrics are biased—in favor of students from privileged backgrounds with more educational resources—student growth measures adjust for these incoming characteristics by focusing only on knowledge acquired over the course of a school year.”

“This is a nice, and inaccurate, way to describe VAM, a statistical tool that has now been discredited more times than Donald Trump’s political acumen. But some folks still insist that if we take very narrow standardized test results and run them through an incoherent number-crunching, the numbers we end up with represent useful objective data. They don’t. We start with standardized tests, which are not objective, and run them through various inaccurate variable-adjusting programs (which are not objective), and come up with a number that is crap. The authors note that there are three types of pushback to using said crap.

“Refuse. California has been requiring some version of this for decades, and many districts, including some of the biggest, simply refuse to do it.

“Delay. A time-honored technique in education, known as Wait This New Foolishness Out Until It Is Replaced By The Next Silly Thing. It persists because it works so often.

“Obscure. Many districts are using loopholes and slack to find ways to substitute administrative judgment for the Rule of Data. They present Delaware as an example of how futzing around has polluted the process and buttress that with a chart that shows statewide math score growth dropping while teacher eval scores remain the same.

“Uniformly high ratings on classroom observations, regardless of how much students learn, suggest a continued disconnect between how much students grow and the effectiveness of their teachers.

“Maybe. Or maybe it shows that the data about student growth is not valid.

“They also present Florida as an example of similar futzing. This time they note that neighboring districts have different distributions of ratings. This somehow leads them to conclude that administrators aren’t properly incorporating student data into evaluations.

“In neither state’s case do they address the correct way to use math scores to evaluate history and music teachers.”
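
To make concrete what the “student growth” measures described above actually calculate, here is a schematic, simplified value-added formula. The notation is purely illustrative, not the specification of any state’s actual model: a teacher’s “value added” is estimated as the average gap between her students’ end-of-year scores and the scores a statistical model predicts for them from prior scores and background characteristics.

\[
\widehat{VA}_j \;=\; \frac{1}{n_j} \sum_{i \,\in\, \text{class } j} \bigl( y_i - \hat{y}_i \bigr),
\qquad
\hat{y}_i \;=\; \hat{f}\bigl( y_i^{\text{prior}},\, x_i \bigr)
\]
% Illustrative notation only (no state's official model):
%   y_i          = student i's end-of-year test score
%   y_i^{prior}  = the same student's prior-year score
%   x_i          = background characteristics the model adjusts for
%   \hat{f}      = a prediction function fitted across the district or state
%   n_j          = number of students assigned to teacher j

Greene’s point is that neither the test scores fed into such a formula nor the statistical adjustments it applies are as objective as the single number that comes out of it appears to be.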

After carefully pulling the report apart, Greene turns to the conclusions, theirs and his.

Greene reviews their recommendations:

“It’s not a fancy-pants thinky tank paper until you tell people what you think they should do. So Adelman and Chuong have some ideas for policymakers.

“Track data on various parts of new systems. Because the only thing better than bad data is really large collections of bad data. And nothing says Big Brother like a large centralized data bank.

“Investigate with local districts the source of evaluation disparities. Find out if there are real functional differences, or the data just reflect philosophical differences. Then wipe those differences out. “Introducing smart timelines for action, multiple evaluation measures including student growth, requirements for data quality, and a policy to use confidence intervals in the case of student growth measures could all protect districts and educators that set ambitious goals.”

“Don’t quit before the medicine has a chance to work. Adelman and Chuong are, for instance, cheesed that the USED postponed the use of evaluation data on teachers until 2018, because those evaluations were going to totally work, eventually, somehow.

“Don’t be afraid to do lots of reformy things at once. It’ll be swell.

“Their conclusion

“Stay the course. Hang tough. Use data to make teacher decisions. Reform fatigue is setting in, but don’t be wimps.

“My conclusion

“I have never doubted for a moment that the teacher evaluation system can be improved. But this nifty paper sidesteps two huge issues.

“First, no evaluation system will ever be administrator-proof. Attempting to provide more oversight will actually reduce effectiveness, because more oversight = more paperwork, and more paperwork means that the task shifts from “do the job well” to “fill out the paperwork the right way” which is easy to fake.

“Second, the evaluation system only works if the evaluation system actually measures what it purports to measure. The current “new” systems in place across the country do not do that. Linkage to student data is spectacularly weak. We start with tests that claim to measure the full breadth and quality of students’ education; they do not. Then we attempt to create a link between those test results and teacher effectiveness, and that simply hasn’t happened yet. VAM attempted to hide that problem behind a heavy fog bank, but the smoke is clearing and it is clear that VAM is hugely invalid.

“So, having an argument about how to best make use of teacher evaluation data based on student achievement is like trying to decide which Chicago restaurant to eat supper at when you are still stranded in Tallahassee in a car with no wheels. This is not the cart before the horse. This is the cart before the horse has even been born.”

Multimillionaire equity investor Rex Sinquefield doesn’t like public education. Apparently he doesn’t like teachers either. He doesn’t think teachers should be evaluated by their administrators but by the standardized test scores of their students. Evidently he doesn’t know that this method of evaluating teachers has failed wherever it has been tried; evidently he doesn’t know that even the District of Columbia, which was first to implement this method, has put it on hold. Mr. Sinquefield also seems unaware that about 70% of teachers don’t teach tested subjects.

He was unable to get these ideas adopted by the Missouri state legislature, so he turned to a constitutional amendment, which will be on the ballot this fall as Constitutional Amendment 3.

If it passes, the problems and costs will begin. Missouri will have to develop tests for every subject that is taught and administer them at the beginning and end of each course. How will Missouri measure the effectiveness of art teachers, music teachers, physical education teachers? Vast new sums must be spent to create and administer dozens of new tests.

Experience in other states shows that teachers in affluent districts will get higher ratings than those who teach children in poor districts and those with disabilities. The tests measure advantage and disadvantage, not teacher quality.

The bottom line with Mr. Sinquefield’s proposal is that it will be very costly and it will not identify the best and worst teachers. It will reward teachers in high-income districts and punish those who choose to work with students who are English learners, have disabilities, or are homeless.

It will take decision-making power away from local administrators and shift it to a centralized bureaucracy. It has been tried and failed in many districts. It demoralizes teachers by reducing their jobs to nothing more than test scores.

There are research-proven ways to improve education: early childhood education, reduced class sizes for the students who need extra help, regular access to medical services for those who can’t afford them, and experienced teachers.

Missouri should do what works, rather than investing many millions of dollars in proven failure.

Jeff Bryant wonders whether Campbell Brown will replace Michelle Rhee as the public face of “reform.” Bryant describes the movement as “Blame Teachers First.”

Bryant suspects that Rhee’s star is fading fast. He describes her as “education’s Ann Coulter.” The lingering doubts about the Washington, D.C., cheating scandal have never dissipated, and, as John Merrow’s latest blog post details, the millions that Rhee has paid to protect her image have not been enough to stop the slide. He notes that she never collected the $1 billion she predicted and that her organization is retreating from several states. Her biography bombed. She was unable to draw a crowd in many of the states where she claimed to have thousands of supporters. Bryant says she is yesterday’s news.

Campbell Brown is thus next in line to inherit the role as leader of the “Blame Teachers First” movement.

Bryant writes:

“With Rhee and StudentsFirst sinking under the weight of over-promises, under-performance, and unproven practices, the Blame Teachers First crowd is now eagerly promoting Campbell Brown.

“According to a report in The Wall Street Journal, Brown launched the group Partnership for Educational Justice, with a Vergara-inspired lawsuit in New York State to once again dilute teachers’ job protections, commonly called “tenure.” The suit claims students suffer from laws “making it too expensive, time-consuming and burdensome to fire bad teachers.”

“An article in The Washington Post noted, “Brown has raised the issue of tenure in op-eds and on TV programs such as ‘Morning Joe.’ But she may be just getting warmed up.”

“Actually, Brown has already been warmed up and is plenty ready to take the mound and pitch. As the very same article noted, Brown started her campaign against teachers some time ago, claiming that the New York City teachers’ union was obstructing efforts to fire teachers for sexual misconduct. Unfortunately for Brown, the ad campaign conducted by her organization Parents Transparency Project failed to note that, as The Post article recalled, at least 33 teachers had indeed been fired. “The balance were either fined, suspended or transferred for minor, non-criminal complaints.” Oops.

“Further, as my colleague Dave Johnson recalled at the time, Brown penned an op-ed in The Wall Street Journal accusing the teachers’ union of “trying to block a bill to keep sexual predators out of schools.” It turned out, the union wanted to strengthen the bill, not stop it. Double oops.

“Nevertheless – or as The Post reporter put it, “undaunted” – Brown has now decided to take on teacher personnel policies on behalf of, she claims, “millions of schoolchildren being denied a decent education.”

Who is funding the new anti-teacher drive? Bryant describes the familiar organizations that promoted Rhee, such as TNTP, which Rhee founded, as well as Republican operatives.

He writes:

“What emerges from these interwoven relationships, then, is a big-money effort led by a small number of people who are intent on the singular goal of reducing the ability of teachers to have control of their work environments. But to what end?

“Regardless of how you feel about the machinations behind the Rhee-Brown campaign, what’s clear is that it is hell-bent on imposing new policies that have little to no prospect of addressing the problem they are purported to resolve, which is to ensure students who need the best teachers are more apt to get them.

“Research generally has found that experienced teachers – the targets for these new lawsuits – make a positive difference in students’ academic trajectory. A review of that research on the website for the grassroots group Parents Across America concluded, “Every single study shows teaching experience matters. In fact, the only two observable factors that have been found consistently to lead to higher student achievement are class size and teacher experience.”

The new campaign looks very much like the old campaign, with only this difference: Brown does not pretend to be a Democrat.

Our friend and frequent commenter KrazyTA has analyzed the response of the VAM Gang (Chetty, Friedman, and Rockoff) to the American Statistical Association’s pithy demolition of their famous and much praised justification for VAM.

Here is his analysis:

I urge viewers of this blog to read the recent response by Raj Chetty (Harvard University), John Friedman (Harvard University) and Jonah Rockoff (Columbia University) to a statement by the American Statistical Association (ASA) [2014] on VAM.

A pdf file of same (less than five pages hard copy) can be accessed at—

Link: http://obs.rc.fas.harvard.edu/chetty/ASA_discussion.pdf

The last paragraph of their response to ASA’s point #7 (p. 4):

“The ASA appropriately warns that “ranking teachers by their VAM scores can have unintended consequences that reduce quality.” In particular, it is possible that teachers may feel pressured to teach to the test or even cheat if they are evaluated based on VAMs. The empirical magnitude of this problem—and potential solutions if it turns out to be a serious concern—can only be assessed by studying the behavior of teachers in districts that have started to use VAMs.”

Immediately followed by the last paragraph of their response, in full (p. 4):

“In summary, our view is that many of the important concerns about VAM raised by the ASA have been addressed in recent experimental and quasi-experimental studies. Nevertheless, we caution that there are still at least two important concerns that remain in using VAM for the purposes of teacher evaluation. First, using VAM for high-stakes evaluation could lead to unproductive responses such as teaching to the test or cheating; to date, there is insufficient evidence to assess the importance of this concern. Second, other measures of teacher performance, such as principal evaluations, student ratings, or classroom observations, may ultimately prove to be better predictors of teachers’ long-term impacts on students than VAMs. While we have learned much about VAM through statistical research, further work is needed to understand how VAM estimates should (or should not) be combined with other metrics to identify and retain effective teachers.”

My initial reaction.

While they don’t use the term “Campbell’s Law” — IMHO, they are deliberately avoiding it — notice how they take the import and sweep of Campbell’s astute observation and reduce it to “responses such as teaching to the test or cheating” with the added proviso that “there is insufficient evidence to assess the importance of this concern.” *Note that in his testimony during the Vergara trial, Dr. Chetty on p. 547 casually dismissed this challenge to his VAM-based beliefs as “Campbell’s Conjecture.”*

Link: http://www.vergaratrial.com/storage/documents/2014.01.30_Rough_am_session.txt

This is critical. First, they reduce Campbell’s Law to a statement about individual morality and ethics—of the employees no less!—rather than something that involves whole institutions [e.g., the recent VA scandal or the Potemkin Villages of the now-vanished Soviet Union] and is created/mandated/enforced from the top down. Second, by doing so they avoid having to address the destructive effects of Management by the Numbers/Management by Objective/Management by Results, i.e., the very management philosophy of those funding their “research” and leading the charterite/privatization charge. Third, they literally discard the already large amount of evidence proving the accuracy and trustworthiness of Campbell’s Law re VAM [and its fuel/food, standardized test scores] by referring to it as “insufficient” — while their pronouncements, of course, even though they need “further work,” are the current Gold Standard.

So it is hardly surprising that they are hot and heavy for heading off potential problems in data corruption by “studying the behavior of teachers in districts that have started to use VAMs” when what is needed is to independently study, monitor and regulate the behavior of folks like administrators, school boards, heads of CMOs and charter owners/operators, the DOE, and those who employ people like Chetty, Friedman and Rockoff—they’re the ones that set the numerical goals/straitjackets that drive data corruption!

*While W. Edwards Deming would come in handy here, someone else thought along the same lines: “When a measure becomes a target, it ceases to be a good measure.” [Charles Goodhart]*

The next is a bit perplexing. Apparently they don’t know how to use Google and Amazon to find (among many such works) Sharon L. Nichols and David C. Berliner, COLLATERAL DAMAGE: HOW HIGH-STAKES TESTING CORRUPTS AMERICA’S SCHOOLS (2010, third printing) or Phillip Harris, Bruce M. Smith and Joan Harris, THE MYTHS OF STANDARDIZED TESTING: WHY THEY DON’T TELL YOU WHAT YOU THINK THEY DO (2011). Perhaps they permit themselves no newspapers, internet, or television either, hence testing scandals such as those in Washington, DC and Houston, TX and Atlanta, GA (just to name a few) escaped their notice completely. Also, the above authors and many others, like Audrey Amrein-Beardsley (see her recent RETHINKING VALUE-ADDED MODELS IN EDUCATION: CRITICAL PERSPECTIVES ON TESTS AND ASSESSMENT-BASED ACCOUNTABILITY, 2014), can be contacted by email. Is it too much to ask of those claiming to be researchers that they take the time and make the effort to, er, get the contact information they need to make sure their research is done properly?

In their response to ASA point #7 they quote the ASA to the effect that “Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of opportunities for quality improvement are found in the system-level conditions. Ranking teachers by their VAM scores can have unintended consequences that reduce quality” (p. 3). The trio start off as best they can by stating that “The ASA is correct in noting that the majority of variation in student test scores is ‘attributable to factors outside of the teacher’s control,’ and that this ‘is not saying that teachers have little effect on students.’” Wait! You can read the rest for yourselves, but here is the fly in the ointment—or the elephant in the room: in a debate, when you concede the most critical point, you lose the argument.

Since Chetty/Friedman/Rockoff didn’t dispute the 1% to 14% assertion, I would be awfully interested to know why they’re ignoring the other 86% to 99%. Could it be that it poses intractable difficulties for their VAManiacal beliefs?

My very last point. Chetty/Friedman/Rockoff don’t understand that even under the most favorable circumstances, high-stakes standardized testing measures very little, is inherently imprecise, and is used for purposes so inappropriate to its few strengths that it needs to be junked. Take out of the Chetty/Friedman/Rockoff response those terms referring to “test scores” and the like and, well, the whole thing falls apart. Those “vain and illusory” [thank you, Duane Swacker!] numbers/stats are the glue that holds VAM together, the fuel that keeps VAM moving ahead, the food that sustains its very existence.

The Golem of VAM reverts to its inert form when you remove the magic of Testolatry.

Perhaps they should have taken that class in ancient Greece rather than Bean Counting For $tudent $ucce$$—

“I have often repented of speaking, but never of holding my tongue.” [Xenocrates]

Or if you prefer another very old, very dead and very Greek guy:

“Words empty as the wind are best left unsaid.” [Homer]

Take your pick. Odds are you won’t go wrong. [a numbers/stats joke…]

😎

P.S. I leave it to readers of this blog to read the triad’s response and make their own judgments and comments.

Jordan Weissmann, a business correspondent for Slate, read the Vergara decision and noted that the judge’s conclusion hinged on a strange statistic. Judge Rolf Treu quoted David Berliner as saying that 1-3% of the teachers in the state were “grossly ineffective.” The judge then calculated that this translated into thousands of teachers, between 2,750 and 8,250, who were “grossly ineffective.”
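
A quick back-of-the-envelope check shows where that range comes from, assuming the figure of roughly 275,000 active California teachers that the decision reportedly used as its base (that number does not appear in this post and is cited here only to make the arithmetic visible):

\[
0.01 \times 275{,}000 = 2{,}750,
\qquad
0.03 \times 275{,}000 = 8{,}250
\]
% Assumes ~275,000 active California teachers, the base figure attributed to the decision.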

Weissmann called Professor Berliner and asked where the 1-3% figure came from. Dr. Berliner said it was a “guesstimate.”

He told Weissmann, “It’s not based on any specific data, or any rigorous research about California schools in particular. I pulled that out of the air. There’s no data on that. That’s just a ballpark estimate, based on my visiting lots and lots of classrooms.” Berliner is an emeritus professor of education at Arizona State University. He also never used the words “grossly ineffective,” and he does not support the judge’s belief that teacher quality can be judged by student test scores.

Dr. Berliner mailed Weissmann a copy of the transcript to show that he did not use the term “grossly ineffective.”

Weissmann then called Stuart Biegel, a law professor and education expert at UCLA, to ask him “whether he thought that the odd origins of the 1–3 percent figure might undermine Treu’s decision on appeal. Biegel, who represented the winning plaintiffs in one of the key cases Treu cited, said it might. But he thought that the decision’s “poor legal reasoning” and “shaky policy analysis” would be bigger problems. “If 97 to 99 percent of California teachers are effective, you don’t take away basic, hard-won rights from everybody. You focus on strengthening the process for addressing the teachers who are not effective, through strong professional development programs, and, if necessary, a procedure that makes it easier to let go of ineffective teachers,” he wrote to me in an email.”

Bob Shepherd writes on the absurd demands now placed on teachers and principals by politicians, who expect to see higher test scores every year. Step back and you realize that the politicians, the policy wonks, the economists, and the ideologues are ruining education, not improving it. They are doing their best to demoralize professional educators. What are they thinking? Are they thinking? Or is it just their love of disruption, let loose on children, families, communities, and educators?

Bob Shepherd writes:

OK, you are sitting in your year-end evaluation session, and you’ve heard from every other teacher in your school that his or her scores were a full level lower this year than last, and so you know that the central office has leaned on the principal to give fewer exemplary ratings even though your school actually doesn’t have a problem with its test scores and people are doing what they did last year but a bit better, of course, because one grows each year as a teacher–one refines what one did before, and one never stops learning.

But you know that this ritual doesn’t have anything, really, to do with improvement. It has to do with everyone, all along the line, covering his or her tushy and playing the game and doing exactly what he or she is told. And, at any rate, everyone knows that the tests are not particularly valid and that’s not really the issue at your school because the test scores are pretty good because this is a suburban school with affluent parents, and the kids always, year after year, do quite well.

So whether the kids are learning isn’t really the issue. The issue is that by some sort of magic formula, each cohort of kids is supposed to perform better than the last–significantly better–on the tests, though they come into your classes in exactly the same shape they’ve always come into them in because, you know, they are kids and they are just learning and teaching ISN’T magic. It’s a lot of hard work. It’s magical, sometimes, of course, but it’s not magic. There’s no magic formula.

So, the stuff you’ve been told to do in your “trainings” (“Bark. Roll over. Sit. Good Boy”) is pretty transparently teaching-to-the-test because that’s the only way the insane demand that each cohort will be magically superior to the last as measured by these tests can be met, but you feel in your heart of hearts that doing that would be JUST WRONG–it would short-change your students to start teaching InstaWriting-for-the-Test, Grade 5, instead of, say, teaching writing. And despite all the demeaning crap you are subjected to, you still give a damn.

And you sit there and you actually feel sorry for this principal because she, too, is squirming like a fly in treacle in the muck that is Education Deform, and she knows she has fantastic teachers who knock it out of the park year after year, but her life has become a living hell of accountability reports and data chats to the point that she doesn’t have time for anything else anymore (she has said this many times), and now she has to sit there and tell her amazing veteran teachers who have worked so hard all these years and who care so much and give so much and are so learned and caring that they are just satisfactory, and she feels like hell doing this and is wondering when she can retire.

And the fact that you BOTH know this hangs there in the room–the big, ugly, unspoken thing. And the politicians and the plutocrats and the policy wonks at the Thomas B. Fordham Institute and the Secretary of the Department for the Standardization of US Education, formerly the USDE, and the Vichy education guru collaborators with these people barrel ahead, like so many drunks in a car plowing through a crowd of pedestrians.

Audrey Amrein-Beardsley, one of our nation’s pre-eminent experts on value-added assessment, here reviews a TEDx talk by Tennessee Commissioner of Education Kevin Huffman, boasting of the tremendous growth in test scores as a result of his policies. Beardsley points out the curious fact that Tennessee started using VAM in the 1990s with little to show for it. But there were those Tennessee NAEP scores, proof positive, according to both Huffman and Secretary of Education Arne Duncan, that Race to the Top–or Huffman’s personal presence–was creating strong results. And in the end, results (test scores) are what matter most, right?

But what about those NAEP results that Huffman and Duncan tout?

Beardsley writes:

“While [William] Sanders (the TVAAS developer who first convinced the state legislature to adopt his model for high-stakes accountability purposes in the 1990s) and others (including U.S. Secretary of Education Arne Duncan) also claimed that Tennessee’s use of accountability instruments caused Tennessee’s NAEP gains (besides the fact that the purported gains were over two decades delayed), others have since spoiled the celebration because 1) the results also demonstrated an expanding achievement gap in Tennessee; 2) the state’s lowest socioeconomic students continue to perform poorly, despite Huffman’s claims; 3) Tennessee didn’t make gains significantly different than many other states; and 4) other states with similar accountability instruments and policies (e.g., Colorado, Louisiana) did not make similar gains, while states without such instruments and policies (e.g., Kentucky, Iowa, Washington) did. I should add that Kentucky’s achievement gap is also narrowing and their lowest socioeconomic students have made significant gains. This is important to note as Huffman repeatedly compares his state to theirs.”

Read the post. It is a very good demonstration of how data get used and misused for political purposes.
