Archives for category: Teacher Evaluations

Ever since she upset the heavily funded favorite in the recent Los Angeles school board runoff, many eyes have been on Monica Ratliff.

Some of her supporters were concerned when she appeared at an event where the Gates-funded Educators for Excellence presented a report on teacher evaluation. The event was attended by Superintendent John Deasy and school board president Monica Garcia, an ally of Deasy.

Immediately, tweets began to fly claiming that Ratliff supported paying teachers based on student test scores. Some worried that she had crossed over to the side that had opposed her in the election.

Whoa!

I wrote to Monica Ratliff, we had a candid conversation, and she advised that we should judge her by her votes as a board member, not by tweets that did not come from her.

She wrote:

“Dear Diane,

“When I advocate for fixing the LAUSD teacher evaluation system and professional development system, I am NOT advocating that we link test scores to monetary gain for teachers or administrators.

“Across LA, there are public schools where scores have been rising over the years sans any monetary gain for teachers or administrators. If we link test scores to monetary gain, I have no doubt that we will see some increases in test scores but at what cost and by what means?

Sincerely,

Mónica Ratliff”

A reader comments:

“I was present when Dr. Danielson spoke in depth during the formulation of the current Maryland evaluation system and at a Maryland State Education Association convention. She stated that there was no research to support measuring teachers by student test scores. In fact, she stated that if a teacher were fired due to students’ test scores, there could be a possibility of litigation.

Charlotte Danielson is the real deal! Her research has been co-opted (did I spell that correctly?) by Gates and company. Please do not disparage her.”

Michael Weston of Hillsborough County, Florida, explains what is wrong with teacher evaluation:

“Of course we use the Danielson rubric in Hillsborough County, Florida, where it forms one of the three pillars of the Gates-funded “Empowering Effective Teachers” initiative. The second pillar is value-added; the third is duplicity/deception.

Quite frankly, whether Charlotte is spelled Charlatan, whether her rubric is valuable or trash, whether her résumé is padded ... none of this matters to me. What matters is what it is being used for: teacher evaluation. Not a good use of money.

Teachers are not the major problem in education. The major problem in education is quite simple, well researched and accepted by everyone except Michelle Rhee.

The problem is the income gap.
The income gap causes an achievement gap.
The achievement gap causes efforts to close it.
The efforts to close it are misguided, destructive and tainted by greed and politics.

The pain, effort, and treasure spent on teacher evaluation will produce minuscule returns, if any. The greater likelihood is that teacher evaluation schemes will have a net negative effect. The morale costs are huge and have yet to be quantified.

Most unfortunately, here is where the third pillar, duplicity/deception, comes into play. These schemes are “doomed to succeed.” Why? Because the big money behind them demands that they succeed. How easy is it for the Gates Foundation to publish its own results? Very easy. How many school districts will admit to having poisoned the well of education?”

Principal Carol Burris of South Side High School in Rockville Centre, Long Island, spent her Saturday analyzing State Education Commissioner John King’s educator evaluation plan. Here is her review:

“When I took a look at the details of the plan imposed by Commissioner King on NYC, I was taken aback. The first thing I noticed was how low the points in the Effective range of the final 60 (other measures) were. These are the points assigned by the principal according to the rubric. I could not understand how the points in the Effective range could be as low as 45. A teacher could be rated Effective in the first component (the growth score) with 9 points, Effective in the second component (the local measure) with 9 points, and receive 45 points in the Effective range established by the commissioner for the final 60 (see page 70 here: http://files.uft.org/teacher-evaluation/13%20Attached%20Documents%20to%20NYCDOE%20APPR%20Plan%20Review%20Room%20Submission%20-%20Teachers%20and%20Principals.pdf), yet she would be rated Ineffective overall.

“If you add up the points: 9 + 9 + 45 = 63.
In other words, the teacher is rated INEFFECTIVE overall, even though she is Effective in all three categories. At least, that is what the statute, 3012c, would say.

“Let me explain. 3012c, which you can find here: http://www.regents.nysed.gov/meetings/2012Meetings/March2012/312bra6.pdf, states on page 46 the following when describing points awarded for the local measure:

“(ii) an Effective rating in this subcomponent if the results meet district-adopted expectations for growth or achievement and they achieve a subcomponent score of: (a) 9-17 for the 2011-2012 school year, and for the 2012-2013 school year and thereafter for teachers and principals whose score on the State assessment or other comparable measures subcomponent is not based on a value-added model; or (b) 8-13 for the 2012-2013 school year and thereafter for teachers and principals whose score on the State assessment or other comparable measures subcomponent is based on a value-added model.

“In other words, if the teacher receives a score of 9 – 17 on the local measure, prior to VAM, she is in the Effective category. After VAM, it changes to 8-13. That is defined in the statute. Now look on pages 35 and 36 of the plan imposed by the Commissioner:

http://files.uft.org/teacher-evaluation/13%20Attached%20Documents%20to%20NYCDOE%20APPR%20Plan%20Review%20Room%20Submission%20-%20Teachers%20and%20Principals.pdf

“On these pages you will find matrices that award points on the local measure. However, a score of 9 is not in the Effective range as 3012c requires. Rather, a score of 9 is in the Ineffective range. A teacher has to accrue 15 of the 20 points to be Effective without an approved VAM, and 13 out of 15 if there is an approved VAM.

“The entire section is confusing, because it has typographical errors as it tries to explain the ratings with or without VAM. However, even if a VAM is approved this year, the statute does not change. In fact, the Effective range moves down to begin at 8 points, according to 3012c.

“Unless I am missing an additional conversion chart, it appears to me that this plan violates 3012c. It gives a weight to test scores that was never intended, and it explains why the points are so low in the final 60. They can be low because John King raised the bar in the local measure, expecting very high student performance for a teacher to be rated Effective there, and that is not in accordance with the statute passed by the legislature.”
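Burris’s arithmetic can be checked in a few lines. This is only a sketch: the composite-score bands used for the overall rating (Ineffective 0–64, Developing 65–74, Effective 75–90, Highly Effective 91–100) are the commonly cited New York APPR defaults and are an assumption here, not figures quoted in her letter.

```python
# Sketch of the arithmetic in Burris's example, using the point values she
# quotes. The composite bands below are the commonly cited NY APPR defaults
# and are an assumption, not taken from the plan itself.

def overall_rating(composite):
    if composite <= 64:
        return "Ineffective"
    if composite <= 74:
        return "Developing"
    if composite <= 90:
        return "Effective"
    return "Highly Effective"

growth = 9   # Effective under 3012c (9-17 band, no VAM)
local = 9    # Effective under 3012c (9-17 band, no VAM)
other = 45   # the low Effective score in the final 60 that Burris flags

composite = growth + local + other
print(composite, overall_rating(composite))  # 63 Ineffective
```

The point of the sketch is the mismatch Burris describes: three subcomponent scores that each sit in an Effective band can still sum to a composite that falls in the Ineffective band.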

I agree with the following comment by RRatto. If checklists for teacher evaluation were so great, why are they never used in the nation’s elite private schools? They are a remnant of factory thinking, and unworthy of any profession. Checklists are great for auto mechanics and home builders. They are distinctly non-professional.

RRatto writes:

“Anyone who claims they know how to measure a teacher’s effectiveness is full of #%^.

Because of our immensely diverse population and the need to differentiate, I can tell you that most teachers must change the way they teach almost daily. There is no magic rubric that can measure that.

Once we start aiming to check off the must-dos of a rubric, we are no longer teachers. We become circus monkeys performing for our masters.”

Charlotte Danielson is the leading guru of teacher evaluation. Alan Singer asks who she is, what her background is, and why so many teachers will be evaluated by her rubric.

Arthur Goldstein, aka New York Educator, describes the vain and convoluted effort to create a teacher evaluation system in New York. A pinch of this, a heavy dose of testing, and the computer will tell us which teachers are great and which are the stinkers.

Perhaps you recall the hoopla that surrounded the release in late 2011 of a study by economists Chetty, Friedman, and Rockoff, in which they claimed that a great teacher would produce a huge increase in lifetime earnings, fewer pregnancies, and other wonderful life outcomes. It was reported on the front page of the New York Times, where one of the authors said that the lesson of the study was that teachers who couldn’t produce big gains should be fired sooner rather than later. The study was discussed reverentially on the NewsHour, and President Obama referenced its conclusion in his 2012 State of the Union address.

The central claim was that the great teacher produced a lifetime gain of $266,000 for a typical classroom.

First out of the box to challenge the study was Bruce Baker of Rutgers. He pointed out that if a class had 26 students, each of them would see an increment of $1,000 per year, or about $20 a week, or less than $5 a day. Maybe. Or maybe not.
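Baker’s back-of-the-envelope division can be reproduced directly. The figures are the ones quoted above; the weekly and daily breakdowns assume a full calendar year, which is an assumption of this sketch.

```python
# Reproducing Bruce Baker's back-of-the-envelope arithmetic with the
# figures quoted above.

classroom_gain = 266_000   # claimed lifetime gain for a typical classroom
students = 26
per_student = classroom_gain / students
print(round(per_student))  # roughly $10,231 per student over a lifetime

annual = 1_000             # Baker's rough per-student yearly increment
per_week = annual / 52     # spread over 52 weeks
per_day = annual / 365     # spread over a calendar year
print(round(per_week, 2))  # about $19 a week
print(round(per_day, 2))   # under $3 a day, i.e. "less than $5 a day"
```

Divided up this way, the headline number shrinks to pocket change per student, which is the heart of Baker’s objection.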

Recently he realized that the Chetty study was being used to sell test-based evaluation of teachers, and no one among the policymakers seemed aware of the defects and critiques of the study.

He decided that the study has gotten a special kind of treatment, which he refers to as the “Mountain-Out-of-a-Molehill-inator.” Once again, he reviews the skewing of the results to show why policymakers would be foolish to draw conclusions from the study and apply them to real schools and real teachers.

He concludes:

“What really are the implications of this study for practice – for human resource policy in local public (or private) schools? Well, not much! A study like this can be used to guide simulations of what might theoretically happen if we had 10,000 teachers, and were able to identify, with slightly better than even odds, the “really good” teachers – keep them, and fire the rest (knowing that we have high odds that we are wrongly firing many good teachers… but accepting this fact on the basis that we were at least slightly more likely to be right than wrong in identifying future higher vs. lower value added producers). As I noted on my previous post, this type of big data – this type of small margin-of-difference finding in big data – really isn’t helpful for making determinations about individual teachers in the real world. Yeah… works great in big-data simulations based on big-data findings, but that’s about it.

“Indeed it’s an interesting study, but to suggest that this study has important immediate implications for school and district level human resource management is not only naive, but reckless and irresponsible and must stop.”

A teacher sent this commentary about what’s happening in her city of Syracuse, New York.

She writes:

“As part of the teacher evaluation in Syracuse, our lovely union negotiated a student survey which would count as 6% of our evaluation. It’s called the Tripod survey, but I don’t know what that means. I’ve attached the directions we were given, which includes the questions for grades K-2. They have 40 questions, and when you get to 6-8, there are over 100.

While there are questions about the classroom, there are also questions about home life. How that pertains to my classroom, I don’t know. And, there are more “personal” questions the higher the grade level. I suspect that some of this information will find its way to the information cloud in the sky.

There will be name labels on each survey, which will be removed prior to collecting the completed survey. However, if it is anything like the surveys we’ve had to give to students in the past, their school ID # is still on the pages. Otherwise, why bother with name labels which will be removed? Why not just hand out surveys like you do the NY tests? I am uncomfortable with the whole thing, and really ticked off at the union which approved, sight unseen, what was going to be done.

Although my kids, all four of them, went to public schools in Syracuse, I fear for the future of the kids there now. Do we have any chance at all of putting a stop to what is going on? Money really is the root of all evil.”

Somehow the word has gotten through to the Gates Foundation that many teachers don’t like its agenda.

Teachers know that Bill Gates has told governors and the media that American public education is broken and obsolete. Teachers know he created the “blame-the-teacher” narrative. Teachers know he pushed the flawed idea that test scores of students should be used to judge teacher quality. Teachers know that Gates pumped $2 million into the anti-public school agitprop film “Waiting for Superman.”

Teachers are not dumb.

But now the Gates Foundation has launched a campaign to persuade teachers that the foundation cares.

Actions speak louder than words.

Teachers in many states are being evaluated by whether student scores rise or fall. Third graders will be surveyed to see what they think of their teachers. All bad ideas from Gates.

The public school bloggers in Seattle see this effort as a Trojan horse.

It is good Gates is listening. Now teachers must speak truth to power. Let him hear what you think.