Archives for category: Teacher Evaluations

Peter Greene says that corporate reformers have discovered the secret to generating an endless supply of “ineffective” teachers: just keep proclaiming that teachers are ineffective if their students get low scores.

“In the wake of Vergara, we’ve repeatedly heard an old piece of reformster wisdom: Poor students are nearly twice as likely as their wealthier peers to have ineffective, or low-performing, teachers. This new interpretation of “ineffective” or “low-performing” guarantees that there will always be an endless supply of ineffective teachers.

“The new definition of “ineffective teacher” is “teacher whose students score poorly on tests.”

“Add to that the assumption that a student only scores low on a test because the student had an ineffective teacher.

“You have now created a perfect circular definition. And the beauty of this is that in order to generate the statistics tossed around in the poster above, you don’t even have to evaluate teachers!”

And he adds:

“You can have people trade places all day — you will always find roughly the same distribution of slow/fast, wet/dry, good/bad vision. Because what you are fixing is not the source of your problem. It’s like getting a bad meal in a restaurant and demanding that a different waiter bring it to you.”

When Arne Duncan went to South Carolina, he probably expected to meet the usual compliant, uninformed business crowd. But Patrick Hayes of EdFirstSC was waiting to meet him, hear him, and ask questions. And Hayes is neither compliant nor uninformed.

Hayes writes:

Ever seen a weasel tap dance? Would you like to?

Well, here it is: me vs. Arne Duncan.

My first thought when I left was that I should’ve boxed him in more during the preamble. (I was worried about losing the mic as it was.)

Looking this over, I don’t think it would have done a bit of good.

The man is animatronic. Push the button and out come the talking points.

For context, here’s the setup (video cuts in midway):

“Your Dept found a 36% error rate for value-added.

Your Dept’s merit pay brief says:

‘TX, Nashville, and Chicago programs all showed no effect on student achievement.’

We can add NY, Denver and (more than likely) DC to that list.

Why are you spending our money on a policy that is unfair to teachers and has an extensive record of failure?”

Later in the session, Duncan’s talking points misfired. Asked about our move away from Common Core, he went on at length about how upsetting it was seeing SC go back to low standards and lying to parents.

Apparently, nobody told him that our standards were among the most rigorous in the nation, according to Fordham Institute, a conservative think tank that pushes rigor.

Like Duncan, they’re big believers in the fallacy that if we make school harder, we’ve made school better.

Patrick Hayes

Director
EdFirstSC
843-852-9094

Jesse Hagopian, a teacher at Garfield High School in Seattle, here describes the decision by the Gates Foundation to delay the high-stakes consequences of the tests promoted by none other than the Gates Foundation.

Hagopian writes:

“How do you know the United States is currently experiencing the largest revolt against high-stakes standardized testing in history?

“Because even the alchemists responsible for concocting the horrific education policies designed to turn teaching and learning into a test score have been shaken hard enough to awaken from the nightmare scenario of fast-tracking high-stakes Common Core testing across the nation. The Bill & Melinda Gates Foundation issued a stunning announcement on Tuesday, saying that it supports a two-year moratorium on attaching high-stakes to teacher evaluations or student promotion on tests associated with the new Common Core State Standards.

“Labor journalist Lee Sustar put it perfectly when he said of the Gates Foundation’s statement, “Dr. Frankenstein thought things got out of hand, too.”

“The mad-pseudoscientists at the Gates Foundation have been the primary perpetrators of bizarre high-stakes test experiments in teacher evaluations, even as a growing body of research—including a report from the American Statistical Association—has debunked the validity of “value added method” testing models. The Gates Foundation has used its immense wealth to circumvent the democratic process to create the Common Core State Standards (CCSS) with very little input from educators.”

After dissecting this surprising and welcome retreat, Hagopian calls on resisters to join a demonstration on June 26 and continue the mass civil resistance to the Gates Foundation’s undemocratic takeover of American education:

“This latest backtrack by the Gates Foundation shows they are vulnerable to pressure. But the question remains, will the Gates Foundation pursue its call for constraining the testing creature it created with the same zeal as it showed in creating the Common Core? Will the Foundation use its undue influence and wealth to pressure states to drop the use of high stakes testing attached to Common Core tests? On June 26th, public education advocates from around the country will arrive in Seattle to protest at the global headquarters of the Gates Foundation. You should join them and find out if the Gates Foundation is brave enough to answer these questions.

“While the Gates Foundation may be bending to the will of a popular revolt, it will take nothing short of a mass civil rights movement to defeat its grotesque monster of high-stakes testing that is menacing our schools.”

Mark Henry, superintendent of the Cypress-Fairbanks district in Texas, stood up and spoke out for common sense and education ethics. In this article, he explains why his district–the third largest in Texas–will not participate in a pilot test to evaluate teachers by student test scores.

He writes:

“This latest movement to “teacher-proof” education places additional fear, anxiety and pressure on professionals who are stressed enough already. I have seen this first-hand with principals and teachers who fret over the STAAR test, a once-per-year high-stakes assessment that measures how a child performed on one test on one day. Is that really learning? I don’t think so. Testing is a key diagnostic tool, and results should be used to assess the progress of students so plans can be developed to address the gaps and deficiencies of each student.
Learning is not a business; it’s a process. Use of a teacher evaluation system tied to standardized test scores alienates educators by trying to transform classrooms into cubicles. There are many more elements that go into teaching and learning than a high-stakes, pressurized test. Tying student test scores to a teacher’s evaluation may improve test scores, but does it improve a child’s educational outcome?”

Henry says there are three reasons that schools fail: mismanagement by school boards and superintendents; ineffective principals; and lack of community support.

He does not blame teachers for poor leadership or systemic failure.

He writes:

“Let’s quit trying to “teacher-proof” education and stop the overreliance on data from one high-stakes test. The answers for improvement are recruiting, training and supporting our teaching professionals. Attention to these will deepen the effectiveness of what we do in the classroom and the biggest winners will be our children.”

Mark Henry is a hero of public education for his willingness to stand against a misinformed and harmful status quo.

I earlier reported that the latest data show that 97% of teachers in Pittsburgh received ratings of either “distinguished” or “advanced.” Similar findings have emerged elsewhere, which makes me wonder why it was necessary to spend billions of dollars to create these new evaluation systems, which are often incomprehensible. But Kipp Dawson, a Pittsburgh teacher, wrote a comment warning that the evaluation system is flawed and riddled with unreliable elements, like VAM. Don’t be fooled, Dawson says. The Pittsburgh evaluation system was created with the lure of Gates money. It attempts to quantify the unmeasurable.

Dawson writes:

I am a Pittsburgh teacher and an activist in the Pittsburgh Federation of Teachers (AFT). Let’s not let ourselves get pulled into the trap of applauding the results of a wholly flawed system. OK, so this round the numbers look better than the “reformers” thought they would. BUT the “multiple measures” on which they are based are bogus. And it was a trap, not a step forward, that our union let ourselves get pulled in (via Gates money) to becoming apologists for an “evaluation” system made up of elements which this column has helped to expose as NOT ok for “evaluating” teachers, and deciding which of us is an “effective” teacher, and which of us should have jobs and who should be terminated.

A reminder. VAM. A major one of these “multiple measures.” Now widely rejected as an “evaluating” tool by professionals in the field, and by the AFT. A major part of this “evaluation” system.

Danielson rubrics, another major one of these multiple measures: after many permutations and reincarnations in Pittsburgh, turned into the opposite of what they were in the beginning of this process — presented to us as a tool to help teachers get a window on our practice, but now a set of numbers to which our practice boils down, and which is used to judge and label us. And “objective?” In today’s world, where administrators have to justify their “findings” in a system which relies so heavily on test scores? What do you think . . .

Then there’s (in Pittsburgh) Tripod, the third big measure, where students from the ages of 5 (yes, really) through high school “rate” their teachers — which could be useful to us for insight but, really, a way to decide who is and who is not an “effective” teacher?

To say nothing of the fact that many teachers teach subjects and/or students which can’t be boiled down in these ways, so they are “evaluated” on the basis of other people’s “scores” over which they have even less control.

Really, now.

So, yes, these numbers look better than they did last year, in a “practice run.” But is this whole thing ok? Should we be celebrating that we found the answer to figuring out who is and who is not an “effective” teacher?

This is a trap. Let’s not fall into it.

Billions of dollars have been spent to create new teacher evaluation systems. Here is one result: in Pittsburgh, 97% of teachers were rated either distinguished or advanced. Meanwhile, budget cuts are harming children in Pennsylvania.

For Immediate Release
June 13, 2014

Contact:
Marcus Mrowka
202/531-0689
mmrowka@aft.org
http://www.aft.org

Pittsburgh Teacher Evaluation Results Demonstrate Importance of Due Process and Improvement-Focused Evaluation Systems

WASHINGTON— Statement of AFT President Randi Weingarten following news that nearly 97 percent of teachers were rated distinguished or advanced.

“On one side of the country, a judge in California wrongly ruled that the only way to ensure that kids—particularly kids who attend high-poverty schools—have good teachers is to take away teachers’ due process rights. On the other side of the country, the most recent teacher evaluation results in Pittsburgh proved this is absolutely not true. Due process not only goes hand in hand with this new evaluation system, having those rights helped to strengthen it.

“Nearly 97 percent of Pittsburgh’s teachers were rated distinguished or advanced under this new evaluation system. We’re not surprised at all by the dedication and talent of Pittsburgh’s teaching staff who go into the classroom each and every day to help our children grow and achieve their dreams—but there’s a bigger story here that rejects the assertion made in California that due process rights hurt educational quality.

“These results show what is possible when teachers, unions and the district—in a state with due process—work together on an evaluation system focused on helping teachers improve. While we may have some qualms about the construction of the evaluation system, the fact remains that, far from impeding achievement, due process and tenure, combined with an improvement-focused evaluation system, empower teachers and keep good teachers in the classroom, offer support to those who are struggling, and streamline the process for removing teachers who can’t improve.”

###

This morning I posted a statement by a group of professors at City University of New York in support of the edTPA, which assesses the performance of those who seek certification to enter teaching.

Let me make clear that I am not supporting or endorsing either side of this debate but am watching carefully, as I tend to be suspicious of all high-stakes testing.

Soon after the post appeared this morning, I received an email from a CUNY professor pointing out that the professors’ union at CUNY—the Professional Staff Congress—opposes edTPA and that those who signed the earlier statement are a minority of the faculty.

Due to the opposition of the PSC, UUP (United University Professions of the State University of New York), and NYSUT (New York State United Teachers), implementation of edTPA has been delayed until June 2015.

PSC said this on its website:

The Teacher Performance Assessment (edTPA) is a high-stakes assessment for student teachers that includes filmed classroom observations. It has been opposed by PSC, UUP and NYSUT. (NYSUT edTPA resolution.) The State Education Department rushed to implement the controversial teacher certification exam, which was set to be a requirement for teacher certification after May 1, 2015. But education faculty, teachers and their unions pushed back, and implementation of the assessment has been delayed until June 2015.

In Support of a Performance Assessment of Teaching
June, 2014

Beverly Falk, Professor and Director, Graduate Program in Early Childhood Education, The City College of New York

Jeanne Angus, Assistant Professor and Program Director, Graduate Program in Special Education, Brooklyn College

Greg Borman, Lecturer, Secondary Science Education, The City College of New York

Nancy Cardwell, Assistant Professor, Graduate Program in Early Childhood Education, The City College of New York

Joni Kolman, Assistant Professor, Department of Teaching, Learning, and Culture, The City College of New York

Geraldine Faria, Assistant Dean, School of Education, Brooklyn College

Christy Folsom, Associate Professor, Childhood Education, Lehman College

Nancy Martin, Adjunct instructor, Childhood Science Education, Brooklyn College

Andrew Ratner, Assistant Professor, Secondary English Language Arts Education, The City College of New York

Deborah Shanley, Professor, Special Education/English, Secondary Education and Dean, School of Education, Brooklyn College

Jacqueline D. Shannon, Associate Professor and Chair, Department of Early Childhood Education/Art Education, Brooklyn College

Beverly Smith, Associate Professor, Secondary Mathematics Education, The City College of New York

Christina Taharally, Associate Professor and Director, Graduate Programs in Early Childhood Education, Hunter College

The media and the blogosphere have been filled of late with discussions about teacher education. Think tanks, states, and the federal government have questioned the efficacy of teacher preparation programs and are proposing accountability measures for them that resemble the high-stakes testing in P-12 schools. Many have responded to these problematic policies, which emphasize targets and sanctions rather than supports for improvement, with critiques of the dysfunctional consequences they generate: an over-emphasis on tests that narrows the curriculum, and the use of value-added measures (such as students’ test scores used to evaluate teachers and graduates of teacher education programs) that do not account for all of the complex factors influencing learning. Critics rightly point to these policies as creating disincentives to teach students who traditionally do not score well on tests (those who are poor, are new immigrants, are English language learners, or have special needs).

Ironically, however, some who are reacting to these negative effects of the test-and-punish approach are including in their attack an initiative specifically designed to push back against it. They target a performance assessment for teachers designed by the profession for the profession – the edTPA – which calls on prospective teachers to demonstrate through performance (not multiple-choice tests) that they have the professionally agreed-upon skills and knowledge to enter a classroom ready to teach. The opponents of edTPA make inaccurate claims about it – that it is tied to a high-stakes testing regime and outsources evaluations to a private corporation, Pearson; that it demands a single approach to teaching and teacher education; that it usurps academic freedom and faculty control of curriculum; and that it has no research base for evaluating good teaching. Alan Singer’s recent blog post, an example of this opposition, also claims that edTPA “distracts student teachers from the learning they must do on how to connect ideas to young people and undermines their preparation as teachers.”

We, teacher educators who have used the edTPA, write here to offer a different perspective – to share how it has supported our teaching, our program development, and our students’ learning.

Who we are:

We are teacher educators from the City University of New York, a university comprised of many campuses across NYC that serve a socioeconomically, culturally, racially, and linguistically diverse population of students. We are advocates for equity and access in education. We support culturally-responsive teaching and assessment practices that focus on deep understanding, critical thinking, and analysis of complex issues. We believe assessment should examine what learners know and can do in authentic contexts and that assessment results should be used to support and improve, not target and sanction. Additionally, we support national efforts to make educator preparation more clinically based so that graduates of educator preparation programs are supported in the context of real-life teaching to combine theoretical with practical knowledge so that they can enter their classrooms ready for the incredibly difficult realities of teaching.

Because of these values, we welcome the teacher performance assessment (edTPA) – a performance assessment of teaching developed by hundreds of teachers and teacher educators across the country, in a process led by Stanford University’s Center for Assessment, Learning, and Equity (SCALE), with support from the American Association of Colleges for Teacher Education (AACTE). Based on 25 years of research and practice, this assessment is currently being used in over 500 institutions in 34 states across the United States. In New York State, it is newly required for certification. As a result of our first years of experience using it, we find the edTPA to be a tool for our improvement and a valuable guide to effective teaching.

edTPA: A performance assessment of commonly-agreed upon foundational skills
For those who are not familiar with edTPA, here is a brief outline of what it requires. It is a performance assessment that consists of 3 tasks that call on prospective teachers to demonstrate and explain their ability to carry out universally-accepted essentials of good teaching:

1) plan 3-5 inter-related learning experiences, taking into consideration the cultural, linguistic, and learning backgrounds of students;

2) teach what they have planned, demonstrating through video a segment of a learning experience that is accompanied by a reflective commentary; and

3) assess students’ work – examining artifacts of students’ work, including that of students with learning and language differences, for the purpose of using what has been learned from the students’ work to inform future teaching.

Alan Singer and other opponents of edTPA claim that the developers of edTPA (which he inaccurately refers to as including Pearson and New York State) “are trying to sell the public that you learn to teach, not by teaching, but by writing about it. They also want you to believe that they have perfected a magical algorithm that allows them to quickly, easily, and cheaply assess the writing package and accompanying video and instantly determine who is qualified to teach our children.”

Contrary to these claims, we do not see the edTPA as simply a tedious writing exercise. Indeed, the requirements of the edTPA are teaching: planning a curriculum, teaching it for several days, adjusting the plans based on what students are learning, and assigning and evaluating student work to shape future teaching are, in fact, what teachers do. In addition to actually teaching, we have experienced edTPA as a useful opportunity to reflect on teaching strategies and students’ needs. We believe, as do the many representatives of professional associations and experienced educators who participated in the development of edTPA, that an essential part of teaching professionalism is being able to explain what we do and why we do it. Only educators who can articulate and defend their practices can uphold the professionalism needed to strengthen our field. Furthermore, we do not agree with the claim that the edTPA demands only one way to demonstrate what is good teaching. The lessons candidates plan are developed by them; the materials they use are chosen by them; the strategies they employ are their choice. The assessment offers a frame that has room for many different approaches. We do not think it takes the artistry out of teaching but instead, by sharpening the focus of our preparation on commonly agreed-upon foundational skills, it enables not only the artistry but also the joy of teaching to take place.

edTPA is controlled and scored by the profession, not the Pearson corporation
Contrary to the claims of Alan Singer and others, the edTPA is not a Pearson assessment. The facts are that Pearson is an operational partner (much like the publisher of a text), responsible for creating and managing the online platform that collects portfolios and delivers them to the teachers and teacher educators who score them. This capacity enables the assessment to be used on a national basis.

Neither is edTPA scored by Pearson. Singer inaccurately claims that “student teachers are not being evaluated by trained field supervisors or cooperating teachers, but by temporary evaluators of questionable qualifications … who are hired by Pearson.” This is not true. The facts are that edTPA is scored by experienced teachers and teacher educators who are rigorously selected through a process developed by the national consortium of educators who designed edTPA and then trained to examine prospective teachers’ work in relation to commonly-agreed-upon descriptors of exemplary practice (also a process designed by the edTPA consortium).

In our experience, using the edTPA has not taken the development and evaluation of our students out of the hands of our faculty, trained field supervisors, or cooperating teachers. Rather, it has been a helpful complement to our coursework and to the feedback we offer in clinical experiences. In fact, as a result of working with edTPA, not only have we been prompted to develop more coherence in our courses and across our programs, but we have also found that edTPA has prompted our prospective teachers to demonstrate a more intentional and reflective approach to their teaching. Overall, we believe that the edTPA is helping us to better prepare our graduates.

Prospective teachers’ perspectives

The perspectives of prospective teachers who have taken edTPA confirm what we have experienced. In a recent state pilot of edTPA, for example, researchers found that 96% of teacher candidates reported that the edTPA was a positive influence on their learning, pointing especially to how it made them more self-aware and focused them on student learning. More than 90% of teacher educators reported the experience of supporting the edTPA enabled them to reflect on and improve their program design and instruction. For more evidence about the effects of this assessment on candidate learning and teacher education improvement, see http://edtpa.aacte.org/resources.

These research findings are reflected in the remarks of Peter Turner, a graduate of The City College of New York’s School of Education, who noted in a recent interview:

[Although I had student taught already], for me [edTPA] was the first time that I considered every aspect of what it means to be an effective teacher…. The edTPA is a good test because it scaffolds every aspect of what a teacher needs to do. It was a wonderfully educative experience for me.
— Peter Turner, City College of New York

Lehman College graduate, Roshawna Cooper, adds:

Looking at my lesson plans when I was doing edTPA and looking at my older lesson plans when I did methods courses without edTPA, I missed a whole chunk. There was a lot missing. I am so much more mindful of my students now when I am teaching and the effectiveness of my different lessons. I am more mindful about how to build on each lesson to support my students’ skills. And that is really big. (See http://edtpa.aacte.org/resources/candidate-to-candidate-reflections-on-taking-edtpa)

Safe to practice/ready to teach: Accountability by the profession for the profession
All professions have external certification exams and a commonly agreed-upon set of foundational knowledge. In fact, professions that are responsible for the safety and well-being of humans all require that their certification processes demonstrate not only that new entrants to the profession have knowledge and skills but that they know how to apply that knowledge and those skills and are safe to practice with those entrusted to their care. We believe that the edTPA, by asking new teachers to demonstrate that they know how to teach before they are given the privilege of taking responsibility for children’s lives, is a genuine and valid measure of our work. Because edTPA’s rich descriptions and analyses of teaching are aligned with critical, commonly agreed-upon elements of effective practice while allowing for individuality and flexibility in content and style, we believe it serves as a useful tool and guide for teaching and is a positive step forward for us as a profession.

Although there is no such thing as a perfect assessment, especially for something as complex as teaching, we believe that edTPA vastly improves the process by which teachers are certified in New York State. It is a mechanism for us as teacher educators to demonstrate the outcomes of our work and to hold ourselves, as a profession, accountable for what we do. It stands as a viable, genuine accountability measure for graduates of teacher education, as opposed to sole reliance on standardized tests. While its implementation has posed challenges and calls out for changes in “business as usual” in teacher education, we believe these changes are well worthwhile because edTPA reflects our aspirations, celebrating what it means to be a teacher and putting into practice the educative aspect of what high-quality assessment should be.

Jordan Weissmann, a business correspondent for Slate, read the Vergara decision and noted that the judge’s conclusion hinged on a strange allegation. The judge quoted David Berliner as saying that 1-3% of the teachers in the state were “grossly ineffective.” The judge then calculated that this translated into thousands of teachers, between 2,750 and 8,750, who are “grossly ineffective.”

Weissmann called Professor Berliner and asked where the 1-3% figure came from. Dr. Berliner said it was a “guesstimate.”

As Weissmann reported: “It’s not based on any specific data, or any rigorous research about California schools in particular. ‘I pulled that out of the air,’ says Berliner, an emeritus professor of education at Arizona State University. ‘There’s no data on that. That’s just a ballpark estimate, based on my visiting lots and lots of classrooms.’” Berliner also never used the words “grossly ineffective,” and he does not support the judge’s belief that teacher quality can be judged by student test scores.

Dr. Berliner mailed Weissmann a copy of the transcript to show that he did not use the term “grossly ineffective.”

Weissmann then called Stuart Biegel, a law professor and education expert at UCLA, to ask him whether he thought that the odd origins of the 1–3 percent figure might undermine Treu’s decision on appeal. Biegel, who represented the winning plaintiffs in one of the key cases Treu cited, said it might. But he thought that the decision’s “poor legal reasoning” and “shaky policy analysis” would be bigger problems. “If 97 to 99 percent of California teachers are effective, you don’t take away basic, hard-won rights from everybody. You focus on strengthening the process for addressing the teachers who are not effective, through strong professional development programs, and, if necessary, a procedure that makes it easier to let go of ineffective teachers,” Biegel wrote to Weissmann in an email.

Peter Greene doesn’t understand how the corporate bullies can celebrate their victories over teachers. He can’t understand how they lie about doing it “for the kids.” They talk about equity and social justice as they attack teachers’ hard-won rights, and they know they are making it up.

He writes:

“I mean, bloody hell, guys? Do we all have “stupid” written on our foreheads? Can you not even do me the respect of telling me convincing lies?

“It’s like talking to that kid in the third row who just punched another student in the face and is now sitting there smiling, laughing and saying “I never touched him” with that fish-eating grin that says, “Go on. I’m lying straight to you, and you’re not going to do anything about it because my dad’s on the school board and– oh yeah– you don’t have tenure.”