Archives for category: Teacher Evaluations

At a heated meeting yesterday, the New York Board of Regents voted to approve changes to the teacher evaluation rules. The source of the contention was a harsh plan created by Governor Cuomo and jammed hastily into the state budget bill. Cuomo wants 50% of teachers’ evaluations to be based on state tests. It is payback for teachers’ failure to support his re-election last fall.

Recently a group of seven dissident Regents issued their own statement, proposing a year-long delay in implementation and increased focus on performance assessments.

At the meeting yesterday, the dissidents won some compromises–the main one being a four-month delay, which effectively pushes implementation off for a year.

But the seven dissidents who heroically defended students, teachers, and education dropped to six, as Regent Josephine Finn, a non-educator, joined the majority.

As I understand the details better, I will post them. From what I hear, the six dissident Regents are trying to craft a wise policy that will improve education, and they won significant compromises in the formula.

The majority clings to the vain hope that more testing equals better education. Call them the NCLB majority. Time is running out for their failed ideas.

Some 200,000 students (and nearly 400,000 parents) refused the tests this past spring. Expect that number to grow as the Regents majority ignores the popular rejection of their failed policies.

Like every other state, Pennsylvania spent many tens of millions (or more) to develop a new teacher evaluation system. Guess what?

Teachers got their highest ratings ever!

“In the first year of many school districts using a new statewide teacher evaluation system, a greater portion of teachers was rated satisfactory than under the old system.

“In figures released by the state Department of Education, 98.2 percent of all teachers were rated as satisfactory in 2013-14 — the highest percentage in five years — despite a new system that some thought would increase the number of unsatisfactory ratings.”

“In the four prior years, 97.7 percent of teachers were rated satisfactory in all but 2009-10, when 96.8 percent were. These figures count teachers in school districts, career and technical centers, intermediate units and charter schools.”

Pennsylvania is fortunate to have so many good teachers!

Whom shall we blame now?

Carol Burris writes of the terrible consequences that will follow implementation of Governor Cuomo’s teacher evaluation plan.

She urges support for the plan created by seven (of 17) dissident members of the New York Board of Regents. Almost all are experienced educators who have carefully reviewed research. Cuomo is not an educator and obviously paid no attention to research.

With two more Regents, the dissidents would be a majority.

Lester Young of Brooklyn? Roger Tilles of Long Island?

Faced with the highly unpopular law on teacher evaluations rushed through the Legislature by Governor Cuomo with minimal consideration or debate, seven members of the 17-member New York State Board of Regents issued a vigorous dissent. The law requires that 50% of teacher evaluations be based on test scores, a number that is not supported by research or experience. Unlike the Governor and the Legislature, these seven members of the Regents have demonstrated respect for research and concern for the consequences of this hastily passed law on teachers, children, principals, schools, and communities. They are courageous, they are wise, and they are visionaries. They have shown the leadership that our society so desperately needs. All New Yorkers are in their debt.

I place these wise leaders on the blog honor roll.

The dissident Regents issued the following statement:

Position Paper Amendments
to Current APPR Proposed Regulations

BY SIGNATORIES BELOW JUNE 2, 2015

We, the undersigned, have been empowered by the Constitution of the State of New York and appointed by the New York State Legislature to serve as the policy makers and guardians of educational goals for the residents of New York State. As Regents, we are obligated to determine the best contemporary approaches to meeting the educational needs of the state’s three million P-12 students as well as all students enrolled in our postsecondary schools and the entire community of participants who use and value our cultural institutions.

We hold ourselves accountable to the public for the trust they have in our ability to represent and educate them about the outcomes of our actions which requires that we engage in ongoing evaluations of our efforts. The results of our efforts must be transparent and invite public comment.

We recognize that we must strengthen the accountability systems intended to ensure our students benefit from the most effective teaching practices identified in research.

After extensive deliberation that included a review of research and information gained from listening tours, we have determined that the current proposed amendments to the APPR system are based on an incomplete and inadequate understanding of how to address the task of continuously improving our educational system.

Therefore, we have determined that the following amendments are essential, and thus required, in the proposed emergency regulations to remedy the current malfunctioning APPR system.

What we seek is a well thought out, comprehensive evaluation plan which sets the framework for establishing a sound professional learning community for educators. To that end we offer these carefully considered amendments to the emergency regulations.

I. Delay implementation of district APPR plans based on April 1, 2015 legislative action until September 1, 2016.

A system that has integrity, fidelity and reliability cannot be developed absent time to review research on best practices. We must have in place a process for evaluating the evaluation system. There is insufficient evidence to support using test measures that were never meant to be used to evaluate teacher performance.

We need a large-scale study that collects rigorous evidence of fairness and reliability, and the results need to be published annually. The current system should not be simply repeated with a greater emphasis on a single test score. We do not understand and do not support the elimination of the instructional evidence that defines the teaching, learning, and achievement process as an element of the observation process.

Revise the submission date. Allow all districts to submit by November 15, 2015 a letter of intent regarding how they will utilize the time to review/revise their current APPR Plan.

II. A. Base the teacher evaluation process on student standardized test scores, consistent with research; the scores will account for no more than 20% on the matrix.

B. Base 80% of teacher evaluation on student performance, leaving the following options for local school districts to select from: keeping the current local measures; generating new assessments with performance-driven student activities (performance assessments, portfolios, scientific experiments, research projects); utilizing options like NYC Measures of Student Learning; and corresponding student growth measures.

C. Base the teacher observation category on NYSUT and UFT’s scoring ranges using their rounding up process rather than the percentage process.

III. Base no more than 10% of the teacher observation score on the work of external/peer evaluators, an option to be decided at the local district level, where decisions as to what training is needed will also be made.

IV. Develop weighting algorithms that accommodate the developmental stages for English Language Learners (ELL) and special needs (SWD) students. Testing of ELL students who have less than 3 years of English language instruction should be prohibited.

V. Establish a work group that includes respected experts and practitioners who are to be charged with constructing an accountability system that reflects research and identifies the most effective practices. In addition, the committee will be charged with identifying rubrics and a guide for assessing our progress annually against expected outcomes.

Our recommendations should allow flexibility for school systems to submit locally developed accountability plans that offer evidence of rigor, validity, and a theory of action that defines the system.

VI. Establish a work group to analyze the elements of the Common Core Learning Standards and Assessments to determine levels of validity, reliability, rigor and appropriateness of the developmental aspiration levels embedded in the assessment items.

No one argues against the notion of a rigorous, fair accountability system. We disagree on the implied theory of action that frames its tenets, such as firing educators instead of promoting a professional learning community that attracts and retains talented educators committed to ensuring our educational goals include preparing students to be contributing members committed to sustaining and improving the standards that represent a democratic society.

We find it important to note that researchers, who often represent opposing views about the characteristics that define effective teaching, do agree on the dangers of using the VAM student growth model to measure teacher effectiveness. They agree that effectiveness can depend on a number of variables that are not constant from school year to school year. Chetty, a professor at Harvard University who is often quoted as the expert in the interpretation of VAM, along with co-researchers Friedman & Rockoff, offers the following two cautions: “First, using VAM for high-stakes evaluation could lead to unproductive responses such as teaching to the test or cheating; to date, there is insufficient evidence to assess the importance of this concern. Second, other measures of teacher performance, such as principal evaluations, student ratings, or classroom observations, may ultimately prove to be better predictors of teachers’ long-term impacts on students than VAMs. While we have learned much about VAM through statistical research, further work is needed to understand how VAM estimates should (or should not) be combined with other metrics to identify and retain effective teachers.”i Linda Darling-Hammond agrees in a March 2012 Phi Delta Kappan article, cautioning that “none of the assumptions for the use of VAM to measure teacher effectiveness are well supported by evidence.”ii

We recommend that while the system is under review we minimize the disruption to local school districts for the 2015/16 school year and allow for a continuation of approved plans in light of the phasing in of the amended regulations.

Last year, Vicki Phillips, Executive Director for the Gates Foundation, cautioned districts to move slowly in the rollout of an accountability system based on Common Core Systems and advised a two year moratorium before using the system for high stakes outcomes. Her cautions were endorsed by Bill Gates.

We, the undersigned, wish to reach a collaborative solution to the many issues before us, specifically at this moment, the revisions to APPR. However, as we struggle with the limitations of the new law, we also wish to state that we are unwilling to forsake the ethics we value, thus this list of amendments.

Kathleen Cashin

Judith Chin

Catherine Collins

*Josephine Finn

Judith Johnson

Beverly L. Ouderkirk

Betty A. Rosa

Regent Josephine Finn said: *“I support the intent of the position paper.”

i Raj Chetty, John Friedman, Jonah Rockoff, “Discussion of the American Statistical Association’s Statement (2014) on Using Value-Added Models for Educational Assessment,” May 2014, retrieved from:

http://obs.rc.fas.harvard.edu/chetty/value_added.html. The American Statistical Association (ASA) concurs with Chetty et al. (2014): “It is unknown how full implementation of an accountability system incorporating test-based indicators, such as those derived from VAMs, will affect the actions and dispositions of teachers, principals and other educators. Perceptions of transparency, fairness and credibility will be crucial in determining the degree of success of the system as a whole in achieving its goals of improving the quality of teaching. Given the unpredictability of such complex interacting forces, it is difficult to anticipate how the education system as a whole will be affected and how the educator labor market will respond. We know from experience with other quality improvement undertakings that changes in evaluation strategy have unintended consequences. A decision to use VAMs for teacher evaluations might change the way the tests are viewed and lead to changes in the school environment. For example, more classroom time might be spent on test preparation and on specific content from the test at the exclusion of content that may lead to better long-term learning gains or motivation for students. Certain schools may be hard to staff if there is a perception that it is harder for teachers to achieve good VAM scores when working in them. Overreliance on VAM scores may foster a competitive environment, discouraging collaboration and efforts to improve the educational system as a whole.” David Morganstein & Ron Wasserstein, “ASA Statement on Using Value-Added Models for Educational Assessment,” published with license by the American Statistical Association, April 8, 2014, published online November 7, 2014: http://amstat.tandfonline.com/doi/abs/10.1080/2330443X.2014.956906. Bacher-Hicks, Kane, and Staiger (2014) likewise admit, “we know very little about how the validity of the value-added estimates may change when they are put to high stakes use. All of the available studies have relied primarily on data drawn from periods when there were no stakes attached to the teacher value-added measures.” Andrew Bacher-Hicks, Thomas J. Kane, Douglas O. Staiger, “Validating Teacher Effect Estimates Using Changes in Teacher Assignments in Los Angeles,” NBER Working Paper No. 20657, issued November 2014, 24-5: http://www.nber.org/papers/w20657.

ii Linda Darling-Hammond, “Can Value Added Add Value to Teacher Evaluation?” Educational Researcher 44 (March 2015): 132-37: http://edr.sagepub.com/content/44/2/132.full.pdf+html?ijkey=jEZWtoEsiWg92&keytype=ref&siteid=spedr.

This is a letter from a reader who learned that Sheri Lederman’s case against the New York State teacher evaluation system is going forward in court, despite the New York State Education Department’s effort to quash her lawsuit.


He writes:


My situation is very similar to Sheri’s. I am a reading teacher in a small rural district in upstate New York along the Pennsylvania border. Every year I receive an Effective rating on my APPR [the “annual professional performance review” for teachers and principals], even though my Growth score is a perfect 20 and my Teacher Evaluation score is a perfect 60. However, my Achievement score is a zero every year. I work with struggling readers. They generally receive scores in the teens on the pre-tests and generally score in the 50s on the post-tests (thus the excellent Growth score). However, scores in the 50s are still failing, so my Achievement score is always a zero. I tried to get my union and administrators to help, but no one has come up with a solution.


My administrators, coworkers, students, and I all know I am a more than effective teacher, but in the state of New York, I am just a few points away from being ineffective. I hope this court case goes quickly and helps end this inaccurate and unfair system.
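
To make the arithmetic in his letter concrete, here is a minimal sketch of how the three subcomponents he names would combine into a composite score. The 20-point growth, 20-point achievement, and 60-point observation split reflects how the components were commonly weighted at the time, and the rating bands in the sketch are approximate and shown only for illustration; they are assumptions, not official cutoffs.

```python
# A minimal sketch of the APPR composite arithmetic described in the letter above.
# The 20 (growth) + 20 (achievement) + 60 (observation) split follows the letter;
# the band cutoffs below are approximate and illustrative, not official figures.

def composite_rating(growth, achievement, observation):
    total = growth + achievement + observation
    if total <= 64:
        band = "Ineffective"
    elif total <= 74:
        band = "Developing"
    elif total <= 90:
        band = "Effective"
    else:
        band = "Highly Effective"
    return total, band

# The letter's numbers: perfect growth (20/20), perfect observation (60/60),
# and a zero on achievement because post-test scores, though much improved,
# are still below the passing cutoff.
print(composite_rating(growth=20, achievement=0, observation=60))  # (80, 'Effective')
```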

Superintendent Roy Montesano wrote a powerful letter describing the dangers of Governor Cuomo’s education plan.

He warned that the plan would create a permanent culture of high-stakes over-testing; that good teachers would be fired and the judgments of their principals disregarded; that local control would be eroded (he adds that no one could possibly believe that more control by Albany will improve the performance of the schools of Hastings-on-Hudson); and that the loss of local control would drag down high-performing districts like his own.

He invites everyone who agrees to sign the petition calling for the repeal of the Cuomo law. The link is included in his letter.

Download the full letter here.

Master teacher Sheri Lederman is suing the State of New York after having received a low rating on the state’s “growth” measure. Her husband Bruce is her lawyer. She has been teaching for 18 years and has earned her doctorate. While only 31% of the students in the state “passed” the Common Core tests as proficient, 66% of the students in Dr. Lederman’s class were proficient. But the state gave her a low rating because, by the state’s convoluted formula, the students did not “grow” enough in their test scores.
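
To see how a “growth” formula can give a low score to a teacher whose students are already near the top, here is a simplified, hypothetical sketch. It is not the state’s actual model; the single-variable prediction and all the numbers are assumptions chosen only to illustrate the mechanism.

```python
# A simplified, hypothetical growth-score sketch (NOT New York's actual formula).
# Idea: predict each student's current score from prior achievement using a
# statewide model, then average the residuals (actual minus predicted) per teacher.
import numpy as np

def fit_statewide_model(prior, current):
    # One-variable linear prediction of current score from prior score.
    return np.polyfit(prior, current, 1)  # returns (slope, intercept)

def teacher_growth(model, class_prior, class_current):
    slope, intercept = model
    predicted = slope * np.asarray(class_prior, float) + intercept
    return float(np.mean(np.asarray(class_current, float) - predicted))

# Hypothetical statewide data: on average, students gain a few points.
state_prior   = [30, 45, 55, 65, 75, 85]
state_current = [35, 50, 62, 70, 80, 92]
model = fit_statewide_model(state_prior, state_current)

# A class of already-proficient students: nearly everyone passes, but the model
# "expects" them to score even higher, so the average residual is negative.
class_prior   = [80, 85, 88, 90]
class_current = [82, 86, 89, 90]
print(teacher_growth(model, class_prior, class_current))  # roughly -5: "low growth"
```

In a model built this way, proficiency itself never enters the score; only the gap between actual and predicted results does, which is how a class that is 66% proficient can still produce a low growth number.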

The New York State Education Department tried to get the lawsuit dismissed, but their effort was rejected and the case is moving forward.

One of the strengths of the Ledermans’ case is the excellent affidavits submitted by experts, as well as by parents and students. You can read the affidavits here. You will be informed by the expert statements of Linda Darling-Hammond, Audrey Amrein-Beardsley, Carol Burris, Aaron Pallas, and Brad Lindell.

Darling-Hammond says that Lederman’s rating is “utterly irrational.”

Amrein-Beardsley says that no VAM rating–given the current state of knowledge, or lack thereof–is sufficiently valid or fair to rate individual teachers.

You will find the testimony of parents and former students enlightening.

Mercedes Schneider reports that the National Council on Teacher Quality received a formal evaluation for the first time in its 15-year history, and the results are “not pretty.”

Created by the conservative Thomas B. Fordham Foundation/Institute to encourage alternative routes into teaching, NCTQ labored in obscurity for several years. Then, with the rise of the corporate reform movement, NCTQ became the go-to source for journalists looking for comments about how terrible teachers and teacher education are. It also became a recipient of Gates funding. (See its 2011 report on teacher evaluation in Los Angeles here.)

Now NCTQ issues an annual report published by U.S. News & World Report, rating the nation’s colleges of education and finding almost all of them to be substandard. Among its standards is whether the institution teaches the Common Core. It bases its ratings on course catalogues and reading lists, not on site visits. Some institutions, skeptical of NCTQ’s qualifications and motivation, have refused to cooperate or send materials.

NCTQ recently agreed to collaborate with professors at Vanderbilt University and the University of North Carolina to assess the quality and validity of NCTQ’s ratings of colleges of education. The bottom line: the ratings do not gauge or predict teacher quality.

The full study opens with these conclusions:

“In our analysis of NCTQ’s overall TPP ratings, we find that in one out of 42 comparisons the graduates of TPPs with higher NCTQ ratings have higher value-added scores than graduates of TPPs with lower ratings; in eight out of 30 comparisons graduates of TPPs with higher NCTQ ratings receive higher evaluation ratings than graduates of TPPs with lower NCTQ ratings. There are no significant negative associations between NCTQ’s overall TPP ratings and teacher performance. In our analysis of NCTQ’s TPP standards, out of 124 value-added comparisons, 15 of the associations are positive and significant and five are negative and significant; out of 140 teacher evaluation rating comparisons, 31 associations are positive and significant and 23 are negative and significant.

“With our data and analyses, we do not find strong relationships between the performance of TPP (teacher prep program) graduates and NCTQ’s overall program ratings or meeting NCTQ’s standards.”

What does it mean?

Gary Henry of Vanderbilt University was quoted here:

“The study also examined teacher evaluations but failed to establish a strong relationship between good teacher evaluations and NCTQ standards, according to Henry.

“The conclusion was the same,” Henry said. “Higher NCTQ ratings don’t appear to lead to higher performing teachers.”

I think that means the NCTQ ratings have no value in rating institutions or their graduates.

Can you believe how many millions, hundreds of millions, or billions of dollars have been diverted from America’s classrooms in the search for the elusive “bad teacher”? Lest we forget, this was imposed on the nation’s public schools by Race to the Top, and it is a central narrative of the reformster ideology. Find and fire those “bad teachers” and America’s economy will grow by trillions of dollars (so said Hoover Institution economist Eric Hanushek).

Except it turns out that no one has been able to find those hordes of “bad teachers.” They must be hiding. Or they must be good at test prep. In state after state, the hugely expensive teacher evaluation systems–burdened with statistically dubious methods–have been unable to unmask them.

Politico reports that 97% of teachers in New Jersey were found to be either effective or highly effective:

MOST NEW JERSEY TEACHERS RATED EFFECTIVE OR BETTER: Three percent of New Jersey teachers earned a rating of “partially effective” or “ineffective” under the state’s new teacher evaluation system, according to a report [http://bit.ly/1K5q30f ] out Monday. That’s up from the 0.8 percent of teachers rated “not acceptable” under the state’s old acceptable/not acceptable system. The 2,900 teachers rated poorly under the new system taught about 13 percent of the state’s students, or 180,000 kids. “Those educators are now on a path to improvement with individualized support, or will face charges of inefficiency if unable or unwilling to better serve students over time,” the report says. The vast majority of teachers earned high ratings, with nearly three-quarters rated “effective” and nearly a quarter “highly effective.” State officials stressed that teachers are now receiving more detailed and personalized feedback than ever before. “While one year of this new data is insufficient for identifying sustained trends or making sweeping conclusions about the state’s teaching staff, we are proud of this significant improvement and the personalized support all educators are now receiving,” said Peter Shulman, assistant commissioner of education and chief talent officer.

-The New Jersey Education Association said it still has “deep concerns” about the implementation of the evaluation system and the data used in decision-making, but “these results show that teachers are working very hard to meet and exceed expectations.” NJEA is calling for “disaggregated data for teachers with challenging assignments. It is important to know whether the evaluation system is biased against teachers who work in special education, teach English-language learners, or who work in economically challenged communities,” NJEA said. And the union pledged to represent any member who believes his or her evaluation is flawed: http://bit.ly/1AGLG56.

– The results come just days after Gov. Chris Christie denounced [http://politico.pro/1J5ySrL] the Common Core. In remarks [http://politico.pro/1QkQgHX ], Christie also stressed that the state must continue its push on teacher evaluations. “On this we will be unyielding,” he said. “No one should stand for anything less than an excellent teacher in every classroom – not parents, other teachers, administrators or our students. Accountability in every classroom must be one of the pillars of our New Jersey based higher standards.”

It is puzzling to see that 3% of the state’s teachers taught 13% of the state’s students. How is that possible? Maybe the teachers would do better with smaller classes.
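
For what it is worth, here is a back-of-the-envelope check of the numbers quoted above. The statewide totals are rough implications of the reported percentages, not official counts.

```python
# Back-of-the-envelope arithmetic from the figures quoted above (rounded; the
# implied statewide totals are rough estimates, not official counts).
poorly_rated_teachers = 2_900        # rated "partially effective" or "ineffective"
their_students        = 180_000      # students taught by those teachers

implied_total_teachers = poorly_rated_teachers / 0.03   # 3% of all teachers -> ~96,700
implied_total_students = their_students / 0.13          # 13% of all students -> ~1,385,000
students_per_teacher   = their_students / poorly_rated_teachers  # ~62 students each

print(round(implied_total_teachers), round(implied_total_students), round(students_per_teacher))
```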

The hunt goes on, even though the hunters keep coming up empty-handed.

Bruce Lederman, an attorney acting on behalf of his wife, experienced elementary school teacher Sheri Lederman, filed suit to challenge the state’s teacher evaluation system. The New York State Education Department sought to have the case thrown out. Today, the New York Supreme Court ruled that the lawsuit can go forward. Good for the Ledermans!

From Bruce Lederman:

The NY Supreme Court has denied a motion by the NY Education Department to dismiss the Lederman v. King lawsuit, in which an 18-year veteran Great Neck teacher has challenged a rating of “ineffective” based upon a growth score of 1 out of 20 points, even though her students performed exceptionally well on standardized tests.

This means that the NY Education Department must now answer to a Judge and explain why a rating which is irrational by any reasonable standard should be permitted to remain. The NY Education Department argued that Sheri Lederman lacked standing to challenge an “ineffective” rating on her growth score since her overall rating was still effective and she was not fired. A judge disagreed and determined that an ineffective rating on a growth score is an injury which she is entitled to challenge in Court.

Now, Sheri will have her day in Court. A hearing will likely be scheduled in August.