Archives for category: Darling-Hammond, Linda

A group of educators asked me to post this petition. You may recall that Linda Darling-Hammond was President Obama’s education adviser when he ran for office in 2008.

Dear Concerned Educators,
The Coalition for Justice in Education (CJE) has started the petition “President Barack Obama: Replace Secretary of Education, Arne Duncan, with Dr. Linda Darling-Hammond” and needs your help to get it off the ground. CJE believes it’s time that we let President Obama know how we feel about the education policies he and Mr. Duncan are enacting.
Will you take 30 seconds to sign it right now? Here’s the link:

http://www.change.org/petitions/president-barack-obama-replace-secretary-of-education-arne-duncan-with-dr-linda-darling-hammond?utm_source=guides&utm_medium=email&utm_campaign=petition_created

Here’s why it’s important:
The Obama/Duncan education policies have failed to create significant, positive changes that help students become proficient citizens and scholars.
You can sign CJE’s petition by clicking here.
Thanks!

Dan Drmacich, Chairman
Coalition for Justice in Education

P.S.: Please help with this effort by sending it to friends & organizations who might sign it as well. Thanks for your help!

Yesterday, Secretary of Education Arne Duncan had a story in the Huffington Post extolling his work in building respect for the teaching profession.

He has accomplished this, he says, by insisting that teachers be evaluated based on the test scores of their students.

Exhibit A of his success, he says, is Tennessee. Mr. Duncan relies on a report by Kevin Huffman, the state commissioner of education (former PR director for TFA, now employed by one of the nation’s most conservative governors).

The report says that since Tennessee won Race to the Top funding in 2010, it has seen remarkable results because it is now using test scores as 50% of teachers’ evaluations.

Leave aside for the moment the fact that leading researchers (like Linda Darling-Hammond of Stanford University), as well as the National Academy of Education and the American Educational Research Association, say that these value-added measures are inaccurate, unreliable, and unstable.

It is simply bizarre to boast about a one-year change in state test scores. It has long been obvious that state test scores are less reliable than NAEP and that a single year of data is not evidence of any real change.

According to NAEP, the scores for Tennessee in both reading and math were flat from 2009 to 2011. Perhaps Secretary Duncan should wait for the release of the 2013 NAEP before boasting about the dramatic gains in Tennessee.

In the meantime, I urge Secretary Duncan and his staff, and Commissioner Huffman, to read the joint statement of the National Academy of Education and the American Educational Research Association on value-added testing and its misuse in evaluating teachers. It is called “Getting Teacher Evaluation Right.” I am sure that the Secretary agrees that policy should be informed by research.

Here is the executive summary:

Consensus that current teacher evaluation systems often do little to help teachers improve or to support personnel decision making has led to a range of new approaches to teacher evaluation. This brief looks at the available research about teacher evaluation strategies and their impacts on teaching and learning.

Prominent among these new approaches are value-added models (VAM) for examining changes in student test scores over time. These models control for prior scores and some student characteristics known to be related to achievement when looking at score gains. When linked to individual teachers, they are sometimes promoted as measuring teacher “effectiveness.”

Drawing this conclusion, however, assumes that student learning is measured well by a given test, is influenced by the teacher alone, and is independent of other aspects of the classroom context. Because these assumptions are problematic, researchers have documented problems with value-added models as measures of teachers’ effectiveness. These include the facts that:

1. Value-Added Models of Teacher Effectiveness Are Highly Unstable: Teachers’ ratings differ substantially from class to class and from year to year, as well as from one test to the next.

2. Teachers’ Value-Added Ratings Are Significantly Affected by Differences in the Students Who Are Assigned to Them: Even when models try to control for prior achievement and student demographic variables, teachers are advantaged or disadvantaged based on the students they teach. In particular, teachers with large numbers of new English learners and others with special needs have been found to show lower gains than the same teachers when they are teaching other students.

3. Value-Added Ratings Cannot Disentangle the Many Influences on Student Progress: Many other home, school, and student factors influence student learning gains, and these matter more than the individual teacher in explaining changes in scores.

Other tools have been found to be more stable. Some have been found both to predict teacher effectiveness and to help improve teachers’ practice. These include:

  • Performance assessments for licensure and advanced certification that are based on professional teaching standards, such as National Board Certification and beginning teacher performance assessments in states like California and Connecticut.
  • On-the-job evaluation tools that include structured observations, classroom artifacts, analysis of student learning, and frequent feedback based on professional standards.

In addition to the use of well-grounded instruments, research has found benefits of systems that recognize teacher collaboration, which supports greater student learning.

Finally, systems are found to be more effective when they ensure that evaluators are well-trained, evaluation and feedback are frequent, mentoring and coaching are available, and processes, such as Peer Assistance and Review systems, are in place to support due process and timely decision making by an appropriate body.

And here is a short summary of the report by Linda Darling-Hammond.

One of the wisest and sanest voices in the nation on the subject of teacher quality, teaching quality, and teacher evaluation is Linda Darling-Hammond of Stanford University. Linda has been involved for many years in studying these issues and working directly with teachers to improve practice. During the presidential campaign of 2008, she was Barack Obama’s spokeswoman and chief adviser on education, but was elbowed aside by supporters of Arne Duncan when the campaign ended. The Wall Street hedge fund managers who call themselves Democrats for Education Reform (they use the term “Democrats” to disguise the reactionary quality of their goals) recommended Duncan to the newly elected president, and you know who emerged on top.

Linda, being the diligent scholar that she is, continued her work and continued to write thoughtful studies about how to improve teaching.

After the 2008 election, the issue that dominated all public discussion was how to evaluate teachers. This was no accident. Consider that in the fall of 2008, the Gates Foundation revealed its decision to drop its program of breaking up large high schools. Recall that the foundation had invested $2 billion in breaking up big schools into small schools and had persuaded some 2,500 high schools to do so, and then its researchers told the foundation that the students in the small high schools were not getting any better test scores than those in the large high schools.

Gates needed another big idea. He decided that teacher quality was the big idea. So he invested hundreds of millions of dollars in a tiny number of districts to learn how to evaluate teachers, including thousands of hours of videotapes. Where Gates went, Arne Duncan followed. The new Obama administration put teacher quality at the center of the $5 billion Race to the Top. If states wanted to be eligible for the money, they had to agree to judge teachers–to some considerable degree–by the test scores of their students. That is, they had to use value-added assessment, a still-unformed methodology, in evaluating teachers.

In response to Race to the Top and Arne (“What’s there to hide?”) Duncan’s advocacy, many states have now passed laws–some extreme and punitive–directly tying teachers’ tenure, pay, and longevity to test scores.

No other nation in the world is doing this, at least none that I know of.

The unions have negotiated to reduce the impact of value-added systems but have not directly confronted their legitimacy.

After much study and deliberation, Linda Darling-Hammond decided that value-added did not work and would not work, and would ultimately say more about who was being taught than about the quality of the teacher.

The briefest summary of her work appears in an article in Education Week here.

She recently published a full research report. Here is a capsule summary of her team’s findings about the limitations of value-added assessment:

“Measuring Student Learning

There is agreement that new teacher evaluation systems should look at teaching in light of student learning. One currently popular approach is to incorporate teacher ratings from value-added models (VAM) that use statistical methods to examine changes in student test scores over time. Unfortunately, researchers have found that:

1. Value-Added Models of Teacher Effectiveness Are Highly Unstable:

Teachers’ ratings differ substantially from class to class and from year to year, as well as from one test to the next.

2. Teachers’ Value-Added Ratings Are Significantly Affected by Differences in the Students Assigned to Them: Even when models try to control for prior achievement and student demographic variables, teachers are advantaged or disadvantaged based on the students they teach. In particular, teachers with large numbers of new English learners and students with special needs have been found to show lower gains than the same teachers when they are teaching other students. Teachers who teach low-income students are disadvantaged by the summer learning loss their children experience between spring-to-spring tests.

3. Value-Added Ratings Cannot Disentangle the Many Influences on Student Progress: Many other home, school, and student factors influence student learning gains, and these matter more than the individual teacher in explaining changes in scores.”

The application of misleading, inaccurate, and unstable measures serves mainly to demoralize teachers. Many excellent teachers will leave the profession in frustration. There will be churn as teachers come and go, some mislabeled, some just disgusted by the utter lack of professionalism of these methods.

The tabloids will yelp and howl as they seek the raw data to publish and humiliate teachers. Even those rated at the top (knowing that next year they might be at the bottom) will feel humiliated to see their scores in the paper and online.

This is no way to improve education.

Diane

http://www.edweek.org/ew/articles/2012/03/01/kappan_hammond.html

http://edpolicy.stanford.edu/sites/default/files/publications/creating-comprehensive-system-evaluating-and-supporting-effective-teaching_1.pdf