Stephen Sawchuk notes in his blog at Education Week that a pattern is emerging from teacher evaluation programs: the highest ratings go disproportionately to teachers of advantaged students and the lowest ratings to teachers of disadvantaged students. He wonders whether this means the rating systems are biased against those who teach the neediest students, or whether schools with high numbers of disadvantaged students simply get the worst teachers.


I am reminded of the joint statement released a few years ago by the American Educational Research Association and the National Academy of Education, which predicted that those who taught the neediest students would get the lowest ratings because of factors beyond their control. Their schools are apt to get fewer resources than they need and have larger classes than are beneficial to students. They may have fewer science labs and computers, and their students are likelier to be ill and to have higher absentee rates because of inadequate medical care.


That report found that:


Even when the model includes controls for prior achievement and student demographic variables, teachers are advantaged or disadvantaged based on the students they teach. Several studies have shown this by conducting falsification tests that look at a teacher’s “effects” on students in grade levels before or after the grade in which he or she actually teaches them. Logically, for example, 5th grade teachers can’t influence their students’ 3rd grade test scores. So a VAM that identifies teachers’ true effects should show no effect of 5th grade teachers on their students’ 3rd grade test scores two years earlier. But studies that have looked at this have found large “effects,” which suggests that students have at least as much bearing on the value-added measure as the teachers who actually teach them in a given year.
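The logic of that falsification test can be illustrated with a toy simulation (the numbers and setup here are invented for illustration, not drawn from any of the studies). When students are tracked into classes by ability, a 5th grade teacher shows a large spurious “effect” on scores earned two years before the teacher ever met the students; under random assignment, the same calculation yields roughly zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_teachers = 2000, 20

# Persistent student ability drives test scores in every grade.
ability = rng.normal(0, 1, n_students)
grade3 = ability + rng.normal(0, 0.5, n_students)  # 3rd grade score

# Tracked (non-random) assignment to 5th grade teachers: students
# sorted by ability are grouped into classes, so classes differ
# systematically in ways no teacher caused.
order = np.argsort(ability)
teacher = np.empty(n_students, dtype=int)
teacher[order] = np.repeat(np.arange(n_teachers), n_students // n_teachers)

# Random assignment, for comparison.
teacher_rand = rng.integers(0, n_teachers, n_students)

def spurious_effects(assignment):
    """Each 5th grade teacher's apparent 'effect' on 3rd grade scores,
    which logically must be zero for a valid measure."""
    return np.array([grade3[assignment == t].mean() for t in range(n_teachers)])

print("spread under tracking:", spurious_effects(teacher).std())       # large
print("spread under random assignment:", spurious_effects(teacher_rand).std())  # near zero
```

The spread of these impossible “effects” under tracking is comparable to the spread of student ability itself, which is the pattern the studies report: class composition, not teaching, is doing much of the work in the value-added measure.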


One study that found considerable instability in teachers’ value-added scores from class to class and year to year examined changes in student characteristics associated with the changes in teacher ratings. After controlling for prior test scores of students and student characteristics, the study still found significant correlations between teachers’ ratings and their students’ race/ethnicity, income, language background, and parent education. Figure 2 illustrates this finding for an experienced English teacher in the study whose rating went from the very lowest category in one year to the very highest category the next year (a jump from the 1st to the 10th decile). In the second year, this teacher had many fewer English learners, Hispanic students, and low-income students, and more students with well-educated parents than in the first year.

This variability raises concerns that use of such ratings for evaluating teachers could create disincentives for teachers to serve high-need students. This could inadvertently reinforce current inequalities, as teachers with options would be well-advised to avoid classrooms or schools serving such students, or to seek to prevent such students from being placed in their classes.


So, do schools serving low-income students get worse teachers, or do teachers in low-income schools get smaller gains because it is harder to succeed when kids do not have the extra resources they need and are burdened with poverty? I would say it is some of both. For one thing, brand-new teachers are disproportionately placed in low-income schools, some having just finished their teacher training and others Teach For America (TFA) recruits who have only five weeks of training. First-year teachers are likely to be less successful than experienced teachers. At the same time, it is harder to get big test score gains in schools with large numbers of students who don’t speak English and who have high needs.