John Thompson, historian and teacher in Oklahoma, has reviewed the work of economist Raj Chetty. You may recall that Chetty, a Harvard professor, was co-author of a study that purported to show that teachers could be evaluated by the test scores of their students. An effective teacher, one who raised test scores, would raise lifetime income, increase high school graduation rates, prevent teen pregnancies, and have lifelong effects on students. Chetty and his colleagues John Friedman and Jonah Rockoff were cited on the first page of the New York Times (before the study was peer-reviewed), appeared on the PBS NewsHour, and were hailed by President Obama in his State of the Union speech in 2012. Their study became the #1 talking point for those who thought that using test scores–their rise and fall–would be the best way to identify effective and ineffective teachers. As Professor Friedman told the New York Times, “The message is to fire people sooner rather than later.”


Critics thought the findings were fairly modest. Even the Times said:

The average effect of one teacher on a single student is modest. All else equal, a student with one excellent teacher for one year between fourth and eighth grade would gain $4,600 in lifetime income, compared to a student of similar demographics who has an average teacher. The student with the excellent teacher would also be 0.5 percent more likely to attend college.


That works out to about $115 a year over a 40-year career, or roughly $2 a week. But the Times then looked at the results in the aggregate and calculated that the gains for an entire class would total $266,000 over the lifetimes of its students, or millions of dollars in added income when multiplied by millions of classrooms. Pretty great stuff, even though it means only $2 a week for one student.
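The back-of-the-envelope arithmetic here is easy to verify. A minimal sketch (the inputs are the Times’ reported $4,600 lifetime gain and a 40-year career; the 52-week year is my assumption):

```python
# Check the per-student arithmetic behind the Times' figures.
lifetime_gain = 4600      # dollars over a lifetime, per the Times
career_years = 40         # assumed length of a working career

per_year = lifetime_gain / career_years
per_week = per_year / 52  # assuming 52 weeks per year

print(f"${per_year:.0f} per year")   # $115 per year
print(f"${per_week:.2f} per week")   # $2.21 per week
```

Spread across a career, the headline number amounts to pocket change per student; the larger figures come only from aggregating across whole classes and millions of classrooms.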


The Obama administration bought into the Chetty-Friedman-Rockoff thesis whole-heartedly. Fire teachers sooner rather than later. Use test scores to find out who is a great teacher, who is a rotten teacher. It all made sense, except that it didn’t work anywhere. The scores bounced around. A teacher who was great one year was ineffective the next year; and vice versa. Teachers were rated based on the scores of students they never taught. Tests became the goal of education rather than the measure. It was a plague of madness that overcame public education across the land, embedded in Race to the Top (2009) and certified by Ivy League professors.


Thompson writes:


As it becomes clearer that value-added teacher evaluations are headed for the scrap heap of history, true believers in corporate reform continue to respond with the same old soundbites about the ways that their statistical models (VAMs) can be valid and reliable under research conditions. But they continue to ignore the real issue and offer no evidence that VAMs can be made reliable and valid for evaluating real individuals in real schools.


Gates Foundation scholar Dan Goldhaber recently replied to the American Educational Research Association (AERA) statement which “cautions against VAM being used to have a high-stakes, dispositive weight in evaluations.” His protest recalls the special pleading of VAM advocates Raj Chetty, John Friedman, and Jonah Rockoff in reply to the American Statistical Association’s (ASA) 2014 statement warning about the problems with using VAMs for teacher evaluations.


Goldhaber criticizes the AERA by citing a couple of studies that use random samples to defend the claim that test score gains can be causally linked to a teacher’s performance. Using random samples makes research easier, but it also makes those studies irrelevant to real-world policy questions. Goldhaber then cites Chetty et al. and their claim that low-stakes 1990s test scores resulted in the increased income of individuals during the subsequent economic boom in New York City during the 2000s.


Interestingly, Chetty’s rebuttal of the ASA cited the same two random sample studies, as well as his own research that was cited by Goldhaber. Like Goldhaber and other value-added proponents, he acknowledged the myriad of problems with value-added evaluations, but added, “School administrators, teachers, and other relevant parties can be trained to understand how to interpret a VAM estimate properly, including measures of precision as well as the assumptions and limitations of VAM.”


That raises two other concerns. First, if educators should be trained in the arcane methodologies, assumptions, and limitations of regression studies in order to use VAMs, should economists not be trained in the logistics of schools so they can conduct research that is relevant to education policy? Second, even if they ignore the nuts and bolts of schools, isn’t it strange that Chetty and his colleagues ignore economic factors when explaining economic effects? Why are they so sure that education – not economic forces – explains economic outcomes?


These questions become particularly interesting when reading Chetty’s web site. If he were really committed to using his Big Data methodology to help improve schools and students’ subsequent economic outcomes, would he not engage in a conversation with practitioners, and ground his methods in reality, so information from his models could be used to improve schools? After all, architects run plenty of quantitative structural analyses of their construction projects, but they also interview their clients and listen to how they will use their buildings.


Chetty could have gone back and learned what he didn’t know about schools before he joined in the social engineering experiment known as school reform. Instead, he is rushing off to promote policies for problems that seem to be equally beyond his realm of knowledge. And he seems equally incurious about the new people he wants to “nudge” into better behavior. His method for studying anti-poverty policy is to ignore what actually happens in schools and communities and to “treat behavioral factors like any other modeling decision, such as assuming time-separable or quasi-linear utility.” The goal of his new project is to create incentives so that policy-makers can rid poor people, especially, of their “loss aversion, present bias, mental accounting, [and] inattention” so they will move to better places.


I’m not an expert on Chetty’s new Equality of Opportunity Project, but my reading of the evidence is that Robert Putnam, who combines qualitative and quantitative research to document the decline of social mobility, makes a much stronger case than Chetty, who believes social and economic mobility hasn’t declined. It seems to me that Putnam is right and that we must take a generational view in order to show that economic opportunity for the poor has been reduced. I also believe that Derek Thompson nails the case that each generation since the first half of the Baby Boom has seen economic deterioration.


I can’t help wondering why Chetty doesn’t stop scurrying around complex social issues, pontificating on simplistic quick fixes, and instead study issues in depth. He seems more intent on promoting his Big Data methods, and on defeating traditional social science, than on actually solving real-world problems. Chetty (and other VAM true believers?) appear preoccupied with academic combat against traditional social scientists who still respect falsifiable hypotheses and peer review. Education and child poverty appear to be just the battlegrounds for that combat.


Traditional school improvement was based on the imperfect process of drawing upon the scientific method to diagnose problems, on policy debates, and on the imperfect democratic process known as compromise. To do that, educators and researchers studied the history and nature of the causes and effects of underperformance. Corporate reform sought the opposite. Rather than study and debate the nature of our schools’ shortcomings, problems, and solutions, the contemporary reform movement attempted a series of bank shots. Ignoring their actual targets, reformers sought incentives and disincentives that would prompt others to devise solutions. The job of economists’ regression studies was to suggest rewards and punishments that would make educators improve.


An illustration of Chetty’s disdain for evidence-based, collaborative conversations about school improvement is the first graph on his web site. It shows the surge in student test score growth that occurs when a “High VA Teacher Enters” and replaces a low performer. Had Chetty sought to articulate a hypothesis, or to discuss how his hypothesis, if proven, could improve teacher quality, he would have addressed these issues. But the graphic resembles a political attack ad more than a presentation of evidence for school improvement.


Chetty’s graphic is strangely opaque about what he means by “high VA” teachers or how many of them there are. In fact, those gains he showcased are the educational equivalent of a White Rhinoceros.
Chetty emphasizes the incredible size of his database. His data spans the school years 1988-1989 through 2008-2009 and covers roughly 2.5 million children in grades 3-8. Because there are 974,686 unique students in the dataset, his PowerPoints seem impressive. But it is extremely difficult to find the key number that a traditional social scientist would have volunteered at the beginning of a study. Chetty’s graphs that illustrate such dramatic gains are based on samples as small as 1,135. In other words, about 12 to 17 of these top-performing New York City teachers transferred, per year, into low value-added classrooms.
Chetty doesn’t ask why such transfers are so rare. Moreover, he makes it extremely difficult for a reader to learn the most important facts that would prompt that essential question and a constructive discussion of solutions. Instead, he indicates that the answer is using VAMs to fire low-performing teachers and, without evidence, he implies that there are enough top-5% teachers who would respond to modest incentives and transfer to those low value-added classrooms. Otherwise, Chetty’s work on transfers might earn him academic awards, but it is just theory, irrelevant to real-world policy.
Sadly, it looks like Chetty’s new studies are equally simplistic. The problem, he implies, is not that the economic ladder out of poverty is broken. The problem is getting poor families to move from places without opportunity to places where there is opportunity. So, we in Oklahoma City should forget that Supply Side economics incentivized the mass transfer of good-paying jobs to the exurbs. In Oklahoma County, where poor children’s economic opportunity is in the bottom 17% of the nation, we should incentivize the movement of poor families to Cleveland County where social mobility hasn’t declined.
Presumably, the additional good-paying jobs for the influx of poor families would magically appear. In other words, Chetty’s logic on moving to opportunity is the first cousin of his faith that top teachers will flock to the inner city because they want to be evaluated with an algorithm which is biased against inner city teachers.
I wish I didn’t feel compelled to sound so sarcastic. I really do. But, for every complicated question, there is an answer that is quick, simple, and wrong. Why are Chetty et al. so quick to conclude that it is schools – not the totality of market and historical forces – that drive economic outcomes? Even though the market has undermined the futures of poor families, why does he remain convinced that it can fix schools?
And, the inconsistencies of Chetty and other corporate reformers drive me up the wall. He now proclaims, “We find that every year of exposure to a better environment improves a child’s chances of success.” Were he consistent, Chetty might understand that exposure to education environments might improve his chance of studying education in a way that improves his chances of successfully helping students.
Why does Chetty not take the time to understand the environments of poor children, and build better school environments? Why not help create learning environments that would attract high value-added teachers, not drive them out of the profession? Rather than demand that teachers and poor families learn to look at their worlds the way Chetty does, why not listen to the people he says he wants to help?