In this excerpt from her recent book, The Tyranny of the Meritocracy, Lani Guinier describes the tight linkage between standardized testing and family income. To the extent, then, that colleges rely on the SAT (or ACT) as a filter for college admission, they disproportionately screen out students who have not had the multiple advantages of living in affluence.
She cites data demonstrating that the SAT is of little value in predicting college performance, yet it effectively excludes students of color and students who are from low-income families.
Close to eight hundred colleges have decreased or eliminated reliance on high-stakes tests as the way to rank and sort students. In the current environment, however, moving away from merit by the numbers takes guts. The testing and ranking diehards, intent on maintaining their gate-keeping role, push back against, and even penalize, administrators who take such measures. The presidents of both Reed College and Sarah Lawrence College report experiencing forms of retribution for refusing to cooperate with the “ranking roulette.”
At the center of this conflict is the wildly popular US News & World Report’s annual college-rankings issue—the bible of university prestige. In the book Crazy U, Andrew Ferguson describes meeting Bob Morse, the director of data research for US News and the lead figure behind the publication’s college rankings. Morse, a small man who works in an unassuming office, is described by Ferguson as “the most powerful man in America.” And for good reason: students and parents often rely upon the rankings—reportedly produced only by Morse and a handful of other writers and editors—as a proxy for university quality. These rankings rely heavily on SAT scores for their calculations. Without such data available from, for example, Sarah Lawrence, which stopped using SAT scores in its admissions process in 2005, Morse calculated Sarah Lawrence’s ranking by assuming an average SAT score roughly 200 points below the average score of its peer group. How does US News justify simply making up a number? Michele Tolela Myers, the president of Sarah Lawrence at the time the school stopped using the SAT, reported that the reasoning behind the lowered ranking was explained to her this way: “[Director Morse] made it clear to me that he believes that schools that do not use SAT scores in their admission process are admitting less capable students and therefore should lose points on their selectivity index.”
This is the testocracy in action, an aristocracy determined by testing that wants to maintain its position even if it has to resort to fabrication. What is it they are so desperate to protect? The answer initially seems to be that the SAT can predict how well students will do in college and thus how well prepared they are to enter a particular school. There is a relationship between a student’s SAT score and his or her first-year college grades. The problem is that it is a very modest one. The relationship is positive, meaning it is greater than zero, but it is far weaker than what most people assume when they hear the term correlation.
In 2004, economist Jesse Rothstein published an independent study finding that a meager 2.7 percent of the variance in first-year college grades can be predicted by the SAT. The LSAT has a similarly weak correlation with actual achievement in law school. Jane Balin, Michelle Fine, and I did a study at the University of Pennsylvania Law School, where we looked at the first-year law school grades of 981 students over several years and then looked at their LSAT scores. It turned out that there was a modest relationship between their test scores and their grades: the LSAT predicted 14 percent of the variance in first-year grades. It did a little better the second year: 15 percent. Which means that 85 percent of the variation in grades was left unexplained. I remember being at a meeting with a person who at the time worked for the Law School Admission Council, which constructs the LSAT. When I brought these numbers up to her, she actually seemed surprised they were that high. “Well,” she said, “nationwide the test is nine percent better than random.” Nine percent better than random. That’s what we’re talking about….
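The gap between a correlation and the share of variance it explains can be made concrete with a small simulation. The sketch below uses entirely synthetic data; the target correlation of 0.37 is chosen only because 0.37² ≈ 0.14, the variance figure from the Penn study, and does not come from any real test data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical synthetic data: build "grades" from "test scores" plus
# independent noise so that the true correlation is about 0.37.
# Variance explained is the square of the correlation: 0.37^2 ≈ 0.14,
# i.e. the test accounts for roughly 14% of grade variance, leaving
# about 86% unexplained.
r_target = 0.37
scores = rng.standard_normal(n)
grades = r_target * scores + np.sqrt(1 - r_target**2) * rng.standard_normal(n)

r = np.corrcoef(scores, grades)[0, 1]
print(f"correlation ≈ {r:.2f}")          # near 0.37
print(f"variance explained ≈ {r**2:.2f}")  # near 0.14
```

A correlation of 0.37 sounds respectable, which is exactly the point: squaring it shows how little of the outcome it actually accounts for.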
Meaningful participation in a democratic society depends upon citizens who are willing to develop and utilize these three skills: collaborative problem solving, independent thinking, and creative leadership. But these skills bear no relationship to success in the testocracy. Aptitude tests do not predict leadership, emotional intelligence, or the capacity to work with others to contribute to society. All that a test like the SAT promises is a (very, very slight) correlation with first-year college grades.
But once you’re past the first year or two of higher education, success isn’t about being the best test taker in the room any longer. It’s about being able to work with other people who have different strengths than you and who are also prepared to back you up when you make a mistake or when you feel vulnerable. Our colleges and universities have to take pride not in compiling an individualistic group of very-high-scoring students but in nurturing a diverse group of thinkers and facilitating how they solve complex problems creatively—because complex problems seem to be all the world has in store for us these days.