For many years, I was a staunch advocate of standardized testing. But I lost my enthusiasm after spending seven years on the governing board of NAEP (the National Assessment of Educational Progress). NAEP is the federal test administered every two years to measure academic progress in reading and math, as well as in other subjects. The test takers are randomly selected; not every student answers the questions on any given test. There are no stakes attached to NAEP scores for any student, teacher, or school. The scores are reported nationally, by state, and for nearly two dozen urban districts. NAEP is useful for gauging trends.

Why did I lose faith in the value of standardized testing?

First, over the course of my term, I saw questions that had more than one right answer. A thoughtful student might easily select the “wrong” answer. I also saw questions where the “right” answer was wrong.

Second, it troubled me that test scores were so highly correlated with socioeconomic status. Invariably, the students from families with the highest income had the highest scores. Those from the poorest families had the lowest scores.

Third, that observation spurred me to look more closely at the correlation between family wealth and test scores. I saw it in the results of every standardized test, be it the SAT, the ACT, or international tests. I wondered why we were spending so much money to tell us what we already knew: rich kids have better medical care, fewer absences, better nutrition, and more secure and stable housing, and they are less likely to be exposed to vermin, violence, and other health hazards.

Fourth, when I read books like Daniel Koretz’s “Measuring Up” and “The Testing Charade” and Todd Farley’s “Making the Grades: My Misadventures in the Standardized Testing Industry,” my faith in the tests dissipated to the vanishing point.

Fifth, when I realized that the results of the tests are not available until late summer or fall, when the student has a new teacher, and that the tests offer no diagnostic information because the questions and answers are kept secret, I concluded that the tests had no value. They were akin to a medical test whose results arrive four months after you see the doctor, and which tells you only how you rank compared to others, with no diagnostic information about what needs treatment.

So, all of this is background to a recent study that you might find useful in assessing the value of standardized tests:

Jamil Maroun and Christopher Tienken have written a paper that will help you understand why standardized testing is fatally flawed. The paper is available on the web; its title is:

The Pernicious Predictability of State-Mandated Tests of Academic Achievement in the United States

Here is the abstract:

The purpose of this study was to determine the predictiveness of community and family demographic variables related to the development of student academic background knowledge on the percentage of students who pass a state-mandated, commercially prepared, standardized Algebra 1 test in the state of New Jersey, USA. This explanatory, cross-sectional study utilized quantitative methods through hierarchical regression analysis. The results suggest that family demographic variables found in the United States Census data related to the development of student academic background knowledge predicted 75 percent of schools in which students achieved a passing score on a state standardized high school assessment of Algebra 1. We can conclude that construct-irrelevant variance, influenced in part by student background knowledge, can be used to predict standardized test results. The results call into question the use of standardized tests as tools for policy makers and educational leaders to accurately judge student learning or school quality.

The paper was peer-reviewed. It was published last week.
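
For readers curious about what a hierarchical (blockwise) regression of the kind the abstract describes looks like in practice, here is a minimal sketch in Python. The variable names (pct_lunch, median_income, pct_bachelors) and the synthetic data are my own illustrative assumptions, not the authors' dataset or model; the point is only to show how demographic predictors are entered in blocks and how the change in explained variance is examined.

```python
# Hypothetical sketch of a hierarchical (blockwise) regression: Census-style
# demographic variables entered in blocks to predict the percentage of students
# passing an Algebra 1 test. Variable names and data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_schools = 300

# Synthetic school-level data (hypothetical variables)
df = pd.DataFrame({
    "pct_lunch": rng.uniform(0, 100, n_schools),      # % free/reduced-price lunch
    "median_income": rng.normal(70, 25, n_schools),   # household income, $1000s
    "pct_bachelors": rng.uniform(5, 70, n_schools),   # % adults with a BA
})

# Fabricated outcome: pass rate driven largely by the demographic variables
noise = rng.normal(0, 8, n_schools)
df["pct_passing"] = (
    80 - 0.4 * df["pct_lunch"] + 0.2 * df["median_income"]
    + 0.3 * df["pct_bachelors"] + noise
).clip(0, 100)

# Block 1: economic variables only
X1 = sm.add_constant(df[["pct_lunch", "median_income"]])
m1 = sm.OLS(df["pct_passing"], X1).fit()

# Block 2: add parental education, then compare explained variance
X2 = sm.add_constant(df[["pct_lunch", "median_income", "pct_bachelors"]])
m2 = sm.OLS(df["pct_passing"], X2).fit()

print(f"Block 1 R^2: {m1.rsquared:.3f}")
print(f"Block 2 R^2: {m2.rsquared:.3f}  (change: {m2.rsquared - m1.rsquared:.3f})")
```

If community demographics alone explain most of the variation in pass rates, as this kind of analysis tests, then the test scores are telling us more about the neighborhood than about the teaching.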