James Harvey recently retired as executive director of the National Superintendents’ Roundtable. He is a member of the board of the Network for Public Education. In this post, he describes how the benchmarks used by the National Assessment of Educational Progress are misused to attack American education. The “achievement levels” were created in 1990 when Chester Finn Jr., an enemy of public schools, was chair of the National Assessment Governing Board. They were designed to make American student achievement look worse than it was. The media and the public think that “proficient” means “grade level.” It does not. It is equivalent to a solid A. Yet how many hundreds or thousands of times (e.g. the charter propaganda film “Waiting for Superman”) have you been told that most American students score “below grade level”? It’s not true. To be blunt, it’s a lie.
James Harvey wrote on Valerie Strauss’s “Answer Sheet” blog at The Washington Post:
Every couple of years, public alarm spikes over reports that only one-third of American students are performing at grade level in reading and math. No matter the grade — fourth, eighth or 12th — these reports claim that tests designed by the federal government, the National Assessment of Educational Progress (NAEP), demonstrate that our kids can’t walk and chew gum at the same time. It’s nonsense.
In fact, digging into the data on NAEP’s website reveals, for example, that 81 percent of American fourth-graders are performing at grade level in mathematics. Reading? Sixty-six percent. How could this one-third distortion come to be so widely accepted? Through a phenomenon that Humpty Dumpty described best to Alice in “Through the Looking Glass”: “When I use a word it means just what I choose it to mean.”
Here, the part of Humpty Dumpty was played by Reagan-era political appointees to a policy board overseeing NAEP. The members of the National Assessment Governing Board, most with almost no grounding in statistics, chose to define the term “proficient” as a desirable goal in the face of expert opinion that such a goal was “indefensible.”
Here’s a typical account from the New York Times in 2019 reporting on something that is accurate as far as it goes: results from NAEP indicate that only about one-third of fourth- and eighth-graders are “proficient” in reading.
But that statement quickly turns into the misleading claim that only one-third of American students are on grade level. The 74, for example, obtained $4 million from the Walton and DeVos foundations in 2015 by insisting that “less than half of our students can read or do math at grade-level.”
The claim rests on a careless conflation of NAEP’s “proficient” benchmark with grade-level performance. The NAEP assessment sorts student scores into three achievement levels — basic, proficient, and advanced. The terms are mushy and imprecise. Still, there’s no doubt that the federal test makers who designed NAEP see “proficient” as the desirable standard, what they like to describe as “aspirational.”
However, as Peggy Carr from the National Center for Education Statistics, which funds NAEP, has said repeatedly, if people want to know how many students are performing at grade level, they should be looking at the “basic” benchmark. By that logic, students at grade level would be all those at the basic level or above, which is to say that grade-level performance in reading and mathematics in grades 4, 8 and 12, is almost never below 60 percent and reaches as high as 81 percent.
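To make the arithmetic concrete, here is a minimal sketch in Python (an illustration added here, not part of Harvey's article). The breakdown by achievement level is hypothetical; only the resulting 81 percent at or above Basic happens to match the fourth-grade math figure cited above, and NAEP publishes the real distributions on its website.

```python
# Illustrative (not official NAEP) percentages of students at each achievement
# level for one grade and subject. NAEP reports the actual distributions online.
levels = {"below_basic": 19.0, "basic": 40.0, "proficient": 32.0, "advanced": 9.0}

def at_or_above(levels, cutoff):
    """Cumulative percent of students at or above the named achievement level."""
    order = ["below_basic", "basic", "proficient", "advanced"]
    return sum(levels[name] for name in order[order.index(cutoff):])

print(at_or_above(levels, "proficient"))  # 41.0 -- the figure usually misreported as "grade level"
print(at_or_above(levels, "basic"))       # 81.0 -- the figure Carr points to for grade-level work
```

The same score distribution supports both headlines; the only thing that changes is which benchmark a writer chooses to call "grade level."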
And the damage doesn’t stop with NAEP. State assessments linked to NAEP’s benchmarks amplify this absurd claim annually, state by state.
While there’s plenty to be concerned about in the NAEP results, anxiety about the findings should focus on the inequities they reveal, not the proportion of students who are “proficient.”
Considering the expenditure of more than a billion dollars on NAEP over 50-odd years, one would expect that NAEP could defend its benchmarks by pointing to rock-solid studies of their validity and the science behind them. It cannot.
Instead, the department has spent the better part of 30 years fending off a scientific consensus that the benchmarks are absurd. Indeed, the science behind these benchmarks is so weak that Congress insists that every NAEP report include the following disclaimer: “[The Department of Education] has determined that NAEP achievement levels should continue to be used on a trial basis and should be interpreted with caution” (emphasis added).
Criticisms of the NAEP achievement levels
What is striking in reviewing the history of NAEP is how easily its policy board has shrugged off criticisms of the standards-setting process. The critics constitute a roll call of the statistical establishment’s heavyweights. The National Academy of Education, the Government Accountability Office, the National Academy of Sciences, and the Brookings Institution have all issued scorching complaints that the benchmark-setting processes were “fundamentally flawed,” “indefensible,” and “of doubtful validity,” while producing “results that are not believable.”
How unbelievable? Fully half the 17-year-olds maligned as being just basic by NAEP obtained four-year college degrees. About one-third of Advanced Placement Calculus students, the crème de la crème of American high school students, failed to meet the NAEP proficiency benchmark. While only one-third of American fourth-graders are said to be proficient in reading by NAEP, international assessments of fourth-grade reading judged American students to rank as high as No. 2 in the world.
For the most part, such pointed criticism from assessment experts has been greeted with silence from NAEP’s policy board.
Proficient doesn’t mean proficient
Oddly, NAEP’s definition of proficiency has little or nothing to do with proficiency as most people understand the term. NAEP experts think of NAEP’s standard as “aspirational.” In 2001, two experts associated with the National Assessment Governing Board (NAGB) made it clear that:
“[T]he proficient achievement level does not refer to “at grade” performance. Nor is performance at the Proficient level synonymous with ‘proficiency’ in the subject. That is, students who may be considered proficient in a subject, given the common usage of the term, might not satisfy the requirements for performance at the NAEP achievement level.”
Lewis Carroll’s insight into Humpty Dumpty’s hubris leads ineluctably to George Orwell’s observation that “[T]he slovenliness of our language makes it easier for us to have foolish thoughts.”
NAEP and international assessments
NAEP’s proficiency benchmark might be more convincing if most students abroad could handily meet it. That case cannot be made. Sophisticated analyses conducted between 2007 and 2019 show that not a single nation can demonstrate that even 50 percent of its students clear the proficiency benchmark in fourth-grade reading, while only three could do so in eighth-grade math and one in eighth-grade science. NAEP’s “aspirational” benchmark is pie-in-the-sky on a truly global scale.
What to do?
NAEP is widely understood to be the “gold standard” in large-scale assessments. That appellation applies to the technical qualities of the assessment (sampling, questionnaire development, quality control and the like) not to the benchmarks. It is important to say that the problem with NAEP doesn’t lie in the assessments themselves, the students, or the schools. The fault lies in the peculiar definition of proficiency applied after the fact to the results.
Here are three simple things that could help fix the problem:
• The Department of Education should simply rename the NAEP benchmarks as low, intermediate, high, and advanced.
• The department should insist that the congressional requirement that these benchmarks be used on a trial basis and interpreted with caution figure prominently, not obscurely, in NAEP publications and on its website.
• States should revisit the decision to tie their “college readiness” standards to NAEP’s proficiency or advanced benchmarks. (They should also stop pretending they can identify whether fourth-graders are “on track” to be “college ready.”)
The truth is that the NAEP governing board lets down the American people by laying the foundation for this confusion. In doing so, board members help undermine faith in our government, already under attack for promoting “fake news.” The “fake news” here is that only one-third of American kids are performing at grade level.
It’s time the Department of Education made a serious effort to stamp out that falsehood.

“He who controls the language controls the masses.”
–Saul Alinsky, Rules for Radicals
Also:
He/she who controls the narrative wins.
And they have been winning for far too long! Word distortion, skewing of data/cherry picking and the omission of facts have been the political scheme for far too long. It’s a “nice” way of lying IMHO. Look at the damage this has caused to our country. It’s taken years to get to this level of madness.
And in the Age of Tech, “The one who controls the maths controls the masses”
RIGHT, he who controls the math and the messaging: those who do the coding and create the software bring their own understanding of the world right into the middle of the game.
YES and that is exactly and strategically why we are in the mess we are in with public education
I would be very happy if the topic of this post were discussed all day every day in every education journal in existence. The fact that every decision on and every discussion of education rests on bullsh-t is maddening.
LCT,
You are right. James Harvey’s dissection of NAEP proficiency levels is very important. I should have posted it all alone, with no other topic on the same day.
The “dissection of NAEP proficiency” is just more in the losing game of using the edudeformers’ language games.
Discussing/dissecting the NAEP results is nothing more than normal BS of the standards and testing malpractice regime. It’s all the same game and it’s guaranteed that ALL students will lose. NAEP is just as invalid as any other test in the standards and testing malpractice regime.
Duane, I totally agree. The ‘standards’ that an educational system needs can be assessed by the resulting society (20 or 30 years later). On that basis, I could conclude that relying on standardized test scores has been a miserable failure.
Of course, it did ‘privilege’ the usual ruling class and make money for them, but it destroyed the dominance of American Education as a model for the world to follow.
Standardized testing is the whip that wants to beat public schools to death. Perceived “failure” on tests is then sold by mainstream media. Privatizers do not care about accuracy. They care about a perception that will catapult their privatization message into public awareness. Privatizers are demagogues that spread propaganda against public education. They are flush with billionaire corporate cash that enables them to purchase influence all around the nation. They have an army of pawns in think tanks and foundations that continuously spread lies about public schools and teachers. Paper tiger metrics are part of the propaganda campaign.
While I am not an expert in statistics, I took a statistics course when I was certified to teach reading in New York. Here’s what I know about standardized testing. No test is totally valid. Proficiency as a benchmark is a subjective term that is being used as a club against students, teachers and schools. The tests based on the Common Core and the various state tests have never been subjected to the rigorous demands of the validation process. The NAEP is a test that is more difficult than most traditional standardized tests, so the “proficiency” level is higher than the 50% that was considered the midpoint on standardized tests in the past. Algorithms are not objective. They reflect the bias of whoever is designing the algorithm. They are a quasi-scientific method of snowing the public about students, teachers and schools, and the so-called ratings from Great Schools or Niche reflect this bias. Testing is all part of the propaganda machine paid for by the oligarchs that want to destroy democratic public schools.
“The one who controls the words controls the herds” or “The one who controls the words controls the worlds” — SomeDAM Poet
Was supposed to go above in reply to LisaM
The big problem with standardized tests is that we assume they actually test something. Yes, they do measure how well someone ‘did’ on that particular test; however, they do not predict future activity other than activity on similar tests. Furthermore, tests like the NAEP are designed to measure a ‘range’, and are, therefore, always going to show a ‘bell curve’. If everyone is ‘excellent’, then the scoring will be adjusted. And, of course, almost every competent teacher knows that in order to appear “accurate”, most of the questions are trivial because only such questions can be accurately ‘scored’.
The one thing you did learn in your statistics course is that there is no such thing as a ‘pure’ number, and, therefore, anyone who cites a measured number without a legitimate error estimate is not telling the whole truth. That’s why Gauss invented ‘standard deviation’, and that was to explain results from those who were trying to find the truth, not people trying to deceive for profit or power.
As a teacher, I was forced to give a ‘grade’. I didn’t like that. I taught math and physical science, and my object was to open a student’s mind to both inductive and deductive logic with a hope that they would find at least a more comfortable relationship to the world around them and a better chance of predicting future pitfalls. Even in ‘math’, it was instructive (to me) to learn HOW a student arrived at an answer (even if the ultimate one was ‘wrong’ based upon the ‘correct’ answer). Sometimes, as it turned out, the student who got the problem ‘right’ was less informed than the one who got it ‘wrong’. So, what are we measuring, or, more importantly, why are we measuring?
I always felt Socrates had the right idea. Generate dialogue, point out logical inconsistencies, and let the chips fall as they may. That way, not only can we ‘teach’, but we can also learn.
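As a small illustration of the point about error estimates (my own sketch, not anything NAEP or the commenter supplies), here is the textbook margin of error one would attach to a sampled percentage under simple random sampling; NAEP's clustered design would make the real standard errors larger, which only strengthens the argument.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from a
    simple random sample of size n (a clustered design would inflate this)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical "34% proficient" estimate based on 2,000 sampled students:
p, n = 0.34, 2000
print(f"{p:.0%} +/- {margin_of_error(p, n):.1%}")  # about 34% +/- 2.1%
```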
“anyone who cites a measured number without a legitimate error estimate is not telling the whole truth.”
Sometimes they do it because they just don’t understand science, but sometimes they do it to actually deceive.
I have often pointed out here that a “raw” number without an associated uncertainty is essentially meaningless.
Specifically, whenever someone (and it was often the same someone) posted “COVID positivity” rates for schools that were well below (sometimes by a factor of ten or more) the uncertainty (false negative rate) associated with even the most accurate COVID test (PCR), allegedly to “prove” that the COVID infection rate in schools was very low, I noted their error.
But alas, I feel that my attempts to explain that “it ain’t science without an uncertainty estimate” fell on deaf ears. I might just as well have been talking to a wall because the same claims kept appearing.
After the nth time, you start to wonder whether the person does not understand or simply does not want to understand.
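To put a number on why a raw “positivity” figure is hard to interpret without the test's error rates, here is a rough sketch using the standard Rogan-Gladen correction for imperfect sensitivity and specificity. The sensitivity and specificity values are hypothetical, since, as the comment notes, the actual testing protocols were rarely reported.

```python
def corrected_prevalence(observed, sensitivity, specificity):
    """Rogan-Gladen estimator: back out the true positive rate implied by an
    observed positivity, given assumed test sensitivity and specificity."""
    est = (observed + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(est, 0.0), 1.0)  # clamp to a valid proportion

observed = 0.02  # a hypothetical reported school positivity of 2%
for sens in (0.70, 0.85, 0.95):  # plausible sensitivity range, depending on test and timing
    print(sens, round(corrected_prevalence(observed, sens, specificity=0.995), 4))
# The implied true rate swings from roughly 1.6% to 2.2% just from the assumed
# sensitivity -- an uncertainty the raw "2%" headline never reports.
```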
Yeah… It’s enough to drive you to poetry, if only to maintain some degree of sanity.
We live in a ‘Strange Land’ (Heinlein reference).
Sadly, it’s not just people who are ignorant of science who are guilty of this.
Johns Hopkins University (who are supposed to be the experts) has, or at least had, an entire site chock full of “positivities” for all the states, and nowhere (at least not that I could find) did they have estimated uncertainties, probably because they did not actually know the details of testing protocols (tests used, etc.), which one HAS to have to estimate uncertainties. Some of the antigen tests can be wildly inaccurate (with 50% false negative rates or even higher depending on when in the infection cycle they are given), and even PCR can yield high false negative rates when it is performed early in the infection cycle when the viral load is low.
The only thing I could find about false negatives on the Johns Hopkins site was a general statement that COVID tests can give false negatives (duh), sans links to research about just what those might be.
Finally, I would also comment that trying to compare the “positivity” of one state to another when one has no idea of the testing details is really just a fool’s errand.
To say nothing of the fact that average rates don’t really mean much in this case.
Daedalus,
There is evidence that standardized test scores do predict more than simply future performance on standardized tests. The University of California Faculty Senate released a report in 2020 (https://senate.universityofcalifornia.edu/_files/committees/sttf/sttf-report.pdf) looking into this question. They found 1) standardized test scores did a slightly better job of predicting freshman grades at UC schools than high school grade averages, 2) adding test scores to high school grades greatly improves forecasts of retention, 3) improved forecasts from adding standardized test scores occur across all income levels, ethnic groups, and first generation college students, and 4) the lowest scoring students admitted to the University of California through their holistic admission process typically do not perform well. The report, while long at 118 pages plus appendices, is worth reading.
My department looked into this a few years ago. We found that the math SAT score was a better predictor of performance in our introductory economics courses than high school math grades. I expect that making standardized test scores optional, as my university along with many others have done, will result in higher DFW rates for our introductory course.
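For readers curious what “a better predictor” means in practice, here is a hypothetical sketch of the kind of comparison a department might run: fit each predictor separately and compare cross-validated R². The file name and column names are placeholders, not the UC report's data or its actual (more elaborate) methodology.

```python
# Hypothetical sketch: compare two predictors of intro-course grades.
# "intro_econ_grades.csv" and its columns are placeholders, not real data.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("intro_econ_grades.csv")
y = df["course_grade"]

for predictor in ("sat_math", "hs_math_gpa"):
    r2 = cross_val_score(LinearRegression(), df[[predictor]], y, cv=5, scoring="r2").mean()
    print(f"{predictor}: mean cross-validated R^2 = {r2:.3f}")
```

Even where one predictor wins this comparison, the increment can be modest, and (as a later comment points out) a fuller model would need to control for family income and other confounders before treating the test itself as the cause.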
Well, perhaps this helps your department; however, ‘economics’ has always been a dicey ‘science’. Yes, it does use math, but does that make it a science? It may describe, but can it predict? I’m reminded of the Greenspan testimony after the collapse in 2008.
One question involves the relationship between cause and effect. Do kids flunk out of economics because they did (somewhat) poorly on the math SAT, or was the curriculum structured in such a way as to eliminate students that did not conform to the SAT test?
And, just because kids flunk out of economics, does that mean that they are not productive and creative members of society later? Hopefully, most of those students simply change their major.
One problem with testing is that ‘tests’ label certain kids as being ‘dumb’, and, thus, discourage them from further study. This is not a problem with the NAEP since it doesn’t label individual students. However, ‘testing’ limits our human ‘capital’ as it advances the ‘elite’. The ‘predictive’ value of tests seems to be much more correlated with economic status than it is to any other factor, so you might want to put that into your economic hat and find out about the economic status of your ‘successful’ students. Is economic status as good a predictor as SAT scores? You can do the math.
The economics department.
Left Coast Teacher,
Perhaps you could tell us where the left coast faculty at the University of California made errors in their analysis. This would help in increasing the blog’s reputation as a place where evidence-based discussion takes place.
TE, if you think the blog does not respect evidence, why do you continue to read and comment? Surely you can find other places to comment.
Daedalus,
Did you read the report of the University of California Faculty Senate? I am interested in your opinions and any concerns you have with the methodology. Do you think the conclusions are not supported by the evidence?
A test of the “facility with mathturbation” would undoubtedly be the best predictor of grades in college econ classes — and acceptance of papers for publication in econ journals and award of (fake) “Nobel” prizes in economics as well.
And by the way, it’s spelled with a “th” rather than an “st” for a reason. Though they have things in common, the two are not the same.
In a nutshell, mathturbation is dressing up empty (usually economic) arguments in what appears to be legitimate (preferably fancy) math (especially statistics) in order to convince the uninitiated that the arguments are something more than pure unadulterated bullshit.
I think most people are probably familiar with the other word, so there is no need to define that here.
Make that a test of the “dexterity with mathturbation”
Maybe that’s why the SAT is a good predictor of college econ grades, because it already is such a test.
I do think it would be helpful if people commenting on this thread made specific criticisms of the linked University of California report.
TE, is this the only research ever done on the validity of the SAT? I don’t think so. Have you ever been a part of a committee reviewing standardized test questions? I have. They are loaded with errors. Errors in the questions. Errors in the answers. Errors in the scoring. Read Daniel Koretz.
I do think it would be helpful if economists like TE would stop claiming their field is a science and stop referring to the “Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel” as the “Nobel Prize in Economics”, which does not exist, never has and never will — snuck in as a marketing ploy 70 years after the fact by Sweden’s central bank to lend economics and economists credibility that they do not deserve.
“The Economics Nobel Isn’t Really a Nobel”
https://fivethirtyeight.com/features/the-economics-nobel-isnt-really-a-nobel/
“There Is No Nobel Prize in Economics”
https://www.alternet.org/2012/10/there-no-nobel-prize-economics/
Yep.
“Economics does not have a true Nobel Prize, so a central bank decided to create a near-beer variant. The central bankers have frequently made a hash of it, often awarding economists who got it disastrously wrong and inflicted policies that caused immense suffering. This year, not for the first time, the central bankers decided to hedge their bets – awarding their prize to economists who contradict each other (Eugene Fama and Robert Shiller). The hedge strategy might be thought to ensure that the central bank’s prize winners were right at least half the time (which would be an improvement over the central bankers’ batting average in their awards), but that is a logical error. It is perfectly possible for both of the prize winners to be wrong”
From “Economics could be a science if more economists were scientists” by William Black
https://neweconomicperspectives.org/2013/10/economics-science-economists-scientists.html
“A near-beer variant”
And that’s the assessment of an economist!
Ha ha ha
Yep.
And, I must say that I’ve read Adam Smith’s ‘Wealth of Nations’ cover to cover (few can make that claim), and I found it to be a worthy endeavor upon which to move toward making a ‘science’. Poor Adam was missing tons of data from most of the world (in particular the East and indigenous America); however, he did a masterful job with what he had. He laid the foundation for what might have become a science. But his work was misreported and misconstrued and co-opted by the ruling class to justify their own position. Much of what he actually said (or tried to say) has been erased by subsequent ‘economists’, and we are left with the specter of Greenspan presenting to Congress a future of endless economic well-being three months before the 2008 collapse.
If you can’t predict with a fair degree of accuracy, much better than a random guess, it ain’t a science in my book (no matter how much ‘mathterbation’ is involved).
I still think it would be a good idea to address the issues raised by the University of California report. If Dr. Ravitch is correct that the SAT is full of errors but still does a better job of predicting first year grades and retention at UC schools than grades, perhaps that means that high school grades are filled with even more errors than standardized tests. What do folks here think?
I dispute that the SAT is better at predicting first year grades or retention than grade point averages. Colleges and universities have many ways to judge whether students are prepared for their institution. The SAT is one of them, but far from the best one. The more colleges that go test-optional, the more the colleges exercise judgment about whom to admit. Most colleges admit everyone who applies. The elite colleges have more applicants than ever, and a more diverse pool of applicants after dropping the SAT requirement.
Daedalus,
I am happy that you are back commenting on your thread, and I am sorry it has been hijacked to discuss things other than your original claim that standardized tests “…do not predict future activity other than activity on similar tests.” I have provided some evidence that this view is mistaken. I hope that you could give an argument that the University of California Faculty Senate report is incorrect, or perhaps acknowledge that standardized test scores DO predict future activity, at least grades in University of California classes and retention at University of California campuses.
I recall that a professor at the University of Texas, Walter Stroup, published a study in 2020 that demonstrated that the only thing predicted by standardized tests is the ability to do well or poorly on standardized tests.
TE complaining about people hijacking threads is kinda like Mohammed Atta complaining about people hijacking planes.
Dr. Ravitch,
There is an unfortunate typo in one of your responses (https://dianeravitch.net/2022/05/24/james-harvey-the-lies-promoted-by-naeps-absurd-benchmarks/comment-page-1/#comment-3377038) that makes the meaning unclear.
The University of California report does speak to how standardized test scores are used in admissions. Do you find fault with the methodology?
You are quite right.
The opening typo distorts my view.
From research I have seen, the SAT does not predict 1st year grades better than GPA. Quite the reverse.
I will repair the typo when I get to a computer. I’m using my cell.
Daedalus,
I am sorry that I missed your response to my post. Perhaps you have email notification that will put your responses not at the bottom of the thread, but higher up.
I think students who can not do algebra very well have trouble with both the math SAT exam and introductory economics. I think they would have equal problems with other disciplines that use the language of mathematics: engineering, physics, and computer science come to mind but of course there are others as well.
Of course I made no suggestion that people who do not do well on the math SAT are not creative and valuable members of society. I simply pointed out that your statement that standardized test scores “…do not predict future activity other than activity on similar tests” is mistaken. Is it possible that you and NYC Public School Parent are related?
I, too, find it difficult to follow some of the discussions here due to the format. However, I can assure you that I have (many times) found myself in total disagreement with NYCT. Not always, of course. Often, I counsel that person to focus on sending out just a bit more compassion and to try to curb the vitriol; however, I’m sure NYCT doesn’t welcome my attempt at helpful criticism. We might be related, though, because we are teachers (at least I was).
As to Algebra…. The average person seldom uses it. I pursued a degree in Astronomy, and another in Physics, and I found Algebra useful, but now that I’m retired, I almost never use it. I think it’s a very clever invention (by the Indian culture, I think, transmitted to Europe like so many other insights by the Arabs). I didn’t say that Algebra was useless, however. It COULD be a platform to point out the ignorance of Europe during the Dark Ages, and manipulating symbols could be made into a rather entertaining game.
I once taught ‘remedial math’ (basically, Algebra for kids who had flunked Algebra the first time through). Once they understood that it was like a board or card game, they did pretty well. ‘Winning’ made them happy. Learn the rules, turn the crank, and ‘Bingo’.
You might wonder how someone with my background found himself teaching ‘remedial math’. Well, I decided I would rather help raise a new generation than use my ‘skills’ to kill people, or make the ruling class richer. Perhaps my big mistake was moving from Astronomy to Physics, but in both cases the ’employers’ knocking at the door always seemed to be related to killing people. However, once I found my niche teaching High School kids, I found myself to be very happy. I discovered this after doing a stint teaching pre-med students as a graduate student, and then medical students during a brief stint in a Biochemistry Dept.
I think you should consider the ‘chicken and egg’ thing. Do poor scores on the SAT Math ‘predict’? Perhaps. But are the courses set up to make sure that happens? Getting to Economics: remember Greenspan trying to explain how he could predict a rosy future as far as the eye could see a mere three months before an economic collapse? He said, “There must have been something wrong with my model.”
The human mind is not limitless. It can’t think like a dog’s nose, nor see like a hawk. Humans have only been around for (at most) a few million years, and the cockroach has been here for at least 100 times longer. Perhaps (if we are so smart) we should be looking to the cockroach, or the hawk, or the dog for intellectual guidance.
Read S.J. Gould’s “The Mismeasure of Man.”
Economists are easily impressed by, take great stock in, assume cause and effect for and often draw sweeping conclusions from nothing more than pure correlations — and often correlations that any real scientist would consider rather weak, to boot.
Mathturbation at its finest.
If that’s what gets them off, who are we to spoil the excitement?
Yeah. Reminds me of some math teachers I’ve known. They’re great at jargon (which they think makes them appear ‘smart’) but poor at sensing the needs of their students (which makes them bad teachers). And, many are so poorly prepared that they are barely more than a year or two ahead of their students. This explains the insecurity and pretense of superiority.
Now, as I’ve said, I’ve taught math (not by choice, but because there are so few certified to do so). I much preferred teaching science (I’ve been an ‘inductive’ guy since I was 4 years old, or, probably, since I was born). Kids are remarkably sensitive to a teacher’s attitude toward a subject, so I avoided math if I could, but ended up having many fond memories of students in my math classes. It was the Department Chairs and (sometimes) co-workers that were the ‘downer’.
To me, a good teacher listens to students and has the capacity to adjust to each according to their needs. To a typical math teacher, it is the student’s job to conform. The typical math teacher and the typical economist have a lot in common.
To understand just some of those invalidities see:
A Little Less than Valid: An Essay Review
http://edrev.asu.edu/index.php/ER/article/view/1372/43
From the article:
“To the extent that these categorisations are accurate or valid at an individual level, these decisions may be both ethically acceptable to the decision makers, and rationally and emotionally acceptable to the test takers and their advocates. They accept the judgments of their society regarding their mental or emotional capabilities. But to the extent that such categorisations are invalid, they must be deemed unacceptable to all concerned.
Further, to the extent that this invalidity is hidden or denied, they are all involved in a culture of symbolic violence. This is violence related to the meaning of the categorisation event where, firstly, the real source of violation, the state or educational institution that controls the meanings of the categorisations, are disguised, and the authority appears to come from another source, in this case from professional opinion backed by scientific research. If you do not believe this, then consider that no matter how high the status of an educator, his voice is unheard unless he belongs to the relevant institution.
And finally a symbolically violent event is one in which what is manifestly unjust is asserted to be fair and just. In the case of testing, where massive errors and thus miscategorisations are suppressed, scores and categorisations are given with no hint of their large invalidity components. It is significant that in the chapter on Rights and responsibilities of test users, considerable attention is given to the responsibility of the test taker not to cheat. Fair enough. But where is the balancing responsibility of the test user not to cheat, not to pretend that a test event has accuracy vastly exceeding technical or social reality? Indeed where is the indication to the test taker of any inaccuracy at all, except possibly arithmetic additions?”
Have you seen the recent skit on SNL about voting?
It’s easy to fool STUPID people with lies and considering how many voters voted for the loser in 2020, there are a lot of STUPID easy to fool voters out there.
Seems like there are always a lot of people in the US who vote for the loser.
Sometimes even more people than vote for the winner.
And some people intentionally vote for someone they know will lose because they see voting as preventing the candidate they hate the most from winning.
Some people believe that voting to make sure a candidate they don’t like loses is not the same as voting to make sure that a different candidate wins. But of course it is the same thing — a voters’ preferences are evidenced by who it is they want to prevent winning.
And I don’t think the people who vote for someone they know will lose are stupid. They are simply revealing their preferences for which candidate they want to win in a different way.
NYC public school parent
Sounds like you can’t forgive yourself for voting for Anderson. How has that worked out? Just teasing.
It was an attempt at humor — at least the first part.
I guess I need to explain.
In any race which is close, which is most of the Presidential races in the US, there are a lot (tens of millions) who voted for the loser. It’s just the way things work in a democracy.
The second part was a commentary on the sad state of our electoral system.
But oh well.
When you have to explain a joke, it loses something.
Joel,
I honestly believe that election changed the course of history! Not that Carter would have won with Anderson’s votes, but it was the constant drumbeat of negatives about Carter — only the bad things he did were amplified, with anything good forgotten — that doomed him.
But maybe Carter would have turned the Democrats into a neoliberal pro-business anti-progressive party when it comes to supporting conservative economics and the Democrats would be extremely anti-interventionist when it comes to foreign policy. The Dems would look like the Trump Republicans!
And Jimmy Carter was pro-voucher and pro-charter so maybe we’d have no public schools anymore at all, with Jimmy Carter leading Democrats far beyond their support for charters into a wholesale privatization of public education. Then, with the Jimmy Carter Dems, this country could all be unified against public schools!
(Now I remember why I voted for Anderson!)
PS – I like Carter and I think his 2nd term would have been far preferable to Reagan. But his economic policies were to the right of today’s Dems.
President Jimmy Carter and his wife sent their daughter to a public school. He did not support vouchers or send her to a private school.
NYC public school parent
It would have been tough for Carter to have been Pro Charter when he was President. There were none.
“Don’t blame me, I voted for Ted Kennedy.” (Then Carter.)
Joel,
Thanks for the correction – of course there weren’t charter schools. I was thinking about Carter’s support of providing public funding to parochial and private schools, through vouchers and other means. To Carter’s credit, he limited his desire to publicly fund students in parochial and private schools to schools that didn’t practice racial discrimination.
Funny you supported Ted. Now that I am no longer a teenager I see his huge personal failings. But boy, his progressive economics. And that speech at the 1980 convention. The dream will never die.
I had to read the speech again, and it stands up to time.
“The Grand Old Party thinks it has found a great new trick, but 40 years ago an earlier generation of Republicans attempted the same trick. And Franklin Roosevelt himself replied, “Most Republican leaders have bitterly fought and blocked the forward surge of average men and women in their pursuit of happiness. Let us not be deluded that overnight those leaders have suddenly become the friends of average men and women.”
“You know,” he continued, “very few of us are that gullible.” And four years later when the Republicans tried that trick again, Franklin Roosevelt asked, “Can the Old Guard pass itself off as the New Deal? I think not. We have all seen many marvelous stunts in the circus, but no performing elephant could turn a handspring without falling flat on its back.”
The 1980 Republican convention was awash with crocodile tears for our economic distress, but it is by their long record and not their recent words that you shall know them…”
There were so many relevant passages I realized I had copied and pasted nearly the whole speech. Well worth reading again.
“Oddly, NAEP’s definition of proficiency has little or nothing to do with proficiency as most people understand the term.”
Then “most people” don’t use a dictionary. “Proficient” means “competent, skilled, well-versed.” It does not mean “basic” or “adequate.” When a boss says “X is proficient at his job,” she is not indicating that her subordinate fulfills the basic requirements but is “meh.”
The culprit here is state and fed Depts of Ed who came along decades after NAEP benchmarks were established and re-defined “proficient” to mean “grade-level.” Nevertheless I have no problem renaming the benchmarks as low, intermediate, high, advanced. No dictionary required.
These are excellent points.
No matter what the definition, 50% of America’s students will always be below average. Except in Lake Wobegon where all the children are above average.
I think proficient was re-defined to mean what the average student knows, with the bar constantly raised no matter how much schools improved.
And to the surprise of some people who do not understand math, public schools just couldn’t make 100% of the students “proficient”.
And to the awe-struck but embarrassingly math-challenged education reporters, charter schools that dumped students who performed below average had very high proficiency rates.
To be fair, it may just be the implicit racism of education reporters and not their lack of any mathematical understanding that made them so awestruck. If you start with the certainty – as far too many education reporters do – that there simply are no Black or Latino students who are at or above average (despite that being patently false), you would be awestruck at charters that tell you that it is their secret sauce that is responsible for the ones in their charters being at or above average.
^^To be more accurate, 50% of the students will always be at or below the median. It is possible for the average to be much higher or lower than the median, as we know from the fact that 100 billionaires controlling 50% of the wealth produces an “average” that has no relationship to what 99% of the population earns.
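A tiny numerical illustration of that correction, with made-up wealth figures:

```python
import statistics

# Made-up figures: 99 households with modest wealth plus one billionaire.
wealth = [50_000] * 99 + [1_000_000_000]

print(statistics.mean(wealth))    # 10,049,500 -- the "average" is pulled way up by one outlier
print(statistics.median(wealth))  # 50,000 -- half the households sit at or below this
```

The same logic applies to test scores: by definition roughly half of the students fall at or below the median, which is not the same as saying half are “below grade level.”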
You can define NAEP Proficient a thousand times, but it does not sink in like the examples the author gave.
“Fully half the 17-year-olds maligned as being just basic by NAEP obtained four-year college degrees. About one-third of Advanced Placement Calculus students, the crème de la crème of American high school students, failed to meet the NAEP proficiency benchmark.”
Sobering piece about what happens when efforts to quantify and categorize educational achievement meet up with the imprecision of language.
I wonder what Professor Daniel Koretz would say about this.
All schools, public, private, charter, etc. should GET BACK TO BASICS. Make sure kids can actually read, balance a checkbook/do simple interest calculations, write a letter to their grandparents, and so on. Drop all of the socialist propaganda that clutters their minds…they’ll find all of that crap on the internet, on their own. Streamline all school systems…fire a bunch of bureaucrats and cap the pay of superintendents and principals at just over six figures.
If we did that, then all would be as ignorant as you. Whatever we do, we would not want to teach our children to recognize who is screwing them and how. They love the poorly educated. You have enough company already.
Can we please refrain from calling others ‘ignorant’? I think ‘somewhat uninformed’ might be a better term, but that needs to be followed with the information that would help solve the problem.
I do, however, agree that ‘manicmikey’ may be a bit over excited (although in the right direction)
What I want to know is the supposed “randomness” of the selection. My school gets picked every. Single. Time. Every 2 years like clockwork. Out of thousands of schools in Utah, why is it always us? This has happened 5 times in a row (so 10 years) now.
The NAEPers (Jim, Andy, Barney and Friends?) claim on their site that students within a school are selected at random, but not the schools themselves.
“To ensure that a representative sample of students is assessed, NAEP is given in a sample of schools whose students reflect the varying demographics of a specific jurisdiction, be it the nation, a state, or a district. Within each selected school and grade to be assessed, students are chosen at random to participate in NAEP. Every student has the same chance of being chosen—regardless of race/ethnicity, socioeconomic status, disability, status as an English learner, or any other factors.”
So, apparently you — or more specifically, your students — are somehow “representative”.
Lucky you — and them.
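On the “why always us” question: NAEP samples schools within strata, and, as is standard in large-scale assessments, bigger schools generally get a proportionally bigger chance of being drawn (probability proportional to size), so a large school in a sparsely populated state can come up again and again. Here is a toy sketch of that idea with hypothetical enrollments; it illustrates the general technique, not NAEP's actual sampling frame.

```python
import random

# Hypothetical enrollments; selection chance proportional to size (PPS sampling).
schools = {"A": 2400, "B": 800, "C": 450, "D": 300}

def draw_school(schools):
    """Draw one school with probability proportional to its enrollment."""
    names, sizes = zip(*schools.items())
    return random.choices(names, weights=sizes, k=1)[0]

draws = [draw_school(schools) for _ in range(10_000)]
for name in schools:
    print(name, round(draws.count(name) / len(draws), 3))
# School A, with about 61% of the total enrollment, is drawn roughly 61% of the time.
```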
The NAEPer Song
It’s a beautiful day in the NAEPerhood
A beautiful day for a NAEPER
Could you be mine?
Would you be mine
It’s a NAEPerly day in this beauty wood
A NAEPerly day for a beauty
Could you be mine?
Would you be mine?
Would you be my, could you be my
Won’t you be my NAEPer?
I am having an OMG moment! 17 years in Chicago ed research centers and I had no idea! This changes how to report on progress for Black and Hispanic students. Looks much stronger at the basic level! We should SEE this. It should cause an earthquake in state-level accountability systems where “proficiency” (aligned with NAEP) remains the great differentiator and schools that serve Black students usually rank lowest (true in Chicago and Illinois). It should be a blow to our attitudes about possibilities for improvement — should strike at the inferiority narrative that our data presentations keep reinforcing. It might even lead us to think that students of color are making more progress than White students. Thank you! It’s rare for me to read something that breaks with conventional perspectives in my field. This changes my understanding.
NAEP suffers all the invalidities identified by Noel Wilson in his 1997 dissertation “Educational Standards and the Problem of Error”. As he notes, using the results of any standards and testing malpractice regime [my term] for anything is “vain and illusory”. Or, to put it in plainer terms: bullshit.