Julian Vasquez Heilig, a scholar at California State University in Sacramento, reports on a research project comparing the performance of charter schools to public schools, using state scores on the National Assessment of Educational Progress (NAEP). NAEP is administered by the U.S. Department of Education and governed by a nonpartisan board appointed by the Secretary of Education. Since members serve for four-year terms, most were appointed by Arne Duncan or John King.
Heilig, with the assistance of Blake Clark, Jr., reviewed NAEP data and reached the following conclusions:
“One would most likely suspect from the current positive public discourse about charter schools that they would display higher national and large city NAEP performance when compared [to] non-charter neighborhood schools; however, this is not actually the case when examining achievement data at the school level. Out of the 28 total comparison tests run, only 4 times did charters produce higher composite score averages than non-charter neighborhood public schools: 8th grade reading and math in the years 2013 and 2015. There was a tie in the large city comparison for 4th grade reading in the year 2013, as charter schools and non-charter neighborhood public schools displayed the same average composite scale scores. In the other 23 cases charter schools produced lower average composite scores on the NAEP (math, reading, science) than non-charter neighborhood public schools.”
The difference favoring public schools in 12th grade was very large.
In light of this disparity, why do so many federal and state policy makers consider charters a remedy for low-scoring public schools? What is the remedy for low-performing charter schools?
“NAEP is administered by the U.S. Department of Education and governed by a nonpartisan board appointed by the Secretary of Education.”
Do you really think a board appointed by the Secretary of Education is going to be non-partisan?
By law, the NAGB board must be bipartisan. A certain number of seats are allotted to Republicans, an equal number to Democrats.
Yes, but they’re still beholden to the administration which appointed them.
Of course, when it comes to education, I don’t think “partisan” is necessarily the relevant term anyway. I think “ideological” might be the better way of looking at it, and ideologically Democrats and Republicans both favor privatization, standardized testing, “accountability”, etc.
The claims for and goals of charter schools have always been “spin”. The only goal is to transfer public funds to private company coffers.
Exactly, Steve, exactly!
Yes. Time for all who vacillate on this subject to see the full truth in these words: today all charter school “fixing” has become spin. THE UNDERLYING GOAL: TO TRANSFER PUBLIC FUNDS TO PRIVATE COFFERS.
How right you are. In Nevada, our governor was approached to run for office while he was a sitting federal judge. The Chamber of Commerce offered to bankroll his campaign. He worked with them on what he considers his legacy, school “reform.” He stated before his run for governor that he hoped to get the public out of public education, and did not believe the state should be paying for schools. His view is that parents should shop for their children’s education and should get what THEY can afford. He of course went to private and Catholic schools. His main advisor on policy was Michelle Rhee. Education in Nevada is nearly dead. The best we can hope for is that after it is destroyed it can be rebuilt. Nevada will be the last place to let go of stupid policies.
Yes, I understand the “coin of the realm” concept. However, when that coin is pyrite, why would one exchange it for a silver or gold coin?
The valued gold/silver coins in the charter debates are our total understanding and valid critiques of the private charter realm. The critiques are many and varied and hardly ever addressed by the privateers. Ignoring our critiques is their only course of action, illegitimate inaction at that.
In this case the pyrite coin is standardized testing and its results, which are COMPLETELY INVALID. Noel Wilson has proven that COMPLETE INVALIDITY in his never refuted nor rebutted (ignoring being the best course of action for the defenders of standardized testing) “Educational Standards and the Problem of Error,” found at: http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine.
A description of a quality can only be partially quantified; quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category by only a part of the whole. The assessment is, by definition, lacking, in the sense that “assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify it through educational standards and standardized testing, the descriptive information about said interactions is inadequate, insufficient and inferior to the point of invalidity and unacceptability.
A major epistemological mistake is that we attach, with great importance, the “score” of the student not only onto the student but also, by extension, onto the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only logically correct thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student, as it is not a description of the student but of the interaction. And this error is probably one of the most egregious “errors” that occurs with standardized testing (and even the “grading” of students by a teacher).
Wilson identifies four “frames of reference,” each with distinct assumptions (epistemological bases) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think of a college professor who “knows” the students’ capabilities and grades them accordingly), the General Frame (think of standardized testing that claims to have a “scientific” basis), the Specific Frame (think of learning by objective, as in computer-based learning, getting a correct answer before moving on to the next screen), and the Responsive Frame (think of an apprenticeship in a trade or a medical residency program, where the learner interacts with the “teacher” with constant feedback). Each category has its own sources of error, and more error in the process is caused when the assessor confuses and conflates the categories.
Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words all the logical errors involved in the process render any conclusions invalid.
The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid, i.e., errorless or at least with minimal error [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations, the tests, and the results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error,” any one of which renders the test making/giving/disseminating of results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory.” In other words, start with an invalidity, end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel who finds the occasional acorn, a result may be “true”), or, to put it in more mundane terms, crap in, crap out.
And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words it attempts to measure “‘something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade (sic).” Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self-evident consequences. And so the circle is complete.”
In other words students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, “I’m an ‘A’ student” is almost as harmful as “I’m an ‘F’ student” in hindering students from becoming independent, critical and free thinkers. And having independent, critical and free thinkers is a threat to the current socio-economic structure of society.
Oops, my question is backwards: “Why would anyone exchange a gold or silver coin for a pyrite one?” Ay ay ay.
Those of us who read carelessly perceived exactly what you were saying. Oddly enough, the fact that I misread what you said because I understood the context is one of several things that supports the idea of the rest of your argument. Knowing is more like nailing a catfish to a tree than measuring a board.
And testing is like nailing jello to a tree.
Whether or not you agree with using standardized testing to rate, measure, or assess the quality of education, it occurred to me that I still wonder why public school scores are so frequently higher than charters’, given what we know about the apples-to-oranges types of comparisons that are made. I totally accept that standardized tests are a terrible, indefensible way to make the high-stakes decisions that have been made in recent years, but the consistency of outcome makes me wonder what conclusions we can draw.
I know that we can’t draw any valid conclusions. Yes, I know that thought is not very palatable to many, but to use those scores is a perfect example of GIGO: Garbage In, Garbage Out.
Invalidities and falsehoods cannot form a logico-rational foundation for arguing anything. Until we in education realize just how far we have veered from providing a true positive experience for the children in our care, we will continue to harm many children.
And that harming pisses me off immeasurably!
That consistency of result still bothers me. I don’t know what conclusions to draw from it. We know that the only significant correlation seems to be between socioeconomic class and scores when we look at all those taking the tests. If we can make that assumption because of the consistency in scores, we ought to be able to wonder what is driving the consistent difference in this comparison.
We need to get out of “comparing” students, teachers and schools in whatever fashion and start looking at assessment and evaluation of what is best for the individual, not at claiming top spot in some rigged competition.
It depends on why you are making comparisons. Wouldn’t it be wonderful if that difference in scores were a reflection of the importance of neighborhood/community engagement? Are there particular programs, like wrap-around services, that seem to drive better scores? Is it a reflection of a more experienced, stable teaching staff? With public schools subjected to major abuse over the past two decades, what is it that still fuels their ability to be successful learning communities? There is a reason or reasons for these consistent differences in test data. We really should be exploring and highlighting the possible reasons why as explanations for the testing data. Forget about competition or rating; that really doesn’t matter. In this charter-public comparison, we have a chance to look at the factors that still allow public schools to do so well. We often point to socioeconomic factors as driving the disparity in test scores between poor and wealthy districts. Why not look at charter and public schools of roughly similar populations and investigate why they score differently? I would hope that such comparisons would move far beyond test scores, which on their own can’t tell us much.
To answer your questions:
Not necessarily; actually, not at all, because those scores aren’t valid enough to tell us anything.
Who cares what “drives better scores”? Attaining those “scores” has so warped curriculum and instruction that the false goal of “high scores” can only be seen as insane.
No, not a reflection of anything. Again, the scores are invalid, and any conclusions drawn are therefore “vain and illusory,” as Wilson puts it.
Simple: we already know it is socio-economic status that fuels those communities.
“There is a reason or reasons for these consistent differences in test data.”
Yes, there is: it’s called socio-economic status and all that it entails in providing not only a good and proper teaching and learning environment but also the home environment needed for language acquisition that is amenable to schooling.
“Why not. . . ”
Because the scores are fundamentally invalid, and therefore any meaning assigned to them is useless. A standardized test score can tell us nothing more than what a particular student did on a particular test on a particular day; any other conclusions drawn are a gross unethical misuse of the scores. It’s really that plain and simple.
Now if I may turn the table, speduktr, and ask a few questions:
Why do you insist on attempting to unethically use test scores for anything other than telling us what a student did on that test?
Why would you use an unethical practice to make invalid comparisons?
Do you not understand the complete invalidity of the whole standards and testing process that render any usage of the results “vain and illusory”?
Do you understand what “vain and illusory” means?
Do you understand that by making these specious comparisons you are playing right into the hands of the privateers by not adhering to a paradigm of “fidelity to truth”? Falsehoods abound, and they are far more practiced at disseminating falsehoods than most educators are.
“There is a reason or reasons for these consistent differences in test data.”
“Yes, there is, it’s called socio-economic status and all that entails in providing not only a good and proper teaching and learning environment but also the home environment needed for language acquisition that is amenable to schooling.”
Sorry, you can’t tease out even socio-economic factors if the testing is totally invalid. You have to label that inference as invalid as well.
I totally agree that the uses claimed for standardized tests in the reform game are totally invalid. I will continue to maintain that coupled with observational data we can tease out possible factors to help us make policy decisions. As I said, I am not into rating and ranking. I didn’t do that with my special ed students and I see no purpose other than destructive in rating and ranking teachers and/or schools in that manner either. I used testing data in combination with observational data to make programming decisions that I hoped would benefit my students. I believe we can use standardized testing data to lead us in directions worth investigating.
In this particular comparison, let’s assume that we are dealing with lower socioeconomic schools, public and charter. Let’s look at the potential reasons why public school students still can produce higher test scores despite diminishing resources and unstable communities. Let’s use the fact that public schools can do well in relation to charters in spite of the loss of resources. Let’s use the experience and knowledge of professionals to ferret out possible factors for success and use it to help reformulate the debate.
Let me tell a story I have told before to illustrate where I’m coming from. I was sitting with a fellow teacher as he graded a history test he had just given. He was grading the test of one of his IEP students and was pleasantly surprised by how well she had done on the first page. Using my knowledge of common struggles of special ed students and testing, I predicted that she had tanked on the second page. She did. I told him that she had probably used all her mental energy on doing the first page and had nothing left when she got to the second. She needed a break and/or a test that was specifically designed to combat that fatigue. The test score alone told nothing about this student, but there was much to learn from how she took the test and possible ways she could show her knowledge. Just giving her a poor grade would in no way be an indication of what she knew. In the same way, standardized tests alone don’t give us useful information, and it is a travesty to use them to rate and rank, but they can lead us to look more closely at what still works in public schools in spite of all attempts to dismantle them.
I am in no way supporting the testing mania, and we should continue to do everything possible to discredit how it is being misused and overused.
Reminds me of this:
“To Turn A School Around, Put Students At The Center
“As the only Massachusetts high school ever to exit from state turnaround status, the 500-student Jeremiah E. Burke High School continues to see dramatic academic improvements, including more than doubling the rates of student proficiency in math and English Language Arts, in the face of what at one time seemed like insurmountable odds.
“So what made the turnaround possible, in a school with historically high dropout rates, a persistently low graduation rate and multiple years of failing to meet the state’s academic proficiency goals for students?
“From Day One of the school improvement effort, beginning in 2010, the Burke sought to identify and address the root causes of previous academic failure: specifically, the past or ongoing trauma and lack of connection to appropriate social supports that their students have experienced — and that research shows can impede academic success.”
[…]
http://www.wbur.org/edify/2016/12/02/commentary-burke-school-turnaround
Exactly what I mean.
Duane
Immeasurably pissed off?
Let’s not exaggerate.
Everyone knows that irkometers exist to measure that quite accurately (within 0.001 irks) by analyzing facial expressions.
Are irkometers sold by Chetty & Smith Inc.?
While I was looking for Amrein-Beardsley’s article on the assumptions of VAM, “Truths Devoid of Empirical Proof,” which can be downloaded from
https://www.researchgate.net/publication/282914755_Truths_Devoid_of_Empirical_Proof_Underlying_Assumptions_Surrounding_Value-Added_Models_in_Teacher_Evaluation
I came across an article by Gene Glass, who formerly worked on NAEP:
“Why I am no longer a measurement specialist”
http://ed2worlds.blogspot.com/2015/08/why-i-am-no-longer-measurement.html
Glass disagreed with “grading” based on NAEP (“pass/fail; A, B, C, D, E, F; Advanced, Proficient, Basic”) because “such grading was purely arbitrary, and worse, would only be used politically.”
Thanks for the links SDP! I’ve read Glass’ arguments before (and that particular article) but it’s always good to reread things. Many times new things pop out in the second/third/fourth readings. Will have to read the AB article.
I thought the final presidential debate pretty much settled this issue, along with a lot of other educational matters. Despite several attempts by the moderators to get answers, Trump did not know anything, and Hillary was unwilling to say anything, and the message was clear: Education is not important enough to discuss in presidential debates, or a lot of other places. It is just too complicated and boring.
I believe nobody wanted to touch education in the debates because it is a hot-button issue that would be highly divisive. Most voters believe in strong public schools, while both candidates represented various forms of privatization. Hillary and Trump did not want to alienate voters or donors: most citizens believe in public education, but most wealthy donors want privatization, so both were reluctant to take a stand. Today education is a controversial issue, and many candidates would rather not have to take a position at all.
I was, of course, being facetious in saying the moderators made any effort whatsoever to get answers about education issues. The fact that it is a high-risk subject for candidates should be reason for reporters to make a try... but it is high-risk for them, too, because the issues require a high level of knowing what they are talking about. Candidates and the press have drawn a subtle ring around the issue, protecting them all... but robbing a lot of well-educated and directly affected parents. Who cares?
Could anyone clarify how they defined: “non-charter neighborhood public schools”?
Did they eliminate from that category schools that select students on the basis of exams, grades, interviews, essays, auditions and the like?
Any link to where they explain their methods would be appreciated particularly if it allows us to understand which schools are included in each category for the purpose of comparison. Thanks.
Not sure this helps, but this study that references NAEP Mathematics data footnoted the following:
“We are aware that charter schools are, legally, also ‘public’ schools. Yet they also resemble private schools in some important respects, particularly in terms of autonomy from local education authorities — as reformers intended for these schools. For the sake of clarity in categories in the analysis, we use the term ‘public school’ as short-hand for non-charter public schools; the ‘independent sectors’ include private and charter schools.”
Charter, Private, Public Schools and Academic Achievement: New Evidence from NAEP Mathematics Data (2006)
(PDF: EPRU-0601-137-OWI.pdf)
Thanks, SpewingTruth, that does help.
Do any voucher students participate in the NAEP? It would be interesting to see those comparisons. It is unfortunate that we live in a world where evidence seems not to matter. Billionaires and corporations will continue their hostile takeover of public schools that serve all students and the “common good” because elected representatives are complicit. They want to hijack the cash.
Your last sentence is all anyone needs to know about the privateer agenda.
NAEP tests private schools, but to date the number of voucher students is too small to sample.
It would seem that charters are serving the poor masses generally ignored by public schools. And the poor masses do not generally score as well as the kids of the better off. So this can be explained by sociology.
But wait! Charters are not serving everybody, so their scores should be higher. Do we read of charters telling the smart kids to go back to public school so they can help the desperate victims of prejudice? You know that is opposite of what we hear. Could it be that this is not a civil rights issue?
It is really obvious that the days of this reform movement are numbered. Vouchers will fall as well. We all know that things were not perfect before this present spate of reform. We know that reform in education is a constantly bouncing ball. When the ball comes to us, what way will we bounce it?
The most important question is what will the inevitable replacement for “Reform” be called?
I just hope they call it something that I can rhyme a lot of words with.
Reform was actually a hard one in that regard.
“It would seem that charters are serving the poor masses generally ignored by public schools.”
The key words in that sentence are “would seem,” because the community public schools serve “the poor masses” far better than any other entity. Those “poor masses” are not “generally ignored by public schools.” Without “would seem,” your statement is false on the face of it.
It would seem to me that you may not have written what you wanted to say.
I don’t believe poor children are intentionally ignored by those who teach in urban districts. I know several teachers who worked in the south Bronx. They did their best, but some class sizes in elementary schools were about thirty-eight students. Many surrounding suburban districts had classes of about twenty-two for the same grades. This is inherently unfair, particularly when the disparity in the level of poverty is considered, and NYC has some of the most expensive real estate in the world. The way in which we fund public education through property taxes creates a de facto inequity. The system shortchanges the students, not the teachers. Then the urban schools get the label of “failing” that puts the schools in jeopardy. The sad truth is that the charters offer no improvement unless they are highly selective and rejecting of students with individual differences. A much better approach would be a public school with wrap-around services, but this approach does not make money for the 1%.
NAEP is “governed by a nonpartisan board appointed by the Secretary of Education.”
“Since members serve for four-year terms, most were appointed by Arne Duncan or John King.”
I think this is almost accurate.
I have been looking at the website of the NAEP Governing Board. I discovered that Susan Pimentel served two terms on the NAEP Governing Board from 2007 to 2015. She became the Vice-Chair of NAEP in 2012, serving until 2015. In both terms she was identified as a curriculum specialist. She was first appointed by Margaret Spellings and reappointed by Arne Duncan.
For readers who are unfamiliar with Susan Pimentel, this bio is from the NAEP website.
Susan Pimentel, founding partner of the nonprofit Student Achievement Partners, is an education analyst and standards and curriculum specialist with established credentials in building consensus among diverse constituents.
For close to three decades, her work has focused on helping communities, districts and states work together to advance enduring education reform and champion proven tools for increasing academic rigor, including standard setting, curriculum building, assessment alignment, and teacher development and evaluation systems.
Before her work as a lead writer of the Common Core State Standards for English Language Arts/Literacy, Ms. Pimentel was a chief architect of the American Diploma Project Benchmarks designed to close the gap between high school demands and postsecondary expectations.
Some background: The American Diploma Project was the forerunner of the Common Core State Standards project. Started in 2001, it was intended to “determine the prerequisite English and mathematics knowledge and skills required for success in entry-level, credit-bearing courses in English, mathematics, the sciences, and the humanities.”
That effort was funded by Achieve, Inc., The Education Trust, and the Thomas B. Fordham Foundation. By 2004, the American Diploma Project had released “Ready or Not: Creating a High School Diploma that Counts” delineating college and career ready graduation benchmarks.
By 2008, Achieve and participants in the American Diploma Project had released “backward-mapped” grade-level standards in English and math. In other words, the 2001 initiative morphed into the Common Core State Standards. Late revisions and marketing were enabled by the National Governors Association, the Council of Chief State School Officers, the Alliance for Excellent Education, the James B. Hunt, Jr. Institute for Educational Leadership and Policy, and, not to be forgotten, Bill Gates.
For more on Susan Pimentel’s work on standards-related projects with David Coleman, see https://deutsch29.wordpress.com/2013/12/02/more-on-the-common-core-achieve-inc-and-then-some/
Susan Pimentel’s extended stay on the National Assessment Governing Board led me to follow links between the work of the Governing Board and the CCSS.
I did a “Common Core” search of the National Assessment Governing Board’s website. There were over 270 returns, including one as late as May 2017. The earliest was 2008. Many of the Common Core references were “concerns” raised by “a partnership and outreach” arrangement of the National Assessment Governing Board with the Council of Chief State School Officers and perhaps by the presence of CCSSO members on the National Assessment Governing Board.
In any case, there was sufficient concern that the Governing Board of NAEP put this entry into the “Frequently Asked Questions” panel sometime before the PARCC and SMARTER tests were published.
Begin quote
Will NAEP be matched to the Common Core curriculum standards?
The groups that prepared the Common Core State Standards and are developing the Common Core tests have drawn many of their approaches and ideas from the National Assessment of Educational Progress (NAEP). Many of the same people have been involved in both programs, including several members of the National Assessment Governing Board. Cooperation is ongoing, but there are no plans for NAEP and the Common Core to become wholly similar or matched. The Governing Board believes strongly that NAEP should continue to play an important role as an independent measure of student achievement under whatever education policies [or reforms] that states adopt.
For more than 40 years the National Assessment has provided the public with reliable, representative-sample information on what students know and can do in a wide range of academic subjects. Because of NAEP’s sampling methodology, designed to produce sound results for large groups of students, the NAEP assessments are much broader (in content, item types, and levels of difficulty) than any exams designed to produce individual results, including those being developed for the Common Core.
Once the two sets of Common Core assessments are available in 2014-2015, there surely will be comparisons between their content and NAEP’s. We expect these will show some differences as well as substantial similarities. The Board believes it will be important to maintain NAEP’s distinctiveness and its trends in order to provide the nation and the states with a stable, independent measure of whether educational progress is indeed being made.
The CCSS has caused a lot of mischief. Among the later entries at the Governing Board were concerns about efforts to “equate” NAEP scores with those from PARCC and SMARTER, and about other unintended and invalid uses of the NAEP scores.
Whether NAEP survives the current administration is another matter. DeVos is not a “numbers person” but would be responsible for appointments to the Governing Board.
I know some oppose all standardized testing, but it’s important to understand how NAEP differs from the testing mandates of NCLB, now ESSA. In short, NAEP is an indicator of student performance in states and territories and participating urban school districts. It tests sample classrooms, not all students, every other year. Results are reported by state and, for TUDA participants, by school district. They are NOT reported by school. There are no built-in consequences.
By contrast, the testing mandated by NCLB and continued by ESSA is aimed at disrupting public schools and transferring students and buildings to charter operators. That’s why students’ performance on tests is reported as “school performance” and terms like “performing school,” “under-performing school,” etc., are bandied about as if they were valid concepts. Like all authoritarian ideologies, the current “education reform ideology” employs ambiguous and deceptive terms and euphemisms to create a fake reality.
We all know that “a school” is a building that does not take tests. Students take tests, but the consequences fall on the teachers and staff. They don’t affect students’ grades or promotion. All of the consequences are transferred to teachers and staff, who can be forced to “reapply for their positions” and not reinstated. The term “school performance” blurs that unstated transfer of accountability from students to teachers.
The NAEP-TUDA reports results by school district. That places accountability for many of the factors that go into student learning where they start: with the chancellor/superintendent and school board/mayor, depending on governance structure. They’re responsible for all the supports that teachers need in the classroom: curricula, textbooks and/or other materials, adequate staffing, behavior support, etc.