Juan Gonzalez of the New York Daily News says that 999 is the code for students who opted out in New York state, and their numbers are huge. At last count, with slightly more than half the districts tallied, protest organizers estimate that about 180,000 students opted out of the English language arts exams. In some districts, 70-80% of the students did not take the tests. State officials, acting with all due speed, as usual, said that they won’t know how many students opted out of the test until the summer, maybe.
Remember that these are not the tests that we took when we were in school. They are tests that last several hours over a three-day period for each subject. Two full weeks of school are devoted to testing: one week for ELA and one week for math, with three days of testing each week. Why can’t the testing companies figure out what students know and can do with a one-hour test, as our teachers used to do by themselves?
Parents opted out despite threats from state and local officials that opting out would jeopardize their child’s future or cause the school to lose funding.
Gonzalez writes:
Whatever the final number, it was a startling act of mass civil disobedience, given that each parent had to write a letter to the local school demanding an opt out for their child.
It’s even more impressive because top education officials publicly warned school districts they risk losing federal funds if nonparticipation surpasses 5%.
“To react to parents who are speaking out by threatening to defund our schools is outrageous,” said Megan Diver, the mother of twin girls who refused their third-grade test at Public School 321 in Park Slope, Brooklyn.
Gonzalez sees the game that the state is playing with the tests:
Back in 2009, the old state tests showed 77% of students statewide were proficient in English. The next year, the pass level was raised and the proficiency percentage dropped to 57%. A few years later, Albany introduced Common Core and the level plummeted even more — to 31% statewide.
Same children. Same teachers. Different test.
The politicians created a test that says all schools are failing, not just the ones in the big cities, then declare a crisis, so they can close more neighborhood schools, launch more charter schools, and target more teachers for firing.
Meanwhile, the private company that fashioned this new test, Pearson, insists on total secrecy over its content.
This week, test instructions even warned teachers not to “read, review, or duplicate the contents of secure test material before, during, or after test administration.”
What kind of testing company forbids a teacher from reading the test he or she administers?

State Ed says they don’t/won’t know the refusal results for months. Is being a bald-faced liar a prerequisite for employment there?
Assessment is not the same as testing. Assessments rate a product, the STUDENT, in the system. Values, attitudes, and beliefs are assessed.
Subject matter retention and comprehension can be tested. Big difference…
“Subject matter retention and comprehension can be tested. Big difference”
And I’m not so sure that there is a big difference.
It may be testable. The question is how accurately?* And what are the exact constructs being tested?
*The answer is “not very”.
“999 is the code for students who opted out in New York state”
yes, and that’s just “666” upside down …
and we all know what that means
“Devil’s code”
Opt-out’s evil
Little doubt
It’s the Devil
Drive him out!
Sever ties
With “999”
Exorcise
With holy wine
First thing that came to my mind when I saw the post’s title.
I have found that everything is much clearer when I stand on my head.
Perhaps you share this view?
Me, too, Duane. But, as we all know, the 666 rightfully belongs to: Pear$on execs, carpetbaggers & stockholders; the education-industrial complex; the Gates, Broads, Waltons & Kochs & their villainthropist partners in crime... and ANY 1%er, legislator, stuperintendent (one who won’t stand up to this), or administrator who perpetrates the hell in our schools and practices/profits from child abuse, in the extreme, & of other people’s children.
But I have failed to mention the elite 666: POTUS & his spawn, Arne, who have traversed far beyond NCLB.
Shame on you all!
Hi Diane, setting aside the quality and stakes of the tests, what’s your take on the utility of gap scores for improving instruction, in the abstract? I’ve heard this argument from pro-testing people — that gap scores can be calculated down to the teacher, student, and question and then be used to recalibrate lesson plans.
Ted Paul,
There is nothing on the test score results that is useful. Students are told they are at performance level 1, 2, 3, or 4. They then get a state percentile rank: for example, 52%. Of what value is that in reducing the achievement gap or planning lessons? After 13 years of NCLB testing, don’t you think that every teacher knows where her students rank? These tests have no value to anyone other than to know how many students are labeled 1, 2, 3, or 4, but these designations are themselves completely arbitrary, and would change if the state decided to raise or lower the cut score (passing mark).
Thanks for the clarification, Diane.
Ted, I have a somewhat different view than Diane does.
Gap scores can provide useful information at the institutional level, but not really at the classroom level and not for lesson planning.
For lesson planning, how much value should a teacher place on a single piece of information that is itself a small sample size (maybe even an n of 1) and distant from the actual curriculum? Very little if any. Teachers have formal and informal methods to understand how students are learning and what their misconceptions are. If conditions permit, team lesson planning allows teachers to compare how their students are learning to find trends and share ideas for planning their lessons.
Where tests can provide useful information is at the institutional level (schools and districts).
I can’t speak for states that use PARCC, but in states using Smarter Balanced assessments, schools and educators receive reports that contain the scale score, achievement level, and percentile rank information that Diane noted, and they also receive claim-level information: four claims in English language arts and four in math. A grade-level team might learn from the state assessment that their students perform less well on speaking and listening and on research than on reading and writing, and then consider whether this confirms or contrasts with their own judgment of student learning.
For gaps, a school or district can track whether achievement of economically disadvantaged students is trending up, trending down, or holding stable on the state assessment, just as they might with local GPA, D/F lists, or enrollment in electives like world languages and music at the secondary level.
If the state assessment results for other schools and districts are publicly available, you can compare your schools with those in districts that are demographically similar to yours. If 35% of my middle school students who are economically disadvantaged score proficient on the state math assessment but 55% do in a similar district, that can be a good starting point for further study. The discussion should not be that Point Mountain is getting better results than we are in Mayberry and therefore we need to teach harder, with more rigor and grit. Rather, we might consider learning more about Point Mountain’s practices and seeing whether they are doing something different that we might want to incorporate. In our district, this is probably the most important use of state assessment data, although we also use it at the student level as one piece of information in a much larger portrait of an individual child’s development.
The question always needs to be asked whether these uses justify the time and resources required for the state assessment.
OMG, Stiles, do you WORK for Pear$on?! As to your last sentence, I strongly suspect that you have not been reading this blog, nor do you have any idea about what’s been going on in the world of (mis)education.
My advice? (Not that you’ve asked.) Read, read some more, and then join in the conversation–it’s for all.
“Rigor” & “grit” are worth a word that rhymes w/the last one, here, in quotation marks!
Retiredbutmissthekids, I am in my 24th year in public education and do not work for Pearson or any other test company or textbook publisher. I have also been reading this blog for three years.
I could have worded the Point Mountain and Mayberry sentence more clearly, but my point was that a useful discussion is not where we can add more “rigor” and “grit” but to collaborate with other schools and districts to learn from each other.
I understand what you are saying, but I am also willing to bet, Stiles, that you would agree that you could find a better vehicle for gaining that information than the PARCC or SBAC. Our state assessments provided comparative data that districts could use to see whether one or another was doing something that should be shared. I really don’t see the usefulness of the test data to curriculum teams; there are much better vehicles within schools to inform curriculum decisions. I think you would probably agree that these particular instruments are overkill if all they are intended to provide is institutional data. As you indicate, I think, institution-level data is probably the only thing they could claim to provide, and not very efficiently at that.
Thanks for your take as well. Was just looking for clarity on this point.
2old2teach, Absolutely. The benefits are not great enough to justify the costs. The tests are too long.
Not Diane but Duane here.
One can’t “set aside the quality and stakes of the test”; those are part and parcel of the process. And that part, quality, is the main problem area.
There can’t be any utility in using any of the results for anything, since those results are proven to be completely invalid. When one starts with invalidities, any conclusions drawn will be invalid. There are just too many epistemological and ontological errors and falsehoods to justify using the results for anything.
It’s all a complete waste of effort, time and money. Validity and reliability issues trump all other issues.
By the way, by “gap scores” do you mean “achievement gap scores”?
Defund Pearson
This whole process is obscene.
I guess I should be ashamed to admit that I was a professor of education for thirty-eight years and therefore part of the problem, but all I can do is get more and more angry with the idiots who are not only destroying the schools but also the profession, and indirectly corrupting our democracy. It is too bad that many involved in the Common Core have either not read or have ignored what has been offered by great minds to answer the question of what schools are for. I just want to share some of John Goodlad’s thoughts on what schools are for. He explains:
“Norms used to guide schools are either inadequate or corrupt; affixing accountability to standards inhibits the creative process; equating education with schooling has burdened schools and been exceedingly difficult to attain; since schooling is good, more is better, but longer or more deprives the educational process of other nourishing resources.”
He states that there is a flaw in those who supposedly suggest that the system is derived from rhetorical principles exhorting individual opportunity, egalitarianism, and openness. Yet in reality, he explains, the system is quite closed, and theories of change using factories and assembly lines as models do not fit.
Goodlad explains that we need to go beyond quantitative appraisals to qualitative appraisals of what goes on in schools. And finally, it seems to him that how a student spends precious time in school and how he/she feels about what goes on there is of much greater significance than how he/she scores on a standardized achievement test.
Here’s some extremely disturbing info about what kids who are not opting out are having to face: “Common Core Testing Requires Saville Row Savvy”
http://susanohanian.org/core.php?id=814
“Why can’t the testing companies figure out what students know and can do with a one-hour test, as our teachers used to do by themselves?”
Because none of the test writers know the individual classroom or subject-matter curriculum, nor are they in the classrooms with the individual students.
And think about this: they’re trying to cram the learning of a whole school year into one test. Insanity, thy name is standardized testing.
“What kind of testing company forbids a teacher from reading the test he or she administers?”
What kind of teacher administers a test that he or she has not read?
The answer to both questions is: UNETHICAL
What Duane said.
I think that number could be a new symbol… like the 99%
“I am part of the 999”
“Call 999 for a better education!”
“Be 999!”
“It’s that time of season-999”
“Be sure and ask your child’s principal for the 999 form.”
🙂