Bob Shepherd, veteran designer of curricula and textbooks, explains why he objects to PARCC:



How to Prevent Another PARCC Mugging: A Public Service Announcement



The Common Core Curriculum Commissariat College and Career Ready Assessment Program (CCCCCCRAP) needs to be scrapped. Here are a few of the reasons why:


1. The CCSS ELA exams are invalid.

First, much of attainment in ELA consists in world knowledge (knowledge of what—the stuff of declarative memories of subject matter). The “standards” being tested cover almost no world knowledge and so the tests based on those standards miss much of what constitutes attainment in this subject. Imagine a test of biology that left out almost all world knowledge about biology and covered only biology “skills” like—I don’t know—slide-staining ability—and you’ll get what I mean here. This has been a problem with all of these summative standardized tests in ELA since their inception.


Second, much of attainment in ELA consists in procedural knowledge (knowledge of how—the stuff of procedural memories of subject matter). The “standards” being tested define skills so vaguely and so generally that they cannot be validly operationalized for testing purposes as written.


Third, nothing that students do on these exams EVEN REMOTELY resembles real reading and writing as it is actually done in the real world. The test consists largely of what I call New Criticism Lite, or New Criticism for Dummies—inane exercises on identification of examples of literary elements that for the most part skip over entirely what is being communicated in the piece of writing. In other words, these are tests of literature that for the most part skip over the literature, tests of the reading of informative texts that for the most part skip over the content of those texts. Since what is done on these tests does not resemble, even remotely, what actual readers and writers do in the real world when they actually read and write, the tests, ipso facto, cannot be valid tests of real reading and writing.


Fourth, standard practice in standardized test development requires that the testing instrument be validated. Such validation requires the test maker to show that the test correlates strongly with other accepted measures of what is being tested, both generally and specifically (that is, with regard to the specific materials and/or skills being tested). No such validation was done for these tests. NONE. And as the tests are written, based on the standards they are based upon, none COULD BE done. Where is the independent measure of proficiency in CCSS.Literacy.ELA.11-12.4b against which the PARCC items that supposedly measure that standard have been validated? Answer: There is no such measure. None. And so, obviously, PARCC has not been validated against it. The tests thus fail to meet a minimal requirement for a high-stakes standardized assessment: that it be independently validated.


2. The test formats are inappropriate.


First, the tests consist largely of objective-format items (multiple-choice and EBSR). These item types are most appropriate for testing very low-level skills (e.g., recall of factual detail). On these tests, however, they are pressed into a kind of service for which they are generally not suited: testing “higher-order thinking.” The questions therefore tend to be tricky and convoluted. Test makers these days all insist that every answer choice be plausible. What does plausible mean? At a minimum, it means “reasonable.” So the questions are supposed to demand higher-order thinking, and the wrong answers are all supposed to be reasonable, and the result is that the questions end up extraordinarily complex, confusing, and tricky, all because the “experts” who designed these tests didn’t understand the most basic stuff about creating assessments: that objective question formats are generally poor vehicles for testing higher-order thinking. For many of the released sample questions, there is arguably no correct answer among the choices, or more than one correct answer, or the question simply is not answerable as written.


Second, at the early grades, the tests end up being as much a test of keyboarding skills as of attainment in ELA. The online testing format is entirely inappropriate for most third graders.


3. The tests are diagnostically and instructionally useless.


Many kinds of assessment—diagnostic assessment, formative assessment, performance assessment, some classroom summative assessment—have instructional value. They can be used to inform instruction and/or are themselves instructive. The results of these tests are not broken down in any way that is of diagnostic or instructional use. Teachers and students cannot even see the tests to find out what students got wrong on them and why. So the tests are of no diagnostic or instructional value. None. None whatsoever.


4. The tests carry enormous direct costs and opportunity costs.


First, they steal away valuable instructional time. Administrators at many schools now report that they spend as much as a third of the school year preparing students to take these tests. That time includes the actual time spent taking the tests, the time spent taking pretests and benchmark tests and other practice tests, the time spent on test prep materials, the time spent doing exercises and activities in textbooks and online materials that have been modeled on the test questions in order to prepare kids to answer questions of those kinds, and the time spent on reporting, data analysis, data chats, proctoring, and other test housekeeping.


Second, they have an enormous cost in dollars. In 2010–11, the US spent $1.7 billion on state standardized testing alone. Under CCSS, that figure increases. The PARCC contract by itself is worth over a billion dollars to Pearson in the first three years, and to that you have to add the cost of SBAC and the other state tests (another billion and a half?). No one, to my knowledge, has accurately estimated the cost of the computer upgrades that will be necessary for online testing of every child, but those costs probably run to $50 or $60 billion. This is money that could be spent on stuff that matters—on making sure that poor kids have eye exams and warm clothes and food in their bellies, on making sure that libraries are open and that schools have nurses on duty to keep kids from dying. How many dead kids is all this testing worth, given that it is, again, of no instructional value? IF THE ANSWER TO THAT IS NOT OBVIOUS TO YOU, YOU SHOULD NOT BE ALLOWED ANYWHERE NEAR A SCHOOL OR AN EDUCATIONAL POLICY-MAKING DESK.


5. The tests distort curricula and pedagogy.


The tests drive how and what people teach, and they drive much of what is created by curriculum developers. This is a vast subject, so I won’t go into it in this brief note. Suffice it to say that the distortions are grave. In U.S. curriculum development today, the tail is wagging the dog.


6. The tests are abusive and demotivating.


Our prime directive as educators is to nurture intrinsic motivation—to create independent, lifelong learners. These tests create climates of anxiety and fear. Both science and common sense teach that extrinsic punishment-and-reward systems like this testing regime are highly DEMOTIVATING for cognitive tasks. It reminds me of a line from the alphabet in the Puritan New England Primer, the first textbook published on these shores:


The idle Fool
Is whip’t in school.


7. The tests have shown no positive results.


We have now had more than a decade of standards-and-testing-based accountability under NCLB. We have seen only minuscule increases in outcomes, and those are well within the margin of error of the calculations. From the Hawthorne Effect alone, we should have seen SOME improvement! That suggests that the testing has actually DECREASED OUTCOMES, which is consistent with what we know about the demotivational effects of extrinsic punishment-and-reward systems. It is the height of stupidity to look at a clearly failed approach and say, “Gee, we should do a lot more of that.”


8. The tests will worsen the achievement and gender gaps.


Both the achievement and gender gaps in educational performance are largely due to motivational issues, and these tests, and the curricula and pedagogical strategies tied to them, are extremely demotivating. They create new expectations and new hurdles that will widen existing gaps, not close them. Ten percent fewer boys than girls, by the way, received a proficient score on the NY CCSS exams; this at a time when 60 percent of college students and three-fifths of students in MA programs are female. The CCSS exams drive further regimentation and standardization of curricula, which will further turn off kids already turned off by school, causing more to tune out and drop out.


Unlike most of the CCSS-related messages that you have seen–the ones pouring out of the propaganda mills–this message is not brought to you by


PARCC: Spell that backward
notSmarter, imBalanced
AIRy nonsense
CTB McGraw-SkillDrill
MAP to nowhere
the College Bored, makers of the Scholastic Common Core Achievement Test (SCCAT),


nor by the masters behind it all,

The Bill and Melinda Gates Foundation (“All your base are belong to us”)