A reader left this comment:
Insofar as the PARCC exam is concerned, as a reader, I’ve found the following to be true:
1. Many of the passages are insanely difficult, and most students are not psychologically mature enough to handle them, nor do they have enough background knowledge for the passages and tasks.
2. Many people from PARCC and Pearson HATE glossing. Trust me, I argued about several passages with them, and they refused to gloss. I think it depends on the team you get, though. Other people at various meetings said they were allowed to gloss a bit more than my team was.
3. The test is bloody difficult, and for many of the passages there are several answer choices that could be justified; however, according to Pearson, they were not the “best” answers… whatever that means.
Insanity, power, and money are in cahoots to destroy public education.

What does glossing mean in this context? Do you mean putting the definition of the word in the margins?
Can we have a What’s Wrong with the SBAC conversation too? Teachers are regularly told that the new Smarter Balanced tests were created and vetted by educators and teachers; yet, in my school, not a single teacher was consulted directly, nor does anyone recall being invited to do so. Three weeks ago, a group of roughly 10 high school educators, mostly English and Social Studies teachers, spent a couple of hours taking the English Language Arts SBAC practice test for juniors. As a professional who has taught reading, writing and critical thinking skills to high school juniors for over 14 years, I found this test more than problematic on multiple levels.

First, the Smarter Balanced marketing materials celebrate that these tests hold students to higher standards because they are no longer multiple choice. However, many of the questions that masquerade as open-ended are in fact multiple choice. For example, a number of questions called for students to read a text and highlight sentences that demonstrate a particular argument or meaning. Ostensibly, these were open-ended reading comprehension questions; however, when I was asked to highlight a passage from Life of Pi that demonstrated the main character’s concern for the tiger, the passage I wanted to choose was unavailable! It turns out only some of the sentences in this “open-ended” exercise are clickable, which means the questions are multiple choice after all (to say nothing of the fact that a college-educated 40-year-old who has read the book thought the best answer was one that was not possible to select)!

More upsetting to me was another question, again about Life of Pi, where students were asked to sum up the main emotions of the main character. At the outset of this group of questions, the student is asked to read a very long passage; for this specific question they are asked to read just one paragraph from that longer section. The question is insanely misleading because if you have read the longer passage, ALL the answers regarding the shorter passage are true. The whole point is that the character is experiencing a mixture of emotions, some of which are contradictory. None of the teachers in the room could figure out what the correct answer was, since a nuanced reading of the text would make all of them possible. What did the test makers intend? Great question! We never knew, because the practice tests do not allow you to see correct answers, nor do they provide explanations of strategies students might consider: yet another flaw.

My fellow teachers and I created a list of the additional defects we saw with the practice test, and it is long. I know I cannot detail them all here. But surely, if teachers had been consulted, in legitimate numbers, many of these same flaws would have been identified.
Please at least give us the list!
Smarter Balanced test?
Like jumbo shrimp, an oxymoron.
Ursula, your experience is shared by every thoughtful teacher I know who has taken the SBAC ELA practice test. Only if you don’t look at them closely can you continue to believe the proponents’ PR that they’re “smarter,” better tests. The proponents have framed the tests in a way that propagates falsehoods. They say the old ELA tests only tested rote knowledge. Wrong. The old ELA tests often tested metacognitive reading and writing skills; they were often pretty content-free. And so what if tests measure knowledge? What’s wrong with knowledge? Do we prefer knowledge’s opposite, ignorance?

The proponents tout the lack of multiple choice questions, as if multiple-choice tests are inherently bad. Wrong. Multiple choice tests can be, but aren’t always, decent measures of knowledge (which, contrary to popular opinion, is actually important). The multiple choice questions the SBAC does use are used for an improper purpose: to test thinking skills. They’re not good at that. Of course open-ended responses, which the SBAC does have, are better than multiple choice questions for seeing what’s going on in a kid’s mind, but essay (or any other open-ended) questions can be done and used badly.

I use essays to test a student’s depth of history knowledge. I believe the process of composing an essay helps cement the knowledge in kids’ brains. Core knowledge about the world we live in, cemented in long-term memory, is an essential ingredient in reading and writing ability, and in intelligence in general. The value, therefore, is two-fold. I can get an X-ray into what the kid knows and assign a fair grade, and the kid starts to know the stuff better and more permanently as a result of studying for and composing the essay.

The open-ended responses on the SBAC purport not to measure depth of knowledge but to tease out kids’ critical thinking and other skills. But do they really? And have the makers demonstrated that these skills can actually be imparted by the student’s ELA teacher? What if the questions are really measuring a mixture of vocabulary knowledge (which depends on general knowledge of the world, which enters a kid’s head from myriad sources, by no means just the ELA teacher’s mouth; in fact, sadly, these days an ELA teacher isn’t supposed to impart content knowledge, just “skills”) and raw mental processing power (which depends on genetics and the mother’s prenatal self-care), as well as practice on similar tasks in school? What if the test-prep activities the ELA teacher provides do NOT *impart* any skills but merely *lubricate* pre-existing, built-in cognitive functionalities that interact with the knowledge base the student happens, by chance, to have acquired? (And what if the opportunity cost of these test-prep activities is starving kids’ minds of the reams of world knowledge they need to become superior readers, writers, thinkers and citizens?) In this case, the test cannot be considered a measure of the ELA teacher’s (or even the school’s!) efficacy.

Before anyone can get on board with these tests, we need the makers to spell out the theory of mental development, teaching and learning that lies beneath them. I have a feeling that all we’ll find there is half-baked gobbledygook.
Agreed. The 3rd grade SBAC ELA included many poorly worded questions, ambiguous answer choices, and design issues that forced kids into choosing a different answer than the one they intended. Many of the questions are hidden within 8-sentence blocks of text. Kids are having to read these blocks multiple times to understand what they are even supposed to do, and honestly, many of them don’t understand what the question is asking because the questions are not worded in vocabulary 3rd graders are familiar with. There is no special formatting, such as block text or italics, to differentiate a question, a quote, etc. It’s terrible.
Life of Pi was part of the 7th grade Code X curriculum that we had to use with the 80% of our students who entered our middle school reading at a K-3rd grade level. Although the students did enjoy reading Life of Pi, it had to be read to them by their teachers, who had to create multiple entry points so that our 33% newcomer ELLs and our 30% SWDs, not to mention the 13% who were both SWDs and ELLs, could even understand what was happening. And I can’t forget that the rest of my general population could not read most of it because they were so significantly behind in reading.
Anyway, my teachers did an amazing job of helping the students make meaning out of complex, deep texts that required much front-loading and a week of pre-lessons just to get into the real lesson, so to speak. So why are we making our 7th graders read complex texts that are being used for high school high-stakes testing? Doesn’t anyone who publishes these tests and curricula know how to write purposefully for specific grade levels?
I am all for children reading complex texts, but it has to come at the right time, not at the cost of frustration for many of our students.
You bet they are intent on destroying public education! It is evident! I was helping my 8th grade daughter with her EOY PARCC Math practice test, and I encourage everyone to go online and view the EOY PARCC 8th grade Math practice test. I bet you that legislators could not pass that test. It made me furious as I helped her through it. These are 13- and 14-year-old children. Some of the questions have 6 answer choices… and, oh yes, you must choose all the correct ones from those 6 possible answers. Please go to the PARCC monster of a website and view this monstrosity of a test, and keep in mind that 8th grade Math teachers have much, much less time to teach their students because of the February PBA testing! It is all insane, and I am so sad and frustrated with all of it, as a mom and a teacher.
We need to be able to scrutinize the questions and answers after the tests are administered so we can judge whether or not they are fair and meaningful. If they don’t release the questions to allow this inspection, this alone is grounds for protesting the tests.
It’s not about test security. It’s all about test scrutiny.
I like it! Well-put.
No, not after but before.
I contend that any teacher who administers a test that he/she has not read and vetted is being unprofessional and unethical to boot!
The “what’s right with the PARCC test?” list would be much shorter: { }
“The test is bloody difficult”: I’m guessing that, for this reader, American might be a second language. 🙂
To answer the question posed in this posting: read and comprehend all of the epistemological and ontological falsehoods and errors contained in the concepts of educational standards and standardized testing, which render the results/conclusions of the PARCC COMPLETELY INVALID, as proven by Noel Wilson in his never refuted nor rebutted treatise on those educational malpractices, “Educational Standards and the Problem of Error,” found at: http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine.
1. A description of a quality can only be partially quantified. Quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category by only a part of the whole. The assessment is, by definition, lacking, in the sense that “assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. When educational standards and standardized testing attempt to quantify those interactions, the descriptive information they produce is inadequate, insufficient and inferior to the point of invalidity and unacceptability.
2. A major epistemological mistake is that we attach, with great importance, the “score” not only to the student but also, by extension, to the teacher, school and district. Any description of a testing event is only a description of an interaction between the student and the testing device at a given time and place. The only logically correct thing we can attempt to do is describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student, because it is a description not of the student but of the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even with the “grading” of students by a teacher).
3. Wilson identifies four “frames of reference,” each with distinct assumptions (an epistemological basis) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think of a college professor who “knows” the students’ capabilities and grades them accordingly), the General Frame (think of standardized testing that claims to have a “scientific” basis), the Specific Frame (think of learning by objectives, such as computer-based learning that requires a correct answer before moving on to the next screen), and the Responsive Frame (think of an apprenticeship in a trade or a medical residency program, where the learner interacts with the “teacher” with constant feedback). Each category has its own sources of error, and more error enters the process when the assessor confuses and conflates the categories.
4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words, all the logical errors involved in the process render any conclusions invalid.
5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid, that is, error-free, or at least supposedly with minimal error (they aren’t). Wilson turns the concept of validity on its head and focuses on just how invalid the machinations, the tests and the results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error,” any one of which renders the making, giving and disseminating of test results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
6. Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory.” In other words, start with an invalidity and end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel that finds the occasional acorn, a result may be “true”), or, to put it in more mundane terms, crap in, crap out.
7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it attempts to measure “’something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.”
The whole process harms many students, as the social rewards available to some are denied to others who “don’t make the grade (sic).” Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note, with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self evident consequences. And so the circle is complete.”
In other words, students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, “I’m an ‘A’ student” is almost as harmful as “I’m an ‘F’ student” in hindering students from becoming independent, critical and free thinkers. And having independent, critical and free thinkers is a threat to the current socio-economic structure of society.
Is there an answer to “What is glossing?”
“Glossing” refers to explaining words that are not commonly used in texts, words whose meanings are difficult to infer due to historic context, etc. As defined by the most important app of all time (dictionary.com), a gloss is “an explanation or translation, by means of a marginal or interlinear note, of a technical or unusual expression in a manuscript text.” Many things on that test are Victorian… the wankers don’t have to pay copyright on those; they’re getting most of the test for free. Travesty number 1,000,256. Diane, if you need more info and evidence, e-mail me at the address I provided that no one else can see. I’ll do what I can.
Seriously, what IS glossing?
See above comment. Footnoting unfamiliar terms and references for students.
Isn’t that when a gal puts that stuff, gloss, on her lips?