The state of New York released some of the questions that were used on its tests for grades 3-8. Kate Taylor and Elizabeth Harris of the New York Times wrote about a question on the third grade test that more than half of the children got wrong. When the author of the passage was asked the same question, he got it wrong. After he heard the “right answer,” it made sense to him.
The East African fable goes like this: A man frees a snake that is trapped between two rocks, and as a reward the snake gives him a charm that will allow him to hear what animals say, but only if the man keeps it a secret. The man betrays his new power by giggling at the things he hears, arousing his wife’s curiosity. He eventually tells her about the charm, and it stops working.
This story, which was included on this year’s New York State third-grade reading test, is easy to read. But a couple of the questions that went along with it on the test were trickier, stumping many third graders and, perhaps, even a few much older readers.
Some of the questions were relatively easy; others were “hard.” But “hard” seems to mean that they were confusing, poorly written, and made sense neither to children nor to many adults.
Peter Afflerbach, a professor of education at the University of Maryland and an expert in reading assessments and comprehension, said he considered the questions to be a mix. While some of the simpler questions seemed acceptable, he said, the more complex ones could sometimes be confusing.
“A really important guideline for item-writing is you never want the prompt to be more complex than the text the child actually read,” Dr. Afflerbach said.
Dr. Afflerbach was troubled by the third question: “How does Niel add to the problem in this story?” The correct answer is B, that “he laughs at what the animals say.” But the entire premise of the question might not make sense to a child, he said.
“As a third grader, I’d be thinking, ‘How is a magical charm a problem?’ ” Dr. Afflerbach said.
The reporter tracked down the writer of the passage. “He was initially stumped by the same question Dr. Afflerbach took issue with, but when told the answer, he said it made sense.”
Probably the third graders had the same response. When confronted with the question, they were stumped, and they picked the wrong answer. If they had been told the right answer, as the author was, then it would have made sense. If the author of the passage couldn’t understand the question or its answer on first reading, why should third-grade children? This is a “gotcha” question, unfair on its face.
Until private school students of privilege take the exact same exams that public school students take, those exams will continue to be designed to “prove” that public school students aren’t learning enough. If you put enough ambiguous questions into a test, even the most well-educated 8-year-olds are likely to get some questions wrong. If the children at the University of Chicago Lab School had to take this exam, their well-connected parents would shut it down in a minute. If the education at the Lab School were designed to teach children how to think like a test maker and answer these questions correctly, those parents would be outraged.
The thing I love most about the opt-out movement is that it is centered in affluent suburbia among college-educated parents. The point of these poorly designed state tests is to convince those parents that their excellent public schools aren’t really as good as they think, so that the charter schools chomping at the bit to do the difficult job of educating the upper-middle-class children in affluent neighborhoods can move in and market to them. Those charter schools and their wealthy backers are desperate to convince the public that these tests are fantastic, because their whole curriculum is designed to get students to “think like a test maker” and put aside their own logical reasoning. Unfortunately, it has backfired, because when those students have to take an exam that both private school students and public school students take, one that involves logical reasoning, like the SHSAT, they have to unlearn all their bad habits. So smart parents are realizing that being prepped for 8 years to robotically answer these questions will hurt you in the long run.
The problem is not robotic answering of questions… if only that were the problem… you could do down-and-dirty test prep and continue on with a rich curriculum. The problem is that you CAN’T definitively answer questions that are so interpretive that well-educated and capable adults could choose different responses and come up with equally plausible explanations for them. The test makers could choose a different correct response if they wanted to and write perfectly reasonable justifications. That invalidates the test as a tool for any valid assessment purpose.
I agree; I should not have said “robotic.” But in fact, the test prep material in most public schools these days IS designed to suss out what the test makers want. It involves teaching students to put aside logic and think like a test maker. And these exams are far worse than other standardized exams taken by ALL students, including private school students, where the far fewer ambiguous questions seem to be sincere mistakes. Look at how unnecessarily convoluted the wording of these questions is, even in the ones that most students answered correctly. Why? So that lots of time must be spent teaching students to figure out the “right” answer to the convoluted question.
By the way, the good public schools that have students from middle-class families DO “continue on with a rich curriculum” after the down-and-dirty test prep. It’s to the credit of those public school teachers that they are juggling both so well. But that is much harder to do if you are given a class of 28 or 30 below-average, at-risk students and told that your job depends on how well they score on standardized tests. Those students miss out, because answering these kinds of questions correctly is what constitutes a “good education.”
My husband and I went through a Barron’s ELA prep book with our child, who will be entering third grade this year. I encountered in the test prep materials what I felt were “gotcha” questions. I thought the material presented was at the level of a gifted and talented program. My child is bright and will do well on these tests. However, I can’t imagine how a child from a disadvantaged household will do. The tests to me just seem to be designed to designate poor kids’ schools as failing schools so they can be converted to charter schools.
The problem here is that these questions, particularly the tougher ones, are about a skill that isn’t taught in school for the most part and may not be particularly teachable: how to think exactly like the people who make up standardized test questions.
And furthermore, while I’m someone who excels at that skill (I had no problem with any of these questions and knew exactly what the right answers were and where I was being misled by certain wrong answers), my prowess has nothing to do per se with what we value about reading, particularly not reading tales like this charming story (and it is a very nice example of a folk tale).
If we continue to bow to the demands of reactionary politicians and educational deformers in making tests like this central to both curriculum and pedagogy, we’re doomed and we’re ruining reading and other key aspects of education for our children.
The folks who sell this nonsense and those who buy into it should be ashamed.
Even if that skill could be taught, why would we want to?
I don’t agree that the test is merely evaluating a skill we don’t, or may not be able to, teach. These are badly composed, interpretive multiple-choice questions with distractors that could be right answers. The skill of logical thinking is insufficient for such questions. The example given is admittedly not an exemplar of this practice, but there are some, and, as you might expect, the worst offenders do not make it into the release items. In cases like these, whether you choose the right response or not doesn’t matter. Whether kids choose the right response or not doesn’t matter. What matters is this: if you can identify equally balanced, well-reasoned, evidence-based, logical arguments for two responses, the question is not a fair question and provides no definitive information about the skill acquisition of a child, the teaching skill of their teacher, the leadership of their principal, or the quality and right to self-determination of their district. #optout
curious idle, you are 100% correct. Thank you. I will add that the tests can only be this “badly composed” because private school students get to opt out. If all private school students were tested and their school-wide scores released, this kind of intentionally ambiguous test would be thrown out in a minute — their well connected parents would be embarrassed when so many students were judged “not meeting standards”.
Rather than all the practice tests and prep, we should teach the kids how questions, good and bad, are constructed. Hey, boys and girls, where’s the stem? Hey, which answer(s) are the distractors? Hey, here’s who you can write to and complain to about those tests and questions. Yeah. That’s the ticket.
Exactly. That is what I taught for twenty or so years. I also taught how Scantrons could fail to read correctly.
I began teaching students how tests were constructed in 1982. Remember, students have been taking these sorts of tests for a very long time.
I often wondered why I was intimidated by Miller’s Analogies when I was in high school. Once I began teaching students how to answer them, I realized it was because I had not been taught how they were constructed.
As a voracious reader, I could find multiple creative analogies. So I taught students to limit their reasoning when answering questions of this nature. Don’t think outside the box!
Fortunately, I spent little time on test-taking skills but did feel it was essential given that I could not change the system on my own. I needed to prepare them for the state tests which at that time were spread over a week and did not completely replace the curriculum as they do now.
Niel eating all the food in the house was a problem for the mice.
Aha! Finally someone who grasped the *true meaning* of the fable! Thank you, concerned mom.
Or maybe not, if fables, myths, and folktales do not have “best answer” interpretations.
Case closed.
What case??
Teacher licensure tests are the same, as most of us know. The MA MTELs by Pearson have many questions that are convoluted and confusing. You have to first figure out what the question is asking. Why? What’s wrong with a straightforward question? In the past, 50% of takers failed. Not sure if that is still the case. I recently had to take 3 tests, in my areas of expertise, to get a “new” license and barely passed them. (At least they didn’t get any more money out of me!) In many areas, my knowledge was deemed “limited.” After 30 years. Maybe I should quit!
After spending multiple years teaching test-taking skills, I was faced with taking a state reading, writing, and math test in order to add a credential to my license. I was convinced I had no business passing the math section and lined up a tutor in case I didn’t.
When I took the math section, I did what I taught my students. I answered everything I knew, then I estimated the answers for the ones I had some knowledge of, and finally used reading skills to answer the questions I had no clue about.
There must be a better way of determining one’s knowledge since multiple choice tests can be gamed.
“add”, “problem”? boo and double boo. The only problem is a snake trapped between two rocks. Nothing else in the story meets the definition of a problem.
A man doesn’t get paid unless he goes to work. Is that a problem, or a condition?
Agreed.
Who scores the NYS state exams? I know many schools participate in regional scoring, but are the exams then scored again?
I wonder this, too. It used to be that charter schools graded their exams separately from public schools, which seems ripe for cheating. Is it still the case that the state allows charter schools to grade their own exams, with their own “paid monitors”?
The comments from the public on that story are very sympathetic to the third graders who took the test. Worth a look. Lots and lots of people have problems with the phrasing and content of these questions and felt strongly enough that they left a comment.
Maybe there’s hope after all 🙂
They feel sorry for the kids who had to answer the questions.
Hate to be a downer but all the comment threads I’ve read on NYT ed articles over the last couple of years show the great majority of their readers have been anti ed-reform right along. Probably got wise during the Bloomberg reign. I very nearly cancelled my subscription because the Times would routinely find the pro-ed-reform comments and label them “NYT Picks”, echoing the way Tisch & Cuomo ignore the voting public.
One pinprick of light: “NYT Picks” seem to be coming around. Now if they would just cut back on the charter-school press releases…
Aaahhh, the sound of MMoOO*ing of the cattle being led to the slaughterhouse oblivious to the final product of steaks and hamburgers with the steaks garnering more MMoOO*lah than the burger. The somnambulistic baying at the blue MMoOO*n over Montauk keeps the cattle at ease as they trudge along towards their finaltude.
MMoOO*ing is what these debates over standardized test questions are, for the fundamental concepts, the epistemological and ontological underpinnings of the whole educational standards and standardized testing nonsensical claptrap, bunk and bombast, are so COMPLETELY INVALID that to argue test question validity is insanity.
To understand this “udder” insanity, read and comprehend what Noel Wilson has proven about the many errors, falsehoods and fudges of the educational standards and standardized testing that render (double entendre intended) the whole process COMPLETELY INVALID. See “Educational Standards and the Problem of Error” at: http://epaa.asu.edu/ojs/article/view/577/700
Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine.
1. A description of a quality can only be partially quantified. Quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category only by a part of the whole. The assessment is, by definition, lacking in the sense that “assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify educational standards and standardized testing the descriptive information about said interactions is inadequate, insufficient and inferior to the point of invalidity and unacceptability.
2. A major epistemological mistake is that we attach, with great importance, the “score” of the student, not only onto the student but also, by extension, the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only correct logical thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student as it cannot be a description of the student but the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even the “grading” of students by a teacher).
3. Wilson identifies four “frames of reference,” each with distinct assumptions (epistemological basis) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think college professor who “knows” the students’ capabilities and grades them accordingly), the General Frame (think standardized testing that claims to have a “scientific” basis), the Specific Frame (think learning by objective, like computer-based learning, getting a correct answer before moving on to the next screen), and the Responsive Frame (think of an apprenticeship in a trade or a medical residency program where the learner interacts with the “teacher” with constant feedback). Each category has its own sources of error, and more error is introduced into the process when the assessor confuses and conflates the categories.
4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”
In other words, all the logical errors involved in the process render any conclusions invalid.
5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid, i.e., errorless or supposedly at least with minimal error [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations and the test and results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error,” any one of which renders the test making/giving/disseminating of results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.
6. Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory.” In other words, start with an invalidity, end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel who finds the occasional acorn, a result may be “true”), or, to put it in more mundane terms, crap in, crap out.
7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”
In other words, it attempts to measure “‘something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade (sic).” Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?
My answer is NO!!!!!
One final note with Wilson channeling Foucault and his concept of subjectivization:
“So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self-evident consequences. And so the circle is complete.”
In other words, students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, “I’m an ‘A’ student” is almost as harmful as “I’m an ‘F’ student” in hindering students from becoming independent, critical and free thinkers. And having independent, critical and free thinkers is a threat to the current socio-economic structure of society.
*MMoOOing = the act of mental masturbation or obligatory onanism
I’m following The 74 and it’s really kind of fun.
Today’s stories include: 6 on how awesome charter schools are and 5 on terrible events in public schools OR terrible public schools.
Also, Jeb Bush and Scott Walker are ultra-fabulous while Bill deBlasio is terrible.
What absolute hacks.
https://www.the74million.org/
Chiara, Thanks for the link. First time I glanced over this website. The most amusing item I found were the common core flash cards. Can’t wait for everyone to print them out and hand them out to all their friends. They mention Anthony Cody and Diane.
What’s funny about the Common Core is I haven’t heard a thing about it in Ohio other than the Common Core tests. There is literally no mention at all of the Common Core standards themselves; the thing has been utterly consumed by Common Core testing.
I figured that would happen given the near-obsessive focus on test scores in ed reform, but I’m surprised there isn’t more of a marketing effort to at least pretend that this goes deeper than more difficult and longer tests.
For context, what were the other possible answers? A, C, D…
You have to open the link in the New York Times article.
Thanks, Duane (up there at 4:53 PM), for the never-redundant, always-bears-repeating reminder that NONE of these tests are valid or reliable. Pear$on has NEVER had any type of quality control whatsoever, & has NEVER been held accountable for any of their crappy (& now CCRAPy) tests & equally crappy scoring methods. POTUS & his CCRAPy Ed. Sec. & Congress have just left Pear$on alone…to make billion$ on the back$ of our kid$–as well a$ their teacher$ (how many teacher$ have been evaluated on the ba$i$ of the$e NOT $tandardized–nor, again, NOT well-$cored te$t$?) & their schools (HOW many schools have been labeled “failing” due to poor te$t $cores–“poor te$t $core$” which most certainly are due to POOR te$t que$tion$!)
But–Pear$on Soldier$ on, unregulated & unaccountable. Sounds like it’s time for another Manhattan Pear$on Field Trip. Also, in Glenview, ILL-Annoy, & anywhere else Pear$on ha$ a campu$. The Opt Out movement is great & strong, but we ALL need to be pointing the BIG finger DIRECTLY at Pear$on–so many people out there don’t know how much of their taxpayer public ed. dollar$ are taken from the schools & pumped directly into Pear$on. (ILL-Annoy, for example, has a no-bid, 4-year, $160 million contract with them. When I inform people of this, they are truly shocked–& angry. I’ve told them to write & call their state & U.S. legislators on this & tell them just how angry they are.)
Let’s get EVERYONE angry!!!
&–BTW–since Arne is getting ready to move back to ILL-Annoy, someone want to make a bet w/me {Duane-?} that he gets a cushy, executive, 6-figure-salary job, here, w/the company that’$ “Alway$ Earning, Never Learning?”
Yes it’s of value to get some good articles out there detailing the premium cost to taxpayers of the CCSS testing arm, & where the $ goes. But I see no value in vilifying the biggest hog at the trough; there are plenty more squeezing in to take their place. The anger & energy needs to be channeled toward changing the legislation that makes the trough available to the pigs.
Cutting DOE power to the bone seems one viable avenue. ESEA re-auth seems to be getting a start on it. The idea may originally have been to facilitate enforcement of Civil Rights laws, but it never pays to give so much power to the unelected. They can use it for whatever they want, while disingenuously coloring it civilrightsy.
Getting behind legislative efforts to overturn Citizens United and to enact campaign-finance reform. Making sure we get a Democrat in the WH to lock in a more moderate or left-leaning SCOTUS.
Meanwhile help grow the OPT-OUT movement.
The ‘correct’ answer doesn’t make sense, either. There is no problem, it’s a fable. There is a point to the story but no problem is demonstrated in it.
“A man frees a snake [Pearson] that is trapped between two rocks, and as a reward the snake gives him a charm that will allow him to hear what animals [standardized test writers] say, but only if the man keeps it a secret. The man betrays his new power by giggling at the things he hears, arousing his wife’s curiosity. He eventually tells her about the charm, and it stops working.”
DAM, SomeDAM, you are BRILLIANT!
It is routinely the case these days that the test questions are more complex than the passages, because the test makers are attempting, for reasons of expediency in grading, to use multiple-choice questions to test sophisticated thinking (what in EdSpeak is called “higher-order thinking,” even though much of that thinking involves what is arguably “lower-order” inferencing, in the sense that we largely do it automatically, below the level of conscious awareness of the processes by which the inferences are being formed).
I have an IQ of 180. I got a perfect score on the verbal section of the GRE. I have worked as an editor and writer for many decades. My publications list runs to 12 pages, single spaced. It’s common for me to read a novel or a book-length work of nonfiction in an evening. Yet I often “have trouble” with these tests. From what I’ve seen, I would say that MOST of the questions on the latest generation of high-stakes state ELA tests are unacceptably poorly written and, more importantly, have poorly conceived formats.
Often, I know what answer the question author was probably looking for, but the question format is so convoluted or the question so poorly written or the passage so poorly edited that the question doesn’t ask what its author thought that he or she was asking, and either a different answer than the one the author believes to be correct is demonstrably correct, or more than one answer is actually correct, or the question is not actually answerable as written.
When I was a kid taking education methods classes decades ago, we were taught to use so-called “objective-question formats” (multiple-choice, true/false, etc.) to test basic factual recall and essay question formats to test more sophisticated thinking. That’s a good rule of thumb for test makers but one that is completely ignored by this generation of high-stakes tests (and the supposed testing experts constructing them).
Think of trying to use a butter knife to turn tiny Phillips-head screws when assembling a children’s toy on Christmas morning. That’s the kind of thing these test makers are doing. They are using the wrong tool for the job.
That’s but only one of the many, many problems with these exams. Among the others, the most significant is that the ELA tests do not by any stretch of the imagination validly or reliably measure what they purport to be measuring.
Another big problem, and this one involves a terrible irony, is that there is no accountability on the part of the test makers because we aren’t allowed to see most of their questions or to talk about them if we have seen them. So, the folks providing the accountability instruments aren’t being held accountable.
Yet another is that the results of the tests are of ZERO instructional value. In most cases, teachers either do not get a detailed breakdown of the scores or get one only after many months have gone by, and at any rate, such a detailed breakdown would be useless. Why? Because the questions purportedly measure proficiency with standards that are themselves so vaguely and generally worded that operationalizing them sufficiently to test them objectively is not merely difficult but, in fact, impossible.
Duane is right to think that the testing is an utter scam. A billion-dollar-a-year scam.
There are a lot of geeky types who support the state testing because they believe that education is simple. Make a list of what people need to know and do. Teach them those things. Then test them to see if they have learned them.
But here’s the problem: “What people need to know and do” are not at all the same kinds of things in ELA as in a subject like elementary math.
It’s SIMPLE to put together a multiple-choice quiz to find out whether a kid has mastered the multiplication table through 12 x 12. It’s a task of an altogether different level of complexity to put together such a quiz to answer, validly, the question of whether that kid can “Determine the meaning of words and phrases as they are used in a text” (CCSS.ELA-Literacy.RL.8.4). Most of the CCSS ELA standards, like that one, are so general, so broad, so vague that, again, it is not only difficult to operationalize them sufficiently to test them validly but IMPOSSIBLE to do so, for any valid system of operations that one might come up with would rest on an operational definition of the standard too unlike the original standard to be recognizable as that standard at all.
See the problem? This really needs to be explained to people.
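For what it’s worth, the “simple” half of that contrast really is mechanical. Here is a small, hypothetical sketch (not anything an actual test vendor uses) showing how trivially a times-table multiple-choice item, with plausible distractors, can be generated and scored by a program; nothing remotely comparable exists for the ELA standard quoted above.

```python
import random

def times_table_item(max_factor=12):
    # Generate one multiple-choice item for the times table through 12 x 12.
    a = random.randint(2, max_factor)
    b = random.randint(2, max_factor)
    correct = a * b
    # Plausible distractors: products thrown off by one factor or by one.
    distractors = set()
    while len(distractors) < 3:
        wrong = correct + random.choice([-a, a, -b, b, -1, 1])
        if wrong != correct and wrong > 0:
            distractors.add(wrong)
    options = [correct] + sorted(distractors)
    random.shuffle(options)
    return f"What is {a} x {b}?", options, options.index(correct)

question, options, answer_index = times_table_item()
print(question)
for letter, value in zip("ABCD", options):
    print(f"  {letter}. {value}")
print("Correct answer:", "ABCD"[answer_index])
```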
“Determine the meaning of words and phrases as they are used in a text” (CCSS.ELA-Literacy.RL.8.4)
Just look at that one excerpt from that one ELA standard. What is meant by “the meaning”? Meaning in written and spoken language is extraordinarily complex and takes many, many forms. Does “meaning” here mean intention or significance or both, for example? Those are VERY DIFFERENT things, measured in COMPLETELY DIFFERENT ways, and both are VERY IMPORTANT. So, we’re already lumping together apples and oranges, shoelaces and conundrums, wishes and the Los Angeles Rams, and we haven’t gotten past the third word of the standard.
What do we mean by “a text”? A text could be anything, literally anything, but the authors of the standards probably meant written texts. What texts, then? Texts of “grade-level complexity” as measured by Lexile scores, the standards [sic] tell us elsewhere. But Dylan Thomas’s phrase “Time held me green and dying” is of less than grade 2 complexity by a Lexile measure, though it is doubtful that most 2nd graders would grok it. The fact is that by referencing “texts” so vaguely here, the standard becomes so broad that it might be translated “The kid can read,” so it’s of no help, at all, in providing any sort of operational definition of what that means.
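To make that point concrete: Lexile’s exact formula is proprietary, but, like other readability measures, it is driven by surface features such as sentence length and word characteristics. As a rough stand-in, here is a minimal sketch using the public Flesch-Kincaid grade-level formula; the word, sentence, and syllable counts for the Thomas phrase are hand-tallied assumptions. It only illustrates how a surface-feature formula can rate a profound poetic line as early-primary-level text.

```python
# A minimal sketch, not Lexile itself: the Flesch-Kincaid grade-level formula,
# another surface-feature readability measure, applied to the Thomas phrase.
def flesch_kincaid_grade(words, sentences, syllables):
    # FK grade = 0.39*(words per sentence) + 11.8*(syllables per word) - 15.59
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hand-tallied counts (assumptions) for "Time held me green and dying."
grade = flesch_kincaid_grade(words=6, sentences=1, syllables=7)
print(f"Estimated grade level: {grade:.1f}")  # roughly 0.5, i.e., below grade 1
```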
I pulled that one line AT RANDOM from the CCSS for ELA to give the flavor of the whole. The standards are a bunch of unoperationalizable vagaries, so of course, the tests based on them are invalid.
And the standards leave out almost entirely what kids might be expected to know. They are, for the most part, a vague, unoperationalizable attempt at a list of skills–content and context free, which is insane because texts exist in context and have content and one can’t treat them in isolation from either.
The folks who conceived this system of accountability for ELA don’t understand the most basic stuff about how language works.
Back to that original test question in Diane’s post. Of course, the question is ridiculous because there was no “problem” that the main character “add[ed] to.” A bright kid will go back to the passage and look in vain for such a “problem.” And given that that kid will have about a minute and a half per question, she will run out of time to complete the test because she will have spent so much time trying to figure out what the [expletive deleted] the questions mean.
The whole thing is ridiculous. It’s akin to what fundamentalists say about reading the Bible: that you will be inspired in your life. Yet for many years before people learned to read, they were still inspired by Nature. You just can’t test children’s dreams and ideas.
Bear in mind that when the publishers put out a few “sample release questions,” they carefully scrutinize their own tests to find the LEAST objectionable of the questions. In general, the questions on the tests are MUCH WORSE than are the ones that they release.
In order to use MC questions to test “higher-order thinking,” the test publishers have come up with some new test question formats that are insanely difficult for writers of test questions. One of these involves sets of paired questions. The first question in a pair asks about the passage. The second asks the student to identify the evidence from the passage on which the correct answer was based. Now, the question writer will be told that the distractors (the wrong answers) have to be “plausible,” so the writer has to come up with a question that has one right answer and three plausible but wrong answers, and then four pieces of evidence from the selection, one of which is the actual evidence for the right answer and three of which are plausibly evidence for that answer but not actually evidence for that answer.
The result is predictable–questions that have several answers that are arguably correct, which is, after all, what “plausible” means.
Try taking a 250-word selection and writing ten sets of such questions for it, and you’ll soon see why the latest batch of ELA tests is ridiculous. These tests were misconceived from the start.
Bob–thanks so much for all the information (& from a knowledgeable & brilliant individual).
To everyone–having said that, past time to point the finger at Pear$on–let’s ALL give them the fist. Stop whining & start winning–as I’d commented above, tell EVERYONE about how much in taxpayer $$$–meant to be used for REAL education, not teaching test-taking–your district & your state is absolutely wasting. Get people MAD.
Then, organize a march on your nearest Pear$on building(s). First, send out press releases. For those of you who have friends in those places, make sure what you do is covered in the media (& don’t be skeptical–some of us do, & even our events get media coverage). The school year is starting (early, again, to get those kids test-ready–& that’s the ONLY reason we start school earlier & earlier every year–it’s not so low income kids can get breakfast & lunch). It’s beyond time to shut Pear$on down (esp. before any more action is taken on TPP)!