Rachel Rich is a retired English teacher who has taken a deep interest in standardized testing. She wrote the following review of one of the two federally subsidized tests. Normally, I would tell you which test she has analyzed, but I have recently become acutely aware that the testing corporations hire security agencies to scan the Internet, looking for blogs and tweets that dare to mention their name. If you mention their name, the testing corporation goes to the Internet Service Provider and complains that you violated their copyright. The ISP then deletes your post or tweet. So I won’t tell you which national test she is writing about. I will just give you a hint: it is not the one that is CCRAP spelled backwards. It is the other one. (Let’s see if they miss this one.)
Rachel Rich writes:
S——r B—–ed Exposed
The online Third Grade SB Practice Test is the tip of the testing iceberg, but presumably made of the same basic material as the larger, submerged test. The “real” test is so hidden from view that you, other parents, teachers and even the students themselves are not even allowed to whisper about it, let alone criticize it. Given that other standardized tests publish their questions once the test is over, SB’s never-ending code of silence is unprecedented, probably to hide flaws. If the public mini-version is any indication, the final is a sloppily written, tricky, grossly unfair mess.
The current level of censorship surrounding SB would make Nixon proud. The test originators, Pearson, CTB/McGraw-Hill, and AIR, unleash internet spies like TRAXX and Caveon who set webcrawlers after key words like test names. Next, human spies dig into the Facebook, Twitter and other accounts of any griping parents, teachers, bloggers and especially children!
SB even sends out annual flyers to school administrators detailing how to spy on kids’ Facebook and Twitter accounts. Principals are supposed to suspend kids as young as eight simply for telling their parents there was a question about the Wizard of Oz on a Common Core test. Teachers are forced to sign gag orders or face firing for discussing the uber-test even in the most general terms. And right this very minute testing companies are forcing the removal of internet discussions under threat of lawsuits. Censorship is now as common as head lice in kindergarten.
Now let’s find out what they’re hiding:
The Language Arts Third Grade SB Practice Test is twenty pages long! Since it’s supposed to take an hour, we can easily calculate the length of the final. Officially third graders need at least seven hours to finish the math and English portions combined, meaning the real deal is a grueling 140 pages long!!! Tenth graders are assigned at least 8 1/2 hours, which would mean their tests are about 170 pages long! Endurance is now as key as knowledge. One kid told me afterwards his fingers hurt.
The test opens with a colossal three-page reading passage totaling 580 words. That is triple the length of passages in other tests, a length only suitable for in-class discussion, not a cold read. Still, the test repeats this flaw with similar, lengthy, redundant passages. Such a quantum leap in expectations renders all comparisons with other tests useless, meaning it can’t be proven to be a legitimate measure.
Previous tests were only 60-120 minutes long, while today’s third graders must sit still for 90-minute intervals totaling a minimum of seven hours for English and math combined. This minimum doesn’t include time needed for individual log-in, bathroom breaks, computer crashes, or SB transmission snafus. Already heaped with challenges, special education students need up to fifteen hours to finish, even though they know in advance they’ll probably fail. Even recent immigrants who can’t read English are required to simply sit and stare at the screen until the clock runs out. Test makers call that rigor, but it’s really just plain mean and stupid.
Sophisticated computer skills are required of kids, even though the makers don’t have their own act together. I had to click back and forth between passage and questions, which sent my answers into a black hole, as does pressing the tab button during typing. Eight-year-olds are also expected to highlight, drag and type fluently, which most cannot. I wanted to throw myself off the front porch as a martyr for the millions biting their nails and pulling out their eyebrows in sheer frustration.
Question 1: “Click the two details that best support this conclusion.” Kids are faced with choices twenty words long, although adult tests typically warm up with soft pitches, like choosing from short phrases.
SB, way to destroy kids’ confidence right out of the gate!
Questions 2, 11, 13, 27 have a Part A/Part B format. Question 2: “This question has two parts. First, answer part A. Then, answer part B.” This format is unfamiliar to adults, let alone eight-year-olds. Since these quirks don’t exist on the ACT, ASVAB, or Myers-Briggs, etc., they negate the Smarter Balanced claim that they prepare K-12 students for future tests. Equally befuddling, the fifth choice for Question 2 Part B is on the next page and since you can only open one page at a time, even I, an adult, overlooked it.
So why such trickiness? Teacher, teacher, I know! The more students SB fails, the more test prep they sell! As soon as last year’s testing month was over, SB solicited teachers through our district email to purchase out of their own pockets tutorials for improving student scores. In some districts they even use school contact lists to advertise directly to parents! These profits from the private sector are on top of their profits from federal and state funds. In 2012 alone, a year of limited pilot testing, the industry pocketed a cool $8.1 billion. No one is saying what today’s total is, probably for fear of alerting the public to this gigantic waste of tax dollars.
Question 3: “Arrange the events from the passage in the order in which they happen. Click on the sentences to drag them into the correct locations.” Many eight-year-olds don’t have the experience, let alone the dexterity to do this. Consequently they fail not from lack of knowledge, but from a lack of intelligent tests.
Question 5: “What inference can be made about the author’s message about animals? Include information from the passage to support your answer.” Also, Question 12: “What inference can be made about why the author includes the backpack in the passage?” Where do I begin? Little children’s brains can’t “infer” anything, because they still think only literally. It’s developmentally impossible for them to read between the lines or think figuratively. To say, “The girl has a chip on her shoulder” merely signals them to look for something on her shoulder, not that she’s angry. Teaching inference at this age is as unrealistic as trying to potty train every single one-year-old. Sure, a few precocious babies might succeed, but the rest will be driven batty.
These little kids are even required to type their answers! You have to be living in la-la land to expect fluent keyboarding at the age of eight. According to the US Census, a whopping 16% of students lack the home computers or hand-held devices necessary for practice, and most schools don’t have enough computers for all. Ironically, exploding testing expenditures have also forced most districts to drop keyboarding courses.
This boondoggle isn’t age appropriate precisely because zero elementary specialists were allowed to help with its design. Instead, reps from the College Board, ACT and Aspire idiotically “backward mapped” expectations for each age starting with college entrance exams and assuming every child should attend college. No Child Left Behind agreed, but who do you think set that agenda for the US Department of Education?
No joke, SB requires a B to pass!!! That’s to align with a B requirement for college entrance. But as kids didn’t we all need only a C to pass? Even reading levels are now one full year higher by graduation. No wonder only about a third pass. Meanwhile, doesn’t requiring all students be college eligible mean they all must be above average? Hard to believe intelligent adults fall for this. It’s oxymoronic!
Question 10: “The author uses a word that means placed one on top of another.”
Punctuation rules require quotation marks around “placed one on top of another”. Rushing to publication in just nine months, test makers clearly ignored the thousands of pleas for corrections, proving once again that they’re not about quality, but about profits. Investment sites squealed with delight over the chance to stuff $2.2 trillion in public education funds into their private pockets.
Question 16 has a typographical error: “Move the groups of sentences so that the group that makes the bestbeginning (sic) comes first.” Even English majors don’t agree on the correct sequence for this story, but they do agree SB should have hired a copy editor. I actually heard one test designer complain that since corrections impact multiple contractors, from software to print, they’re just too expensive to make. That’s because they’re beholden to shareholders, not students.
Question 21’s phrasing is light years above grade level: “Which of the following sentences has an error in grammar usage?” Seriously? Why not, “Which sentence uses incorrect grammar?” Strangely, teachers aren’t even allowed to help kids understand these obtuse questions, but instead must parrot “Do your best.” Kids get so stressed out not knowing what they’re supposed to do that SB manuals actually detail how to handle crying, vomiting and peeing pants. You wouldn’t believe how many parents and teachers tell me this is actually happening to their own students! It’s epidemic.
Question 23: Who in their right mind gives a listening test about The International Space Station to third graders? On what planet do little ones have either the background or the interest? It’s also grossly unfair because they don’t study this until the fourth grade. Besides, not everyone is a white, suburban, middle-class kid whose school and parents can afford trips to the planetarium.
No surprise, SB has never passed any validity studies that compare it with other measures such as the NAEP, PISA, SAT, ACT, high school or college graduation rates. In fact, they’ve quietly issued disclaimers. If the test did have validity, they’d be crowing it from the rooftops. But why should they bother when they’ve already pocketed the cash?
Now would someone please blow the lid off the real test, preferably before quitting or retiring?!
Rachel Rich
I’ve copied Rachel’s comments for re-posting if and when the thought police come and erase your post.
Just getting ready…
thanks, Rockhound. One never knows, do one?
Respectfully, your guest blogger is critiquing material published by SBAC. I “inferred” that from what she wrote.
“The online Third Grade SB Practice Test is the tip of the testing iceberg….”
“The Language Arts Third Grade SB Practice Test is twenty pages long!”
“The test opens with a colossal three page reading passage totaling 580 words.”
The ELA practice tests were released months ago. Russ Walsh, among others, critiqued them.
http://russonreading.blogspot.com/2015/02/readability-of-sample-sbac-passages.html
As you were.
Lucia,
The guest blogger ended her post by saying, “Now would someone please blow the lid off the real test, preferably before quitting or retiring?!”
She did not claim to be giving away secret information (although I implied that she was, given the current climate of censorship by PARCC and Pearson).
Yes, she made no such claim.
I linked to Russ Walsh’s analysis because it provides another perspective from a respected source. He found the passage “relatively easy to read for an average third grader” and described it as “a straightforward and pleasant story that follows regular narrative structure. Vocabulary appears very appropriate for a third grade reader.”
Comparing SBAC’s questions (in general) to PARCC’s, Walsh said, “As would be expected from a test tied to the CCSS, a number of questions asked students to cite evidence for their answers. In the PARCC test this accounted for almost 50% of the questions. On the SBAC this percentage was closer to 30%. Every grade level was asked a question requiring determining the meaning of a word from context. This is also aligned with skills emphasized in the CCSS. Every passage also included questions aimed at the understanding of key ideas in the text and at an overall understanding of the text. While some questions were aimed at text analysis, the balance on the SBAC appeared to me to be more in keeping with a focus on a general comprehension of the text than were the PARCC samples I looked at, which were more focused on passage analysis.”
I appreciate that Ms. Rich also comments on the challenges of third graders taking the test online, which Walsh didn’t address.
Lucia,
I understand that you like SBAC, but neither I nor most readers of this blog are admirers of Common Core or the CC tests. We don’t care for standardized testing that is misused and lacks any diagnostic value.
Lucia “As would be expected from a test tied to the CCSS, a number of questions asked students to cite evidence for their answers. In the PARCC test this accounted for almost 50% of the questions. On the SBAC this percentage was closer to 30%.”
That’s a weird argument. Why have more than 0% of these kinds of questions for third graders?
I also don’t understand why you quote the analysis of Walsh. Wasn’t Rachel’s analysis clear enough? Even if she just showed us the questions, we could have understood immediately that the test was inappropriate.
Walsh gives us the analysis, without quoting the actual questions, which then relies on CC-modified Lexile scores and other impressive stuff like Flesch-Kincaid and Fry measures. Paradoxically, Walsh’s article is completely unreadable to me, though it’s about the readability of grade school texts. While I went through it, I asked many times, “What the heck is he talking about?”
I wonder at what point it became acceptable to talk about grade school education using a jargon that requires special training to understand?
Mate, I’m not sure whether Walsh was making an argument; I read it as a comparison based on descriptive statistics he produced through his own analysis, though you may disagree with me.
I’m not commenting on the clarity of Rich’s analysis, or Walsh’s. I believe Walsh has been described here and elsewhere as a literacy expert; Walsh and Rich took different approaches to examining and describing the SBAC ELA tests, and you can use that information or disregard it.
I also linked to Walsh’s analysis so as not to narrowly circumscribe it or make its context difficult to locate.
Perhaps you’re not an ELA teacher, so you’re not familiar with the various ways readability is measured and described. The question of whether the test passages are of a grade-appropriate readability has been a topic of interest to many since the practice tests were made public.
Thanks, Lucia. Imo, Walsh is making an argument for the readability of the SBAC (practice) tests. He writes:
Unlike the passages I reviewed for the PARCC test, I think the passages I examined from the SBAC test are fair representations of what children in those grades can and should be able to read.
So we could say he further argues that the SBAC tests are more readable than the PARCC tests he critiqued in
http://russonreading.blogspot.hu/2015/02/parcc-tests-and-readability-close-look.html
But you are correct that I am not an ELA teacher, and while I understood the basic conclusions, I had no comprehension of how he drew them.
Lucia: Russ Walsh’s comments are quite general in nature. His conclusions do not address the specifics cited, question by question, in Rachel Rich’s critique.
Mate, I would not disagree with your conclusion and your basis for drawing it.
Bethree, yes, I indicated that Walsh’s analysis is general. If you followed the link, you know that he analyzed one ELA passage and question set at every grade.
In my view Walsh, whose credentials as a literacy expert are well established, supports his claims with a range of data and explains how he generated them.
Without knowing more about Rider’s credentials (i.e., what grades did she teach as an English teacher) I can’t evaluate some of her claims (e.g., “Teaching inference at this age is as unrealistic as trying to potty train every single one-year-old. Sure, a few precocious babies might succeed, but the rest will be driven batty.”).
I appreciate that Rider considers the issues of taking the test on a computer, which Walsh does not address.
(my apologies for getting Rachel Rich’s name wrong above)
War is peace.
Freedom is slavery.
Ignorance is strength.
Tests are education.
So beautifully said. I despair that anyone will be educated in another few years.
You forgot one
“Reform” is reform
SomeDAM Poet: please forgive another quibble from me, but…
Did Microsoft auto-correct kick in without you realizing it?
To wit: shouldn’t your add be—
“Rheephorm” is reform.
¿😳?
Or, as has been suggested before on this blog, perhaps my moniker is sadly appropriate and I just got it all wrong again…
😎
Not related to this post, but I wondered if something happened with your post about the retired Texas teacher running for state board of education who accused President Obama of being a prostitute. I went to click on the story and got a message that the post was not available. Did someone remove it, or was it just some computer glitch? After reading this morning’s NYT article on the PARCC assessment and this post, I am getting a bit paranoid for good reason.
She was defeated in a run-off over the weekend.
I am a Seattle parent of a 2nd grader who feels guilty about not doing enough “TypingAgent” homework during the school year, all so that my 8 year old can type for this ridiculous test this year. The school doesn’t have time to teach this during the school day (not surprising given our underfunding issues). Fortunately, we have an utterly sane, wonderful teacher who hasn’t forced the issue at all. My 2nd grader actually enjoys the “learn to type” computer program, but since we only have an iPad, and two work laptops at home that we don’t want the kids to use, we don’t really have a good set up for this. Plus, I am just anti-homework for kids this age anyway. We did purchase a keyboard that she can plug into the iPad that she can use during summer break. It seems like an ok thing to do for 10 minutes a day during the school break. But it boggles my mind that we expect kids from high poverty schools to do this. It boggles my mind that we expect kids without good fine motor skills to do this. I mean, some 8 year olds are all thumbs! I learned to type in high school. Why do we expect 8 year olds to type?? And not only type, but type a 3 paragraph essay???? Nuts, nuts, nuts!!!!
Kay, you are right!
You are so right. Make your opinion known at the district school.
I so agree with you regarding primary homework. In the early ’90s, when my 3 were in primary– perhaps as a result of nat’l exhortations that public schools were ‘failing’ [tho our district was among the best in the state]– there was an acceleration of age-inappropriate hw assnts for youngest students. To the ridiculous point where my eldest (among others) was required by his 2nd-grade teacher to attend school Saturdays to ‘catch up.’
I was way overinvolved w/my 3 closely-spaced kids’ hw. Evenings were grueling. Took control at the beginning of eldest’s 3rd grade, sat w/ teacher & told her henceforth I would limit work on each nightly assnt to 30 mins, making a note on the paper to that effect; my 8yo was not going to be devoting more than 1.5 hrs daily to hw.
Clearly, the writers of this CCRAP test know nothing about child development and perhaps little about test design. Parents need to understand how unrealistic this test is for eight year old children. The test is much too long, and the logistics are cumbersome. We are trying to format our children for the ease of the computer rather than determining what is a fair and reasonable demand for young children. Too much depends on the access, experience and quality of the technology available, and the poorer children are at an even greater disadvantage. Parents across America must protect their children from inappropriate activities such as “extreme testing.” Just OPT OUT!!!
It is cruel and unrealistic to expect ELLs to sit staring at a computer screen for a test that has no purpose and little meaning for them. Time is precious to ELLs! They are generally years behind in academic skills. Forcing them to waste valuable time is malfeasance of educational practice. We need to bring common sense back to education!
@retired teacher Here in Oregon we have an interesting situation with the Smarter Balanced exam. Juniors must take and pass the math and reading in order to demonstrate proficiency in essential skills and earn an “Oregon Diploma.” For ELA, students must make the subscore cutoff set by the state in both reading and writing in order to pass either. Miss one, you miss both. However, and this happened to one of my ELL students, a student can meet the SBAC proficiency composite cut score and if they miss either one of the subscores, they aren’t counted as having demonstrated essential skills. Curiously, this means that while the Smarter Balanced performance demonstrates that this native Spanish speaker whose family speaks Spanish at home is considered by Smarter Balanced to be career and college ready, the State of Oregon doesn’t consider her to be qualified to earn an “Oregon Diploma.”
“Such a quantum leap in expectations renders all comparisons with other tests useless, meaning it can’t be proven to be a legitimate measure.”
Doesn’t matter if it’s a “quantum leap” or just a single baby step in expectations. The expectations are not what “render all comparisons with other tests useless.” And no, that doesn’t mean “it can’t be proven to be a legitimate measure.” There is no measurement whatsoever of any kind going on in the standardized testing regime. There is no standard of measurement, no agreed upon definition, no measuring device calibrated against said standard. It’s all 100% Pure Grade AA Bovine Excrement to begin with, as in start with CRAPP and you’ll end with CRAPP.
Don’t care about “comparisons with other tests”. They are worse than useless and any results are, as Noel Wilson puts it “vain and illusory”. In other words COMPLETELY INVALID. Comparing invalidities like the results of standardized tests is pure mental masturbation, nothing more.
Scam
Betrayal
Against
Children
Oops! You typed out S_____r B______e! They may hunt your post down.
Reblogged this on Politicians Are Poody Heads and commented:
Inappropriate tests. Inappropriate educationally, developmentally, psychologically.
The purpose is not to tell us anything about what the children should be learning and have learned. The purpose is to punish schools, punish teachers, and sell more tests, more test-prep materials, more software, etc, all in the name of profits for the education-industrial complex.
I thought it was annoying in NYS where the tests require lots of flipping back and forth to answer the questions, but at least the test is in paper form, and a kid can use his or her finger to mark the place and fold the paper over to compare passages or figure out which line says what. To expect this on a computer screen is even more ridiculous.
Oh, and in my day, a D was passing!
These excerpts in the posting jumped out at me:
1), “This boondoggle isn’t age appropriate precisely because zero elementary specialists were allowed to help with its design. Instead, reps from the College Board, ACT and Aspire idiotically ‘backward mapped’ expectations for each age starting with college entrance exams and assuming every child should attend college.”
2), “In 2012 alone, a year of limited pilot testing, the industry pocketed a cool $8.1 billion.”
3), “You have to be living in la-la land to expect fluent keyboarding at the age of eight.”
4), “Rushing to publication in just nine months, test makers clearly ignored the thousands of pleas for corrections, proving once again that they’re not about quality, but about profits. Investment sites squealed with delight over the chance to stuff $2.2 trillion in public education funds into their private pockets.”
Just two comments.
First: Cui bono? Follow the $tudent $ucce$$.
Second: for all the rheephorm nonsense about their failed pedagogical and management articles of faith—dressed up with hyperbolic frippery like “cage busting” and “achievement gap crushing” and “21st century” and “creatively disruptive” and such—they have nothing new to offer. Note that mapping backwards (e.g., intellectually and skill-wise) from 16 year-olds to 8 year-olds, let’s say, is a mindlessly self-aggrandizing variant of the very old and utterly baseless idea that children are simply miniature versions of adults.
Just sayin’…
😎
What they also do not tell the public is that some kids are spending up to 18 hours to finish. The test is not timed, so in the week after testing, hundreds of students were pulled out of their classes to finish. Some of those students spent an entire school day sitting in a room trying to finish. Some didn’t finish even after another five hours or so of testing and needed to come back another day. How in the world is 18 hours of testing necessary for anyone, at any age?
The length of these exams is crazy. Why are the tests so long? Money? Faux rigor? I just don’t get it.
“solicited teachers through our district email to purchase out of their own pockets tutorials for improving student scores”
Sounds like that would skew the scores of those who actively ‘tutored’ and the whole ‘standardized’ concept is completely kaput.
Rachel Rich, thank you so much for speaking up about SBAC tests. How sad that one must be retired to speak truth to power, & how welcome is truth-speaking by retired teachers. You go, Rachel!!
Great work, Rachel. The only flaw I see: you say 3rd graders can’t make inferences. When a third grader sees daddy’s car in the driveway and declares, “Daddy’s home”, she’s made an inference. Our brains can’t help but make inferences. The problem comes when you ask her to make inferences about things she doesn’t know enough about –e.g. a text on an unfamiliar topic that uses too many unfamiliar words. I can’t make inferences about weapons by the sound they make, but soldiers can. Our ability to infer is coextensive with our knowledge base. Thus it makes no sense to “teach” or test inference making as if it’s a muscle that, once strong enough, will be able to make inferences about anything. The way to expand a child’s ability to make inferences is to expand her knowledge base. SBAC and PARCC purport to test skills like inference making, but what they’re really testing (unwittingly) is knowledge base. The people who designed the tests don’t really know what they’re doing.
“The people who designed the tests don’t really know what they’re doing.”
Test writers are constrained by the standards. I agree with your argument here, but we have to also point our fingers at the Common Core standards too. Convoluted and confusing syntax is on the writers; the reading passages and endless demand for children to identify text based evidence and to make inferences and to determine author’s tone/intent – they are all on the CCSS.
I think Ponderosa’s main point is that the CC bs about skills (in this case “inference”) is not really tested, makes no sense to test, and is probably not testable, at least not in a speed test. What they test is plain old accumulated data in kids’ heads, or to make it sound more reformist, knowledge base.
The other problem with writing test items using the CC is that the standards are basically lists of subjective and often vague skill sets that are completely incompatible with the MC format. Every teacher here should pick a CC standard (ELA) and try to write an MC test item using the constraints of the Common Core. The best you could do would be to avoid confusing syntax, but you would still end up with a CRRAP test item. I was forced to write many a science item that I knew was not very good by an equally bad standard.
Let’s take your example and watch what happens in the MC format.
In line 4 of the first paragraph of this story, Jill is “surprised” to see her father’s car in the driveway when she comes home from school. What can Jill conclude?
a) Her father is home early from work
b) Her father is home sick
c) Her father’s car didn’t start
d) Her father’s car is out of gas
Turn a subjective standard into an objective MC item and voila you have a CRRAP item!
I think the basic problem with treating these tests as “objective measures” is that they’re products and there’s an array of testing products out there and these (along with the PARCC test) were sold to the public as “the best”. To make it even more muddled, state actors and the companies that sell the products then joined hands to promote them to parents. They stuck a nonprofit in the middle in an attempt to add some neutrality but “nonprofit” doesn’t mean anything outside a legal definition. “Nonprofit” doesn’t mean “credible” or “without conflicts”- it just means “nonprofit”.
There’s no reliable “neutral” to evaluate anything.
They destroyed their own credibility because there’s no meaningful distance between “Smarter Balanced” and the government actors who mandate the tests.
The 74 is a non-profit. Here’s a piece promoting an edtech product The 74 published as “news”:
https://www.the74million.org/article/zaption-changing-the-way-students-and-teachers-use-interactive-videos-to-learn
Come on. This is blatant product promotion. The editors at The 74 can’t tell the difference between an advertisement and “news”?
The same thing happens at the US Dept of Ed. They link to specific ed tech products and use language that could be ad copy. None of these people are credible. They destroy their own credibility because they respect no boundaries between the commercial space and the public interest space. I don’t know if they know the difference.
Walsh certainly has earned his credentials. However, he can’t see the forest for the trees. The bottom line is what third grade teachers observe as their students take the test. They tell me the kids really need one on one help just to understand most questions, as well as to use the necessary computer techniques, especially typing. More than Walsh, more than anyone, elementary teachers know what kids are capable of.
“More than Walsh, more than anyone, elementary teachers know what kids are capable of.”
Kids are capable of all kinds of things if they are trained for it. They can be trained to tell on each other, wear neckties, kill people, among other things. I think the most important question is what they enjoy doing, what they are curious and excited about.
Rachel, you’re spot on!
My middle school ELA students have completed another round of time-wasting weeks on their Math, Science, and Reading tests. As if middle school kids don’t have enough angst in their lives, during the three years they spend in our building they are subjected to not one, not two, but three consecutive years of high-stakes tests. Small wonder that my 8th graders did poorly after countless hours (could we say years?) of disappointment at failing to pass. What incentive could I provide them with to encourage them to “do their best” when I know that they are being set up to fail? It is pathetic that the public is kept in the dark because those of us who are in the classroom are threatened into silence.
I am a firm believer in standardized testing when used intelligently and appropriately. Of course it is possible to cobble together a bit of crap and apply it foolishly.
This review seems to suggest that the test in question was put together by a bunch of total incompetents and is being used in a totally irresponsible and idiotic manner.
Just the 7-hour length of the test for a 3rd grader shows the level of incompetence on the part of the test constructors, and some of the questions plus the response formats seem to be the product of an evil twisted mind.
Has the entire US political/educational elite gone stark raving mad? Or is there a reason they want to destroy the American educational system?
“I am a firm believer in standardized testing when used intelligently and appropriately.”
Why? What do they accomplish?