Gerri K. Songer of Township High School District 214 in Illinois conducted a Lexile analysis of the PARCC assessment, and what she found is alarming. The reading levels embedded in the assessment are absurdly high. Many young people will fail the PARCC test because it is developmentally inappropriate for high school students.
What exactly is the point of writing a test at a level that large numbers of students are guaranteed to fail? What will be the consequences for their teachers, who will be rated ineffective based on a test that is not written for high school students? As Songer writes: “Efforts can be made by educators to raise the level of reading comprehension, yet there is not much teachers can do to change the natural development of the human brain.” If she is right in her analysis, then PARCC is not only developmentally inappropriate but is designed to fail large numbers of students who will not be able to graduate, to go to college, or to enter a career.
The current state of education is a multi-faceted issue that tends to provoke accusations of blame rather than the generation of solutions. With the rollout of the Common Core State Standards (CCSS) and the Partnership for Assessment of Readiness for College and Careers (PARCC), I find myself pondering which category education legislation falls into. For years, the ACT has been the standardized assessment measure used in Illinois. There is much criticism regarding its validity, its effect on college entrance opportunities for students, and, more recently, its effect on teacher evaluation. Many students are not able to meet the criteria established by the College Readiness Standards for a variety of reasons. This ‘presumed’ underachievement has made teachers the target of public animosity. I assert that the problem warrants a solution that begins by examining the assessment itself.
According to GAINS Education Group, the average Lexile score (a measure of text complexity) of the text used in the ACT assessment is 1140L, which means students must read at an independent reading level of 1240L in order to comprehend the majority of the text in the assessment. If students cannot comprehend the text, then they cannot possibly respond accurately even if they are capable of demonstrating the skill being assessed. This would be the equivalent of taking a test in a foreign language. There is no mandatory Lexile testing performed in schools across the country today, but based on 23 years of experience working with high school students, I contend that many students, particularly those in less affluent areas, very likely do not read at 1240L.
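As a quick sketch of the conversion applied throughout this analysis (assuming, as the author does, that a reader comprehends a text independently when his or her Lexile measure is roughly 100L above the text's measure):

$$ L_{\text{independent reader}} \approx L_{\text{text}} + 100\text{L}, \qquad \text{e.g., } 1140\text{L} + 100\text{L} = 1240\text{L}. $$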
If this is the case with ACT, then what is the average Lexile used by PARCC? After spending much time trying to find an answer to no avail, I analyzed the text of the ELA/Literacy sample items available on the PARCC website.
What I found was that these samples ranged in Lexile from 730L to 2140L. The sample passages were written at the following Lexiles: 1130L, 1220L, and 1370L. To independently read the most complex of these passages, students will need to read at 1470L by April of their junior year.
The following is a list of some of the sample items analyzed:
A Lexile analyzer is available at www.lexile.com to confirm these findings.
The framework utilizes three sequences of instruction based on high school entrance reading levels. Please note that in Sequence 2 (Average), to score between 28 and 32 points on the College Readiness Standards (CRS) scale, students need to read at an independent reading level of 1275L, yet students following the Common Core sequence would only be reading between 970L and 1120L. This would in no way be sufficient if the average Lexile used on the PARCC assessment far exceeds this score band, requiring an independent reading level of 1470L.
I also have concerns regarding the developmental appropriateness of the PARCC assessment. The frontal lobe of the brain is not fully developed in human beings until after age twenty. The frontal lobe is the part of the brain that is concerned with reasoning, planning, parts of speech and movement (motor cortex), emotions, and problem-solving. I contend that many students are not yet developmentally able to meet the cognitive requirements necessary to perform complex, multi-step tasks at the level of sophistication in text such as that inherent in the sample items produced by PARCC. I am not at all surprised that the first round of PARCC assessment results show a significant drop in student achievement. Efforts can be made by educators to raise the level of reading comprehension, yet there is not much teachers can do to change the natural development of the human brain.
Steve Cordogan, Director of Research and Evaluation at Township High School District 214 in Illinois, feels that “there are good uses for standardized testing that would provide better validity.” For example, the ACT does not provide valid results, since there are not enough questions to validate the scores generated. What can really be inferred from two points of growth? He explained that this could simply mean a student answered a couple more questions correctly. The only portions of the ACT assessment that do produce “somewhat” valid results are the math and English sections. Yet he feels that PARCC may not necessarily be the answer either, since it could be testing at a level that is unrealistic for students.
Career readiness information from MetaMetrics shows the following:
LEXILES AND LIFELONG READING:
Federal Tax Form 1260L
Aetna Health Care Discount Form 1360L
GM Protection Plan 1150L
Medical Insurance Benefit Package 1280L
Application for Student Loan 1270L
CD-DVD Player Instructions 1080L
Installing Child Safety Seat 1170L
Microsoft Windows User Manual 1150L
Drivers’ Manual 1220L
READING IN THE WORKPLACE:
Labor 1000L
Service 1050L
Construction 1080L
Craftsman 1100L
Clerk 1110L
Foreman 1200L
Secretary 1250L
Sales 1270L
Supervisor 1270L
Nurse 1310L
Executive 1320L
Teacher 1340L
Accountant 1400L
Scientist 1450L
LEXILE SCORES NEEDED FOR:
Education (11–12) 1130L
Work 1260L
Community College 1295L
University 1395L
Unless the majority of our students plan to become scientists immediately upon graduation, there is no career-related reason to support a target reading comprehension level of 1470L such as that needed to comprehend the sample passages available on the PARCC website. The sample questions would require an independent reading level as high as 2240L.
Also, note that complex text is used when companies prefer that citizens not receive money to which they may be entitled (Aetna Health Care Discount Form, 1360L), and simpler text is used when companies want information to be accessible to their patrons (CD-DVD Player Instructions, 1080L; Installing Child Safety Seat, 1170L). It may therefore be more socially responsible to teach students how to articulate information effectively and clearly, using a vocabulary that is accessible to the vast majority of the public. Isn’t that what newspapers do?
I question whether intelligence can truly be measured by how well students can weed through detailed and complex information. Wouldn’t students actually demonstrate a greater level of intellect if they could speak, read, and write in an organized manner, using a vocabulary that most people in the country can understand? Could the PARCC assessment actually turn out to be the instrument that resurrects Babylon – a land of confusion?
In addition to my concern for students, I am very troubled by the potential effects this assessment may have on educators. The current teacher evaluation mandated by the state is extremely subjective. I went through the training myself, and I find it highly unlikely that a cross-section of evaluators could produce the same evaluation results.
In 2016, standardized assessment is to be included as part of a teacher’s evaluation. Teacher evaluation, when combined with PARCC assessment results, equals a potentially grim future for educators. Teachers with more than 6-8 years of experience will face a significant financial loss if their employment is terminated: standard practice is that new hires are given credit for only 6-8 years of experience, depending upon the district. Teachers with over 20 years of experience will find not only that their salaries will be devastated (I estimate a loss of over $200,000 by the time they could retire from the district in which I am currently employed), but that their pension (which may already be negatively impacted by current legislation) will be reduced by over one third of what they had planned for, with very little time to make additional provisions. Finally, there are currently no severance packages offered in the public sector, so teachers could find themselves in an extremely bad place within a very short span of time.
If legislators are truly interested in finding solutions for educators, my recommendation is that they examine the problem more closely and respectfully include educators in the decision-making process. Many minds united can solve enormous challenges. Yet what I see brewing in legislation pertaining to public education today is tragically disturbing: top-down authoritarian, or Machiavellian, rule through ill-planned, uninformed legislative bodies that are looking through the magnifying lens of meticulous detail while missing the big picture that is staring directly at them. What made Lincoln one of the most successful leaders in the history of this country is that he made an effort to spend time out on the front line. He talked with those of lowest rank and made sure they had what they needed to be successful. He built his people up rather than tearing them down. He offered them strength rather than leaving them weak.
One real flashpoint here is going to be the secrecy of the tests. It is just not going to fly if they don’t release the actual items, at least from the pilots. Even if they don’t, it is hard to imagine that they’ll be able to keep it all secret while administering it to hundreds of thousands of kids around the country.
It’s all about the money. Months later, I continue to be intrigued by Arne’s mention of white, suburban moms. As I digest his messages, and the messages of the so-called education reformers, I realize the suburbia mention was less of an insult than an actual explanation, perhaps subconscious on Arne’s part, of how our nation is desperate to climb out of its economic turmoil. Fabricating widespread failure, and including the typically academically successful upper-middle-class kids, will tickle the economy through a higher demand for goods and services. Private money is generated to troubleshoot perceived shortcomings. And so it goes.
Lexile scores are an insipid way to characterize texts, period. Comparing Lexile scores is just doubly insipid. This is just a waste of time.
The only point that may be possibly interesting here is to compare the Lexile scores of the PARCC readings with those of the recommended or exemplary texts on the CCSS. A large difference there would be interesting from the standpoint of test design.
You may want to contact CCSS and let them know your point of view. Appendix A of the Common Core State Standards, Section II, which discusses a study of text complexity analyzers, states, “The goal of the study was to provide state of the science information regarding the variety of ways text complexity can be measured quantitatively and to encourage the development of text complexity tools that are valid, transparent, user friendly, and reliable.”
One of the computer programs researched in the study was The Lexile Framework for Reading by MetaMetrics, which is the analyzer I used to evaluate the PARCC sample items. Appendix A also states, “The research that has yielded additional information and validated these text measurement tools was led by Jessica Nelson of Carnegie Mellon University, Charles Perfetti of University of Pittsburgh and David and Meredith Liben of Student Achievement Partners (in association with Susan Pimentel, lead author of the CCSS for ELA).” Appendix A goes on to say, “The major comparability finding of the research was that all of the quantitative metrics were reliably and often highly correlated with grade level and student performance based measures of text difficulty across a variety of text sets and reference measures.”
If CCSS is driving our educational standards, and PARCC is assessing CCSS, then CCSS should definitely hear that they should add a bit more salt.
I would argue that some method of text comparison is better than no method. Also, it is not the specific score that is important in instruction, but rather the score band to which teachers should pay attention. Still, sometimes a visual helps; please follow this link: http://gerriksonger.wordpress.com/2014/03/15/could-sentence-diagramming-be-making-a-comeback/. It will show you what the complexity of PARCC text looks like from the perspective of a grammarian. I understand that the quality of this image is not ideal, but note that each line identifies a word in the sentence.
The sentence diagrammed is the first sentence of the Declaration of Independence, which is 20L less than the text used in PARCC. I hope you find this helpful, and I do appreciate your feedback. I am just a teacher who loves her students and believes they are intelligent human beings with feelings, talents, and dreams. It’s one thing to have standards, but it’s another to have standards that most students will not be able to attain – by design.
The most damaging consequence of PARCC implementation is the effect it will have on our youngest students. At least the students who perform poorly on the current standardized assessment (ACT) will not find out the degree of their “underachievement” until they transition into high school (EXPLORE). (I am not insinuating this is acceptable) With PARCC, these students will learn how “substandard” they are from the onset of their education, three times a year, until the year they finally graduate from their prison. This is tragic.
From the rather small sample of released math questions from PARCC, most typical high school math students will have enormous difficulty. It’s interesting to me that SBAC questions are much more in line with typical student ability. PARCC tests match what I would expect from a student capable of doing well on the AP Calculus test…but I question how realistic this is for the average (and below-average) students that I teach. Aspirational at best, I think.
Some of the language arts SBAC practice questions are poorly written, poorly edited, and lacking in clarity. The “correct” answers to some are debatable.
My fifth grader (I’ve opted her out of the Smarter Balanced field test) came home and said the teacher was so excited when she told the class they would be doing “practice” tests all week. The teacher hyped the class up, telling them it would be so much fun and all they had to do was “click and drag,” not use pesky pencils and paper!
Well, after just a few minutes on the math test the class started complaining and asking why my daughter got to sit in the back of the room and read about colonial times. My daughter said kids were frustrated and some gave up and slammed down their Chromebooks. (I wonder how many will go home and tell their parents to opt out, knowing that my daughter was having fun reading a good book?)
My 9th grader, who always disagrees with his sister, didn’t believe her when she said the test was really hard so he got online and pulled up the practice test for ninth grade. After 5 minutes he tried the fifth grade version, and soon after that he tried the third grade practice test. Ok…he said no one in his geometry class would pass the ninth grade test and most wouldn’t even pass fifth grade (including him). He felt the third grade test should really be the ninth grade test, and he felt most could pass that one.
He said the test was confusing and very difficult to do without paper and a pencil. He was very frustrated with the computer part as well as questions that were long and tedious. He said the English test had boring reading material. Ninth graders won’t take the practice test, but he begged me to opt him out next year.
Interesting perspective, Momoffive. Believe it or not, the PARCC tests are significantly more difficult than the SBAC ones. Will be interesting to see what comes from the field tests that were given last week.
Yes, it would be interesting to see the scores, but it is my understanding (at least in California) that the parents and students will NOT get any scores! They’ve decided to “hide” scores because it is only a practice test.
Momoffive, even in California, the tests will come. If the cut scores (passing marks) remain the same, watch the explosion.
That’s just it, can they adjust cut scores to make it appear the kids did well? Would they do that? With SBAC no one will take the same test since it is adaptive, which could lead to manipulation in grading.
All test scores are completely arbitrary. They depend on the cut score, which is a subjective judgment! The state commissioner can move the cut score up or down to produce the result he or she wants.
A difference this time (in what I see) is that the people behind this don’t seem to mind seeing schools, students, and teachers fail. The whole “students aren’t as brilliant as they thought they were and schools aren’t quite as good as they thought they were” attitude.
It’s not that they “don’t mind seeing students and teachers fail” – it’s their goal!
You can’t sell snake oil to healthy people.
You can’t solve a problem if it doesn’t exist.
You can’t sell educational solutions, like computer-adaptive learning and assessment programs, if students’ test scores are fine and America’s schools are working. So, one key step in the master business plan was to create a need for their solution by creating tests that are impossibly challenging for all but the best, brightest, and most privileged students. Creating them in the name of “rigor” or “grit” in order to trap students – to trick, confuse, and frustrate them into failing.
The proof is in the pudding (i.e., the tests):
Tests that teachers cannot look at or discuss.
Tests that they refuse to publish.
Tests that would not withstand the scrutiny of experts in the field.
A testing regime that defies transparency.
SECRET TESTS.
SHHHHHHHHH
SUPER DUPER SECRET TESTS
Tests that provide teachers with no feedback.
Tests with arbitrarily high cut scores.
Tests that are norm-referenced.
Tests used to threaten teaching careers and frighten teachers into using their educational snake oil (modules).
Tests that every single major reform advocate refuses to have their own children take.
Tests that will be their undoing, I hope.
LOL. You’re more blunt than I am. I want to give the benefit of the doubt (hard as that is) that there are good intentions behind this…
Here is a link to a great discussion of the stupidity of the Lexile algorithm for selecting CCSS-compliant texts. Of course, Bill Gates invested heavily in promoting this stupidity.
Highly recommend the following for background http://www.susanohanian.org/core.php?id=607
Lexiles are admittedly a blunt instrument for measuring text complexity when choosing appropriate literature for an individual or a class. However, if as the writer indicates, the complexity of the sentence structure and the vocabulary in test items is beyond most students that should sound alarms whether the text is of an appropriate interest or developmental level or not.
The PARCC online assessments are TRAPS – NOT TESTS. They are clearly designed to TRICK, CONFUSE, FRUSTRATE, TIRE OUT, and WEAR DOWN students into FAILING. Why? In order to support the false claim that America’s schools are “failure factories.”
Parents, please do not let the bogus test scores produced through Common Core (by PARCC/Pearson) be used to dismantle your neighborhood public schools, do not allow them to be used to threaten and fire teachers, and above all, refuse to let the neo-liberals behind this trash masquerading as “rigor” define, denigrate, or humiliate your children.
Governor Andrew Cuomo has been a very outspoken critic of the soon to be administered PEARSON math and ELA tests. In a campaign ad aired earlier this month, he stated, “These tests are premature, they cause anxiety in children, and they are UNFAIR. I will not allow the test scores to be used against your children. We are supposed to help kids, not harm them.”
Governor Cuomo has effectively told the students of NY (grades 3 to 8) that these tests
DO NOT COUNT. If that’s really the case, one must wonder why in Andy’s name they are wasting their time taking them. Tests that are premature, nerve-racking, unfair, and so invalid that they don’t even count! Why?
I appreciate Gerri Songer’s analysis. It confirms my fears about the absurd level of complexity that the PARCC developers wrongly consider rigor. Here is the link to sample PARCC questions. I have an MA and taught ELA to middle school and high school students at the RI School for the Deaf for over 25 years. I looked at the ELA grade 3-5 (?) sample passages and questions and was sick. As a veteran ELA teacher, I didn’t know what they were looking for most of the time. Huge numbers of students will “fail” this test, as they did in NY. It seems to me that the computer format of the test is actually training future drones to point and click and drag and drop. (And 3rd graders will have the expertise to do this? And at what cost?) Have any actual ELA authorities critiqued the sample tests for all grade levels? This is a travesty and must be resisted.
https://www.parcconline.org/samples/item-task-prototypes
Does anyone know if there will be actual humans grading the short answer and essay questions on SBAC? I am dying to know the answer to this as I work to prepare my students to take the SBAC field test. (No, I don’t tell them it will be fun. I try not to lie to young people.)
What does anyone know about ACT’s Aspire, the test that Alabama will begin using next month as its Common Core standardized test?
Here is their website: https://www.act.org/products/k-12-act-aspire/
If you check back here, I will see if I can find some of their sample items to analyze. I would be more than happy to get you that information if I can.
A map of Nelson County is laid out in the standard (x,y) coordinate plane below, where the center of the county is at (0,0). A cell phone tower is at (5,4), and Esteban’s house is at (10,–2). Each coordinate unit represents 1 mile. The tower’s signal range is 10 miles in all directions.
15. The strength of the tower’s signal to Esteban’s house depends on the straight-line distance between his house and the tower. What is the straight-line distance, in miles, between Esteban’s house and the tower?
HERE’S A 9TH GRADE MATH PROBLEM FROM THE ASPIRE WEBSITE.
THIS TYPIFIES THE CONVOLUTED APPROACH TO MATHEMATICS SEEN DURING THE CCSS ERA. THE DUNCAN REVOLUTION IF YOU WILL.
TEST WRITING GONE WILD.
TEST WRITING DONE BACKWARDS.
THEY START WITH THE MATH CONCEPT (COORDINATE GEOMETRY) AND THEN PRETEND TO SEARCH FOR A REAL WORLD APPLICATION TO MATCH.
WHAT THEY FIND IS AN OUT OF CONTEXT EXAMPLE THAT NO ONE IN THE REAL WORLD WOULD EVER USE.
FOR LOCATIONS ON A MAP WE ACTUALLY HAVE AN EXISTING COORDINATE SYSTEM: LATITUDE AND LONGITUDE.
FOR FINDING THE STRAIGHT LINE DISTANCE IN MILES BETWEEN TWO LOCATIONS WE HAVE MAP SCALES AND RULERS.
NOW THE ICING ON THE CCSS CAKE OF DISTORTION AND DECEPTION:
REMINDER, THE TEST ITEM READS:
What is the straight-line distance, in miles, between Esteban’s house and the tower?
THE CORRECT ANSWER IS . . .
E) THE SQUARE ROOT OF 61 MILES.
NO JOKE.
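For readers checking the arithmetic, the keyed answer does follow from the distance formula applied to the tower at (5, 4) and the house at (10, –2), with each coordinate unit equal to 1 mile:

$$ d = \sqrt{(10-5)^2 + (-2-4)^2} = \sqrt{25 + 36} = \sqrt{61} \approx 7.8 \text{ miles}. $$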
The following is an analysis of the ACT Aspire Exemplar Test Items for Reading (http://www.discoveractaspire.org/uploads/2/4/0/7/24070377/act_aspire_rea…):
1 Early High School-Social Science-Biscotti di Prato 1150L (Ind. 1250L) – Grade 12+ Text Complexity
1 Social Science-Biscotti di Prato (#8) 1070L (Ind. 1170L) – Grade 10 Text Complexity
2 Grade-8-Social Science-A Capital Capitol 1060L (Ind. 1160L) – Grade 9 Text Complexity
2 Social Science-A Capital Capitol (#7) 1040L (Ind. 1140L) – Grade 9 Text Complexity
3 Grade-6-Literary Narrative-White Fang 1000L (Ind. 1100L) – Grade 8 Text Complexity
3 Literary Narrative-White Fang (#8) 1120L (Ind. 1220L) – Grade 12+ Text Complexity
4 Grade-4-Reading-Citizen Scientists 1130L (Ind. 1230L) – Grade 12+ Text Complexity
As you can see, based on the highest Lexile listed in each range of the Lexile-to-Grade Correspondence chart (https://www.lexile.com/about-lexile/grade-equivalent/grade-equivalent-ch…), the complexity of the passages is grossly inappropriate for students of all grade levels represented in the Exemplars.
The Early High School reading passage requires an independent reading level of 1250L. This text would be appropriate for the score band of students above Grade 12, post-secondary education (Grades 11 and 12 – 940-1210L). The text in this passage should be in the range of 855-1165L.
The Grade 8 reading passage requires an independent reading level of 1160L. This text would be appropriate for the score band of students in Grade 9 (Grade 9 – 855-1165L). The text in this passage should be in the range of 805-1100L.
The Grade 6 reading passage requires an independent reading level of 1100L. This text would be appropriate for the score band of students in Grade 8 (Grade 8 – 805-1100L). The text in this passage should be in the range of 665-1000L.
And my personal favorite:
The Grade 4 reading passage requires an independent reading level of 1230L. This text would be appropriate for the score band of students in Grade 12+ (Grades 11 and 12 – 940-1210L). The text in this passage should be in the range of 445-810L.
I argue that students, in actuality, probably perform closer to the lower end of the score band than at the higher.
This analysis is based on findings using the analyzer produced by The Lexile Framework for Reading by MetaMetrics (https://www.lexile.com), one that is positively represented in a research study discussed in Appendix A of CCSS: “The major comparability finding of the research was that all of the quantitative metrics were reliably and often highly correlated with grade level and student performance based measures of text difficulty across a variety of text sets and reference measures.” (https://d1jt5u2s0h3gkt.cloudfront.net/m/cms_page_media/135/E0813_Appendi…)
The link below leads to a document that offers an explanation of how to use the text analyzer provided by The Lexile Framework for Reading.
Click to access _Using%20a%20Lexile%20Analyzer-2.pdf
The Green Reading Framework offers three sequences of score bands that can be used for instructional purposes. It also shows a comparison between score bands used in CRS and CCSS.
Click to access The%20Green%20Reading%20Framework-update%208.2.13_1.pdf
Liam is making chocolate chip cookies. The recipe calls for 1 cup of sugar for every 3 cups of flour.
Liam has only 2 cups of flour.
• How much sugar should Liam use?
Explain why your answer is correct.
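For reference, one way to work the proportion the item expects (keeping the 1-to-3 sugar-to-flour ratio with only 2 cups of flour):

$$ \frac{1 \text{ cup sugar}}{3 \text{ cups flour}} = \frac{x}{2 \text{ cups flour}} \quad\Rightarrow\quad x = \tfrac{2}{3} \text{ cup of sugar}. $$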
A total of 8 students decorated the front surface of 2 different bulletin boards, 1 in the computer lab and 1 in the library. The computer lab bulletin board has 4 sides and 4 right angles and is 10 feet long and 9 feet tall. The library bulletin board is divided into 6 equal parts, as shown below, and is shaded to show the fraction of the front surface the students finished decorating on Tuesday.
6. What is the area, in square feet, of the front surface of the computer lab bulletin board?
A. 19
B. 38
C. 76
*D. 90
E. 94
7. Each student decorated one or the other of the bulletin boards. More students decorated the computer bulletin board than the library bulletin board. Which of the following numbers could be the fraction of students who decorated the computer lab bulletin board?
A. 1/3
B. 1/5
C. 4/8
D. 4/5
*E. 5/8
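For reference, the arithmetic behind the two keyed answers (marked with *):

$$ \text{Item 6: } 10 \text{ ft} \times 9 \text{ ft} = 90 \text{ square feet (choice D)}. $$
$$ \text{Item 7: more than } \tfrac{4}{8} \text{ of the 8 students, so of the listed choices only } \tfrac{5}{8} \text{ works (choice E)}. $$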
9. The principal of a school must buy 19 desks for a new classroom. Each desk costs $61. A student calculates the total cost of the desks using the thought process below:
20 desks at $60 each would cost $1,200.
So 19 desks at $60 each would cost $1,200 – $60.
So the total cost is $1,200 – $60 + $1.
• Identify any mistakes in the student’s thought process.
• Write an expression that represents the total cost of the 19 desks, and explain why it is correct.
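For readers untangling the item, the student’s shortcut drops the extra dollar on each of the 19 desks; one possible correct working:

$$ 19 \times \$61 = (20 \times \$60) - \$60 + (19 \times \$1) = \$1{,}200 - \$60 + \$19 = \$1{,}159, $$

so the mistake is adding $1 instead of $19 at the final step.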
SAMPLE FROM ACT ASPIRE MATH GRADES 3, 4, 5
Looks like the little tikes (8, 9 and 10 year olds) of Alabama are in for a real treat.
Oops. Last item is for grades 6 to 8.
That 5-choice MC format should really test their test-taking grit.
What a nightmare. I teach in a Catholic school, and although, for whatever reason, my Archdiocese adopted CCSS for math and language arts, we are still giving the Iowa Test of Basic Skills to our 2nd-8th graders. In fact, testing started this past Wednesday, and teachers have through next Friday to finish. They are allowed to spread testing sessions out as much as they wish during this window. At least this year, that test has not changed any. I teach 1st grade, so I do not have to do any testing, but I feel for the state’s public school students and teachers, especially as our state legislature seems to have gone insane and NCLB waiver requirements for performance reviews to include test results start to kick in. I certainly hope that my school system does not adopt a different test and that the ITBS remains as it is, but I also hope that, as Alabama’s parents and students truly begin to experience the effects of Common Core and its related testing, more people in this state will join the fight to bring this type of insanity to a stop.
Readability is a big problem with the majority of standardized tests at all grade levels. Whenever I am asked to review a test, the first thing I check is readability. I have seen very few tests that were written at the appropriate grade level. Math tests are the worst. I have seen math tests for grade 1 that were written at a 10th-grade reading level. Now I’m trying to remember if any of the publishers I’ve pointed this out to have called me again. I don’t think they have.
Reblogged this on onewomansjournal and commented:
Of course this is totally ridiculous. Songer is right.
Reblogged this on Front Line Teachers and commented:
Very interesting lexile comparisons here.