This is a fascinating article.

Mimi Swartz of Texas Monthly asks an important question: Are Texas kids failing, or are the tests rigged against them?

Researchers with no axe to grind say the state tests are two grade levels above where the kids are. The state doesn’t agree.

State Commissioner of Education Mike Morath is not an educator, though he was a school board member in Dallas. He was appointed by right-wing Governor Greg Abbott, a leader in the effort to defund and privatize public schools. Of course, he believes that public schools are horrible and charters are wonderful. He will believe anything that puts public schools in a bad light, regardless of evidence or research. The legislature slashed the state education budget by over $5 billion in 2011 and has never restored funding to where it was before the 2008 recession.

Swartz begins:


“Over the last few years, something strange has been happening in Texas classrooms. Accomplished teachers who knew their kids were reading on grade level by virtually all other measures were seeing those same kids fail the STAAR, the infamous State of Texas Assessments of Academic Readiness test.

“The effect on students was predictable: kids who were diligently doing their homework and making good grades in class were suddenly told they were failing in the eyes of the state, which wasn’t so great for their motivation. Parents were desperate to find out why their once high-performing kids were suddenly seen as stumbling. Teachers felt like failures, too, but had no idea what they were doing wrong, after years of striving to adopt practices proven in successful schools across the country. What’s more, the test results were quickly weaponized by critics of Texas public schools, many of whom advocate state-funded vouchers that would allow parents to send their kids to religious and other private schools.

“The stakes of such exams are perilously high. The STAAR test, developed by the Educational Testing Service in Princeton, N.J., had replaced one provided by the British firm Pearson, which Texas officials considered too easy. The STAAR test is used to evaluate students, teachers, individual schools and principals, school districts, and, by extension, the entire enterprise of public education in Texas. Fifth and eighth graders who fail the test can be forced to repeat a grade; high school students may not graduate if they don’t pass three of the five STAAR year-end exams.

“On its face, this approach makes sense. This is, after all, the Age of Accountability, and, according to Governor Greg Abbott and other prominent state leaders, only 40 percent of Texas third graders are reading at grade level. The STAAR numbers are cited as positive proof of that. Texas has to get its kids and its public schools up to the highest standards if we want to have the educated workers and informed citizens we need. There isn’t a minute to lose.

“This reasoning may explain why a report issued in 2012 by two associate professors at Texas A&M was overlooked. Called “STAAR Reading Passages: The Readability is Too High,” by Susan Szabo and Becky Sinclair, the report suggested that questions on the STAAR test were too hard to accurately measure whether students were reading at their grade level.

“The researchers’ examination of five different “readability tests”—commonly used academic measures that rate the appropriateness of written passages for various grade levels—showed, for instance, that in order to comprehend various passages, a third grader would have to read on a fifth-grade level. A fifth grader would have to read on a seventh-grade level, and so on. Generally, the testing showed a gap of about two years. Szabo and Sinclair’s paper made no waves. The STAAR test was new, and if there was a warning included in the research, no one in power thought to consider it. An organization called Texans Advocating for Meaningful Student Assessment lodged protests, but they were rebuffed.

“Years passed. The STAAR reading test reported more failures and stirred more concerns. Teachers and administrators continued to see that the STAAR scores didn’t “align” with other indicators of reading levels. Specifically, the numbers didn’t match those of the Lexile scale, which is regarded nationally as the standard gauge of any publication’s degree of difficulty. (Libraries use the Lexile scale to direct kids to age-appropriate books.)

“In 2016, another study was released, this time by Michael Lopez and Jodi Pilgrim, two professors at the University of Mary Hardin-Baylor, in Belton, Texas. They, too, found that readability formulas showed that the STAAR test contained too many difficult passages for the targeted age groups—“materials may be problematic for teaching and learning”—which confirmed what many teachers were seeing in their classrooms. That same year, a group of fifty Texas school superintendents lodged their protests with the Texas Education Agency (TEA), which administers the STAAR test.

“It’s easy, especially in Texas, to explain away some of the complaints as just so much whining. According to recent Education Week studies, our state ranks 40th in education quality. The blame for our sad showing has been placed on allegedly unqualified and unaccountable teachers, uninvolved parents, and corrupt administrators and school boards.

“But what if that showing isn’t as sad as we’ve been told? What if the STAAR test isn’t measuring what it says it’s measuring: i.e., that a third grader is reading at a third-grade level, rather than a fifth-grade level?….

“Morath did not respond to our request for an interview, but we were able to speak with Jeff Cottrill, TEA’s deputy commissioner of standards and engagement. He explained that TEA’s research on the STAAR reading test included early reviews by Texas teachers and students. “The test is rooted in Texas standards and reviewed by Texas teachers and field tested by Texas students,” Cottrill said. “I have to tell you the process by which TEA determines what goes in this test is solid.” Critics dismiss that method as nothing more than “a gut check,” as none of the test passages were run through standard readability measurements such as the Lexile. Cottrill confirmed that the test was not sent through a Lexile analysis. “TEA relies much more on people to assess the quality of the test than computer based algorithms… Some Dr. Seuss books are actually written at a higher Lexile than The Grapes of Wrath,” he said.

“The Lexile scale was not the only readability test by which researchers outside the TEA have evaluated the STAAR reading test. Dee Carney, the Austin testing expert, pointed out that the A&M research used five readability studies and the Mary Hardin-Baylor research used six. Chambers [the superintendent of the Alief district and president of the Texas School Alliance, representing the state’s largest districts] says new research conducted at A&M is to be released in the next few months and shows even more misalignment, or failing kids, today than in 2012. “If the decision was made to test kids in reading passages that are above their grade level, everyone needs to know that,” Chambers said. “If a third grade reading test is meant to determine if a student is reading at the third grade level, then the test questions should be based solely on what was taught in [and before] third grade, not what might be taught in the fourth, fifth, sixth, or seventh grade.”

“The consequences, Chambers said, can be severe. “To me, here is the bottom line: if Texas expects every third grader to read like a fifth grader or every fourth grader to read like a sixth grader, then we all need to be prepared to see lower performance. Based on all the expert information that has been provided, these unrealistic standards have the potential to destroy learning.”


Question from me: can anyone name a book by Dr. Seuss that has a higher Lexile level than “Grapes of Wrath”?
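An aside for readers wondering what a "readability formula" actually computes: the Lexile scale itself is proprietary, but the long-established, public Flesch-Kincaid grade-level formula illustrates the basic idea that all of these measures share — estimating a grade level from average sentence length and average syllables per word. Here is a minimal sketch in Python; the syllable counter is a crude heuristic of my own for illustration, not any official implementation, and real readability tools use more careful dictionaries.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of vowels, subtract a silent trailing 'e'."""
    word = word.lower()
    vowel_groups = re.findall(r"[aeiouy]+", word)
    count = len(vowel_groups)
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short words and short sentences score at a low grade level;
# long, polysyllabic sentences score much higher.
simple = "The cat sat on the mat. The dog ran to the park."
dense = ("Accountability systems frequently generate unintended consequences "
         "for educators navigating contradictory institutional incentives.")
print(flesch_kincaid_grade(simple) < flesch_kincaid_grade(dense))
```

The point of the researchers' complaint is exactly this kind of check: run the test passages through several such formulas and see whether the estimated grade level matches the grade being tested. That is the analysis TEA confirmed it did not perform.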

Proposal: How about if Governor Abbott, Lt. Governor Dan Patrick, and Commissioner Morath agree to take the tests in English and math and publish their scores? Twelfth grade? Eighth grade? How about it, guys?