These days, no education debate can move forward without hearing what Peter Greene thinks. A teacher in Pennsylvania, he has established himself through his writings as one of the most astute observers of education issues in the nation today.

Peter Greene here expresses his profound frustration with the Thomas B. Fordham Institute’s review of “next generation assessments.”

He begins by noting that none of those associated with the study are neutral participants. TBF has received millions of dollars to promote and advocate for the Common Core. Greene questions whether the researchers are objective, given their past connection to reform projects. [I, on the other hand, do not question the researchers’ independence, but I agree with Peter that they are enmeshed in reform assumptions that should be subjects of debate.]

Greene quotes Polikoff:

“A key hope of these new tests is that they will overcome the weaknesses of the previous generation of state tests. Among these weaknesses were poor alignment with the standards they were designed to represent and low overall levels of cognitive demand (i.e., most items requiring simple recall or procedures, rather than deeper skills such as demonstrating understanding). There was widespread belief that these features of NCLB-era state tests sent teachers conflicting messages about what to teach, undermining the standards and leading to undesired instructional responses.”

Or consider this blurb from the Fordham website:

“Evaluating the Content and Quality of Next Generation Assessments examines previously unreleased items from three multi-state tests (ACT Aspire, PARCC, and Smarter Balanced) and one best-in-class state assessment, Massachusetts’ state exam (MCAS), to answer policymakers’ most pressing questions: Do these tests reflect strong content? Are they rigorous? What are their strengths and areas for improvement? No one has ever gotten under the hood of these tests and published an objective third-party review of their content, quality, and rigor. Until now.”

Peter questions the assumptions on which the study is built:

So, two main questions– are the new tests well-aligned to the Core, and do they serve as a clear “unambiguous” driver of curriculum and instruction?

We start from the very beginning with a host of unexamined assumptions. The notion that Polikoff and Doorey or the Fordham Institute are in any way objective third parties seems absurd, but it’s not possible to objectively consider the questions because that would require us to unobjectively accept the premise that national or higher standards have anything to do with educational achievement, that the Core standards are in any way connected to college and career success, that a standardized test can measure any of the important parts of an education, and that having a Big Standardized Test drive instruction and curriculum is a good idea for any reason at all. These assumptions are at best highly debatable topics and at worst unsupportable baloney, but they are all accepted as givens before this study even begins.

Again, I am willing to grant that Polikoff and Doorey are objective, and that Fordham is not paying respects to its principal outside funder, the Gates Foundation. But note that the researchers and Fordham are enmeshed in the assumption that higher standards and more rigorous tests improve test scores and education. Since I don’t think that is accurate, I question the foundations of the report, not its findings. In my view, tests should not drive instruction, and tests don’t improve educational achievement. Curriculum and instruction should drive tests. Instruction drives education. The quality of one’s living conditions has more to do with test scores than the tests.

But back to Peter Greene:

The study was built around three questions:

Do the assessments place strong emphasis on the most important content for college and career readiness (CCR), as called for by the Common Core State Standards and other CCR standards? (Content)

Do they require all students to demonstrate the range of thinking skills, including higher-order skills, called for by those standards? (Depth)

What are the overall strengths and weaknesses of each assessment relative to the examined criteria for ELA/Literacy and mathematics? (Overall Strengths and Weaknesses)

The first question assumes that Common Core (and its generic replacements) actually includes anything that truly prepares students for college and career. The second question assumes that such standards include calls for higher-order thinking skills. And the third assumes that the examined criteria are legitimate measures of how weak or strong literacy and math instruction might be.

So we’re on shaky ground already. Do things get better?

Well, the methodology involves using the CCSSO “Criteria for Procuring and Evaluating High-Quality Assessments.” So, here’s what we’re doing. We’ve got a new ruler from the emperor, and we want to make sure that it really measures twelve inches, a foot. We need something to check it against, some reference. So the emperor says, “Here, check it against this.” And he hands us a ruler.

So who was selected for this objective study of the tests, and how were they selected?

We began by soliciting reviewer recommendations from each participating testing program and other sources, including content and assessment experts, individuals with experience in prior alignment studies, and several national and state organizations.

That’s right. They asked for reviewer recommendations from the test manufacturers. They picked up the phone and said, “Hey, do you know anybody who would be good to use on a study of whether or not your product is any good?”

I nominate Peter Greene to serve as the next U.S. Secretary of Education. Imagine that: classroom experience and a built-in junk-science detector.