I’m hoping to get a link to the full study by Adam Maltese of Indiana University and Craig Hochbein of the University of Louisville, but in the meantime, here is the abstract. It provides interesting additional details:
Abstract
For more than half a century concerns about the ability of American students to compete in a global workplace focused policymakers’ attention on improving school performance generally, and student achievement in science, technology, engineering, and mathematics (STEM) specifically. In its most recent form—No Child Left Behind—there is evidence this focus led to a repurposing of instructional time to dedicate more attention to tested subjects. While this meant a narrowing of the curriculum to focus on English and mathematics at the elementary level, the effects on high school curricula have been less clear and generally absent from the research literature. In this study, we sought to explore the relationship between school improvement efforts and student achievement in science and thus explore the intersection of school reform and STEM policies. We used school-level data on state standardized test scores in English and math to identify schools as either improving or declining over three consecutive years. We then compared the science achievement of students from these schools as measured by the ACT Science exams. Our findings from three consecutive cohorts, including thousands of high school students who attended 12th grade in 2008, 2009, and 2010 indicate that students attending improving schools identified by state administered standardized tests generally performed no better on a widely administered college entrance exam with tests in science, math and English. In 2010, students from schools identified as improving in English scored nearly one-half of a point lower than their peers from declining schools on both the ACT Science and Math exams. We discuss various interpretations and implications of these results and suggest areas for future research. © 2012 Wiley Periodicals, Inc. J Res Sci Teach 49: 804–830, 2012
It will be important to see the whole report, because if one is comparing ACT scores, a test that only a portion of the student population usually takes, with scores from the entire student population, then there is a problem. However, if they have been able to separate out the scores and compare ACT test takers to each other, then it would be more like comparing apples to apples.
Be that as it may, by starting with invalid measuring devices such as the ACT and state-level standardized tests, any conclusions drawn from such comparisons are, as Wilson states, "vain and illusory": fantastical, chimerical, invalid, a waste of time and effort; in effect, mental masturbation. (Yes, that's a strong term, meant to cause consternation. People don't like it due to its sexual reference, a taboo in talking about public education; everyone knows you don't talk like that. But strong language is needed to shock people out of their complacent, non-rational, illogical ways of thinking about the false measures that are grades, standardized testing, and standards.)
Goddang it, there goes another run-on sentence (it's meant to wrap around itself, where the end is the beginning and the beginning is the end, which are both the middle). Run-ons, by the way, are perfectly acceptable in many other languages, but not in American English writing as currently taught. Paragraph-long, even page-long sentences are ubiquitous in Spanish. As a matter of fact, I've demonstrated this to my Spanish students (you know, to "compare and contrast the target language with the learners' native language") by randomly picking a page out of a novel or other reading and finding very long sentences; it works every time.