Archives for category: Testing

A few days ago, a prominent education researcher tweeted that only rightwing nuts oppose the Common Core. Brookings scholar Tom Loveless tweeted back that this was not true, that there are liberals, progressives, and classroom teachers who do not like Common Core.

The Twitter exchange prompted me to offer a list of books about Common Core that I consider essential reading for those who want to learn more about the criticism of Common Core. Those who take the time to read these books will understand the opposition to Common Core and stop stereotyping its critics (as Arne Duncan did) as people who wear tin-foil hats, which seems to be the ultimate insult these days.

Mercedes Schneider, Common Core Dilemma: Who Owns Our Schools? Schneider is a teacher and researcher. Her book is a thoroughly researched and comprehensive history of the development of Common Core.

Nicholas Tampio, Common Core: National Education Standards and the Threat to Democracy. Tampio, a political scientist, argues persuasively that the creation of national standards by a small group of unaccountable people is fundamentally undemocratic and that national standards themselves are guaranteed to stamp out creativity, authentic teaching, and diversity of thought.

Terry Marselle, Perfectly Incorrect: Why the Common Core is Psychologically and Cognitively Unsound. Written by a teacher, this book compares the Common Core standards to recognized research about teaching and learning and finds the standards to be “unsound.”

Kris Nielsen, Children of the Core. This book, written by a teacher, explains how the standardization and mandates of the standards are demoralizing teachers and harming students.

There are many other books that explain why teachers and parents, regardless of their political views, oppose the Common Core.

If you have read others and want to recommend them, leave a comment.

If you want to inform yourself, please read these books.

The test results are in from last March-April in New York. 85% of all 718 school districts in the state did not meet the federally mandated 95% participation rate in the state tests.

18% of eligible students statewide did not take the tests at all. That’s more than 210,000 students who said no.
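The arithmetic is easy to check. Here is a minimal sketch, using the approximate totals cited later in this post (roughly 950,000 students took the grades 3–8 exams and more than 210,000 refused them):

```python
# Quick check of the opt-out arithmetic, using the approximate
# figures cited in this post.
took = 950_000      # students in grades 3-8 who took the exams
refused = 210_000   # students who boycotted the tests
eligible = took + refused  # total students eligible to test

rate = refused / eligible
print(f"opt-out rate: {rate:.1%}")  # roughly 18%, the statewide average
```

The same calculation applied to the Island alone (more than 90,000 refusals) shows why Long Island is opt-out central: it accounts for over 40 percent of the state's refusals.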

Newsday, the main newspaper on Long Island, reports:

Long Island is opt-out central for New York, laying claim to 19 of the 20 school systems statewide with the highest numbers of students boycotting standardized tests, a Newsday analysis shows.

Upstate, the movement has gained a foothold, too, but still isn’t as popular as it is in Nassau and Suffolk counties, the review found.

The biggest boycotts draw students mostly from middle class communities in Suffolk. Comsewogue and Rocky Point, for example, had opt-out rates higher than 80 percent. Commack, Eastport-South Manor and Middle Country had rates of more than 65 percent.

Of 100 districts statewide with the highest numbers of test refusals, 70 are on the Island. All have opt-out rates of 45 percent or higher, according to the analysis. Statewide, opt-out rates averaged 18 percent. The average for the Nassau-Suffolk region stood at about 50 percent.

Newsday reviewed the test results in English Language Arts and mathematics, released in late September by the state Education Department. More than 950,000 students in grades three through eight took the exams, while more than 210,000 opted out. Of those who boycotted the tests, more than 90,000 live on the Island.

The opt-out movement, now in its sixth year, appears most successful in middle class communities, which political experts attribute largely to close contacts there between parents and teachers. Many teachers live in the communities; they have children in school, and they carry weight with parents when they express doubt about the benefit of state exams. And educators belong to strong unions, which have pushed hard to keep student scores from being tied to mandatory teacher evaluations, the experts said.

The state offered threats and bribes, but to no avail.

Opt out is alive and well on Long Island and parts of upstate New York, driven by parents, not teachers.

Every year the eighth grade ages out. Every year, a new group of third graders is eligible. The fact that the movement has persisted and drawn roughly one-fifth of eligible students is a testament to parent power.

Why do parents opt out? They understand that the tests are not diagnostic and serve no purpose other than to compare their children to other children, a function of no value to the children.

Hats off to NYSAPE, New York State Allies for Public Education, which has led the opt out movement.

Rick Hess and Michael McShane of the American Enterprise Institute bring a fresh perspective from their perch on the right. Writing in the conservative journal Education Next, they speculate on the reasons for the disappointing results of No Child Left Behind and Race to the Top, the twin policies of Bush and Obama.

Policy makers in Washington loved the ideas of testing, accountability, choice, and national standards. Yet, we now know that these policies were controversial and ultimately ineffective. NAEP scores flatlined, and there is little or no evidence that these policies succeeded.

They write:

“Within a few years, though, those Obama administration efforts—especially its support for teacher evaluation and the Common Core state standards—would themselves turn controversial, breeding backlash that rivaled the dissatisfaction with NCLB. Obama’s reforms would get mired in bitter debates about their emphasis on test scores and whether they constituted federal overreach.

“The results of all this activity were decidedly mixed. There’s some evidence that NCLB’s accountability push led to modest test score gains, at least early on (though one can reasonably ask how much of those gains was evidence of schools “getting better” and how much might have been due to teachers shifting time and energy from other subjects to reading and math instruction). Over the past decade, however, the National Assessment of Educational Progress has shown an unprecedented flat-lining of achievement growth. Research suggests that ambitious efforts to remake teacher evaluation did not lead to meaningful changes in how candidly teachers are actually evaluated, and that the $7 billion in the federal School Improvement Grant program did not, on average, improve achievement in participating schools. The Common Core and many of these other efforts may yield benefits down the road, but the results have certainly not been revolutionary and are widely perceived to be disappointing.

“This brief recap prompts a simple query: What happened? Why did each of these initially promising, seemingly popular efforts at federal leadership ultimately lose its luster? Were the high-profile initiatives of the Bush-Obama years a much-needed kick-start that forced America to get serious about school improvement, or a recipe for slipshod policymaking and rushed implementation that ultimately undermined reform? Did these reforms reflect a gutsy commitment to putting students first or political gamesmanship that yielded a counterproductive series of distracting mandates?”

There is no reason to believe that the latest version of these policies—the Every Student Succeeds Act—will fare any differently.

Steven Singer writes here about the mechanistic, anti-child implications and consequences of data-driven instruction. He identifies six issues. I offer only the first of these problems. To learn about the other five, open the link.

He writes:

No teacher should ever be data-driven. Every teacher should be student-driven.

You should base your instruction around what’s best for your students – what motivates them, inspires them, gets them ready and interested in learning.

To be sure, you should be data-informed – you should know what their test scores are and that should factor into your lessons in one way or another – but test scores should not be the driving force behind your instruction, especially since standardized test scores are incredibly poor indicators of student knowledge.

No one really believes that the Be All and End All of student knowledge is children’s ability to choose the “correct” answer on a multiple-choice test. No one sits back in awe at Albert Einstein’s test scores – it’s what he was able to do with the knowledge he had. Indeed, his understanding of the universe could not be adequately captured in a simple choice between four possible answers.

As I see it, there are at least six major problems with this dependence on student data at the heart of the data-driven movement.

So without further ado, here is a sextet of major flaws in the theory of data-driven instruction:

The Data is Unscientific

When we talk about student data, we’re talking about statistics. We’re talking about a quantity computed from a sample or a random variable.

As such, it needs to be a measure of something specific, something clearly defined and agreed upon.

For instance, you could measure the brightness of a star or its position in space.

However, when dealing with student knowledge, we leave the hard sciences and enter the realm of psychology. The focus of study is not and cannot be as clearly defined. What, after all, are we measuring when we give a standardized test? What are the units we’re using to measure it?

We find ourselves in the same sticky situation as those trying to measure intelligence. What is this thing we’re trying to quantify and how exactly do we go about quantifying it?

The result is intensely subjective. Sure we throw numbers up there to represent our assumptions, but – make no mistake – these are not the same numbers that measure distances on the globe or the density of an atomic nucleus.

These are approximations made up by human beings to justify deeply subjective assumptions about human nature.

It looks like statistics. It looks like math. But it is neither of these things.

We just get tricked by the numbers. We see them and mistake what we’re seeing for the hard sciences. We fall victim to the cult of numerology. That’s what data-driven instruction really is – the deepest type of mysticism passed off as science.

The idea that high-stakes test scores are the best way to assess learning and that instruction should center around them is essentially a faith-based initiative.

Before we can go any further, we must understand that.

Leonie Haimson demonstrates the disconnect between the boasting of New York City and State officials about test scores and the flat NAEP results for both the city and the state.

To make matters worse, the state says that it is impossible to compare the scores between 2017 and 2018, because the test timing changed. But then the state and the city proceeded to boast about the “gains” between those years.

She adds:

“Here are some additional questions that I would have asked the Commissioner and/or the Mayor if I’d had the chance:

“How can NYSED or DOE or the mayor claim progress has been made if, as clearly stated, this year’s scores aren’t comparable to previous years as a result of the change in the tests?

“Why did they so radically change the scoring range, from a maximum of about 428 to about 651 this year?

“Why does the state no longer report scale scores in its summaries, but only proficiency levels, which are notoriously easy to manipulate?

“Where are the NYSED technical reports for 2016, 2017, and 2018 that could back up the reliability of the scoring and the scaling?

“Why was the public release of the scores delayed, though schools have had student-level scores for a month?

“How were the state vs the city comparisons affected by the fact that opt out rates in the rest of the state averaged more than 18% while they were only about 4% here?

“Finally, how can either the state or the city claim that these tests are reliable or valid, when neither the scoring nor the trends have been matched on the NAEPs, in which NYC scores have NEVER equaled the state in any category and results for the state & city have fallen in 4th grade math and reading since 2013?

“Though the Mayor apparently tempered his tone at this afternoon’s press conference, according to Twitter he apparently claimed that he expects next year’s scores to show significant gains because those 3rd graders will have had the benefit of Universal preK.

“Sorry to say I won’t trust the state test results next year either. We will have to take those scores with several handfuls of salt too — and wait for the 2019 NAEP scores to judge their reliability.”

Jim Miller, professor at San Diego City College, has posed exactly the right question: Who will save us from “our billionaire saviors?” The question was inspired by Andrea Gabor’s excellent new book After the Education Wars, and by the possibility that billionaire Michael Bloomberg will run for the Democratic nomination for president in 2020.

In New York City, we remember him as a data-driven, test-loving, top-down reformer, who hired non-educator Joel Klein to terrorize teachers and principals and introduce choice and charters. The result was a public relations success and an education failure. Much boasting, vast disruption, constant reorganization. Change for the sake of change. Bloomberg is one of the billionaires identified in the NPE report about the super-rich who fund anti-public-education candidates in state and local elections.

Miller writes:

After failing to prop up Antonio Villaraigosa’s flagging gubernatorial campaign last June, Michael Bloomberg apparently spent the summer pondering whether it would be wiser for him to personally save the United States rather than waste his time trying to rescue California by proxy. Last week the New York Times reported that Bloomberg was mulling a run for the Presidency as a Democrat because that represented the most viable path to victory. As the Times story observed, while Bloomberg has engaged in some good work on guns and the environment, many of his other positions might not be very likely to win over the liberal base of the Democratic Party…

As Andrea Gabor, (ironically) the Bloomberg chair of business journalism at Baruch College/CUNY, writes in her excellent new book After the Education Wars: How Smart Schools Upend the Business of Reform, Bloomberg’s reign in New York hardly represented a golden era for education: “to be an educator in Bloomberg’s New York was a little like being a Trotskyite in Bolshevik Russia—never fully trusted and ultimately sidelined…”

The business reformers came to the education table with their truths: a belief in market competition and quantitative measures. They came with their prejudices—favoring ideas and expertise forged in corporate boardrooms over knowledge and experience gleaned in the messy trenches of inner-city classrooms. They came with distrust of an education culture that values social justice over more practical considerations like wealth and position. They came with the arrogance that elevated polished, but often mediocre (or worse), technocrats over scruffy but knowledgeable educators. And most of all, they came with their suspicion—even their hatred—of organized labor and their contempt for ordinary public school teachers.

What this has resulted in, according to Gabor, is that the corporate reformers “adopted all the wrong lessons from American business.” Rather than innovating by harnessing “the energy and the knowledge of ordinary employees,” who are the most “knowledgeable about problems—and solutions” because they know the process, the billionaire boys club has favored a punitive, hierarchical, undemocratic, one-size fits all approach that has hurt students more than it has helped them.

Wedded to a factory-style approach to education, corporate reformers “focused on a Taylorite effort to standardize teaching so that teachers can be easily substituted like widgets on an assembly line. This despite the fact that, on average, ‘unions have a positive effect on student achievement’ and the best charter schools are often the independent charters that give teachers voice, often via union contracts.” All of this reflects the fact, Gabor reminds us, that “the corporate education-reform movement has deeply undemocratic roots.”

What this movement has brought us is not pretty. We have systematically devalued the “art” of teaching in favor of a dumbed-down, accountability regimen that prefers standardization and over-testing to empowering educators and students to think more creatively and independently. It has assailed teachers and attacked educational culture to such a degree that it should be no surprise that our society has become increasingly anti-intellectual and hostile to fact-based analysis. As Gabor observes of the Trump era:

[T]he election of this larger-than-life Chucky demagogue, with his multiple bankruptcies and divorces, his sexual predations and business malfeasance, his hate-filled speeches and tweets, also represented a failure of corporate-style education reform as it has taken shape over more than twenty years. Among an electorate that often favors “ordinary” people they can identify with, Trump, the consummate philistine—unread and uninterested, crude, unthinking, and disdainful of facts and any attempt at rational truth—holds up a dystopian mirror of the electorate…

It may not have been the intended outcome of those who simply wished to produce a more useful workforce, but it does show the profound limits of their debased instrumentalism. Hence Gabor again observes: “Corporate education reformers cannot be directly blamed for the ascendance of Trump. However, over two decades of an ed-reform apparatus that has emphasized the production of math and ELA test scores over civics and learning for learning’s sake has helped produce an electorate that is ignorant of constitutional democracy and thus more vulnerable to demagoguery.”

Gabor’s thorough study does more than just criticize the failures of corporate education reform. She outlines how multiple examples of innovative educational practices across the country have defied the technocratic dictates of the well-heeled and focused instead on “bottom-up” strategies that have relied heavily on “a participative, collaborative, deeply democratic approach to continuous improvement, drawing on diverse constituencies—including students, teachers, and local business leaders—in their effort.”

Thus, there are some insights to be found in approaches that rely on “local democracy” that can help do right for our children and the society at large. Following these examples, rather than the lead of self-important billionaires, is where we can find hope for a better education system and a more democratic society.

As for Bloomberg, maybe he should just go away and let the people lead. We’ve had too much “reform” from self-declared rich saviors and philanthrocapitalists already. In fact, it’s long past time that we save ourselves from them.

The National Assessment Governing Board, the federal agency in charge of the NAEP assessments, is aware that the achievement levels (Basic, Proficient, Advanced) are being misused. They are considering tinkering with the definitions of the levels. NAGB has invited the public to express its views. Below is my letter. If you want to weigh in, please write to NAEPALSpolicy@ed.gov and Peggy.Carr@ed.gov. Responses must be received by September 30.

My letter:


Dear NAEP Achievement-Level-Setting Program,

As a former member of the National Assessment Governing Board, I am keenly interested in the improvement and credibility of the NAEP program.

I am writing to express my strong support for a complete rethinking of the NAEP “achievement levels.” I urge the National Assessment Governing Board to abandon the achievement levels, because they are technically unsound and utterly confusing to the public and the media. They serve no purpose other than to mislead the public about the condition of American education.

The achievement levels were adopted in 1992 for political reasons: to make the schools look bad, to convey simplistically to the media and the public that “our schools are failing.”

The public has never understood the levels. The media and prominent public figures regularly report that any proportion of students who score below “NAEP Proficient” is failing, which is absurd. The two Common Core-aligned tests (PARCC and SBAC) adopted “NAEP Proficient” as their passing marks, and the majority of students in every state that uses these tests have allegedly “failed,” because the passing mark is out of reach, as it will always be.

The National Center for Education Statistics (NCES) has stated clearly that “Proficient is not synonymous with grade level performance.” Nonetheless, public figures like Michelle Rhee (who was chancellor of the DC public schools) and Campbell Brown (founder of the website “The 74”) have publicly claimed that the proficiency standard of NAEP is the bar that ALL students should attain. They have publicly stated that American public education is a failure because there are many students who have not reached NAEP Proficient.

In reality, there is only one state in the nation–Massachusetts–where as much as 50% of students have attained NAEP Proficient. No state has reached 100% proficient, and no state ever will.

When I served on NAGB for seven years, the board understood very well that proficient was a high bar, not a pass-fail mark. No member of the board or the staff expected that some day all students would attain “NAEP Proficient.” Yet critics and newspapers consistently use NAEP Proficient as a bar that “all students” should one day reach. This misperception has been magnified by the No Child Left Behind Act, which declared in law that all students should be “proficient” by the year 2014.

Schools have been closed, and teachers and principals have been fired and lost their careers and their reputations because their students were not on track to reach an impossible goal.

As you well know, panels of technical experts over the years have warned that the achievement levels were not technically sound, and that in fact, they are “fatally flawed.” They continue to be “fatally flawed.” They cannot be fixed because they are in fact arbitrary and capricious. The standards and the process for setting them have been criticized by the General Accounting Office, the National Academy of Sciences, and expert psychometricians.

Whether using the Angoff Method or the Bookmarking Method or any other method, there is no way to set achievement levels that are sound, valid, reliable, and reasonable. If the public knew that the standards are set by laypersons using their “best judgment,” they would understand that the standards are arbitrary. It is time to admit that the standard-setting method lacks any scientific validity.

When they were instituted in 1992, their alleged purpose was to make NAEP results comprehensible to the general public. They have had the opposite effect. They have utterly confused the public and presented a false picture of the condition and progress of American education.

As you know, when Congress approved the achievement levels in 1992, they were considered experimental. They have never been approved by Congress, because of the many critiques of their validity by respected authorities.

My strong recommendation is that the board acknowledge the fatally flawed nature of achievement levels. They should be abolished as a failed experiment.

NAGB should use scale scores as the only valid means of conveying accurate information about the results of NAEP assessments.

Thank you for your consideration,

Diane Ravitch
NAGB, 1997-2004
Ph.D.
New York University

ALSO:

The National Superintendents Roundtable wrote a letter.

I urge you to read this here.

The letter documents the many scholarly studies criticizing the NAEP achievement levels.

Here is an excerpt:

“NAGB hired a team of evaluators in 1990 to study the process involved in developing the three levels. A year later the evaluators were fired after their draft report concluded that the process “must be viewed as insufficiently tested and validated, politically dominated, and of questionable credibility.”

“In 1993, the U.S. General Accounting Office labeled the standard-setting process as “procedurally flawed” producing results of “doubtful accuracy.”

“In 1999, the National Academy of Sciences reported the achievement-level setting procedures were flawed: “difficult and confusing . . . internally inconsistent . . . validity evidence for the cut scores is lacking . . . and the process has produced unreasonable results.”

“Shortly after No Child Left Behind was signed into law in 2001, Robert Linn, past president of the American Educational Research Association and of the National Council on Measurement in Education, and former editor of the Journal of Educational Measurement, said the “target of 100% proficient or above according to the NAEP standards is more like wishful thinking than a realistic possibility.”

“In 2007, researchers concluded that fully a third of high school seniors who completed calculus, the best students with the best teachers in the country, could not clear the proficiency bar. Moreover, they added, fully 50 percent of those who scored “basic” in twelfth grade math had achieved a bachelor’s degree (a proportion comparing favorably with four-year degree rates at public universities).

“The Buros Institute, named after the father of the Mental Measurements Yearbook, criticized the lack of a validity framework for NAEP assessment scores in 2009 and recommended continuing “to explore achievement level methodologies.”

“Fully 30 percent of 12th-graders who completed calculus were deemed to be less than proficient, said a Brookings Institution scholar in 2016, a figure that jumped to 69 percent for pre-calculus students and 92 percent for students who completed trigonometry and Algebra I. These data “defy reason” and “refute common sense,” he concluded.

“Finally, the NAS study to which the proposed rule responds took note in 2016 of the “controversy and disagreement” around the achievement levels, noting that Congress has insisted since 1994 that the achievement levels are to be used on a trial basis until an objective evaluation determines them to be “reasonable, reliable, valid, and informative to the public.”

“In the Roundtable’s judgment, such an objective evaluation has yet to be completed and a determination that the achievement levels are “reasonable, reliable, valid, and informative to the public” has yet to be seen.

“Linking studies conclude most students in most nations cannot clear “proficiency” bar

“The Roundtable points also to research studies dating from 2007 to 2018 indicating NAEP’s proficiency bar is beyond the reach of most students in most nations. When Gary Phillips of the American Institutes for Research (and former Acting Commissioner of NCES) asked how students in other nations would perform if their international assessment results were expressed in terms of NAEP achievement levels, his results were sobering. The results demonstrated that just three nations (Singapore, the Republic of Korea, and Japan) would have a majority of their students clear the NAEP bar in 8th-grade mathematics, while Singapore alone could meet that standard (more than 50% of students clearing the bar) in science.

“Subsequently Hambleton, Sireci, and Smith (2007) and also Lim and Sireci (2017) reached conclusions similar to those of Phillips.”

The fact is that “NAEP proficiency” is an impossible goal for most students. To recognize that does not lower standards. It acknowledges common sense.

Not every runner will ever run a four-minute mile. Some will. Most won’t.

Alyson Klein wrote a useful overview of emerging critiques of our national obsession with standardized testing. As Marc Tucker points out, we are likely the only country that tests every child every year. As Daniel Koretz says in the article in Education Week, human judgment should be part of any consequential decision about school quality.

More to the point, and she doesn’t mention this, our massive spending on standardized tests has brought diminishing returns. How many more years will we wait before policymakers and legislators conclude that the Testing Charade (Koretz’s term) has exhausted its value and has become a costly burden?

Because she writes as a journalist, not an expert, she includes contrary views from spokesmen for assessment corporations who make a living selling the same old tests, the more the better for the bottom line.

She begins:

It’s a spring ritual: Every year in the U.S., millions of schoolchildren take annual, standardized state tests to get a sense of how well their states, districts, schools, and even teachers are helping them learn.

Another sampling of students takes the National Assessment of Educational Progress—or NAEP, better known as the Nation’s Report Card. Those results, released periodically, fill in the gaps to show how students in a particular state are performing relative to their peers.

That’s how accountability and assessment have worked in the United States at least since the advent of the No Child Left Behind Act back in 2002 and continuing with its replacement, the Every Student Succeeds Act of 2015.

And in fact, NAEP and Advanced Placement tests are prime components of Quality Counts’s Achievement Index, which grades and ranks states in this politically fraught category.

The United States is unique among countries in subjecting students so often to standardized tests, but as testing experts note, the resulting deluge of data comes with significant trade-offs on exam quality. And despite a few innovations under ESSA, plenty of them also wonder whether the road not taken might have produced a more nuanced and useful, if less frequent, trove of information.

Testing every student every year is a costly prospect, said Marc Tucker, the president and CEO of the National Center on Education and the Economy, a research and policy organization in Washington. Tucker’s research has focused on the policies and practices of the countries with the best education systems.

And the expense means that the tests are often lower quality than tests used in other countries, and a poor gauge of the higher-order critical thinking skills that students need in college, the workforce, and life, he added.

“We’ve made it virtually impossible to have the quality of tests that other nations that are far ahead of us are using to determine how well their own kids are doing,” Tucker said. “So what we’ve done is to deprive ourselves of tests that will enable us to measure the things that are the most important about whether or not our kids are going to be ready for what’s coming. That’s a very poor trade. A very poor trade.”

By contrast, very few of the highest-performing countries test students every year, Tucker said. And when they do test, they often use deeper assessments that include performance tasks or writing prompts, giving educators a richer understanding of what students know and are able to do.

Singapore, for example, outperforms the U.S. on international measures such as the Program for International Student Assessment, or PISA, in average reading, math, and science performance. It tests students only about three times in the course of their careers—once at the end of elementary school, once in middle school, and once in high school, Tucker said.

Someone should calculate the billions spent on standardized testing over the past 20 years, and we could then imagine how that same money might have been used to improve the conditions in schools.

But then, the standardized testing industry has lobbyists, and the children don’t.

This is a beautiful statement. Computer-graded essays represent the ultimate dumbing down of education. Professor Les Perelman of MIT has written many studies about the stupidity of machines. They are indifferent to factual accuracy. They don’t understand tone or irony or wit. They can be fooled by pretentious gibberish.

Keep fighting!

Open Letter to Ohio Department of Education from English teachers. Concerning: Computer-graded exams.

by inthevalleyofthedoan, September 8, 2018
OPEN LETTER

TO: Paolo DeMaria

Superintendent of Public Instruction

Ohio Department of Education

superintendent@education.ohio.gov

CC: Office of Curriculum and Assessment:

Brian.Roget@education.ohio.gov

Sarah.Wilson@education.ohio.gov

Shantelle.Hill@education.ohio.gov

Daniel.Badea@education.ohio.gov

Sarah.McClusky@education.ohio.gov

FROM: English teachers of Shaker Heights High School

September 7, 2018

Dear Superintendent DeMaria and the Office of Curriculum and Assessment,

We are English teachers at Shaker Heights High School, and we would like to voice our profound dismay over the direction that the Ohio Department of Education has taken with the End of Course exams.

In the nation’s unthinking rush to test, test, test, we have reached a new low: We are now expected to teach our students how to write for a machine to read.

We have been given a document called, “Machine-Scored Grading: Initial Suggestions for Preparing Students,” produced by the Westerville City Schools “in consultation with the ODE.” According to these guidelines, “When composing text to be read by a computer, the writer cannot assume that the machine will ‘know’ and be able to interpret communicative intent.”

Imagine for a moment how humiliating it is for students to hear that what they write will be read by a machine, not by a human. Can you think of anything more pointless? Would anybody be inspired to do their best work?

The message that we send students is this: Your inner self, the ground from which all writing springs, has no value, no relevance. We do not care about the content of your mind, only that you have the mental machinery to decipher and generate informational text.

Writing for a computer is antithetical to everything that led us to become educators. Our overseers in Columbus, however, have a very different attitude. In support of machine scoring, this is from an official statement from an Associate Director of the Office of Curriculum and Assessment:

“This is the only way to get to adaptive testing and to return results faster, with the goal to be eventual on demand results, which has been an extremely vocal issue by the field to legislators, ODE Leadership, etc.”

First of all, this is an appalling sentence. But once we get past the errors in syntax, grammar and capitalization, and the sloppy, confusing phrasing, we are still left with an absurdity. We teachers are supposed to set students before a computer and then wait breathlessly for the machine to tell us how well or poorly the student writes? That is the ultimate goal? And the person in charge doesn’t even know how to write? How much are Ohio taxpayers spending on this?

There are always the same three justifications for computer grading:

It’s fast.
It’s cheap.
It’s objective.
But we can point to a system that is faster, cheaper, and maybe even more objective. There just happens to be a group of trained professionals handy: people who are dedicated to the wellbeing and growth of Ohio’s schoolchildren, people who love writing and literature, people who are trained to the standards of the Ohio Department of Education, people who continually strive to improve their ability to provide meaningful evaluation of student writing:

Teachers.

We can do the job fast because we’re with the students every day. We can do it cheap, in fact at no extra cost to Ohio taxpayers, because it’s what we’re paid to do anyway.

You might assume that machines have us beat when it comes to objectivity. But computers are only as objective as the humans who program them. And we have good reason to distrust multinational corporations when they invoke proprietary trade secrets to hide the systems that determine the fates of millions of public school children.

But objectivity may be the wrong criterion. As English teachers, we love writing because it is one of the most subjective things taught in school. We love the teaching of writing because we love to see students develop their unique voices, their sense of themselves as the subjects of their own lives.

If we begin our thinking with the assumption that standardized tests are a sacred imperative, then surely the fastest, cheapest, most objective way to grade them is with a machine. However, if we begin our thinking with the belief that students should learn how to write well, then we see that artificial intelligence is not just irrelevant, but counterproductive.

Superintendent DeMaria, what is truly being tested here is the ODE itself. Are you so captive to the testing-industrial complex that you throw millions of taxpayer dollars into an unnecessary technology? Or are you so committed to educating students that you are willing to use your available human capital to do it for free?

Yours sincerely,

English teachers at Shaker Heights High School

New York State Commissioner of Education MaryEllen Elia defended the state tests in a letter to the editor of an upstate newspaper.

What was interesting was what she did not say.

She wrote:

Your recent editorial “Benefits of Regents testing still unclear” (“Another View,” Adirondack Daily Enterprise, Aug. 28) is riddled with inaccurate information about New York’s student testing requirements. For the benefit of your readers, I am writing to set the record straight.

Earlier this year, the U.S. Department of Education approved New York’s Every Student Succeeds Act plan. It reflects more than a year of collaboration with a comprehensive group of stakeholders throughout the state. Approval of our plan by USDE ensures that New York will continue to receive about $1.6 billion annually in federal funding to support elementary and secondary education in New York’s schools. Had we not received federal approval, that money would have been left on the table, to the great detriment of our students and teachers.

Over the past three years, I have communicated frequently with the USDE about test participation rates and the importance of not penalizing schools, students or anyone else when a district’s participation rate falls below the federally required level.

The editorial states that in June the Board of Regents adopted regulations to implement the state’s ESSA plan — leading your readers to believe, erroneously, that these regulations are now final. In fact, the implementing regulations are temporary. We continue to make changes to the regulations based on the many public comments received.

We anticipate the Board of Regents will discuss these comments and proposed modifications to the draft regulations at its September meeting. The revised regulations will again go out for comment before they are permanently adopted. We hope your readers will participate in this ongoing public comment process.

Your editorial also is misleading in its claim that releasing state test results in September “makes the testing data nearly useless for school districts.” Here are the facts. In early June, schools and school districts were able to access instructional reports for the 2018 state assessments. At the same time, the department released about 75 percent of the test questions that contribute to student scores. The instructional reports, together with the released test questions, are used by schools and districts for summer curriculum-writing and professional development activities. Additionally, while statewide test results are not yet publicly available, we have already provided districts with their students’ score information. Districts can — and should — use this information to help inform instructional decisions for the upcoming school year.

The state Education Department’s stance remains unchanged: There should be no financial penalties for schools with high opt out rates. We continue to review the public comments on this and other proposed regulations, and those comments will be carefully considered as we finalize the state’s ESSA regulations.

Ultimately, it is for parents to decide whether their child should participate in the state assessments. In making that decision, though, they should have accurate information. I hope this letter gives them a better understanding of the facts.

MaryEllen Elia
Albany
The writer is state commissioner of education.

I checked with teachers, and this is what they said.

The test scores are released long after the student has left his or her teacher and moved to a different teacher.

Most of the questions are released, but the teacher never learns which questions individual students got right or wrong.

The tests have NO DIAGNOSTIC VALUE.

The tests have NO INSTRUCTIONAL VALUE.

Apparently, it means a lot to Commissioner Elia to compare the scores of different districts, but that comparison is of no value to teachers, principals, or parents.

One middle school teacher said this to me:

“…the whole exercise is meaningless at the classroom level. Admins might look at the data when it comes to certain skills/content areas, but without looking at the questions/answers, it is not helpful for us in the trenches.”

Another teacher told me:

“…we do not get student-specific results for each question, we are supposed to look at statewide results and then somehow extrapolate that back to our classrooms, the following year, with different kids. So this is a BLUNT tool at best and students get no individual diagnostic benefit.”

The state tests are pointless and meaningless. They have no diagnostic value whatever for individual students.

Every parent in New York should understand that their children are subjected to hours of testing for no reason, other than to allow the Commissioner to compare districts. Their children receive no benefit from the testing. No teacher learns anything about their students, other than their scores.

The state tests are pointless and meaningless. They have no diagnostic value for students—or teachers.

OPT OUT.

OPT OUT.

OPT OUT.