New York State Commissioner of Education MaryEllen Elia defended the state tests in a letter to the editor of an upstate newspaper.
What was interesting was what she did not say.
She wrote:
Your recent editorial “Benefits of Regents testing still unclear” (“Another View,” Adirondack Daily Enterprise, Aug. 28) is riddled with inaccurate information about New York’s student testing requirements. For the benefit of your readers, I am writing to set the record straight.
Earlier this year, the U.S. Department of Education approved New York’s Every Student Succeeds Act plan. It reflects more than a year of collaboration with a comprehensive group of stakeholders throughout the state. Approval of our plan by USDE ensures that New York will continue to receive about $1.6 billion annually in federal funding to support elementary and secondary education in New York’s schools. Had we not received federal approval, that money would have been left on the table, to the great detriment of our students and teachers.
Over the past three years, I have communicated frequently with the USDE about test participation rates and the importance of not penalizing schools, students or anyone else when a district’s participation rate falls below the federally required level.
The editorial states that in June the Board of Regents adopted regulations to implement the state’s ESSA plan — leading your readers to believe, erroneously, that these regulations are now final. In fact, the implementing regulations are temporary. We continue to make changes to the regulations based on the many public comments received.
We anticipate the Board of Regents will discuss these comments and proposed modifications to the draft regulations at its September meeting. The revised regulations will again go out for comment before they are permanently adopted. We hope your readers will participate in this ongoing public comment process.
Your editorial also is misleading in its claim that releasing state test results in September “makes the testing data nearly useless for school districts.” Here are the facts. In early June, schools and school districts were able to access instructional reports for the 2018 state assessments. At the same time, the department released about 75 percent of the test questions that contribute to student scores. The instructional reports, together with the released test questions, are used by schools and districts for summer curriculum-writing and professional development activities. Additionally, while statewide test results are not yet publicly available, we have already provided districts with their students’ score information. Districts can — and should — use this information to help inform instructional decisions for the upcoming school year.
The state Education Department’s stance remains unchanged: There should be no financial penalties for schools with high opt out rates. We continue to review the public comments on this and other proposed regulations, and those comments will be carefully considered as we finalize the state’s ESSA regulations.
Ultimately, it is for parents to decide whether their child should participate in the state assessments. In making that decision, though, they should have accurate information. I hope this letter gives them a better understanding of the facts.
MaryEllen Elia
Albany
The writer is state commissioner of education.
I checked with teachers, and this is what they said.
The test scores are released long after the student has left his or her teacher and moved to a different teacher.
Most of the questions are released, but the teacher never learns which questions individual students got right or wrong.
The tests have NO DIAGNOSTIC VALUE.
The tests have NO INSTRUCTIONAL VALUE.
Apparently, it means a lot to Commissioner Elia to compare the scores of different districts, but that comparison is of no value to teachers, principals, or parents.
One middle school teacher said this to me:
“…the whole exercise is meaningless at the classroom level. Admins might look at the data when it comes to certain skills/content areas, but without looking at the questions/answers, it is not helpful for us in the trenches.”
Another teacher told me:
“…we do not get student-specific results for each question, we are supposed to look at statewide results and then somehow extrapolate that back to our classrooms, the following year, with different kids. So this is a BLUNT tool at best and students get no individual diagnostic benefit.”
The state tests are pointless and meaningless. They have no diagnostic value whatever for individual students.
Every parent in New York should understand that their children are subjected to hours of testing for no reason other than to allow the Commissioner to compare districts. Their children receive no benefit from the testing. No teacher learns anything about their students, other than their scores.
The state tests are pointless and meaningless. They have no diagnostic value for students—or teachers.
OPT OUT.
OPT OUT.
OPT OUT.
For many years my district administered the CAT test. The results were available in May, and teachers used curriculum meetings in May and June to do an item analysis on them. Even so, the results were of no use to students in the same academic year. In addition, the results gave us mostly the same information each year. In reading, more students faltered on questions requiring higher-order thinking that asked them to analyze or synthesize information. In math, more students faltered on multi-step word problems than on computation. In essence, the harder the question, the more students missed it. There are no earth-shattering findings to be had from an item analysis of a standardized test. Teachers can find out much more useful information through in-class assessments and other informal means. Standardized tests have little to no instructional value. The state and administrators want the data, and that is the real goal of standardized tests.
Elia wrote: “I have communicated frequently with the USDE about test participation rates and the importance of not penalizing schools, students or anyone else when a district’s participation rate falls below the federally required level.”
SO WHAT? Communication that results only in an approved ESSA plan that fails to protect NYS schools and children means nothing. I call B.S.
Just wondering… how does Elia justify/explain using faulty formulas and calculations (Weighted Performance Index + Core Subject Performance Index = Composite Performance Index) that will have the unintended negative consequence of catapulting (and wrongly labeling) otherwise effective schools with significant opt out rates into ESSA Comprehensive and/or Targeted Support and Improvement categories???? And how does Elia explain forcing schools with less than 95% participation, absent systematic exclusion, to engage in multi-tiered participation rate improvement (propaganda) plans that cost (read WASTE) time and energy and money and personnel resources???? How are these measures NOT penalizing schools, students, or anyone else?????
The tests have NO DIAGNOSTIC VALUE.
The tests have NO INSTRUCTIONAL VALUE.
The tests are a “Violation of personhood”.
Run, run from the testing gun (opt out), for “we” are “forced” to pull the testing trigger…
Holy “innocent oppressor” Batman, the illusion of opposition continues.
“Apparently, it means a lot to Commissioner Elia to compare the scores of different districts, but that comparison is of no value to teachers, principals, or parents.”
And that comparison makes absolutely no difference to those whom we should be most concerned about–THE STUDENTS.
When, oh when, will we get over and get out of this “diagnosing regime” and get back to helping and guiding each student to reach her/his fullest desired potential mode of teaching and learning?
We have yet to enter a “diagnosing regime.” The tests are incapable of “diagnosing” anything. What we have is a ranking regime that is obviously of no use toward improving instruction or learning.
As with the standards and measurement malpractice regime, these things don’t have to actually do what their supporters say they do. The diagnosing regime is an illogical extension of the standards and measurement regime. Both are designed to garner profits for the purveyors of those malpractices, whether for-profit, like Pearson and others, or supposedly non-profit, like the College Board, the PARCC consortium, etc.
Elia is one of those people who will do and say anything as long as she gets paid enough for it.
She did it down in Florida for Bill Gates and now she is doing it in NY for Cuomo. And if she ever leaves NY, she will certainly do it somewhere else.
Are you suggesting that she is an Edu-Ho?
How energized is the NY opt out movement? The people behind it are among the great anti-Reform heroes.
You bet they are heroes!
The tests have NO DIAGNOSTIC VALUE.
The tests have NO INSTRUCTIONAL VALUE.
And here’s why.
The language of the CC/NextGen standards is mostly incompatible with proper test development.
The scores have been corrupted and rendered less than useless by the NYS moratorium, the opt out movement, and by hyper-failure rates themselves.
There is no such thing as diagnostic value in standardized testing. They are written to subjectively (Ha!) assess achievement only. These tests CANNOT inform teachers as to WHY a student answered any one test item incorrectly. If you don’t know WHY you can’t fix the problem.
Fortunately for teachers, we don’t need no stinkin’ tests to know WHY any one of our students responded incorrectly. And 99% of the reasons have nothing to do with instructional practices. If “teaching” were to blame, then ALL students with “highly effective teachers” would score well, and ALL students with “ineffective teachers” would score poorly. Of course this is not the case. We can’t standardize attention spans, intrinsic motivation, attendance, personalities, mind-set, or intelligence. So this IS what the tests inform us about: young children have a wide range of abilities and varying levels of brain development, to say nothing of their motivation to do well on tests that don’t count. Duh.
As a science teacher I get to score the ILS in house. We read every question and assess every response. No secrets; complete transparency. We see some students score at the very highest level and others very low, all with the same level of instruction and opportunity. Here’s what we’ve learned since 2001:
The tests have NO DIAGNOSTIC VALUE.
The tests have NO INSTRUCTIONAL VALUE.
Oh yeah, and if you test-prep them into zombieville, we can improve the scores slightly.
“They are written to subjectively (Ha!) assess achievement only.”
Ummmm, no they don’t assess achievement. Why?
Well, first off, what is this concept called “achievement”? I’ve yet to see an adequate definition of that term when it comes to the teaching and learning process. Without a solid definition, how can one even begin to assess “achievement”?
“Oh, Swacker, there you go playing semantics again!”
YEP! Because those “semantics” are very important. Without agreed-upon definitions, we are left with a mish-mash Babel of inanity, resulting in the implementation of insane policies that harm students.
Can we expect clarity in public education discourse?
Probably not.
Should we?
Proper? test development
“The General frame is the basis for educational measurement, for psychometrics. The focus is on the test itself, its content and the measurement it makes. Such terms as reliability and ability are essential to its mythological credibility. It purports to be objective science, and hence independent of faith. As such the world it relates to is static, so there is no essential activity. It is explicit in discourses about educational measurement, standardised tests, grades, norms; it is implicit in most discourses about standards and their definitions.
The Specific frame is about the whole assessment event, and is the basis for the literature that derived from the notion of specific behavioural objectives. The focus is on the student behaviour described within controlled events; in these events the context, task, and criteria for adequate performance are unambiguously pre-determined. Reality is observable in the phenomenological world; the essential activity is what the student does. This frame is explicit in discourses about objectives and outcomes; it is implicit, though rarely empirically present, in discourses about criteria, performance, competence and absolute standards.
The Responsive frame focuses on the assessor’s response to the assessment product. Unlike the other frames it makes no claims to objectivity; as such its mythical tone is ephemeral, its status low. This frame is explicit in discourses about formative assessment, teacher feedback, qualitative assessment; it is implicit though hidden in the discourses within other frames, recognised by absences in logic and stressful silences in reflexive thought. Within the confines of communal safety such discourses are alluded to, skirted around, or at times discussed; on rare occasions such discourses emerge triumphantly as ideologies within discourse communities.”
“Highly effective Judge”
“The Judge’s frame is far more often evoked than talked about. The focus is on the assessor’s judgment of the product. The major activity is in the mind of the assessor. Such terms as expert and connoisseur are essential to the construction of the accompanying myth. Faith is the requirement of all participants. It is explicit in discourses about teacher tests, public examinations, and tertiary assessment, and implicit in all human activities that involve the categorisation of people by assessors…

Talk to them of normative curves or rank orders or percentiles, all of which imply relative standards, and they will hear you out, wish you well, and with scarcely disguised distain send you on your way. In their absolute world such matters are irrelevant. They know what the standard is, and therefore their job is simple. Simply to allocate students, or their work, to various positions above or below that standard…”
For those not familiar, NB is quoting Wilson from his never refuted nor rebutted 1997 dissertation “Educational Standards and the Problem of Error” found at: http://epaa.asu.edu/ojs/article/view/577/700
THAT most important piece of education writing of the last 50 years ought to be mandatory reading for all who are involved with the teaching and learning process that goes on in our community public schools.
If all tests are so inaccurate, why do they correlate so closely with classroom grades and GPA? Are you suggesting that a properly developed test cannot assess subjective knowledge and understanding? Why do tests help reveal our best and brightest: doctors, surgeons, engineers, scientists, lawyers? Do you have a better way to find and certify these people? And why do you speak in tongues?
To whom are you addressing your questions, Rager?
I was the only one responding in the Watertown Daily Times comment section. Here is what I said (and continue to say): With all due respect, Commissioner Elia, time is up. Let’s rid our schools of Common Core and this flawed testing regime. I sincerely hope teachers are closing their doors more often and teaching what they know is best for our kids: a well-rounded education that will enhance lifelong learning. Too many gimmicks; too many years have been wasted. Think of the children we have lost to this type of education. To think we are still forcing something on them that wasn’t even created by educators is truly wrong. Do what’s right. Only you and the Board of Regents can end this. Please do so now. Thank you for saying it is the parents’ choice whether their children take the tests. Hopefully they have realized by now that it is the only thing left to do, if the change does not come from you.