Steve Hinnefeld is a veteran reporter on Indiana education.
In this post, he describes the shift from a simplistic A-F rating system (the one devised by Jeb Bush) to the federal rating system, which includes more factors.
The problem with both ratings systems is that they accurately measure student income.
The highest rated schools have students with the highest income.
The lowest rated schools have students with the lowest income.
So if teachers choose to teach the neediest students, they will be teaching in a “failing” school, no matter how dedicated they are.
If teachers land a job in an affluent suburb, they can consider themselves successful.
He writes:
For example, at schools that exceeded expectations, the overall rate of students who qualified by family income for free and reduced-price school meals was 17.6%, compared to the state average of about 48%. At schools that did not meet expectations, the free-and-reduced meal rate was 74.2%. The correlation between poverty and federal ratings held for charter schools as it did for public schools.
What worthless junk!
Another extrinsic punishment and reward system that will perpetuate inequities.
When will they ever learn?
Coming soon from the Deformers: State “Balanced Scorecard” evaluation systems, Orwellian databases of evaluations, and a national Curriculum Commissariat and Thought Police to serve as centralized curriculum gatekeeper.
And, of course, the Trump misadministration just gutted the school lunch program, cutting a million poor students from the rolls. Because feeding poor kids is a terrible waste of money that could be spent on golf trips to Bedminster, Mar-a-Lago, and the Trump International Golf Links in Ireland and Scotland.
What, then, is a vision for the future of assessing schools? What if we stopped the ridiculous, simplistic rating by a letter? Maybe it’s time to assess schools while recognizing that schools have different students from different backgrounds who learn at different rates in different ways.
Quit making assessments for politicians who prefer shortcuts that allow them to camouflage the truth. Recognize individual progress. School assessment is complex and should not be cheapened to a letter grade.
It’s time to present a vision for the future.
caplee68
I would like to see two different reports. No ratings.
The first would be the traditional snapshot (assessment) of student accomplishment, where there is a one-to-one correlation between correct answers and the score on a scale of zero to 100.
The minimum expectation would be a score of 70 or above.
There would be a percentage breakdown of students scoring in the following score ranges on a scale of zero to 100:
% scoring 0 to 30
% scoring 31 to 40
% scoring 41 to 50
% scoring 51 to 60
% scoring 61 to 70
All the way to % scoring 91 to 100
And the % of students failing to score a 70 or better.
And there would be no mention of growth, but there would be data provided showing how many students were left behind (all who did not score 70 or better at a minimum), and how many were provided additional instruction or remediation and the results, broken down as already described.
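To make the first report concrete, here is a minimal sketch (mine, not caplee68’s) of how such a breakdown could be computed from a list of 0-100 scores. The band edges simply mirror the ranges listed above, and the 70-point cutoff is the commenter’s proposed minimum; the example scores are made up.

```python
# A minimal sketch (mine, not caplee68's) of the first report described above:
# given a list of 0-100 scores, report the percentage of students in each score
# band and the percentage who fall short of the proposed 70-point minimum.
# The band edges mirror the ranges listed above (note the 61-70 band therefore
# includes the passing score of 70, exactly as the ranges were written).

from typing import Dict, List

BANDS = [(0, 30), (31, 40), (41, 50), (51, 60), (61, 70),
         (71, 80), (81, 90), (91, 100)]
PASSING_MINIMUM = 70  # the commenter's proposed cut point


def score_breakdown(scores: List[int]) -> Dict[str, float]:
    """Percentage of scores in each band, plus the percentage below 70."""
    total = len(scores)
    report = {}
    for low, high in BANDS:
        in_band = sum(1 for s in scores if low <= s <= high)
        report[f"{low}-{high}"] = round(100 * in_band / total, 1)
    below = sum(1 for s in scores if s < PASSING_MINIMUM)
    report["below 70"] = round(100 * below / total, 1)
    return report


# Made-up scores, for illustration only:
print(score_breakdown([45, 62, 70, 71, 88, 93, 55, 67]))
```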
The second would be a report of the demographics of each future graduation cohort.
I can provide a list if necessary.
And the point would be?
The point would be a more honest and understandable report of the condition of each school or state for the previous year.
Based on standardized test scores.
Which report on the economic condition of families.
I call BS.
dianeravitch: I’d like to see a report on the physical condition of the school. Then add: does the school have a full-time nurse, social workers, money for busing kids, a librarian with money to purchase books each year, money for the arts (including art supplies and instrument repairs), money to feed children who come to school starving, clothing for children who wear only one outfit, washers and dryers to clean clothing, medical care for ill kids who have no insurance, class field trips to expand experiences, SMALL CLASS SIZES, and teachers who are able to be creative in their classrooms and are NOT dominated by worthless standardized tests?
That would be a report that would mean something. Forget the testing that never ends.
Carol,
You are assuming that the state should be held accountable for providing appropriate resources for students and teachers.
When I evaluate Indiana, I will use the published data.
Which I was given a link to.
And what they present to the public is misdirection at best.
YES.
“there would be data provided showing how many students were left behind (all who did not score 70 or better at a minimum)”
Who decides on such a cutoff?
Test companies? Politicians? State education officials?
You?
Such cutoffs are arbitrary, which makes any claimed “mathematical analysis” of them simply “mathturbation” (meaningless math that is often employed to impress those who are intimidated by math).
One, I chose 70% correct answers as a minimum because everywhere I have lived since 1975 in the states or overseas used 70 as the cut point between passing and failing.
Two, I have yet to find an assessment where “meeting standards” or “proficient” did not range from 39% correct answers to a percentage in the low 60s. While I do not remember the grade or discipline, I remember the 39 only because it was the “meets standard” cut in Georgia some years past.
If your state or local system uses a different standard than 70 or better for passing in the classroom, please let me know. I would appreciate it.
70% of what?
You obviously know very little about the deep flaws in standardized testing.
Questions that have no right answer.
Questions that have two right answers.
Poorly written questions.
Low-paid scorers (read Todd Farley “Making the Grades”).
Machine errors.
The scores have a very high correlation with family income and education.
70% of what?
All cut scores are arbitrary and can be manipulated at will to the benefit of politicians.
The Measure of the Test
Test measures wallet
Of those who sell test
Short and the tall of it:
Fattest is best
So if a state gave a test with questions in Swahili, a cutoff of 70% would still be meaningful because “everywhere [you] have lived since 1975 in the states or overseas used 70 as the cut point between passing and failing”?
I don’t think you appreciate just how absurd your cutoff argument is.
That depends.
(1) If the primary language in the State were Swahili, then 70% or better for an assessment written in Swahili would not be an unreasonable expectation.
(2) At the moment (to the best of my current knowledge), Standard American English is the primary language in every state in the United States. However, for an assessment written in Swahili for students whose first language was Swahili, and administered only to those students, an expectation of 70% or better could still be reasonable. But only if their instruction was in Swahili.
(3) I am going to presume that you were not playing, but asked a serious question to the best of your ability.
I was not making an academic argument. What I was trying to point out to you was that everywhere I have lived, the passing percentage has been 70% or better, and it seems (unless you provide evidence and want to make this an academic discussion) that most folks expect or hope their kids earn passing grades of 70 or better, even on standardized assessments.
I am sorry I did not make it simple enough for you. I can try to make it simpler if you are still confused and want to engage in a meaningful discussion.
Why 70%? Why not 78% or 62% or 85%?
Where is the science?
Where is the evidence that the “tests” are valid or reliable?
Why don’t states make their own tests and take the profit motive out?
Why not use a reading test that takes 45 minutes instead of six to eight hours of tests over two weeks?
Who converts the raw scores to test scores?
Where do they set the cut score?
Who determines the p value of every question?
Who can defend the stupid irrational practices now imposed on students?
Worthwhile Rating System
Rating politicians
Instead of rating schools
Rating their positions
And booting out the fools
SDP,
We must advocate for standardized tests for our politicians! Publish their scores!
Perhaps the best way to use the scores of political leadership is to contrast their existing scores (I would bet that most of them did well since they were probably given every advantage) with their performance on the social and emotional IQ tests that are trending now. There might be a problem coming up with a new number set to describe numbers below absolute zero. Vectors? Vipers? Perhaps something imaginary.
Give the politicians a test that involves fixing a running toilet, changing the spark plugs on a car, wiring an outlet, changing the nozzle on an oil burner, and sweeping a chimney.
I bet zero would pass, and I would rather have someone who can do the above making important decisions than 100 people who got a perfect score on the SAT, because the latter means absolutely nothing. It does not mean you can DO anything.
This is one more article that I have sent to my state Senator Niemeyer [R-IN] and Representative Chyung [D-IN].
I met on Saturday with Rep. Chyung. He is quite aware of the situation in our public schools and knows and respects Diane Ravitch!!! He said he reads all of my letters that I send to him.
……………………………….
Dear Senator Niemeyer and Representative Chyung,
Standardized tests measure the economic and educational level of the parents. How interesting that ONLY 37.1% of Hoosier students in grades 3-8 were proficient in math and English/language arts. Does this state have really stupid children, OR are the tests continuing to measure nothing of value? STOP WASTING TAXPAYER MONEY TO FIND THE PERFECT STANDARDIZED TEST. IT DOES NOT EXIST. It is, however, demoralizing to the teachers and the students. Who wants to be told every year that you are failing? No student will work harder after being told that their whole school is failing. This is part of the reason that there is a severe shortage of teachers in Indiana. Low pay doesn’t make this situation better.
Schools get points for growth only if students improve their test scores enough to be on track to become “proficient” in four years.
And “proficiency” is a high bar. On the 2019 ILEARN assessment, only 37.1% of Hoosier students in grades 3-8 were proficient in both the math and English/language arts sections of the test. For some high-poverty schools, proficiency rates were in the single digits. Many students who were below proficiency would have to do a lot of growing to reach their achievement targets in four years.
I AGREE: INDIANA’S SCHOOL RATING SYSTEM IS WORTHLESS!!
Sincerely,
Carol Ring [Retired Teacher]
Carol,
I have not looked at Indiana yet. Do you know if Indiana hides the state results (data) behind a dashboard, or are they easily found? Florida was the easiest result to find so far. I live in Georgia and have been tracking results since 2007. I have yet to find Alabama results. And California’s was not too difficult. I ask because Indiana is the first state data set I have found where I cannot simply calculate the percent of a perfect score. And it is a mathematical challenge I want to crack. I find it interesting how educational authorities camouflage results to complicate any attempts at understanding them. My work is unconventional and tends to disturb some folks. I suspect it is because they do not like the translated/decoded results. Thanks in advance for any consideration.
bkendall527: “Do you know if Indiana hides the state results (data) behind a dashboard, or are they easily found?”
[I am mathematically challenged.] I do know that each year people can check the scores of individual schools. They are published online.
Does that information help?????
Thanks, I was provided a link to the files, which I will download after I return home.
Review all of the data you want; it won’t alter the politicking of the bishops and state Catholic Conferences to shield their schools from the accountability imposed on public schools.
The false premise that improving outcomes was the basis for the attacks on public education masks the real goals: theocracy and profit-taking.
Linda, you are correct.
Nothing I evaluate as an education outcomes researcher will change the harm that political factions inflict on the public.
What I publish is for the public. It is a different POV on the results, providing the kind of information I wanted to know during the nine years I was responsible for improving the quality of education in an elementary, middle, and high school here in Georgia.
If some stakeholders do not want understandable results, that is their choice. But for those who do, as far as I know, I am the only person in the U.S. decoding and translating the results for the public.
“I find it interesting how educational authorities camouflage results to complicate any attempts at understanding them”
Of course, if they did not do that everyone would know what a bunch of malarkey their rating system is, so it’s only interesting in so far as dishonesty and cowardice are interesting.
Not incidentally, the poster child for such “camouflaging” is VAM, which uses statistical mumbo jumbo to hide manure.
It’s what is commonly referred to as mathturbation.
Several judges (e.g., in NY and Texas) have actually recognized this and ruled in favor of teachers.
Thumbs-up!
VAM was invented by and is used by people who are either too stupid or too dishonest to do legitimate statistics.
Rhetorical decoding: what measures, created by whom, for what purpose?
Until state legislators post their own scores after taking the tests, we will have no baseline.
Ignore my earlier question. I was provided a link to the data.
bkendall527: Oops.
What have you found and can you report it?
I am going to do my best. My local school district is next on my list, then Florida. My 50th high school reunion (Flagler County School District, Florida) is this year and I will evaluate both the state and the school district I attended. It should make the reunion interesting. I will put Indiana after Florida, bumping California back a place.
Quoting the opinion of the author of “CatholicPac: Why the USCCB should (probably) lose its 501(c)(3) tax-exempt status”: “…it seems clear that the USCCB has an agenda in the education industry… money to the Catholic education system …and subject to none of the rules which public schools must follow.”
The author, Jesse Ryan Loffer quotes a source who studied the state Catholic Conferences, “90% or more of state Catholic conferences testify at legislative hearings, help draft legislation, attempt to shape implementation of policies, inspire letter-writing or telegram campaigns, consult with government officials to plan legislative strategy and talk with people from media.”
From the Archdiocese of Indiana website (3-19-2019): “House Passes Budget that Includes New School Choice Incentives… to follow priority legislation of the Indiana Catholic Conference, visit http://www.indianaCC.org…ways to contact their elected representatives…”
The University of Notre Dame ACE’s Reform Leaders Summit, held in New Orleans this year, will be held in Indianapolis next year.
I well remember when Tony Bennett, who at the time was our “illustrious” superintendent of education in Indiana, was in Highland pontificating, and the superintendent of Merrillville stood and repeated almost exactly the same words. He said in response to Bennett, “I can tell you right now who will have the highest scores.” Of course, he was only one of the leading educators in our area, and that made a HUGE impression on Bennett, who went on to “distinguish” himself in Florida. Not exactly honest in monetary affairs, if my memory is correct.
Get politics OUT OF EDUCATION. What Bennett was doing was the politically correct thing to do at that time. Nauseating.
Get religion out of politics, especially the opportunity to benefit from the gutting of public schools.
“The problem with both ratings systems is that they accurately measure student income.”
Ummm, no they don’t “accurately measure student income”. They show a correlation between parental income and standardized test scores which are much of the basis for the school ratings system. But nothing, quite literally, is being measured by those school ratings systems. How can one measure nothing?
Turn on the facetious font for that last question
Duane,
If the tests measure “nothing,” why do rich kids always come out on top and poor kids on the bottom?
The tests do measure something.
First, it is not always; there are exceptions.
The tests assess and evaluate, but they do not measure. What is the agreed-upon standard unit of measure? What is the agreed-upon exemplar of that standard? What is the measuring device calibrated against said exemplar? What is the acceptable tolerance range of that supposed measure?
None of those things exist, so there is no measuring involved. THE TESTS MEASURE NOTHING, for how is it possible to “measure” the non-observable with a non-existent measuring device that cannot be calibrated against a non-existent standard unit of learning?
PURE LOGICAL INSANITY!
The basic fallacy here is confusing and conflating metrological measuring (metrology is the scientific study of measurement) with “measuring” in the sense of assessing, evaluating, and judging. The two meanings are not the same, and confusing and conflating them is a very easy way to make it appear that standards and standardized testing are “scientific endeavors”: objective, not subjective like assessing, evaluating, and judging.
Those supposedly objective results are then used to justify discrimination against many students for their life circumstances and inherent intellectual traits.
“How is it possible to ‘measure’ the non-observable with a non-existent measuring device that cannot be calibrated against a non-existent standard unit of learning?”
Duane
You are never going to get an answer, not only because there IS no answer to your most astute question, but also because the entire educational establishment has bought into the idea that they are “measuring” something, even though they cannot say precisely what that something is.
The whole measurement field has become bastardized and polluted by the Pearson correlation coefficient.
Correlation is NOT metrology and should never have been associated with it.
Even in hard sciences like physics and chemistry, correlation can only indicate a possible relationship between two variables — and of course, it says nothing at all about cause and effect.
The meaning of the correlation coefficient itself is very nebulous, as indicated by the vast difference in meaning attributed to specific correlations by workers in different fields. In the hard sciences, a correlation coefficient less than 0.5 would not even be taken seriously, but in the soft social sciences, correlation coefficients of 0.3 and even less are regularly taken seriously and used as the basis for important policy decisions that can affect the lives of millions of people. Ask workers in ten different fields what they consider a strong correlation and you will get 15 different answers (the economist will give you 5 different answers depending on the time of day).
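As a side note, here is a minimal sketch (my illustration, not SomeDAM Poet’s) of what a Pearson correlation coefficient actually computes. The district-level figures are invented purely for illustration; even a large r from this arithmetic is not a measurement and says nothing about cause and effect.

```python
# My illustration, not SomeDAM Poet's: a plain Pearson correlation coefficient
# computed from scratch. The arithmetic only summarizes how two lists co-vary;
# it is not a measurement, and it says nothing about cause and effect.

from math import sqrt


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)


# Hypothetical district-level figures, invented purely for illustration:
median_family_income = [32, 41, 55, 68, 90, 120]   # thousands of dollars
mean_test_score = [48, 55, 61, 70, 78, 85]          # percent correct

print(round(pearson_r(median_family_income, mean_test_score), 2))
# A large r here reflects only how these invented lists were constructed,
# not anything about what (if anything) the tests "measure."
```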
Here is a hard problem to solve: schools that enroll high proportions of students in poverty, ELLs, and students with disabilities tend to get low test scores.
Schools with low proportions of those groups and high proportions of students from affluent homes tend to get high scores.
What can we learn from these correlations?
1) high-needs schools attract bad teachers?
2) high-needs schools should be closed?
3) the principals of high-needs schools should be fired, along with the teachers?
4) high-needs schools need smaller classes and experienced teachers?
Correction
Economist will give you SIX different answers.
Speaking of economists, I just read that Larry Summers is criticizing the tax plans of Sanders and Warren.
Yes, the same Larry Summers whose “derivatives free-for-all” policy tanked the world economy in 2007-2008. Harvard alone lost billions from its endowment due to Summers’ “brilliant” economic analysis.
What a 🤡.
Then again, who in Harvard’s econ department is NOT a 🤡? Certainly not Chetty. Certainly not Rogoff and Reinhart. Certainly not Mankiw. Being a 🤡 seems to be a requirement for the job.
Even what some social scientists might consider a “strong” correlation between two variables might actually be due to a third factor. Or it might simply be spurious.
Lots of apparent correlations mean absolutely nothing.
https://www.tylervigen.com/spurious-correlations
That’s not to say that correlations always mean nothing. Taken with other information, they MIGHT be useful to inform decisions in certain circumstances.
But unfortunately, they are all too often misused (e.g., by Raj Chetty) by people who either don’t understand them or dishonestly claim they mean something that they do not.
Regardless, correlations are not measurements.
There must be more to the Raj Chetty story and his love of VAM. How could he fail to acknowledge that it is correlation and mistake it for causation? He is an economist, not a professor of literature.
SDP,
“You are never going to get an answer”
I may not get any answers, but hopefully we will get enough understanding and agreement that we can eventually abandon and eliminate the standards-and-testing malpractice.
Completely agree with you about the Pearson Correlation Coefficient.
Diane,
You asked:
“What can we learn from these correlations?
1) high-needs schools attract bad teachers?
2) high-needs schools should be closed?
3) the principals of high-needs schools should be fired, along with the teachers?
4) high-needs schools need smaller classes and experienced teachers?”
First question:
Nothing, because the results of an invalid process (NAEP, PISA, TIMSS, and other state-sponsored supposed achievement tests) can only be invalid; in other words, quoting Wilson, the results are “vain and illusory.”
The brilliance of Wilson’s work is that he shows/proves the onto-epistemological invalidities involved in the standards and testing malpractice regime. He also points to the fact that if one starts with invalid data, any conclusions drawn will be invalid, false, “vain and illusory,” a chimera, a duende.
In over twenty years of researching, asking for, and looking for any cogent rebuttal or refutation of his arguments: NOTHING! If anyone can point me to one, I’d be quite happy to read it, and probably rebut said rebuttal.
As far as questions 1-4, should I assume that the questions are meant facetiously? My answers: 1, no; 2, no; 3, no (although my gut instinct is to say all adminimals* should be put out to pasture); 4, yes, but that yes is not based on test scores.
*Adminimal (n.) A spineless creature formerly known as an administrator and/or principal. Adminimals are known by/for their brown-nosing behavior in kissing the arses of those above them in the testucation hierarchy. These sycophantic toadies (not to be confused with cane toads, adminimals are far worse to the environment) are infamous for demanding that those below them in the testucation hierarchy kiss the adminimal’s arse on a daily basis, having the teachers simultaneously telling said adminimals that their arse and its byproducts don’t stink. Adminimals are experts at Eichmanizing their staff through using techniques of fear and compliance inducing mind control. Beware, any interaction with an adminimal will sully one’s soul forever unless one has been properly intellectually vaccinated.
“There must be more to the Raj Chetty story and his love of VAM.”
Yes, it’s called money!
Larry Summers, from the Epstein news cycle, is a senior fellow at the charter-loving CAP.
CAP’s talking points come out of Buttigieg’s mouth. CAP’s people lost the election for Hillary.
Neoliberal Bill Clinton’s Carville has come out for corporate shill Bennett of Colorado.
The work product of academicians who fast track through promotions, Roland Fryer and Raj Chetty for example, may have certain merits.
A minor point of disagreement about “the test measures nothing”
Test measures wallet
Of those who sell test
Short and the tall of it:
Fattest is best
“Enter some of their high-profile advisers, who in addition to Stern, include the former education secretary for President Barack Obama, John King, who is now the president and CEO of The Education Trust, and Shavar Jeffries, the president of Democrats for Education Reform, a powerful political action committee that supports candidates who favor, among other things, school choice policies.”
Another ed reform group to lobby for charters and vouchers. You’ll notice NONE of these people ever advocate on behalf of any public school or public school student, anywhere.
Here’s my question- why is it legitimate to lobby for charters and vouchers but NOT legitimate to lobby on behalf of public schools?
Public school students are not permitted to have advocates? That’s forbidden? The only students who may have powerful, high profile and politically connected advocates for their schools are charter and private school students?
Under what set of circumstances would the ed reform echo chamber accept advocacy for a public school or public school student as legitimate and not dismiss it as “protecting the status quo”?
https://www.usnews.com/news/education-news/articles/2020-01-13/national-parents-union-to-challenge-political-influence-of-teachers-groups
I would turn the question on its head. Under what circumstances is it acceptable to run a competing set of schools from the same public purse, adding costs not only for duplicate facilities and administration, but also time and personnel for the sales/ marketing/ advocacy/ lobbying required to compete?
“Baker, the Indiana Department of Education spokesman, said the system is designed to provide ‘actionable/useful data to schools on how students are doing in relation to achieving proficiency.’”
Love that word “actionable.” It’s the appropriate word: giving sufficient reason to take legal action. When pushing for tests that would reveal how poorly their kids were being educated [the result: the NCLB law], minority activists once imagined “actionable” meant the feds would charge in with money and special supports.
School grading system = affluent kids test at A, poorest minority kids test at F. Then we change the names: affluent kids = schools exceeding academic expectations, poorest minorities = schools not meeting expectations. “Action” = state takeover/charter conversion/closing for F schools. What a koinkydink: all the kids we’re taking legal action against are black, brown, poor.
The blog post is confusing. There is no federal rating system in ESSA. If there were a federal rating system, a link to it would be helpful.
Every link in this post takes you to the Indiana Department of Education, where dubious claims are made about a “federal accountability system.”
Nothing in ESSA or the final regulations bearing on Title I, Part A and Title I, Part B corresponds to Indiana’s rating schemes. One says that “a school receives one of the following overall ratings based on the points earned for each available accountability indicator: Exceeds Expectations, Meets Expectations, Approaches Expectations, Does Not Meet Expectations,” then refers to state scores and state cutoff scores for the STATEWIDE tests it administered.
Click to access determining-federal-ratings.pdf
That said, I wonder why Indiana makes claims about federal regulations that require rating schools. ESSA requires states to administer annual statewide assessments in reading/language arts and mathematics in grades 3-8 and once in high school, as well as assessments once in each grade span in science for all students and annual English language proficiency assessments in grades K-12 for all English learners. In this respect, ESSA is not much different from NCLB. Under ESSA, every state had flexibility to add other “accountability indicators.”
I have a hunch that the Indiana Department of Education and state legislature want to promote an image of the feds imposing a system that includes A-F grades and those legacy rubrics from NCLB, blaming the miserable scheme on the feds.
ESSA does not require the rubric “Exceeds Expectations, Meets Expectations, Approaches Expectations, Does Not Meet Expectations.” These are schemes adopted by the state, not imposed by the feds.
Click to access determining-federal-ratings.pdf
Indiana’s schemes ARE TERRIBLE. State officials concocted the schemes and received USDE approval for this mess of a plan. That plan was approved by anonymous reviewers selected by someone in Betsy DeVos’s office. Florida’s plan was the last to be approved, and not until September 2018. In theory, those reviewers followed the yellow brick road in this document: https://www2.ed.gov/policy/elsec/leg/essa/essaassessmentfactsheet1207.pdf
Thanks for this, Laura: I reviewed a brief guideline of ESSA accountability requirements at https://www2.ed.gov/policy/elsec/leg/essa/essafactsheet170103.pdf
I was surprised to find that ESSA not only does not suggest reductive school-grading systems, as you say (in fact, it discourages that); it also does not suggest such corrective measures as school closings, charter conversion, or state takeovers. The required action is “comprehensive or targeted measures for improvement.” There are some considerations required to be addressed in corrective plans, monies required to be expended, and required involvement of stakeholders.
I’d thought the law’s requirement that the Department of Education review and approve state accountability plans was a serious drawback. But review shows the law would allow an enlightened Secretary of Education to direct states toward, e.g., equitable funding and supports as needed. Now I’m thinking the only thing needing change is ESSA’s data-collection requirements, i.e., state standardized test scores for grades 3-8 plus one high school grade.
Thanks for the link.
Thanks for the links.