Julie Vassilatos read the report of the University of Chicago Consortium on School Research about the closing of 50 schools in one day in 2013. She knew that there was no academic gain for the children affected.
But there was one measurable result that no one talked about: Sorrow.
“The sorrow of children whose schools were closed.
“It’s measurable. The researchers measured it. They liken the losses that the students–and teachers, staff, and families–experienced to grief. The technical term for it is ‘institutional mourning.’ Children and staff talked about losing their school ‘families,’ spoke of the forced separations like a divorce, or a death. Generations-long relationships with schools ended abruptly after a pained, humiliating school year of battling to keep them open–schools that served as neighborhood anchors, social roots, homes of beloved teachers. Most of the 50 shuttered schools have stood empty and fallow since the closings, untended eyesores perpetually in the view of the kids who lived nearby, monuments to loss.
“Thousands of children who experienced this loss, all at once. And it’s long term–it did not go away in a week or a month or a year.
“Does it matter to anyone? Does it matter to the mayor? Would he say: but what is that to me?
“What is it to him? The wholesale destruction of 50 communities in predominantly poor and minority neighborhoods, for no measurable benefit, leaving the measurable sadness of thousands of children in its wake?
“We can only hope it’s the beginning of the end of mayoral control of CPS.”
I don’t mean to be a jerk, and I do get the point, but, no, grief and sorrow are not measurable. As Sr. Swacker likes to remind us, in order to be measurable, there has to be a standard unit of measurement. I don’t even want to think what a standard unit of sorrow would be. And thank the stars it isn’t measurable or the next thing you know, we’re going to have standardized tests for it. Please spare us.
“We can only hope it’s the beginning of the end of mayoral control of CPS.”
I’d like to think so too, but remember that Rahmbo was re-elected in 2015 after he closed 50 schools in 2013. I have no hope that those school closings will have any effect in 2019. Many of the worst affected families have been forced out of Chicago anyway.
According to this source, “Yes, You Can Measure Grief and Here’s How”:
“You can’t measure grief,” is something I have heard numerous times as I’ve written about evidence-based bereavement coping strategies. I don’t think these statements mean anything like, “I have studied this matter at length and tried many different methods and I have concluded you can’t measure grief.”
I think it’s more along the lines of, “I haven’t really thought about it before but I can’t see any obvious way to measure grief. It’s not like, say, height and weight. And, besides, I’m a little offended that anyone would presume to say that my deeply personal feelings about loss can be measured like my shoe size.” …
Since it’s not a physical quality, grief is not assessed the way you’d measure, say, your height, with a yardstick. Instead, researchers use written assessments that are filled out by the person being assessed. A question on one of these may ask how often the person feels sadness or yearning. Answers might be “always” and “often” and “rarely” and “never.”
Each answer generally gets a numerical score, typically between 0 and 4. On most assessments, the values of all the answers are added to arrive at a sum that is your score. The score is used to assess the level of grief.
https://grievewellblog.wordpress.com/2017/09/16/yes-you-can-measure-grief-and-heres-how/
Everything above that link consists of copied pull quotes from that post.
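The summed-score procedure those pull quotes describe (each Likert-style answer mapped to a small integer, then totaled) can be sketched in a few lines. The response options and point values below are hypothetical illustrations, not those of any actual published grief instrument.

```python
# Sketch of how a self-report questionnaire of the kind described is scored:
# each answer maps to a small integer, and the item values are summed.
# The response options and point values here are hypothetical examples,
# not taken from any actual grief inventory.

RESPONSE_POINTS = {"never": 0, "rarely": 1, "often": 2, "always": 3}

def score_questionnaire(answers):
    """Sum the numeric values of a respondent's answers."""
    return sum(RESPONSE_POINTS[a] for a in answers)

total = score_questionnaire(["often", "always", "rarely", "never", "often"])
print(total)  # 8
```

Note that this is exactly the kind of procedure the later comments dispute: the sum is a number, but nothing in the arithmetic makes the underlying answers comparable across respondents.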
Then there is this ncbi.nlm.nih.gov post:
“Complicated grief, which is often under-recognized and under-treated, can lead to substantial impairment in functioning. The Brief Grief Questionnaire (BGQ) is a 5-item self-report or interview instrument for screening complicated grief. Although investigations with help-seeking samples suggest that the BGQ is valid and reliable, it has not been validated in a broader population. …
“Conclusions
“The results of this study support the reliability and validity of the BGQ in the Japanese population. Future studies should examine predictive validity by using structured interviews or more detailed scales for complicated grief.”
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3279351/
Grief can probably be assessed – roughly. It cannot be measured. Again, measurement requires a standard unit of measurement. What would a standard unit of measurement for grief be? One tear? A minute of crying?
Nothing abstract, and most especially nothing internal, can ever be measured. At best we can assess through guesstimation based on observable and reported factors. Anything that can be measured can be standardized. Abstract, internal human qualities can neither be measured nor standardized.
Your opinion is noted and I disagree with it. That, of course, doesn’t mean I’m attempting to censor what you think. It is just my individual opinion that what you think about not being able to measure grief is wrong.
If the VA can rate and measure the PTSD among combat vets, grief can be measured.
I’ve been through the process where the VA measures and rates PTSD, and it is a long process from interviews and sessions with counselors and psychologists to questionnaires.
Measuring Grief and PTSD is pretty much a subjective measurement (possibly guided and based on rubrics) so that means someone is always going to disagree with it. In fact, when I was teaching, I was trained to subjectively measure essays using rubrics. We worked in teams and if the members of a team disagreed on a subjective measurement of an essay, a team leader was called over to be the tiebreaker. That was rare because the teams almost always agreed.
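The team-scoring process described above (two scorers rate an essay against a rubric; agreement stands, disagreement goes to a team leader as tiebreaker) can be sketched simply. The function name and the integer score scale are illustrative, not from any actual scoring program.

```python
# Sketch of the rubric tiebreak process described: two team members score
# an essay; if they agree, that score stands, and if not, a team leader's
# score is the tiebreaker. Names and the score scale are illustrative.

def resolve_essay_score(scorer_a: int, scorer_b: int, team_leader: int) -> int:
    """Return the agreed score, or fall back to the team leader's score."""
    if scorer_a == scorer_b:
        return scorer_a
    return team_leader

print(resolve_essay_score(3, 3, 4))  # scorers agree: 3
print(resolve_essay_score(2, 4, 3))  # disagreement: team leader decides, 3
```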
Except that mine is not an opinion and it’s not something that can be disagreed with. It’s inherent in the definition of “measure”: “to ascertain the size, amount, or degree of (something) by using an instrument or device marked in standard units or by comparing it with an object of known size.”
Again, nothing abstract, internal or human can be measured, by definition. Assessed, yes. Measured, no.
“If the VA can rate and measure the PTSD among combat vets, grief can be measured.”
Except that they can’t. PTSD can be evaluated and diagnosed, but not measured. There is no standard unit of trauma.
I hate to sound pedantic, but it’s an important point. Exactly what the rephormers/neoliberals are trying to do is quantify everything human so it can be digitized and commodified. I will continue to resist the standardization and stack-ranking of anything human because humanity cannot be measured. That includes “intelligence”, learning, “grit”, “executive functioning”, grief, sorrow, “mindset” or any other human quality. We can certainly talk about how (and whether) those things can be assessed, but none of them can be measured.
VA Claim Exams: Psychological
The VA can claim anything they want. That doesn’t make it true. “Measure” has a specific meaning, and it’s different than “assess” or “evaluate”.
dienne77 can also claim anything she wants. That doesn’t make it true just because she says or thinks it.
Lloyd can also claim anything he wants but that doesn’t make it true just because he said it.
Each individual must make up their own mind on every issue, but I think we should ignore anyone who spouts off that they are the only one who is right.
Those aren’t measures, Lloyd. They are assessments, evaluations, judgments, etc.–not measuring in the metrological sense, but those who make and use these types of assessments would love to have everybody believe that they are objective, reliable, valid, you know, “scientific.” But putting numbers on a scale and having the patient tell you where he/she is is a notoriously bad methodology. As one who goes through a 1-10 pain assessment all the time, I can tell you that my responses are all over the place, and I’m trying to be as consistent as I can. What’s a 5 vs. a 7 pain level? No one knows. And it varies from one patient to the next.
What’s a 3 vs. a 4 on a grief level? How can one–well, one can’t–have any consistency (reliability) in responses if no one is checking the supposed level other than the person responding?
Now don’t get me wrong, I understand the need for psychological assays for all kinds of emotional/feeling states and psychological issues. And that’s all we have, but those assays are always used in conjunction with a whole lot of other information before the medical diagnosis is made. It’s a holistic approach, as much as possible, or at least it should be. And those assays are nowhere near replicable in the true scientific sense of the word “measure.”
So, Lloyd, in order to measure something in the manner in which the proponents use “measure”, measure in a metrological meaning, what is the standard unit of measurement of grief? There is none (and having a patient pick a number on a number line is not measuring), therefore there is no measuring. Call it what it is, assessing, evaluating, judging, inspecting and/or any other synonyms but do not call it measuring.
The misuse of the term measure (and standard) in educational discourse lies at the heart of almost all educational malpractices.
“But putting numbers on a scale and having the patient tell you where he/she is is a notoriously bad methodology”
The VA does not put numbers on a scale and have combat vets tell them where he/she is on that scale.
No one asked me or any of the combat vets I know living with PTSD where we fit on any 1 to 10 scale of any kind.
“What’s the different foundation between subjective and objective measurement?
“Objective measurement is based on how well people perform a task, irrespective of what they experience while performing the task. Subjective measurement on the other hand refers to measures that have to do with what people say they actually experience.”
https://www.researchgate.net/post/Whats_the_different_foundation_between_subjective_and_objective_measurement
That’s an interesting discussion that has nothing to do with the difference between metrological measuring and assessing/evaluating/judging.
You stated: “I’ve been through the process where the VA measures and rates PTSD, and it is a long process from interviews and sessions with counselors and psychologists to questionnaires.”
To make your statement accurate one only has to substitute the right word with the intended meaning. The VA is assessing and/or evaluating but they are not “measuring”. Again, using the word measure in this context is a false usage. It’s really that plain and simple, Lloyd.
I’d rather be assessed and/or evaluated — not measured.
Exactly!
Lloyd, I’ve already stated why it’s important that we use the word “measure” in its correct definitional sense. Why don’t you explain why it’s so important to you to redefine the word, when we have so many other words (assess, judge, evaluate, etc.) that mean exactly what you’re saying?
Measurement implies numbers that can be standardized and stack-ranked. Is that really what you’re arguing for when it comes to things like grief and PTSD?
We are not on the same frequency.
I do not want to be measured and stack ranked.
I prefer being assessed and evaluated. I think being assessed and evaluated is superior by far to being measured.
The argument is a semantic one, and probably an important one nowadays and in these circles, with the misplaced emphasis on standardized testing. With a well-developed assessment tool, each item has been compared to a standard. A certain identified population usually responds in a predictable manner to the item. There is a strong correlation between the response to a test item and a standard comparison, i.e., 85% of people identified as suffering from PTSD will answer this question in this manner. It helps in determining whether someone is suffering from PTSD, but it does not measure a quantifiable amount of PTSD.

On the face of it, that is rather obvious, but that distinction has been lost when it comes to instruments used to assess student/school success. Students and schools are being judged to be failures based on instruments that were never designed to consider all the factors we know go into assessing academic success, nor can they. How fair is it to judge a student living below the poverty level using the same instrument as a student raised in an upper-middle-class environment? What is the standard measure that can be fairly applied to both students a majority of the time?

I am with you, Lloyd, that we can make fairly accurate assessments that are useful in a wide range of applications. We just can’t measure them like ounces in a cup of water. Precision means something different in these two arenas. Eight ounces of tea are always a cup, but two people who score exactly the same on an instrument designed to help diagnose PTSD may still look very different and need different interventions. The dictionary is much more forgiving in its definition of measure. With the high-stakes decisions being made on the basis of standardized tests, it is important that we remember what those tests can actually tell us. It is much safer to use the synonyms “evaluate, analyze, or assess” even if it is just a reminder of how testing data can be misused.
“It is much safer to use the synonyms “evaluate, analyze, or assess” even if it is just a reminder of how testing data can be misused.”
Yes
“I prefer being assessed and evaluated. I think being assessed and evaluated is suprior by far from being measured.”
Then why on earth have you spent this entire thread arguing in favor of the word “measured”???
I wasn’t arguing for measurement. I was pointing out that assessments and evaluations are a preferable way to judge people. I despise stack ranking. Like I said, we were on different frequencies.
dienne77 and Lloyd Lofthouse: I much appreciate your back-and-forth.
IMHO, I don’t think either of you are off the mark.
As I see it, it depends on how you approach the matter and what you are trying to highlight.
To keep this brief, I might have used a term like “immense enduring grief” and the clear message that those shuttered schools send to the students, parents and others in that area, namely, that they—
COUNT FOR NOTHING. HAVE NO VALUE. ARE INFERIOR.
And that constant reminder, if you think about it, rubs salt in the wound every day that folks in the affected communities have to look at those reminders of what the corporate education reform crowd—in all its political/religious/philosophical/gendered/racial/ethnic iterations—think of them.
My hat is off to both of you.
Many thanks from your local neighborhood KrazyTA.
😎
Only the buildings have had some value (not the students, parents, community, or educators: “COUNT FOR NOTHING. HAVE NO VALUE. ARE INFERIOR.”). One closed building was being turned into luxury apartments–this in an area where the homeless are being run out from under the viaducts of Lake Shore Drive, because the city is putting in (a mayoral favorite!) more bike paths. Not to mention that about a year ago, the alderman told the Salvation Army NOT to give food to those people! (This created a huge protest outside his office, and lots of media backlash.)
To add insult to injury, the “luxury” apartment co. put up a sign by the now shuttered school, “Best in class!” (or something close to that).
Needless to say, area residents & parents were outraged, & the sign was removed.
retiredbutmissthekids: thank you for the update.
Just when you think rheephormsters can’t go any lower…
😎
“Many…have been forced out of Chicago” And there is the rub: the gentrification game which Rahm and his connections in Wash DC (sorry, but Pres. Obama was very much in on this) were promoting and executing played out just as they expected. You don’t have to do ANYTHING about the poor or culturally disconnected if they are no longer living in your district.
Poor children in Chicago were sacrificed so that the mayor could work with developers to gentrify neighborhoods in the city near the CBD. The vacant buildings were supposed to create “value” for the community, but most remain vacant. Gentrification efforts continue unfazed by continued protests from minority residents. http://www.chicagoreporter.com/behind-sale-of-closed-schools-a-legacy-of-segregation/
These 50 schools should be a monument, a constant reminder, of the dictatorial, democracy-hating tyranny that closed them.
But schools are just service providers. You redeem your voucher at the service provider of your choice, like a government subsidized cell phone plan.
Arne Duncan told us again and again that people “don’t care” if they keep a public school.
That’s the elite consensus- ordinary people hate public schools as much as elites do.
It isn’t true but that doesn’t matter. They’ll say it over and over until it becomes true.
Now that you mentioned vouchers, this new initiative from DeVos’ DoEd appears to be a foot in the door for voucher-type funding, weighting funds and consolidating Federal Title monies into one block grant. They call it “student-centered funding,” and it exempts LEAs that take the money from required tracking, attendance, and needs assessments. It appears that the DoEd is OK with school districts losing track of kids in this program.
Dear Colleagues,
Earlier this year, the U.S. Department of Education (Department) announced a new pilot to afford local educational agencies (LEAs) flexibility to create equitable, student-centered funding systems. The purpose of this pilot is to provide an LEA with the flexibility to combine state, local, and eligible federal funds that it will allocate to schools using a formula that provides additional funding for students from low-income families, English learners, and other disadvantaged students. In exchange for meeting the requirements of the pilot and using a student-centered funding formula, the LEA receives freedom from many of the federal requirements for the funds included in the system (e.g., tracking time and attendance; creating schoolwide needs assessments and schoolwide plans for operating a schoolwide program under Title I, Part A; spending funds on particular allowable uses).
The Department first accepted applications in March and will accept a second round of applications by July 15, 2018. Please note that the Department recently updated the application, which is available on our website.
To support LEAs interested in applying this summer, the Department is hosting a series of two webinars, each of which will be repeated.
The first webinars focus on describing how student-centered funding systems function and how to complete the application. These will take place on Wednesday, May 30 from 2 to 3:30 PM Eastern Time and Thursday, May 31 from 12:30 to 2 PM ET.
The second webinars will go into more detail about the requirements of the pilot, provide plenty of time for Q&A, and address lessons learned during the spring 2018 submission cycle. These will take place on Wednesday, June 20 from 2 to 3:30 PM ET and Thursday, June 21 from 12 to 1:30 PM ET.
The intended audience is LEA staff, though other interested parties are also welcome. To join a webinar, please select the link for the relevant session. The webinars will be recorded, and the recordings as well as slides will be posted with related resources on our website.
Please share this message as broadly as possible, especially to your LEA contacts.
An LEA applying by July 15, 2018, will be proposing a system that would be implemented in the 2019-2020 school year, which provides time for transition to implementing an approved plan. We are eager to receive applications from interested LEAs who share our enthusiasm about the program, the flexibility it gives local leaders, and its potential impact on equity and transparency in resource allocation. If you have questions about the webinars or the application, please contact WeightedFundingPilot@ed.gov.
Best,
The Communications and Outreach Team
oops, here’s the link
https://www2.ed.gov/policy/elsec/leg/essa/scfp/studentcentered.html?utm_content=&utm_medium=email&utm_name=&utm_source=govdelivery&utm_term=
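The “student-centered funding formula” the Department’s letter describes (a base amount per pupil, plus extra weight for students from low-income families and English learners) can be sketched roughly as follows. All dollar figures, category names, and weights here are invented for illustration; the pilot does not prescribe specific values.

```python
# Rough sketch of a weighted ("student-centered") funding formula of the
# kind the pilot letter describes: a base per-pupil amount plus extra
# weight for students in named categories. Every number and category
# name below is hypothetical, not taken from the pilot itself.

BASE_PER_PUPIL = 8000          # hypothetical base allocation in dollars
WEIGHTS = {
    "low_income": 0.25,        # +25% for students from low-income families
    "english_learner": 0.20,   # +20% for English learners
}

def school_allocation(enrollment, category_counts):
    """Base funding for every enrolled student, plus weighted supplements."""
    total = enrollment * BASE_PER_PUPIL
    for category, count in category_counts.items():
        total += count * BASE_PER_PUPIL * WEIGHTS[category]
    return total

# A hypothetical school of 500 students, 300 low-income, 100 English learners:
print(school_allocation(500, {"low_income": 300, "english_learner": 100}))
```

Whatever one thinks of the policy, the mechanics are just this: the “flexibility” lies in which categories are weighted and how heavily, which is precisely where local boards lose their say.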
I looked this section of ESSA over. The section of the law is clear as mud, but the proposed “flexibility” in the distribution of funds is clearly designed to diminish the voice of local school boards and district administrators in allocating funds….while marketing this option as if it is just what is needed to benefit the students with the greatest need. The scheme also seems to follow one proposed for per-student funding by the Center for Reinventing Public Education.
Here are the research questions from the study:
“Our study addresses two primary research questions:
Research Question 1: How did staff and students affected by school closings experience the school closings process and subsequent transfer into designated welcoming schools?
Research Question 2: What effect did closing schools have on closed and welcoming school students’ mobility, attendance, suspensions, test scores, and core GPAs?”
Hmmm. And the author’s conclusions:
In particular, we offer the following points for consideration:
–Schools slated for closure need support the year of the announcement.
–Inadequate preparation of the learning environment can aggravate feelings of loss.
–There is a need for active relationship building that acknowledges both loss and opportunity.
–Closing schools—even poorly performing ones—does not improve the outcomes of displaced students, on average.
–Closing schools can also have some short-term negative impacts, on average, for the students in receiving schools.
From the article:
“None of the results of the research surprise me, but one outcome does stand out. The sorrow. The sorrow of children whose schools were closed. It’s measurable. The researchers measured it.”
From a quick perusal of the research paper itself, I see nothing about “measuring grief.” Unless I missed it, I’d say Vassilatos is making up that conclusion.
The report refers to “institutional mourning” as a concept on page 36. The commentary/interpretation begins in Chapter 5, p. 57. Student and teacher surveys and structured interviews seem to have been the basis for judgments about the feelings of participants in the school closures.
Thanks, Laura, will check it out. I missed it going through quickly.
Looked it over. I stand by my analysis that there was no “measuring” of grief. There are anecdotes, such as answers to questionnaires by students, teachers, and parents about their “feelings of loss”–which one would certainly expect, especially if the closed school had been around for generations. No doubt the study analyzed, evaluated, and/or assessed and commented on those feelings of loss overall, though that is a minor aspect of the study. Again, I see no direct indication of a “measuring” of grief.
Reblogged this on Laura Lee and commented:
Educators could have predicted this! Why don’t we listen to those who in the know?
One answer: ALEC. Master plan since 1971 (probably before).
Close down public schools. Take away school libraries. Test kids to death, & feed them test preps instead of educating them. Send them to for-profit universities that are unaccredited. Dump tons of unpayable debt onto their shoulders, make sure there aren’t jobs in their fields of study (there will be robots) & turn them into minimum-wage-never-question-authority-Walmart-workers-who-need-to-get-food-stamps-to-survive.
Although, too, the food stamp program will probably disappear under this administration.
The oligarchy doesn’t listen to the people who eat cake; that’s us–educators & all the other peasants.
To what extent are academics bound by their professions to present everything as a metric? The presentation of a metric for sorrow, grief, or anything else supports the premise that modern academics will not countenance anything that is not connected to a number. It was not always so. Marc Bloch’s monumental work, Feudal Society, managed to revolutionize history without many statistics. Cliometrics has furthered history as it relates to the common man, but historians are justified in their reticence at broad, sweeping generalization.
A sociologist at my university once argued in my presence that he could tell the difference in a student’s performance in his class to the hundredth of a percentage point. I responded that he could not tell a B from a C.