Jersey Jazzman, aka Mark Weber, updated his commentary on Question 2 after noting a tweet by David Leonhardt about his original post. Leonhardt said that JJ’s analysis was “based on a falsehood” and that the study in question compared lottery winners to lottery losers. JJ replied that you can’t generalize from a self-selected sample to the entire population of Boston public school students.

David Leonhardt keeps claiming the study “compares lottery winners to lottery losers”.
But no one ever explains how this can possibly be done, because we have no idea how many of the “lottery winners” who don’t produce good results are drummed out of the school after a day, a week, a month, or a year. That means every year beyond the entry year is missing all those ‘undesirable’ lottery winners who have disappeared from the cohort.
Imagine scientists testing two drugs on two different groups of patients in a 4-year study, 100 patients in each group. How many respond to each drug? Year 1: in both groups, 50/100 patients respond. Year 2: in Group 1, 50/75 patients respond; in Group 2, 50/100. Year 3: in Group 1, 50/60; in Group 2, 50/100. Year 4: in Group 1, 50/50; in Group 2, 50/100.
Despite claims to the contrary, those charter studies do not account for attrition, and that’s the red flag. No drug study would ever wave away the finding that a miracle drug lost half of its test group over the years. If it’s a miracle drug, patients lucky enough to be taking it don’t say “no thanks, I prefer to be sick.” That’s why scientists who see a drug study claiming a 100% cure rate alongside a 50% or 75% loss of patients immediately get suspicious.
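The drug-study arithmetic above can be made concrete. A minimal Python sketch, using only the hypothetical numbers from the example (not data from any real study), showing how shedding non-responders inflates the rate observed among survivors:

```python
# Hypothetical 4-year drug study from the example above.
# Both groups start with 100 patients, 50 of whom respond to the drug.
# Group 1 sheds non-responders each year; Group 2 keeps everyone.
group1_sizes = [100, 75, 60, 50]     # attrition removes only non-responders
group2_sizes = [100, 100, 100, 100]  # no attrition
responders = 50                      # constant in both groups every year

for year, (n1, n2) in enumerate(zip(group1_sizes, group2_sizes), start=1):
    rate1 = responders / n1  # rate among survivors only
    rate2 = responders / n2
    print(f"Year {year}: Group 1 {rate1:.0%}, Group 2 {rate2:.0%}")

# Dividing by the ORIGINAL enrollment instead (an "intent-to-treat" view)
# keeps the two drugs looking identical -- 50% -- in every year:
itt1 = responders / group1_sizes[0]
itt2 = responders / group2_sizes[0]
```

By Year 4 the survivor-only rate for Group 1 reaches 100% even though the drug never helped more than 50 of the original 100 patients, which is exactly the commenter’s point about why heavy attrition plus outsized results is a red flag.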
Not so with education studies and reporters like Leonhardt who accept them without question. In the pro-charter school world, those miracle drugs SHOULD lose the most patients! Because those patients are low-income and obviously stupid, and why would we ever expect them to want to keep taking the medicine that cures them? It’s just their “choice” to drop out of the study, and we refuse to consider whether there is hanky-panky going on, because we are racists and snobs who assume all those patients are idiots. No questions asked.
To be fair: some of the studies, as I recall, do track students who leave a charter — but only if they attrit into a MA public school. Students who leave for a private school or out of state are not tracked.
I actually don’t think that’s such a big deal, though. The two big limitations of these studies, to my mind, are:
1) They can’t necessarily generalize to a larger population than the self-selected sample, and there’s a good chance choosing to enter the lottery correlates with parental involvement, which is going to raise test scores.
2) The treatment isn’t just “charteriness.” It may well be peers, extra spending adjusted for student population, young, highly-educated faculty, etc. Some of the lottery studies have tried to account for this; whether they succeed is an open question.
Can this be replicated, and is it worth the cost? The lottery studies do shed some light, but it’s not, IMO, enough to base a policy decision on.
MW
Jersey Jazzman,
I really appreciate you replying. I’ve been racking my brain trying to understand how it is possible to track the students who leave the charter — even just the ones who attrit to public schools. Have you figured out their methodology? I just don’t see how they can use one that passes the smell test.
If 20% of the lottery winners leave a charter school in the first month to return to a public school, are those students’ scores counted as charter school scores? It seems a bit off if a student spent a week in the charter and 8 months in the public school before being tested.
And as that cohort continues in the charter, another 20% of the students leave over the summer. The class is now 60% of its starting size. Are you certain that the test scores of the 40% of the students who left during the early weeks of Year 1 and then over the summer would be included with the test scores of the remaining students who began Year 2?
And then in Year 3, when it is possible that a “high performing” charter has less than half of its original cohort starting the year, are all the scores of the students who haven’t been in that school for 2 years still counted?
I do agree with your points except one — how can we know whether attrition is a big deal without understanding the methodology by which researchers claim they “track students who leave a charter”? If this were a study of the effects of “XX” on a group of children, reviewers would question results that included children who left the study weeks, months, or years ago. It would invalidate the study, as there would be no way to determine whether the results were due to being exposed to “XX” or to not being exposed to “XX” for the past few years. So how do these Massachusetts studies get around that?
For example, instead of testing the effects of an “education”, the study could test the effects of exposure to mercury. The study found that students regularly exposed to mercury have no ill effects academically after 3 years when compared to a control group of students who weren’t exposed to mercury. A huge percentage of the students exposed to mercury dropped out of the study (but we have no interest in how many). But the ones who remained in the study and were exposed to mercury showed no ill effects. And when people questioned the validity, the scientists conducting the study “reassured” us that they also included the unnamed number of students who had dropped out of the study and had not been exposed to mercury for the last year or two. When you include the scores of the students who dropped out of the study and weren’t exposed to mercury, along with the ones who stayed in the study, you have absolute proof that exposure to mercury is not harmful.
Except you don’t.
Isn’t that why scientific studies take hard looks at attrition and don’t presume the results of subjects long gone from the study should be included? Instead, isn’t the dropout rate of a study examined very carefully to make sure it isn’t being used to manipulate results? And if a study claims outsize results but also has an outsize attrition rate, that is always a red flag. And reviewers aren’t content with “we included the results of all the subjects who dropped out,” because the next question would be “Explain exactly how that was done, so we know that your ‘inclusion’ of those subjects who left the study was reasonable.”
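For what it’s worth, the standard device in the lottery literature is an “intent-to-treat” comparison: every lottery entrant stays in the group the lottery assigned them to, and outcomes come from statewide test records whether or not the student ever stayed in (or even enrolled in) the charter. A minimal sketch of that calculation, on entirely synthetic data — this is an illustration of the general technique, not the Massachusetts studies’ actual code or numbers:

```python
import random

random.seed(0)

# Synthetic lottery: 200 applicants, 100 randomly "win" a charter seat.
applicants = list(range(200))
winners = set(random.sample(applicants, 100))
losers = [s for s in applicants if s not in winners]

# Invented statewide test scores: every applicant has a score on record,
# including students who later left the charter or never enrolled.
scores = {s: random.gauss(50.0, 10.0) + (5.0 if s in winners else 0.0)
          for s in applicants}

# Intent-to-treat estimate: compare by lottery ASSIGNMENT, not enrollment.
# Attrition cannot remove anyone from either group.
win_mean = sum(scores[s] for s in winners) / len(winners)
lose_mean = sum(scores[s] for s in losers) / len(losers)
itt_effect = win_mean - lose_mean
```

The catch is the one already raised in this thread: the method only works if a score exists for every leaver, which is why students who exit to private schools or move out of state simply fall out of the data.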
Jersey Jazzman,
I’m sorry to have written such a lengthy reply above. My real question is this:
Why haven’t there been any real studies that look at the longitudinal attrition rates (over a period of years) of the entering classes of charter schools? Are high-performing charters really getting their outsized results with all the random students who win the lottery for their entry year? Are charters that are outliers in terms of their extraordinarily high results also outliers in terms of their high attrition rates? Because common sense tells us that the highest performing charters should also be the charters with the very lowest attrition rates. Attrition rates are hard to come by, but it does appear that parents don’t behave rationally when it comes to choosing to leave charters and a close look at longitudinal attrition rates would confirm that. (Attrition studies limited to 1 year are useless).
JJ’s assertion that those entering the lottery do not represent the general student population is a point well taken. Students in the lottery more likely come from better-functioning homes and perhaps more affluent families, and their parents are more likely the savvy ones who know how to advocate for their children. Comparing apples to oranges has been evident throughout many “studies” from reformers. Anyone who follows JJ’s blog knows that he has been very critical of the inaccurate and misleading research methods employed by “reform,” as well as the many bogus conclusions.
As I posted above, the reformers are now claiming that they agree with you and have designed ideal studies that only compare lottery winners to lottery losers.
Of course, they keep saying this but never actually explain their methodology. For example:
100 students “win” the lottery for a seat, but the charter makes it clear that it has very high expectations and discourages any winner who can’t meet those expectations from enrolling. So 20 of those students don’t enroll and are replaced with 20 students who get the warning about very high expectations again, and the ones willing to take that on do enroll.
How many of the original lottery winners accept the seat, and how far down a wait list does a charter go to fill its class? That tells you a lot about a charter. If a charter is getting outsized results and has a wait list of thousands, you’d expect most winners to accept the spot. For example: Harvard’s “yield” rate is always significantly higher than Brandeis University’s, because few students turn down a chance at a school perceived to be so excellent. Is that the case with charters, or is the yield rate for lottery winners not significantly higher for the top-performing charters? If not, why would parents turn them down? If Harvard’s yield suddenly dropped so that it was much lower than Brandeis’s, you can bet people would start asking why students are turning the school down.
How do these studies account for the attrition of lottery winners from the first day of school? If 100 students are randomly assigned to Brandeis and 100 to Harvard and four years later there are only 25 high performing students left at Brandeis, and 90 left at Harvard, Brandeis’ claims of having 100% success rate with the 25 students who graduate is nonsense.
The new studies also claim that they still keep the 75 students who left Brandeis and spent their 3 years studying somewhere else as part of the Brandeis cohort, but that is nonsensical.
In other words, the researchers assure us they account for attrition and include those students, but they never explain how it is possible to do this. Gullible reporters like David Leonhardt don’t question it because they are so certain that charters are better that they see no need to use their common sense and ask the PR folks inconvenient questions.
They can always cling to the Massachusetts DESE claim that “attrition” includes only those students who ‘leave’ school during summer months.
Love the word nonsensical; it, along with a word Diane used recently, vainglorious, describes the growing behind-the-scenes school reform profiteering exactly.
It’s amusing to me how ed reformers are pretending this Mass. charter expansion was a hard call for them, or based on “evidence”.
They support each and every expansion of charter schools in lockstep. In fact, if you read ed reform writing you will be hard-pressed to find a single criticism of any charter school, anywhere.
Is there a recorded instance where these people DIDN’T support an expansion of charter schools? They wholeheartedly supported expansions in the following states where charters DON’T do better than public schools:
OH, MI, PA, FL and CA
If they’re basing their Mass. decision on “data” it will be the first time they have done so.
The question to ask ed reformers is not “do you support charter schools in Boston?”
The question is “where DON’T you support expansion of charter schools?”
Also, how sad is it that the US Department of Education couldn’t find a single ballot measure to support other than this one, expanding charter schools?
One would think they would find ONE that benefits public schools somewhere, if only because that would make them APPEAR not to be biased against public sector schools.
Thousands of school funding initiatives all over the country- tens of millions of kids of all income levels, and the only thing the feds care about is expanding charter schools.
WBUR reported, on 11/1/16, that a DFER-connected group sent out a leaflet with Obama’s photo and the heading, “Help Secure Pres. Obama’s Education Legacy by Voting Yes on Question 2.” DFER claims it didn’t intend to imply that Obama endorsed Question 2. (SLIME) According to the White House, Obama has not endorsed a yes vote on 2.