Archives for category: Data

Blogger Jersey Jazzman is an experienced teacher and graduate student at Rutgers, where he has learned how reformers play games with data. He is better than they are and can be counted on to expose their tricks.

In this post, he blows away the myth of the “success” of Boston charter schools.

The public schools and the charter schools in Boston do not enroll the same kinds of students, due to high attrition rates in the charters (called Commonwealth charter schools).

He writes:

“As I pointed out before, the Commonwealth charter schools are a tiny fraction of the total Boston high school population. What happens if the cap is lifted and they instead enroll 25 percent of Boston’s students? What about 50 percent?

“Let’s suppose we ignore the evidence above and concede a large part of the cohort shrinkage in charters is due to retention. Will the city be able to afford to have retention rates that high for so many students? In other words: what happens to the schools budget if even more students take five or six or more years to get through high school?

“In a way, it doesn’t really matter if the high schools get their modest performance increases through attrition or retention: neither is an especially innovative way to boost student achievement, and neither requires charter school expansion. If Boston wants to invest in drawing out the high school careers of its students, why not do that within the framework of the existing schools? Especially since we know redundant school systems can have adverse effects on public school finances?”
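To see why the budget question bites, here is a back-of-the-envelope calculation. Every figure in it is invented for illustration, not drawn from Boston's actual budget:

```python
# Back-of-the-envelope cost of widespread grade retention.
# All figures are hypothetical, for illustration only.
per_pupil_cost = 16_000   # assumed annual per-pupil spending, in dollars
cohort_size = 4_000       # assumed students entering grade 9 each year
retention_rate = 0.20     # assumed share of students held back at least once
avg_extra_years = 1.5     # assumed average extra years for retained students

extra_student_years = cohort_size * retention_rate * avg_extra_years
extra_cost = extra_student_years * per_pupil_cost
print(f"Extra student-years per cohort: {extra_student_years:,.0f}")
print(f"Added cost per cohort: ${extra_cost:,.0f}")  # $19,200,000 on these assumptions
```

Even with modest assumptions, stretching out high school careers at scale runs into eight figures per cohort, which is exactly Jersey Jazzman's point about affordability.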

Conclusion: Jersey Jazzman opposes Amendment 2, which would lead to an unsustainable growth in charter schools, free to push out the students they don’t want.

Cathy O’Neil has written a new book called “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.” I haven’t read it yet, but I will.

In this article, she explains that VAM is a failure and a fraud. The VAM fanatics in the federal Department of Education and state officials could not admit they were wrong, could not admit that Bill Gates had suckered the nation’s education leaders into buying his goofy data-based evaluation mania, and could not abandon the stupidity they inflicted on the nation’s teachers and schools. So they say now that VAM will be one of many measures. But why include an invalid measure at all?

As she travels on her book tour, people ask questions, and the most common pushback is that VAM is only one of multiple measures.

She writes:

“Here’s an example of an argument I’ve seen consistently when it comes to the defense of the teacher value-added model (VAM) scores, and sometimes the recidivism risk scores as well. Namely, that the teacher’s VAM scores were “one of many considerations” taken to establish an overall teacher’s score. The use of something that is unfair is less unfair, in other words, if you also use other things which balance it out and are fair.

“If you don’t know what a VAM is, or what my critique about it is, take a look at this post, or read my book. The very short version is that it’s little better than a random number generator.

“The obvious irony of the “one of many” argument is, besides the mathematical one I will make below, that the VAM was supposed to actually have a real effect on teachers’ assessments, and that effect was meant to be valuable and objective. So any argument about it which basically implies that it’s okay to use it because it has very little power seems odd and self-defeating.

“Sometimes it’s true that a single inconsistent or badly conceived ingredient in an overall score is diluted by the other stronger and fairer assessment constituents. But I’d argue that this is not the case for how teachers’ VAM scores work in their overall teacher evaluations.

“Here’s what I learned by researching and talking to people who build teacher scores: most of the other things they use – primarily scores derived from categorical evaluations by principals, teachers, and outside observers – have very little variance. Almost all teachers are considered “acceptable” or “excellent” by those measurements, so they all turn into the same number or numbers when scored. That’s not a lot to work with, if the bottom 60% of teachers have essentially the same score, and you’re trying to locate the worst 2% of teachers.

“The VAM was brought in precisely to introduce variance to the overall mix. You introduce numeric VAM scores so that there’s more “spread” between teachers, so you can rank them and you’ll be sure to get teachers at the bottom.

“But if those VAM scores are actually meaningless, or at least extremely noisy, then what you have is “spread” without accuracy. And it doesn’t help to mix in the other scores.”
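O’Neil’s variance argument is easy to verify with a toy simulation. Everything below is invented for illustration (the weights, the score distributions, the cutoffs); the point is only that when the non-VAM components barely vary, the bottom of the ranking is determined almost entirely by the noisy component:

```python
import random

random.seed(0)

N = 1000           # hypothetical number of teachers
VAM_WEIGHT = 0.35  # hypothetical weight given to the VAM component

# Low-variance component: observational ratings cluster at the top,
# so most teachers get nearly identical scores.
obs = [random.choices([3, 4], weights=[0.4, 0.6])[0] / 4 for _ in range(N)]

# "VAM" component modeled as pure noise, standing in for a score that is
# "little better than a random number generator."
vam = [random.random() for _ in range(N)]

composite = [VAM_WEIGHT * v + (1 - VAM_WEIGHT) * o for v, o in zip(vam, obs)]

# Rank by composite score and look at the bottom 2%: how many landed
# there mainly because of a low random "VAM" draw?
bottom = sorted(range(N), key=lambda i: composite[i])[: N // 50]
noise_driven = sum(1 for i in bottom if vam[i] < 0.2)
print(f"{noise_driven} of the {len(bottom)} 'worst' teachers got there on a low noise draw")
```

On these assumptions, every teacher flagged in the bottom 2% is there because of the random component, which is the “spread without accuracy” problem in miniature.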

This is a book I want to read. Bill Gates should read it too. Send it to him, and to John King as well. Would they read it? Not likely.

We have had quite a lot of back and forth on this blog about Boston charter schools, in anticipation of the vote this November in Massachusetts about lifting the charter cap and adding another 12 charter schools every year forever. Pro-charter advocates argue that the Boston charters are not only outstanding in test scores but that their attrition rate is no different from that of the public schools, or possibly even lower.

Jersey Jazzman (aka Mark Weber) is a teacher and is studying for his doctorate at Rutgers, where he specializes in data analysis.

In this post, he demolishes the claim that Boston charters have a low attrition rate. As he shows, using state data,

In the last decade, Boston’s charter sector has had substantially greater cohort attrition than the Boston Public Schools. In fact, even though the data is noisy, you could make a pretty good case the difference in cohort attrition rates has grown over the last five years.

Is this proof that the independent charters are doing a bad job? I wouldn’t say so; I’m sure these schools are full of dedicated staff, working hard to serve their students. But there is little doubt that the public schools are doing a job that charters are not: they are educating the kids who don’t stay in the charters, or who arrive too late to feel like enrolling in them is a good choice.

This is a serious issue, and the voters of Massachusetts should be made aware of it before they cast their votes. We know that charter schools have had detrimental effects on the finances of their host school systems in other states. Massachusetts’ charter law has one of the more generous reimbursement policies for host schools, but these laws do little more than delay the inevitable: charter expansion, by definition, is inefficient because administrative functions are replicated. And that means less money in the classroom.

Is it really worth expanding charters and risking further injury to BPS when the charter sector appears, at least at the high school level, to rely so heavily on cohort attrition?
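For readers who want to check this kind of claim themselves, cohort attrition is simple arithmetic on published enrollment tables: compare a grade 9 class to the same cohort’s grade 12 enrollment three years later. A minimal sketch with invented enrollment counts (the real analysis uses Massachusetts state enrollment data):

```python
# Hypothetical enrollment counts keyed by (year, grade).
# Real figures would come from state enrollment reports.
enrollment = {
    "Charter A": {(2012, 9): 120, (2015, 12): 78},
    "BPS High":  {(2012, 9): 400, (2015, 12): 352},
}

def cohort_attrition(counts, start_year, start_grade=9, end_grade=12):
    """Percent of a grade-9 cohort missing from grade 12 three years later."""
    start = counts[(start_year, start_grade)]
    end = counts[(start_year + end_grade - start_grade, end_grade)]
    return 100 * (start - end) / start

for school, counts in enrollment.items():
    print(f"{school}: {cohort_attrition(counts, 2012):.1f}% cohort attrition")
```

Note the measure’s limitation, which Jersey Jazzman acknowledges: by itself it cannot distinguish students who leave from students who are retained in grade, which is why his analysis addresses both explanations.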

Launa Hall, a third grade teacher in northern Virginia, is writing a book of essays about education. This one appeared in the Washington Post.

She writes:

My third-graders tumbled into the classroom, and one child I’d especially been watching for — I need to protect her privacy, so I’ll call her Janie — immediately noticed the two poster-size charts I’d hung low on the wall. Still wearing her jacket, she let her backpack drop to the floor and raised one finger to touch her name on the math achievement chart. Slowly, she traced the row of dots representing her scores for each state standard on the latest practice test. Red, red, yellow, red, green, red, red. Janie is a child capable of much drama, but that morning she just lowered her gaze to the floor and shuffled to her chair.

In our test-mired public schools, those charts are known as data walls, and before I caved in and made some for my Northern Virginia classroom last spring, they’d been proliferating in schools across the country — an outgrowth of “data-driven instruction” and the scramble for test scores at all costs. Making data public, say advocates such as Boston Plan for Excellence, instills a “healthy competitive culture.” But that’s not what I saw in my classroom.

She put up the data walls with reluctance, and the more she saw of them, the more convinced she became that they served to humiliate children.

I regretted those data walls immediately. Even an adult faced with a row of red dots after her name for all her peers to see would have to dig deep into her hard-won sense of self to put into context what those red dots meant in her life and what she would do about them. An 8-year-old just feels shame….

It also turns out that posting students’ names on data walls without parental consent may violate privacy laws. At the time, neither I nor my colleagues at the school knew that, and judging from the pictures on Pinterest, we were hardly alone. The Education Department encourages teachers to swap out names for numbers or some other code. And sure, that would be more palatable and consistent with the letter, if not the intent, of the Family Educational Rights and Privacy Act. But it would be every bit as dispiriting. My third-graders would have figured out in 30 seconds who was who, coded or not.

The data walls made it harder for me to reach and teach my students, driving a wedge into relationships I’d worked hard to establish. I knew Janie to be an extremely bright child — with lots of stresses in her life. She and I had been working as a team in small group sessions and in extra practice after school. But the morning I hung the data walls, she became Child X with lots of red dots, and I became Teacher X with a chart.

Why does official policy these days aim to hurt children as a way of motivating them? What kind of motivation grows from shame?

Articles like this are sad, even sickening. This one is the story of a 29-year veteran in Brookline, Massachusetts, who teaches first grade. He is leaving.

It is outrageous to see beloved, dedicated teachers leave the classroom. Yet when you think of the steady barrage of hostile propaganda directed at them by the Gates Foundation, the Broad Foundation, the Walton Family Foundation, D.C. think tanks, and others, you can understand why they find it impossible to stay. I hope there is a new wave of articles about teachers who said: No matter what, I will not leave! I love my kids! I love my work! I will not let the reformers drive me away!

David Weinstein is throwing in the towel. He is in his early 50s. He shouldn’t be leaving so soon. He explains how teaching has changed, how much pressure is on the children, how much time is wasted collecting data that doesn’t help him as a teacher or his students.

He sums up:

I guess the big-picture problem is that all this stuff we’re talking about here is coming from on top, from above, be it the federal government, the commonwealth of Massachusetts, the school administration. But the voices of teachers are lost. I mean, nobody talks to teachers. Or, if they do talk to teachers, they’re not listening to teachers.

Since former Governor Bobby Jindal took control of the state education department in Louisiana, there have been numerous battles over access to public information.

The state superintendent appointed by Jindal, ex-TFA Broadie John White, has just made plain that public information is not public.

Mercedes Schneider reports that White has sued a citizen who made the mistake of seeking information from the state education department. Apparently John White doesn’t realize that he works for the public and is paid by the public.

As she says, this is a new low.

Katherine Stewart and Matthew Stewart, parents in the renowned Brookline school district in Massachusetts, are concerned about their school board’s ties to Bill Gates and other corporate reformers. Katherine Stewart is the author of “The Good News Club: The Christian Right’s Stealth Assault on America’s Children.”

They write:

“In the ongoing standoff between the Brookline Educators Union and the Brookline School Committee, the School Committee has framed the dispute as one of making do with limited resources and ensuring equity for all students. But in fact, fundamental choices about how we educate our children are also at stake. The teachers are asking for more time to spend with students and more control over their own teaching. The School Committee, on the other hand, appears intent on investing teacher time and town funds in a management system aimed at top-down control of educators through data collection and high-stakes, standardized testing. The differences are not about the value of equity but how best to achieve it….”

The Stewarts go on to detail the connections between at least three members of the board and corporate reform. They implicitly raise the question: Is the board working for the children of Brookline or for Bill Gates and other corporate reformers?



“The Chairman of the School Committee, Susan Wolf Ditkoff, is a partner at The Bridgespan Group, a management consulting firm specializing in the philanthropy sector. Another member, Beth Jackson Stram, is also an associate at the same firm. A third member, Lisa Jackson, operates a consulting company that lists Bridgespan as one of its founders. In 2010, Bridgespan played an instrumental role in bringing Common Core to Massachusetts. The firm was hired to assist the state in its application for Race-to-the-Top funds from the federal government. Bridgespan reportedly received a $500,000 fee for that project, half of which was paid by the Bill & Melinda Gates Foundation….


“According to its tax filings, the Gates Foundation disbursed more than $5.5 million to The Bridgespan Group between 2010 and 2014. To judge from flattering material posted on its website, Bridgespan is also closely involved with The Broad Foundation and the Laura and John Arnold Foundation, both of which promote similar education reform agendas. Tax filings from Bridgespan show that Susan Ditkoff’s total compensation in 2014 was just short of $300,000.”


Transparency would be a good start.

Peter Greene reliably reads all the studies, think tank reports, foundation proclamations, and other stuff that pours forth from the Think About Education Industry.


In this post, he is thinking about something else, something very important: his 18-month-old grandson.


This is a young man with a long list of studies, reports, and policy briefs. Well, diapers, not so much briefs.


As Peter writes:


He is, at 18 months, a Man of Adventure. He knows many exciting activities, such as Putting One Thing Inside of Another Thing, or Stomping Vigorously Upon the Ground. He knows the word “dog” and is involved in an extensive survey of just how many dogs there are in the world, which also involves working out which survey items are dogs, and which are not. In the photo above, you can gauge his mastery of Spoon Technique as applied to Ice Cream. This is part of his extensive study on What Can Be Safely and Enjoyably Eaten.


While outdoors he devotes his time to Running Studies, by which I don’t mean the management of studies, but the study of actual running. A popular game: Walking Up To The Top of the Hill, followed by the sequel, Running to the Bottom of the Hill (“Hill” here defined as “Stretch of mildly tilted ground”). This dovetails with another one of his spirited experiments on the question of When Is It a Good Time To Applaud and Cheer? (The complete answer has not yet been compiled, but it clearly includes “after you have made it to the top of the hill” and “after you have run down.”)


Peter knows that somewhere there are people with Very Important Titles trying to figure out ways to determine whether this child is improving. What test should be devised? How should he be measured? Will he ever amount to anything if he doesn’t have a battery of tests to rate him, rank him, and enable comparison to children of the same age in other states and nations?

I recently posted about a new partnership between the National PTA and the Data Quality Campaign. In response, our wonderful reader-researcher Laura Chapman dug deep into the money flow and produced this commentary:



Ah, data. You can be sure the PTA is uninformed about the data being collected with their tax dollars. Here are some not widely publicized facts.


Between 2005 and early 2011, the Gates Foundation invested $75 million in a major advocacy campaign for data gathering, aided by the National Governors Association, the Council of Chief State School Officers, Achieve, and The Education Trust—most of these groups recipients of Gates money. During the same period, the Gates Foundation also awarded grants totaling $390,493,545 for projects to gather data and build systems for reporting on teacher effectiveness. This multi-faceted campaign, called the Teacher Student Data Link (TSDL), envisioned the linked data serving eight purposes:
1. Determine which teachers help students become college-ready and successful,
2. Determine characteristics of effective educators,
3. Identify programs that prepare highly qualified and effective teachers,
4. Assess the value of non-traditional teacher preparation programs,
5. Evaluate professional development programs,
6. Determine variables that help or hinder student learning,
7. Plan effective assistance for teachers early in their career, and
8. Inform policy makers of best-value practices, including compensation.
Gates and his friends intended to document and rate the work of teachers and a bit more: They wanted data that required major restructuring of the work of teachers so everything about the new system of education would be based on data-gathering and surveillance.


The TSDL system (in use in many states) required that all courses be identified by standards for achievement and by alphanumeric codes for data entry. All responsibility for learning had to be assigned to one or more “teachers of record” in charge of a student or class. A teacher of record was assigned a unique identifier (think barcode) for an entire career in teaching. A record would be generated whenever a teacher of record had some specified proportion of responsibility for a student’s learning activities.


Learning activities had to be defined by performance measures (e.g., cut scores for proficiency) for each particular standard in every subject and grade level. The TSDL system was designed to enable period-by-period tracking of teachers and students every day, including “tests, quizzes, projects, homework, classroom participation, or other forms of day-to-day assessments and progress measures”—a level of surveillance that proponents claimed was comparable to business practices (TSDL, 2011, “Key Components”).
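To make the scale of this concrete, here is a rough sketch of what one such linkage record implies. The field names are invented for illustration; no state’s actual TSDL schema is being quoted:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LinkageRecord:
    """One hypothetical teacher-student data link entry.

    Illustrative only, not an actual state schema. The point is the
    granularity: a coded course and standard, a career-long teacher
    identifier (the "barcode"), a responsibility share, and a score,
    generated for day-to-day activities, period by period.
    """
    teacher_id: str              # unique identifier for an entire career
    student_id: str
    course_code: str             # alphanumeric code for data entry
    standard_code: str           # achievement standard the activity targets
    responsibility_share: float  # teacher-of-record share, e.g., 0.5
    assessment_type: str         # "test", "quiz", "project", "homework", ...
    score: float
    recorded_on: date

# One record per student, per teacher, per assessed activity, per day.
record = LinkageRecord(
    teacher_id="T-000123", student_id="S-045678",
    course_code="MTH-08-ALG", standard_code="ALG-1.2",
    responsibility_share=1.0, assessment_type="quiz",
    score=0.82, recorded_on=date(2011, 10, 3),
)
```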


The system was and is intended to keep current and longitudinal data on the performance of teachers and individual students, as well as schools, districts, states, and educators ranging from principals to higher-education faculty. Why? All of this data could be used to determine the “best value” investments to make in education, with monitoring and changes in policies to ensure improvements in outcomes. Data analyses would include as many demographic factors as possible, including health records for preschoolers.


The Gates-funded TSDL campaign added resources to a parallel federal initiative. Between 2006 and 2015, the US Department of Education (USDE) invested nearly $900 million in the Statewide Longitudinal Data Systems (SLDS) Grant Program. Almost every state has received multi-year grants to standardize data on education. Operated by the Institute of Education Sciences, the SLDS program is “designed to aid state education agencies in developing and implementing longitudinal data systems.”


What is the point of the SLDS program? “These systems are intended to enhance the ability of States to efficiently and accurately manage, analyze, and use education data, including individual student records…to help States, districts, schools, and teachers make data-driven decisions to improve student learning, as well as facilitate research to increase student achievement and close achievement gaps” (USDE, 2011, Overview).


The most recent data-mongering activity from USDE, rationalized as “helping keep students safe and improving their learning environments,” is a suite of online School Climate Surveys (EDSCLS). The surveys will allow states, districts, and schools to “collect and act on reliable, nationally-validated school climate data in real-time” (as soon as it is entered).


The School Climate Surveys are for students in grades 5-12, for instructional and non-instructional staff in their schools, and for parents/guardians. Data is stored on local data systems, not by USDE. Even so, the aim is to have national “benchmarks” online by 2017 for local and state comparisons with national scores.


Student surveys (73 questions) offer scores for the entire school, disaggregated by gender, grade level, ethnicity (Hispanic/Latino or not), and race (five categories, combinations allowed).
The Instructional Staff Survey has 82 questions. Responses can be disaggregated by gender, grade-level assignment, ethnicity, race, teaching assignment (special education or not), and years working at the school (1-3, 4-9, 10-19, 20 or more).
The Non-instructional Staff Survey has 103 questions, but 21 are only for the principal. Demographic information for disaggregated scores is the same as for instructional staff.
The Parent Survey has 43 questions, scored for item-by-item analysis, without any sub-scores or summary scores. Demographic information is requested for gender, ethnicity, and race.


These four surveys address three domains of school climate: Engagement, Safety, and Environment, and thirteen topics (constructs).
Engagement topics are: 1. Cultural and linguistic competence, 2. Relationships, and 3. School participation.
Safety topics are: 4. Emotional safety, 5. Physical safety, 6. Bullying/cyberbullying, 7. Substance abuse, and 8. Emergency readiness/management (item-by-item analysis, no summary score).
Environment topics are: 9. Physical environment, 10. Instructional environment, 11. Physical health (information for staff, but no scores for students), 12. Mental health, and 13. Discipline.


Almost all questions call for marking answers “yes” or “no,” or with the scale “strongly agree,” “agree,” “disagree,” “strongly disagree.” Some questions about drug, alcohol, and tobacco abuse ask for one of these responses: “Not a problem,” “Small problem,” “Somewhat a problem,” “Large problem.” None of the questions can be answered “Do not know.”



I have looked at the survey questions, developed by the American Institutes for Research (AIR), and concluded they are not ready for prime time. Here are a few of the problems.


This whole project looks like a rush job. The time for public comment on the project was extremely short, and USDE did not correct the flaws found in the piloted surveys, claiming that there was no budget for revisions.


The flaws are numerous. Many of the survey questions assume that respondents have an all-encompassing and informed basis for offering judgments about school practices and policies. Some questions are so poorly crafted that they incorporate several well-known problems in survey design: bundling more than one important idea, referring to abstract concepts, and assuming responders have sufficient knowledge. Here is an example from the student survey with all three problems: “Question 8. This school provides instructional materials (e.g., textbooks, handouts) that reflect my cultural background, ethnicity, and identity.”



Many questions have no frame of reference for a personal judgment:



From the student survey:


“17. Students respect one another.”

“18. Students like one another.”

Other questions call for inferences about the thinking of others.


“50. Students at this school think it is okay to get drunk.”


Some questions assert values, then ask for agreement or disagreement.


From the parent survey:


“7. This school communicates how important it is to respect students of all sexual orientations.”
Others assume omniscience:


“41. School rules are applied equally to all students.”

Some questions seem to hold staff responsible for circumstances beyond their immediate control:

“74. [Principal Only] The following are a problem in the neighborhood where this school is located: garbage, litter, or broken glass in the street or road, on the sidewalks, or in yards.” (Strongly agree, Agree, Disagree, Strongly disagree)



Overall, the surveys and the examples of data analysis they provide are unlikely to produce “actionable interventions” as intended. The questions are so poorly crafted that they are likely to generate misleading data, with many schools cast in a very bad light. See, for example, the page 26 data from this source.



The responsibility for privacy rests with the schools, districts, and states, but everything is online. A brief inspection of the background questions should raise major questions about privacy, especially for students who identify themselves with enough detail (gender, ethnicity, race, grade level; 20 data points minimum) to produce survey answers that match only one person or a very few individuals.
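The re-identification risk can be stated precisely: count how many respondents share each demographic combination. A toy check with an invented roster (the real surveys collect these fields and more):

```python
from collections import Counter
import random

random.seed(1)

# Invented roster for one small school. The student survey collects
# gender, grade level (5-12), ethnicity, and race, among other items.
roster = [
    (random.choice("FM"),
     random.randint(5, 12),
     random.choice(["Hispanic/Latino", "Not Hispanic/Latino"]),
     random.choice("ABCDE"))  # stand-in for five race categories
    for _ in range(60)
]

group_sizes = Counter(roster)
singletons = sum(1 for n in group_sizes.values() if n == 1)
print(f"{singletons} of {len(roster)} students have a unique demographic profile")
```

In a school this small, most respondents’ demographic answers match exactly one student, so a nominally anonymous survey response can often be tied back to an individual.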



My advice, not just to the PTA: stay away from these data monsters. They drown everyone in data points. Results from the School Climate Surveys are processed into colorful charts and graphs, but they are based on fuzzy and flawed “perceptions” and unwarranted assumptions. The surveys offer 63 data points for profiling the participants, but only four possible responses to each of 283 questions of dubious technical merit.



Perhaps most important for parents: some questions in the student surveys seem to breach the Family Educational Rights and Privacy Act (FERPA), especially questions pertaining to “illegal, anti-social, self-incriminating, or demeaning behavior.” More on FERPA at


It would be nice to think that FERPA really protects student privacy. But former Secretary Duncan loosened FERPA’s protections in 2011 to make it easier for outsiders to obtain student data. That was the premise behind the Gates Foundation’s inBloom project, which was set to collect personally identifiable data from several states and districts and store it in a cloud managed by Amazon. That project was brought down by parental objections, which caused the states and districts to back out.

Pasi Sahlberg and Jonathan Hasak wrote a post about the failure of Big Data to introduce effective reforms into education. Big Data are the kind of massive globs of information that cannot be analyzed by one or several people; they require a computer to seek the meaning in the numbers. Big Data are supposed to change everything, and indeed they have proved useful in many areas of modern life in understanding large patterns of activity: traffic patterns, disease outbreaks, criminal behavior, and so on. But those who try to understand children and teaching and learning through Big Data have failed to produce useful insights. They have produced correlations but not revealed causation.

In reading their article, I am reminded of the sense of frustration I felt when I was a member of the National Assessment Governing Board, which oversees the National Assessment of Educational Progress (NAEP). In the early years of my seven-year stint, I was excited by the data. About the fourth or fifth year, I began to be disillusioned when I realized that we got virtually the same results every time. Scores went up or down a point or two. The basic patterns were the same. We learned nothing about what was causing the patterns.


Sahlberg and Hasak argue on behalf of “small data,” the information about interactions and events that happen in the classroom, where learning does or does not take place:


We believe that it is becoming evident that big data alone won’t be able to fix education systems. Decision-makers need to gain a better understanding of what good teaching is and how it leads to better learning in schools. This is where information about details, relationships and narratives in schools become important. These are what Martin Lindstrom calls “small data”: small clues that uncover huge trends. In education, these small clues are often hidden in the invisible fabric of schools. Understanding this fabric must become a priority for improving education.


To be sure, there is not one right way to gather small data in education. Perhaps the most important next step is to realize the limitations of current big-data-driven policies and practices. Too strong a reliance on externally collected data may be misleading in policy-making. This is an example of what small data look like in practice:


*It reduces census-based national student assessments to the necessary minimum and transfers the saved resources to enhancing the quality of formative assessments in schools and teacher education in other alternative assessment methods. Evidence shows that formative and other school-based assessments are much more likely to improve the quality of education than conventional standardized tests.
*It strengthens the collective autonomy of schools by giving teachers more independence from bureaucracy and investing in teamwork in schools. This would enhance social capital, which has proved to be a critical aspect of building trust within education and enhancing student learning.
*It empowers students by involving them in assessing and reflecting on their own learning and then incorporating that information into collective human judgment about teaching and learning (supported by national big data). Because there are different ways students can be smart in schools, no one way of measuring student achievement will reveal success. Students’ voices about their own growth may be those tiny clues that can uncover important trends for improving learning.