Archives for category: Data

Sherm Koons left this comment. Check out Sherm’s blog, Tales from the Classroom. He is a veteran high school English teacher in Ohio.

 

Down the Rabbit Hole with PARCC.

It’s taken me a while to begin to wrap my head around what’s really going on with PARCC and what makes it so absolutely wrong, but standing in the hall after school today talking to some fellow teachers, I think I got a glimpse. As we discussed the inappropriateness of the exams for our students, it occurred to me that it all makes perfect sense if your goal is to generate the most data that you possibly can. If you believe that, given enough data, you can predict human behavior, environmental and societal factors, and all the infinite variables of existence to a degree that mimics reality, of course you would want the most data that you could get. And you become obsessed with data. And eventually you lose track of what you initially were hoping to measure. It becomes data for data’s sake. And soon it has absolutely nothing to do with education, students, or anything human. And as you disappear further and further down the rabbit hole, you can’t understand why nobody gets it but you. The reason we don’t “get it” is that IT MAKES NO SENSE. You have become lost in your never-ending quest for data. You are delusional. And you must be stopped.

Robert Shepherd, a frequent commenter on the blog, is a veteran of the education publishing world, where he has developed curriculum, textbooks, and assessments.

 

He writes:

 

The New York legislature just voted to dump inBloom. But Diane Ravitch’s first post on that subject noted, wisely, that inBloom was dead “for now.”

 

Don’t think for a moment that Big Data has been beaten. I am going to explain why. I hope that you will take the time and effort to follow what I am going to say below. It’s a little complicated, but it’s a great story. It’s a birth narrative–the astonishing but, I think, undeniably true story of the birth of the Common Core.

 

The emergence of the Internet presented a challenge to the business model of the big educational publishers. It presented the very real possibility that they might go the way of the Dodo and the Passenger Pigeon. Why? I can point you, right now, to about 80 complete, high-quality, FREE open-source textbooks on the Net–ones written by various professors–textbooks on geology, law, astronomy, physics, grammar, biology, every conceivable topic in mathematics.

 

Pixels are cheap. The emergence of the possibility of publishing via the Internet, combined with the wiring of all public schools for broadband access, removed an important barrier to entry to the educational publishing business–paper, printing, and binding costs. In the Internet Age, small publishers with alternative texts could easily flourish. Some of those–academic self publishers interested not in making money but in spreading knowledge of their subjects–would even do that work for free. Many have, already. There are a dozen great free intro statistics texts with support materials on the web today.

 

Think of what Wikipedia did to the Encyclopedia Britannica. That’s what open-source textbooks were poised to do to the K-12 educational materials monopolists. The process had already begun in college textbook publishing. The big publishers were starting to lose sales to free, open-source competitors. The number of open-source alternatives would grow exponentially, and the phenomenon would spread down through the grade levels. Soon. . . .

 

How were the purveyors of textbooks going to compete with FREE?
What’s a monopolist to do in such a situation?

 

Answer: Create a computer-adaptive ed tech revolution. The monopolists figured out that they could create computer-adaptive software keyed to student responses IN DATABASES that they, AND THEY ALONE, could get access to. No open-source providers admitted.

 

Added benefit: By switching to computerized delivery of their materials, the educational publishing monopolists would dramatically reduce their costs and increase their profits, for the biggest items on the textbook P&L, after the profits, are costs related to the physical nature of their products–costs for paper, printing, binding, sampling, warehousing, and shipping.

 

By engineering the computer-adaptive ed tech revolution and having that ed tech keyed to responses in proprietary databases that only they had access to, the ed book publishers could kill open source in its cradle and keep themselves from going the way of Smith Corona and whoever it was that manufactured telephone booths.

 

Doing that would prevent the REAL DISRUPTIVE REVOLUTION in education that the educational publishers saw looming–the disruption of THEIR BUSINESS MODEL posed by OPEN-SOURCE TEXTBOOKS.

 

A little history:

Just before its business entirely tanked because of computers, typewriter manufacturer Smith Corona put up a website, the home page of which read, “And on the 8th day God created Smith Corona.” 2007 was the 50th anniversary of the Standard and Poor’s Index. On the day the S&P turned 50, 70 percent of the companies originally on the Index no longer existed. They had been killed by disruptions that they didn’t see coming.
The educational materials monopolists were smarter. They saw coming at them the disruption of their business model that open-source textbooks would bring about. And so they cooked up computer-adaptive ed tech keyed to standards, with responses in proprietary databases that they would control, to prevent that. The adaptive ed tech/big data/big database transition would maintain and even strengthen their monopoly position.

 

But to make that computer-adaptive ed tech revolution happen and so prevent open-source textbooks from killing their business model, the publishers would first need ONE SET OF NATIONAL STANDARDS. That’s why they paid to have the Common [sic] Core [sic] created. That one set of national standards would provide the tags for their computer-adaptive software. That set of standards would be the list of skills that the software would keep track of in the databases that open-source providers could not get access to. Only they would have access to the BIG DATA.
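
For readers who want to see the mechanics Shepherd is describing, here is a minimal sketch, in Python, of computer-adaptive software keyed to standards tags. Everything in it is hypothetical: the item bank, the mastery-update rule, and the record layout are illustrative stand-ins, not any vendor’s actual schema or algorithm, and the CCSS-style codes are used only as example tags. The structural point is what matters: every item carries a standard tag, every response updates a per-student record stored under that tag, and the next item is chosen from whichever tagged skill looks weakest.

```python
# Minimal sketch of standards-tagged adaptive item selection.
# All names, codes, and the update rule are hypothetical illustrations.
from collections import defaultdict

# Every practice item is tagged with exactly one standard code.
ITEM_BANK = [
    {"item_id": 101, "standard": "CCSS.MATH.CONTENT.6.RP.A.1", "difficulty": 0.4},
    {"item_id": 102, "standard": "CCSS.MATH.CONTENT.6.RP.A.1", "difficulty": 0.7},
    {"item_id": 201, "standard": "CCSS.ELA-LITERACY.RL.6.2", "difficulty": 0.5},
]


class StudentRecord:
    """Per-student mastery estimates, keyed by standard tag.

    In the model Shepherd describes, this record lives in a proprietary
    database, and only materials tagged with the same codes can use it.
    """

    def __init__(self, student_id):
        self.student_id = student_id
        self.mastery = defaultdict(lambda: 0.5)  # 0.5 = neutral prior

    def record_response(self, standard, correct, rate=0.2):
        # Nudge the estimate toward 1.0 (correct) or 0.0 (incorrect).
        target = 1.0 if correct else 0.0
        self.mastery[standard] += rate * (target - self.mastery[standard])


def next_item(student):
    """Pick the next item from the student's weakest tagged skill."""
    standards = {item["standard"] for item in ITEM_BANK}
    weakest = min(standards, key=lambda s: student.mastery[s])
    candidates = [it for it in ITEM_BANK if it["standard"] == weakest]
    # Choose the item whose difficulty best matches current mastery.
    return min(candidates,
               key=lambda it: abs(it["difficulty"] - student.mastery[weakest]))


student = StudentRecord("S-0001")
student.record_response("CCSS.MATH.CONTENT.6.RP.A.1", correct=False)
print(next_item(student))  # serves another item tagged to the weak standard
```

Notice the dependency this creates: records keyed to one tag set cannot be joined with records keyed to another, which is why, on Shepherd’s account, a single national set of standards was the necessary first step.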

 

As I have been explaining for a long, long time now, here and elsewhere, the Common Core was the first step in A BUSINESS PLAN.

 

Bill Gates described that business plan DECADES ago. He’s an extraordinarily bright man. Visionary.

 

So, that’s the story, in a nutshell. And it’s not an education story. It’s a business story.

 

And a WHOLE LOTTA EDUCRATS haven’t figured that out and have been totally PLAYED. They are dutifully working for PARCC or SBAC and dutifully attending conferences on implementing the “new, higher standards” and are basically unaware that they have been USED to implement a business plan. They don’t understand that the national standards were simply a necessary part of that plan.

 

And here’s the kicker: The folks behind this plan also see it as a way to reduce, dramatically, the cost of U.S. education. How? Well, the biggest cost, by far, in education is teachers’ salaries and benefits. But imagine 300 students in a room, all using software, with a single “teacher” walking around to make sure that the tablets are working and to assist when necessary. Good-enough training for the children of the proles. Fewer teacher salaries. More money for data systems and software.

 

Think of the money to be saved.

 

And the money to be made.

 

The wrinkle in the publishers’ plan, of course, is that people don’t like the idea of a single, Orwellian national database. From the point of view of the monopolists, that’s a BIG problem. The database is, after all, the part of the plan that keeps the real disruption, open-source textbooks, from happening–the disruption that would end the traditional textbook business as surely as MP3 downloads ended the music CD business and video killed the radio star.

 

So, with the national database dead, for now, the deformers have to go to plan B.

 

What will they do? Here’s something that’s VERY likely: They will sell database systems state by state, to state education departments, or district by district. Each database system will simply be that state’s or district’s own system (who could object to that?), and only approved vendors (guess who?) will be allowed to flow materials through it. Which vendors? Well, the ones with the lobbying bucks and with the money to navigate whatever arcane procedures are created by the states and districts implementing them, with the monopolists’ help, of course. So, the new systems will work basically as the old textbook adoption system did: as an educational materials monopoly protection plan.

 

All this is part of a business plan put in place to prevent the open-source textbook revolution from destroying the business model of the educational materials monopolists.

 

In business, such thinking as I have outlined, above, is called Strategic Planning.

 

So, to recap: to hold onto their monopolies in the age of the Internet, the publishers would use the Big Data ed tech model, which would shut out competitors, and for that, they would need a single set of national standards. The plan that Gates had long had for ed tech proved to be just the ticket. Gates’s plan, and the need to disrupt the open-source disruption before it happened, proved to be a perfect confluence of interest–a confluence that would become a great river of green.

 

The educational publishing monopolists would not only survive but thrive. There would be billions to be made in the switch from textbooks to Big Data and computer-adaptive ed tech. Billions and billions and billions.

 

And that’s why you have the Common [sic] Core [sic].

 

In case you didn’t know it already, privacy is dead. The National Security Agency has asserted the power to listen to your phone calls and read your emails.

Now we learn from Pearson and the esteemed (Sir) Michael Barber (the architect of a philosophy known as “Deliverology”) that the capability to monitor the actions, behaviors, even thoughts of every student is at hand. We are all about to take a dive into the Digital Ocean, whether we want to or not. Big data will tell Pearson and other vendors whatever they want to know. They will know more about our children and our grandchildren than we do. Arne Duncan loosened the federal privacy regulations in 2011, so there is no limit on the information that Pearson and others will collect. But never forget: It is all for the kids.

Peter Greene shared his thoughts about Pearson’s digital ocean here.

He writes:

“Barber assures us that personalized learning at scale will be possible, and again I want to point out that we already have a system that can totally do that (though of course the present system does not provide corporations such as Pearson nearly enough money). I will not pretend that the traditional US public ed system always provides the personalized learning it should, but when reformy types suggest that’s a reason to scrap the whole system, I wonder if they also buy a new car every time the old car runs out of gas (plus, in that metaphor, government is repeatedly pouring sand into the gas tank).

“But no. There will have to be revolution:

“…schools will need to have digital materials of high quality, teachers will have to change how they teach and how they themselves learn…

“This shtick I recognize, because it is as old as education technology. Every software salesman who ever set foot in a school used this one– “This will be a really great tool if you just change everything about how you work.” No. No, no, no. You do not tell a carpenter, “Hey, newspaper is a great building material as long as you change your expectations about how strong and protective a house is supposed to be.”

“You pick a tool because it can help you do the job. You do not change the job so that it will fit the tool. . . . Barber praises the authors of the paper for their “aspirational vision” of what success in schools would look like.

“They see teaching, learning and assessment as different aspects of one integrated process, complementing each other at all times, in real time;

“To which I reply, ‘Wow! Amazing! Do they also envision water that is wet? Wheels that are round?’”

San Antonio is set for a major expansion of privately managed charter schools. Several national chains will open there, welcomed by the mayor and the business community. The San Antonio Express-News published an opinion column by an advocate for the corporate charter chains, but refused to print Professor Julian Vasquez Heilig’s succinct rebuttal.

Despite the blue-sky promises of the charter industry, Heilig writes, the vast majority of Latino and African-American students are prepared for college in public schools. The Stanford CREDO study showed that charters in Texas underperform the state’s public schools. Don’t believe the tales of 100% graduation rates and 100% college-admission rates, he warns. They mask high attrition rates.

For example:

“Same story with BASIS. At the original campus of BASIS charter school in Tucson, Ariz., the class of 2012 had 97 students when they were 6th graders. By the time those students were seniors, their numbers had dwindled to 33, a drop of 66 percent.

“So what happens to families who get churned out of charters like KIPP and BASIS? They end up back at their neighborhood public schools, who welcome them with open arms as they do all students, regardless of race, class, circumstance or level of ability.”

Why not tell the truth about charters? They do not accept the same students, and they have high attrition rates. When they do enroll the same students, they get the same results, so they get rid of low-performing students. It works for some kids, who can attend a school where there are few if any kids with disabilities, English learners, or troublemakers. But it creates a dual system that harms public education.

New York State cut all ties with inBloom, the controversial data-mining project sponsored by the Gates Foundation and the Carnegie Corporation.

The legislature, which totally ignored parent demands for new faces on the New York Board of Regents, bowed to parent protests against the State Education Department’s determination to share confidential student data with inBloom.

In this post, Leonie Haimson describes how parents organized–not only in New York, but wherever inBloom planned to gather confidential student data–and fought back to protect their children’s privacy.

Give Haimson credit for being the spark plug that ignited parent resistance across the nation.

Normally, the federal law called FERPA would have prevented the release of the data that inBloom planned to collect, but in 2011, the U.S. Department of Education changed the regulations to permit inBloom and other data-mining ventures to access student data without parental consent.

Gates and Carnegie contracted with Rupert Murdoch’s Wireless Generation, and the plan was to put millions of student records in an electronic data store managed by Amazon.com.

No one was able to guarantee that the data could never be hacked.

In every state and district where inBloom expected to operate, parents brought pressure on public officials, and the contracts were severed.

At present, inBloom has no known clients.

But as Haimson points out, this could change.

The thirst for data mining seems to be insatiable. As I posted not long ago, the president of Knewton boasted that education is a sector ripe for data mining and that his company and Pearson would be using online tests to gather and store information about every student.

Protecting student privacy must remain high on every parent’s agenda.

A frequent commentator, Bob Shepherd, with many years in curriculum development, education publishing, and assessment, offers sage advice:

“The tests are infallible. They are objective measures. And we know that because they produce data. And not just any old data. Data with numbers and stuff. Very rigorously determined raw-to-scaled-score conversions and cut scores and proficiencies. Super-dooper, charterific, infallible data. Lots and lots of it. I mean lots. Tons. You wouldn’t believe the data!!! Data for days. Rivers of data. Big, big data.

“If the new tests show that 70 percent of students are failures, that’s because 70 percent of students are failures. And if the tests show that 70 percent of our students are failures, that’s because 70 percent of our teachers are failures too.

“You see? The data show that those shiftless, ungritful kids and teachers just can’t measure up to “higher standards” produced by folks with VAST experience as educators. Folks like David Coleman.

“And that’s why teachers need to be replaced with educational technology.

“And that’s why the public schools need to be closed down and replaced with private schools and charter schools.

“And that’s why the country needs to spend about 50 billion dollars making the transition to the Common Core and Big Data.

“Because the Common Core data show a 70 percent failure rate!!!

“Because numbers in a report, however they got there, are never wrong!

“Why are they never wrong? Because they are data!

“data data data data data

“You see?

“It couldn’t POSSIBLY BE that the tests are poorly conceived and written. It couldn’t possibly be that the standards are likewise poorly conceived and written. It couldn’t possibly be that what’s being called data-driven decision making is a variety of NUMEROLOGY.

“Because the masters who designed these tests and these standards are infallible. They are the best makers of tests and standards (well, if you use those terms very, very loosely) that a plutocrat’s money can buy, that is, if the plutocrat is in a hurry, and if he doesn’t really give the matter much thought. You know, if he does this in the way that ordinary, nonplutocratic folks might, say, order up a pizza.

“Glad I could straighten that out for you.

“Just remember: The DATA show that everybody failed and needs to be fired and that everything needs to be privatized.

“Oh, and lots and lots of new software and data systems need to be bought. I mean, billions of dollars worth. Billions and billions.

“You’re welcome.”

If the answer is yes, please come to one or both of the two sessions where I am speaking on April 3. I will give the John Dewey Society lecture at the Convention Center, 100 Level, Room 114, from 4 to 7 pm. (Lots of time for discussion.) My topic: “Does Evidence Matter?” Fair warning: The room holds only 600 people.

Before the Dewey lecture, I will join Philadelphia parent activist Helen Gym and Carl Grant of the University of Wisconsin (chair) in a special Presidential session from 2:15 to 3:45, on the same level, in Room 121B. The title of the session is “Rising to the Challenges of Quality and Equality: The Promise of a Public Pedagogy.”

If you join me at the early session, you will have to race with me to the lecture, and the room may be full.

Peter Greene, in a serious vein, explains that the Common Core standards are integrally connected to the collection of data.

They can’t be changed or revised–contrary to the nationally and internationally recognized protocol for setting standards–because their purpose is to tag every student and collect data on their performance.

They cannot be decoupled from testing because the testing is the means by which every student is tagged and his/her data are collected for Pearson and the big data storage warehouse monitored by Amazon or the U.S. government.

He writes:

We know from our friends at Knewton what the Grand Design is– a system in which student progress is mapped down to the atomic level. Atomic level (a term that Knewton loves deeply) means test by test, assignment by assignment, sentence by sentence, item by item. We want to enter every single thing a student does into the Big Data Bank.

But that will only work if we’re all using the same set of tags.

We’ve been saying that CCSS are limited because the standards were written around what can be tested. That’s not exactly correct. The standards have been written around what can be tracked.

The standards aren’t just about defining what should be taught. They’re about cataloging what students have done.

Remember when Facebook introduced emoticons? This was not a public service. Facebook wanted to up its data-gathering capabilities by tracking the emotional states of users. If users just defined their own emotions, the data would be too noisy, too hard to crunch. But if the user had to pick from the Facebook standard set of user emotions, then Facebook would have manageable data.

Ditto for CCSS. If we all just taught to our own local standards, the data noise would be too great. The Data Overlords need us all to be standardized, to be using the same set of tags. That is also why no deviation can be allowed. Okay, we’ll let you have 15% over and above the standards. The system can probably tolerate that much noise. But under no circumstances can you change the standards– because that would be changing the national student data tagging system, and THAT we can’t tolerate.

This is why the “aligning” process inevitably involves all that marking of standards onto everything we do. It’s not instructional. It’s not even about accountability.

It’s about having us sit and tag every instructional thing we do so that student results can be entered and tracked in the Big Data Bank.

And that is why CCSS can never, ever be decoupled from anything. Why would Facebook keep a face-tagging system and then forbid users to upload photos?

The Test does not exist to prove that we’re following the standards. The standards exist to let us tag the results from the Test. And ultimately, not just the Test, but everything that’s done in a classroom. Standards-ready material is material that has already been bagged and tagged for Data Overlord use.
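
Greene’s Facebook analogy is, at bottom, a point about controlled vocabularies in databases. A toy sketch (hypothetical field names, records, and labels, not anyone’s real schema) shows why shared tags matter: results reported against one common code aggregate instantly, while the same work reported against home-grown local labels produces the “noise” he describes.

```python
# Toy illustration of the tagging argument: student results aggregate
# across classrooms only if everyone reports against the same tag codes.
# Field names, records, and labels here are all hypothetical.
from collections import Counter

# Three classrooms reporting against one shared tag: trivially joinable.
shared = [
    {"student": "A", "tag": "CCSS.ELA-LITERACY.RL.8.1", "correct": True},
    {"student": "B", "tag": "CCSS.ELA-LITERACY.RL.8.1", "correct": False},
    {"student": "C", "tag": "CCSS.ELA-LITERACY.RL.8.1", "correct": False},
]
misses = Counter(r["tag"] for r in shared if not r["correct"])
print(misses)  # Counter({'CCSS.ELA-LITERACY.RL.8.1': 2}) -- one clean number

# The same skill reported against local labels: Greene's "noise."
# These plainly describe similar work, but nothing joins them.
local = [
    {"student": "A", "tag": "Finds textual evidence (Ms. Ruiz, rubric 3)"},
    {"student": "B", "tag": "citing-evidence-unit2"},
    {"student": "C", "tag": "Close reading: support your claims"},
]
print(Counter(r["tag"] for r in local))  # three singleton tags, no rollup
```

It also shows why, on this reading, the standards cannot be revised: changing a standard means changing a key that millions of stored records already point at.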

The end-game is data-tracking, not standards. And that helps to explain why CCSS was written without consultation with educators; without participation by early childhood educators or those knowledgeable about students with disabilities; why there is no appeals process, no means of revision, why they were written so hurriedly in 2009 and pushed into 45 states and D.C. by Race to the Top.

Big data will open the way to the future of education, says the CEO of Knewton.

 

The company is piloting its products at Arizona State University. Whatever we used to call education will cease to exist. Big data will change everything.

 

The so-called Big Data movement, which has been largely co-opted by the for-profit education industry, will serve as “a portal to fundamental change in how education research happens, how learning is measured, and the way various credentials are measured and integrated into hiring markets,” says Mitchell Stevens, an associate professor of education at Stanford University. “Who is at the table making decisions about these things,” he says, “is also up for grabs.”

 

Want to know the future? Watch Knewton: “Big Data stands to play an increasingly prominent role in the way college will work in the future. The Open Learning Initiative at Carnegie Mellon University has been demonstrating the effectiveness of autonomous teaching software for years. Major educational publishers such as Pearson, McGraw-Hill, Wiley & Sons and Cengage Learning have long been transposing their textbook content on to dynamic online platforms that are equipped to collect data from students that are interacting with it. Huge infrastructural software vendors such as Blackboard and Ellucian have invested in analytics tools that aim to predict student success based on data logged by their client universities’ enterprise software systems. And the Bill & Melinda Gates Foundation has marshaled its outsize influence in higher education to promote the use of data to measure and improve student learning outcomes, both online and in traditional classrooms. But of all the players looking to ride the data wave into higher education, Knewton stands out.”

 

Read more at Inside Higher Ed: http://www.insidehighered.com/news/2013/01/25/arizona-st-and-knewtons-grand-experiment-adaptive-learning

If you have been wondering why data mining matters so much, you will want to see this video.

Please note that the U.S. Department of Education’s logo is on this video.

In it, an entrepreneur named Jose Ferreira, CEO of Knewton, shares his vision for a future in which the education of every individual child is completely determined by data. Education today happens to be the most “data-mineable industry in the world,” he says.

His firm and Pearson can map out whatever your child knows and doesn’t know, design lessons, and do whatever is necessary to “teach” the concepts needed. There is nothing about your child that they don’t know, and they will know more about him or her next year than they do this year. If this is the future, then teachers will be mere technicians, if they are needed at all. What do you think?

Peter Greene saw the video and thought it was scary. He wrote: “Knewton will generate this giant data picture. Ferreira presents this the same way you’d say, ‘Once we get milk and bread at the store,’ when I suspect it’s really more on the order of ‘Once we cure cancer by using our anti-gravity skateboards,’ but never mind. Once the data maps are up and running, Knewton will start operating like a giant educational match.com, connecting Pat with a perfect educational match so that Pat’s teacher in Iowa can use the technique that some other teacher used with some other kid in Minnesota. Because students are just data-generating widgets.

“Ferreira is also impressed that the data was able to tell him that some students in a class are slow and struggling, while another student could take the final on Day 14 and get an A, and for the five billionth time I want to ask this Purveyor of Educational Revolution, ‘Just how stupid do you think teachers are?? Do you think we are actually incapable of figuring those sorts of things out on our own?’”
