Archives for category: Technology

Somebody has figured out how to make a pile of money with a bright and shiny innovation: AI. Artificial Intelligence. Two hours of AI daily is all the students need.

Ah, innovation! We can never have too much innovation! But is two hours daily enough instruction?

Peter Greene explains it all here:

MacKenzie Price has made headlines with a charter school that uses two hours of AI instead of human teachers, then expanded that model to cyber schools under the “Unbound Academic Institute” brand. Now she is awaiting approval from the Pennsylvania Department of Education to bring that same cyber charter model to Pennsylvania and cash in on the commonwealth’s already-crowded, yet still profitable, cyber school marketplace.

Price, a Stanford graduate now living in Austin, Texas, started her entrepreneurial journey with Alpha Private Schools. In this glowing profile from Austin Woman, Price tells the origin story of Alpha Schools, starting with her own child:

“Very early on, I started noticing frustration around the lack of ability for the traditional model to be able to personalize anything,” she recalls. “About halfway through my daughter’s second grade year, she came home and said, ‘I don’t want to go to school tomorrow.’ She looked at me and she said, ‘School is so boring,’ and I just had this lightbulb moment. They’ve taken this kid who’s tailor-made to wanna be a good student, and they’ve wiped away that passion.”

Price, who has no previous experience in education, launched Alpha Schools about a decade ago, powered by a model that she soon spun off into its own company – 2 Hour Learning. She has thoughts about how long education needs to take, as she told Madeline Parrish of the Arizona Republic:

When you’re getting one-to-one personalized learning, it doesn’t take all day. Having a personal tutor is absolutely the best way for a student to learn.

The snake oil pitch is even more direct on the company’s website:

School is broken, and we’re here to fix it. 2 Hour Learning gives students an AI tutor that allows them to: Learn 2X in 2 Hours.

The personal tutor in this case is a collection of computer apps. After two hours at the computer, students spend the rest of the day pursuing “personal interests” and joining in life skills workshops. There are no teachers in Alpha’s schools, but “guides” are on hand to provide motivation and support. Tuition at most of the Alpha campuses is $40,000 a year. 

As Price tells an “interviewer” in one paid advertorial:

Yes, it’s absolutely possible! Not only can they learn in two hours what they would learn all day in a traditional classroom, the payoffs are unbelievable! My students master their core curriculum through personalized learning in two hours. That opens up the rest of their day to focus on life skills and finding where their passions meet purpose. Students love it because it takes them away from the all-day lecture-based classroom model. Instead, my students are following their passions.

Price has been clear that “AI” in this case does not mean a ChatGPT-style large language model, but apps more along the lines of IXL Math or Khan Academy’s Khanmigo, which pitch themselves as being able to analyze student responses and pick a next assignment that fits, or perhaps recommend a video to explain a challenging point.

If that seems like an extraordinary stretch, Price has decided to go one better and turn that model into a virtual charter model. How that model would manage the “personal interest” afternoon structure is not entirely clear; one application promises “a blend of scheduled live interactions and self-managed projects.” As the application promises, “No Teachers, Just Guidance.”

And that model is the one Unbound wants to bring to Pennsylvania.


The model looks to be a highly profitable one. While MacKenzie Price is the public face of the company, with a big social media presence, at least some of the business savvy may come from Andrew Price, MacKenzie’s husband and co-founder of the business. Andrew is the Chief Financial Officer at Trilogy, Crossover, Ignite Technologies, and ESW Capital. 

Crossover recruits employees, particularly for remote work. ESW is a private equity firm for one guy – Joe Liemandt, who made a huge bundle in the tech world; Liemandt also owns Trilogy. In 2021, Price’s boss was expressing some interesting thoughts about white collar jobs, as quoted in Forbes:

Most jobs are poorly thought out and poorly designed—a mishmash of skills and activities . . . poor job designs are also quickly exposed with a move to remote work

In 2023, Liemandt was found slipping a million dollars to Republican Glenn Youngkin’s gubernatorial campaign via Future of Education LLC, formed just the day before the donation. It turns out the address of that group was the Price home; MacKenzie had launched the Future of Education podcast in February of 2023 (though her LinkedIn dates it to August).

All of this interconnectedness is part of how the game is played. The Unbound application to open a cyber charter in Pennsylvania includes: 

In support of its operations, Unbound Academy will collaborate with 2hr Learning, Inc. to deliver its adaptive learning platform, while Trilogy Enterprises will manage financial services, and Crossover Markets, Inc. will assist with recruiting qualified virtual educators.

In Pennsylvania, it’s not legal to run a charter school for profit. But the law says nothing about running the school as a non-profit while hiring other for-profit organizations to handle the operation of the school. In Unbound Academy we find the Prices hiring themselves to operate the school. And they’re not done yet. 

YYYYY, LLC. will be the general and administrative service provider.

The President and Director of YYYYY, LLC. is Andrew Price. According to the application, YYYYY, LLC will provide a start-up donation for Unbound and then serve as its management organization.

The application was filed by Timothy Eyerman, the Dean of Parents at Alpha Private Schools.

So we have a total of five organizations involved in the proposed school, all tied to MacKenzie and Andrew Price, and all proposing to pass a pile of Pennsylvania taxpayer money back and forth.

And what a pile of money it is.

Joel Westheimer is Professor of Education and Democracy at the University of Ottawa. He is also a columnist for CBC Radio in Ontario. His most recent book is What Kind of Citizen? Educating Our Children for the Common Good.

This column appeared in The Globe and Mail in Canada.

When Mark Zuckerberg declared that Meta would stop its fact-checking efforts on its social media platforms, he was conducting a master class in bowing to authoritarianism. The move has been viewed as an effort to placate U.S. president-elect Donald Trump, who has praised Meta’s decision. But while it’s easy to direct our outrage at Mr. Zuckerberg personally, his announcement reflects something much deeper and more troubling: the rarefied world of the modern plutocrat.

Social norms govern behaviour for most people, setting limits on what we deem acceptable. But those norms are no longer the same across different social and economic strata. We would like to believe that commonly held norms reflect ideals of fairness, decency and accountability. But Mr. Zuckerberg and his fellow plutocrats share their own set of norms that privilege shareholder value, political expediency and the maintenance of their unparalleled influence. These norms, values and perceptions of what is acceptable behaviour are shaped not by the needs of democracy or society, but by the insulated, self-reinforcing logic of their own milieu – a logic wherein bowing to a fascist seems reasonable, even admirable.

As former deputy prime minister Chrystia Freeland pointed out more than a decade ago, plutocrats live entirely insulated from the rest of us. Their lives are global. They move from one Four Seasons hotel to another. They eat at the same restaurants. They see only each other. As much as we would like to believe otherwise, Mr. Zuckerberg, Jeff Bezos, Elon Musk and their peers do not feel guilty at night. They sleep fine.

The chasm between their world and ours mirrors the grotesque wealth inequality that defines our era, an inequality not seen since the days of the robber barons. And like that earlier gilded age, this one is undermining democracy at its core.

The insulated world plutocrats live in also allows for dangerous indifference to the consequences of their decisions. While the rest of society grapples with misinformation, rising authoritarianism and the erosion of trust in public institutions, the tech elite shrug. Their wealth not only shields them from the effects of democratic decline but often ensures they benefit from it. After all, authoritarian regimes offer stable environments for market expansion and profit maximization – no pesky regulations or democratic checks to contend with.

The implications are chilling. Meta’s decision isn’t just a policy shift; it’s a reflection of a deeper decay in democratic accountability. In a world where billionaires and their companies wield extraordinary power, platforms such as Facebook and X have become the de facto public squares of our time. Yet these spaces are governed not by the public interest but by the profit margins of the ultra-wealthy. When this small handful of individuals decide what speech is amplified, suppressed or ignored, they fundamentally reshape the boundaries of democratic discourse.

What does it mean for democracy when the norms governing the lives of the wealthiest people on Earth are so utterly detached from the values of the societies their platforms claim to serve; when truth is sacrificed to political gain; when fascism is appeased to protect market share; and when those with unimaginable resources opt to placate authoritarianism rather than challenge it? These decisions do not occur in a vacuum. They emerge from a cultural context that prizes wealth and influence above all else – even the integrity of democratic systems.

Mr. Zuckerberg’s announcement is a reminder that democracy does not simply erode; it is eroded. The responsibility for this erosion lies not just with one, two or three men or companies, but with a broader culture of plutocratic complacency and complicity. The erosion is cumulative, each decision stacking upon the next to create a structure that serves the interests of the few at the expense of the many.

The rest of us, however, are not powerless. History demonstrates that when perverse norms of the wealthy are weaponized against democracy, people can and do fight back. From labour movements to civil-rights struggles, ordinary citizens have reclaimed power from elites before and can do so again. Norms can be reimagined and reclaimed. It’s time to insist that truth is not negotiable, that democracy is not a product to be monetized, and that the plutocrats of our age should not be above accountability.

The robber barons of old built railroads and monopolies; today’s tech barons shape reality itself. If we fail to hold them accountable, the price will be not just economic inequality, but the very fabric of democracy. And that is a cost we cannot afford to pay.

Did Elon Musk say that? Yes, he did.

Snopes, the fact-checking service, confirmed that billionaire Elon Musk said that Jeff Bezos’ ex-wife, MacKenzie Scott, was a “reason why Western Civilization died.”

Why? Because since her divorce, Scott has given away billions of dollars to charitable organizations that help women and racial minorities.

Snopes provided this context:

Musk wrote in response to a post on X that “super rich ex-wives who hate their former spouse” should be listed among “reasons that Western Civilization died.” That post said of Scott’s philanthropic efforts that “over half of the orgs to which she’s donated so far deal with issues of race and/or gender.” Musk later deleted his post.

Questions:

Does Elon Musk make charitable gifts? If so, where does he give? There are tax breaks for giving to charity. What are Elon’s charities?

Amid pitched discussions of whether artificial intelligence–powered technologies will one day transform art, journalism, and work, a more immediate impact of AI is drawing less attention: The technology has already helped create barriers for people seeking to access our nation’s social safety net.

Across the country, state and local governments have turned to algorithmic tools to automate decisions about who gets critical assistance. In principle, the move makes sense: At their best, these technologies can help long-understaffed and underfunded agencies quickly process huge amounts of data and respond with greater speed to the needs of constituents.

But careless—or intentionally tightfisted and punitive—design and implementation of these new high-tech tools have often prevented benefits from reaching the people they’re intended to help.

In 2016, during an attempt to automate eligibility processes, Indiana denied one million public assistance applications in three years—a 54 percent increase. Those who lost benefits included a six-year-old with cerebral palsy and a woman who missed an appointment with her case worker because she was in the hospital with terminal cancer. That same year, an Arkansas assessment algorithm cut thousands of Medicaid-funded home health care hours from people with disabilities. And in Michigan, a new automated system for detecting fraud in unemployment insurance claims flagged five times as much fraud as the older system—causing some 40,000 people to be wrongly accused of unemployment insurance fraud.

This is good news. The misuse of AI threatens the integrity of our elections and our ability to trust anything that is communicated to us other than in person. Justice was served!

The man responsible for a political robocalling hoax aimed at New Hampshire voters has been fined $6 million by the Federal Communications Commission.

Steve Kramer, the New Orleans-based political consultant who has admitted his involvement in the hoax, must pay the fine for violating the federal Truth in Caller ID Act, which makes it illegal to make automated telephone calls with intent to defraud or cause harm. The FCC says that it will hand the matter to the US Justice Department if Kramer doesn’t pay up in 30 days.

The hoax occurred in January, when New Hampshire voters received robocalls in the runup to the state’s primary elections. The calls, which featured a computer-generated voice that mimicked the voice of President Joe Biden, urged voters not to cast ballots in the primary.

Kramer hired New Orleans magician Paul Carpenter to create the recording with help from ElevenLabs, a company that uses artificial intelligence to generate highly realistic simulations of individuals’ voices. Carpenter has said that he didn’t know Kramer’s plans for the AI recording. Kramer has claimed that he did it to demonstrate the dangers posed by computer-generated “deepfakes.”

Lingo Telecom, the Missouri phone company that sent out the robocalls, agreed to pay a $1 million fine last month for its involvement in the hoax.

In addition to the FCC fine, Kramer faces 13 felony counts of attempted voter suppression in the New Hampshire courts, as well as 13 misdemeanor counts of impersonating a candidate.

This is not April Fools’ Day. This is real. And it’s serious.

The Los Angeles Times reported a massive data breach that includes the Social Security numbers and other personal data about every American.

About four months after a notorious hacking group claimed to have stolen an extraordinary amount of sensitive personal information from a major data broker, a member of the group has reportedly released most of it for free on an online marketplace for stolen personal data.

The breach, which includes Social Security numbers and other sensitive data, could power a raft of identity theft, fraud and other crimes, said Teresa Murray, consumer watchdog director for the U.S. Public Interest Research Group.

“If this in fact is pretty much the whole dossier on all of us, it certainly is much more concerning” than prior breaches, Murray said in an interview. “And if people weren’t taking precautions in the past, which they should have been doing, this should be a five-alarm wake-up call for them.”

According to a class-action lawsuit filed in U.S. District Court in Fort Lauderdale, Fla., the hacking group USDoD claimed in April to have stolen personal records of 2.9 billion people from National Public Data, which offers personal information to employers, private investigators, staffing agencies and others doing background checks. The group offered in a forum for hackers to sell the data, which included records from the United States, Canada and the United Kingdom, for $3.5 million, a cybersecurity expert said in a post on X.

The lawsuit was reported by Bloomberg Law.

Last week, a purported member of USDoD identified only as Felice told the hacking forum that they were offering “the full NPD database,” according to a screenshot taken by BleepingComputer. The information consists of about 2.7 billion records, each of which includes a person’s full name, address, date of birth, Social Security number and phone number, along with alternate names and birth dates, Felice claimed….

Several news outlets that focus on cybersecurity have looked at portions of the data Felice offered and said they appear to be real people’s actual information. If the leaked material is what it’s claimed to be, here are some of the risks posed and the steps you can take to protect yourself.

The threat of ID theft

The leak purports to provide much of the information that banks, insurance companies and service providers seek when creating accounts — and when granting a request to change the password on an existing account.

A few key pieces appeared to be missing from the hackers’ haul. One is email addresses, which many people use to log on to services. Another is driver’s license or passport photos, which some governmental agencies rely on to verify identities.

Still, Murray of PIRG said that bad actors could do “all kinds of things” with the leaked information, the most worrisome probably being to try to take over someone’s accounts — including those associated with their bank, investments, insurance policies and email. With your name, Social Security number, date of birth and mailing address, a fraudster could create fake accounts in your name or try to talk someone into resetting the password on one of your existing accounts.

How to protect yourself

Data breaches have been so common over the years, some security experts say sensitive information about you is almost certainly available in the dark corners of the internet. And there are a lot of people capable of finding it; VPNRanks, a website that rates virtual private network services, estimates that 5 million people a day will access the dark web through the anonymizing TOR browser, although only a portion of them will be up to no good.

If you suspect that your Social Security number or other important identifying information about you has been leaked, experts say you should put a freeze on your credit files at the three major credit bureaus, Experian, Equifax and TransUnion. You can do so for free, and it will prevent criminals from taking out loans, signing up for credit cards and opening financial accounts under your name. The catch is that you’ll need to remember to lift the freeze temporarily if you are obtaining or applying for something that requires a credit check.

Placing a freeze can be done online or by phone, working with each credit bureau individually. PIRG cautions never to do so in response to an unsolicited email or text purporting to be from one of the credit agencies — such a message is probably the work of a scammer trying to dupe you into revealing sensitive personal information.

For more details, check out PIRG’s step-by-step guide to credit freezes.

You can also sign up for a service that monitors your accounts and the dark web to guard against identity theft, typically for a fee. If your data is exposed in a breach, the company whose network was breached will often provide one of these services for free for a year or more.

As important as these steps are to stop people from opening new accounts in your name, they aren’t much help protecting your existing accounts. Oddly enough, those accounts are especially vulnerable to identity thieves if you haven’t signed up for online access to them, Murray said — that’s because it’s easier for thieves to create a login and password while pretending to be you than it is for them to crack your existing login and password.

Of course, having strong passwords that are different for every service and changed periodically helps. Password manager apps offer a simple way to create and keep track of passwords by storing them in the cloud, essentially requiring you to remember one master password instead of dozens of long and unpronounceable ones. These are available both for free (such as Apple’s iCloud Keychain) and for a fee.

Beyond that, experts say it’s extremely important to sign up for two-factor authentication. That adds another layer of security on top of your login and password. The second factor is usually something sent or linked to your phone, such as a text message; a more secure approach is to use an authenticator app, which will keep you secure even if your phone number is hijacked by scammers.

Yes, scammers can hijack your phone number through techniques called SIM swaps and port-out fraud, causing more identity-theft nightmares. To protect you on that front, AT&T allows you to create a passcode restricting access to your account; T-Mobile offers optional protection against your phone number being switched to a new device, and Verizon automatically blocks SIM swaps by shutting down both the new device and the existing one until the account holder weighs in with the existing device.

Open the link to read the rest of the article.

Brave New World, indeed!

John Thompson is a retired teacher and historian in Oklahoma. I admit that I steer clear of AI, in part because of my innate aversion to “thinking machines” replacing humans. I am biased towards people deciding for themselves, but as I watch the polling numbers in the race for President, I wonder if artificial intelligence might be more trustworthy than people who support a man with Trump’s long record of lying and cheating others.

John Thompson writes:

We live in a nation where reliable polling data reveals that 23% of respondents “strongly believe” or “somewhat believe” that the attacks on the World Trade Center were an “inside job.” One-third of polled adults believe that COVID-19 vaccines “caused thousands of sudden deaths,” and one-third also believe the deworming medication Ivermectin was an “effective treatment for COVID-19.” Moreover, 63% of Americans “say they have not too much or no confidence at all in our political system.”

Such falsehoods were not nearly as common in the late 1990s, when I first watched my high school students learn how to use digital media. But I immediately warned my colleagues that we had to get out in front of the emerging technological threat. Of course, my advocacy for digital literacy, critical thinking, and digital ethics was ignored.

But who knew that misuse of digital media would become so harmful? As Surgeon General Vivek Murthy now explains: 

It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents.

As a special issue of the Progressive reports, our digital ecosystems, with their deepfakes and disinformation, are distorting reality and increasing “human tendencies toward cognitive biases and groupthink.” It further explains that since 2019 the number of people “who cite social media as their number one source for news has increased by 50 percent.” Moreover:

Half of American adults report that they access their news from social media sometimes or often. For Americans under the age of thirty-four, social media is the number one source for news.

The Progressive further explains that young people “don’t necessarily trust what they read and watch.” They “know that private corporations manipulate political issues and other information to suit their agendas, but may not understand how algorithms select the content that they see.”

We in public education should apologize for failing to do our share of the job of educating youth for the 21st century. Then we should commit to plans for teaching digital literacy.

It seems likely that the mental distress suffered by young people could be a first driver toward healthy media systems. According to the National Center for Biotechnology Information:

According to data from several cross-sectional, longitudinal, and empirical research, smartphone and social media use among teenagers relates to an increase in mental distress, self-harming behaviors, and suicidality. Clinicians can work with young people and their families to reduce the hazards of social media and smartphone usage by using open, nonjudgmental, and developmentally appropriate tactics, including education and practical problem-solving.

According to the Carnegie Council:

Social media presents a number of dangers that require urgent and immediate regulation, including online harassment; racist, bigoted and divisive content; terrorist and right-wing calls for radicalization; as well as unidentified use of social media for political advertising by foreign and domestic actors. To mitigate these societal ills, carefully crafted policy that balances civil liberties and the need for security must be implemented in line with the latest cybersecurity developments.

Reaching such a balance will require major investments – and fortitude – from the private sector and government. But it is unlikely that real regulatory change can occur without grassroots citizens’ movements that demand enforcement.

And we must quickly start to take action to prepare for Artificial Intelligence (A.I.). In a New York Times commentary, Evgeny Morozov started with the statement signed by 350 technology executives, researchers and academics warning of the existential dangers of artificial intelligence: “Mitigating the risk of extinction from A.I. should be a global priority.” He then cited a less-alarming position by the Biden administration, which “has urged responsible A.I. innovation, stating that ‘in order to seize the opportunities’ it offers, we ‘must first manage its risks.’”

Morozov then argued, “It is the rise of artificial general intelligence, or A.G.I., that worries the experts.” He predicted, “A.G.I. will dull the pain of our thorniest problems without fixing them,” and it “undermines civic virtues and amplifies trends we already dislike.”

Morozov later concluded that A.G.I. “may or may not prove an existential threat” but that it has an “antisocial bent.” He warned that A.G.I. often fails “to grasp the messy interplay of values, missions and traditions at the heart of institutions — an interplay that is rarely visible if you only scratch their data surface.”

I lack expertise in A.I. and A.G.I. but it seems clear that the dangers of data, driven by algorithms and other impersonal factors, must be controlled by persons committed to social values. It is essential that schools assume their duty for preparing youth for the 21st century, but they can only do so with a team effort. I suspect the same is true of the full range of interconnected social and political institutions. As Surgeon General Murthy concludes his warning to society about social media:

These harms are not a failure of willpower and parenting; they are the consequence of unleashing powerful technology without adequate safety measures, transparency or accountability.

The moral test of any society is how well it protects its children.

No sooner had Apple put its ad for the new iPad on the air than our reader Bob Shepherd expressed his outrage. The ad showed a giant compressor crushing all sorts of musical instruments, materials for art, materials for craft, and replacing them with an iPad.

Bob wrote:

When they come for the cellos and the metronomes; for the saws and the planes and glue pots and stains; for the palette knives, the Titanium White, and the artists’ mannequins, when they came for the notepads and the pencils, they are also coming for the musicians and the luthiers and the painters and the writers and so, so many more. They think that these people can be replaced by the abominations they create, which render mediocrity by the yacht-load in seconds. What are those of us who write music and design and paint and write supposed to do when the public has been trained to this swill from birth? Go extinct, I guess.

Is this how civilization ends?

Apple must have heard him and tens of thousands of others who thought the ad was obnoxious. The tech company apologized and pulled the ad, though it’s still on its website.

The Washington Post reported:

Apple is apologizing for an iPad ad that was supposed to celebrate the creative possibilities of its newest, priciest tablet. Instead, the company received vocal blowback for appearing to destroy beloved physical tools used by artists.

The ad, released after the company announced its newest iPad lineup on Tuesday, showed a massive hydraulic press destroying a mountain of supplies used to create music, paintings, sculptures, clothing and writing. It flattened a record player, a piano, buckets of paints, journals, a camera and drawing board. After about 45 seconds of destruction and one dramatic splatter, the press pulled up to reveal a tiny iPad.

The goal was to show how much the iPad is now capable of, but instead it offended many of the same creatives it was trying to sell on the device.

“Our goal is to always celebrate the myriad ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we’re sorry,” Tor Myhren, Apple’s VP of marketing communications, said in a statement to AdAge.

According to AdAge, the ad, called “Crush!”, will not have any kind of TV run. But it’s still on Apple’s official YouTube page, where it has already drawn 1.1 million views. Apple has a history of high-budget, glossy ads that make a statement, going as far back as its iconic “1984” ad, which came out ahead of the original Apple Macintosh.

Critics online called the new ad wasteful and disrespectful. Some were upset that Apple appeared to be destroying perfectly good art supplies while most were more offended that it devalued the more analog ways of creating art — especially when tools like AI are being used to automate things like writing, music and illustration.

Generative AI tools have used massive amounts of creative works to train their systems to spit out similar style images and texts, often without permission from the original artists.

Tom Ultican, a retired teacher in California, smells a scam in the making. The science behind “the Science of Reading” movement is not very scientific, he writes. Publishers and vendors are preparing to cash in on legislative mandates that force reading teachers to use only one method to teach reading, despite the lack of evidence for its efficacy. Ultican zeroes in on the role of billionaire Laurene Powell Jobs as one of the key players in promoting SoR.

He writes:

Laurene Powell Jobs controls Amplify, a kids-at-screens education enterprise. In 2011, she became one of the wealthiest women in the world when her husband, Steve, died. This former Silicon Valley housewife displays the arrogance of wealth infecting all billionaires. She is now a “philanthropist”, in pursuit of both her concerns and biases. Her care for the environment and climate change is admirable, but her anti-public school thinking is a threat to America. Her company, Amplify, sells the antithesis of good education.

I am on Amplify’s mailing list. Its April 3 message said,

“What if I told you there’s a way for 95% of your students to read at or near grade level? Maybe you’ve heard the term Science of Reading before, and have wondered what it is and why it matters.”

Spokesperson Susan Lambert goes on to disingenuously explain how the Science of Reading (SoR) “refers to the abundance of research illustrating the best way students learn to read.”

This whopper is followed by a bigger one, stating:

“A shift to a Science of Reading-based curriculum can help give every teacher and student what they need and guarantee literacy success in your school. Tennessee school districts did just that and they are seeing an abundant amount of success from their efforts.”

A shift to an SoR-based curriculum is as likely to cause harm as to bring literacy success; the claim is pure used-car salesmanship. The “abundance of success” in Tennessee, on the other hand, is an unadulterated lie. The National Assessment of Educational Progress (NAEP) tracks testing over time and is respected for its integrity. Tennessee’s NAEP data show no success “from their efforts.” The state’s reading scores have actually declined slightly since 2013, which hardly demonstrates an “abundance of success”.

NAEP Data Plot 2005 to 2022

Amplify’s Genesis

Larry Berger and Greg Dunn founded Wireless Generation in 2000 to create the software for lessons presented on screens. Ten years later, they sold it to Rupert Murdoch and his News Corporation for $360 million. Berger pocketed $40 million and agreed to stay on as head of curriculum. Wireless Generation was rebranded Amplify and Joel Klein was hired to run it.

Murdoch proposed buying a million iPads to deliver classroom instruction. However, the Apple operating system was not flexible enough to run the software; the Android system developed at Google met their needs. They purchased Taiwanese-made Asus tablets, well regarded in the marketplace but not designed for the rigors of school use. Another problem was that Wireless Generation had not developed a curriculum, but Murdoch wanted to beat Pearson and Houghton Mifflin to the digital education marketplace … so they forged ahead.

In 2012, the corporate plan was rolling along until the wheels came off. In Guilford County, North Carolina, the school district won a Race to the Top grant of $30 million, which it used to experiment with digital learning. The district’s plan called for nearly 17,000 students in 20 middle schools to receive Amplify tablets. When a charger for one of the tablets overheated, the plan was halted. Only two months into the experiment, the district found that not only had that charger malfunctioned, but another 175 chargers had issues and 1,500 screens had been damaged by kids.

This was the beginning of the end.

By August of 2015, News Corporation announced it was exiting the education business. The corporation took a $371 million write-off. The next month, it announced it was selling Amplify to members of its staff. In the deal, orchestrated by Joel Klein, who remained a board member, Larry Berger assumed leadership of the company.

Three months later, Reuters reported that the real buyer was Laurene Powell Jobs. She purchased Amplify through her LLC, the Emerson Collective. In typical Powell Jobs style, no information was available for how much of the company she would personally control.

Because Emerson Collective is an LLC, it can purchase private companies and is not required to make financial details public. However, the Waverley Street Foundation, also known as the Emerson Collective Foundation, is a 501(c)(3) (EIN: 81-3242506) that must make its money transactions public. Waverley Street received its tax-exempt status on November 9, 2016.

SoR A Sales Scam

The Amplify email gave me a link to two documents that were supposed to explain SoR: “Navigating the shift to evidence-based literacy instruction: 6 takeaways from Amplify’s Science of Reading: The Symposium” and “Change Management Playbook: Navigating and sustaining change when implementing a Science of Reading curriculum.” Let’s call them Symposium and Navigating.

Navigating tells readers that it helps teachers move away from ineffective legacy practices and start making shifts to evidence-based practices. The claim that “legacy practices” are “ineffective” is not evidence-based. The other assertion that SoR is evidence-based has no peer-reviewed research backing it.

Sally Riordan is a Senior Research Fellow at University College London. Britain has many of the same issues with reading instruction. In her recent research, she noted:

“In 2023, however, researchers at the University of Warwick pointed out something that should have been obvious for some time but has been very much overlooked – that following the evidence is not resulting in the progress we might expect.

“A series of randomised controlled trials, including one looking at how to improve literacy through evidence, have suggested that schools that use methods based on research are not performing better than schools that do not.”

In Symposium, we see quotes from Kareem Weaver, who co-founded Fulcrum in Oakland, California, and is its executive director. Weaver was also a managing director at the NewSchools Venture Fund, where Powell Jobs served on the board. He works for mostly white billionaires to the detriment of his community. (Page 15)

Both Symposium and Navigating have the same quote: “Our friends at the Reading League say that instruction based on the Science of Reading ‘will elevate and transform every community, every nation, through the power of literacy.’”

Who is the Reading League and where did they come from?

Dr. Maria Murray is the founder and CEO of The Reading League. It seems to have been hatched at Syracuse University and the State University of New York at Oswego by Murray and Professor Jorene Finn in 2017. That year, the organization took in $11,044 in contributions (EIN: 81-0820021), and in 2018, another $109,652. Then in 2019, its revenues jumped twentyfold to $2,240,707!

Jorene Finn worked for Cambria Learning Group and was a LETRS facilitator at Lexia. That means the group had serious connections to the corporate SoR initiative before it began.

With Amplify’s multiple citations of The Reading League, I speculated that the source of that big money in 2019 might have been Powell Jobs. Her Waverley Street Foundation (AKA the Emerson Collective Foundation) shows only one large donation in 2019: $95,000,000 to the Silicon Valley Community Foundation (EIN: 20-5205488), a donor-directed dark-money fund.

There is no way of following that $95 million.

The Reading League’s Brain Scans: Proving What?

Professor Paul Thomas of Furman University noted the League’s over-reliance on brain scans and shared:

“Many researchers in neurobiology (e.g., Elliott et al., 2020; Hickok, 2014; Lyon, 2017) have voiced alarming concerns about the validity and preciseness of brain imaging techniques such as functional magnetic resonance imaging (fMRI) to detect reliable biomarkers in processes such as reading and in the diagnosis of other mental activity….

“And Mark Seidenberg, a key neuroscientist cited by the “science of reading” movement, offers a serious caution about the value of brain research: “Our concern is that although reading science is highly relevant to learning in the classroom setting, it does not yet speak to what to teach, when, how, and for whom at a level that is useful for teachers.”

“Beware The Reading League because it is an advocacy movement that is too often little more than cherry-picking, oversimplification, and a thin veneer for commercial interests in the teaching of reading.”

The push to implement SoR is a new way to sell what Amplify originally called “personalized learning.” This corporate movement conned legislators, many of whom are co-conspirators, into passing laws forcing schools and teachers to use SoR-related programs, equipment and testing.

SoR is about economic gain for its purveyors; it is not science-based.

When politicians and corporations control education, children and America lose.

To read an earlier post by Tom Ultican on this topic, see this.

Peter Greene warns teachers not to fall for the cheap and lazy artificial intelligence (AI) that designs lesson plans. He explains why in this post:

Some Brooklyn schools are piloting an AI assistant that will create lesson plans for them. 

Superintendent Janice Ross explains it this way: “Teachers spend hours creating lesson plans. They should not be doing that anymore.”

The product is YourWai (get it?) courtesy of The Learning Innovation Catalyst (LINC), a company that specializes in “learning for educators that works/inspires/motivates/empowers.” They’re the kind of company that says things like “shift to impactful professional learning focused on targeted outcomes” unironically. Their LinkedIn profile says “Shaping the Future of Learning: LINC supports the development of equitable, student-centered learning by helping educators successfully shift to blended, project-based, and other innovative learning models.” You get the idea.

LINC was co-founded by Tiffany Wycoff, who logged a couple of decades in the private school world before writing a book, launching a speaking career, and co-founding LINC in 2017. Co-founder Jaime Pales used to work for Redbird Advanced Learning as executive director for Puerto Rico and Latin America and before that “developed next-generation learning programs” at some company. 

LINC has offices in Florida and Colombia. 

YourWai promises to do lots of things so that teachers can get “90% of your work done in 10% of the time.” Sure. Ross told her audience that teachers just enter students’ needs and the standards they want to hit and the app will spit out a lesson plan. It’s a “game changer” that will give teachers more time to “think creatively.” 

These stories are going to crop up over and over again, and every story ought to include this quote from Cory Doctorow:

We’re nowhere near the point where an AI can do your job, but we’re well past the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job.

Look, if you ask AI to write a lesson plan for instructing students about major themes in Hamlet, the AI is not going to read Hamlet, analyze the themes, consider how best to guide students through those themes, and design an assessment that will faithfully measure those outcomes. What it’s going to do is look at a bunch of Hamlet lesson plans that it found online (some of which may have been written by humans, some of which may have been cranked out by some amateur writing for an online corner-cutting site, and some of which will have been created by other AI) and mush them all together. Oh, and throw in shit that it just made up. 

There are undoubtedly lessons for which AI can be useful–cut-and-dried stuff like times tables and preposition use. But do not imagine that the AI has any idea at all of what it is doing, nor that it has any particular ability to discern junk from quality in the stuff it sweeps up online. Certainly the AI has zero knowledge of pedagogy or instructional techniques.

But this “solution” will appeal because it’s way cheaper than, say, hiring enough teachers so that individual courseloads are not so heavy that paperwork and planning take a gazillion hours.