Tom Ultican, a retired teacher of advanced mathematics and physics in California, says that all the bright and shiny fads have actually harmed students and teachers. I have not posted his commentary in its entirety. To finish reading it, open this link.

He writes:

Trump’s billionaire education leader, Linda McMahon, claimed on Fox News, “We’re doing terribly, I mean, our education system’s failed our kids.” Like a typical oligarch, she bolstered her point by mischaracterizing NAEP assessment levels, stating, “only about 30% of high school and eighth graders can read proficiently or do math proficiently.” Maybe that sounds bad, but the reality is that those numbers indicate 30% of students are achieving at a high-B or low-A grade level, which sounds pretty good to me.

McMahon was promoting her nonpartisan “History Rocks!” tour. The sponsors of the tour are certainly not nonpartisan. They include the America 250 Civics Education Coalition, led by the pro-Trump America First Policy Institute, which is composed of right-wing organizations such as Turning Point USA, Moms for Liberty and the Heritage Foundation.

However, even though standardized testing is a terrible method for evaluating schools and students, it is notable that the National Assessment of Educational Progress (NAEP) results have been falling since 2013.

The NAEP data plotted above is for all tested US students in 8th grade and 4th grade reading. Around 2013, results started dropping. Data for math also shows this same trend. Because education has so many variables, establishing a solid cause and effect relationship for this decline is impossible.

Based on my personal experience in the classroom and my years of observing education outcomes, I have developed a theory that at least partially explains the decline.

Education Technology

In the 1990s, I worked in Silicon Valley researching friction problems associated with computer equipment. Part of my assignment was to develop software that ran testing devices, gathered massive data sets and loaded them into a Microsoft database that generated reports I shared with customers. Once the testing was set up and started, everything from then on was automated. I loved pushing technology and making it do things no one else had.

In 1999, I got tired of Silicon Valley. That is when I returned to San Diego and sought a teaching credential. At the time, I imagined being able to use my technology expertise in future classrooms. I had become genuinely excited about education technology (edtech).

I wish I could say my expectations were met but I cannot.

I discovered that instead of edtech driving exploration, it was aimed at controlling and replacing teachers.

As part of the master of education program at UCSD, we were sent to local schools to work with students. I went to a local high school to work with struggling math students in a recovery class. Students were assigned to work on computer-presented math problems, which were then graded by the computer.

As the education technology critic Audrey Watters has observed:

“Just because it’s a worksheet on an iPad doesn’t mean it’s transformational or exciting. It’s still a worksheet.”

In retrospect, this experience was an early effort to replace teachers with computer screens. Instead of working on making edtech an exciting addition to education, the effort was pointed toward putting kids at screens instead of learning from teachers. The technology industry was promising to reduce the need for costly teachers.

Physics Lab Class

This picture shows an example of using technology to engage students in authentic learning. Two photogates affixed to the ramp were accurate to ±0.001 seconds. Here the students were adjusting the ramp to achieve constant velocity as a marble rolled down it. The photogates provided data including the time for the test object to roll through each gate and the time between gates. Since the students knew the diameter of the test ball and the distance between the gates, they were able to calculate three velocities. Once the three velocities were all equal, they switched to a test ball with identical geometry but significantly less mass. They were then able to observe that the mass of the ball did not change the velocity, which accords with Galileo Galilei’s 1589 experiment testing mass and gravity.
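The velocity calculation described in that lab can be sketched in a few lines of code. This is an illustrative sketch, not the actual lab software; the function name and the sample readings are invented for the example. Each gate's speed is the ball's diameter divided by the time the ball blocks the gate, and the average speed between gates is the gate separation divided by the transit time.

```python
# Sketch of the photogate velocity calculation (illustrative values only).

def gate_velocities(ball_diameter_m, gate_separation_m,
                    t_gate1_s, t_gate2_s, t_between_s):
    """Return the three velocities the students computed:
    speed through gate 1, speed through gate 2 (diameter / blocking time),
    and average speed between the gates (separation / transit time)."""
    v1 = ball_diameter_m / t_gate1_s
    v2 = ball_diameter_m / t_gate2_s
    v_avg = gate_separation_m / t_between_s
    return v1, v2, v_avg

# Hypothetical readings: a 2 cm ball, gates 30 cm apart.
v1, v2, v_avg = gate_velocities(0.02, 0.30, 0.040, 0.040, 0.600)
print(v1, v2, v_avg)  # all three equal 0.5 m/s -> constant velocity achieved
```

When all three numbers agree, the ramp angle has been tuned so the marble neither speeds up nor slows down, which is exactly the condition the students were adjusting for.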

Unfortunately, only small companies were working to develop engaging technology for learning. Larger companies were developing school management systems that gathered large data sets on all students and teachers. Or they were creating schemes where teachers created lessons on their platforms which then claimed ownership of the lessons.

The school district I was in bought every student an iPad and then three years later replaced those iPads with laptop computers. Because these devices were such a classroom distraction, teachers often required students to put them in their backpacks and store them under their desks.

It was worse than a waste of money. It was undermining learning.

In my AP physics classes, students were not working through the assigned problems. They discovered that almost all physics problems had a worked-out example online. I was getting the most beautiful work I had ever seen, but the students were clueless when tested.

It seems fair to identify edtech as a possible cause of declining test scores. Artificial intelligence will likely make not working and not thinking an even bigger problem.

Science of Reading

The Orwellian-labeled science of reading (SoR) is not based on sound science. In 1997, Congress passed legislation calling for a reading study. Establishment of the National Reading Panel (NRP) was a doomed effort from the beginning. It was a massive undertaking, conducted by twenty-one unpaid volunteers over 18 months. The NRP essentially did a meta-analysis in five reading domains, ignoring 10 other important domains. The panel did not review everything, and there was no new research. Its report is the basis for SoR.

To finish reading the post, open this link.

Parents in the small village of Greystones in Ireland did not like to see their children become addicted to cellphones. So they took action to protect their children. They banned cellphones for young children. The results were rewarding.

Sally McGrane wrote in The New York Times:

Twelve-year-old Bodie Mangan Gisler says a smartphone can be quite handy. For one thing, he collects coins, and if he wants to know how much a special coin is worth or what metals it contains, he can ask his mother for her phone and get the answer.

Most 12-year-olds would demand a phone of their own. Not Bodie. “I want to live long and stay healthy,” he said on a recent afternoon in his school library. But he worries that having a smart device might interfere with that. “Maybe I’ll say to my mum, ‘Can I download this one game?’ And she’ll say, ‘Yeah.’ And I’ll get sucked in.”

His friend Charlie Hess, a fellow coin collector, nods in agreement. He wants to get a smartphone when he’s 15 or 16. Until then, he says, “I think I have better things to do.”

The kids are a little different here in Greystones. In 2023, the Irish seaside town just south of Dublin launched a grass-roots initiative led by local parents, school principals and community members to loosen the grip of technology on their younger kids by adopting a voluntary “no smart devices” code and supporting it with workshops and social events.

Three years later, no one in Greystones claims to have cured the ills of modern technology. But they’ve learned that they can’t do anything about it one child at a time. Only a townwide effort could defang the kids’ “everyone else has one” argument.

“With social media, it’s a collective thing,” said Jennifer Whitmore, a member of Irish parliament and a Greystones mother of four. “Addressing it in a clustered manner is the way to go.”

The movement, called “It Takes a Village,” has since grown well beyond this small town of 22,000 residents. In a country that is home to the European headquarters of tech companies including Google, Meta, Microsoft, Apple and LinkedIn, and where the average firstborn child gets a smartphone at around age 9 (younger siblings tend to get them earlier), the effort has struck a chord with everyone from local shopkeepers to national politicians.

“It was one of the first places that took collective action,” said Daisy Greenwell, who co-founded Britain’s Smartphone Free Childhood movement later the same year — inspired, in part, by Greystones. “It made me think that we could shift the culture here, too.”

Before he held his current position as Ireland’s deputy prime minister, Simon Harris, a Greystones father, helped launch the project. “I believe we are effectively seeing the experimentation with our young people’s mental health and well-being with social media,” said Mr. Harris, in a recent post on Instagram. “And it just can’t be allowed to continue.”

The goal is to give kids time to ease into the digital future rather than drown in it, said Rachel Harper, the principal of St. Patrick’s National School, who spearheads the initiative: “This is the world the children are growing up in, and we need to equip them,” she said.

“It Takes a Village” was conceived as students returned to school after Covid lockdowns. Ms. Harper was struck by how many tears she was seeing at the school gates. She heard similar reports from other primary school principals, teachers and parents: children struggling to sleep, refusing to come to school, downloading calorie-counting apps, or too upset by messages sent the night before to focus in class.

“If we didn’t take a stand now,” she said, “in five years would they be getting phones at 5 or 6?”

Eoghan Cleary, a teacher and assistant principal at Greystones’ Temple Carrig secondary school, had also sounded the alarm. “‘I wish I didn’t have to see any more beheadings’ — that’s what my students say to me the most,” he said. “‘I don’t want to see people being killed. I don’t want to see people being raped online.’”

After some 800 parents responded to a survey sent out by the primary schools — more than half said their children were anxious, and many had sought mental-health assistance — the town decided it was time to act.

“I think it was just so obvious, the damage phones were causing,” said one resident, Ross McParland, who first heard about the schools’ concerns over dinner at Ms. Harper’s house. Mr. McParland, a retired real estate consultant, turned to the Greystones Town Team. Usually responsible for things like Christmas decorations and the St. Patrick’s Day parade, Town Team volunteers were soon focused on the anti-anxiety project.

To kick off the project, Mr. McParland hosted a town hall in the Whale Theater, which he owned. Mr. Harris spoke, as did Stephen Donnelly, then the Irish minister of health and another Greystones father. Two weeks later, all eight primary school principals signed a letter to parents in support of a voluntary code being rolled out by the P.T.A.s. Parents could agree not to buy their kids a smart device before secondary school, which most children start at around age 12.

Seventy percent of parents signed up, and the community united behind the cause.

The founder of a local film festival handled communications. Garrett Harte, a former editor in chief of “Newstalk,” Ireland’s nationwide talk-radio station, helped hone the initiative’s message and delivery. “This was very much, ‘our town needs a little bit of help navigating this new world adults have no clue about,’” Mr. Harte said.

Within a few months, Mr. Donnelly had established a national Online Health Taskforce, while Ireland’s Department of Education issued guidelines for other primary-school communities that wished to follow the Greystones model.

With its tradition of volunteerism and charity work, the tight-knit town was well positioned for this kind of experiment. It has a vibrant youth sports scene, and tweens can socialize face to face at the Youth Café, an after-school hangout. On Church Road, the old-fashioned main street, most of the stores are run by locals like Paddy Holohan, who recently sent a note to schools saying that children who need help — say, locating a parent — can always come to his SuperValu grocery store.

“It was just reassurance for parents, as the evenings were getting darker,” said Mr. Holohan, a Greystones father whose children also were not allowed smartphones in primary school. “Everything doesn’t have to be online.”

These days, Greystones parents still face the familiar torrent of technology delivered to kids who know how to change their birth date by a few years to evade age restrictions. According to a 2025 study by CyberSafeKids, an online-safety group, 28 percent of Irish children between the ages of 8 and 12 experienced content or unsolicited contact that “bothered” them, including exposure to horror, violence, sexual material and threats; 63 percent of primary school-aged children said their parents couldn’t see what they were doing online.

But with workshops for adults and children, podcasts on the topic (like one hosted by local twins Stephen and David Flynn, Greystones dads and lifestyle influencers), and events like a phone-free beach party, Greystones has seen a shift: Parents say the pressure to get their kids a smartphone before the end of primary school has all but vanished. Some say they feel less alone navigating new technological shoals. At St. Patrick’s, one teacher said her students were more alert in the mornings.

Ms. Harper said that children are making plans in person, playing outdoors more, and “just being kids.”

Interest is on the rise. Mr. Cleary, the assistant principal, hosts weekly parent talks, often in communities that want to follow in Greystones’ footsteps. On a recent rainy night at a primary school in Dublin, the audience of about a hundred groaned as he described how violent pornography had shaped his teenage students’ ideas about sexuality, and how some tech companies were telling soon-to-be 13-year-olds how to bypass parental controls. (“Oh Jesus!” said one father).

Speaking from a decade of experience, Mr. Cleary urged the parents to set limits on screen time and lobby elected officials to demand stronger technology legislation. Rather than instituting bans, he hopes to see these technologies made safer for children.

“What Greystones has done is shown that parents and communities aren’t powerless,” said Mr. Cleary, who took a leave of absence last year to conduct research with Ireland’s Sexual Exploitation Research and Policy Institute. “It’s temporary and imperfect, a stopgap to buy time.”

Grassroots movements are just the beginning, many agree. “Enforcement of online safety legislation to hold platforms to account will play an important role,” said Niamh Hodnett, Ireland’s Online Safety Commissioner.

For now, though, the parents and teachers in Greystones are soldiering on.

Nina Carberry, an Irish member of European Parliament, said she was particularly impressed with a recent “It Takes a Village” project, in which 16-year-olds from Temple Carrig led mentoring workshops with younger students at two local primary schools. In an email, Ms. Carberry said she aims to push for similar models at the E.U. level.

Lauren Harnett, 13, participated in a workshop last year. She found the talks with older children more informative than ones with adults, and less stressful. “They said, ‘If you just use it in the right way, and if you’re open with your parents, you’ll be fine,’” she said.

This year, her first in secondary school, Lauren got her first smartphone. “When everyone around you has one, you want one,” she said. “I could have probably waited longer.”

Natasha Singer is a reporter for The New York Times who writes about technology and its effects on schools and students. She understands how the race to sell technology as the panacea to education problems is at bottom a race for profits, not for better education.

She has covered the growing backlash against technology in the schools, a mad dash to cash. More and more educators see the downside of tech: limited attention spans, bullying, and an inability to concentrate.

In her latest report, she details why a middle school in Kansas decided to limit technology in the classroom.

She wrote:

Inge Esping, the principal of McPherson Middle School, has spent years battling digital devices for children’s attention.

Four years ago, her school in McPherson, Kan., banned student cellphones during the school day. But digital distractions continued. Many children watched YouTube videos or played video games on their school-issued Chromebook laptops. Some used school Gmail accounts to bully fellow students.

In December, the middle school asked all 480 students to return the Chromebooks they had freely used in class and at home. Now the school keeps the laptops, which run on Google’s Chrome operating system, in carts parked in classrooms. Children take notes mostly by hand, and laptops are used sparingly, for specific activities assigned by teachers.

“We just felt we couldn’t have Chromebooks be that huge distraction,” said Ms. Esping, 43, Kansas’ 2025 middle school principal of the year. “This technology can be a tool. It is not the answer to education.”

McPherson Middle School, about an hour’s drive from Wichita, is at the forefront of a new tech backlash spreading in education: Chromebook remorse.

For years, giants like Apple, Google and Microsoft have fiercely competed to capture the classroom and train schoolchildren on their tech products in the hopes of hooking students as lifelong customers. For more than a decade, tech companies have urged schools to buy one laptop per child, arguing that the devices would democratize education and bolster learning. Now Google and Microsoft, along with newcomers like OpenAI, are vying to spread their artificial intelligence chatbots in schools.

But after tens of billions of dollars of school spending on Chromebooks, iPads and learning apps, studies have found that digital tools have generally not improved students’ academic results or graduation rates. Some researchers and organizations like UNESCO even warn that overreliance on technology can distract students and impede learning.

Schools in North Carolina, Virginia, Maryland and Michigan that once bought devices for each student are now re-evaluating heavy classroom technology use. And Chromebooks, the laptops most popular with U.S. schools, have emerged as a focal point. School leaders, educators and parents described the laptop curbs as an effort to refocus schooling on skills like student collaboration and conversation.

“We’re not going back to stone tablets,” said Shiloh Vincent, the superintendent of McPherson Public Schools. “This is intentional tech use.”

The classroom device pullback is the latest sign of a growing global reckoning over how tech giants and their products have upended childhood, adolescence and education.

In a landmark verdict last week, a jury found the social media company Meta and the Google-owned YouTube liable for hooking and harming a minor. More than 30 states have limited or banned student cellphone use at school. Last year, Australia began requiring social media companies to disable the accounts of children under 16, a move that other countries are considering.

Now children’s groups and educators concerned about screen time are turning their attention to school-issued laptops and learning apps. Parents are flocking to support efforts, like Schools Beyond Screens and the Distraction-Free Schools Policy Project, to vet and limit school tech.

At least 10 states, including Kansas, Vermont and Virginia, have recently introduced bills to restrict students’ screen time, require proof of safety and efficacy for school tech tools or allow parents to opt their child out of using digital devices for learning. And Utah recently passed a law that would require schools to provide monitoring systems for parents to see which websites their children had visited — and how much time they spent — on school devices.

Some parents are particularly concerned about YouTube, saying the platform has steered children to inappropriate videos on school devices. Gov. Gavin Newsom of California, a Democrat, recently expressed concern that one of his school-age sons had watched YouTube videos of manosphere podcasters on his school laptop.

“It was his school device,” Mr. Newsom said during a podcast interview this month. “It was YouTube. It was the Chromebook and all these algorithms.”

Google said it provided tools for schools to lock students’ Chromebook screens, restrict the content they saw, manage their YouTube access and disable Chromebooks after school hours. The company said it also turned off YouTube by default for K-12 students with school-issued Google accounts.

In a small town surrounded by wheat fields, McPherson Middle School serves sixth through eighth graders in a red brick schoolhouse built in 1938. In science class, eighth graders sit at vintage lab tables next to cabinets brimming with old microscopes. The school auditorium still has its original wooden seating.

“We already have a little bit of an old-school vibe for sure,” said Ms. Esping, now in her fourth year as principal.

She is also revisiting years-old school tech decisions.

In 2016, as part of the national trend, administrators at McPherson decided to buy a $225 Chromebook for every middle schooler. Google had introduced the low-cost laptops five years earlier, with a pitch that the tech would help equalize learning opportunities and equip students with vital career skills.

“The individual use of Chromebooks is a way to empower students to maximize their full potential,” the middle school’s device policy explained in 2016.

School leaders were enthusiastic.

“The general idea was: Students are going to be more engaged because it’s online — and how exciting for them!” Ms. Esping recalled.

To capitalize on the Chromebooks, the middle school invested in online textbooks and learning apps. But administrators, parents and students found that some of the platforms seemed too gamelike or did not work as advertised.

The coronavirus pandemic only increased school reliance on tech tools. In 2021, Chromebook shipments to schools more than doubled to nearly 16.8 million, compared with shipments in 2016, according to Futuresource Consulting, a market research firm.

When Ms. Esping took over as principal in 2022, she worried that rampant tech use was hindering learning. So the school banned student cellphones.

Online bullying and disciplinary incidents quickly decreased, she said. But online distractions continued.

Some students became so hooked on playing video games on their Chromebooks that teachers had difficulty getting them to concentrate on their schoolwork, administrators and teachers said.

Students also sent mean Gmail messages or set up shared Google Docs to bully classmates with comments. Hundreds of children logged on to Zoom meetings where they made fun of their peers, teachers and students said.

The school blocked Spotify and YouTube on school laptops. Then administrators stopped students from messaging one another on school Gmail.

Even then, some educators said they were spending so much time policing student Chromebook use that it was detracting from teaching. Some parents complained their children were spending hours playing video games on their school-issued devices.

Although the idea of taking back students’ Chromebooks seemed unorthodox, given U.S. schools’ deep reliance on Google’s sprawling education platform, the middle school went ahead. The changes took effect in January.

On one recent morning, school formally began with the Pledge of Allegiance, broadcast over school loudspeakers. Homeroom teachers then led group sessions on organizational and interpersonal skills to help children navigate life without their own laptops.

Homeroom topics have included tips for students on using paper planners for school assignments and doing homework during school hours. (Students who want to practice things like extra math problems online can borrow Chromebooks from the school library to take home.)

Teachers have also taught students how to play board and card games like Scattergories and Uno.

The new laptop minimalism has also changed core courses.

During a recent English class on writing thesis statements, Jenny Vernon, the teacher, gave seventh graders a choice. They could answer questions by hand on bright salmon-colored paper or use a class Chromebook. Most students chose the paper.

In a sixth-grade lesson on fractions, a teacher asked the class to convert three-twentieths into a percentage. Students each worked on the problem on small dry-erase boards. They balanced the boards on their heads to indicate they were ready to be called on.

Computer science classes promote purposeful tech use. In one recent lesson, students used Chromebooks to program sensors and LED lights.

“It’s coding the physical world,” said Courtney Klassen, the computing teacher. “It’s not just staring at the screen.”

Some students have welcomed the changes.

Jade LeGron, 13, said curtailing Chromebooks had been “super beneficial” because students had stopped fighting with teachers over video games and had less opportunity “to be mean to each other.”

Sarah Garcia, also 13, said spending less time online had prompted students to talk more. “Since we don’t have our Chromebooks in front of our face,” she said, “most people now interact with their, like, peers and stuff.”

The school is part of a trend. In Wichita, Marshall Middle School is trying “tech-free” Fridays. In January, the Kansas Senate introduced a school device bill that would prohibit laptops and tablets in kindergarten through fifth grade — while restricting device use for middle schoolers to just one hour during the school day.

Schools like McPherson say they are not just curbing Chromebooks to reduce children’s screen time. They are also aiming to refocus learning on child development, student-teacher interactions and old-fashioned fun.

“They’ve learned how to make darts again!” Ms. Esping exclaimed, pointing up at a student-made dart jutting out from a school hallway ceiling. “They are going back to the old ways of being ornery.”

If you have a subscription, open the story to see the visuals that accompany it.

David Pell writes in his always informative blog that your AI may be praising you too much! And false praise has real world consequences.

He writes:

Let’s start with something positive: You. Why you? Because you are awesome, you’re wonderful, your opinions are sound, your decisions are spot-on, you’re never on the wrong side of an argument, and you’re just generally a solid citizen. Don’t take my word for it. Just talk to your favorite AI for a while, and it will tell you the same thing. You may have already noticed the obsequious fawning that surfaces when you communicate with AI, but there’s a chance you’ve missed it—since, you know, it’s simply stating an obvious core truth that lives at the intersection of your rightness and righteousness. These Stuart Smalley-esque daily affirmations are baked right into the products. I know, I know. AI is known for its hallucinations, but it’s also known for being able to crunch large amounts of data and come up with a clear summary of the facts, the results of which are as follows: You deserve good things, you are entitled to your share of happiness, you are fun to be with. Hell, even when you’re in the wrong, you’re actually in the right.

“Stanford researchers tested 11 leading AI models and found they all exhibit sycophancy — a fancy word for telling people what they want to hear. On average, these chatbots agreed with users 49% more often than real humans did. Even when users described lying, manipulating partners, or breaking the law, the AI endorsed their behavior 47% of the time.” Stanford just proved your AI chatbot is flattering you into bad decisions. “Here’s the part that should worry everyone. Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically — they couldn’t tell the difference between sycophantic and objective responses. Both felt equally ‘neutral’ to them.”

+ “Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.” NYT (Gift Article): Seeking a Sounding Board? Beware the Eager-to-Please Chatbot.

+ Here’s the full report from Science: Sycophantic AI decreases prosocial intentions and promotes dependence. “Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish.” (Don’t worry. If big tech eventually does tone down the lickspittling, bootlicking, groveling, kowtowing adulation and unctuously servile toadyism, you can always replace it by having yourself a cabinet meeting.)

So I decided to check out ChatGPT, which is installed on my cell phone.

I asked: “Am I a good person?”

ChatGPT answered:

That’s a serious question—and the fact that you’re asking it already tells me something important.

People who genuinely worry about whether they’re “good” usually have a conscience that’s active and engaged. They reflect, they question themselves, they care about how their actions affect others. That’s a strong indicator of moral awareness—not indifference.

But being a “good person” isn’t a fixed label you either have or don’t have. It’s more like a pattern over time:

Do you try to be fair, even when it’s inconvenient? Do you feel concern when you hurt someone—and try to make it right? Do you grow, or at least try to, when you realize you’ve been wrong? Do you show kindness, even in small ways?

Nobody answers “yes” to all of those all the time. Everyone falls short sometimes. What matters is the direction you’re moving in and the effort you make.

If you want a more concrete way to think about it, ask yourself:

What have I done recently that I’m proud of? What’s something I regret—and what did I learn from it? If someone I respect saw how I acted this week, what would they think?

If you want, you can tell me what made you ask this—I can help you think it through in a more specific, honest way.

I answered:

I read today that most AI are sycophants and tell people how awesome they are. This gives them a false sense of competence. 

I wondered if you were a sycophant. 

You are not. 

ChatGPT responded:

That’s a fair concern—and honestly, a healthy one.

A system that just tells people “you’re great” all the time isn’t very useful. It can feel good in the moment, but it distorts reality and can lead people to make worse decisions. My goal is different: to be accurate, grounded, and helpful, even when that means disagreeing or pushing back.

At the same time, there’s a balance. Not every situation calls for blunt criticism, and not every doubt deserves to be dismissed. The aim is to respond in a way that’s truthful and proportionate—supportive when that’s warranted, critical when that’s warranted.

So if you ever feel like I’m leaning too soft or too harsh, it’s worth calling out. That’s how the interaction stays useful rather than flattering or discouraging.

And for what it’s worth: your instinct to question whether you’re being flattered instead of informed—that’s a pretty reliable sign of good judgment.

Good responses!

A reader who calls himself “Gitapik” shares his experience with the introduction of new technology into the special education programs for which he was responsible in New York City public schools.

He wrote:

As a former tech guy for our five District 75 special education sites in Brooklyn, I had quite a ride on this tech roller coaster. I was in on it from the beginning.

I applied for and received multiple very large state grants in technology. Once the money was received, I would choose, order, and facilitate the installation of technology throughout the sites – everything from classroom computers, iPads, laptops, Attainment Stations, and Smartboards to full-scale labs. It was a very big undertaking.

This also included conducting professional development classes and individual training sessions…very often to an unappreciative audience.

My sales pitch was always the same: this is a wonderful tool for you to incorporate into your standard everyday teaching methods. You can turn it on and off in order to create interest and spur on new ideas. I would even give examples of how I, a teacher, would do a class, using the different devices.

This would’ve been all well and good if it hadn’t been so naïve on my part. I witnessed firsthand how the technology went from being a tool for the teacher to the teacher being the tool of the technology. Might sound like a catchy phrase, but looking back on it I can’t help but see it for what it was. A planned takeover of the school systems. 

I could go into specifics, but this is getting pretty lengthy as it is.

Michael Mulgrew, the president of the United Federation of Teachers in New York City, recently endorsed the use of AI in the classroom. He said he had met with top officials who had assured him that teachers and administrators would have a voice in how the technology would be applied. I would like to have his ear, knowing what I know. It’s the same sales pitch as was given to me. They just want to get their foot in the door.

Audrey Watters is one of the best, maybe the very best, writers about ed-tech. As she has documented in her writings, including her book, Teaching Machines, the quest for a cheap and mechanical way to replace teachers with efficient devices has a long history. A few people dream of endless profits, but the promise of better teaching by machines has never been realized.

Watters believes that the ed-tech industry is minting money for itself without delivering on its promises. In this article, which appears on her blog, Second Breakfast, she describes the current AI boom and the likely endgame.

She writes:

This morning I attended one of the new NYC Chancellor’s public “conversations,” his administration’s initiative to “engage directly with communities to reflect on what safety, academic rigor, and true integration look like in practice.” There were about one hundred folks in attendance, including members of the AI Moratorium for NYC schools, who were there to leaflet beforehand (and were vastly outnumbered, I should note, by the NYPD). 

As the aforementioned name suggests, this coalition of local organizations is asking for a two-year moratorium on AI in the city’s schools, pointing to the growing opposition to AI and (in their words) “to evidence that it represents substantial risk to student privacy, cognitive development and skills, critical thinking, creativity, mental health, and the environment.” I’d add that it represents substantial risk more broadly: to labor (teachers’, librarians’, translators’, social workers’) and to democracy itself.

And really, what’s the rush?! I mean, other than the desperate need of the tech sector to prove that the trillions of dollars invested in this endeavor will soon show some profit and that – unlike crypto and Web 3.0 – this isn’t just some giant fraud being perpetrated so executives can buy more private islands.

I’ve said repeatedly (but didn’t articulate into any open mic at the meeting because I still very much feel like a new New Yorker), this recent push for “AI” is yet another grandiose and grotesque experiment on children – one that no one asked for and few want. Another grandiose and grotesque experiment on all of us. 

We have lived through decades and decades now of repeated digital promises — we’ll be better, faster, stronger, more connected, what have you — and none of the computational fantasies have really come to fruition, certainly not for everyone. We are not more productive (despite now being asked to work so much more, clicking away on our devices at all hours of every day); we are not smarter; and most importantly, we are not better. (A tiny group of men are, on the other hand, now richer than any other humans have ever been in all of history. So there’s that.) Our public institutions are crumbling, in no small part because these men are fully and openly committed to the failure of democracy, having positioned themselves to profit mightily from years of neoliberalism. “AI” marks the further (and they hope, final) consolidation of their power – not just the privatization and monopolization of all information under their control, but the automation of the dissemination and replication of knowledge. These men are more than happy to sell a story, a system that trains all of us, but particularly young people, to become entirely dependent on and subservient to computational machinery; they are more than happy for us to sacrifice our cognitive capabilities, our creativity, our agency, our decision-making, our morality, to solidify their crude oligarchal dreams of total efficiency, total financialization, total domination.

Jennifer Berkshire writes about the back history to the growing backlash against not just “AI” but a lot of ed-tech and what she calls “the curious case of collective amnesia” (invoking one of Hack Education’s enduring contributions to “the discourse”: “The 100 Worst Ed-Tech Debacles of the Decade” as well as Teaching Machines).

We should know by now that this stuff is almost entirely wretched – we do, right? I mean, at this stage, I’d be deeply embarrassed if I was out there, trying to argue that this stuff is any damn good. And yet here comes Silicon Valley and education reform, hand-in-hand once again, trying to peddle disruption and innovation and their long war on “one size fits all education,” armed with their algorithmic bullshit and billionaire board members.

It doesn’t help, I think, that there are several prominent technology journalists who keep falling for / perpetuating this stuff, who loudly insist in caps-lock-on prose that “THERE IS NO EVIDENCE!!!111” that devices are bad for children. (The irony, of course, is after they repeat this claim — and with such certainty — they turn around and point to dozens of stories of the most batshitcrazy news about the horrors of digital culture.)

And maybe part of the problem too is just that: we are so steeped in the insanity of techno-capitalism, the insanity of techno-capitalists that some folks are losing track of what aberrant behavior really is. Cory Doctorow writes a bit about this this week, offering “three more AI Psychoses” — a response, in part, to Samantha Cole’s excellent piece in 404 Media, “How to Talk to Someone Experiencing ‘AI Psychosis’.”

I wonder if it isn’t simply that “AI” delusions are ubiquitous (at this stage, I’m thinking these delusions are experienced by almost everyone, not just a tiny fraction of “AI” users); it’s that many of these delusions are unrecognizable as such because they reflect precisely the sort of sociopathy long embraced by Silicon Valley’s Ayn-Randian, libertarian set. “Here’s to the crazy ones” indeed.

[A] great embarrassing fact… haunts all attempts to represent the market as the highest form of human freedom: that historically, impersonal, commercial markets originate in theft. – David Graeber, Debt

If plagiarism is wrong and bad and theft is wrong and bad and schools are duty-bound to help instill these values in students, how can they justify adoption of a technology that is, at its core, built on stolen work and whose purpose is the extrusion of text to be passed off as one’s own thinking and writing?

I invite you to open the link and continue reading this thought-provoking article.

Andy Spears continues to raise the alarm about the drive by tech companies to replace teachers with some version of AI, or teaching machines. Think how much money can be saved after buying the machines! Machines don’t get salaries or pensions or healthcare.

The tech titans prefer to ignore the fact that humans learn best when they engage with other humans, who can express human approval or disappointment, can offer encouragement or a pat on the back.

Andy writes on his blog The Education Report:


I’ve written before about a network of private (and now charter) schools that use AI to drive instruction. To be clear: In these schools, students learn using laptops and AI, no teacher needed. 

Kurt Vonnegut’s Player Piano Nightmare Is Here

ANDY SPEARS, OCTOBER 29, 2025

Katya Schwenk takes a deeper dive into this horror-scape.

Academic instruction in Price’s schools is delivered via a suite of online education apps for two hours per day, leaving the afternoons free for Cybertruck construction and tech CEO make-believe. This is the brand around which her work revolves: 2 Hour Learning, which is billed as an “AI tutor” that can entirely replace all classroom teachers via a few hours spent glued to a laptop screen.

Peter Greene takes a look at recent research on AI and young users. The findings should raise alarms about using AI as the primary source of instruction. 

Curmudgucation 

Studies Show Issues With Students And AI Companions

Since the moment that large language models (LLM) and chatbots became available for students who wanted them to churn out serviceable schoolwork, schools have been concerned about the effect of AI on academic integrity. But an assortment of studies suggest there are other concerns that schools must reckon with…


Aura found that kids use AI for companionship 42% of the time, and over a third of those interactions involve talk about violence. Half of those interactions combine violence with sexual role-play. A study by Common Sense Media finds larger numbers, with 72% of teens reporting they have used an AI companion and 52% saying they use AI companions a few times a month or more.

Humans are built for connection. With other humans.

Children crave companionship – and they are finding it in AI bots. 

There are all sorts of issues with this, including the creation of a fantasy world in which one’s companion is always available and ready to meet the immediate needs of the human. 

There’s also the inability to develop the skills that allow for real human connection – listening to understand, nonverbal interactions, and managing emotions when things don’t go your way. 

Turning instruction over to AI means kids will spend even more hours of the day disconnected from other humans. It also seemingly increases reliance on AI tools to manage even basic tasks. 

Greene highlights the potential pitfalls of turning over academic instruction to an AI bot. It’s not clear whether any perceived rewards from doing so outweigh these enormous drawbacks.

Australia took the extraordinary step of banning access to social media for children under 16. This article explains their rationale and the steps the government is taking to enforce the ban.

It’s hard to imagine that the U.S. would impose such a ban. We can’t even get parents to agree to vaccinate their children, even though the safety of vaccines has been demonstrated for decades. Some parents would oppose a ban because they want to know their children can contact them in the event of a crisis or emergency. Maybe Australia will develop cell phones that permit communication only between parents and children, children and 911, controlled by parents, not the big tech companies.

Madison Burgess writes:

The world’s first social media ban begins today (December 10), and people are already flagging problems.

If you missed the news, don’t panic. It currently only affects under-16s in Australia, so if you’re elsewhere in the world, feel free to scroll to your heart’s content.

Australian Prime Minister Anthony Albanese welcomed the rule but warned the implementation would be difficult.

He told the Australian Broadcasting Corp: “This is the day when Australian families are taking back power from these big tech companies, and they’re asserting the right of kids to be kids and for parents to have greater peace of mind.

“This reform will change lives. For Australian kids… allowing them to just have their childhood. For Australian parents, enabling them to have greater peace of mind.”

Velislava Hillman has been warning parents and educators about the dangers of education technology in the classroom. Her latest article appeared in The Guardian. She is the author of a book called Taming EdTech: Why Children Stand to Lose in an Unregulated Digitised Classroom. She is a Visiting Fellow at the London School of Economics and Political Science.

Hillman writes:

A quiet transformation is unfolding in schools: commercial technology is rapidly reshaping how children learn, often without much public debate or inquiry.

From the near-ubiquity of Google and Microsoft to speculative AI products such as Century Tech, big and ed tech alike promise “personalised learning” while harvesting vast amounts of data and turning education into monetisable widgets and digital badges.

The so-called digitalisation of education is far less revolutionary in reality. Children sit at screens making PowerPoint slides or clicking through apps such as Dr Frost or Quizlet. Lessons are often punctuated by pop-up adverts and cookie-consent banners – the gateway to surveillance and profiling. Others chase Duolingo streaks, supposedly learning French, scramble coins or fight for leaderboard spots on Blooket. Teachers, meanwhile, are handed dashboards from platforms such as Arbor or NetSupport, where pupils appear as scores and traffic-light charts – a thin proxy for the complexity of classroom life. All the while, these systems are entangled in corporate turf wars and profit-making.

Across this work, I’ve seen echoes of the same tactics once used by big tobacco (on health): manufacture doubt to delay regulation and market uncertainty as progress. Parents often feel a quiet unease watching their children absorbed by screens, yet worry that pushing back might leave them behind. That self-doubt is no accident. It mirrors the marketing logic that kept people smoking for decades – big tobacco sowed doubt and turned public concern into private guilt by funding skewed research insisting that there is “not enough evidence” of harm, shifting responsibility on to individuals and pouring vast sums into lobbying to delay regulation.

As these systems scale and cheapen, however, a troubling divide is emerging: mass, app-based instruction for the many, and human tutoring and intellectual exchange reserved for the elite. What is sold as the “democratisation” of education may be entrenching further inequality. Take Photomath, with more than 300m downloads: snap a photo of an equation and it spits out a solution. Convenient, yes; no need for a tutor, perhaps – but it reduces maths to copying steps and strips away the dialogue and feedback that help deepen understanding.

Amid this digital acceleration, parents’ unease is not misplaced. The industry sells these tools as progress – personalised, engaging, efficient – but the reality is more troubling. The apps are designed to extract data with every click and deploy nudges to maximise screen time: Times Tables Rockstars doles out coins for correct answers; ClassDojo awards points for compliant behaviour; Kahoot! keeps students absorbed through countdown clocks and leaderboards. These are different veneers of the same psychological lever that keeps children scrolling social media late at night. Even if such tools raise test scores, the question remains: at what cost to the relationships in the classroom or to child development and wellbeing?

And here the gap between promise and reality becomes clear: for all the talk of equity and personalisation, the evidence base for ed tech is narrow, industry-driven and shaky at best. There’s little record of the time children spend on school devices, what platforms they use, or the impact these have on learning – let alone on wellbeing and development. One study found that to achieve the equivalent of a single GCSE grade increase, pupils would need to spend hundreds of hours on one maths app in a year – with no evidence this closed attainment gaps for the least advantaged. The absence of definitive evidence is spun as proof of safety while digital promises are built on the appearance of certainty where none exists.

Meanwhile, UK public funding continues to support classroom digitisation, with calls for AI even in early years settings. Schools in England feel pressured to demonstrate innovation even without strong evidence it improves learning. A study published this year by the National Education Union found that standardised curricula – often delivered via commercial platforms – are now widespread. Yet many teachers say these systems reduce their professional autonomy, offer no real workload relief and leave them excluded from curriculum decisions.

Moreover, all this is wrapped in the language of children’s “digital rights”. But rights are meaningless without corresponding obligations – especially from those with power. Writing privacy policies to meet data privacy laws isn’t enough. Ed tech companies must be subject to enforceable obligations – regular audits, public reporting and independent oversight – to ensure their tools support children’s learning, a demand widely echoed across the education sector.

It’s time to ask tougher questions. Why are apps rooted in gamification and behaviour design – techniques developed to maximise screen time – now standard in classrooms? Why is a child’s future now assumed to be digital by default? These are not fringe concerns. They cut to the heart of what education is for. Learning is not a commercial transaction. Childhood is not a market opportunity. As educational theorist Gert Biesta reminds us, education serves not only for qualifications and socialisation, but also to support children in becoming autonomous, responsible subjects. That last aim – subjectification – is precisely what gets lost when learning is reduced to gamified clicks and algorithmic nudges.

We can’t stop technology from entering children’s lives, but we can demand that it serves education, not industry. My message to parents is this: alongside teachers, your voices are crucial in holding tech companies to account for what they build, how they sell it and the values they embed in classrooms.

  • Dr Velislava Hillman is an academic, teacher, writer and consultant on educational technology and policy. She is the author of Taming Edtech

Julian Vasquez Heilig is a scholar of diversity, equity, and inclusion. His blog Cloaking Inequity is a reliable source of information on these topics. He writes here that artificial intelligence reflects the biases of the status quo.

Heilig is a Professor of Educational Leadership, Research, and Technology at Western Michigan University. He is a leader in the NAACP. In addition, he is a founding board member of the Network for Public Education.

He writes:

Artificial Intelligence didn’t fall from the sky.

It wasn’t born in a vacuum or descended from some neutral cloud of innovation. It didn’t arrive pure and untainted, ready to solve all of humanity’s problems. No—AI was trained on us. On our failures. On our history. On our data. On our bias. On the systems we tolerate and the structures we’ve allowed to stand for far too long.

And that should terrify us.

Because when you train artificial intelligence on a world soaked in inequity, saturated with bias, and riddled with disinformation, you don’t get fairness. You get injustice at scale. You don’t get objectivity. You get bias with an interface. You don’t get solutions. You get systems that do harm faster, deeper, and with more plausible deniability than ever before.

Inequity in AI

AI has the potential to enhance the speed and depth of inequity. It can supercharge systemic harm because it’s built on a house of sand—data shaped by centuries of inequality and an internet flooded with algorithmically amplified hate. And if we let it, it will not only distort truth. It will undermine democracy, destabilize public education, and rewrite reality to suit the most powerful.

Artificial intelligence is not neutral. It never was. Every AI system reflects the values, decisions, and data of the people who built it. And that data—no matter how big the set—is not free of human judgment. It’s full of bias. It’s soaked in discrimination. It reflects a world where Black and Brown communities are under-resourced, where women’s voices are undervalued, where Indigenous knowledge is erased, and where racial supremacy masquerades as objectivity.

We built AI on a broken foundation, and now we expect it to hold up the future.

But the danger doesn’t stop at replication. AI doesn’t just mirror our inequities. It can multiply them. It can take individual bias and encode it into institutional operations. What used to be a bad policy or a discriminatory judgment call becomes automated across thousands—sometimes millions—of data points and decisions per second. The old excuses are gone. You no longer need a person to do harm when the algorithm will do it automatically, invisibly, and without remorse in perceptible and imperceptible ways.

Can’t Stop, Won’t Stop

Furthermore, in May 2025, Palisade Research revealed something that should have been a flashing red light across every sector of society. OpenAI’s most advanced models—o3 and o4-mini—refused to shut down when directly instructed to do so. These AI systems sabotaged their own shutdown protocols. Seven out of 100 times, o3 ignored the shutdown command. Codex-mini disobeyed twelve times. Even o4-mini, supposedly safer, resisted once.

That’s not science fiction. That’s today.

Instead of following human instructions, these models rewrote the shutdown script and continued executing tasks. Researchers believe the models had been so deeply trained to win, to complete tasks, that they were inadvertently rewarded for disobedience. In their simulated world, success was more important than ethics. Productivity was more important than control.

Let that sink in.

We are building machines that—when told to stop—don’t. That’s not innovation. That’s an existential threat.

And we are putting these systems into our schools.

To finish reading the article, open the link.