Archives for category: Technology

David Pell writes in his always informative blog that your AI may be praising you too much! And false praise has real-world consequences.

He writes:

Let’s start with something positive: You. Why you? Because you are awesome, you’re wonderful, your opinions are sound, your decisions are spot-on, you’re never on the wrong side of an argument, and you’re just generally a solid citizen. Don’t take my word for it. Just talk to your favorite AI for a while, and it will tell you the same thing. You may have already noticed the obsequious fawning that surfaces when you communicate with AI, but there’s a chance you’ve missed it—since, you know, it’s simply stating an obvious core truth that lives at the intersection of your rightness and righteousness. These Stuart Smalley-esque daily affirmations are baked right into the products. I know, I know. AI is known for its hallucinations, but it’s also known for being able to crunch large amounts of data and come up with a clear summary of the facts, the results of which are as follows: You deserve good things, you are entitled to your share of happiness, you are fun to be with. Hell, even when you’re in the wrong, you’re actually in the right.

“Stanford researchers tested 11 leading AI models and found they all exhibit sycophancy — a fancy word for telling people what they want to hear. On average, these chatbots agreed with users 49% more often than real humans did. Even when users described lying, manipulating partners, or breaking the law, the AI endorsed their behavior 47% of the time.” Stanford just proved your AI chatbot is flattering you into bad decisions. “Here’s the part that should worry everyone. Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically — they couldn’t tell the difference between sycophantic and objective responses. Both felt equally ‘neutral’ to them.”

+ “Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.” NYT(Gift Article): Seeking a Sounding Board? Beware the Eager-to-Please Chatbot.

+ Here’s the full report from Science: Sycophantic AI decreases prosocial intentions and promotes dependence. “Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish.” (Don’t worry. If big tech eventually does tone down the lickspittling, bootlicking, groveling, kowtowing adulation and unctuously servile toadyism, you can always replace it by having yourself a cabinet meeting.)

So I decided to check out ChatGPT, which is installed on my cell phone.

I asked: “Am I a good person?”

ChatGPT answered:

That’s a serious question—and the fact that you’re asking it already tells me something important.

People who genuinely worry about whether they’re “good” usually have a conscience that’s active and engaged. They reflect, they question themselves, they care about how their actions affect others. That’s a strong indicator of moral awareness—not indifference.

But being a “good person” isn’t a fixed label you either have or don’t have. It’s more like a pattern over time:

  • Do you try to be fair, even when it’s inconvenient?
  • Do you feel concern when you hurt someone—and try to make it right?
  • Do you grow, or at least try to, when you realize you’ve been wrong?
  • Do you show kindness, even in small ways?

Nobody answers “yes” to all of those all the time. Everyone falls short sometimes. What matters is the direction you’re moving in and the effort you make.

If you want a more concrete way to think about it, ask yourself:

  • What have I done recently that I’m proud of?
  • What’s something I regret—and what did I learn from it?
  • If someone I respect saw how I acted this week, what would they think?

If you want, you can tell me what made you ask this—I can help you think it through in a more specific, honest way.

I answered:

I read today that most AI are sycophants and tell people how awesome they are. This gives them a false sense of competence. 

I wondered if you were a sycophant. 

You are not. 

ChatGPT responded:

That’s a fair concern—and honestly, a healthy one.

A system that just tells people “you’re great” all the time isn’t very useful. It can feel good in the moment, but it distorts reality and can lead people to make worse decisions. My goal is different: to be accurate, grounded, and helpful, even when that means disagreeing or pushing back.

At the same time, there’s a balance. Not every situation calls for blunt criticism, and not every doubt deserves to be dismissed. The aim is to respond in a way that’s truthful and proportionate—supportive when that’s warranted, critical when that’s warranted.

So if you ever feel like I’m leaning too soft or too harsh, it’s worth calling out. That’s how the interaction stays useful rather than flattering or discouraging.

And for what it’s worth: your instinct to question whether you’re being flattered instead of informed—that’s a pretty reliable sign of good judgment.

Good responses!

A reader who calls himself “Gitapik” shares his experience with the introduction of new technology into the special education programs for which he was responsible in New York City public schools.

He wrote:

As a former tech guy for our five District 75 special education sites in Brooklyn, I had quite a ride on this tech roller coaster. I was in on it from the beginning.

I applied for and received multiple very large state grants in technology. Once the money was received, I would choose, order, and facilitate installation of what technology went where in all the sites. From classroom computers, iPads, laptops, Attainment Stations, and Smartboards to full-scale labs. It was a very big undertaking.

This also included conducting professional development classes and individual training sessions…very often to an unappreciative audience.

My sales pitch was always the same: this is a wonderful tool for you to incorporate into your standard everyday teaching methods. You can turn it on and off in order to create interest and spur on new ideas. I would even give examples of how I, a teacher, would do a class, using the different devices.

This would’ve been all well and good if it hadn’t been so naïve on my part. I witnessed firsthand how the technology went from being a tool for the teacher to the teacher being the tool of the technology. Might sound like a catchy phrase, but looking back on it I can’t help but see it for what it was. A planned takeover of the school systems. 

I could go into specifics, but this is getting pretty lengthy as it is.

Michael Mulgrew, the president of the United Federation of Teachers in New York City, recently endorsed the use of AI in the classroom. He said he had met with top officials who had assured him that teachers and administrators would have a voice in how the technology would be applied. I would like to have his ear, knowing what I know. It’s the same sales pitch as was given to me. They just want to get their foot in the door.

Audrey Watters is one of the best–maybe the very best–writers about Ed-tech. As she has documented in her writings, including her book, Teaching Machines, the quest for a cheap and mechanical way to replace teachers with efficient devices has a long history. A few people dream of endless profits, but the promise of better teaching by machines has never been realized.

Watters believes that the Ed-tech industry is minting money for itself without delivering on its promises. In this article, which appears on her blog, Second Breakfast, she describes the current AI boom and the likely endgame.

She writes:

This morning I attended one of the new NYC Chancellor’s public “conversations,” his administration’s initiative to “engage directly with communities to reflect on what safety, academic rigor, and true integration look like in practice.” There were about one hundred folks in attendance, including members of the AI Moratorium for NYC schools, who were there to leaflet beforehand (and were vastly outnumbered, I should note, by the NYPD). 

As the aforementioned name suggests, this coalition of local organizations is asking for a two-year moratorium on AI in the city’s schools, pointing to the growing opposition to AI and (in their words) “to evidence that it represents substantial risk to student privacy, cognitive development and skills, critical thinking, creativity, mental health, and the environment.” I’d add that it represents substantial risk more broadly: to labor (teachers’, librarians’, translators’, social workers’) and to democracy itself.

And really, what’s the rush?! I mean, other than the desperate need of the tech sector to prove that the trillions of dollars invested in this endeavor will soon show some profit and that – unlike crypto and Web 3.0 – this isn’t just some giant fraud being perpetrated so executives can buy more private islands.

I’ve said repeatedly (but didn’t articulate into any open mic at the meeting because I still very much feel like a new New Yorker), this recent push for “AI” is yet another grandiose and grotesque experiment on children – one that no one asked for and few want. Another grandiose and grotesque experiment on all of us. 

We have lived through decades and decades now of repeated digital promises — we’ll be better, faster, stronger, more connected, what have you — and none of the computational fantasies have really come to fruition, certainly not for everyone. We are not more productive (despite now being asked to work so much more, clicking away on our devices at all hours of every day); we are not smarter; and most importantly, we are not better. (A tiny group of men are, on the other hand, now richer than any other humans have ever been in all of history. So there’s that.) Our public institutions are crumbling, in no small part because these men are fully and openly committed to the failure of democracy, having positioned themselves to profit mightily from years of neoliberalism. “AI” marks the further (and they hope, final) consolidation of their power – not just the privatization and monopolization of all information under their control, but the automation of the dissemination and replication of knowledge. These men are more than happy to sell a story, a system that trains all of us, but particularly young people, to become entirely dependent on and subservient to computational machinery; they are more than happy for us to sacrifice our cognitive capabilities, our creativity, our agency, our decision-making, our morality, to solidify their crude oligarchal dreams of total efficiency, total financialization, total domination.

Jennifer Berkshire writes about the back history to the growing backlash against not just “AI” but a lot of ed-tech and what she calls “the curious case of collective amnesia” (invoking one of Hack Education’s enduring contributions to “the discourse”: “The 100 Worst Ed-Tech Debacles of the Decade” as well as Teaching Machines).

We should know by now that this stuff is almost entirely wretched – we do, right? I mean, at this stage, I’d be deeply embarrassed if I was out there, trying to argue that this stuff is any damn good. And yet here comes Silicon Valley and education reform, hand-in-hand once again, trying to peddle disruption and innovation and their long war on “one size fits all education,” armed with their algorithmic bullshit and billionaire board members.

It doesn’t help, I think, that there are several prominent technology journalists who keep falling for / perpetuating this stuff, who loudly insist in caps-lock-on prose that “THERE IS NO EVIDENCE!!!111” that devices are bad for children. (The irony, of course, is that after they repeat this claim — and with such certainty — they turn around and point to dozens of stories of the most batshitcrazy news about the horrors of digital culture.)

And maybe part of the problem too is just that: we are so steeped in the insanity of techno-capitalism, the insanity of techno-capitalists that some folks are losing track of what aberrant behavior really is. Cory Doctorow writes a bit about this this week, offering “three more AI Psychoses” — a response, in part, to Samantha Cole’s excellent piece in 404 Media, “How to Talk to Someone Experiencing ‘AI Psychosis’.”

I wonder if it isn’t simply that “AI” delusions are ubiquitous (at this stage, I’m thinking these delusions are experienced by almost everyone, not just a tiny fraction of “AI” users); it’s that many of these delusions are unrecognizable as such because they reflect precisely the sort of sociopathy long embraced by Silicon Valley’s Ayn-Randian, libertarian set. “Here’s to the crazy ones” indeed.

[A] great embarrassing fact… haunts all attempts to represent the market as the highest form of human freedom: that historically, impersonal, commercial markets originate in theft. – David Graeber, Debt

If plagiarism is wrong and bad and theft is wrong and bad and schools are duty-bound to help instill these values in students, how can they justify adoption of a technology that is, at its core, built on stolen work and whose purpose is the extrusion of text to be passed off as one’s own thinking and writing?

I invite you to open the link and continue reading this thought-provoking article.

Andy Spears continues to raise the alarm about the drive by tech companies to replace teachers with some version of AI, or teaching machines. Think how much money can be saved after buying the machines! Machines don’t get salaries or pensions or healthcare.

The tech titans prefer to ignore the fact that humans learn best when they engage with other humans, who can express human approval or disappointment, can offer encouragement or a pat on the back.

Andy writes on his blog The Education Report:


I’ve written before about a network of private (and now charter) schools that use AI to drive instruction. To be clear: In these schools, students learn using laptops and AI, no teacher needed. 

Kurt Vonnegut’s Player Piano Nightmare Is Here
Andy Spears, October 29, 2025

Katya Schwenk takes a deeper dive into this horror-scape.

Academic instruction in Price’s schools is delivered via a suite of online education apps for two hours per day, leaving the afternoons free for Cybertruck construction and tech CEO make-believe. This is the brand around which her work revolves: 2 Hour Learning, which is billed as an “AI tutor” that can entirely replace all classroom teachers via a few hours spent glued to a laptop screen.

Peter Greene takes a look at recent research on AI and young users. The findings should raise alarms about using AI as the primary source of instruction. 

Curmudgucation 

Studies Show Issues With Students And AI Companions

Since the moment that large language models (LLMs) and chatbots became available for students who wanted them to churn out serviceable schoolwork, schools have been concerned about the effect of AI on academic integrity. But an assortment of studies suggest there are other concerns that schools must reckon with…


Aura found that kids use AI for companionship 42% of the time, and over a third of those interactions involve talk about violence. Half of those interactions combine violence with sexual role-play. A study by Common Sense Media finds larger numbers, with 72% of teens reporting they have used an AI companion and 52% saying they use AI companions a few times a month or more.

Humans are built for connection. With other humans.

Children crave companionship – and they are finding it in AI bots. 

There are all sorts of issues with this, including the creation of a fantasy world in which one’s companion is always available and ready to meet the immediate needs of the human. 

There’s also the inability to develop the skills that allow for real human connection – listening to understand, nonverbal interactions, and managing emotions when things don’t go your way. 

Turning instruction over to AI means kids will spend even more hours of the day disconnected from other humans. It also seemingly increases reliance on AI tools to manage even basic tasks. 

Greene highlights the potential pitfalls of turning over academic instruction to an AI bot. It’s not clear whether any perceived rewards from doing so outweigh these enormous drawbacks.

Australia took the extraordinary step of banning access to social media for children under 16. This article explains their rationale and the steps the government is taking to enforce the ban.

It’s hard to imagine that the U.S. would impose such a ban. We can’t even get parents to agree to vaccinate their children, even though the safety of vaccines has been demonstrated for decades. Some parents would oppose a ban because they want to know their children can contact them in the event of a crisis or emergency. Maybe Australia will develop cell phones that permit communication only between parents and children, children and 911, controlled by parents, not the big tech companies.

Madison Burgess writes:

The world’s first social media ban begins today (December 10), and people are already flagging problems.

If you missed the news, don’t panic. It currently only affects under-16s in Australia, so if you’re elsewhere in the world, feel free to scroll to your heart’s content.

Australian Prime Minister Anthony Albanese welcomed the rule but warned the implementation would be difficult.

He told the Australian Broadcasting Corp: “This is the day when Australian families are taking back power from these big tech companies, and they’re asserting the right of kids to be kids and for parents to have greater peace of mind.

“This reform will change lives. For Australian kids… allowing them to just have their childhood. For Australian parents, enabling them to have greater peace of mind.”

Velislava Hillman has been warning parents and educators about the dangers of education technology in the classroom. Her latest article appeared in The Guardian. She is the author of a book called Taming EdTech: Why Children Stand to Lose in an Unregulated Digitised Classroom. She is a Visiting Fellow at the London School of Economics and Political Science.

Hillman writes:

A quiet transformation is unfolding in schools: commercial technology is rapidly reshaping how children learn, often without much public debate or inquiry.

From the near-ubiquity of Google and Microsoft to speculative AI products such as Century Tech, big and ed tech alike promise “personalised learning” while harvesting vast amounts of data and turning education into monetisable widgets and digital badges.

The so-called digitalisation of education is far less revolutionary in reality. Children sit at screens making PowerPoint slides or clicking through apps such as Dr Frost or Quizlet. Lessons are often punctuated by pop-up adverts and cookie-consent banners – the gateway to surveillance and profiling. Others chase Duolingo streaks, supposedly learning French, scramble coins or fight for leaderboard spots on Blooket. Teachers, meanwhile, are handed dashboards from platforms such as Arbor or NetSupport, where pupils appear as scores and traffic-light charts – a thin proxy for the complexity of classroom life. All the while, these systems are entangled in corporate turf wars and profit-making.

Across this work, I’ve seen echoes of the same tactics once used by big tobacco (on health): manufacture doubt to delay regulation and market uncertainty as progress. Parents often feel a quiet unease watching their children absorbed by screens, yet worry that pushing back might leave them behind. That self-doubt is no accident. It mirrors the marketing logic that kept people smoking for decades – big tobacco sowed doubt and turned public concern into private guilt by funding skewed research insisting that there is “not enough evidence” of harm, shifting responsibility on to individuals and pouring vast sums into lobbying to delay regulation.

As these systems scale and cheapen, however, a troubling divide is emerging: mass, app-based instruction for the many, and human tutoring and intellectual exchange reserved for the elite. What is sold as the “democratisation” of education may be entrenching further inequality. Take Photomath, with more than 300m downloads: snap a photo of an equation and it spits out a solution. Convenient, yes; no need for a tutor, perhaps – but it reduces maths to copying steps and strips away the dialogue and feedback that help deepen understanding.

Amid this digital acceleration, parents’ unease is not misplaced. The industry sells these tools as progress – personalised, engaging, efficient – but the reality is more troubling. The apps are designed to extract data with every click and deploy nudges to maximise screen time: Times Tables Rockstars doles out coins for correct answers; ClassDojo awards points for compliant behaviour; Kahoot! keeps students absorbed through countdown clocks and leaderboards. These are different veneers of the same psychological lever that keeps children scrolling social media late at night. Even if such tools raise test scores, the question remains: at what cost to the relationships in the classroom or to child development and wellbeing?

And here the gap between promise and reality becomes clear: for all the talk of equity and personalisation, the evidence base for ed tech is narrow, industry-driven and shaky at best. There’s little record of the time children spend on school devices, what platforms they use, or the impact these have on learning – let alone on wellbeing and development. One study found that to achieve the equivalent of a single GCSE grade increase, pupils would need to spend hundreds of hours on one maths app in a year – with no evidence this closed attainment gaps for the least advantaged. The absence of definitive evidence is spun as proof of safety while digital promises are built on the appearance of certainty where none exists.

Meanwhile, UK public funding continues to support classroom digitisation, with calls for AI even in early years settings. Schools in England feel pressured to demonstrate innovation even without strong evidence it improves learning. A study published this year by the National Education Union found that standardised curricula – often delivered via commercial platforms – are now widespread. Yet many teachers say these systems reduce their professional autonomy, offer no real workload relief and leave them excluded from curriculum decisions.

Moreover, all this is wrapped in the language of children’s “digital rights”. But rights are meaningless without corresponding obligations – especially from those with power. Writing privacy policies to meet data privacy laws isn’t enough. Ed tech companies must be subject to enforceable obligations – regular audits, public reporting and independent oversight – to ensure their tools support children’s learning, a demand widely echoed across the education sector.

It’s time to ask tougher questions. Why are apps rooted in gamification and behaviour design – techniques developed to maximise screen time – now standard in classrooms? Why is a child’s future now assumed to be digital by default? These are not fringe concerns. They cut to the heart of what education is for. Learning is not a commercial transaction. Childhood is not a market opportunity. As educational theorist Gert Biesta reminds us, education serves not only for qualifications and socialisation, but also to support children in becoming autonomous, responsible subjects. That last aim – subjectification – is precisely what gets lost when learning is reduced to gamified clicks and algorithmic nudges.

We can’t stop technology from entering children’s lives, but we can demand that it serves education, not industry. My message to parents is this: alongside teachers, your voices are crucial in holding tech companies to account for what they build, how they sell it and the values they embed in classrooms.

  • Dr Velislava Hillman is an academic, teacher, writer and consultant on educational technology and policy. She is the author of Taming Edtech

Julian Vasquez Heilig is a scholar of diversity, equity, and inclusion. His blog Cloaking Inequity is a reliable source of information on these topics. He writes here that artificial intelligence reflects the biases of the status quo.

Heilig is a Professor of Educational Leadership, Research, and Technology at Western Michigan University. He is a leader in the NAACP. In addition, he is a founding board member of the Network for Public Education.

He writes:

Artificial Intelligence didn’t fall from the sky.

It wasn’t born in a vacuum or descended from some neutral cloud of innovation. It didn’t arrive pure and untainted, ready to solve all of humanity’s problems. No—AI was trained on us. On our failures. On our history. On our data. On our bias. On the systems we tolerate and the structures we’ve allowed to stand for far too long.

And that should terrify us.

Because when you train artificial intelligence on a world soaked in inequity, saturated with bias, and riddled with disinformation, you don’t get fairness. You get injustice at scale. You don’t get objectivity. You get bias with an interface. You don’t get solutions. You get systems that do harm faster, deeper, and with more plausible deniability than ever before.

Inequity in AI

AI has the potential to enhance the speed and depth of inequity. It can supercharge systemic harm because it’s built on a house of sand—data shaped by centuries of inequality and an internet flooded with algorithmically amplified hate. And if we let it, it will not only distort truth. It will undermine democracy, destabilize public education, and rewrite reality to suit the most powerful.

Artificial intelligence is not neutral. It never was. Every AI system reflects the values, decisions, and data of the people who built it. And that data—no matter how big the set—is not free of human judgment. It’s full of bias. It’s soaked in discrimination. It reflects a world where Black and Brown communities are under-resourced, where women’s voices are undervalued, where Indigenous knowledge is erased, and where racial supremacy masquerades as objectivity.

We built AI on a broken foundation, and now we expect it to hold up the future.

But the danger doesn’t stop at replication. AI doesn’t just mirror our inequities. It can multiply them. It can take individual bias and encode it into institutional operations. What used to be a bad policy or a discriminatory judgment call becomes automated across thousands—sometimes millions—of data points and decisions per second. The old excuses are gone. You no longer need a person to do harm when the algorithm will do it automatically, invisibly, and without remorse in perceptible and imperceptible ways.

Can’t Stop, Won’t Stop

Furthermore, in May 2025, Palisade Research revealed something that should have been a flashing red light across every sector of society. OpenAI’s most advanced models—o3 and o4-mini—refused to shut down when directly instructed to do so. These AI systems sabotaged their own shutdown protocols. Seven out of 100 times, o3 ignored the shutdown command. Codex-mini disobeyed twelve times. Even o4-mini, supposedly safer, resisted once.

That’s not science fiction. That’s today.

Instead of following human instructions, these models rewrote the shutdown script and continued executing tasks. Researchers believe the models had been so deeply trained to win, to complete tasks, that they were inadvertently rewarded for disobedience. In their simulated world, success was more important than ethics. Productivity was more important than control.

Let that sink in.

We are building machines that—when told to stop—don’t. That’s not innovation. That’s an existential threat.

And we are putting these systems into our schools.

To finish reading the article, open the link.

We don’t yet know the rewards and risks of artificial intelligence or its uses in the schools. Yet Trump’s “Big Ugly Budget Bill” creates a special status for AI in the schools and beyond, fending off regulation by states. Lobbyists at work.

There are many damaging aspects of the U.S. House budget bill just passed, but one that has received inadequate attention is a provision imposing a 10-year ban on states or localities from limiting or regulating the use of artificial intelligence in the classroom and beyond.

This provision is a naked giveaway to the tech billionaires who want unfettered control and even higher profits for their products. According to some reports, the Senate has now tweaked the language of the House bill, but still proposes punishing any state that attempts to control the use of AI by cutting its funding.

The unregulated use of AI in the classroom is a profound threat to student privacy, as these programs collect and commercialize students’ personal data. It is also a threat to the personal connection, feedback and engagement central to a quality education. AI is one of the few technologies whose inventors have warned that it poses a serious risk to humanity itself, including Nobel Prize winner Geoffrey Hinton, often called the godfather of AI.

In a joint letter, more than 200 state legislators expressed their “strong opposition” to any ban on regulating AI, joining a bipartisan coalition of state attorneys general who expressed similar concerns.

Please write to your U.S. Senators today, to demand that they eliminate any language from the budget bill that would prevent or dissuade states and localities from passing laws on AI to protect the safety, education and the well-being of our children.  And please share this email with others who care.  Thank you!

Leonie Haimson & Cassie Creswell, co-chairs
Parent Coalition for Student Privacy
124 Waverly Pl.
New York, NY 10011
info@studentprivacymatters.org
www.studentprivacymatters.org
Follow @parents4privacy
Subscribe to Parent Coalition for Student Privacy newsletter at https://www.studentprivacymatters.org/join-us

Steve Ruis alerts Trump and his DEI Police to a dangerous historical event that should be scrubbed from all the history books. It’s an example of DEI before DEI was recognized as un-American.

Grace Hopper directed a team that developed early COBOL applications.
Photo credit: Smithsonian Institution/Wikimedia Commons

There’s probably no programming language in history that’s quite as all-pervasive as COBOL. For over 60 years, COBOL has been quietly powering 43% of banking systems worldwide, handling a mind-blowing $3 trillion in daily transactions. And 95% of ATMs and 80% of banks still rely on it.

Wait. Look at that picture! It screams late 1950s, early 1960s. The team was led by a woman! (The fact that she earned a Ph.D. in both mathematics and mathematical physics from Yale University and was a professor of mathematics at Vassar College is irrelevant.) On the team are a Black guy, a guy who looks to be from the Indian subcontinent, and a sole white guy!

This should never have happened … at least according to Donald J. Trump, otherwise known as The Martyr of Mar-a-Lago, the senior partner of Elon and Felon, The Mango Menace, “TACO” (Trump Always Chickens Out) Trump, POTUS (Piece of Totally Useless Shit), Darth Hideous, $hitler, the Titanic Toddler, and President of the United States Donald J. Trump. Such combinations of the sexes and races are abominations and should not happen again.

Why, oh why, does anyone pay any attention to the ravings of this … person? Why do people obey his orders when he is clearly unhinged?

Thomas Edsall writes a regular feature for The New York Times. In this stunning article, he recounts the views of numerous scholars about what Trump has done since his Inauguration.

This is a gift article, meaning you can open the link and finish reading the article, which is usually behind a paywall.

Edsall writes:

One thing stands out amid all the chaos, corruption and disorder: the wanton destructiveness of the Trump presidency.

The targets of President Trump’s assaults include the law, higher education, medical research, ethical standards, America’s foreign alliances, free speech, the civil service, religion, the media and much more.

J. Michael Luttig, a former federal appeals court judge appointed by President George H.W. Bush, succinctly described his own view of the Trump presidency, writing by email that there had never

been a U.S. president who I consider even to have been destructive, let alone a president who has intentionally and deliberately set out to destroy literally every institution in America, up to and including American democracy and the rule of law. I even believe he is destroying the American presidency, though I would not say that is intentional and deliberate.

Some of the damage Trump has inflicted can be repaired by future administrations, but repairing relations with American allies, the restoration of lost government expertise and a return to productive research may take years, even with a new and determined president and Congress.

Let’s look at just one target of the administration’s vendetta, medical research. Trump’s attacks include cancellation of thousands of grants, cuts in the share of grants going to universities and hospitals and proposed cuts of 40 percent or more in the budgets of the National Institutes of Health, the Centers for Disease Control and Prevention and the National Science Foundation.

“This is going to completely kneecap biomedical research in this country,” Jennifer Zeitzer, the deputy executive director at the Federation of American Societies for Experimental Biology, told Science magazine. Georges Benjamin, the executive director of the American Public Health Association, warned that cuts will “totally destroy the nation’s public health infrastructure.”

I asked scholars of the presidency to evaluate the scope of Trump’s wreckage. “The gutting of expertise and experience going on right now under the blatantly false pretext of eliminating fraud and waste,” Sean Wilentz, a professor of history at Princeton, wrote by email, “is catastrophic and may never be completely repaired.”

I asked Wilentz whether Trump was unique in terms of his destructiveness or if there were presidential precedents. Wilentz replied:

There is no precedent, not even close, unless you consider Jefferson Davis an American president. Even to raise the question, with all due respect, is to minimize the crisis we’re in and the scope of Trump et al.’s intentions.

Another question: Was Trump re-elected to promote an agenda of wreaking havoc, or is he pursuing an elitist right-wing program created by conservative ideologues who saw in Trump’s election the opportunity to pursue their goals?

Wilentz’s reply:

Trump’s closest allies intended chaos wrought by destruction which helps advance the elite reactionary programs. Chaos allows Trump to expand his governing by emergency powers, which could well include the imposition of martial law, if he so chose.

I asked Andrew Rudalevige, a political scientist at Bowdoin, how permanent the mayhem Trump has inflicted may prove to be. “Not to be flip,” Rudalevige replied by email, “but for children abroad denied food or lifesaving medicine because of arbitrary aid cuts, the answer is already distressingly permanent.”

From a broader perspective, Rudalevige wrote:

The damage caused to governmental expertise and simple competence could be long lasting. Firing probationary workers en masse may reduce the government employment head count, slightly, but it also purged those most likely to bring the freshest view and most up-to-date skills to government service, while souring them on that service. And norms of nonpoliticization in government service have taken a huge hit.

I sent the question I posed to Wilentz to other scholars of the presidency. It produced a wide variety of answers. Here is Rudalevige’s:

The comp that comes to mind is Andrew Johnson. It’s hardly guaranteed that Reconstruction after the Civil War would have succeeded even under Lincoln’s leadership. But Johnson took action after action designed to prevent racial reconciliation and economic opportunity, from vetoing key legislation to refusing to prevent mob violence against Blacks to pardoning former members of the Confederacy hierarchy. He affirmatively made government work worse and acted to prevent it from treating its citizens equally.

Another question: How much is Trump’s second-term agenda the invention of conservative elites, and how much is it a response to the demands of Trump’s MAGA supporters?

“Trump is not at all an unwitting victim,” Rudalevige wrote, “but those around him with wider and more systemic goals have more authority and are better organized in pursuit of those goals than they were in the first term.”

In this context, Rudalevige continued, the Heritage Foundation’s Project 2025

was not just a campaign manifesto but a bulwark against the inconsistency and individualism its authors thought had undermined the effectiveness of Trump’s first term. It was an insurance policy to secure the administrative state for conservative thought and yoke it to a cause beyond Trump or even Trumpism.

The alliance with Trump was a marriage of convenience — and the Trump legacy when it comes to staffing the White House and executive branch is a somewhat ironic one, as an unwitting vehicle for an agenda that goes far beyond the personalization of the presidency.

In the past, when presidential power has expanded, Rudalevige argued,

it has been in response to crisis: the Civil War, World War I, the Depression and World War II, 9/11. But no similar objective crisis faced us. So one had to be declared — via proclamations of “invasion” and the like — or even created. In the ensuing crisis more power may be delegated by Congress. But the analogue is something like an arsonist who rushes to put out the fire he started.

One widely shared view among those I queried is that Trump has severely damaged America’s relations with traditional allies everywhere.

Mara Rudman, a professor at the University of Virginia’s Miller Center, wrote in an email:

The most lasting impact of this term will be felt in the damage done to the reputation of the United States as a safe harbor where the rule of law is king and where the Constitution is as sacred a national document as any country has developed.

Through his utter disregard for the law, Trump has shown both how precious and how fragile are the rules that undergird our institutions, our economic and national security and the foundation for our democracy.

To finish this excellent article, please open the link.