John Thompson, retired teacher and historian, knows that we are at a fork in the road with artificial intelligence: Will it control us, or will we control it? The evolution and implementation of AI are driven by corporations making huge investments and seeking huge profits. The well-being of children is not their uppermost goal.
He writes:
My head has been spinning since I attended the University of Oklahoma’s “Applied A.I. in the Workplace” seminar.
The session began with O.U.’s Dr. Shishir Shah, who provided a detailed history of machine learning, starting in the 1940s. Dr. Shah went into the nuances of the phases of A.I., culminating in today’s period of “Human Alignment,” with its exploding databases. He says we’re entering an era where we don’t ask whether machines “think,” but what A.I. will learn next.
In conversations with Dr. Shah, I was especially impressed with his insights into public education, and what we would need to do to prepare students for the 21st century. He also said:
As both Dr. Ali and Dr. Jones indicated, we all have to engage in open and transparent discussions about AI and its uses. This will help improve our understanding of its potential impacts, which then can help shape appropriate guidelines, standards, and policies. Engagement is important.
Then Dr. Kyle Jones, who leads field engineering for Databricks, warned that anything you think you know about A.I. changes within six months. Dr. Jones described a number of ways that A.I. provides useful results. But, he added, A.I. is making things easier for robots, and then asked, “What about human beings?”
Dr. Jones also questioned the role of corporate profits in rapidly expanding A.I.
Then Dr. Asim Ali, from Auburn University, explained that private investment in A.I. is dominated by the U.S., but that we need international solutions and more regulation. He traced the recent history of A.I. rising, declining, and returning to growth as it approaches long-term expansion. He used Anthropic’s “Claude” chatbot as an example of what’s possible, as well as of A.I.’s major shortcomings.
Dr. Ali advocates engaging in conversations about A.I. and its uses; if we are “passive about A.I., the future with AI will not be one we like.”
He also reported on “the low likelihood that we will have [A.I.] Superintelligence anytime soon, but that there’s value in discussing a future with Superintelligence because it challenges us to determine our values when using AI and wrestle with the potential negative outcomes for human society.”
That brings me from the nuanced histories and possible futures they explained to the more complicated question of how to minimize the harms of A.I. They offered complex appraisals of multiple paths forward. Perhaps we could refine technocratic skills to program A.I. so it doesn’t turn on humans. Or should we try to launch A.I. so that it then learns how to protect humans and make a better world for them?
And, yes, companies want us to use more data, despite the environmental damage that results. But shouldn’t we ask whether our rampant use of digital tools and social media is meaningful enough to justify the harms done by data centers?
And, shouldn’t we do a better job of teaching critical thinking?
So, when I drove home from those sessions, my plan was to first reread my notes and to deepen my understanding of their research. But, the first thing I found in my mailbox was Jill Lepore’s “We, The Robots.”
And Lepore’s opening sentence was a quote from Geoffrey Hinton, a “Nobel Prize-winning godfather of A.I.”: “Unless you can be very sure that it’s not going to kill you when it grows up, you should worry.”
Lepore cites Daniel Roher, the director of the documentary “The A.I. Doc,” which quotes an A.I. insider who says, “I know people who work on A.I. risk who don’t expect their children to make it to high school.”
Roher further explains that the government has “abdicated the regulation of artificial intelligence, just as it failed to pass any meaningful legislation regarding social media.”
Lepore uses Anthropic’s effort to create an A.I. “Constitution” for its chatbot Claude as a “trying” example of the problems with A.I., at a time when President Donald Trump is attacking the American Constitution.
And, she asks whether Anthropic’s efforts are designed to “move toward human participation and democratic governance instead of relying on what appears to be technocratic automatism.”
Lepore recalled reasons for hope when OpenAI formed a “Superalignment team” and President Joe Biden issued an executive order “calling for Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” But “In Trumpworld, this was the equivalent of DEI for computers.”
And that brings me back to the O.U. seminar. I don’t know enough to compare and contrast its experts’ detailed findings on A.I. with those of the experts Jill Lepore drew upon. I heard the O.U. speakers as less pessimistic, emphasizing the long history of challenges that humans have overcome. But I believe the biggest difference between them is the tone of their analyses.
For instance, Dr. Jones told me, “There is no single inevitable path that A.I. will follow. As humans we have free will and we get to choose.” So his “response is to engage with this, rather than ignore it and hope for the best. After all, hope is not a strategy.”

The answer to both questions (there is no “or”) is “no.” What is happening now is the most probable line of action. Corporations are laying off workers in rather large numbers and claiming that they are replacing those workers with “AIs.” There is no evidence that the “AIs” used can actually do the jobs of the workers displaced, but corporate execs love to lay off workers because it makes their stock options that much more valuable.
An example already documented is a law firm that laid off all of its law clerks and “replaced” them with AIs, only to find that the AI output wasn’t reliable, even to the extent that the AI made up legal precedents out of whole cloth. The junior partners had to review all of that output, tying them up substantially. When the mistake was recognized, the firm tried to hire back the clerks it had fired, but they had found other jobs.
In another case, AIs had been so good and so fast at creating computer code that a company laid off all of its junior software engineers. Again, the AI code proved not to be trustworthy and had to be checked by senior engineers, who pointed out that they had learned how to be senior software engineers by working as junior software engineers. The corporate execs had just laid off all of their future senior software engineers.
The “hype” and the “promise” of AI are entirely fictional at this point, but people are already acting as if they were real and realizing their mistakes too late to correct them. And aren’t there proposals to do away with teachers and replace them with AIs? That disaster is currently being pursued by oligarchs who hate paying employees.
At one time some Japanese corporations had the goal of providing quality jobs to people in their communities. Why is that goal not part of the incorporation requirements in this country?
Steve,
You didn’t mention the massive environmental damage caused by AI, the millions of gallons of water consumed by data centers.
Those costs are also massive, and studies show that even the economic and development costs are unlikely to come close to being paid for through subscriptions and the like. Maybe the goal is a government bailout?
Well said.
Agree. Artificial intelligence is a marketing term, not an accurate description. It can be used to influence people’s thoughts and behaviors, but it is not intelligent. Would you hire someone to do any job at all if you knew that they were deceiving you more than half the time they spoke? I wouldn’t.
Most of the time a situation can’t be explained with a one-word answer, and the future of AI is unknown. The only one-word answer to whether this strategy of machine learning and decision making will work is “hopefully.” And HOPE IS A STRATEGY! It’s a way of thinking that is chosen. Nobody can force that feeling upon you. One chooses it. It’s a loaded word!
I fear for my children.
‘Hope,’ says stiegem, and ‘fear,’ says FLERP!. I know these emotions all too well, having survived Arne Duncan.
More than half the nations of the world in 2026 are autocracies; fewer than half are democracies. The United States is charging backward in that shift faster than anyone. My grandparents fought the Nazis, only to have technocratic fascism return in my lifetime. The people who control the so-called AI are men like Elon Musk, always portraying themselves as saviors while they DOGE questions about their secretive data harvesting and weapons manufacturing. AI decides who dies in Iran and Lebanon. Who knows what else? They’re calling the chatbots “agents” for a reason: chatbots are controlled by engineers and can manipulate people the way secret agents do.
If you ask whether we will control artificial intelligence or artificial intelligence will control us, who is the ‘we’ in that question? Unless there’s a hedge fund manager or tech CEO or the like here in Diane’s online living room right now, I don’t think anyone reading this blog gets to decide who will be in control. As a matter of fact, I think the answer is kind of plain. We will be controlled by wealthy technocrats using weapons of mass deception called AI.
Hope? Fear? We will need them both en masse. Hope that AI will not cause democracy to perish from the earth. And fear leads to anger, and anger leads to change.
Here’s another reason to put an end to this nonsense:
Robert’s Rule | Bob Shepherd | Praxis