John Thompson, retired teacher and historian, knows that we are at a fork in the road with artificial intelligence: Will it control us, or will we control it? The evolution and implementation of A.I. are driven by corporations making huge investments and seeking huge profits. The well-being of children is not their uppermost goal.
He writes:
My head has been spinning since I attended the University of Oklahoma's "Applied A.I. in the Workplace" seminar.
The session began with O.U.'s Dr. Shishir Shah, who provided a detailed history of machine learning starting in the 1940s. Dr. Shah walked through the nuances of the phases of A.I., culminating in today's period of "Human Alignment," with its exploding databases. He says we're entering an era where we don't ask whether machines "think," but what A.I. will learn next.
In conversations with Dr. Shah, I was especially impressed with his insights into public education, and what we would need to do to prepare students for the 21st century. He also said:
As both Dr. Ali and Dr. Jones indicated, we all have to engage in open and transparent discussions about AI and its uses. This will help improve our understanding of its potential impacts, which then can help shape appropriate guidelines, standards, and policies. Engagement is important.
Then Dr. Kyle Jones, who leads field engineering for Databricks, warned that anything you think you know about A.I. changes within six months. Dr. Jones described a number of ways that A.I. provides useful results. But, he added, A.I. is making things easier for robots, and then asked, "What about human beings?"
Dr. Jones also questioned the role of corporate profits in rapidly expanding A.I.
Then Dr. Asim Ali, of Auburn University, explained that private investment in A.I. is dominated by the U.S., but that we need international solutions and more regulation. He traced the recent history of A.I. rising, declining, and returning to growth as it approaches long-term expansion. He used Anthropic's "Claude" chatbot as an example of what's possible, as well as of its major shortcomings.
Dr. Ali advocates engaging in conversations about A.I. and its uses; if we are "passive about A.I., the future with AI will not be one we like."
He also reported on “the low likelihood that we will have [A.I.] Superintelligence anytime soon, but that there’s value in discussing a future with Superintelligence because it challenges us to determine our values when using AI and wrestle with the potential negative outcomes for human society.”
That brings me from the nuanced history and possible futures they explained to the more complicated paths toward minimizing the harms of A.I. They offered complex appraisals of multiple paths forward. Perhaps we could refine technocratic skills to program A.I. so that it doesn't turn on humans. Or should we try to launch A.I. so that it learns how to protect humans and build a better world for them?
And, yes, companies want us to use more data, despite the environmental damage that results. But shouldn’t we ask whether our rampant use of digital tools and social media is meaningful enough to justify the harms done by data centers?
And, shouldn’t we do a better job of teaching critical thinking?
So, when I drove home from those sessions, my plan was first to reread my notes and deepen my understanding of their research. But the first thing I found in my mailbox was Jill Lepore's "We, The Robots."
And Lepore's opening sentence was a quote from Geoffrey Hinton, a "Nobel Prize-winning godfather of A.I.": "Unless you can be very sure that it's not going to kill you when you grow up, you should worry."
Lepore cites Daniel Roher, the director of the documentary "The A.I. Doc," which quotes an A.I. insider who says, "I know people who work on A.I. risk who don't expect their children to make it to high school."
Roher further explains that the government has "abdicated the regulation of artificial intelligence, just as it failed to pass any meaningful legislation regarding social media."
Lepore uses Anthropic's effort to create an A.I. "Constitution" for Claude as a "trying" example of the problems with A.I., at a time when President Donald Trump is attacking the American Constitution.
And, she asks whether Anthropic’s efforts are designed to “move toward human participation and democratic governance instead of relying on what appears to be technocratic automatism.”
Lepore recalled reasons for hope, when OpenAI formed a "Superalignment team" and President Joe Biden issued an executive order calling for the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." But "In Trumpworld, this was the equivalent of DEI for computers."
And that brings me back to the O.U. seminar. I don't know enough to compare and contrast its experts' detailed findings on A.I. with those of the experts Jill Lepore drew upon. I heard the O.U. experts as less pessimistic, emphasizing the long history of challenges that humans have overcome. But I believe the biggest difference between them is the tone of their analyses.
For instance, Dr. Jones told me, "There is no single inevitable path that A.I. will follow. As humans we have free will and we get to choose." So his response "is to engage with this, rather than ignore it and hope for the best. After all, hope is not a strategy."