John Thompson is a retired teacher and historian in Oklahoma. I admit that I steer clear of AI, in part because of my innate aversion to “thinking machines” replacing humans. I am biased towards people deciding for themselves, but as I watch the polling numbers in the race for President, I wonder if artificial intelligence might be more trustworthy than people who support a man with Trump’s long record of lying and cheating others.
John Thompson writes:
We live in a nation where reliable polling data reveals that 23% of respondents “strongly believe” or “somewhat believe” that the attacks on the World Trade Center were an “inside job.” One-third of polled adults believe that COVID-19 vaccines “caused thousands of sudden deaths,” and one-third also believe the deworming medication Ivermectin was an “effective treatment for COVID-19.” Moreover, 63% of Americans “say they have not too much or no confidence at all in our political system.”
Such falsehoods were not nearly as common in the late 1990s, when I first watched my high school students learn how to use digital media. But I immediately warned my colleagues that we had to get out in front of the emerging technological threat. Of course, my advocacy for digital literacy, critical thinking, and digital ethics was ignored.
But who knew that misuse of digital media would become so harmful? As Surgeon General Vivek Murthy now explains:
It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents.
As a special issue of the Progressive reports, our digital ecosystems, with their deepfakes and disinformation, are distorting reality and increasing “human tendencies toward cognitive biases and groupthink.” It further explains that since 2019 the number of people “who cite social media as their number one source for news has increased by 50 percent.” Moreover:
Half of American adults report that they access their news from social media sometimes or often. For Americans under the age of thirty-four, social media is the number one source for news.
The Progressive further explains that young people “don’t necessarily trust what they read and watch.” They “know that private corporations manipulate political issues and other information to suit their agendas, but may not understand how algorithms select the content that they see.”
We in public education should apologize for failing to do our share of the job of educating youth for the 21st century. Then we should commit to plans for teaching digital literacy.
It seems likely that the mental distress suffered by young people could be a first driver toward healthy media systems. According to the National Center for Biotechnology Information:
According to data from several cross-sectional, longitudinal, and empirical research, smartphone and social media use among teenagers relates to an increase in mental distress, self-harming behaviors, and suicidality. Clinicians can work with young people and their families to reduce the hazards of social media and smartphone usage by using open, nonjudgmental, and developmentally appropriate tactics, including education and practical problem-solving.
According to the Carnegie Council:
Social media presents a number of dangers that require urgent and immediate regulation, including online harassment; racist, bigoted and divisive content; terrorist and right-wing calls for radicalization; as well as unidentified use of social media for political advertising by foreign and domestic actors. To mitigate these societal ills, carefully crafted policy that balances civil liberties and the need for security must be implemented in line with the latest cybersecurity developments.
Reaching such a balance will require major investments – and fortitude – from the private sector and government. But it is unlikely that real regulatory change can occur without grassroots citizens’ movements that demand enforcement.
And we must quickly take action to prepare for Artificial Intelligence (A.I.). In a New York Times commentary, Evgeny Morozov started with the statement by 350 technology executives, researchers, and academics warning of the existential dangers of artificial intelligence: “Mitigating the risk of extinction from A.I. should be a global priority.” He then cited a less alarming position from the Biden administration, which “has urged responsible A.I. innovation, stating that ‘in order to seize the opportunities’ it offers, we ‘must first manage its risks.’”
Morozov then argued, “It is the rise of artificial general intelligence, or A.G.I., that worries the experts.” He predicted, “A.G.I. will dull the pain of our thorniest problems without fixing them,” and it “undermines civic virtues and amplifies trends we already dislike.”
Morozov later concluded that A.G.I. “may or may not prove an existential threat,” but it has an “antisocial bent.” He warned that A.G.I. often fails “to grasp the messy interplay of values, missions and traditions at the heart of institutions — an interplay that is rarely visible if you only scratch their data surface.”
I lack expertise in A.I. and A.G.I., but it seems clear that the dangers of data, driven by algorithms and other impersonal factors, must be controlled by persons committed to social values. It is essential that schools assume their duty to prepare youth for the 21st century, but they can only do so with a team effort. I suspect the same is true of the full range of interconnected social and political institutions. As Surgeon General Murthy concludes in his warning to society about social media:
These harms are not a failure of willpower and parenting; they are the consequence of unleashing powerful technology without adequate safety measures, transparency or accountability.
The moral test of any society is how well it protects its children.

Neither!
Or both!
It depends on context and purpose of use.
The chatbots and LLMs being pushed by corporate hawkers of late amount to a very limited species (you might even say an invasive species) spun off from the much broader field of AI research as we have known it for many decades now. Corporate interests do what corporate interests always do, and maybe folks will eventually catch on … let’s hope so …
well said, Jon
When have humans ever created a technology,
said, “This is just too horrible for us ever to use,”
and then NOT used it?
Never.
Never.
Not once. Ever.
Right now, nations around the world are fielding autonomous or semiautonomous land, air, and sea surveillance, reconnaissance, and killer AI drones. And they are competing in a new arms race for fear that the most spectacular improvements in these areas will be made by someone else. So, while officially people say, “Oh, we are really worried about this; we need to put limitations on AI development, like the laws of robotics in Isaac Asimov’s stories,” in reality they are all about moving full speed ahead. No checks.
We are f____ed.
Algorithm | A Short Story | Bob Shepherd | Praxis (wordpress.com)
Exactly. Technology is a ratchet and it’s always used.
I used to teach a short story called “Second Variety” by Philip K. Dick (if I recall correctly; this was 1979). Like a lot of sci-fi, it was a dystopian view of machines and war. Your comments recall that story, albeit in a fog made of time and distance.
So many great stories on this theme.
TheWeapon-FredricBrown.pdf (mit.edu)
And see Jon’s note, above. The weirdest and most dangerous stuff in AI is going to come from what is, for most people, completely out of left field: something that was being worked on that they didn’t even have any notion was underway. And here’s the kicker: some of it NO HUMAN WILL HAVE ANY IDEA ABOUT, because it will be entirely a matter of AI developing AI and creating code that a human simply couldn’t possibly read.
MICRO DRONES KILLER ARMS ROBOTS – AUTONOMOUS ARTIFICIAL INTELLIGENCE – WARNING !! (youtube.com)
With almost no expertise related to AI, I have a problem based on actual results. When I start to write something, the suggested completion of the sentence often has no relationship to where I am planning to go. That happened several times as I wrote this comment. If I can be given nonsense in the first three sentences of this comment, what gibberish would result in a full article?
More commonly, it won’t be gibberish but mediocrity. These chatbots work by predicting what the next word or phrase will be based on having looked at an enormous corpus of text. So, they deliver up the received idea, the predictable, often wrong crap based on what the typical buffoon out there is saying.
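For anyone curious, here is a toy sketch of that mechanism in Python (my own illustration, nothing like any real chatbot’s code, which uses a neural network rather than a lookup table): count which word follows which in a corpus, then always emit the most common continuation.

    from collections import Counter, defaultdict

    # Toy corpus; a real model trains on billions of words, not two sentences.
    corpus = ("the dog barked . the dog slept . "
              "the cat slept . the cat purred .").split()

    # Tally which word follows which.
    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def predict(word):
        # Return the most frequent continuation seen in the corpus
        # (ties break by first appearance): the "received idea," in effect.
        return follows[word].most_common(1)[0][0]

    sentence = ["the"]
    while sentence[-1] != ".":
        sentence.append(predict(sentence[-1]))
    print(" ".join(sentence))  # the most statistically predictable sentence

Always choosing the most common continuation is precisely why the output drifts toward the predictable middle: by design, the machine echoes whatever is said most often.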
Bob’s explanation gives me a plan. We will teach a chatbot to write like I do, beginning nowhere and ending in some peculiar ether that reminds one of incorrect views of space in the Nineteenth Century. Then we will spread this stuff all over the net, and people will train AI based entirely on my gibberish. And so it will fail.
Sounds like fun!
Yes, AI is a big danger. For one thing, it’s mostly digital, while the “natural” world from which it derives is analogue: an infinite number of points between any two points.
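To make that concrete, here is a small illustration (mine, purely to show the point, not a claim about any particular AI system): digitizing a smooth analogue curve at finite resolution throws information away, and no finite number of digital levels gets it all back.

    import math

    # Sample a continuous (analogue) sine wave, then snap each sample to one
    # of 8 digital levels. The rounding error is information the digital copy
    # has irretrievably lost.
    LEVELS = 8
    samples = [math.sin(2 * math.pi * i / 16) for i in range(16)]
    quantized = [round(s * (LEVELS / 2)) / (LEVELS / 2) for s in samples]
    worst = max(abs(s - q) for s, q in zip(samples, quantized))
    print(f"worst-case quantization error: {worst:.3f}")

More levels shrink the error, but some error always remains; the analogue original holds more than any digital copy can.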
Let’s fight back, and not just by complaining. Let’s start by refusing to equate “information,” which is what machines/robots provide (and sometimes very useful information), with “intelligence.”
“Intel,” military shorthand for “intelligence,” is really just a term for “information.” Remember, Bush II’s forces gathered “intelligence” (“intel”) about Saddam Hussein’s nuclear program, etc., before we carpet-bombed that nation. It turned out the “information” wasn’t very intelligent. There is a difference.
My brother, a career soldier, said “Military ‘intelligence’ is the ultimate oxymoron.”
Let’s start our resistance by calling A.I. “artificial information.” And go on from there.
Read George Tenet’s book about that. Basically, Cheney and Rummy and the DIA just made shit up.
Is the Internet a boon or a danger?
Are smart phones a boon or a danger?
Is social media a boon or a danger?
Technology is what we make of it.