John Thompson is a retired teacher and historian in Oklahoma. I admit that I steer clear of AI, in part because of my innate aversion to “thinking machines” replacing humans. I am biased towards people deciding for themselves, but as I watch the polling numbers in the race for President, I wonder if artificial intelligence might be more trustworthy than people who support a man with Trump’s long record of lying and cheating others.

John Thompson writes:

We live in a nation where reliable polling data reveals that 23% of respondents “strongly believe” or “somewhat believe” that the attacks on the World Trade Center were an “inside job.” One-third of polled adults believe that COVID-19 vaccines “caused thousands of sudden deaths,” and one-third also believe the deworming medication Ivermectin was an “effective treatment for COVID-19.” Moreover, 63% of Americans “say they have not too much or no confidence at all in our political system.”

Such falsehoods were not nearly as common in the late 1990s, when I first watched my high school students learn how to use digital media. But I immediately warned my colleagues that we had to get out in front of the emerging technological threat. Of course, my advocacy for digital literacy, critical thinking, and digital ethics was ignored.

But who knew that misuse of digital media would become so harmful? As Surgeon General Vivek Murthy now explains: 

It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents.

As a special issue of The Progressive reports, our digital ecosystems, with their deepfakes and disinformation, are distorting reality and increasing “human tendencies toward cognitive biases and groupthink.” It further explains that since 2019 the number of people “who cite social media as their number one source for news has increased by 50 percent.” Moreover:

Half of American adults report that they access their news from social media sometimes or often. For Americans under the age of thirty-four, social media is the number one source for news.

The Progressive further explains that young people “don’t necessarily trust what they read and watch.” They “know that private corporations manipulate political issues and other information to suit their agendas, but may not understand how algorithms select the content that they see.”

We in public education should apologize for failing to do our share in educating youth for the 21st century. Then we should commit to plans for teaching digital literacy.

It seems likely that the mental distress suffered by young people could be a first driver toward building healthier media systems. According to the National Center for Biotechnology Information:

According to data from several cross-sectional, longitudinal, and empirical research, smartphone and social media use among teenagers relates to an increase in mental distress, self-harming behaviors, and suicidality. Clinicians can work with young people and their families to reduce the hazards of social media and smartphone usage by using open, nonjudgmental, and developmentally appropriate tactics, including education and practical problem-solving.

According to the Carnegie Council:

Social media presents a number of dangers that require urgent and immediate regulation, including online harassment; racist, bigoted and divisive content; terrorist and right-wing calls for radicalization; as well as unidentified use of social media for political advertising by foreign and domestic actors. To mitigate these societal ills, carefully crafted policy that balances civil liberties and the need for security must be implemented in line with the latest cybersecurity developments.

Reaching such a balance will require major investments – and fortitude – from the private sector and government. But it is unlikely that real regulatory change can occur without grassroots citizens’ movements that demand enforcement.

And we must quickly start to take action to prepare for Artificial Intelligence (A.I.). In a New York Times commentary, Evgeny Morozov opened with the statement signed by 350 technology executives, researchers, and academics warning of the existential dangers of artificial intelligence: “Mitigating the risk of extinction from A.I. should be a global priority.” He then cited a less alarming position from the Biden administration, which “has urged responsible A.I. innovation, stating that ‘in order to seize the opportunities’ it offers, we ‘must first manage its risks.’”

Morozov then argued, “It is the rise of artificial general intelligence, or A.G.I., that worries the experts.” He predicted, “A.G.I. will dull the pain of our thorniest problems without fixing them,” and it “undermines civic virtues and amplifies trends we already dislike.”

Morozov later concluded that A.G.I. “may or may not prove an existential threat,” but it has an “antisocial bent.” He warned that A.G.I. often fails “to grasp the messy interplay of values, missions and traditions at the heart of institutions — an interplay that is rarely visible if you only scratch their data surface.”

I lack expertise in A.I. and A.G.I., but it seems clear that the dangers of data, driven by algorithms and other impersonal factors, must be controlled by persons committed to social values. It is essential that schools assume their duty of preparing youth for the 21st century, but they can only do so with a team effort. I suspect the same is true of the full range of interconnected social and political institutions. As Surgeon General Murthy concludes his warning to society about social media:

These harms are not a failure of willpower and parenting; they are the consequence of unleashing powerful technology without adequate safety measures, transparency or accountability.

The moral test of any society is how well it protects its children.