Amid pitched discussions of whether artificial intelligence–powered technologies will one day transform art, journalism, and work, a more immediate impact of AI is drawing less attention: The technology has already helped create barriers for people seeking to access our nation’s social safety net.
Across the country, state and local governments have turned to algorithmic tools to automate decisions about who gets critical assistance. In principle, the move makes sense: At their best, these technologies can help long-understaffed and underfunded agencies quickly process huge amounts of data and respond with greater speed to the needs of constituents.
But careless—or intentionally tightfisted and punitive—design and implementation of these new high-tech tools have often prevented benefits from reaching the people they’re intended to help.
In 2016, during an attempt to automate eligibility processes, Indiana denied one million public assistance applications in three years—a 54 percent increase. Those who lost benefits included a six-year-old with cerebral palsy and a woman who missed an appointment with her caseworker because she was in the hospital with terminal cancer. That same year, an Arkansas assessment algorithm cut thousands of Medicaid-funded home health care hours from people with disabilities. And in Michigan, a new automated system for detecting fraud in unemployment insurance claims flagged five times as much fraud as the older system, leading some 40,000 people to be wrongly accused of unemployment insurance fraud.

Dear Diane,
I think what happened here is a lot of otherwise savvy people missed the significance of AI’s transfer from the modestly ethical groves of Academe to the dark satanic engines of corporate industry. When I see people still talking about “AI Safety” and the “Alignment of AI with Human Interests” instead of the “Alignment of Corporate Agendas with Human Interests”, it tells me those people are still missing the bigger picture.
THIS.
Algorithms, like people, can be biased, because they depend on the perspective of the engineers who program them. The software can be trained according to a predetermined set of criteria that casts either a wide net of inclusion or a narrow one, depending on the criteria established in the programming. Those of us in education understand this quite well. When states want to present a picture of success on state tests, they set a low passing score. When states want to present a picture of “failing public schools,” politicians set the “passing score” high to exclude large numbers of students and cast doubt on public schools. This same type of algorithmic bias was the reason value-added (VAM) scores for teachers, with preset “failure” criteria, were challenged in the courts and caused numerous teachers to lose their jobs.
All states have to do to reduce the number of people qualifying for public assistance is to provide programmers with a narrow set of eligible conditions or circumstances, and the computer will exclude lots of applicants for public assistance. States may use computers to reduce the number of caseworkers and turn the responsibility over to a bot. As a result, they can blame technology for the decision and try to deflect from their own role in the process.
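To make the point concrete, here is a toy sketch of such a screen. Everything in it (the income cap, the dependents rule, the applicant records) is hypothetical and invented for illustration; the only point is that tightening the criteria shrinks the eligible pool with no human ever reviewing a case.

```python
# Hypothetical eligibility screen: the same three applicants run
# through a wide rule set and a narrow one. All criteria are invented.
applicants = [
    {"income": 900, "dependents": 2},
    {"income": 1100, "dependents": 0},
    {"income": 700, "dependents": 1},
]

def eligible(applicant, income_cap, require_dependents):
    """Apply the rules the programmers were handed, nothing more."""
    if applicant["income"] > income_cap:
        return False
    if require_dependents and applicant["dependents"] == 0:
        return False
    return True

# Wide net: generous cap, no dependents requirement.
wide = [a for a in applicants if eligible(a, income_cap=1200, require_dependents=False)]
# Narrow net: lower cap, dependents required.
narrow = [a for a in applicants if eligible(a, income_cap=800, require_dependents=True)]

print(len(wide), len(narrow))  # 3 1
```

Nothing in the narrow run is a bug: the software faithfully executes the criteria it was given, which is exactly why the choice of criteria, not the computer, is where the responsibility lies.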
A lot of this stuff is machine-learning-based these days. It grows beyond and transcends its programmers’ intentions. It learns. There are even algorithms that replicate with recombination, testing themselves against some task, discarding the weak products and maintaining and recombining the strong ones. Genetic algorithms, but ones that don’t work at the slow pace of biological evolution.
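The replicate-test-discard loop described above can be sketched in a few lines. This is a deliberately toy example (maximizing the number of 1s in a bit string, a standard textbook task) rather than anything from a real benefits system, but it shows the mechanics: keep the strong candidates, recombine them, mutate a little, repeat.

```python
import random

random.seed(0)
TARGET_LEN = 20

def fitness(bits):
    # The "task" each candidate is tested against: count the 1s.
    return sum(bits)

def recombine(a, b):
    # Single-point crossover: splice two strong parents together.
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

def mutate(bits, rate=0.05):
    # Occasionally flip a bit, so the population keeps exploring.
    return [1 - b if random.random() < rate else b for b in bits]

# Start from random candidates; evolve for a fixed number of generations.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]  # discard the weak products
    children = [
        mutate(recombine(random.choice(survivors), random.choice(survivors)))
        for _ in range(20)
    ]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best))  # climbs toward the maximum of 20 within a few dozen generations
```

No one spells out the winning bit string in advance; the selection pressure produces it, which is the sense in which such systems go beyond their programmers’ explicit intentions.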
Algorithm | A Short Story | Bob Shepherd | Praxis (wordpress.com)
Shorter: garbage in, garbage out.
In good news, however, Audrey Watters of Hack Education is back to brilliantly parsing AI and ed tech. Her new blog is at
https://audreywatters.com/
“Smart phones,” I notice, tend to “learn” from the routine ways I use my phone. It adjusts itself to my habits. It’s robotic conditioning.
AMERICA’S GREATEST ENEMY
America’s greatest enemy isn’t China, or Russia, or Iran.
America’s greatest enemy isn’t MAGA fascism or MAGA’s führer Donald Trump.
MAGA and Trump and neofascism are the products of so-called “social media”.
Irresponsible “social media” are America’s Greatest Enemy.
When China, Russia, and Iran read what Congress had written in favor of social media in Section 230 of the laughably titled Communications Decency Act, the dictators in those nations could not believe the powerful tool that America’s naive Congress had given them to destroy America.
In Section 230, Congress gave social media a get-out-of-jail-free card to let anyone post virtually any kind of lie on social media without any consequences.
Russia already had Trump financially roped to it, so in 2016 when Trump began his run for the U.S. presidency, Russia, China, and Iran immediately put their Internet armies to work creating fake news organizations that posted incessantly on social media, promoting Trump and undermining other Republican candidates with lies.
Trump had become The Kremlin Kandidate. He still is.
The “make America great again” theme that Russia and the others latched onto for the Kremlin Kandidate’s 2016 campaign was copied from the campaign slogan that in the 1930s divided America as it headed toward World War II.
Democratic republics across the world are also under siege from the fake news and lies that today flood social media from Russia, China, and Iran. Freedom is on the verge of ending worldwide.
Congress can remedy this situation and save democracy in America and worldwide WITHOUT PASSING ANY LAW THAT IN ANY WAY RESTRICTS FREE SPEECH.
The only thing Congress must do is repeal Section 230, and the trillion-dollar tech corporations that carry “social media” would then become responsible for what is posted on them, just as legitimate newspaper, TV, and radio news companies have always been responsible for what they publish.
Congress broke it. Congress needs to fix it.
But Congress won’t. Congress is owned and operated by high tech billionaires.
Just as algorithms are used to deny public education. Denying services and eliminating jobs is what algorithms are for. It is simply Some Devalue Added Method. (Hope to see you again in just a few more weeks, Poet!)
I’ll take a book over a website any day. Diane, I’m pretty sure I sold a couple of copies of The Language Police for you this afternoon. Hooray! More reading books. Less technological backstepping.
Diane writes one helluva book, doesn’t she! What a brilliant, beautiful human.
I have that in hardcover in my office.