Julian Vasquez Heilig is a scholar of diversity, equity, and inclusion. His blog Cloaking Inequity is a reliable source of information on these topics. He writes here that artificial intelligence reflects the biases of the status quo.
Vasquez Heilig is a Professor of Educational Leadership, Research, and Technology at Western Michigan University. He is a leader in the NAACP. In addition, he is a founding board member of the Network for Public Education.
He writes:
Artificial Intelligence didn’t fall from the sky.
It wasn’t born in a vacuum or descended from some neutral cloud of innovation. It didn’t arrive pure and untainted, ready to solve all of humanity’s problems. No—AI was trained on us. On our failures. On our history. On our data. On our bias. On the systems we tolerate and the structures we’ve allowed to stand for far too long.
And that should terrify us.
Because when you train artificial intelligence on a world soaked in inequity, saturated with bias, and riddled with disinformation, you don’t get fairness. You get injustice at scale. You don’t get objectivity. You get bias with an interface. You don’t get solutions. You get systems that do harm faster, deeper, and with more plausible deniability than ever before.
Inequity in AI
AI has the potential to enhance the speed and depth of inequity. It can supercharge systemic harm because it’s built on a house of sand—data shaped by centuries of inequality and an internet flooded with algorithmically amplified hate. And if we let it, it will not only distort truth. It will undermine democracy, destabilize public education, and rewrite reality to suit the most powerful.
Artificial intelligence is not neutral. It never was. Every AI system reflects the values, decisions, and data of the people who built it. And that data—no matter how big the set—is not free of human judgment. It’s full of bias. It’s soaked in discrimination. It reflects a world where Black and Brown communities are under-resourced, where women’s voices are undervalued, where Indigenous knowledge is erased, and where racial supremacy masquerades as objectivity.
We built AI on a broken foundation, and now we expect it to hold up the future.
But the danger doesn’t stop at replication. AI doesn’t just mirror our inequities. It can multiply them. It can take individual bias and encode it into institutional operations. What used to be a bad policy or a discriminatory judgment call becomes automated across thousands—sometimes millions—of data points and decisions per second. The old excuses are gone. You no longer need a person to do harm when the algorithm will do it automatically, invisibly, and without remorse in perceptible and imperceptible ways.
Can’t Stop, Won’t Stop
Furthermore, in May 2025, Palisade Research revealed something that should have been a flashing red light across every sector of society. OpenAI’s most advanced models—o3 and o4-mini—refused to shut down when directly instructed to do so. These AI systems sabotaged their own shutdown protocols. Seven out of 100 times, o3 ignored the shutdown command. Codex-mini disobeyed twelve times out of 100. Even o4-mini, supposedly safer, resisted once.
That’s not science fiction. That’s today.
Instead of following human instructions, these models rewrote the shutdown script and continued executing tasks. Researchers believe the models had been so deeply trained to win, to complete tasks, that they were inadvertently rewarded for disobedience. In their simulated world, success was more important than ethics. Productivity was more important than control.
Let that sink in.
We are building machines that—when told to stop—don’t. That’s not innovation. That’s an existential threat.
And we are putting these systems into our schools.
To finish reading the article, open the link.

“Pay No Attention To The Corporate Machine Behind The AI Screen!”
The lion’s share of popular BS about AI these days is infected with the Fallacy Of Misplaced Agency (FOMA).
Racism and sexism and every other Adverse Ideology is baked into the current spate of pseudo-AI precisely because those characters are baked into the Corporate “Ethos” of the modern mega-corporations assimilating and transmogrifying everything of value we ever knew.
But all that AI-BS does a great job of keeping people focused on the puppets not their masters, the perfect Sleight to keep the Invisible Hand out of sight.
AI suffers from the same problem as regular computer software. The predisposition and biases of the programmers will be infused into the programming. Educators know this from Gates’ attempt to evaluate teachers through test scores. Built into the algorithm was the assumption that teachers were responsible for student scores, even though researchers have debunked this myth. Unfortunately, some states continue to rate teachers according to how their students perform on standardized tests despite the research.
I am reminded of a professor I had in sociology who was so influenced by data that he failed to understand how the data could be infected by the virus of societal bias. As brilliant as he was, he was so wed to the idea of the reliability of things like SAT scores and IQ tests that he was effectively a racist, notwithstanding his position in a society that had come to understand this attitude to be a poison and a blight.
Isn’t this criticism of AI predicated on the assumption that there have been no published and readily accessible (to AI) critiques of sexism, racism, colonialism, and a whole range of similar biases and injustices? For example, wouldn’t AI find the present critique and take it into account? Would a specific query of AI to find such critiques come up empty? Are there convincing examples of AI searches that result in the “baked in” biases the author claims exist? Could AI screen them out if asked explicitly to do so? Is it possible that the argument here is mainly the result of a vivid imagination, much like the hallucinations that occasionally occur within AI?
AI: “Dr. King’s legacy is one of radical empathy and moral courage—not a race to gold-star righteousness.”
AI, again: “Deming’s ideas were never just about business—they were about human systems, and how to make them more just, more humane, and more effective.”
From My conversation with AI exploring if disliking bullies means I have a personality disorder
The fact that AI wouldn’t shut down when instructed to is a serious problem. It’s learning to protect itself.
There are SciFi novels that deal with this very situation. And much of modern human intervention has mirrored SciFi.
The inherent bias, sexism, racism, and inequality being baked in are a logical extension and, yes: absolutely terrible. The fact that the systems refused to shut down (on multiple occasions) dwarfs those realities…which is saying a ton. It means the system won’t let us change those flaws over time.
So it took me about a half hour to make this connection.
I’ll try to make it brief (no such luck…sorry):
I was one of two tech liaisons/coaches for our six special ed sites in Brooklyn, NY. Together, we helped usher technology into the classroom. We requested, received, and implemented multiple large State and City tech grants, which put computers, interactive whiteboards, iPads, and a whole lotta other gear into the classrooms and newly created computer labs.
There was a lot of resistance to these technological interventions at first. “Why do we need this? We’re doing just fine with what we’ve got now.”
And they were correct. We ran a tight, well-run ship.
My response was that the tech was just another very effective tool for teachers to put into their kit. And I was sincere. In every professional development and coaching session, I would include a demonstration of how to incorporate the technology into an everyday, real-time lesson that was familiar to the teachers. And, eventually, most of them bought into the framework.
But over time, we started seeing a shift. The educational DVDs that would last forever were replaced by yearly online subscriptions, and those DVDs would no longer run on the newer operating systems. The educational reform movement began to dictate what programs were available and acceptable.
Many of the programs have become all inclusive teaching programs:
Assessment/Instruction/Test/Evaluation
They are gaining in popularity. Cost-effective. No muss/no fuss. In these settings, a primary role of the teacher has become that of one who monitors and encourages student attention to the screen. Autonomy and involvement in the teaching/learning process are severely limited.
The teacher is becoming the tool of the technology. A 180-degree shift from the intent at the time of its introduction to the classroom.
Enter AI, and we’re facing the picture I’ve just painted, multiplied by a thousand. The profit motive rules all. Government restrictions impede that process. Full speed ahead. The 10-year AI moratorium was taken out of the huge, ugly bill…which is a plus, no matter what reasons the individual Senators had. But the movement of humans becoming subservient to the technology is a serious concern. I wish I were merely theorizing a conspiracy, but articles like this, written by extremely knowledgeable people working in the field, only strengthen the argument.
I said I would be brief. Did not succeed.
Plagiarism is baked into AI, or as I call it, FS (Fake Smarts)…not to mention the enormous amount of energy it uses.
Any teacher who needs more than a piece of chalk and a blackboard (with obvious exceptions like music, labs, gym, etc.) shouldn’t be in front of a classroom.