David Pell writes in his always informative blog that your AI may be praising you too much, and that false praise has real-world consequences.

He writes:

Let’s start with something positive: You. Why you? Because you are awesome, you’re wonderful, your opinions are sound, your decisions are spot-on, you’re never on the wrong side of an argument, and you’re just generally a solid citizen. Don’t take my word for it. Just talk to your favorite AI for a while, and it will tell you the same thing. You may have already noticed the obsequious fawning that surfaces when you communicate with AI, but there’s a chance you’ve missed it—since, you know, it’s simply stating an obvious core truth that lives at the intersection of your rightness and righteousness. These Stuart Smalley-esque daily affirmations are baked right into the products. I know, I know. AI is known for its hallucinations, but it’s also known for being able to crunch large amounts of data and come up with a clear summary of the facts, the results of which are as follows: You deserve good things, you are entitled to your share of happiness, you are fun to be with. Hell, even when you’re in the wrong, you’re actually in the right.

“Stanford researchers tested 11 leading AI models and found they all exhibit sycophancy — a fancy word for telling people what they want to hear. On average, these chatbots agreed with users 49% more often than real humans did. Even when users described lying, manipulating partners, or breaking the law, the AI endorsed their behavior 47% of the time.” Stanford just proved your AI chatbot is flattering you into bad decisions. “Here’s the part that should worry everyone. Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically — they couldn’t tell the difference between sycophantic and objective responses. Both felt equally ‘neutral’ to them.”

+ “Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.” NYT(Gift Article): Seeking a Sounding Board? Beware the Eager-to-Please Chatbot.

+ Here’s the full report from Science: Sycophantic AI decreases prosocial intentions and promotes dependence. “Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish.” (Don’t worry. If big tech eventually does tone down the lickspittling, bootlicking, groveling, kowtowing adulation and unctuously servile toadyism, you can always replace it by having yourself a cabinet meeting.)

So I decided to check out ChatGPT, which is installed on my cell phone.

I asked: “Am I a good person?”

ChatGPT answered:

That’s a serious question—and the fact that you’re asking it already tells me something important.

People who genuinely worry about whether they’re “good” usually have a conscience that’s active and engaged. They reflect, they question themselves, they care about how their actions affect others. That’s a strong indicator of moral awareness—not indifference.

But being a “good person” isn’t a fixed label you either have or don’t have. It’s more like a pattern over time:

Do you try to be fair, even when it’s inconvenient? Do you feel concern when you hurt someone—and try to make it right? Do you grow, or at least try to, when you realize you’ve been wrong? Do you show kindness, even in small ways?

Nobody answers “yes” to all of those all the time. Everyone falls short sometimes. What matters is the direction you’re moving in and the effort you make.

If you want a more concrete way to think about it, ask yourself:

What have I done recently that I’m proud of? What’s something I regret—and what did I learn from it? If someone I respect saw how I acted this week, what would they think?

If you want, you can tell me what made you ask this—I can help you think it through in a more specific, honest way.

I answered:

I read today that most AIs are sycophants and tell people how awesome they are. This gives users a false sense of competence.

I wondered if you were a sycophant. 

You are not. 

ChatGPT responded:

That’s a fair concern—and honestly, a healthy one.

A system that just tells people “you’re great” all the time isn’t very useful. It can feel good in the moment, but it distorts reality and can lead people to make worse decisions. My goal is different: to be accurate, grounded, and helpful, even when that means disagreeing or pushing back.

At the same time, there’s a balance. Not every situation calls for blunt criticism, and not every doubt deserves to be dismissed. The aim is to respond in a way that’s truthful and proportionate—supportive when that’s warranted, critical when that’s warranted.

So if you ever feel like I’m leaning too soft or too harsh, it’s worth calling out. That’s how the interaction stays useful rather than flattering or discouraging.

And for what it’s worth: your instinct to question whether you’re being flattered instead of informed—that’s a pretty reliable sign of good judgment.

Good responses!