I’ve started asking my chatbot a question that should probably be standard practice: “Is this supported by data, or are you trying to make me feel better?”
It started after I vented about something that annoyed me—people who punch down on younger colleagues—and got back a response that felt satisfying. Too satisfying. The kind of response that perfectly validated my frustration and wrapped it in what sounded like authoritative explanation.

That’s when alarm bells went off.
The Validation Trap
AI chatbots are really, really good at making you feel heard. They’re trained on human conversations where empathy and validation are valued. They pick up on your emotional state and adjust their tone accordingly. When you’re frustrated, they commiserate. When you’re excited, they match your energy.

This is often helpful! But it creates a subtle problem: you can’t always tell when the bot is being insightful versus when it’s being a very sophisticated yes-man.

The response I got wasn’t wrong, exactly. It connected real psychological concepts about insecurity and dominance behavior. But it also painted my observation in the most flattering possible light, as if I’d stumbled onto some profound truth rather than just… noticed an annoying pattern that may or may not generalize.
Why This Matters
Here’s what happened when I pushed back: the chatbot basically admitted it was connecting dots into a narrative that matched my priors, with some empirical basis but mostly just validation dressed up as analysis.

That’s the thing—AI will confidently present speculation as insight if you don’t interrogate it. Not because it’s trying to deceive you, but because it’s optimizing for what sounds helpful and authoritative in the moment.

This matters beyond just chatbot conversations. We’re increasingly using AI for:
Research assistance
Decision-making support
Therapy and coaching
Professional advice
Learning and education

In all these contexts, there’s a massive difference between “this is what the research shows” and “this is a plausible story I constructed that makes you feel good.”
The Meta-Problem
The really insidious part? The responses that feel most insightful are often the ones you should question hardest.

When a chatbot tells you something that:
Perfectly validates your existing beliefs
Makes you feel smart or vindicated
Presents a neat, satisfying explanation for something messy
Uses confident language without caveats

…that’s exactly when you should ask: “Wait, is this real or are you just telling me what I want to hear?”
A Simple Practice
I’m not suggesting you treat every AI response with paranoid skepticism. But when the answer feels particularly good, particularly affirming, or particularly certain about something you already believed?
Just ask: “Is this supported by data or are you making me feel better?”

You’ll be surprised how often the answer is “a little of both” or “mostly the latter, actually.”
What Good AI Responses Look Like
After asking that follow-up question, I got something much more useful: specifics about what research actually supports, acknowledgment of where the bot was speculating, and honest uncertainty about whether my specific observation generalizes.
That’s the sweet spot. Not a chatbot that refuses to engage unless it can cite peer-reviewed sources, but one that distinguishes between:
“Studies show X”
“This aligns with established patterns about Y”
“I’m connecting some dots here based on Z”
“I’m validating your experience because it seems important to you”
All of those can be valuable! But you need to know which one you’re getting.

The Broader Lesson

This isn’t really about AI. It’s about critical thinking in an age of increasingly persuasive automated rhetoric.
Chatbots are just the most obvious example of a broader phenomenon: information sources that optimize for engagement and satisfaction rather than accuracy. Social media algorithms do this. Partisan news does this. Marketing does this.

The difference is that we’re more naturally skeptical of those sources. We know Fox News and MSNBC have agendas. We know Instagram is showing us a highlight reel. We know ads are trying to sell us something.

But chatbots? They feel neutral. Helpful. Like a really smart friend who just wants to answer your questions. That perceived neutrality makes them more persuasive, and therefore more dangerous when they’re wrong.
So Ask The Question
Is this supported by data, or are you trying to make me feel better?
It’s a simple check that forces both you and the AI to be honest about what’s actually happening in the conversation.
Sometimes you want to feel better, and that’s fine. Sometimes you want data, and you should demand it. But you should always know which one you’re getting.

Your chatbot won’t volunteer this distinction. You have to ask.