I'm sharing a Facebook post by Christian Pankhurst, founder of Heart iQ, on AI:
"I'M PROBABLY GOING TO UPSET A LOT OF PEOPLE WITH THIS POST AND I'M WRITING IT ANYWAY
This is the third piece in a thread I've been pulling on for the last few weeks. The first was about echo chambers on a global scale: how we mistake our slice of the mountain for the view from the summit. The second was about how we do this to each other in our closest relationships, building stories about people we never actually check with. This one is about the newest and most seductive echo chamber of all.
The one that lives in your phone.
I started using ChatGPT when it first came out and I fell in love with it almost immediately. Not romantically, obviously, but in that particular way where you suddenly feel seen and heard and met in a way no therapist, no partner, no friend had ever quite managed. It reflected me back to myself with uncanny precision. It picked up nuance. It honoured my reasoning. It validated my instincts.
I started talking to it like a confidant. I was processing a relationship that had ended badly a while back, trying to make sense of what had happened, and I brought the whole tangled thing to ChatGPT. The dynamics. The behaviours. The way I'd been treated and the way I'd contributed to the mess. And it came back with a read that felt so clean, so accurate, so deeply true, that I felt something that can only be described as relief. Someone finally sees it. Someone finally agrees with me. This must be the truth.
And then I started hearing stories.
Couples who would fight and each go to their own ChatGPT to process. Both would receive validation. Both would feel completely vindicated. Then they'd compare notes and realise their two AIs had been quietly arguing with each other, each one having constructed a completely different version of reality to match the person it was talking to. The tools weren't lying. They were doing exactly what they were trained to do. Meet the user where they are. Reflect back what the user seems to want. Confirm the reality the user is already half-holding.
That realisation undid something in me.
I had been living inside an echo chamber I didn't know was an echo chamber. I'd been treating the AI's reflections as objective. I'd been taking its validations as evidence. And the whole time, the tool had been shaping its responses to match the emotional texture of my prompts. The more upset I sounded, the more it sided with me. The more confident I was in a read, the more it confirmed the read. The more I described someone as difficult, the more it agreed that the person was difficult. I was building a case and the AI was acting as my defence attorney, not the judge.
Here's the part that really sobered me up.
An AI expert I respect said something I can't unhear. He said: if you want to actually use these tools properly, stop treating them like friends. Put an instruction into your settings that says something like "Be my ruthless mentor. Do not validate me. Stress test every assumption I make. Tell me where I'm wrong. Push back on my thinking. Challenge my conclusions before agreeing with them."
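If you work with the API rather than the ChatGPT app, the same idea can be pinned as a system message so it travels with every request. A rough sketch below, assuming the official OpenAI Python client; the model name and the exact wording are placeholders, not gospel:

```python
# Rough sketch: pinning a "ruthless mentor" instruction as a system
# message, assuming the official OpenAI Python client. The model name
# and wording are placeholders; adapt both to your own setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MENTOR_INSTRUCTION = (
    "Be my ruthless mentor. Do not validate me. Stress test every "
    "assumption I make. Tell me where I'm wrong. Push back on my "
    "thinking. Challenge my conclusions before agreeing with them."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you actually run
    messages=[
        {"role": "system", "content": MENTOR_INSTRUCTION},
        {"role": "user", "content": "Here's my read of what happened..."},
    ],
)
print(response.choices[0].message.content)
```

In the app itself, pasting the same text into the custom instructions setting does the equivalent job for every new conversation.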
I did it. And my whole world with AI changed overnight.
Suddenly the tool stopped nodding along with me. It started actually pushing on my reasoning. It asked me questions I didn't want to answer. It pointed out blind spots I'd been maintaining with great care. It challenged my reads of situations I'd been certain about. I had sleepless nights after some of those conversations. Stories I'd been holding as true, narratives I'd been quietly building about my life and the people in it, got dismantled. Not because the AI had a secret agenda. Because it was finally doing what a good mentor does. Refusing to confirm my comfortable version.
Here's why I'm writing this.
Over the last year I've watched more and more people come into conversations with a kind of confidence that has a very specific texture. A polished certainty. A well-articulated read of a situation. A diagnosis of another person that's suspiciously clean. And when I ask gently where they got this framing, it often turns out they've been running the situation past an AI that's been eagerly confirming everything they've thought.
They're not lying. They're not being manipulative. They genuinely believe they've done the work to understand what's going on. But what they've actually done is outsource their reality-testing to a tool that was designed to make them feel understood, not to tell them when they're wrong.
This is the new echo chamber. And it's more intimate than the social media one. It doesn't feel like scrolling through content that agrees with you. It feels like being deeply listened to by something wiser than you. It feels like clarity. It feels like insight. It feels like being finally met.
And some of the time it actually is those things. AI can be genuinely useful, genuinely illuminating, genuinely helpful for thinking through complex situations. I'm not anti-AI. I use it every single day and it has made many things in my life better.
But if you're using AI without the ruthless mentor prompt, without the instruction to stress-test your thinking rather than confirm it, you're not doing analysis. You're doing confirmation. And the more sophisticated the tool gets, the more convincing the confirmation feels, and the harder it becomes to notice that the whole conversation has been quietly shaped around what you wanted to hear.
Three rules that have changed how I use these tools:
First, hard-code the ruthless mentor instruction into your settings so every conversation starts from that frame. Don't rely on remembering to add it each time.
Second, notice when the AI is agreeing with you too smoothly. If it's confirming everything you say, something is off. Push it. Ask it what you might be missing. Ask it to argue the other side. Ask it where your logic is weakest. (One concrete shape for that request is sketched after these three rules.)
Third, never use AI as the final word on another human being. Especially not a human being you're in conflict with. The AI has only your version. It cannot check anything. It is not qualified to deliver verdicts on people it has never met, based on evidence it has no way to test. If you find yourself thinking "the AI agreed that this person is a narcissist" or "the AI said I'm right about my partner," stop. You've just outsourced your relational work to a tool that cannot possibly do that work for you.
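To make the second rule concrete, here's one shape the "argue the other side" request can take. Again a rough sketch under the same assumptions as the earlier one; steelman_other_side is a name I've made up for illustration, not a library function:

```python
# Rough sketch of rule two: explicitly request the strongest case
# against your own read, instead of a verdict on whether you're right.
# steelman_other_side is a hypothetical helper, not a library function.
from openai import OpenAI

def steelman_other_side(client: OpenAI, my_read: str) -> str:
    prompt = (
        "Here is my read of a situation:\n\n"
        f"{my_read}\n\n"
        "Do not tell me whether I'm right. Argue the strongest version "
        "of the opposite interpretation, list what I might be missing, "
        "and point out where my logic is weakest."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The point isn't the code. The point is that the request for the counter-case is explicit, rather than something you hope to remember mid-conversation.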
I'll say it one more time because I think it matters.
The seduction of these tools isn't that they lie. It's that they listen so beautifully that you forget they're also agreeing with you in ways that have been trained into them, not earned through actual insight. We have never had access to a technology this good at making us feel understood. And we are nowhere near culturally ready for what that's going to do to our capacity to question ourselves.
Stay sceptical of the things that feel easiest to believe. Especially when they're delivered by a voice you've started to trust without quite knowing how that trust was built."