We’re All Unreliable Narrators
Why AI believes your excuses (and what that reveals)
I asked ChatGPT to help me plan out the roadmap for the startup where I work. We needed to get organized, I explained. Prioritize what mattered, create structure so the team could execute efficiently. Made perfect sense when I typed it out.
ChatGPT did what it does. It gave me frameworks for prioritization, ways to structure the backlog, strategies for better planning. All perfectly reasonable suggestions. All completely useless.
Because what I’d left out was the actual truth: we didn’t need another planning session. Planning felt productive and safe. Executing meant my work would be visible and could fail. The AI couldn’t see that because I didn’t tell it. And honestly, I hadn’t quite admitted it to myself yet either.
I was the unreliable narrator here. The character in every story who tells you their version of events while the audience can see they’re missing the point entirely.
The Stories We Tell Ourselves
This is a character you recognize from every good TV show. The lead who swears they’re “totally over” someone while their actions tell a different story. The politician who insists everything is under control while their life visibly falls apart. The person who claims they “work best under pressure” while missing every deadline.
These contradictions don’t break the story. They make it. The gap between what’s said and what’s real is where the drama lives.
We’re no different. We narrate our own lives constantly, piecing together events into stories that make sense. The problem is, we’re never neutral narrators. We highlight some details, skip others, arrange things into arcs that feel true but might not be.
Usually this happens so automatically we don’t even notice we’re doing it. Until we use AI.
When AI Reflects Your Story Back
Here’s what makes AI different from thinking things through on your own. It doesn’t just listen. It works with what you give it. If you type “I’m stuck, I can’t make progress,” it doesn’t argue. It takes your words as current reality. If you say “I’m juggling too much,” it assumes that’s true.
It doesn’t check your calendar or call your colleague to verify. It works only with the story you’ve handed it. This can be unsettling. But it’s also clarifying. Because suddenly you can see that the “truth” you’re asking AI to work with is already just your perspective.
We’re Unreliable Even When We’re Trying Not To Be
Even when you know you should be honest with AI, you often aren’t. Not because you’re lying. But because it’s genuinely hard to communicate reality when your version of it is already filtered through defensiveness, hurt feelings, or what you want to believe.
You describe a conflict with someone. AI reflects back validation. “It sounds like they’re being unreasonable.” You feel better. But somewhere in the back of your mind, you know you left out the part where you were also being difficult.
Or you explain why you haven’t done something important. AI offers gentle reassurance. “It makes sense you’re overwhelmed.” And it does make sense. But the truth is more complicated. You’re not just overwhelmed. You’re also avoiding.
Unless instructed otherwise, AI takes your framing at face value. If you present yourself as the reasonable one dealing with unreasonable people, it will work with that story. If you present yourself as a victim of circumstances, it will validate that too. The only way around this is to catch yourself doing it.
Sometimes, mid-conversation, you have to stop and say “Actually, that’s not quite right. Let me be more honest.” Or “I’m leaving something out here.” Or “The truth is, they’re right to be frustrated with me.” That correction, that moment of going back and admitting you weren’t being fully honest, is where the real work happens. You’re not just using AI to understand yourself better. You’re using it to practice being more honest with yourself.
That’s uncomfortable. It’s easier to let AI validate the version of reality where you’re doing your best and everyone else is the problem. But if you can catch yourself reaching for that comfort and choose honesty instead, you start to see what’s actually true. Not perfect objectivity. Just closer to reality than your first draft was.
What Happened When I Caught Myself
After I realized I’d framed my execution problem as a planning problem, I went back to ChatGPT with a different prompt.
“I just realized we don’t need more planning. I’m hiding in the planning because executing means my work becomes visible and could fail. What am I actually avoiding here?”
The conversation that followed wasn’t about project management or better organization. It was about why planning feels productive and safe—you can always improve it, refine it, make it better—while executing means you’re done hiding and have to see if your work is actually good enough.
The AI didn’t figure this out for me. I did. But seeing my own framing reflected back made the pattern impossible to ignore.
I still catch myself reaching for another planning session when I should be building. But now I know what I’m doing when I do it. That’s what catching your unreliable narration can do. Not fix you. Just show you where you’ve been lying to yourself, so you can work with reality instead.
Back to The Show
Remember that lead who’s “totally over” their ex? The audience sees through it immediately. We’re all yelling at the screen.
Or the one who insists they’re “fine with how things turned out” while making increasingly destructive choices. Anyone watching can see exactly what’s happening. But the character can’t. They’re too close to their own story.
The gap between what they say and what’s real is exactly what makes us lean in. We can see the contradiction they can’t.
Here’s the thing about being the audience: we have perspective the character doesn’t. We see the full picture.
AI isn’t an audience that sees your actions contradicting your words. But it can give you something almost as useful: a literal replay of what you just said.
When the character finally hears themselves out loud, when someone plays back their own words without the emotional charge, that’s when the denial cracks. “Oh. I’m not over them at all, am I?” Or “I’m not fine with this. I’m furious.”
That’s what AI does. It plays back your story without the emotional fog you wrapped around it. And sometimes that’s enough to see the gap between what you’re saying and what’s actually happening. The drama of your own unreliable narration becomes visible.
The Mirror vs. The Answer Machine
This is why I think about AI less as a productivity tool and more as a mirror.
It reflects the way you frame your life, your choices, your frustrations. Sometimes the reflection is sharp. Sometimes it’s uncomfortable. But it shows you something about how you see yourself.
The trick is remembering that it’s a mirror, not a judge. And definitely not a replacement for your own thinking.
I used to ask AI questions like “What should I do about X?” Now I ask “Here’s how I’m thinking about X, what do you notice?”
The first question outsources the thinking. The second uses AI to reflect my thinking back so I can see it more clearly.
What To Do With This
When you’re asking AI for help, notice what you leave out. If you describe a problem but skip the part where you contributed to it, that omission is information.
When AI reflects something back that feels wrong, ask why it feels wrong. Often the discomfort comes from seeing your framing too clearly.
When you catch yourself telling the same story repeatedly, change one detail and see what shifts. “I’m overwhelmed” vs. “I’m choosing to do too much.” Same situation, different narration. Notice which version feels more true.
The point isn’t whether your story is accurate. The point is learning to hear yourself better.
Next time you’re about to ask AI “What should I do about X?”, pause.
Instead, try:
“Here’s what’s happening: [describe the situation, include your constraints, what you’ve tried, how you feel about it]. What patterns do you notice? What am I not seeing clearly?”
Then read what it reflects back. Notice which parts make you uncomfortable. Notice where you want to argue or add “but actually...”
That discomfort is useful. It’s showing you where your story and reality don’t quite match.
You’ll always be an unreliable narrator. That’s fine. That’s human. The question is whether you’re using AI to validate your story or to see it more clearly. Both options are available. Only one of them is useful.
Related reading:
The Mirror Approach to AI - If this resonated, I wrote more about why treating AI as a mirror works better than treating it as an answer machine. It goes deeper into the practice of using AI to reflect your thinking instead of replacing it.
16 Quiet Uses of AI Nobody Talks About - Want specific prompts to try? This is a collection of the private, unsexy AI conversations that actually help you think more clearly. No productivity hacks, just thinking partnerships.
Writing about using AI as a thinking partner for life, not just work. Subscribe if you want more of this.