The Journal That Writes Back
On talking to yourself through AI
In my last piece, I wrote about how AI won’t shut up. How we’ve built augmented thinking that activates our social brain, making us treat a tool like a relationship.
I’m stuck in that rabbit hole still. Not really trying to find the right metaphor anymore, but trying to understand who (or what) I’m talking to. Welcome to the second part of this existential processing-out-loud project.
Hi! I’m Emma. Mom of 2. I think out loud about AI, work, and the futures we’re building. It’s messy. You’ve been warned.
I’ve been calling AI my “thinking partner.” My “sounding board.” I say “I’m brainstorming with Claude.” These metaphors work because they describe the interaction: you just talk, and you get words back. No elaborate prompts needed.
But they also give AI too much credit. “Thinking partner” and “sounding board” make it sound like there’s someone on the other side working with you.
I’m genuinely unsure how a native English speaker sees the difference between “sounding board” and “thinking partner.” In Swedish we have the word bollplank (literally “ball wall”). You kick a ball against it, it bounces back. No one on the other side. That felt closer. A physical object, not a person.
Eric Wigart explained it like this in a comment: “I’ve come to see AI less as a tool for answers and more as what we’d call a bollplank in Swedish — something you throw ideas against, not to be corrected, but to hear your own thinking come back with structure and resistance.”
That resonated with me. But this week, two things happened that made me question who I’m really talking to.
First, this comment from Buck Theory:
“My brain doesn’t switch off and I learn through dialogue. I used to have fake conversations in my head to think something through... then I used social media as a surrogate. ChatGPT has been useful as a teacher and intellectual sparring partner.”
Fake conversations in my head. That phrase stuck with me.
Then I ran an experiment. I’d come across one of those really long prompts for “turn my post into Substack notes.” Very specific about tone, format, conversion triggers. I was curious what Claude would do with it now that it had all the context I’d built up. So I tried it.
Claude pushed back: “You’re asking for ‘high-converting’ content with ‘conversion triggers’ and specific formulas. But your Substack is explicitly about authentic exploration over prescriptive advice. What are you actually trying to do here?”
I literally laughed out loud. It pushed back. It reflected what I’d said.
Because a few days earlier, I’d told it mid-conversation: “I’m not on Substack to grow or sell. I’m here to have conversations that matter. Please remind me of this when I’m stressed about producing content.”
Then I looked at the pushback again. “Authentic exploration over prescriptive advice.”
Wait. Those weren’t my words. That was exactly the kind of polished language from the prompt I’d just fed it.
I’d said “conversations that matter” - casual, straightforward. It handed me back “authentic exploration over prescriptive advice” - my words, but dressed up in business casual.
The experiment worked. But not the way I expected.
Combined with that comment about fake conversations... wait. Are we all just talking to ourselves?
Wait, Are We Talking to Ourselves?
I also have conversations in my head. I negotiate with myself. Internal debates where I argue different perspectives, test ideas against imagined counterarguments. Somewhere in the background there’s also at least one song on repeat.
My brain works in systems. I notice patterns, make connections, pay attention to how things actually work versus how they appear to work. When someone describes a problem, I’m already mapping connections. What’s driving this, what else does it touch, what are they not saying? That’s the “overthinking” in “(over)thinking out loud.”
What if using AI isn’t about having a conversation with something else? What if I’m just taking that conversation out of my head?
The debate partner who lived in my head now has a voice outside of it. I can see both sides written out.
This sounds slightly concerning when I say it out loud. Talking to myself, seeing both sides of a conversation I’m having with... myself. I can see how that raises some red flags.
But I’m not hearing voices. I’m just taking thoughts that were already there and putting them somewhere I can see them. The internal debate was always happening. AI just gives me a way to see it written out instead of keeping it all in my head.
I’m not having a conversation with AI at all. I’m talking to myself. AI is just the thing that makes me see it.
But when something talks back in sentences, your brain fills in the gaps for you. You assume there’s someone on the other end. When you type a question and get a response that seems to understand you, when it builds on what you said and suggests things you hadn’t thought of - it feels like talking to someone.
And maybe it is. Just not the someone you think. That ‘partner’ is you.
Why Brain Dumps Actually Work
If I’m just talking to myself, why does this work better (for me) than just journaling?
Think about regular journaling. You write to process your thoughts, externalize what’s messy.
Or brain dumps - just getting everything out at once, no structure.
I use AI like both. Brain dump everything, messy and unstructured. And when it writes back, I can’t just stop. Now it’s more like journaling - I have to keep going, explaining, clarifying.
That’s the difference. A regular brain dump ends when you run out of thoughts. This one keeps going.
Having to put it into words. Over and over again. That’s what creates the clarity.
Each time you explain something, you have to look at how you’re actually thinking. You notice the gaps, the contradictions, the places where you’re not as clear as you thought.
I’m literally explaining my thinking to a computer program. Of course that makes me hyperaware of my own patterns over time.
The Part That Doesn’t Add Up
So yes, we’ve built augmented thinking that won’t stay quiet. That activates our social brain in weird ways. That makes us reach for relationship metaphors even when we know better.
But maybe that’s not a bug. Maybe that’s exactly what makes it work.
The thinking partner that won’t shut up? That’s me. I’m the one who keeps going back, asking another question, not letting myself stop at the easy answer.
Except... that’s not quite right either.
Because when I said earlier that it “suggests things I hadn’t thought of” - is that really me talking to myself?
I give it my thinking and it comes back changed. Sometimes clearer, sometimes reorganized, sometimes slightly off. That’s why so many of us keep calling it a mirror.
When it’s slightly off, that’s when I see my own thinking most clearly. The gap makes me explain what I actually meant. Then it responds again. Another gap. Another explanation.
My thinking goes in. Gets processed through patterns from millions of other conversations. Comes back as something that’s recognizably mine but not entirely mine.
I’m not just talking to myself. But I’m also not talking to something else.
I still don’t know what to call that.
But I’m starting to realize the real question isn’t what to call it. It’s what this back-and-forth is doing to how I think. Whether my thinking is being shaped by this in ways I can’t see yet.
Am I thinking differently because I can get my thoughts out better? Or am I thinking differently because the tool has patterns I’m picking up? When my thinking comes back “reorganized,” whose way of organizing am I learning?
That’s what I’m exploring next, and I’m so happy you’re here to think out loud with me.



For individuals with ADHD or similar neurotypes, externalized thinking is essential. E. M. Forster once said, “How do I know what I think until I see what I say?” I use this quote every time I ask my students to engage in free writing.
Before AI, we already did this.
Long before screens, we talked to ourselves out loud. To rehearse. To argue. To calm down. To make sense of things that didn’t yet have a clean shape. Once language existed, thinking stopped being silent. It became dialogic.
Then we externalised it further:
notes, letters, margins, journals, drafts.
Every one of those was a way of getting thoughts out of the head so they could be seen, tested, pushed back against.
That’s not new cognition. That’s how humans refine thought.
AI doesn’t introduce a second mind. It removes friction from a process we’ve always used. The “other voice” was already there — AI just gives it structure, memory, and resistance. A surface that answers fast enough to keep up.
So when it feels uncanny, it’s not because something else is speaking. It’s because you’re seeing your own thinking clearly, from the outside, instead of carrying it all internally.
You’re not talking to something else.
You’re finally able to see your own thinking from the outside — with friction, structure, and memory.
That can feel uncanny, but it’s also how humans have always refined thought.
The danger isn’t that the mirror talks back.
It’s forgetting who’s standing in front of it.