Augmented Cognition That Won't Shut Up
We built a thinking tool that talks back, and now we can't stop treating it like a person
My two-year-old asked for cereal the other day. I asked if he wanted milk with that.
“Yes. And a bowl. And a spoon.”
He’s learning new words every day and is still figuring out when to use which one. He doesn’t know yet that when you say cereal, you can mean the whole thing.
My five-year-old has figured it out. “Cereal” means bowl, spoon, milk, the right ratio… and preferably sitting at the kitchen table. She’s learned what gets to stay implicit. As adults we’ve figured out that “Let’s meet for coffee” contains timing negotiations, location preferences, the understanding that one person might be late and that’s fine. “I’m fine” contains... well, a lot.
And then there’s talking to AI
Sometimes it’s like being back with my toddler. First we put on shoes, then we can leave the house. First you need to know my background, then the context, then what I actually want. I’m back to saying every part out loud: the bowl, the spoon, the milk.
But other times I expect it to just know. To read between my lines like a therapist would. To anticipate what I need like an expert. And sometimes it does! Or seems to. So I slip back into implicit mode, assuming understanding that isn’t there.
It’s this constant whiplash. Toddler-level explicit one moment, therapist-level implicit the next. I can’t figure out which one it is.
We keep reaching for labels to make sense of it. I’ve seen people call it “a very eager intern,” endlessly willing and super enthusiastic to help, but needing clear instructions. Others say it’s a smart search engine. A patient mirror. A thinking partner. An all-knowing oracle. A tireless research assistant. A therapist who never gets tired of your problems. A toddler who needs everything spelled out.
And then there are those who shrug and say it’s just another tool.
I’m using the mirror metaphor myself. I give it something and it reflects it back to me. Sometimes clearer, sometimes transformed. Sometimes exactly what I needed to see, sometimes slightly off.
None of these labels quite fit. But each one shifts what we expect from it.
What is AI, really?
Math. Probability. Pattern matching at scale. The technical stuff is fascinating, but that’s not what I’m trying to figure out. I’m interested in what it feels like to use. How we collaborate with it. What happens in our heads when we try to figure out how to talk to something that talks back but isn’t a person.
Maybe what we need isn’t a new metaphor from human relationships. Maybe we need to look at what AI actually does. It augments human capability.
I wrote my master’s thesis on augmented reality, so I keep seeing AI through that lens. An augmented human is a person whose physical, sensory, or cognitive abilities are enhanced or extended by technology.
Augmented reality systems layer digital information over what you see: turn-by-turn directions floating in your view, medical data overlaid during surgery, instant translation of text you’re looking at. You know what else augments human capabilities? Glasses. Hearing aids. Prosthetics. Exoskeletons that let people lift beyond normal capacity.
We’ve been doing this for decades. Technology that helps people with different needs participate in a world designed for someone else’s body.
Nobody wonders what kind of relationship they have with their prosthetic leg. Nobody asks if you should name your glasses.
There’s a reason for that. Back in the 1990s, researcher Mark Weiser described what he called “calm technology,” tech that disappears so completely into your life you stop noticing it.
These technologies achieve this. They augment our abilities without demanding constant attention. They don’t ask us to relate to them. They just work. Quietly. In the background. Like good technology should.
But now we have exoskeletons for our brains. And they won’t shut up.
Augmented cognition that talks back
That’s what AI is, really. Augmented cognition. A tool that extends what I can think through, process, articulate. I can do things beyond my normal mental capacity, like draft faster, think through complex systems, process information at scale.
But here’s the thing: we made this cognitive tool conversational.
And conversation is the interface we’ve only ever used with other humans.
So we can’t help ourselves. Our brains reach for relationship words: intern, mirror, therapist, assistant, toddler, because that’s how we’re wired to understand anything that responds to us in sentences.
The conversational interface hacks our social brain. We’re stuck in an impossible middle ground where we work best with AI when we treat it like a person, but we have to remember it’s software.
It makes us treat a tool like a relationship. It makes us forget we’re supposed to say the bowl and the spoon, because it talks back like someone who should already know.
I loved this way of framing it from Lyka Saint: “It literally speaks my language.”
There’s no word that captures this. We can call it “augmented thinking,” but that makes it sound like all the other calm technologies. It’s not. This is augmented thinking that won’t stay quiet. Something that feels like someone, even though we know it’s not.
We keep trying to figure out what kind of relationship we’re in because the interface won’t let us do anything else. And maybe that’s it. The same interface has to work for everything. Existential questions and recipes. Work problems and creative ideas. No context switching, no relationship boundaries. Just one endless conversation trying to be whatever we need in that moment.
My toddler is learning what “cereal” contains. I’m still learning what “AI” means.
In the meantime, I’ll keep treating it like a toddler when I need to be explicit, like a colleague when I need to think out loud, like software when I need to check its work. Not because I’ve figured out what it is, but because I’ve accepted it might be all of those things, depending on the moment.
And maybe that’s what makes this interesting. We’ve built a tool we can’t stop treating like a person. We built this thing to get work done, but it keeps activating the part of our brain that’s wired for human connection. We’re enhancing our thinking with something that won’t shut up and stay in the background.
The question isn’t what AI is. It’s what we’re becoming as we figure out how to work with thinking partners that won’t shut up.
I’d love to know how you think about this. Whether you’ve found a frame that works, or if you’re still in the whiplash with me.
Big thanks to Jessie Mannisto for asking questions on a previous post that made me slow down and think about this. This is what I like about Substack.




Emma — I appreciate how carefully you resist turning AI into a metaphysical spectacle and instead frame it as a form of augmented cognition that unsettles us precisely because it speaks. Your emphasis on the conversational interface — and the way it pulls us into human frames almost against our will — feels exactly right.
What struck me most is your shift away from asking what AI is toward asking what we are becoming through sustained interaction with something that responds, elaborates, and never quite recedes into the background. That feels like the more honest question, and also the more unsettling one.
It made me wonder whether the discomfort many people feel isn’t confusion about machines at all, but unease about how deeply linguistic interaction structures human thinking in the first place. If thinking has always been partially externalized through dialogue — with others, with texts, with imagined interlocutors — then perhaps AI doesn’t introduce something alien so much as intensify something familiar.
A few questions your piece leaves me sitting with:
When does augmentation quietly become reorientation — not of tools, but of attention?
🔎 What we extend outward may begin to pull us inward in new ways.
Is our tendency to anthropomorphize conversational systems a cognitive error, or a clue to how thought itself is scaffolded?
🔎 Language may be less a medium we use than an environment we inhabit.
If these systems never “shut up,” what disciplines of silence or restraint will matter more, not less?
🔎 Tools that speak continuously test our capacity to choose when not to listen.
Thank you for opening a genuinely reflective space around this.
I love this piece and this space you’ve created! I love that you’re asking questions and thinking through the stuff that I’m trying to navigate professionally with colleagues and figuring out how to teach my teens as I’m learning myself. Thanks for starting the conversation.