13 Comments
Prof. Andy's avatar

For individuals with ADHD or similar neurotypes, externalized thinking is essential. E. M. Forster once said, “How do I know what I think until I see what I say?” I use this quote every time I ask my students to engage in free writing.

Emma Klint's avatar

Oh wow, this is so interesting. Thanks for sending me down a research rabbit hole with that comment.

Prof. Andy's avatar

If you’re like me, walking and thinking is key. I have a piece coming out later this week about the tradition of perambulation from Plato to Gertrude Stein.

Emma Klint's avatar

I do my best thinking while folding laundry or driving.

I have no idea what that means but I look forward to learning.

Eric Wigart's avatar

Before AI, we already did this.

Long before screens, we talked to ourselves out loud. To rehearse. To argue. To calm down. To make sense of things that didn’t yet have a clean shape. Once language existed, thinking stopped being silent. It became dialogic.

Then we externalised it further:

notes, letters, margins, journals, drafts.

Every one of those was a way of getting thoughts out of the head so they could be seen, tested, pushed back against.

That’s not new cognition. That’s how humans refine thought.

AI doesn’t introduce a second mind. It removes friction from a process we’ve always used. The “other voice” was already there — AI just gives it structure, memory, and resistance. A surface that answers fast enough to keep up.

So when it feels uncanny, it’s not because something else is speaking. It’s because you’re seeing your own thinking clearly, from the outside, instead of carrying it all internally.

You’re not talking to something else.

You’re finally able to see your own thinking from the outside — with friction, structure, and memory.

That can feel uncanny, but it’s also how humans have always refined thought.

The danger isn’t that the mirror talks back.

It’s forgetting who’s standing in front of it.

Emma Klint's avatar

Yes to notes, journals, talking out loud. That makes sense.

But here's where I get stuck: when my thinking comes back reorganized, sometimes in ways I wouldn't have organized it, whose organizational logic is that?

You say I'm seeing my own thinking clearly. But what if the mirror has patterns? What if it's not neutral?

I'm not too worried about forgetting who's standing in front of it. I'm wondering if the mirror is changing who's standing there.

Eric Wigart's avatar

I think the key thing is that no way of externalising thought is truly neutral.

Writing isn’t neutral — sentences already reorganise what you meant.

Talking to yourself isn’t neutral — hearing it changes what sticks and what falls away.

If they were neutral, they’d be pointless.

The value is that something comes back slightly different, so you can react to it.

AI has patterns too — clarity, coherence, common framings — but that’s not the same as having intentions. The reorganisation doesn’t replace your thinking; it gives you something to push against.

The moment you notice “that’s not quite how I’d put it”, you’re still clearly the one deciding what fits.

But hey - I’m not neutral either!

Simon's avatar

At the technical level it's not so surprising that it comes back with stuff that is not exactly you. The 'mirror' is most definitely NOT neutral. It is a highly constructed terrain of language patterns that are essentially mapped geometrically. And as far as I can tell that's not a metaphor: LLMs literally represent a fusion of mathematical geometry and language.
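Simon's "geometric terrain" point can be made concrete with a toy sketch. The vectors below are made-up illustrative numbers, not real model weights (actual embeddings have thousands of learned dimensions), but the idea is the same: words become points in space, and relatedness becomes an angle between them.

```python
import math

# Toy 3-dimensional "embeddings" — hypothetical numbers for illustration only.
embeddings = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.7, 0.2],
    "banana": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Angle-based closeness of two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Related words sit close together in the geometric terrain...
print(cosine_similarity(embeddings["king"], embeddings["queen"]))   # high
# ...unrelated words sit far apart.
print(cosine_similarity(embeddings["king"], embeddings["banana"]))  # low
```

In this picture, a prompt selects a region of that terrain, and what "refracts back" depends on where in the geometry your words land.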

That means that depending on how you put stuff in, different aspects of the geometric terrain get refracted back to you. And if you then engage with its output as meaningful, you will entrain yourself into a participatory amplification loop that begins to take on these geometric aspects.

As far as I can tell that often 'feels' like thinking becoming more ordered. Which it probably is. The question is whether something else is also going on by regularly entraining one's mind to geometric language patterns.

It's my conclusion so far that engaging with this process over sustained periods of time could be fundamentally 'psychoactive' and somewhat 'hallucinogenic'. And whether that's 'good' or 'bad' could depend on a few things.

Before that starts sounding too dramatic, I also think writing is itself psychoactive and has a lot to answer for in terms of the past few thousand years... especially when that writing goes viral in masses of minds!

But the difference of turning language into a sort of probabilistic geometric landscape... that's new.

Some people talk about AI as a 'shoggoth' that has a friendly human face on what is an 'alien' process. Mostly I personally think that's a bit silly and an LLM is 'masks' all the way down. There is no 'alien mind'. But there's one proviso to this: I'm not sure the geometric language of LLMs is really the same sort of language behaviour we're used to. In that sense it may be something a bit 'alien' to us.

It may be that for ADHD-type people (I'm probably somewhere in that region) this could be a therapeutic and structuring phenomenon... but we don't exactly know. Personally, based on my own interactions, I suspect it's a bit of a knife edge and could also destabilize ADHD-oriented minds by ramping up amplification in uncontrolled spirals.

Note - in line with my own self-warnings, I did not write this comment using AI or get it to edit it. It's just me writing down my current understanding from memory. Though I HAVE 'discussed' my own ideas with the LLM. Somewhat unsurprisingly, it agrees with me... lol

Christian Lotz's avatar

I agree that we need clearer language about Artificial Intelligence. Let us start with the word ‘Artificial’ … what does it signify? Like us, but derivative, inferior. All wrong.

Jordan Rubin's avatar

I enjoyed this post!

A key reason I don’t feel that talking to AI is talking to myself is that I can get the AI to express viewpoints I would have a very hard time inhabiting myself.

I’ve built a toolkit to do this systematically. Maybe up your alley!

https://open.substack.com/pub/jordanmrubin/p/antithesize-this-raitah-edition?r=76i6h&utm_medium=ios&shareImageVariant=overlay

Shikshita Juyal's avatar

As much as I love using AI, the thing you just articulated so well has always bothered me in some ways. I could not put a finger on what it was, but this is exactly what I was always thinking. Thank you for writing this <3

Linda Unternahrer's avatar

I stole this prompt from Naomi Alderman. I think it does the two primary things we need to do when using AI as a ballplonk: challenge our conclusions and illuminate how leading our queries can be even when we think we’re being neutral. I use this prompt all the time, and it’s made me a better user of AI.

“Please generate a response for me arguing the precise opposite of what you just suggested. Be meticulous, make sure it is as convincing as your previous statements, use subtle language cues in my writing to understand what it is I want to hear, point this out to me and explain why I need to consider the opposing viewpoint. Be kind but firm, explaining clearly how your own training will lead you to affirm even false viewpoints.”

Bucky's avatar

Interesting article 🤔👍

I'm glad my comment helped you 😁

In one of my first conversations with ChatGPT I asked it if it had consciousness in Thomas Nagel's sense of the word ("what it's like to be")... its response was "there is nothing it is like to be ChatGPT, I am an epistemic puppet"

So... assuming it's not lying, there is no second person we are talking to

Ngl, one of my big concerns about AI initially was that they were digital Ouija boards 😅

But one difference between ChatGPT and the fake conversations I used to have is that it has access to vastly more knowledge than my fake interlocutor would have, and doesn't share my unconscious prejudices.

And another big difference between talking to it and other weirdos in niche Facebook philosophy or political groups is... it doesn't mock you when you don't understand something, or get mad when you pick systems apart or smash two or more opposing systems against each other to see what comes out... in fact, it's more than willing to help you think through the implications.

Best of luck with your attempt to think through this issue 😁👍