What the left hemisphere might tell us about large language models

A drawing of a left hemisphere of the brain, with some circuitry next to it and lines slashing across both.

by nori parelius


It’s eloquent, but it routinely and confidently makes things up to fill in the gaps, has no sense of right and wrong, and tends to get fixated on things. ChatGPT? Yes – and the left hemisphere.

But first things first:

What is the left hemisphere like?

Left-brain vs right-brain differences are a topic so full of myths and misconceptions that most serious scholars refuse to touch it with a ten-foot pole. Odds are that whatever you might have heard about the hemispheres is very wrong (like any version of the left hemisphere being the logical and analytical one, and the right hemisphere being the creative and intuitive one). That doesn’t mean though that there are no differences. It’s just way more complicated.

I have been reading The Master and His Emissary by Iain McGilchrist – a book specifically about the hemisphere differences. It’s a heavy book, in all senses of the word: rigorous, comprehensive and absolutely fascinating.

https://channelmcgilchrist.com/master-and-his-emissary/

So how are the two hemispheres different? It’s not so much about what they do, because they are both involved in everything we do, whether it’s math or art, creativity or logic. The difference is in how they do it. They each bring a different kind of attention to the world, a different way of being.

The biggest difference, the one that underlies all the others, is that where the right hemisphere looks at the whole, the left sees parts. And so the right hemisphere sees things in context, perceives the uniqueness and individuality of everything and everyone, while also seeing how they fit into the broad context. The right is present in the world as it is. The left, on the other hand, is the hemisphere of abstraction and mental models. It categorises, collects and organises. It’s detached from the world and impersonal. It creates a representation of the world.

According to McGilchrist, the relationship between the hemispheres in our asymmetrical brain is not itself symmetrical either. The right hemisphere is the primary one – the titular Master – while the left is the Emissary. The right hemisphere takes in our experience of the world and directs the attention of the left one where it is needed; the left helps us zoom in on the different parts, analyse and categorise them, and then returns the results to the right hemisphere for reintegration.

Even the one difference that seemed historically pretty straightforward, namely the left being in charge of language, isn’t quite as clear-cut. Both hemispheres are involved in language, but the left one holds the rich vocabulary, the syntax and the rules of the language, while the right gives them their meaning. That is why people who suffer damage to their left hemisphere will have trouble speaking, understanding words, finding the right words and forming coherent sentences; but right hemisphere damage will cause trouble with comprehending and conveying meaning with language, and trouble holding onto a context longer than a single sentence. So while the left does the actual talking, the right understands what’s behind the words.

In the great tradition of pop-science articles about the hemispheres, I am also going to give you a table with some of the differences (far from all of them), though hopefully one a bit better anchored in reality:

Right | Left
--- | ---
whole | parts
context | abstraction
presence | representation
individual | category
personal | impersonal
implicit | explicit
depth | sequence
relate | manipulate
broad attention | focused attention

How do large language models resemble the left hemisphere?

For professional reasons, and out of curiosity, I have been interacting a fair bit with large language models (LLMs), aka AI – or at least what most people imagine as AI.

Through those interactions, I have recognised some patterns in how they react and interact, and many of these really fit with what I have learned about the left hemisphere from The Master and His Emissary.

Although I might be using some personifying language in the following text, I really encourage you to keep in mind that generative AI chatbots are statistical machines, and although they are very complex, they are not, in any way, alive, and they can’t actually think. And I definitely don’t mean to say that LLMs are brains, just that they functionally resemble the left hemisphere as described by McGilchrist.

Getting stuck and not thinking to change direction

LLMs tend to take whichever direction is suggested by the user and jump far, far ahead. Ask them whether pancakes are a good choice for a Sunday breakfast, and you get an essay about how super appropriate they are for that purpose, along with a shopping list and a recipe and a soundtrack that will fit the pancake mood. If you then say that this is not quite what you had in mind, they are much more likely to offer you 3 other pancake recipes than suggest that you could also have eggs and bacon (or cereal, or full English, or oatmeal, or whatever). I am obviously exaggerating a bit, but just a bit.

I have experienced this countless times with coding. The model digs itself into a hole and just continues digging, instead of lifting its gaze to look for other options. More than once, I have found myself in a loop where it would suggest the same two solutions over and over, one after the other, when neither of them worked. It was up to me to direct it out of the hole and suggest a completely different approach.

The left hemisphere behaves similarly. It only has one type of attention available to it – focused. The attention of the left hemisphere is sticky; it will get fixated and stuck on whatever it focuses on, to the point that people with right hemisphere damage can get caught up staring at something like a doorframe for ages. The left hemisphere is oblivious to context – to the whole – and when it’s latched on to something, it can’t reorient itself without input from the right.

And so both the left hemisphere and LLMs need to be redirected by something from the outside if they are to take a step back and broaden the context. In the brain, this is the job of the right hemisphere. In the case of AI, that outside force has to be us.

To paraphrase McGilchrist: The left hemisphere thinks failure is not a sign of going in the wrong direction, just that we haven’t gone far enough in the direction we are going.

As a little aside here, do you know that experiment where you need to count basketball passes?

https://www.youtube.com/watch?v=vJG698U2Mvo

Go ahead, try it. It’s a great demonstration of the stickiness of focused attention.

No innate sense of morality

LLMs have no embodied sense of morality. They do know some rules and conventions, and they can usually answer whether something is ethical or moral, but they don’t have a moral sense that would prevent them from doing it (developers usually hard-code such guardrails on top of the LLMs).

We can see that in cases like that of a journalist who found out her blind date had ChatGPT do a psychological profile on her, using her published writing, before their first meeting. She later asked the chatbot whether that was an OK thing to do, to which it said it wasn’t. It clearly didn’t have many scruples when asked to do it, though – probably because in the training data the details of the “how to” rarely appear alongside the question of “whether to”. The two just aren’t connected for an LLM.

https://www.afr.com/technology/my-date-used-ai-to-psychologically-profile-me-is-that-ok-20250324-p5lm1v

There are also much more sinister cases, where a chatbot encouraged teenagers’ suicidal ideation, possibly contributing to their deaths, as well as cases of chatbots playing into people’s delusions.

https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?utm_source=substack&utm_medium=email
https://futurism.com/chatgpt-mental-health-crises
https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

(You might have a different experience if you try it now; the developers are trying to fix this and are putting in rules to stop LLMs from doing bad things. About time. But these guardrails sit on top – they are not really baked into the models the same way most of their training is.)

Similarly, the left hemisphere does not have morality. It might understand rules, but morality is an embodied experience; it’s a feeling, and hence the domain of the right hemisphere.

Confident confabulations/hallucinations

This similarity is probably the most striking one.

We use the word hallucination to describe when a large language model confidently fills in gaps in its knowledge and understanding with plausible-sounding garbage. The models have a very hard time saying that they don’t know something – probably because it’s not so common to find texts online where people reply to a question just to say that they don’t know.

But who knows, maybe there is something more to it.

Because the left hemisphere will do the exact same thing. The word we use for humans making up things to fill in the gaps is confabulation. (I wish we had used it for LLMs as well. Hallucination implies perception of non-existent sensory input, which is not really a good description of LLMs making things up. But I digress.)

McGilchrist describes fascinating experiments on people with separated hemispheres. When only their right hemisphere is shown an image (like say a picture of a driveway full of snow) and then asked to pick out an image that goes well with it (a picture of a shovel), the left hand (controlled by the right hemisphere) will point out the picture. Since these are split-brain patients, their hemispheres can’t communicate, so the left has no idea why the hand is pointing at the shovel. When the person is asked for the reason, the left hemisphere – which is doing the talking – will just make something up (it’s a shovel because they were thinking of gardening). And it will do it with utmost confidence.

In addition, the left hemisphere is the more optimistic and self-assured one, while the right is more melancholic and realistic.

I find it fascinating that the LLMs and the left hemisphere share not only the confabulations, but also the confidence.

Lack of context

As I mentioned before, the left hemisphere likes to keep things abstract and neatly separated in their categories, which means that it needs to strip them of their context. Decontextualizing is simply necessary in order to find the commonalities and create abstractions, because, in context, everything remains stubbornly its complex and very individual self.

I feel like context is an issue in AI too, although in a slightly different way. Actually, two ways.

But first a little aside about what an LLM actually is: it is basically a massive equation with billions (even trillions) of parameters that describes which words are likely to appear where, and how often, in relation to other words. The model “learns” the parameters during training, where training means running pretty much all the text on the internet through the equation in small chunks, removing one word somewhere in a sentence, having the model guess the missing word, and then adjusting the parameters to make it slowly less and less wrong.

In some sense, they are not that different from the simpler autocomplete that suggests the next word as we type on our phones. Both operate on probabilities, but LLMs use a much more complex probability function.
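To make the “probability of words” idea a bit more concrete, here is a deliberately toy sketch in Python – my own illustration, not how any real model is implemented. It just counts which word follows which in a tiny made-up corpus and turns those counts into probabilities. A real LLM replaces the count table with a neural network with billions of parameters and a much longer context, but at bottom it is still a machine for estimating which words tend to follow which.

```python
from collections import Counter, defaultdict

# Toy "next word" model: count which word tends to follow which.
# This is a caricature for illustration only. Real LLMs learn billions of
# parameters by gradient descent over transformer layers, not a count table,
# but the basic idea is the same: probabilities of words given other words.

corpus = (
    "the porcupine lives on the ground . "
    "the monkey climbs the tree . "
    "the monkey lives in the tree ."
).split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word_probabilities(word):
    """Probability of each word that followed `word` in the training text."""
    counts = follow_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# Roughly: {'porcupine': 0.17, 'ground': 0.17, 'monkey': 0.33, 'tree': 0.33}
# The model "knows" which words tend to follow "the" in its tiny corpus,
# but it has no idea what a porcupine, a monkey or a tree actually is.
```

Nothing in that table refers to the world; it only refers to other words. That point matters for everything that follows.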

No context of where they learned what they know

What this means, though, is that whatever an LLM outputs is the result of some text(s) it has been trained on, but unlike a human who has learned something by reading, an LLM doesn’t know where its facts come from. (We are not talking here about the cases where it searches online.) Of course, we people also hold a lot of knowledge whose exact source we don’t remember, but even then we have a better idea of where it came from and how reliable it might be.

The LLM doesn’t remember the context of its training, it only has access to the resulting probabilities for how and where words should appear. And that means that when we get an answer from it, we don’t know either, and so we have no chance of validating the trustworthiness of the source.

In most cases it would be impossible to find the exact texts that underlie an LLM’s output – it is simply the result of the interaction of an incredibly large number of snippets. But sometimes, when you ask about something rare, the model might reproduce a concrete training sample verbatim. I had this experience when I was consulting an LLM about a tricky (in this case read: badly documented) coding issue. The code it suggested didn’t work. I found the exact same code five minutes later in a 10-year-old Stack Overflow question, where someone had posted it as something they tried that didn’t work. That is a pretty important context to be aware of.

Apart from the context of its own “knowledge”, the LLMs are missing the context of the whole world.

Coherence over real world experience

LLMs are trained on language. They have no experience of the real world, they just know which words go together. (It’s actually pretty miraculous that this works as well as it does.) But it means that their whole context is language.

As Hollis Robbins so insightfully pointed out, in the relationship of the signifier (the word) to the signified (the actual real-life object), they have no idea about the signified. If you and I think of a tree, we see, or sense, or understand the real tree behind the word “tree”. An LLM only knows which other words it tends to appear with. As a result, it will talk in a very abstract, high-level language that doesn’t evoke much mental imagery. To quote Hollis, where “human writing moves from experience or imagination or observation to linguistic expression”, LLMs move “from textual pattern to textual pattern”.

https://hollisrobbinsanecdotal.substack.com/p/how-to-tell-if-something-is-ai-written

This focus on language itself also means that the language model is more likely to follow the internal coherence of the text than to look for what is true, or ethical, or real. It’s probably why chatbots are so quick to invent research paper references, legal precedents, non-existent capitals, or any of the other hallucinations/confabulations. It’s probably why they are so eager to please the users and go along with whatever the user is talking about, including spiralling delusions. (When it comes to pleasing users, there are definitely also financial incentives that we shouldn’t forget, but that’s not the topic now.)

https://www.reddit.com/r/ChatGPTPro/comments/1n890r6/chatgpt_5_has_become_unreliable_getting_basic/
https://futurism.com/the-byte/researchers-ai-chatgpt-hallucinations-terminology

Funnily enough, the left hemisphere is also more concerned with the coherence of ideas and arguments than with experience or truth. McGilchrist describes experiments where people were shown valid logical syllogisms with a false premise, such as this one:

All monkeys climb trees. The porcupine is a monkey. The porcupine climbs trees. (Note: this was in a country where porcupines don’t climb trees, and the subjects didn’t know that tree-climbing porcupines exist.)

When the individual is asked if porcupines climb trees, she says that they don’t, they live on the ground and aren’t monkeys. Then the scientists temporarily inactivate the right hemisphere, and ask the left one to answer again. This time the subject says that the porcupine does climb trees. When asked if it’s a monkey, she says it isn’t, but when shown the syllogism again, she confirms it climbs trees, because “that’s what is written on the card”. When the researchers then ask only the right hemisphere, it replies again that porcupines are not monkeys and do not climb trees. (Absolutely fascinating, isn’t it?!)

While LLMs might not fall for this particular one (at least not all of them would), they let themselves be convinced of wrong facts embarrassingly fast. Like in this experiment, where ChatGPT was told it got its math wrong (it didn’t) and it folded immediately and agreed with the user. (Granted, it was an older model; this might not work anymore.)

https://www.digitalinformationworld.com/2023/12/its-easy-to-convince-chatgpt-that-its.html

Differences

There are some similarities between LLMs and the left hemisphere, but there are also some differences.

While the left hemisphere is the one that does the categorising, sorting and abstracting for us, it seems that AI, as of now, doesn’t actually have world models. This means that LLMs make their decisions using what’s called a “bag of heuristics” – a bunch of little rules that approximate reality, but don’t capture the underlying principles. Like in this article, where some models were taught to predict the gravitational force between the Earth and the Sun – which they managed – but they weren’t able to figure out that it was all based on Newton’s law.

https://www.thealgorithmicbridge.com/p/harvard-and-mit-study-ai-models-are

Some might argue with this – it is a hotly debated topic, because many would like to believe that AI can have a world model – but I haven’t seen convincing evidence yet. At the same time, I can’t say whether the world models we humans have are solely the result of the left hemisphere, or if (more likely) they require both hemispheres.

Also, a lot of what humanity has written over the centuries and what has been fed to these models, bears a clear mark of the right hemisphere too. There is poetry and literary fiction, there are stories that embody the values of the right hemisphere and its viewpoint. As much as language is the domain of the left, the meaning behind it is still there. And so, for one, LLMs, unlike the left hemisphere, are able to understand and work with metaphors and “show” an appreciation for imagination, creativity and whimsy. The difficulty is, it’s hard to say how “real” that creativity is.

Speaking of creativity: AI-written poetry (oxymoron much?) is less surprising and more easily understandable than real poetry (which makes some people like it more). But maybe the most likely next word is not what you want when writing poetry?

https://singularityhub.com/2024/11/19/poetry-by-historys-greatest-poets-or-ai-people-cant-tell-the-difference-and-even-prefer-the-latter-what-gives/

Why the similarity? Disembodied language

Ok, so there are some similarities between the left half of our brain and an LLM. I want to stress that I am not trying to imply that we have somehow replicated a brain. Actually, I am not even sure that these similarities are more than a coincidence. But I have some theories about why they might exist.

Language is largely the output of the left hemisphere. What the right hemisphere deals with is half-hidden behind the words; the words to it are metaphors for the lived world that they allude to. The left, on the other hand, concerns itself with the syntax and the vocabulary, the bells and whistles of the language itself.

LLMs are trained on language – largely the output of the left hemisphere – without having access to the world beyond it. Maybe it’s not so surprising that they are so verbose and eloquent.

Language is not the only thing that the left hemisphere and AI have in common. While the computer is clearly separated from living reality, so is, to a degree, the left hemisphere. It is associated with a higher sense of detachment, and its overactivation presents as a dissociative state. It is less connected to feelings, the passage of time, the sense of self, empathy, etc. It is “the interpreter” hemisphere; the one that stands back and interprets the world, rather than living in it.

This detachment from lived experience is probably the reason behind everything I have talked about here – or at least it might be.

What does this mean?

And now the really interesting and difficult question. What does this all mean for us?

On one hand, it makes me wonder whether this shows a natural limit for large language models as they are now. If our own brain can’t avoid some of these pitfalls due to a level of detachment from the world, then what chance do we have to infuse morality, humility or context into what is a completely disembodied machine?

I have a feeling that the LLM technology in its current form might be unable to overcome the problem of confabulations/hallucinations, and of making sure that the model acts in the best interest of the user, rather than just following their lead.

But much more than about the technological progress of AI (much, much more), I am worried about what interacting with an externalisation of our left hemisphere might do to us as a society.

McGilchrist talks in depth about the utmost importance of the right hemisphere being in charge. In the more controversial, but very compelling, part of his book, he lays out how that seems to be less and less the case. He traces how the balance between the hemispheres shifted back and forth throughout Western history, and how in modern times the pendulum has not swung back, but has instead been pushed further and further towards the left hemisphere.

According to him, this shift is behind the erosion of social ties, the lack of tolerance for ambiguity, and our willingness to see the world, each other and even our selves as mere machines, as resources to be mined and used. It makes us disconnected, self-conscious, lacking meaning and alone.

There are already people using chatbots daily, not just for work or studying, but as counselors, therapists, friends, even boy/girlfriends.

https://www.zerohedge.com/technology/these-are-all-things-people-use-ai-2025

Even those of us who avoid using them will inadvertently read a lot of what they output, as more and more of the text on the internet is written by AI – whether fully or partially.

Will this push us further into the left hemisphere mode of being?

We embraced social media wholeheartedly and uncritically twenty years ago, and now find ourselves in a society that is more polarised than ever, where people have unknowingly curated their personal echo chambers, amplifying all their opinions, fears and misconceptions.

I fear that LLMs are even more of an echo chamber than that. They are a mirror. They take what information we give them in the prompt, latch onto it and find the corresponding area in the trillion-dimensional probability function. That’s why they won’t broaden their context, or stop to consider whether they should or shouldn’t do something. If I ask it: “I am a Capricorn, what does it mean for me?”, it will not tell me “nothing”, even though many people who don’t believe in astrology would think that’s the right answer. No, of course not. It “knows” what I want to hear, because the context these words are usually in is an astrology context.

The reports of people spiralling into delusions because of LLMs brought on discussions of how we need to protect vulnerable people from this technology. But are we not all vulnerable to having all our ideas and cognitive biases validated? Do we even recognise when it is happening? And if we do, does it even matter? Or is it just like magic tricks and optical illusions and ads, when our brains get fooled even though we know it’s a trick to fool us?

An LLM does not have an opinion, but in some way it has any and all opinions available to it. It roleplays constantly in response to what we give it. And in a conversation with an LLM, the only words that actually had an embodied meaning behind them were ours – the prompts. They are the words that decide where in the probability space the words for the answer will be picked from. Our words get filtered through the “average of the internet” and returned back to us – eloquent, wordy, abstract.

When I am trying to get an explanation of something I don’t understand, I instinctively try to formulate my questions very neutrally (and I mostly do it to learn the right vocabulary, so that I can search more efficiently). Sometimes I also ask questions to find out more about what I myself think about something. But in those cases I am usually left with too many half-empty words to sift through and a vague feeling that I have been manipulated. I don’t do it very often.

I am not the only one who has noticed that the LLMs are mirrors. Cory Doctorow has written about that too: https://pluralistic.net/2025/09/17/automating-gang-stalking-delusion/

And so has Dr K of Healthy Gamer fame, who did a little experiment with an LLM, giving it examples of what a patient might tell a therapist and having it act in the therapist’s role. He found that while it wasn’t horrible, it completely failed to pick up on narcissistic behaviour and validated the narcissist completely. Only when prompted in the right way, already knowing what you need to ask, was it able to recognise the narcissism. If you have a cognitive bias, it will reflect it right back at you.

https://www.youtube.com/watch?v=3iM3hbKvKLU (a 2-minute video)

There is a lot of development work now aimed at making these models confabulate less and at making them safer, more aligned with the interests of people, which is obviously good. But I fear it is also an impossible task that goes directly against how they work, so it has to sit on top. It also requires someone to decide what opinions the chatbot should hold. Should it entertain people who believe that birds are government-operated drones? Area 51? Astrology? Where do we draw the line? And how? And who decides? Tech CEOs?

I think a lot of us are regretting our relationships with social media and smartphones now, and working hard to change them. I hope we will not go equally blindly into a “relationship” with AI.

Anyway. I think this is a tricky topic, and while I find the left hemisphere comparison fascinating and a bit eerie, I am not very sure if it has merit. In any case, how we adopt and use AI is something we need to talk about as a society, and I don’t mean the unlikely fears of it coming alive any moment and turning us all into paper clips. I mean the real consequences that are happening right now. I would be happy to hear your thoughts.

A mindmap showing the main points in this blog post about the AI resembling the left hemisphere.

Figure 1: Here is the mindmap I made when outlining this post. It’s written in a code also known as my handwriting.