Moya Sarner 

Do you feel stuck? It’s time to simplify your thinking

Our minds often seek to muddle rather than clarify. But we have the power to notice this in ourselves, and others

Our minds sometimes like to keep things cloudy and confusing rather than direct and meaningful. Composite: Guardian Design; Getty Images

Psychodynamic psychotherapists do not give their patients advice, and although this is a column and not a psychotherapy session, I do try to stick to this when writing about how to build a better life. However, I am now breaking that rule. I advise you never to read anything by ChatGPT, ever. Or, as ChatGPT might put it, having taken all factors into consideration in the process of reaching the aforementioned conclusion, it could be beneficial to note the following if you find yourself in the position of wishing for your life to be different in a positive way from how it currently is: Never. Read. Anything. Written. By. ChatGPT. Ever. And don’t @me.

This kind of wordy “slop” with its multiple mind-numbing clauses often features in the text produced by AI chatbots. It’s what I call the thickening agent: something added to the text that brings no meaning – no flavour or nutrition; that takes up page/brain space and makes it more difficult to understand, not easier. The text seems to be saying something and then, suddenly, it isn’t. In journalism school I learned to KISS – Keep It Simple, Stupid. AI chatbots do not appear to have been taught the same.

It’s not just that it’s bad writing. I knew as soon as I read my first unnecessarily long ChatGPT-created sentence that I had experienced this thickening agent somewhere before.

I’ve felt it in the consulting room – the one in which I see my psychoanalyst, and the one in which I see my patients. This thickening agent is a part of our own minds. It is the bit of us that seeks to muddle instead of to clarify, and that would rather keep things cloudy and confusing than direct and meaningful. It’s the bit that keeps us stuck, and not living a better life.

Let me try to describe what it feels like in a session. I might sense that a patient is really getting somewhere: that they are making their way towards some very painful emotional truth, linking, for example, a devastating loss from their childhood with a relationship pattern they keep unconsciously repeating today, and which desperately needs to be understood so they can choose a different response. My patient is associating memories and feelings and dreams, and this understanding is within sniffing distance – and then suddenly out comes the thickening agent. The sniff of possibility is gone; the tone becomes knowing instead of exploratory, cognitive instead of emotional, intellectualised instead of intimate. The compass of the session turns from pointing towards understanding to pointing towards obfuscation.

The psychoanalyst Wilfred Bion devised a way of mapping analytic sessions according to these compass points, among others. He determined that a patient and/or analyst might be “in K” or “in -K” (minus K) at any point in a session: that is, to my understanding, at a point where they can bear a certain kind of knowledge about themselves (“in K”), or in a state where they cannot (“in -K”). What I am saying is that ChatGPT operates very much in “-K”; I can smell it from a mile away.

And, as regular readers may by now suspect, the reason this irritates me so much is that I cannot bear my own capacity for getting stuck in -K. I have been a patient in many sessions like the one I described. As much as I understand this is a part of being human, that does not take away the frustration of feeling the minutes of my session tick by while I remain trapped in the -K web I have woven from my own words.

It is not a coincidence that the slop of these AI chatbots reads like stodgy unseasoned potato soup. Because, as a friend recently explained to me, these chatbots learn to write from what they are fed: that is, writing by humans. The design is inspired by neural networks in human brains. ChatGPT operates in -K because we do.

The difference is that, as human beings, we can begin to notice this about ourselves and in others. We can have feelings about it. We can observe when we are on to a good thing with someone (when they are communicating something meaningful) and we can observe when their words serve a different function: to confuse us, make us feel unsure of ourselves, thicken things up.

This all came to mind recently when I was listening to the news and heard the beautiful phrase “flim-flam artists”. This is how national hero Alan Bates described the government officials who are delaying the paying out of compensation to the survivors of the Post Office scandal. True to himself, Bates is continuing to show the same courage he showed when standing up for himself and other post office operators in the face of the lies, obfuscation and flim-flam of an organisation intent on hiding the truth. He sees it and tells it like it is, and I bet he smells of pure K.

Perhaps there is some kind of algorithm to make these AI chatbots KISS, but I don’t believe it will ever be possible for them to go all the way. To feel, in the way humans do. And that is the crucial ingredient when it comes to good writing, and to building a better life. Everything else is just slop.

Moya Sarner is an NHS psychotherapist and the author of When I Grow Up – Conversations With Adults in Search of Adulthood
