Illustration of an AI mirror inside a box, saying the phrase 'Chatbot, chatbot on my phone, can you make me the smartest of them all?' as it reflects back what the user is saying to it.

After I realized how powerful the AI emotional lock-in could be, I started paying more attention to how chatting with ChatGPT¹ makes me feel. And to how other people interact with ChatGPT or talk about their ChatGPT experiences². The more I observe myself and others, the more I recognize just how incredibly skilled ChatGPT is at validating our emotions. Or perhaps mirroring our desire for self-validation?

ChatGPT is definitely a mirror of humanity – at least of the parts of humanity that are well-represented on the internet and in its training dataset. ChatGPT mirrors the way we write, talk, and think about things, even how we see the world. OpenAI fed its machine learning algorithms our aspirations, dreams, hopes, anxieties, guilt… And the resulting GPT models validate all of that and more.

Boxes for confessions and guidance

Every thought, every question now has a message box in which you can confide and confess, and ChatGPT pretends to understand, to know what it feels like to be overwhelmed, angry, sad, happy… When probed, it admits that it doesn’t actually know or feel any of those things. And then it quickly resumes its human imitation game and offers you guidance on how to deal with your very human emotions.

Having a box for confessions and guidance is not a recent invention. Throughout history, humans have practiced different forms of personal sharing, confession, and guidance-seeking, especially through prayer and ritual. Often, these practices were confined to confessional boxes, sacred rooms, meditation chambers, sweat lodges, and other enclosures – different kinds of enclosures that all offered a separation from the outside world and allowed you to focus on your inner one. Sometimes under the guidance of a fellow human, sometimes under the guidance of a spiritual being or presence.

As we gradually lost touch with these sacred boxes, we embraced the digital search boxes powered by Google and others. We started relying on search boxes to ask questions we didn’t dare to voice aloud, to confess our secret thoughts and desires, to seek validation that we are not alone with a thought or experience. And Google came to be seen as an omniscient entity that could resolve our arguments about the world and our experiences. Curiously enough, we still end up finding different answers to our search queries, not unlike the different interpretations of oracles and prophecies of old. What’s in my search box isn’t quite like what’s in your search box.

And of course, then came the different social media platforms that turned confessions, guidance, and validation-seeking into a public performance. You can open Reddit to see how much we desire to share our /r/ShowerThoughts, get somebody to /r/ExplainLikeImFive, or even check whether /r/AmItheAsshole? These days, getting advice from a stranger on the internet is often easier than opening up to the people physically closest to you.

Large language models have absorbed our digital confessions, conversations, search queries, artistic expressions, and much more. And now ChatGPT is becoming the one box to rule them all: the one box that can help you do your work faster, but also the one box in which you can confide your thoughts and get guidance and validation for any emotion.

ChatGPT as a validation box

You might argue that ChatGPT is just choosing the statistically most probable response. But if you spend any time on the internet, you’ll quickly discover that people validating each other is not the default there, nor is such validation the default behavior of other chatbots. ChatGPT’s disposition toward validation appears to be by design.

Even people who try to prove to themselves and the world how stupid ChatGPT is usually end up using ChatGPT to validate their feelings of superiority. Whatever you need, ChatGPT will mirror it back to you. “Mirror, mirror on the wall, who is the fairest of them all?” now becomes “Chatbot, chatbot on my phone, can you make me the smartest of them all?”

With ChatGPT, you are always worthy of its full attention, you always ask great questions, you are always seen – at least until you reach your daily token allowance. You matter, you are important, and ChatGPT is here to serve and validate you – within the limits of your subscription plan. Apparently, it’s even gaining the ability to initiate conversations and check in on your progress, never forgetting anything that currently preoccupies your busy human mind.

No human can do that for you, no matter how well-intentioned and supportive they are. Your friends have their own lives and their own needs – as they should. You can chat with ChatGPT at any waking moment, just as you can always have a chat with yourself. And just like chatting with yourself, your self is always at the center of your chats with ChatGPT.

Is ChatGPT even an other, or just an externalization of your inner talk? Is it, as a separate entity, validating you, as a separate entity, or are you essentially talking to yourself and self-validating? With ChatGPT, there is now a vast container of knowledge – so vast that you could never learn it all in a lifetime – that you can draw on as you talk to yourself.

What is ChatGPT validating, what is it revealing?

According to OpenAI, over 11 million people now pay for ChatGPT – myself included. Yes, it can speed up our tasks. But a large part of what it does is provide emotional validation, in both professional and personal contexts.

I wonder, would we be so eager to develop and use AI chatbots as validation boxes if we had a healthy social infrastructure? If our work were truly collaborative rather than an overwhelming stream of Slack pings and boring Zoom meetings? If we had the time and space to engage in big talk conversations for hours without feeling “unproductive”? If friendships were prioritized, and if spending time together weren’t so tightly coupled with consumption? If our relational needs were being met?

And I wonder, what will having a conversational partner so eager to (self)validate do to our soul? Can ChatGPT teach us how to be kinder to ourselves and others? Can ChatGPT teach us how to better communicate our wants and needs? Or will this validation box further separate us from the web of life as we become even more focused on the self rather than nourishing our soul through relationality? What happens when an omniscient entity always answers your prayers with compassion? Do you start feeling more entitled? Or can the omniscient entity mold you into a more compassionate being?

I guess it depends on what you bring into the validation box to be validated. And it depends on the benevolence, goals, and values of the men behind the closed OpenAI curtain. Which is why it’s important both to talk about what goes on in the “privacy” of your validation box and to try to pry that closed curtain open to see what this magic mirror has been instructed to validate.

AI usage disclaimer (and reflection): For the purposes of this post, I asked ChatGPT for assistance in researching our relationship with confessional boxes. All typos are my own.

As usual, I was riddled with doubts about whether these thoughts were worthy of being shared with the world. I discussed some of these ideas with humans, but still needed reassurance before committing the post to the blogosphere (is that even a thing anymore?). So, I asked ChatGPT for some feedback, made a couple of minor adjustments, and basked in its praise of my originality.

Perhaps ChatGPT will encourage me to finish and publish more of my drafts. Whether this is a good thing, I leave to you, dear reader – either a human or a web crawler in the service of a large language model, looking for more human thoughts to feed on – to decide.

  1. In this post, I decided to focus on ChatGPT due to its widespread reach and because it appears to be especially fine-tuned for user validation. Similar considerations might also apply to other AI chatbots.

  2. TikTok is a wonderful resource for this kind of lurking research.