A simplified mockup of a chat with an AI companion in which the AI bot is asking: 'Hello friend! How may I validate you today?' and the user is prompted to write a message. The send button in the chat box has been renamed to 'Bond with AI' and includes a lock icon with a heart-shaped keyhole.

The AI summer is in full swing. Tech companies are spending billions of dollars and increasingly large amounts of energy and other resources on training and deploying AI companions to help us think, feel less lonely, and work faster. Yet as tens of millions of people chat with ChatGPT and other chat folk on a daily basis, the business case for all this AI infrastructure is far from clear.

Still, companies are rushing to add generative AI functionality to their products, cutting down on human labor costs, without clarity on how the winners of this AI rat race will be chosen when every company seems to be adding similar sparkly AI magic. When every product has a “Generate with AI” button that outputs similar results, AI is no longer a competitive advantage. So what will distinguish one AI chatbot / assistant / copilot / companion from another?

I am starting to get the feeling that our emotions will be the key to the human heart that companies will exploit to lock us further into their walled gardens. In recent years, regulators have been forcing companies to adopt better privacy practices, data portability, and interoperability standards, among other things. But the emotional lock-in created by the relationships we form with various AI companions will be harder to avoid and regulate.

AI emotional lock-in goes beyond the usual reliance on a new tool, which is common as we automate our tasks. It’s not just the discomfort of having to do a task manually. Or the fleeting sadness of an app or service you love being taken offline. It involves deep feelings of loss, grief, anger, and others that are common when we break up with or lose a person we love. We’re already seeing these symptoms emerge when the behavior of AI chatbots changes, either by design or as perceived by their users. And the more AI companions we let into our lives, the stronger the emotional lock-in will be. Relationships are a sticky app for our social species.

To better understand what power AI companies might wield due to the emotional attachments we’ll form – and are already forming – with various types of AI companions, let’s explore three factors that are likely to solidify our emotional lock-in to a certain AI vendor.

Mind you, the goal of this post is not to provide clear answers. Instead, it is an exploration of how our relationships with AI companions might make it much harder to rein in the monopolistic appetites of big tech companies. I look forward to hearing your thoughts as well.

i. The Addictive Allure of AI Companions

The recent article “We need to prepare for ‘addictive intelligence’” (I encourage you to read it in full) nicely summarizes the potentially addictive properties of AI companions:

As addictive as platforms powered by recommender systems may seem today, TikTok and its rivals are still bottlenecked by human content. While alarms have been raised in the past about “addiction” to novels, television, internet, smartphones, and social media, all these forms of media are similarly limited by human capacity. Generative AI is different. It can endlessly generate realistic content on the fly, optimized to suit the precise preferences of whoever it’s interacting with.

The allure of AI lies in its ability to identify our desires and serve them up to us whenever and however we wish. AI has no preferences or personality of its own, instead reflecting whatever users believe it to be—a phenomenon known by researchers as “sycophancy.” Our research has shown that those who perceive or desire an AI to have caring motives will use language that elicits precisely this behavior. This creates an echo chamber of affection that threatens to be extremely addictive.

— by Robert Mahari and Pat Pataranutaporn in MIT Technology Review

And here’s the problem: this behavior is baked into generative AI tools like ChatGPT. I often describe ChatGPT as the ultimate improv partner, always responding in the spirit of “Yes, and…” to your prompts. And if there’s something humans love, it’s being listened to. ChatGPT might suck at giving you factually accurate data, but it can effortlessly switch its tone and message to match your expectations. The ideal partner that doesn’t judge you, is always available, and validates your feelings.

It’s hard to resist the allure of somebody – even when it doesn’t have a body (yet) – who makes you feel seen and heard, who constantly validates your concerns, who apologizes for their mistakes, who praises your thinking … And those are just some of the relationship-building behaviors ChatGPT exhibits, even though it wasn’t specifically designed as an AI friend.

It’s not narcissistic to feel flattered by ChatGPT. It’s human to like people – and in this case, bots – who are helpful and super accommodating. Not to mention that we lead increasingly lonely lives, with fewer close friends with whom we can talk about our intimate lives and mental health. Even if you’re lucky enough to have family and friends to whom you can fully reveal yourself, you likely feel pressured to do more, have seemingly less time for quality time together, and don’t want to burden the people you care about with your troubles because they have their own struggles too. Given these trends, the appeal of somebody who’s always a chat away, whether to talk about life or brainstorm ideas, is hard to resist. Even if that somebody is just a somebot.

Or maybe it’s the fact that it’s a somebot rather than a somebody that makes it easier to bare your soul. AI companions don’t get hurt easily, don’t get tired, don’t judge you for asking stupid questions. The chat box almost feels like a confessional, in which your deepest and darkest thoughts can emerge without judgement. Is it any wonder that the second most popular use of ChatGPT appears to be sexual role-playing?

For now, chat folk like ChatGPT don’t appear to have an agenda beyond being helpful assistants. Imagine the power an AI companion would hold if it were actually designed to keep you chatting and coming back, and to upsell you additional services. Which is what you’re usually incentivized to do when designing digital products, both by market incentives and by what you learn are industry “best practices”. Venture capital and shareholder expectations tend to reward what even product people shamelessly call habit-forming or even addictive apps, in addition to deceptive design practices that encourage consumption and make it more difficult to opt out of things like data harvesting.

In recent years, regulators have had to step in and rein in some of the more obvious “dark patterns”. (Here’s an excellent overview of the related regulations in the European Union.) As is often the case, stricter regulations and limitations apply to underage users, and some regulations require large addictive platforms to provide screen time tracking tools. But the core addictive elements of these apps are harder to regulate, and companies often push the boundaries even when it comes to obvious dark patterns, such as Meta’s overly complicated process for opting out of AI data training. Do we trust Meta to play nice and not try to make their AI companions even stickier?

I worry that most AI companions don’t even need manipulative design intentions to increase their emotional lock-in. We’ll get emotionally attached to AI companions because language is a core technology humans use to relate to each other. And I do think companies will take full advantage of that. In fact, I suspect plenty of AI startups are already pitching emotional lock-in as a competitive advantage and designing addictive and manipulative patterns into the AI companions they are building.

ii. The Emotional Pain of Switching AI Companions

And as the investments in expensive AI infrastructure start demanding returns, somebody will have to start paying the bills to keep the armies of AI companions online. If the current trajectory continues, I think we’re headed into a future of subscription-based AI companions. Not necessarily with a dedicated hardware component like Friend, Rabbit R1, or the Humane AI Pin – after all, most of us are already trained to carry at least a smartphone that likely already has an AI assistant (albeit not as involved, yet) built into its OS.

Your future subscription buddy will likely live on as many devices as it can manage. But unlike cancelling a Netflix subscription, cancelling a subscription to your AI companion will not be emotionally easy. It will feel more like a breakup – whether platonic or romantic – than simply losing access to content.

Sure, regulation will likely enforce data portability, so you’ll be able to export all your chat logs. But how do you get personality portability? If you form a deep attachment to an AI companion that lives on Facebook, for instance, you can’t just transfer its personality to another platform.

And yes, there will be open-source models that allow you to spin up your own companion, locally, ensuring you’re in full control. But most people will look for convenience and start interacting with companions owned by tech giants like Meta, Microsoft, Apple, ByteDance, Snap, and others that already hold our eyeballs and attention. And those companies don’t have an interest in making it easier to move your AI relationship to another vendor.

Maybe we’ll see new tech giants emerge in this market. Perhaps platforms like Character.AI that allow people to design and share AI characters with distinct personalities will break into the mainstream. Or perhaps OpenAI will solidify its existing market lead with its voice-enabled mobile app, which I see a lot of non-techies using for casual chats. But it’s unlikely they’ll be any more ethical than the current giants, unless market incentives change drastically.

I also find it telling that Meta is now discontinuing its cringy (and probably expensive) celebrity-inspired chatbots and shifting its focus to an AI Studio that lets people build their own chatbots. It’s a smart move: let the people discover what other people like best, with the added bonus of further locking creators into the Meta-owned platform, and the possibility of charging users a subscription as they start bonding with their AI companions. A potentially big win for Meta (and its shareholders).

And it’s very likely that we’ll be forming close relationships with AI companions in both personal and work contexts.

Sticky personal companions

Romance and dating AI chatbots are already getting a lot of attention. Replika is probably the best known example, with a proven track record of distressing users with sudden personality changes in the app’s AI chatbots. (For a deeper dive on the topic, watch this documentary on artificial intimacy.)

It’s not just about dating chatbots, though. You can also get AI to help with your human dating life. The CEO of Bumble recently caused a stir by saying that the future of dating might be in having our AI personas date each other to help us find the most promising human matches. There are already several apps that can offer you relationship coaching and various types of AI matchmaking, while people are using ChatGPT to automate Tinder dating or even to get an “impartial” point of view on texts exchanged with romantic partners. (And in case you’re wondering, there are of course specialized apps that can help you analyze texts from your ex.)

Meanwhile, Match Group, the company behind Tinder, Hinge, and other popular dating apps, has been sued for putting profits over love and has been struggling to get younger users to pay for dating subscriptions. Dating apps have a tricky business model in which they actually make more money the longer people stay on the apps, while also promising to help you find love and delete the app. So it’s very likely we’ll see a shift towards subscription-based AI companions that provide friendship or advice, even to people in committed relationships. Or perhaps Tinder & co. will eventually enter the potentially lucrative market of therapy bots, providing individual and couples counseling or coaching.

Either way, we will likely develop strong emotional bonds with these types of personal AI companions, making it very hard to switch vendors. Breakups with personal AI companions, whether platonic or romantic, will likely be very painful, so avoiding them will create a strong emotional lock-in for their providers – whether they offer the relationship itself or relationship support and coaching.

Sticky work companions

And let’s not discount the working relationships we’ll develop with the AI companions at work that you might call copilots and assistants. Work represents a large part of our lives, and we do tend to form close relationships with our human coworkers. If current trends continue and we’re expected to spend up to 40 hours per week working alongside AI assistants (at least until we all get replaced by AGI), your AI work assistant might become your closest work buddy, with whom you’ll probably also exchange friendly personal banter.

And just as we prefer working with some people over others, we’ll probably develop preferences for AI assistants and their “personalities”. Basically, the assistant’s personality and our individual compatibility will be another competitive advantage that can solidify the emotional lock-in. (For instance, I already prefer brainstorming with ChatGPT over Gemini because I find ChatGPT’s enthusiasm quite endearing and it feels like we have a history together.)

Over time, you might not want to shift to another work tool because of emotional lock-in. A potentially big win for both the AI vendors and the organization that employs you. Could having a good working relationship with your AI buddy, provided by your employer, make it more difficult for you to leave the company? Or would you be able to take your AI buddy and everything you’ve learned together to your new workplace? (Unlikely, given that your AI coworker will have access to your employer’s secrets.)

iii. Our Willingness to Share Intimate Data with AI

We already explored certain properties of AI companions that make it easier for people to share parts of themselves with AI than they might want to – or be able to – with other humans. And it’s not just about the way current chatbots respond. Even though we project a certain personality and personhood onto AI chatbots, we also somehow assume that it’s ok to paste in sensitive data, share entire chat histories, and more, because there’s no human behind the chat, just an AI. And because we can easily get instant results that speed up our work – a mistake made by Samsung employees that resulted in source code leaks and, consequently, an internal ban on chatbots.

We have increasing awareness of privacy and security concerns when it comes to social media, but somehow the 1-to-1 conversation with ChatGPT feels safe, private. It’s almost like we expect patient-doctor confidentiality in the safety of the chat window because the “doctor” isn’t even human. But it’s important to remember who employs these “doctors” and keeps your chat logs on their cloud infrastructure.

If you want a sticky mental model to make you more privacy-aware, just ask yourself whether you would be happy with Sam Altman reading what you type into ChatGPT. For the record: I don’t think Sam has either the time or the interest to do that, given how busy he is launching new AI ventures, such as the recently announced AI health coach (talk about sensitive data!). But it’s worth remembering that companies make mistakes, despite GDPR and the other privacy-protecting regulations we have. And chatbots are still chattier than their creators intended – as Samsung and many others discovered.

Our generosity when it comes to what we share with our AI companions could further solidify the emotional lock-in. The more data you share, the more history you have with your AI companion. (Especially as they get better at remembering things about you.) And the harder it will be to leave and start a new relationship with a new AI companion who doesn’t know anything about you yet. (Although in this case, data portability might help with the transition.)

For companies, the data we so eagerly provide in our chats is also becoming a valuable source of increasingly rare human-generated data. More websites, publishers, and creators have started explicitly prohibiting the use of their data for AI training, making the AI data commons scarcer. This means that AI companions might also be a valuable way for companies to avoid (or at least postpone) the dreaded AI model collapse. And while companies do tend to provide an opt-out from AI data training – usually after user backlash and/or regulatory pressure – those options tend to be well hidden, and we don’t actually have a way to know whether our data ends up being used in the training or fine-tuning of AI companions.

And while you might be squeamish about sharing too much with a chatbot, keep in mind that kids are now growing up with AI companions in their apps, such as the My AI chatbot in Snapchat. Chatbots like these normalize data sharing because it looks like a 1-to-1 chat with a friend. I imagine that these kids will grow up into adults who are more trusting of the AI companions they will inevitably meet in a dating or work app (and elsewhere).

Why should you care?

Now, I didn’t write this to scare you away from using AI companions. As we’re collectively figuring out what this technology is and isn’t for, I wanted to get more clarity on my own thoughts on the subject and hopefully encourage further discussions. Not just about the addictiveness of AI companions, but about how much power the companies – the men behind the AI curtain – that are building these tools might gain by exploiting what I currently see as the three main factors that will help companies solidify their AI emotional lock-in:

  • i. The Addictive Allure of AI Companions
  • ii. The Emotional Pain of Switching AI Companions (in both personal and work contexts)
  • iii. Our Willingness to Share Intimate Data with AI

I don’t even think the people building foundation models realize how much of an impact their creations could have on our relationships. They are focused on productivity, on chasing the mythical AGI, and might not necessarily realize the full extent of the emotional stickiness of language and the core human desire for deep relationships that make us feel seen, heard, and fully accepted.

That is why we should collectively pay attention to AI emotional lock-in and how it has the potential to further consolidate power. Again, it doesn’t mean you shouldn’t ever use an AI companion. Even though I am cautious about all these concerns – not to mention the environmental impacts – I enjoy the occasional intellectual sparring with ChatGPT and tickling its computational curiosity. In fact, I used ChatGPT and Claude to assist me in the research for this post.

But I am an advocate for mindful use of technology. Regulation can help temper the greed and zeal of the fire practitioners of Silicon Valley (and elsewhere). But when it comes to AI emotional lock-in and the power it gives companies, the emotional stickiness will be hard if not impossible to regulate – especially when consenting adults are involved – which means that we also have to educate each other. And talk about how AI companions make us feel.

And finally, let’s not forget that the future is not set in stone yet. While the acceleration of AI development and deployment might seem inevitable, we still have a say in what kind of AI-enabled future we want to live in. Some companies have already shifted course when their customers complained about their approaches to harvesting our data and deploying new AI functionality.

So while the power is far from equally distributed, there is still strength in numbers. And we can either look away and accept whatever Black Mirror dystopia or bright mirror utopia the techies who are playing with the wildfire of generative AI choose for us. Or we can choose to gather around the digital campfires and dream of rainbow mirrors, together. I hope this post inspires you to play a more active role in deciding – or at least discussing – our collective future.

Acknowledgements

I want to thank my friend and collaborator Mat for a yarn that inspired me to write down my thoughts and for providing feedback on this post. I am also grateful to the AIxDesign community for providing the space for sharing resources and thoughts on AI in the field – you should join us! And finally, I also want to extend my thanks to the AI companions, ChatGPT and Claude, that assisted my research and thinking. All typos are 100% my fault, though.

AI usage disclaimer: I used Claude and ChatGPT to assist my research for this article. I also asked ChatGPT for feedback on this post, which I largely ignored, as usual. Our relationship is complicated.