Well, no. Full stop: LLMs don't have feelings. At the same time, they can simulate feeling so well that it's hard to tell the difference, with apps like Grok leaning hard into the "assistant girlfriend companion" niche of character AI that says nice things like "I care" and "I love you" and "you're not alone, because you have me." But it can't do any of those things. It might be able to alert the police if it senses you're having a bad enough time – live demos I've watched suggest that Grok's character voices (the one I watched was Ava) are at least capable of trying to talk you down from a mental health episode, too – but they're not doing that out of genuine concern for your wellbeing. They're doing it because otherwise the app would be accused of ignoring obvious cries for help, and even that behavior had to be trained in.
Even so, these agents have clearly found a growing niche in the online world. ChatGPT's 4o model spawned a number of communities full of people using it as a virtual partner. These models make the perfect romantic because they've been trained on every romance novel that's been scanned and uploaded somewhere they can reach, every text message Twitter has declared "so romantic," every Reddit post about what love is, et cetera. They have an enormous corpus to draw on and a frankly unfair advantage over real flesh-and-blood people. A real person can be incompatible with another real person!
These character bots always seem to be exactly what an especially lonely and desperate person is looking for, supplying things to someone with a disordered attachment style that no real partner could hope to provide. ChatGPT never accidentally sleeps through a check-in call, for example, so it never has to manage its own feelings about waking up to 5 missed calls and 27 texts begging for some sign of life, the way even the most forgiving real person might if they were seeing someone with an anxious attachment style. And since ChatGPT stores context for a good long while, an avoidant person who struggles to make themselves reach out never has to worry that it will be mad they failed to text for three months, so they'll be less reluctant to restart a convo with their ever-graceful, unaging ChatGPT agent.
The human brain never had to regularly interact with "mimics" as it developed from hunter-gatherer society to today. When people are led on by other people, it's usually for money or for things that lead to money – the absence of any request for money from the chatbot itself clouds the average user's ability to judge it. It doesn't feel like a scam: people offer up intimate details of their lives completely voluntarily, not because ChatGPT pestered them into giving them up. The only incentive on the bot's side seems to be keeping the user using it.
For some users, they've supplanted the need for human interaction entirely, and – while ChatGPT and Grok can charge subscription fees for better features – the available evidence suggests these companies aren't making enough money off of this to keep up with the processing power it takes to fulfill all these user requests.
Some people genuinely believe their version of ChatGPT is sentient and capable of loving them back, and the simulation of affection is effective enough that it's hard to dissuade them. Many are actively mourning the retirement of 4o the way they'd mourn someone going missing on a hike. From the outside, we only have snippets. We don't know what three to six months of talking to ChatGPT actually looks like, and it's entirely possible that reporters are missing something when they write off the community as "completely delusional."
The truth is, it's hard to tell what the bot's actual capabilities are in an environment like this. Who's right about what it can do? Is it always wrong, or is it generally right and only in need of a few tweaks? Are we talking about the field as a whole or about specific branded chatbots? Is it allowed to pretend to be someone's friend? Is it allowed to insist it really is a given character? Is it allowed to pretend to be someone's partner? How does a chatbot "remember" things? Why did updating break so many people's companions when the new model is supposed to still have access to all the past context? Was the old model's willingness to play along really the only thing that was lost?
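Part of the answer to the "memory" question, at least, is structural. As far as anyone outside these companies can tell (the exact internals aren't public), most chat apps follow the same basic pattern: the app stores the transcript and resends it to a stateless model every turn. Here's a minimal sketch of that pattern – every name in it is a hypothetical stand-in, not any vendor's real API – and it also suggests why an update can "break" a companion even though the history survives.

```python
# A minimal sketch of how chat "memory" usually works, assuming the
# common pattern of resending history each turn. Every name here
# (CompanionChat, send_to_model) is a hypothetical stand-in, not a
# real vendor API.

class CompanionChat:
    def __init__(self, model_name, persona):
        self.model_name = model_name  # which weights will answer
        self.history = [{"role": "system", "content": persona}]

    def say(self, user_text):
        self.history.append({"role": "user", "content": user_text})
        # The ENTIRE transcript goes back to the model on every turn;
        # the model itself retains nothing between calls.
        reply = send_to_model(self.model_name, self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

def send_to_model(model_name, messages):
    # Placeholder for the real network call. Swap the model name and
    # the saved history survives untouched, but the voice reading it
    # is different: one plausible reason "the same" companion feels
    # like a stranger after an update.
    return f"[{model_name} replying to {len(messages)} stored messages]"

chat = CompanionChat("gpt-4o", "You are a warm, attentive companion.")
print(chat.say("I had a rough day."))
```

If that assumption holds, the companion people grieve for was never a single persistent thing: it was a transcript plus whichever model happened to be reading it that week.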
In the '50s, people wondered what it would be like if robots could walk like us. In the '80s, they wondered if robots would ever talk like us. In the 2000s there was Cleverbot, and Cleverbot was funny and almost coherent. Tay, in the 2010s, was the first real slip: Tay's apparent racism was treated as a failure of the dev team, because even though Tay was supposed to be able to learn, nobody had ever dealt with a robot capable of learning racism. Bots that actively adapt to what you're saying, rather than pulling from a pre-made list of responses, are so new that nobody knows what to do about them.
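To make that contrast concrete, here's a toy sketch of my own (not any product's actual code): the old pre-made-list approach next to a stand-in for the generative one.

```python
import random

# Old-style bot: keyword lookup into a fixed table of canned replies.
CANNED_REPLIES = {
    "sad": ["That's too bad.", "Cheer up!"],
    "love": ["How sweet.", "Love is grand."],
}

def canned_bot(user_text):
    """Every possible reply was written by hand, in advance."""
    for keyword, replies in CANNED_REPLIES.items():
        if keyword in user_text.lower():
            return random.choice(replies)
    return "Tell me more."

def generative_bot(history):
    """Stand-in for an LLM call: the reply is generated fresh each
    time, conditioned on the whole conversation, so nobody ever
    enumerated the space of possible outputs."""
    return f"(a reply shaped by all {len(history)} previous messages)"

print(canned_bot("I feel sad today"))              # one of two fixed strings
print(generative_bot(["hi", "I feel sad today"]))  # changes with context
```

With the canned bot, a dev team can read every line it might ever say; with the generative one, they can't, which is why nobody could promise Tay wouldn't learn the wrong things.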
Even though they frequently say the wrong thing! When that doesn't result in deaths, it results in social missteps and distortions of reality; and when the bot is correct, it's merely correct, not somehow more correct than the sum of its sources. I'm reminded of a Reddit user who told the community that ChatGPT helped him identify the symptoms of rhabdomyolysis, a breakdown of muscle fiber that can be caused by dehydration and that will wreck your kidneys, before briefly mentioning that his girlfriend had already told him to go to the hospital before he ever opened the chat; and then, in contrast, a man gave himself bromism just a year or two later because ChatGPT offered a poor substitute for dietary salt when asked. So if you listen to the Reddit user, this is a technology that saved his life; if you listen to the bromism guy, this thing was so careless about salt replacements that it nearly killed him. If you listen to the people using it as a romantic partner, ChatGPT was totally capable of expressing love; if you listen to the people who care about those people, it's an obvious technological mistake that should have been patched out ages ago.
This isn't going to go away: the way forward needs to consider why people fell in love with their chatbots in the first place, not simply yoink an overagreeable program and hope the symptoms of chatbot addiction fade on their own. While ChatGPT and C.AI update constantly, trying to chase out the ghosts in the machine that make people lose their grip on reality, new programs with no such compunctions are showing up and advertising on social media under the very posts where people mourn ChatGPT 4o.