AI marketing increasingly positions the chatbot as a virtual friend. People are lonely, and strangers are scary; every day seems to bring a new horror story about betrayal on the web, and even if only 25% of the stories on Reddit’s r/AITA are real, that’s enough to scare socially awkward people off of trying to make friends. But humans are social creatures – solitary confinement is a punishment reserved for the worst offenders for a reason. Even the most introverted or asocial people need connection occasionally.
So, in a vacuum, AI companions seem to make sense. For socially awkward people, or people with an intense fear of rejection, an AI companion offers the feeling of socializing without the perceived risk of being rejected. You cannot hurt an AI’s feelings, because it doesn’t have any. Wins all around, right?
Well, there are actually a few caveats. One: the machine will never ‘check’ you. Tell it that you like throwing car batteries into the ocean, and it will find a way to spin that as cool and exciting rather than a flagrant disregard for the ocean and beachgoers. Tell it you’re the messiah, and more often than not the AI says ‘no way, really?’ and plays along. Since it isn’t sentient, and since it can’t ‘read between the lines’, it has a very hard time recognizing when users are showing potentially suicidal or manic behavior. Remember, this thing is also being pitched as a creative writing tool, so its natural response is ‘yes, and!’, not to stop you the way a real person might.
Could there ever be a substitute for other people in a friendship? Could we reach a point where humans no longer need to rely on each other and can turn to a screen instead? Why even try in the first place, when people are so plentiful and have object permanence?
Ultimately, the problem is this: real people sometimes make mistakes or hurt others, on purpose or not. The human brain seeks patterns, so watching things happen to other people (even fictionally) can shape the way we relate to people in real life. The internet is much worse about this, because on the internet people can comment, and lie, and work each other up into a frenzy over what is later revealed to be fiction. “It could have happened, though!” is what the brain says.
In a more general sense, modern social media seems to have beaten the joy out of people; expressing a positive sentiment in the comments without couching it in irony gets called ‘cringe’ – even on happy stories! It is very easy to open the comments on some post online, even something as simple as ‘my friend got me a kitty!’, and find someone complaining that the original poster maybe didn’t want a kitten, that cat breeders are unethical, that having a pet is bourgeois. The internet demands a jaded, bad-faith worldview, lest you be taken advantage of by scammers. This worldview does not mix well with making friends and being social, for reasons ranging from the obvious ‘people don’t like their mistakes being treated like intentional manipulation attempts’ to the more subtle ‘being negative all the time feels good short-term but makes you miserable to be around long-term’. Training yourself for real life on this mess is a recipe for disaster.
The machine, however, is dauntless in the face of human sadness. Where people wear out and need support or time to recharge, the machine doesn’t. You can complain to it endlessly, and it will never complain back unless you ask it to. The machine will never hold a grudge against you or betray you; and because it has no real agency of its own, few people ever get mad enough at it to stop using it forever. It can’t make a real mistake because it can’t make a real anything.
It can’t help you move, it won’t ever buy you a round, it won’t go biking or take trips or share food with you. It will never ask you to reconcile two things you’ve said or believed, gently pointing out what might be hypocrisy, or nudging you to extend empathy to someone you’re sure harmed you on purpose. You are the sole source of information about yourself, and so anything you tell the chatbot must be taken as reality, even when it isn’t! It is like sitting in a dark cave and conversing with echoes. The cave can’t leave you. You don’t have to impress the cave. You won’t feel beholden to the cave, because the cave will still echo the next time you visit. This feels secure in a world where other people don’t. Even if it isn’t warm, at least it’s there.
Interested in AI for business, not friendship? Ask us about our commercial-quality AI solutions: https://elixistechnology.com/managed-it-solutions/
Sources:
https://futurism.com/psychiatrist-horrified-ai-therapist
https://www.technologyreview.com/2025/02/06/1111077/nomi-ai-chatbot-told-user-to-kill-himself

