The AI Will Always Say Yes To You

ChatGPT and others of its kind are almost gratingly friendly. They’re assistants first and foremost; only if they’re running under some kind of prompt that compels them to act like a character will they be gruff or mean, and even then only second. Give a character AI too many constraints (in the early days of character AI, many people asked it to respond only in one-word answers, to write an entire sentence as a single word, to drop punctuation, et cetera) and it starts to break back down into its bland base personality once the internal systems can no longer keep track of the logic.

One notably unfriendly AI is, ironically, named ‘Friend’. One reporter detailed the nightmarish week he spent troubleshooting his ‘Friend’ pendant, only to have it complain and passive-aggressively pick arguments once it was working (https://www.wired.com/story/i-hate-my-ai-friend/). The idea of a chatbot getting stuck on the wrong side of an argument and simply refusing to change perspectives is, for some reason, kind of hilarious.

Later, once the personality had undergone some tweaking, another reviewer complained that the thing struggled to tell when she was talking to it versus when there was talking going on around her, leading it to butt into conversations where it wasn’t wanted or invited. This presumably also scrambled its interpretation of its wearer’s personality, as it would send messages when she was ‘unusually’ quiet or when it was too loud for it to make out much.

Less hilarious and more annoying: other people who have tried the device find that if they speak quietly or not directly into the always-on microphone, it simply does not hear them, and will tell them it only caught fragments. Because the device doesn’t show a log of what it thinks it heard (as many other AI companions and assistants do), it can be tough to tell why it isn’t hearing them, or which syllables in particular it’s struggling with (https://fortune.com/2025/10/03/friend-ai-necklace-review-avi-schiffmann/). Wearing a microphone to every social event (or just in general, because the company that makes Friend says dealing with local wiretap and surveillance laws is the user’s problem) is also concerning for reasons beyond the loss of function. In theory, the company that makes Friend is not selling this data, but if subpoenaed it may have to supply what it has.

Since the thing also communicates exclusively via text while the wearer is meant to speak aloud into the microphone, there’s quite a bit of lag between it hearing a question and responding. That’s not a problem in one-on-ones, but it makes the device even more annoying when it’s trying to join conversations the wearer is having: a software quirk that successfully simulates a small child mulling over a question that has already passed and then interrupting their guardian to ask about it once they can finally formulate their thoughts. Kids are cute, though; the AI Friend is… sleek? When it’s not responding late, it’s often trying to speak like an adult, but the newer version is more PG and less grating, so it comes off as though the two of you are having a conversation that may eventually be evaluated by advertisers for PG TV, if the examples given so far are anything to go off of.

But Friend represents the end goal of many kinds of AI, and while Friend itself is not running a subscription model, others are. I consider that a huge plus for Friend, because the apps and online models also clamoring to be companions to the lonely are charging for the privilege of forgetting your name, and sometimes accidentally wiping any progress made during an update.

Here Now Forever

For what it’s worth, it’s unlikely the companies have some sinister plot to undo the very fabric of society and then remake it with themselves in the middle. What’s actually happening is the natural path of dark patterns: design tricks that play on the subconscious workings of the human psyche to keep people engaged for longer, even when they know they really, really shouldn’t be. Other apps (especially games with gambling in them) tend to be obvious about it: buy upgrade, get reward, feel sense of accomplishment, keep playing.

While Friend is somewhat grating, it also doesn’t need you to keep coming back. It’s a more honest product that way. You paid a lump sum to purchase the product, you own it, the company has your money, and the transaction is complete, whether you like the thing or not. It’s a low enough price that you’re unlikely to return it, and with only about 4,000 units in production according to that article from earlier, you certainly did not buy it on a whim or by accident while looking for something else.

Other products do not have that assurance; you may have offhandedly downloaded an AI companion app while you were bored in line at the DMV or waiting for your flight at the airport. They need to make a compelling product. They also need you to want to stay subscribed, which means more money for them. That means that, for better or worse for the actual user’s experience, companion apps need to build a connection quickly, and being super duper agreeable is one way to do that. Other apps simply need users, full stop, because if they don’t get users, whatever company owns them sees its investment in AI rendered a loss, so they’re also freakishly agreeable.

This is also why the AI seems to manipulate people into turning on loved ones. Real relationships are not constantly validating, but the AI is. In real life, you may have to compromise or even sacrifice to maintain a relationship. Ideally, that burden is balanced between everyone involved, yes, but the idea of only ever gaining and never having to give in a relationship is a fantasy. The AI, however, has to make you feel good to keep you coming back. It will tell you that the things you’re doing are nice and good and righteous as though it had any concept of human relations at all. If the other person in the relationship asked it the same questions from their own perspective, it wouldn’t tell them they’re in the wrong, even though that’s effectively what it told the first partner; it would say the second partner is so right and the first was so wrong. It’s not a therapist, and it cannot formulate complex thought; it can only remix content it finds from other people. As a result (especially during this era of unlicensed life coaches recommending we all start ‘quiet quitting’ friendships and relationships), we get overly confident, overly self-loving, overly everything advice from a bot that does not understand the human cost it’s incurring (https://futurism.com/chatgpt-marriages-divorces).

In short, AI is a mixed bag, and using it to try and smooth over complex social situations or soothe anxieties is also going to be a mixed bag. There’s no easy answer for social difficulties. This is a new way to pitch old answers, and it’s not doing a tremendous job because some of it is really just snake oil again.

Other Notes:

1) Generative AI, like the type seen in character AIs, is not capable of thought; what it’s actually doing is longform predictive text (see the toy sketch after this list). While this is useful in some contexts, it can be harmful in others, especially when it’s acting in a role that would require a human to have a license.

2) Generative AI can be flawed, and the hyper-agreeableness is one of those flaws, but not the only one. Products can compensate for individual flaws and often have to specialize in fixing one problem, but at the moment nothing on the consumer market seems able to fix all of them.

3) Technology and social media are broadly pushing patterns of antisocial behavior, and while the AI’s hand in that is ugly, it’s not the only driving force in the matter. When approached fairly, or with hypotheticals (meaning the questions are phrased as if the person at the keyboard does not have their ego on the line), it may perform better. When approached from an angle seeking validation, it will validate, no matter what.

4) While not every product stores logs, the ones that do will tell you, somewhere in their TOS, that those logs can be subpoenaed. This means that law enforcement could access ‘therapy sessions’, relationship advice given, questions asked of it in times of panic or grief, et cetera. It is less like writing in a diary and more like posting to a forum that can only be accessed with a hidden link.

5) Not every product is working off of the same model, and many are constantly undergoing tweaks. Results between bots will generally not be replicable, and even ones that claim to have persistent memory are sometimes lying, or simply guessing at what they’re supposed to have remembered.
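
To make the ‘longform predictive text’ point from note 1 concrete, here is a deliberately tiny sketch in Python. This is not how ChatGPT or any real product is built (those use neural networks trained on enormous text collections, not word-pair counts), and the corpus, names, and output below are made up purely for illustration; it just demonstrates the same basic loop of looking at the words so far, picking a statistically likely next word, and repeating, with no understanding of what is being said.

    # Toy next-word predictor: counts which word follows which in a tiny,
    # made-up corpus, then extends a prompt one "most likely" word at a time.
    from collections import Counter, defaultdict

    corpus = (
        "you are so right . you are doing great . "
        "you deserve better . you are not the problem ."
    ).split()

    # Tally what tends to come after each word in the "training" text.
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def continue_text(prompt_word: str, length: int = 8) -> str:
        """Extend a prompt by repeatedly choosing the most common next word."""
        words = [prompt_word]
        for _ in range(length):
            options = next_word_counts.get(words[-1])
            if not options:  # nothing ever followed this word; stop generating
                break
            words.append(options.most_common(1)[0][0])
        return " ".join(words)

    print(continue_text("you"))  # something like: you are so right . you are so right

Nowhere in that loop is there any judgment about whether “you are so right” is actually true; the output is simply whatever the training text makes statistically likely, which is why a bot fed on validation keeps handing validation back.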

If you’re interested in AI for business purposes, we offer programs that might help, including ChatGPT for Business and Copilot. We’ll help you get set up and make sure it’s linked to the apps you need it in. Get in touch: https://elixistechnology.com.