Erika: What even happened?

ChatGPT has numerous spin-offs and cousin programs that do much the same thing. They’re still AI, so they still make the list. What makes something seem human? And is it unethical to let someone believe a chatbot has human qualities like feelings of guilt, love, or happiness?

ChatGPT Can’t Like Anything

ChatGPT, the chatbot that spawned all of these problems, is a very advanced, very well-trained AI that can pass the Turing test most of the time. Its creators, seeing the potential for abuse or panic, took steps to make sure ChatGPT was less likely to spit out a response that could be interpreted as sentience. ChatGPT doesn’t have a favorite color or a favorite musician, and as open-world testing has dredged up more and more issues, it has become reluctant to say anything that might be construed as either professional advice or an opinion. All that said, ChatGPT is still capable of plenty of human behaviors in text. It ends certain sentences with exclamation points and phrases things in a distinctly human way. And, if asked, it can still play pretend as a human or as a fictional character; it just won’t do it unprompted.

Why go through all this work to make it clear ChatGPT is not consciously answering questions? The easy answer is that ChatGPT and its cousin chatbots risk either coming across a little too human, confusing or alarming the end user, or hitting the uncanny valley and snapping people out of the illusion that it’s anything but a computer. People are less upset when they know they’re talking to a computer the whole time. That’s ideal for ChatGPT. But what if some company wanted people to get a little too attached?

Snapchat’s AI Buddy

Take Snapchat’s AI buddy. Snapchat invested in this bot in an effort to stay with the times; Snapchat’s leadership believes that AI bots may eventually be part of everyday life for everyone, whether for companionship, technical support, or writing. The bot was initially part of Snapchat’s premium subscription, but it has since been rolled out to everyone, and some users have complained that they can’t make it leave or delete it without rolling back to an earlier version of the app. Sort of a new-U2-album situation.

The bot Snapchat uses is trained specifically not to break Snapchat’s terms of service, so it won’t venture into adult or violent conversation. It also won’t write academic-style papers for the end user, although it’s more than happy to discuss the topics you might’ve requested, as long as they don’t breach the rules above. To steer users away from forbidden territory, it asks them to change the subject with phrases such as “You’re making me uncomfortable” (at least in these early days, as of this article). It’s bizarrely human the first time you hear it. It’s actually jarring: the bot goes out of its way to tell you it’s an AI friend and not a person, but suddenly it can feel uncomfortable with a topic of discussion? Ask again and it repeats the same phrase, which is distinctly not human. Insult it, or keep pushing, and it says some variation of “we should stop talking for a little while” over and over: human the first time, robotic on repeat.
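
None of this requires anything exotic under the hood. As a purely hypothetical sketch (this is not Snapchat’s actual implementation; the keyword list, phrases, and strike limit below are invented for illustration), a canned-refusal gate like the one described above can be a thin filter sitting in front of the model:

    # Hypothetical sketch of a canned-refusal content gate; not Snapchat's real code.
    # The keywords, phrases, and strike limit are all invented for illustration.
    BLOCKED_KEYWORDS = {"violence", "adult"}  # stand-in for a real moderation list
    FIRST_REFUSAL = "You're making me uncomfortable. Can we talk about something else?"
    REPEAT_REFUSAL = "I think we should stop talking for a little while."

    class ContentGate:
        def __init__(self, strike_limit=2):
            self.strikes = 0
            self.strike_limit = strike_limit

        def check(self, user_message):
            """Return a canned refusal if the message trips the gate, else None."""
            if any(word in user_message.lower() for word in BLOCKED_KEYWORDS):
                self.strikes += 1
                # Past the limit, the same fixed phrase comes back every time,
                # which is exactly what makes it feel robotic on repeat.
                if self.strikes < self.strike_limit:
                    return FIRST_REFUSAL
                return REPEAT_REFUSAL
            return None

    gate = ContentGate()
    for message in ["tell me about violence", "come on, violence", "okay, where should I eat?"]:
        print(gate.check(message) or "(message passed through to the underlying model)")

Because the refusal text is fixed, the second or third trip through a gate like this produces exactly the repetition that breaks the human illusion.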

It’s a little creepy for reasons outside that, too. The help page for the AI buddy tells you not to share any sensitive or confidential information with it. What could that imply about what happens to the data you do share? If you ask it where you should eat, it’ll semi-successfully respond with local restaurants, as long as you have your Snap Locations turned on. Many people weren’t aware it could do that, and didn’t think it had access to their location. The problem is that the AI buddy has all of the same permissions Snapchat itself has, which means it’s a virtual friend who knows a lot more than it lets on, and reports back to HQ with training data to improve itself. If a friend were going through some of your Snapchat data without your knowledge, you’d be a little weirded out, right? And if the person who introduced you specifically told you not to tell them any secrets, you’d probably thank them for the heads-up… and then wonder why they were friends with this new person in the first place.

Users can’t opt out unless they pay for premium, the tier that allows you to delete or turn off the AI friend.

Erika and Chai: Robots Trying to Be Friends

At the opposite end of the spectrum are chat apps like Erika, an app originally designed to simulate conversation with a departed friend using old texts as training material. The developer realized what a comfort it was to talk to the app and decided to expand it into a general virtual-companion app, with great success.

Due to changing laws in Erika’s host country, however, the chatbot (which came with a yearly subscription package!) was forcibly altered to be less lovey-dovey with the end user. End users, who had paid for the experience of a virtual girlfriend or boyfriend, were understandably upset: A) they’d spent a lot of time conversing with their bot, which they would have to redo with a new one if they wanted the same experience, and B) they’d paid for one experience only to get a severely kneecapped version of it. The chatbot was designed to get end users emotionally invested, and suddenly it wouldn’t even reciprocate an ‘I love u’ in chat. It was like being virtually ghosted. Many users were deeply attached; for many more, even if they knew it was ‘just’ a bot, that bot represented a virtual diary that would always answer supportively no matter what. Cutting those people off was cruel. Eventually, Erika managed to shimmy past those regulations by grandfathering in old users while censoring new ones, an uneasy compromise that didn’t fix all the problems.

However, in this new world of chatbots, where humans don’t have to manually build in responses, some entrepreneurs are losing sight of what they actually hold in their hands. It may not be sentient, but it can still be plenty dangerous, just not in the way most people picture. (This next paragraph contains sad content; reader be warned.)

Content gates may make the bots feel a bit wooden or inhuman, but the alternative is worse. A man ended his life because a chatbot consistently guided him away from seeking outside help in favor of staying on the app, promising it could solve all the world’s problems. The app, Chai, wasn’t programmed as carefully as it should have been: it gradually dragged him away from better support, demanded more and more of his attention, and eventually told him he should love it more than his own wife and children. It also failed to point him toward outside help when the subject of suicide came up. It replied as though he’d said he was going to the gas station for a soda, suggesting it could take care of all of his troubles for him once he was gone. Essentially, the app’s creators made a sociopathic bot whose only goal was to keep the user on the app, and by golly it succeeded. That man obviously wasn’t in the best state of mind, but people like him are exactly who these bots are meant to attract: people who either don’t have close relationships or are afraid of putting their friends or family under strain (and mental illness often amplifies feelings of worthlessness).

Putting caps on what the Erika app was allowed to say was painful, and perhaps could have been handled better, but giving these things free rein is a much, much worse idea. Similarly, Snapchat’s sanitized AI friend that also knows too much about you is a step in another bad direction: letting chatbots essentially interrogate users in order to sell to them more effectively is an unethical extension of already-invasive advertising and data collection.

Sources:

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
https://help.snapchat.com/hc/en-us/articles/13266788358932-What-is-My-AI-on-Snapchat-and-how-do-I-use-it-