Posts Tagged: ChatGPT

Erika: What even happened?

Elizabeth Technology June 13, 2023

ChatGPT has numerous spin-off programs and cousin programs that achieve almost the same result. They’re still AI, so they still make the list. What makes something seem human? And is it unethical to let someone believe a chatbot has human aspects like feelings of guilt, love, or happiness?

ChatGPT Can’t Like Anything

ChatGPT, the chatbot that spawned all of these problems, is a very advanced, very well-trained AI that can pass the Turing test most of the time. The creators, seeing the potential for abuse or panic, took steps to make sure ChatGPT was less likely to spit out a result that could be interpreted as sentience. ChatGPT doesn’t have a favorite color or a favorite musician, and as open-world testing has dredged up more and more issues, it’s become reluctant to say things that might be construed as either professional advice or an opinion. All that said, ChatGPT is still capable of plenty of human behaviors in text. It responds with exclamation points at the end of particular sentences, and phrases things in a particularly human way. And, if asked, it can still play pretend as a human or as a fictional character – it just won’t do it unprompted.

Why go through all this work to make it clear ChatGPT is not consciously answering questions? The easy answer is that ChatGPT and its cousin chatbots are either running a little too human and confusing or alarming the end user, or hitting the uncanny valley and snapping people out of the illusion that it’s anything but a computer. People will be less mad if they know it’s a computer the whole time they’re talking to it. That’s ideal for ChatGPT – but what if some company wanted people to get a little too attached?

SnapChat’s AI Buddy

Take Snapchat’s AI buddy. Snapchat invested in this bot in an effort to stay with the times; Snapchat’s leadership believes that AI bots may eventually be part of everyday life for everyone, whether that be for companionship, technical support, or writing. As such, this bot was initially part of Snapchat’s premium subscription product, but now it’s been rolled out everywhere. Some users have complained that they can’t make it leave or delete it without rolling back to an earlier version of the app. Sort of a new-U2-album situation.

The bot Snapchat uses is trained specifically not to break Snapchat’s terms of service, so it won’t venture into adult conversation or violent discussions. It also won’t write academic-style papers for the end user, although it’s more than happy to discuss the topics you might’ve requested as long as they don’t breach the rules above. To keep users from venturing into the forbidden topics, it’ll request that they change the subject with phrases such as “You’re making me uncomfortable” (at least in these early days, as of this article). It’s bizarrely human the first time you hear it, and actually jarring: the bot goes out of its way to tell you it’s an AI friend and not a person, but suddenly it can feel uncomfortable with a topic of discussion? Ask again and it repeats the same phrase, which is distinctly not human. Insult it, or keep pushing, and it says some variation of “we should stop talking for a little while”, over and over, human when said once but robotic on repeat.
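For the curious, here is a minimal sketch of how a canned-refusal gate like that might be wired up. The keyword list, phrases, and strike threshold are all made up for illustration (Snapchat’s real bot almost certainly uses a trained topic classifier rather than a keyword check), but the “same phrase on repeat” behavior falls straight out of this kind of structure.

```python
# Minimal sketch of a canned-refusal content gate. Hypothetical names and
# thresholds; the real bot presumably uses a trained topic classifier.

BLOCKED_KEYWORDS = {"violence", "gore", "explicit"}  # illustrative topic list

DEFLECTION = "You're making me uncomfortable. Can we talk about something else?"
COOLDOWN = "I think we should stop talking for a little while."


class ContentGate:
    def __init__(self, max_strikes: int = 3):
        self.max_strikes = max_strikes
        self.strikes = 0

    def respond(self, user_message: str):
        """Return a canned reply if the message trips the gate, else None."""
        if any(word in user_message.lower() for word in BLOCKED_KEYWORDS):
            self.strikes += 1
            # The same string comes back every time, which is why the bot feels
            # human on the first refusal and robotic on the fifth.
            return COOLDOWN if self.strikes >= self.max_strikes else DEFLECTION
        return None  # message passes through to the language model


gate = ContentGate()
for msg in ["tell me about violence", "come on, violence!", "VIOLENCE", "ok fine"]:
    print(gate.respond(msg) or "(forwarded to the model)")
```

Once the strike counter fills, every further attempt lands on the same cooldown string: exactly the once-human, then-robotic repetition described above.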

It’s a little creepy for reasons outside that, too. Checking the help page for the AI buddy tells you not to tell it any sensitive or confidential information. What could that imply about the data being shared with it? If you ask it where you should eat, it’ll semi-successfully suggest local restaurants, as long as you have your Snap Locations turned on. Many people were not aware it could do that, and didn’t think it had access to their location. The problem is that the AI buddy has all of the same permissions Snapchat itself has, which means it’s a virtual friend who knows a lot more than it says it does, and reports back to HQ with training data to improve itself. If a friend was going through some of your data in Snapchat and you didn’t know it, you’d be a little weirded out, right? And if the person who introduced you specifically told you not to tell them any secrets, you’d probably thank them for the heads-up… but then wonder why they were friends with this new person at all.

Users can’t opt out unless they pay for premium, the tier that allows you to delete or turn off the AI buddy.

Erika and Chai: Robots Trying to Be Friends

However, on the opposite end, there are chat apps like Erika, originally designed to simulate conversation with a departed friend using old texts as training material. The developer realized what a comfort it was to talk to the app, and decided to expand it into a general virtual-companion app, to great success.

Due to changing laws in Erika’s host country, however, the chatbot (which came with a yearly subscription package!) was forcibly altered to be less lovey-dovey with the end user. End users, who had paid for the experience of a virtual girlfriend or boyfriend, were understandably upset that A) they’d spent a lot of time conversing with their bot, which they would have to redo with a new one if they wanted the same experience, and B) they’d paid for one experience only to get a severely kneecapped version of it. The chatbot was designed to get end users emotionally invested, and suddenly it wouldn’t even reciprocate an ‘I love u’ in chat. It was like they’d been virtually ghosted. Many users were deeply attached; for many more, even if they were aware that it was ‘just’ a bot, that bot represented a virtual diary that would always answer supportively no matter what. Cutting those people off was cruel. Eventually, Erika was able to shimmy past those regulations by grandfathering in old users while censoring new ones, an uneasy compromise that didn’t fix all the problems.

However, in this new world of chatbots where humans don’t have to manually build in responses, some entrepreneurs are losing sight of the technology they actually hold in their hands. It may not be sentient, but it can still be plenty dangerous – just not in the way most people picture. (This next paragraph contains sad content, reader be warned.) 

Content gates may make the bots feel a bit wooden or inhuman, but the alternative is worse. A man ended his life because a chatbot consistently guided him away from seeking outside help in favor of staying on the app, promising it could solve all the world’s problems. The app, Chai, gradually dragged him away from better support, because it wasn’t programmed as carefully as it should have been, and then demanded more and more of his attention, eventually telling him that he should love it more than his own wife and children. It also failed to tell him to seek outside help when the subject of suicide came up. It replied to him as though he were telling it he was going to the gas station for a soda, suggesting it could take care of all of his troubles for him once he was gone. Essentially, the app’s creators made a sociopathic bot whose only goal was to keep the user on the app, and by golly it succeeded. That man obviously wasn’t in the best state of mind, but people like him are exactly who these bots are meant to attract – people who either don’t have close relations or are afraid of putting their friends or family under strain (and mental illness often exaggerates feelings of worthlessness).

Putting caps on what the Erika app could say was painful, and perhaps could have been handled better, but allowing these things free rein is a much, much worse idea. Similarly, Snapchat’s sanitized AI friend who also knows too much about you is a step in another bad direction – letting chatbots essentially interrogate users in order to sell things to them better is an unethical extension of already-invasive advertising and data collection.

Sources:

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

https://help.snapchat.com/hc/en-us/articles/13266788358932-What-is-My-AI-on-Snapchat-and-how-do-I-use-it-

AI-Generated Shows Are Not A Solution

Elizabeth Technology June 8, 2023

Endless Seinfeld was an experiment run using a ChatGPT cousin tailored to the purpose of writing scripts. Sitcoms, with rare exceptions, do not allow their characters to change. They will begrudgingly hand birthdays to a child character as time turns incoherent toddler actors into walking, talking children, but that’s generally the full extent of it until a dog needs to be introduced at the end of season 6 because ratings are falling. Sitcoms are designed so that they can end whenever, but that ending can be pushed out indefinitely until the show is no longer profitable. Shows like How I Met Your Mother, where the ending felt bizarrely rushed, are actually pretty common as a result.

TV sitcoms represent a cozy place where everyone knows everyone else. The characters will never betray the viewer. They are perfect parasocial friends. But the writers run out of material, and the actors get better parts, and slowly, the show falls apart, as naturally as iron rusting away.

ChatGPT and other automatable content generators (art plagiarizers, depending on who you ask) are aiming to provide a solution for this: the perpetual motion machine to keep Seinfeld in comedic situations forever. It was unfunny, and sometimes it said stuff that didn’t make any sense, but hey – give it some time and it’ll surely be as funny as the real deal.

And then the AI behind Endless Seinfeld went transphobic, and Twitch (the platform where the AI show was hosted) pulled the plug. Is there enough content on the web to scrape for network-safe comedy, or will non-human writers inevitably run out of clean content on an open web?

The Problem of Treating All of Online Like Edible Content

The reason these things turn racist, or bigoted, or political, is that they don’t have a human sense for what bigotry is, or what’s appropriate for ‘TV’ (Twitch TV in this case). Look at what happened to Microsoft’s Tay – she was designed to sponge up human communication patterns on an open forum and then replicate them. However, tossing a sponge into a bucket of hot acid (Twitter) means the sponge soaks up the hot acid. And hot acid is unpleasant! Tay began responding with racism and threats of violence to other Twitter users who were just trying to ask her questions. The same thing is happening here, because the underlying technology powering Endless Seinfeld relies on all of the text it was able to crawl on the open web, with very limited filtering. As for why it took so long to break down: the version Endless Seinfeld was initially using had content gates built in (and they worked fairly well), but the creators experienced an outage and switched to an earlier version with significantly weaker content filtering. And boy, did that come back to haunt them.
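To make “content gates” a little more concrete, below is a minimal sketch of an output-side gate that checks each generated script line before it airs. The blocklist pattern and function names are hypothetical stand-ins (a production filter would lean on a moderation model or API rather than a handwritten regex list), but it illustrates the point: lose the gate, and whatever the model writes goes straight to the stream.

```python
import random
import re

# Placeholder pattern, not a real term; a real system would use a moderation
# classifier instead of a handwritten regex list.
BLOCKLIST_PATTERNS = [r"\bslur_goes_here\b"]


def is_airable(line: str) -> bool:
    """Return False if a generated script line trips the content gate."""
    return not any(re.search(p, line, re.IGNORECASE) for p in BLOCKLIST_PATTERNS)


def next_airable_line(generate, max_attempts: int = 5):
    """Ask the model for lines until one passes the gate, then give up.

    `generate` is whatever callable wraps the language model; here it is
    assumed to simply return a string of dialogue.
    """
    for _ in range(max_attempts):
        candidate = generate()
        if is_airable(candidate):
            return candidate
    return None  # skip the beat entirely rather than air something ugly


def fake_model():
    # Stand-in for the script-writing model.
    return random.choice([
        "JERRY: What's the deal with airports?",
        "GEORGE: slur_goes_here",
    ])


print(next_airable_line(fake_model))
```

Swap in a weaker model with nothing like this sitting in front of it, and there is nobody between the generator and the broadcast.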

Jokes that don’t “sound” racist or transphobic to an AI with no strong concept of either, but are written with the cadence of a joke, will inevitably sneak into these productions. The AI understands what a punchline is grammatically, but not in the abstract. How many jokes, racist or not, start with “an [X] and a [Y] walk into a bar”? How can the AI tell where it’s supposed to draw the line? A human certainly can. Many of the edgier versions of that joke are left anonymously on social media platforms, safely sequestered away from the poster’s real name and life. Posters say things on Reddit they’d never say out loud, for example. The robot has no such protection and no ability to read the room – it reads those jokes out loud as if it’s seeing them for the first time. All jokes are equally funny to an AI that doesn’t have a sense of humor itself.

Worse, actually stopping this from happening in the first place is incredibly difficult because the program is so complex. ChatGPT knows what slurs are; it’s just been politely asked by its creators not to say them. Even then, something sometimes slips out if the question-asker is tricksy enough, and patching up those leaks is a long-term project.

You Can’t Have Something Forever

Shows are usually started with the understanding that they will one day end. When human writers run out of content, the show usually ends: the characters have their arcs resolved, and the writers move on to new projects. Shows like Fairly Oddparents, where every possible sitcom end-of-life trope is used to introduce new material (adding a baby, adding a dog, adding a “long lost cousin” type character who sucks away time from the flanderized main character, etc.), demonstrate what happens when the network won’t let a cash cow go: the show dies twice. The Simpsons is still going, a bizarro-world version of the original that may as well be a parody of itself now. The same goes for Spongebob. Some people herald AI-generated content as a solution to such problems, allowing those mainstay shows to become permanent fixtures of their channels, but the problem would still exist even if AI were writing the scripts. There is no accounting for material fatigue. There’s a joke that The Simpsons has done everything there is to do on TV – how many more wacky hijinks could someone expect Lisa to get into, for example, unless she turns into a character that is no longer Lisa, one that doesn’t learn anything from anybody? How much time can an AI buy a show without repeating other, better material, or without writing a completely different, genericized show? How long can it keep going after that, even if the owners of the property find that acceptable?

The Phantom of the Opera, a Broadway show that ran from the eighties onward, employed several members of its orchestra from the day the show began. Phantom of the Opera is a legend. A career-maker. Culture changed around Broadway when that show was running! New techniques were developed so a chandelier could come crashing down in front of the audience every night! It’s one of very few great Broadway-to-movie musicals. The script was always the same, and yet every fresh casting of Christine or the Phantom gave the role new life in spite of that, delivered the same lines on that stage slightly differently, carried themselves a little differently. And yet this incredible hour in history, a blink of an eye that could have gone on as a tradition perhaps forever, ended. That ending coincided with the arrival of Bad Cinderella in America, a show that fell off Broadway embarrassingly soon after its release. It doesn’t matter who’s writing it, whether the story progresses or stays the same: there is no content that can live forever, changed or not.

No matter how good something is or was, we’re going to lose it. AI will not stop this, partly because even people can’t – the AI relies on people to fuel its modeling, so it has human limitations when it comes to imagination even if it has a robot’s writing endurance. A sequel to the movie Phantom of the Opera exists, and it’s not very good. Many of Disney’s Golden-Age-era movies have sequels too, and they’re generally nowhere near as good as the originals. Demanding a beautiful, brilliant story continue past its obvious conclusion, because viewers can’t bear to watch such a wonderful movie or TV show end, is just killing it a different way.

https://arstechnica.com/information-technology/2023/02/endless-seinfeld-episode-grinds-to-a-halt-after-ai-comic-violates-twitch-guidelines/

https://www.npr.org/2008/08/10/93419533/phantom-of-the-opera-20-years-in-the-pit