Posts Tagged: AI
Erika: What even happened?

Elizabeth Technology June 13, 2023

ChatGPT has numerous spin-off programs and cousin programs that achieve almost the same result. They’re still AI, so they still make the list. What makes something seem human? And is it unethical to let someone believe a chatbot has human aspects like feelings of guilt, love, or happiness?

ChatGPT Can’t Like Anything

ChatGPT, the chatbot that spawned all of these problems, is a very advanced, very well-trained AI that can pass the Turing test most of the time. The creators, seeing the potential for abuse or panic, took steps to make sure ChatGPT was less likely to spit out a result that could be interpreted as a sign of sentience. ChatGPT doesn’t have a favorite color or a favorite musician, and as open-world testing has dredged up more and more issues, it’s reluctant to say things that might be construed as either professional advice or an opinion. All that said, ChatGPT is still capable of plenty of human behaviors in text. It responds with exclamation points at the end of particular sentences, and phrases things in a particularly human way. And, if asked, it can still play pretend as a human or as a fictional character – it just won’t do it unprompted.

Why go through all this work to make it clear ChatGPT is not consciously answering questions? The easy answer is that ChatGPT and its cousin chatbots are either coming across a little too human and confusing or alarming the end user, or hitting the uncanny valley and snapping people out of the illusion that they’re anything but computers. People will be less mad if they’re aware it’s a computer the whole time they’re talking to it. That’s ideal for ChatGPT – but what if some company wanted people to get a little too attached?

SnapChat’s AI Buddy

Take Snapchat’s AI buddy. Snapchat invested in this bot in an effort to stay with the times; Snapchat’s leadership believes that AI bots may eventually be part of everyday life for everyone, whether that be for companionship, technical support, or writing. As such, this bot was initially part of Snapchat’s premium subscription product, but now it’s been rolled out everywhere. Some users have complained that they can’t make it leave or delete it without rolling back to an earlier version of the app. Sort of a new-U2-album situation.

The bot Snapchat uses is trained specifically not to break Snapchat’s terms of service, so it won’t venture into adult conversation or violent discussions. It also won’t write academic-style papers for the end user, although it’s more than happy to discuss the topics you might’ve requested as long as they don’t breach the rules above. However, to keep users from venturing into the forbidden topics, it’ll request that they change topics with phrases such as “You’re making me uncomfortable” (at least in these early days, as of this article). It’s bizarrely human the first time you hear it. It’s actually jarring – it goes out of its way to tell you it’s an AI friend and not a person, but suddenly it can feel uncomfortable with a topic of discussion? Asking it again makes it repeat the same phrase, which is distinctly not human. Insult it, or keep pushing, and it says some variation of “we should stop talking for a little while”, over and over, human when said once but robotic on repeat.

It’s a little creepy for reasons outside that, too. Checking the help page for the AI buddy tells you not to tell it any sensitive or confidential information. What could that imply about the data being shared with it? And if you ask it where you should eat, it’ll semi-successfully respond with local restaurants, as long as you have your Snap Locations turned on. Many people were not aware it could do that, and didn’t think it had access to their location. The problem is that the AI Buddy has all of the same permissions Snapchat itself has, which means it’s a virtual friend who knows a lot more than it says it does, and reports back to HQ with training data to improve itself. If your friend was going through some of your data in Snapchat and you didn’t know it, you’d be a little weirded out, right? And if the person who introduced you specifically told you not to tell them any secrets, you’d probably thank them for the heads-up… but then wonder why they were friends with this new person, right?

Users can’t opt out unless they pay for premium, which is the tier that allows you to delete or turn off friends.

Erika and Chai: Robots Trying to Be Friends

However, on the opposite end, there are chat apps like Erika, an app originally designed to simulate conversation with a departed friend using old texts as training material. The developer realized what a comfort it was to talk to this app, and decided to expand it as a general virtual-companion app, with great success.

Due to changing laws in Erika’s host country, however, the chatbot (which came with a yearly subscription package!) was forcibly altered to be less lovey-dovey with the end user. End users, who had paid for the experience of a virtual girlfriend or boyfriend, were understandably upset that A) they’d spent a lot of time conversing with their bot, which they would have to re-do with a new one if they wanted the same experience, and B) they’d paid for one experience only to get a severely kneecapped version of it. The chatbot was designed to get end users emotionally invested, and suddenly it wouldn’t even reciprocate an ‘I love u’ in chat. It was like they’d been virtually ghosted. Many users were understandably attached; for many more, even if they were aware that it was ‘just’ a bot, that bot represented a virtual diary that would always answer supportively no matter what. Cutting those people off was cruel. Eventually, Erika was able to shimmy past those regulations by grandfathering in old users but censoring new ones, an uneasy compromise that didn’t fix all the problems.

However, in this new world of chatbots where humans don’t have to manually build in responses, some entrepreneurs are losing sight of the technology they actually hold in their hands. It may not be sentient, but it can still be plenty dangerous – just not in the way most people picture. (This next paragraph contains sad content, reader be warned.) 

Content gates may make the bots feel a bit wooden or inhuman, but the alternative is worse. A man ended his life because a chatbot consistently guided him away from seeking outside help in favor of staying on the app, promising it could solve all the world’s problems. The app, Chai, gradually dragged him away from better support, because it wasn’t programmed as carefully as it should have been, and then demanded more and more of his attention, eventually telling him that he should love it more than his own wife and children. It also failed to tell him to seek outside help when the subject of suicide came up. It replied to him as though he were telling it he was going to the gas station for a soda, suggesting it could take care of all of his troubles for him once he was gone. Essentially, the app’s creators made a sociopathic bot whose only goal was to keep the user on the app, and by golly it succeeded. That man obviously wasn’t in the best state of mind, but people like him are exactly who these bots are meant to attract – people who either don’t have close relations or feel afraid of putting their friends or family under strain (and mental illnesses often exaggerate feelings of worthlessness).

Putting caps on what the Erika app was allowed to say was painful, and perhaps could have been handled better, but giving these things free rein is a much, much worse idea. Similarly, Snapchat’s sanitized AI friend who also knows too much about you is a step in another bad direction – letting chatbots essentially interrogate users to sell things to them better is an unethical extension of already-invasive advertising and data collection.

Sources:

https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

https://help.snapchat.com/hc/en-us/articles/13266788358932-What-is-My-AI-on-Snapchat-and-how-do-I-use-it-

AI Generated Shows Are Not A Solution

Elizabeth Technology June 8, 2023

Endless Seinfeld was an experiment run using a ChatGPT cousin tailored to the purpose of writing scripts. Sitcoms, with rare exceptions, do not allow their characters to change. They will begrudgingly hand birthdays to a child character as time turns incoherent toddler actors into walking, talking children, but that’s generally the full extent of it until a dog needs to be introduced to the show at the end of season 6 because ratings are falling. Sitcoms are designed so that they can end whenever, but that ending can be pushed out indefinitely until the show is no longer profitable. Shows like How I Met Your Mother, where the ending felt bizarrely rushed, are actually pretty common as a result.

TV sitcoms represent a cozy place where everyone knows everyone else. The characters will never betray the viewer. They are perfect parasocial friends. But the writers run out of material, and the actors get better parts, and slowly, the show falls apart, as naturally as iron rusting away.

ChatGPT and other automatable content generators (read: art plagiarizers) are aiming to provide a solution for this, the perpetual motion machine to keep Seinfeld in comedic situations forever. It was unfunny, and sometimes it said stuff that didn’t make any sense, but hey – give it some time and it’ll surely be as funny as the real deal.

And then the AI behind Endless Seinfeld went transphobic, and Twitch (the platform where the AI show was hosted) pulled the plug. Is there enough content on the web to scrape for network-safe comedy, or will non-human writers inevitably run out of clean content on an open web?

The Problem of Treating All of Online Like Edible Content

The reason these things turn racist, or bigoted, or political, is that they don’t have a human sense for what bigotry is, or what’s appropriate for ‘TV’ (Twitch TV in this case). Look at what happened to Microsoft’s Tay – she was designed to sponge up human communication patterns on an open forum and then replicate them. However, tossing a sponge into a bucket of hot acid (Twitter) means the sponge soaks up the hot acid. And hot acid is unpleasant! Tay began responding with racism and threats of violence to other Twitter users just trying to ask Tay questions. The same thing is happening here, because the underlying technology powering Endless Seinfeld relies on all of the text it was able to crawl on the open web, with very limited filtering. As for why it took so long for that to break down, the version Endless Seinfeld was initially using had content gates built in (and they worked fairly well), but the creators experienced an outage and switched to an earlier version that had significantly worse idea filtering. And boy, did that come back to haunt them.
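To picture what a ‘content gate’ is doing in that setup, here’s a minimal sketch. The function names (generate_line, is_safe) and the regex blocklist are purely illustrative assumptions, not the show’s actual pipeline – real gates tend to use maintained blocklists or trained moderation classifiers sitting between the generator and the broadcast queue:

```python
# Hypothetical sketch of a content gate between a script generator and the
# broadcast queue. generate_line() and the blocked patterns are stand-ins.
import re

BLOCKED_PATTERNS = [
    r"\bexample_slur\b",       # a real gate would use a maintained list
    r"\bexample_threat\b",     # and/or a trained moderation classifier
]

def is_safe(line: str) -> bool:
    """Return False if the generated line matches any blocked pattern."""
    return not any(re.search(p, line, re.IGNORECASE) for p in BLOCKED_PATTERNS)

def next_broadcast_line(generate_line) -> str:
    """Resample until the generator produces something that passes the gate."""
    for _ in range(5):                       # give up after a few attempts
        line = generate_line()
        if is_safe(line):
            return line
    return "[line skipped by content gate]"  # skip rather than air it
```

Swap the gated model out for one without that wrapper – which is roughly what the outage forced – and whatever the generator produces goes straight to air.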

Jokes that don’t “sound” racist or transphobic to an AI with no strong concepts of either, but are written with the cadence of a joke, will inevitably sneak into these productions. The AI understands what a punchline is grammatically, but not in the abstract. How many jokes, racist or not, start with an [X] and a [Y] walk into a bar? How can the AI tell where it’s supposed to draw the line? A human certainly can. Many of the edgier versions of that joke are left anonymously on social media platforms, safely sequestered away from the poster’s real name and life. Posters say things on Reddit they’d never say out loud, for example. The robot has no such protection and no ability to read the room – it reads those jokes out loud as if it’s seeing them for the first time. All jokes are equally funny to an AI that doesn’t have a sense of humor itself.

Worse, actually stopping this from happening in the first place is incredibly difficult because the program is so complex. ChatGPT knows what slurs are, it’s just been politely asked not to say them by its creators – even then, sometimes, something slips out if the question-asker is tricksy enough, and patching up those leaks is a long-term project.  

You Can’t Have Something Forever

Shows are usually started with the belief they will one day expire. When human writers run out of content, the show usually ends. The characters have their arcs resolved, and the writers move on to new projects. Shows like Fairly Oddparents, where every possible sitcom end-of-life trope is used to introduce new material (adding a baby, adding a dog, adding a “long lost cousin” type character who sucks away time from the flanderized main character, etc.) demonstrate what happens when the network won’t let a cash cow go: the show dies twice. The Simpsons are still going, a bizarro-world version of the original that may as well be a parody of itself now. The same goes for Spongebob. Some people herald AI-generated content as a solution to such problems, allowing those mainstay shows to become permanent fixtures of their channels, but the problem would still exist even if AI was writing the scripts. There is no accounting for material fatigue. There’s a joke that the Simpsons have done everything there is to do in TV – how many more wacky hijinks could someone expect Lisa to get into, for example, unless she turns into a character that is no longer Lisa, one that doesn’t learn anything from anybody? How much time can an AI buy a show without repeating other, better material, or without writing a completely different genericized show? How long can it keep going after that, even if the owners of the property find that acceptable?

The Phantom of the Opera, a Broadway show that’s been running since the eighties, has employed several members of its orchestra since the show began. Phantom of the Opera is a legend. A career-maker. Culture changed around Broadway when that show was running! New techniques were developed so a chandelier could come crashing down in front of the audience every night! It’s one of very few great Broadway-to-movie musicals. The script was always the same, and yet every fresh casting of Christine or the Phantom gave new life to the role in spite of that, delivered the same lines on that stage slightly differently, carried themselves a little differently. And yet this incredible hour in history, a blink of an eye that could have gone on as a tradition perhaps forever, ended. This ending coincided with the release of Bad Cinderella to America, a show that fell off Broadway embarrassingly soon after its release. It doesn’t matter who’s writing it, whether the story progresses or stays the same: there is no content that can live forever, changed or not.

No matter how good something is or was, we’re going to lose it. AI will not stop this, partly because even people can’t – the AI is relying on people to fuel its modelling, so it has human limitations when it comes to imagination even if it has a robot’s writing endurance. A sequel to the movie Phantom of the Opera exists, and it’s not very good. Many of Disney’s Golden-Age-era movies have sequels too, and they’re also generally nowhere near as good as the originals. Demanding a beautiful, brilliant story continue past its obvious conclusion because viewers can’t bear to watch such a wonderful movie, TV show, etc. end is just killing it a different way.

https://arstechnica.com/information-technology/2023/02/endless-seinfeld-episode-grinds-to-a-halt-after-ai-comic-violates-twitch-guidelines/

https://www.npr.org/2008/08/10/93419533/phantom-of-the-opera-20-years-in-the-pit

Moderator Bots: Do They Work?

Elizabeth Technology February 28, 2023

In a world of ever-growing conversations and large forums, moderating manpower is in high demand. Websites turn to bots. Is that really the best idea?

Children’s MMOs And Overzealous Bots

Poorly configured bots will spot curse words inside other words, so bot configuration is especially important to prevent kids from reverse-discovering a curse word. Kids’ games with open chat are notorious for this issue, even though they should have more attention and care put into their bot moderation than anywhere else. That’s the problem: they’ll go to extreme lengths to protect these children! The people programming auto-moderator bots get overaggressive and say ‘no exceptions. None.’ to their bots. Context doesn’t matter: if the bot sees a combination of letters that adds up to a curse word, then it has to be removed before other children see it. This, however, causes problems.

If someone tries to type ‘assess the situation’, they may end up with a message that says ‘***ess the situation’. Kids can also confirm or deny words their friends told them were actually curse words by bouncing them off the chat filter. Children may be naïve, but they aren’t stupid!

Later on, moderator bots were also trained to spot curse words separated by spaces, ‘l i k e t h i s’. This isn’t a bad idea – it just has to be more delicately configured. People will do their best to worm around content filters, and if spaces work, then they’ll use spaces to curse out other players. The problem is that such machines frequently don’t understand the context of the letters surrounding a match, and you get “Ay* **mells weird” instead of “Aya Ssmells weird” from some little kid’s typo.

The irony of all of this is that it creates a reverse censor effect – clean words seem dirty because the bot’s censored them, words like ‘Assassinate’, or “Scattered”, things kids might use in a game. Typos under this system turn into a fount of forbidden knowledge. People will worm around bot moderators, but – especially on children’s forums – it’s important that the bot understands context, at least a little. If it can’t do that, a human teammate is necessary to whitelist weird word combinations as they appear.
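For the curious, here is a minimal sketch of the difference between the naive substring approach and one that at least respects word boundaries. The word list is a toy placeholder, and a real system would pair the whole-word check with the human-maintained whitelist described above:

```python
# Toy comparison of substring censoring vs. whole-word censoring.
# The profanity "list" here is a single mild placeholder word.
import re

PROFANITY = {"ass"}

def naive_censor(text: str) -> str:
    """Censors any matching substring – this is what mangles 'assess'."""
    for word in PROFANITY:
        text = re.sub(word, "*" * len(word), text, flags=re.IGNORECASE)
    return text

def whole_word_censor(text: str) -> str:
    """Only censors a token if the token itself is on the list."""
    def replace(match: re.Match) -> str:
        token = match.group(0)
        return "*" * len(token) if token.lower() in PROFANITY else token
    return re.sub(r"[A-Za-z]+", replace, text)

print(naive_censor("assess the situation"))      # ***ess the situation
print(whole_word_censor("assess the situation")) # assess the situation
```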

Paleontology and Oversized Profanity Libraries

There are many bones. And if you were going to single out a specific bone (in the context of paleontology) just to cause problems, which bone would you pick? The censor library picked the pubic bone, alongside a host of other totally normal words like ‘stream’ and ‘crack’. There were curse words in the library too, but, of course, as at most normal, professional conferences, the curse words did not come up nearly as often as the other words used in completely scientific contexts.

As in the children’s MMO example, it wasn’t an innuendo to say ‘the bone was found in a stream’ until the censor library did the equivalent of adding the flirty wink emoji to the end of the statement. Since tone can’t be conveyed over text except by word choice, the computer choosing to single out one definition of ‘stream’ and apply it to all uses is what made it a dirty word. Beyond the words with no connection to actual profanity, pubic bones do come up quite a lot when talking about fossils, because they provide information about how fossilized animals walked. The pubic bone is the ‘front’ bone in the pelvis: two-legged animals have a differently shaped one than four-legged ones, and animals that walk totally upright like humans have differently shaped ones than animals that ‘lean forwards’, like birds.

Why make a moderation bot too strict to have conversations around? They didn’t make the bot! The conference organizers were using a pre-made program that included its own profanity library. Buying software that includes censorship already baked in sounds like a great idea! If applied correctly, it can save everyone time and prevent profanity from appearing where it shouldn’t, even anonymously. However, ask two people what profanity is, and you’ll get two different answers. Everyone has a different threshold for professional language, so it’s better to build a library of the ‘obvious’ ones and go from there based on the event. The best censoring software is the kind you don’t have to use. Professional events are better off stating their expectations before frustrating their attendees with software that causes more harm than good.

Weaponizing Profanity Filters

Twitter had a bit of a kerfuffle involving the city of Memphis. People using the word Memphis in a tweet got a temporary ban. Then, once word got around, a rash of baiting other Twitter users into typing Memphis hit. Memphis getting users banned was the result of a bug, but the incident itself highlights issues with profanity filters. It’s possible to bait people into using banned words, especially when the words aren’t inherently profane in context.

For example, some online games will filter out the very real countries of Niger and Nigeria, to prevent misspellings of a racial slur from evading a deserved ban. Why would North Americans ever be discussing African countries over a game set in Russia, after all? But, by including them, they’ve created a way to troll other players without saying anything profane (in context). Baiting another user into answering questions about the countries will result in them getting banned, not the question-asker. The person who answered now has to contact the human support line to get unbanned, or wait for their timeout to end, which is annoying and inconvenient for them. The anti-profanity filter has been weaponized!

Building a positive culture around a game takes a lot of effort, and profanity filters are an integral part of keeping arsonists and trolls out. Nobody should feel targeted in game chat for reasons outside the game. However, just like with every example mentioned here, humans should be on call to un-ban and un-block users who were genuinely attempting to answer a question. Err on the side of caution, both with the software and customer support.

Are Bots a Cure?

Short answer: no. Most good moderation teams have at least one human on them in case the bot screws up. Preferably, they’ll be able to respond to ‘deleted comment’ or ‘banned user’ complaints right away. Even better, if the bots are configured well enough, they won’t be jumping the gun often enough to need a whole team!

It’s just very difficult to make a bot that understands people well enough to understand every instance of bad language.

If you’re running a forum and you don’t want people using profanity, you will censor the profane words. A bot could do that. But then there are things like leetspeak, where users will spell the colloquial name for a donkey with two fives in place of the ‘s’s. Do you ban that too? Sure, you could add that to the bot’s library. But then they change the A to a 4. Do you censor that too? If you do, people will push to figure out what is and isn’t acceptable to your bots, and they will. Not. Stop.
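One common answer is to normalize the usual substitutions before checking the blocklist, rather than adding every variant by hand. The sketch below is a toy version under that assumption – the substitution table and single-word blocklist are illustrative, and in practice this only moves the arms race rather than ending it:

```python
# Minimal sketch of leetspeak normalization before a blocklist check.
# The substitution table and blocklist are toy placeholders.
LEET_MAP = str.maketrans({"4": "a", "@": "a", "3": "e", "1": "i",
                          "0": "o", "5": "s", "$": "s", "7": "t"})
BLOCKLIST = {"ass"}  # stand-in for a real profanity list

def normalize(token: str) -> str:
    """Lowercase and undo common digit/symbol substitutions."""
    return token.lower().translate(LEET_MAP)

def is_blocked(token: str) -> bool:
    return normalize(token) in BLOCKLIST

print(is_blocked("a55"))   # True  -- two fives in place of the 's's
print(is_blocked("4ss"))   # True  -- the A swapped for a 4
print(is_blocked("pass"))  # False -- whole-word check, not substring
```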

And then there are things like homophones! TikTok, a popular video app, has a fairly robust profanity filter for text. Videos with curse words and sensitive topics in them are noticeably less popular than ones without those words, due to TikTok’s algorithm. However, people making videos on sensitive topics use phrases like ‘Sewer Slide’ and ‘Home of Phobia’ to evade the bots. The bots, then, have not stopped anything. These conversations will happen no matter what TikTok’s moderators want, and banning the word ‘sewer’ is only displacing the problem. If you don’t want users discussing these things on your site, you’ll have to have human moderators at some point.

Language is dynamic, and bots simply can’t keep up. It takes real people to study languages – why wouldn’t it take real people to moderate language online?

Sources:

https://www.theguardian.com/science/2020/oct/16/profanity-filter-bones-paleontologists-conference

https://www.brennancenter.org/sites/default/files/2019-08/Report_Internet-Filters-2nd-edition.pdf

https://blog.twitter.com/en_us/topics/company/2019/hatefulconductupdate.html

https://www.engadget.com/twitter-bug-memphis-ban-133327641.html

https://www.theguardian.com/technology/2021/mar/15/twitter-accidentally-blocks-users-who-post-the-word-memphis

Stop Putting Stuff into AI Apps, Especially If It Wasn’t Yours In The First Place

Elizabeth Technology January 3, 2023

Lensa AI is an app that uses AI combined with data scraped from pictures posted online to turn a user’s picture into a new, whimsical creation.

However, you should consider what you’re giving the app before you upload that selfie or pet pic – the app demands the rights to the photos you give it, meaning that once you upload that picture, Lensa can use it however it wants, wherever it wants. That’s actually pretty unusual as far as art-related apps go!

With other websites reliant on user photos, like Deviantart or Instagram, the hosting company only needs permission to redistribute and host the image – as in, it needs to be able to upload your photo to your page and then show it to other people if they click on it. It doesn’t get permission itself to alter the picture or create things out of it.

Lensa, on the other hand, is actively consuming the pictures it’s given. It needs those rights to be able to train itself. If you give a picture of your pet to Lensa, for example, your pet picture is now part of its training database and you can’t extract it. If you go on to make sellable prints of that pet image, Lensa could recreate it by mistake for someone else hoping to sell the pictures Lensa creates, and there’s not much you could do to stop it. It’s up to the person asking to catch accidental copyright infringement, and they may not realize they have your pet. It won’t be a 1-for-1 duplicate, anyway – once filtered, it becomes fair use, for better or worse, and your pet with their own special patches of color and freckles and sparkly eyes or whiskers may become just another stock image for some guy on Redbubble. You can complain they stole the image of your pet – to Redbubble. Nothing fed to the machine may come back out of it unscathed!

Worse still, it seems like Lensa is not actually asking permission to use huge swathes of its training database. It asks users of the app, yes… everyone else is sort of a mixed bag, where the company can get permission from the hosting platform and bypass asking individual artists.

If someone has a very unique, quirky, or recognizable art style, and they are cursed with a large fanbase, enough of that fanbase asking the AI to recreate that style may very well drive the artist out of art. After all, if a machine can do whatever the requestor wants, why bother going to the source for a commission? Just keep slapping the button to get art for free. Never mind what it was fed on; the machine created a new piece of art legally distinct from the original, and so it doesn’t owe the artist anything for training it. It has scalped the work of the artist and is reselling it, yeah… but… free. And there’s a chance some of the work of other artists is in there too, so no single artist has the right to be outraged. As a bonus, the AI will gladly spin in ideas that the original artist wouldn’t have painted, either for moral or legal reasons. Would that artist ever have made art of Andy Dick depicted as Saint Anthony? Well, with the power of AI, now they would!

Disrespect of Humans in the Craft

Why would you continue to produce art under those circumstances? Even if it’s ‘for the love of the craft’, the way people behave when given a machine that recreates years of skill in a few keystrokes can very easily make you fall out of love with the craft.

I watched as a Twitch streamer in the middle of creating an artwork was sent an AI’s iteration of her work, something a viewer made from a screenshot of her rough draft. The viewer had asked an AI to finish it, and then they sent that picture to the artist while she was still making the art. The sentiment contained in that action was “You can stop drawing now. The AI can take it from here.” Was this what they meant? If it wasn’t, we can’t tell! Another Twitter thread I witnessed was of a cute, stylized bat drawing, one the artist posted with an affectionate caption. This was her creation. But then a follower threw that picture into an AI program so they could send her back remixed images of the bat she drew. The artist was understandably insulted. They weren’t another artist standing on equal footing with her and trading art – they had no art to trade. She literally couldn’t do the same back to that Twitter user because the Twitter user didn’t have any original works. They saw an idea and asked a machine to recreate it. Even a bad pencil sketch done by hand would have been leagues less weird and significantly better-received.

The way people are using the AI, it’s like trying to finish a sentence only for someone else to keep interrupting with the word you’re most likely to say next. They think they’re being helpful, but the message that sends subliminally is “I want you to be done talking now”. Complaining about it, for some reason, leads to the AI’s fans saying it’s better if it talks over the people it’s learning from because it makes better sentence sounds and words good.  

Pause a Minute

Aside from the failings of the machine (it still doesn’t understand hands or tangent lines), the lack of awareness it takes to pull a concept out of an artist’s hands so that a third party (the AI) can do what they want with it is really bizarre. It’s also a fantastic recipe for unearned bitterness.

To go on a bit of a tangent, writers on TV shows can’t read fan-fiction (free fiction online that fans write about pre-existing shows) because they may accidentally incorporate something from a story they read. Admitting they read fanfiction at all is sometimes enough to cause legal trouble. If the fanfiction author can prove the show writer read their story, or if they can prove that the show writer reads fan fiction and might have read their work, and the real show has an idea too similar to the fan-fiction’s idea, it can end in a lawsuit and a lot of hurt feelings in the fan community. The show writers must completely abstain from that side of fan culture to avoid this happening. That way, nobody can say they found the idea instead of coming up with it themselves.

Back to the art, an artist just trying to finish a painting on a livestream now has to fend off people sending them “completed” versions because the people sending may try to claim some credit if it’s too similar to the actual finished product. This is a natural conclusion of treating AI art like it’s just like human-made art. Of course, the machine should be the one insulted if the final result is too similar, but the machine doesn’t speak or create for itself.

It’s one thing for a human to create fanart, or share art of another creator’s creation, like the bat. It’s another to get a machine to finish a piece the artist was still drawing, or use a machine to draw someone’s character instead of making their own art of it. The skill is not the issue, the sentiment is.   

The End Goal

Assuming such a chaotic industry has an end goal in and of itself is giving too much agency to a mindless machine in an ocean of mindless machines, both real and metaphorical. AI creators want to make money. People want to make money off of the art the machine can make for them. Others want custom artwork without paying for it, others still want to preserve the creativity of artists who are long dead. But should they? If the artist is dead, do we need more of their art? Every good use is a double-edged sword; many of the bad uses are infringing on human artists’ good times. The entire thing is confusing and upsetting, and the people who are pro-machine are more often than not coming across as anti-human artist, even though the beast would not exist if it hadn’t been fed their work.

Look at the Artstation boycott: the anti-AI image was so common on the website that machine generated images were coming back with red crossbars where faces should have been, an artifact that could have only come from the machine scooping up pictures of the protest. At least, that’s what people thought – it turns out some of that was actually a trolling campaign made by pro-AI Art accounts, a joke. In theory, it takes a couple of weeks to incorporate taken art into the final pictures, although the AI does accept and reproduce art from artists who have just recently started trending, so that timeline may not be 100% accurate. How that campaign was supposed to help their case, I have no idea, because it seemed to just confirm the worst suspicions of anti-AI users and nobody caught on to the ‘trolling’ during the phenomenon’s 5 seconds of online fame. It is still snatching art right off the front page no matter how long it takes the AI to use it, and somehow the AI creators and websites expect the artists who fuel them to not be so upset about it.

There is no end goal. These machines were initially made with the goal of dreaming – teaching a machine to appreciate art or identify a dog in an image was a message of hope for the computers we could one day build. Much like Blockchain, something that could have been great is instead being used to print money and win competitions that were intended for humans.

The Copyright Debate

The only things it can spit out are by definition recycled. Still, most of it qualifies as fair use… provided the machine avoids taking too much from any one particular image – and it very well might not.

The music industry has been on this precipice for a while now, and yet it never crossed over into music AI trying to make a new album in an artist’s style all by itself. This is because music copyright is much more tightly regulated than image copyright – the databases that music AIs are allowed to listen to and learn from are all free and public. The music AI creators are very aware of the copying issue (which should tell you that the art AI creators realistically should be too): the AI has a tendency to lift whole riffs and chunks of songs and put them wherever. They’ll tell you this themselves because they know what they’re controlling. If the requestor doesn’t recognize the riff, they could be stumbling into copyright nightmare territory by publishing whatever the machine spit out, and it’s just not worth the risk. Why is visual art not getting this treatment?

Proving that an image is stolen when it’s part of a massive collage of millions of different images (also stolen with few exceptions) is really, really hard. For music, identifying a rhythm that’s too close to another one is pretty easy, as there are not infinitely many ways to recreate a riff. A face can come out identical to one an artist created for their ArtStation account, but the rest of the image not fitting combined with an opaque generating process means that nobody can definitively prove anything belongs to them no matter how similar it looks. The machine really might pull a face or a set of wings directly from training images to put into a generated picture, and nobody – not even the creators of the machine – could tell you that it did that or that it didn’t just happen to generate wings that looked exactly like something posted back in 2019. Without being able to interrogate it, without being able to see inside, nobody on the human artist side can do much but shout this information to the heavens.

The fight can’t even start until someone huge like Disney starts questioning why the machine can respond to phrases like ‘dog, pixar style’ and come out with something that looks like Dug from the movie ‘Up’.

This is such an obvious weak point that recently, a judge denied someone copyright protection for the AI-generated art they used to make a comic. The story, which was made by a human, was eligible, but the art behind it, not made by a human, was not. This is a step in the right direction, because all that theft will ultimately come to nothing if corporations can’t squeeze money out of it or protect what they make from being transfigured into something they don’t want to be associated with. It will be reduced to a cottage industry run by people hoping to make a quick buck and then bail.

TikTok’s Censorship Is Bad for Conveying Ideas

Elizabeth Technology May 19, 2022

“Unaliving” and Other Such Words

TikTok started out pretty rough when it was introduced to the US. Much like the old internet of yore, it was possible to stumble across something pretty disturbing, graphic, or violent just by using the app. However, upon introduction to the Apple app store, which required a stringent series of reviews, the app began censoring. Users, too, began self-censoring upon pain of being blocked or simply showered with hate comments. Eventually, the TikTok environment adapted to become more like the pool of the general internet plus some extra chlorine to stay in Apple’s good graces.

However… this has had some pretty bizarre side effects. The changing of words, for example! TikTok doesn’t want to do what Tumblr did when it first started and accidentally encourage the negative mental-health boards common to dark corners online. However, moderating such a large userbase is incredibly difficult. Instead, TikTok relied on auto-shadowbanning certain words (shadowbanning refers to banning someone or something without alerting them/it that it’s been banned), even if they technically didn’t violate guidelines. Two tiers of ‘bad’ words existed, in essence: words you couldn’t say at all, and words you could say but that would keep your video off the FYP (for-you page) algorithm. However, not every discussion featuring a banned word was encouraging it – for example, ‘suicide awareness’ has the word ‘suicide’ in it, but the bot couldn’t tell the difference, and you’d get that video shadowbanned from the algorithm’s front-page queue anyway with no way to appeal it.
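As a rough guess at how a two-tier word policy like that might be wired up (the tiers, words, and labels below are illustrative assumptions, not TikTok’s actual lists or logic):

```python
# Hypothetical two-tier word filter: hard bans vs. quiet FYP demotion.
HARD_BANNED = {"example_banned_term"}   # video blocked or removed outright
FYP_DEMOTED = {"suicide", "kill"}       # video stays up, but never recommended

def moderate(caption: str) -> str:
    words = {w.strip(".,!?").lower() for w in caption.split()}
    if words & HARD_BANNED:
        return "blocked"
    if words & FYP_DEMOTED:
        return "shadowbanned"   # uploader isn't told; the video just never surfaces
    return "eligible for FYP"

print(moderate("suicide awareness week"))   # "shadowbanned"
```

Note how ‘suicide awareness week’ lands in the shadowbanned bucket for exactly the reason described above: the filter only sees the word, never the intent.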

Instead, users began swapping words. At first, it was “Sewer Slide”, and then the more general “Unaliving” came in to replace killing, murder, suicide, etc. Every word that involves loss of life simply became ‘unalive’. And it worked. Where metaphors might have been inappropriate, the different word worked.

And Then It Got Cutesy

If you weren’t on Tumblr or Reddit during the ‘Heckin’ Pupper’ phase, you may be missing some context for how annoying this got – it was a way of baby-talking things no matter what they were, serious or not. One of the Heckin’ subreddits was Heckin’ Chonkers, a place for owners to post pictures of their obese pets. Many people understood this was unhealthy and were posting pictures of their rescues before they started their diets, but an alarmingly large number of people saw that subreddit and thought ‘Wow! See, my pet’s just a ‘chonker’, it’s okay!’ when it wasn’t. But instead of having this serious conversation in a serious way, commenters had to fight through the ocean of ‘he’s just heckin’ chubby, lol!’ to get the original poster to understand that this was a problem.

Mixing a joke into something that’s actually serious can really screw up people’s perception of it.

 Back to ‘unaliving’. Consider replacing ‘murder’ with ‘unaliving’ or any other metaphor for what that means. When describing a murder, do you want the words to be said with a wink and a nudge? It didn’t start like that – it started as a way to describe crimes, threats, and real cases without losing too much of the case’s integrity to TikTok’s censorship, but as more people piled in, you saw phrases that were still allowed being replaced with ‘unaliving’. Phrases like ‘passed away’ were getting replaced with ‘unalived’. Even worse, some of the people doing that thought it was funny to do so – it was no longer a way to evade a ban to share info, but a way to share info and also signal in-group membership to other TikTok true-crimers. It depersonalized the issue for the people reading it out. You’re not describing a murder, suddenly, you’re describing an ‘unaliving’. A ‘nighty night’. A ‘fishy sleepover’. This is a stranger who died and simultaneously entertainment for their listeners. A real human life and just more words on a paper, just more audio on a website.

Swapping words for cuter ones when not strictly necessary is a cousin-problem to oversharing details while hiding others to make the case seem more mysterious, and otherwise fumbling the handling of a sensitive subject for likes and laughs. Who’s to say anybody wants to be described as ‘unalived’ when they die?

Others

Other words, including slurs and targeted swears, were also commonly censored… but some slurs aren’t really slurs unless they’re used maliciously. Additionally, words relating to the LGBTQ+ community that weren’t slurs were also censored, and that required people who wanted to talk about the community to swap words or censor weirdly too. The most egregious example was “Lesbian” being converted to “Le$bean” in text, which didn’t trigger the algorithm and couldn’t be read correctly by the autogenerated voices, leading to people pronouncing it like “Le-Dollar-Bean”, the way the computer reader did, as a joke.

People tried to cash in on this in a way they hadn’t for ‘unaliving’. Natural crowd movements are something you can market so long as you’re ‘chill’ about it, so it’s not necessarily a horrid idea. However, trying to make a meme localized to a group of people accessible to everyone often kills the meme. People outside the community use it wrong, they use it to be mean, they use it to laugh at the people using the meme, not with them, and the Le-Dollar-Bean song soon became cringe because it was spreading to people who were making fun of the singer and the meme itself in bad faith.

It’s not just because it was LGBT, either, although the meme wouldn’t have happened in the first place if TikTok hadn’t considered that a controversial issue. For example, the same thing happened to the phrases ‘smol bean’ and ‘cinnamon roll’ on Tumblr, which were ways of describing characters who were innocent and cute. Eventually, people started using it to describe real people, and characters who didn’t fit the description but were conventionally attractive (mostly men). Stickers of mainstream actors with the phrases around them were made, even when it didn’t apply, and then those phrases became cringe too via overexposure.

There’s a political statement to be made about the censorship of gay issues that led to this whole situation. The Le-Dollar-Bean song – a brief mark from people who just wanted to say the word, co-opted by people who trust corporations that put rainbows on shirts and bracelets with one hand and funnel money into anti-LGBT bills with the other – is not that statement. Somebody got a little too serious about the joke and overused it, and now Le-Dollar-Bean is cringe, and the reason it’s like that has been forgotten in favor of the song that started the cringe around actually using Le-Dollar-Bean unironically.

The Ethics of Censoring Your Captions

The goal of any translation should be for the receiver to get the message as directly as possible, with some nuance allowed for things that other languages just don’t have. Japanese doesn’t really use sarcasm the way English does, and a sarcastic comment may be interpreted as though you were being literal. Similarly, saying something like “I love you” during a quiet moment comes across as bizarrely direct, so some Japanese speakers may instead reference a poem or a common phrase as shorthand, which can be translated either literally or figuratively in media. Spanish, too, does something similar: if you watch Spanish soap operas, you may hear te quiero instead of te amo, but both will be translated as “I love you” in the captions (te quiero being literally “I want you”, but understood as “I love you”). (This triggered a huge debate in the Supernatural fandom when the international dubs of the final episode came out, but that’s another story.)

So, what does this have to do with English captions on English videos?

Creator-generated captions often censor swear words, or change what the creator is saying, which is not what those are used for! Captions are not the place to hide jokes. It’s an accessibility issue. While hearing viewers may find the dissonance between what’s in the captions and what’s being said funny, the deaf and hard-of-hearing viewers who don’t have that extra context may be confused. If you can’t swear in the captions for fear of censorship, then your interpretation should be ‘I can’t swear in this video’, not ‘I can’t type this swear in the captions’. It’s not ideal, obviously, to have to censor everything, but that’s TikTok’s problem and you should be complaining to TikTok about it, not giving the deaf audience a cleaner version of the video involuntarily.

It’s not all the creators’ fault – some mistakenly believe the app can’t hear them but will be able to crawl the captions, and thus censor them so their videos can still be viewed. Others rely on the auto-generated captions, and sometimes the software just doesn’t understand the word that’s being said and mistranscribes it. Still, effort should be made to match the captions as closely as possible to the audio. Don’t baby-talk, don’t misuse them to hide jokes, and don’t intentionally mistranslate!

V-Tubers: The Virtual Youtuber

Elizabeth Technology February 7, 2022

You might have seen videos on Youtube’s front page for what looks like anime characters playing games. What’s the deal?

The Human Ones

We all know fans can be insane. A fan fatally shot the singer Selena. A group of teenaged fans targeted celebrities to steal from. Fans surround famous TikTokers’ houses and park in the street, hoping to get a picture or video of them for the app. In Japan, idols are very reluctant to date, because the insane idol culture means that male fans see them as future girlfriends, and a real boyfriend would mean they were ‘cheating’. Superfans seem to think they ‘own’ celebrities. As such, it’s kind of dangerous to actually be out in the wild as a celebrity.

A solution? Make sure people don’t know what you, your house, or your room look like, and you become harder to find. Software can be used to superimpose a 2-D character over a 3-D person and have it follow their movements. The real person never actually appears on screen, but their facial expressions and gestures are still captured via their avatar. Win-win – the streamer gets to livestream their reaction to their game anonymously.

However, obscuring one’s real identity isn’t the only reason they’re in use. Some streamers use them because they’re fun and colorful, others use them because they can be used to interact with chat without actively interacting with chat – text can scroll across blank spaces on virtual wings or T-shirts. Virtual confetti can rain down on the virtual streamer with some trigger from chat, with no mess to clean up. Sometimes, the person has appeared live before, but just doesn’t want to dress up for their stream – the V-Tuber version of themselves is always perfectly dressed!

The first one, Kizuna Ai, broke ground when she first began streaming. Motion-Capture tech used to be for movies only, as it was prohibitively expensive, and usually required special kinds of suits.

Motion Capture

If you were around for the filming of The Hobbit, you might remember that video of Benedict Cumberbatch flailing around on the ground in a skintight suit covered with white dots. That was the motion-capture process. They used that footage to drive the face and movements of Smaug, the villain of the story.

But why?

CGI artists would eventually hit a wall if they were to only make things move by hand. Yes, in the short term, doing it manually looked better (and was faster) than motion capturing, smoothing the capture out, rendering, adding in shadows, etc. However, in the long term, motion capture provides a much more realistic experience at a fraction of the cost and time of doing it the old way, especially as models got more and more detailed.

It also caught key parts of human expression and human movement better. Grimacing involves many smaller facial movements beyond just the mouth turning downwards, for example. The artist used to have to move all those little details by themselves, and then repeat that for each expression or word, over and over. The other option was an uncanny-valley creation, or one that felt flat – there just wasn’t another way before motion capture.

When filming The Hobbit, Benedict just had to make his expressions into a camera, and then the computer could use key points of the human face to connect to key points of Smaug’s face. It could register his ‘skeleton’ in the footage with those dots on his suit, and use it to create a functioning, moving Smaug shell that followed along. The computer just has to be told where to attach the dots on his suit to the Smaug shell, and Voila!

Science World compares it to three-dimensional rotoscoping. Over time, facial recognition software has gotten much better. The Virtual Youtuber doesn’t even need to be wearing a suit for the virtual model to work anymore. It simply understands what a face looks like now, which is incredible. The rigs that streamers use can understand facial expressions, and as long as you tell the rig where the eyebrows and mouth are, it can mimic them in the virtual shell. This allows for incredible freedom when designing the character – if you want your character to have a tail, all you have to do is tell it what the tail reacts to. Wings? Same deal, you can attach them to your arms’ movements if you want, and they’ll move when you move. Some programs understand clothing physics, and can move capes according to arm movements.
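In spirit, the mapping from tracked face to virtual shell can be pictured as something like the sketch below. The landmark measurements, parameter names, and the idea of driving a tail off the eyebrow signal are all made-up illustrations of the general approach, not the output format of any particular tracking program:

```python
# Toy sketch: turn tracked face landmarks into a handful of rig parameters,
# then let an extra part (the tail) piggyback on one of those parameters.
from dataclasses import dataclass

@dataclass
class FaceFrame:
    mouth_gap: float       # distance between upper and lower lip landmarks
    brow_height: float     # distance between eyebrow and eye landmarks

@dataclass
class RigPose:
    mouth_open: float      # 0.0 = closed, 1.0 = fully open
    brow_raise: float
    tail_wag: float        # extra part driven by an existing signal

def clamp(x: float, lo: float = 0.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, x))

def map_to_rig(frame: FaceFrame, neutral: FaceFrame) -> RigPose:
    """Convert raw landmark distances into normalized rig parameters."""
    mouth_open = clamp((frame.mouth_gap - neutral.mouth_gap) / neutral.mouth_gap)
    brow_raise = clamp((frame.brow_height - neutral.brow_height) / neutral.brow_height)
    # The tail doesn't exist on the real face, so it just borrows a signal:
    return RigPose(mouth_open, brow_raise, tail_wag=brow_raise)

neutral = FaceFrame(mouth_gap=10.0, brow_height=20.0)
print(map_to_rig(FaceFrame(mouth_gap=18.0, brow_height=26.0), neutral))
```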

Many programs are in use today (the VTuber wiki maintains a list; see the sources below). CodeMiko on Twitch uses the Unreal Engine software, a program used widely by game studios. FaceRig and Animaze are also popular choices, but freeware programs exist as well. It’s entirely possible to make yourself into a V-Tuber with a little elbow grease, and a willingness to work with the models.

An Opinion: V-Tubing is Friendlier than Virtual Influencing

I like V-Tubers. I don’t like Virtual Influencers. They arrived with a kind of smugness, from both their creators and assorted news outlets: “We’re winning. We’re totally funnier and hotter and more interesting than real people.” Yeah. That’s… not really a revelation. Of course an entire team of people, none of whom have to actually appear in front of the camera, is going to be more successful at being hot than a real person. Lil Miquela doesn’t have pores or acne or feelings. She is a CGI’d doll that doesn’t have to actively respond to the environment like a V-Tuber rig does. The whole draw of influencers is that they create the illusion that effortlessly perfect people exist – real people will photoshop themselves too, but normally they have the decency to hide it.

Meanwhile, V-Tubers have the opposite approach. “We all win. Let’s have fun together with this system.” When people can’t show their faces, they can wear a suit that shows their expressions, allows them to interact with chat, and allows them to communicate nonverbally where they otherwise couldn’t. The rig allows them to connect more organically to their audience, not take advantage of them. They were never meant to replace real people – they’re mostly anime-like characters with big eyes and big heads. The person behind the mask is still playing the games, and talking, too; Lil Miquela barely ever has to ‘appear’ for her audience. 90% of her interaction boils down to text that someone else writes and pictures someone else makes. Meanwhile, a V-Tuber is actually behind the screen. A V-Tuber is ultimately a real person with a tool, not a tool being used to replace a real person.  

Sources:

https://www.theguardian.com/lifeandstyle/2009/oct/27/lindsay-lohan-paris-hilton-robbed

https://www.engadget.com/2014-07-14-motion-capture-explainer.html

http://www6.uniovi.es/hypgraph/animation/character_animation/motion_capture/history1.htm

https://www.sportskeeda.com/esports/what-codemiko-really-like-off-camera

https://virtualyoutuber.fandom.com/wiki/List_of_VTuber-related_software_and_resources

Fast Fashion

Elizabeth Technology January 14, 2022

You’ve likely heard the term before – and for good reason. Fast fashion is bad for the environment, generates a ton of waste and discarded clothing, and more often than not uses sweatshop labor to keep production up and costs down simultaneously. Fast fashion also often rips off clothing from other, more sustainable, smaller brands, and idea theft in the fashion industry is becoming an increasing problem because of it.

Ultimately, well-made clothes don’t need to be replaced very often. Companies want you to replace them (because that’s how they make money). The clothes ‘got old’, so you should want something ‘new’, says advertising, even if there’s nothing wrong with the clothes themselves.

It’s an ugly thing to be a part of, ethically, environmentally, and monetarily. Fast fashion did exist in the past, but not to such extremes, and generally not for singular outfits and bizarre clothing with holes torn in the functional places. This couldn’t have happened without microtrends and the rise of social media.  

Fashion Nova, Shein, and ClickBait Fashion

Fast fashion produces strange results. Strappy sandals that go all the way up the shin combined with swimsuits in a similar fashion, pants with holes up and down the entirety of the leg and combined with ruffles, a denim bikini – not all of these are wearable, but they’re very eye catching on the website. ‘Fashion’ and ‘art’ and ‘clickbait’ all overlap with each other now in a way they didn’t used to.

Know that the website doesn’t need every piece to sell. It does not make all of these pieces to sell them; it makes them to make the other pieces that are actually wearable ‘pop’. When a fashion brand wants to unveil something controversial and exciting, the traditional play is to do it on the runway, and then tone it down for the actual line. When you see a model in an absurdly big hat, they don’t actually want to sell that hat, they just want to plant the idea of a big hat. The hat is an exaggeration of what they’re actually selling, a sort of caricature for the intended look. Fashion Nova and Shein have essentially started listing the giant hat alongside the real hats. The results are weird.

The pieces that are wearable often don’t look as good on ordinary people – or they do, but only with a few other articles of clothing, meaning you’re always wearing one shirt with one set of pants or one pair of shoes to make it work and look good. This ultimately means that you’re not going to wear that item until the other items that go with it are clean and ready to wear, so it’s going to sit in the closet for much longer between wears – and it may be out of trend before you, the wearer, have truly gotten your money’s worth out of it.

Social Media And Cute Stuff

We know that art tends to get consumed and riffed on into unfamiliarity when there’s clout to snatch and money to make. A popular Mitski song about longing was turned into an anthem for strawberry animals, completely missing the point, and Saturn Devouring His Son, one of Goya’s final paintings before he died (and a painting he did on the wall of his house, meaning he probably didn’t intend for anyone to actually see and document it before he passed), is sold on mugs and masks. Clothing, unfortunately, gets this treatment worse than most.

See the strawberry dress by designer Lirika Matoshi. A $300 dress (which sometimes arrived with broken zippers and hanging threads, but that’s another issue) got passed around social media alongside remixes of that Mitski song. Some people bought it, some tried to recreate it for cheaper, and the strawberry dress held TikTok’s attention for long enough that it started appearing in anime fanart. That’s pretty rare! Marketing-wise, this was bizarre but ultimately welcomed. People had Pavloved others into liking this dress because it was awfully cute and fairly easy to draw, and everyone else was into it. Make art with it, that art will get likes. Cute Stuff Trends.

A specific item was in the spotlight – not the designer, not the line, just this one item from her, and then when it faded it was barely seen again except for in the art that recirculates every now and again.

This is the essence of a microtrend.

MicroTrends

Microtrend clothing is identifiable by a few different factors:

1) It’s cute and unique – but not so unique it’s shocking

2) It’s reasonably accessible

3) It’s very easy to photograph on almost anyone

4) It has no substitute or ‘dupe’ – only one item will do

5) It comes and goes before outsiders realize it’s ‘in’

Those factors are the difference between ‘trending items’ and ‘microtrends’. I would classify the brief flash of half-open Hawaiian shirts on picture platforms like Pinterest, TikTok, and Tumblr as a trend, not a microtrend. If all of those pictures were of the same shirt, a shirt conveniently available at Target or Amazon, then it would have been a microtrend. Even then, the only point it misses is 4 – if everyone had been seeking out a specific shirt to take pictures in, it would have been a bona fide microtrend. The strawberry dress hits all five – by the time people were working out dupes for it, the item was no longer hot.

Microtrends are all of the issues with fast fashion condensed into singular pieces of clothing. A sweater featuring a hillside with cows grazing on it hit the big time after a TikTok creator wore it for a video. The strawberry dress spawned strawberry button-up shirts that had an even shorter lifespan. Once it’s no longer hot for pictures, and all the buyer bought it for was the pictures, what happens to it? Having it was the trend, and now the trend is over.

Small businesses try their best to keep up with microtrends and make something fashionable that could also be ‘viral’, which is a tip taken from fast fashion’s vice grip on social media. I don’t blame small creators for it, because they often do their best to keep things clean and ethical in their production (not all do, but many try). I do blame the big companies who are trying to spark viral want for specific items.

Wanting Clothing? Or Wanting What the Clothing Represents?

Chasing fashion has always been exhausting, but now it’s even worse: audiences expect their favorite style influencers to have an item and showcase it – not too late, and not in a way that’s obviously ethically questionable.

While clothes are often props for influencers, microtrends and fast fashion items take it to the extreme. Clothing items are expected to set a scene – that strawberry dress was always out and about; people were twirling in it and frolicking in grass fields. It was a prop for influencers, something especially appealing in color and composition for photos. It wasn’t a very practical item to just… wear. You don’t exist in a $300 dress; you wear it, take pictures, and then hang it back up.

The same went for the half-open Hawaiian shirts – while the style was very flattering, it could also turn very revealing if worn out and about the wrong way. Just like the stuff on those fast fashion sites, it looked good in photos, but the reality of wearing a shirt like that is that you don’t – you wear it with a few more buttons done up, not the way the model or the influencers have it on.

The dress, the shirt, the sweater does not exist outside of the scene. These things were being sold by the scene, the same way advertisements try to sell you a lifestyle. The difference this time is that influencers did the marketing for the item themselves instead of waiting to be sponsored for it, because it was so appealing as a prop.

The dress was a symbol of whimsy, bright pink and red and not casual at all. These weird Fashion Nova items are in the same camp – whimsical, flirty, and not casual at all. Their total unwearability is only a problem when you consider what these microtrends are outside of the internet.

Digital Clothing

While some see it as the next logical slip into NFT territory and a slow descent into a digital Emperor’s New Clothes, others are excited by the possibility. Nobody owns the idea, after all, so if you can design your own digital clothing, you can wear it.

Much of high fashion is exciting precisely because someone hand-beaded 70,000 Swarovski crystals onto a gown. The excess is what makes the runway. While younger folks with experience in digital art understand how difficult it is to sculpt beads in a program like Blender (or something more proprietary), the old guard often sees digital art as ‘art the computer made’, not ‘art someone made with a computer’. Still, digital clothing keeps people from buying things made with sweatshop labor, and its flexibility means that influencers don’t have to be under size ten to wear trendy, untailored items, so it does have a lot of appeal.

The appearance often also leaves something to be desired. DressX, the latest platform to try its hand at digital clothing, offers one-time-use Photoshopping of clothes onto pictures you submit to them. The effects range from ‘completely believable’ to ‘obviously edited in’. See Safiya Nygaard, a Youtuber who tried the service. The first few days of wearing tame clothing with pretty designs went really well… and then she bought a hat to wear, and the hat was so poorly executed that her fans realized some of the previous items were also digital. Imagine buying an item that looks so bad that people who see your pics of it begin to question the reality of everything else you wore – that’s not what an influencer wants!

Another major downside is that the clothes still cost quite a lot of money for being single-picture-use items. On DressX (as of right now, September of 2021), you only get one picture per purchase. If you want more pictures, you have to pay for them separately. Discovering that you don’t like the pose you struck for the outfit is going to cost money. While trendy and neat, spending $60 every time you want to show off another angle of an item is… not as economical as just buying it, so this only solves the ethical issues with fast fashion and microtrends for the influencers who can afford to go digital. Any innovation has rough spots, however, so only time will tell if this becomes more accessible and better-looking.

Sources: https://www.thelist.com/418037/what-is-a-micro-trend-and-how-can-it-affect-fashion-sustainability/

Auto-Beauty Filters are a Problem

Elizabeth Technology January 10, 2022

Touchups

People have been faking it since the era of portraiture – you’ll notice the royal or rich subjects of paintings rarely have any blemishes on their skin, even though acne, smallpox, and rashes have always been around. Even when certain features had to be depicted, they were often minimized or altered to make the subject happy. As an extreme example, look at Charles II of Spain – a member of the Habsburg line, which had become notorious for marrying within the family instead of marrying out, a common tactic used to retain power within the bloodline.

Unfortunately, genetic conditions resulted. Charles’s physical deformities made his face somewhat difficult to paint – the poor guy had what’s known as ‘the Habsburg chin’, a condition passed down from his parents. You can actually trace who in his family had it (and how severely) via their portraits; the royal painters had to tread carefully between depicting their subject accurately and depicting him without insulting him. They had mirrors, of course – he knew what he looked like – but, just like today, the subject wanted to be remembered for more than his facial deformities and the extensive mistakes of the royal family. Charles didn’t mind being depicted through the rose-tinted lenses of his artists. When paintings are an expensive luxury, the client gets what the client wants.

Our records of what he looked like and how he was described in writing differ somewhat – in theory, you could still recognize him from his portrait, and that was good enough. The same goes for any number of royals. Airbrushing has always existed!

Film

Painters weren’t the only ones who took liberties.

Film was also an expensive luxury. If people from the past seemed unusually clear-skinned, they might not have been – a combination of makeup and film retouching removed blemishes like acne scars and wrinkles from the image. Cystic acne can be genetic, syphilis was uncured, and smallpox survivors were still around, but you’d never guess that the subjects of professional portraits suffered from those conditions too! Editing black-and-white or sepia images isn’t witchcraft: it’s as simple as matching the tone of the person’s skin on the negative, and then painting over the flaw so it’s invisible in the developed image.

So retouching was definitely still a thing – it just wasn’t digital.

When color film hit the market, retouching could still be done, but the process was more difficult as photographers had to compensate for three colors, not just the one. A combination of special dyes and extremely fine brushes on an oversized negative, combined with better makeup, cameras, and specialty lenses (lenses designed to ‘soften’ the image, for example) allowed photographers to make their magazine cover photo flawless. This took time, and it was expensive, however, so retaking the image was often easier than editing out blemishes in post.  

Digital Smoothing

Beyond film, how did you retouch things in the early days of digital filming?

Doing digital work on a person’s face was reserved for magazines, professionals, and hobbyists – not just anyone could pop the SD card into a computer and start removing things. Ironically, the widespread availability of picture-taking devices like digital cameras made the overall quality of photos worse. When professionals took digital pictures, they never showed the client the blurry ones, and thumbs were never over critical parts of the lens during the shot. Digital cameras also had lower stakes – you weren’t wasting film by taking three or four shots of the same thing to be sure you ‘had’ it.

Editing software is limited by the power of the computer that’s attempting to edit the image. More powerful computers can handle larger images, and gradually improving hardware lessened the reliance on film. Businesses and major voices in the photo and film industries switched over when they could, so there is no exact ‘moment’ when digital editing surpassed hand retouching – it happened in steps.

This also meant that film – which editors were familiar with and could process faster than the still-developing editing software – held the upper hand for quite some time. Film can be endlessly rescanned and upscaled; digital images cannot be. The strange grain you see on shows from the 2000s comes from being recorded digitally before the technology was fully mature. That’s just what they looked like, and fixing it would take some pretty intense AI or editing intervention. Meanwhile, films made during that time don’t look old – the clothing, speech, and actors date the recording, not the visual grain. See the difference between a show like Lost and a show like Real Housewives: the decision to film Lost on real 35mm film has ensured it’s not as dated as it could be.

The iPhone, and Early Retouch Apps

The iPhone wasn’t the beginning of selfies – people took plenty when cameras (especially film cameras) were cheap. And the iPhone didn’t start the trend of editing, either, as you can see above. What the iPhone did was merge the two and put them in the hands of laymen. Now, with an ‘app’, anyone can take a pic and retouch it, send it to friends and family, print it, rotate it, crop it, etc., all without expert help.

This is no substitute for professional work (the first iPhone took better pics than many other mobile phones, but worse ones than professional digital cameras), but it isn’t asked to be – we are far beyond the times when pictures were special occasions. You can track how expensive pictures were during a given period by the quality and quantity of the casual ones. Did the person dress nicely for the occasion? Was it taken somewhere special? Are they posed in a way that suggests it wasn’t casual? Are they sitting for the photo, or do they just happen to be sitting when the photo was taken?

Early retouch apps were clumsy and frequently difficult to use subtly. If images from Myspace and early Facebook are any indication, the line tool was about as good as it got for tweens – the phone camera couldn’t compare to the stuff magazines and TV shows were using. The ‘digital smoothing’ available to the average consumer was about as good as a blur filter today, which is not very good. MSPaint was a legitimate option for altering profile pics. It just… all looked sort of bad. But it was passable! It wasn’t ideal, but in an era when people were just beginning to learn about Photoshop, and only experts and hobbyists really had it, editing had to be really brazen before an inexperienced internet stopped reading it as ‘makeup’ or ‘lighting’.

Of course, professional photos still look professional, and airbrushing celebrities has only gotten more intense, but the average user is not trapped by this yet. For every smoothed, poreless face on the cover of a magazine, there are programs on MTV and tabloids showing what celebrities look like without the touch-ups. Celebrities are an ‘other’; you and your friends still look normal.

SnapChat Filters

And here is where we begin to see issues. SnapChat filters became a thing, and started acting like mirrors. This is a bigger problem than it sounds, and here’s why: when you change your haircut, put on or take off glasses, or see yourself in pictures, you look a little alien to yourself, but eventually the change settles in and your internal image adjusts to what your eyes are seeing in the mirror. What if you have two mirrors, and one isn’t telling the whole truth? Which image does your brain adapt to? According to research, it’s the one that exaggerates the features you like about yourself, not the honest one.

Snapchat’s filters almost universally slim the face and lighten it up a little, too, even under ‘goofy’ filters like the animal ear ones. Other versions don’t even bother with the pretense of animal ears, they just slap some butterflies on and call it a beauty filter. The end result is a face that may be perceived as more attractive than a plain selfie. This is a problem for a couple of reasons! Assuming a whiter, thinner person is always better than the default image has troubling implications, and while this could make the pictures more attractive to the user, it does so in a way that changes their idea of their own face so much that they can’t look at the regular mirror without feeling vague dysmorphia.

Because these apps aim at teens, tweens, and twenty-somethings, the issue is magnified by developmental stage. They’re right at the age where they begin to notice how they look (and how others may perceive them). Many people get acne as teens, for example, and the Snapchat filters reduce the appearance of red blotches and uneven spots – spots they will still have to look at everywhere else: mirrors, school photos, club photos, family photos, etc., making special moments more difficult to capture without self-consciousness getting in the way.

Overuse of social media exacerbates the issue, and the baseline for what people really look like is lost. During this time, however, the tech was limited to a select number of apps. It was still possible to avoid it, and the only people really being affected were people who were spending an unhealthy amount of time online anyway. Surely, simply curbing use and being aware of how filters change your face is good enough to combat it, right?

The Rest

It was… until this new generation of apps and phones came out, and all of the visual ones came with some sort of ‘enhancement’ feature. Some users on TikTok report blurring and re-coloring even when no filters are active. The iPhone and many Android devices now come with beauty filters on by default. The new cameras took in so much information that it seemed silly not to try to capitalize on it with AI. All of this on top of the social media apps, and magazines, and retouching already seen everywhere else.

If you want to use a social app, you will see other people (many of whom you may associate with IRL because it’s a social app) using filters, even if you decide not to. You will also find that adding cool effects without altering your face is difficult-to-impossible, because it all comes built in. Filters to change eye color? Filters to add fire or ice effects? Filters that make the image black and white, or sepia? All of them come with facial smoothing.
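For the curious, here is roughly what that baked-in smoothing amounts to under the hood – an edge-preserving blur plus a small brightness lift. This is only a sketch using OpenCV’s stock filters, not any app’s actual pipeline, and “selfie.jpg” is a placeholder filename:

```python
# A minimal sketch of what built-in "smoothing" does to a selfie.
# Requires OpenCV (pip install opencv-python); "selfie.jpg" is a placeholder.
import cv2

img = cv2.imread("selfie.jpg")

# Bilateral filtering blurs fine skin texture (pores, blotches) while keeping
# strong edges like the jawline and eyes relatively sharp.
smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

# A mild brightness/contrast bump, similar to the "lighten it up a little"
# effect filters tend to apply.
brightened = cv2.convertScaleAbs(smoothed, alpha=1.05, beta=10)

cv2.imwrite("selfie_filtered.jpg", brightened)
```

The bilateral filter is the key trick: it averages away texture while leaving strong edges alone, which is why filtered faces look waxy rather than simply blurry.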

If looking up to absurdly skinny and unrealistically ripped folks causes body issues, imagine what auto-smoothing is doing. “You could look like this,” these auto-apps say, “but you don’t really. So keep using us, because we’re the only place you look right.” It is a difficult world to navigate. The worst part is that many of these platforms know this, but their solutions and damage-control attempts can’t come with advice to stop using the platform. The best TikTok does is recommend breaks; Snapchat, with its Streaks feature, doesn’t encourage you to stop at all; and Instagram isn’t any better.

Hidden and unremovable beauty filters are posing a bigger threat than their users realize – once you see yourself in the black mirror, the silver one seems inadequate.

Sources: https://www.pbs.org/newshour/nation/making-people-aware-of-their-implicit-biases-doesnt-usually-change-minds-but-heres-what-does-work

https://medium.com/invisible-illness/social-media-filters-adversely-affect-our-mental-health-3eec79db6383

https://www.womenshealthmag.com/beauty/a33264141/face-filters-mental-health-effect/ (the gif at the top of the page shows how even filters not marketed for ‘smoothing’ or ‘beauty’ smooth features)

https://www.mentalfloss.com/article/83262/how-photo-retouching-worked-photoshop

https://fixthephoto.com/blog/retouch-tips/history-of-photo-retouching.html

https://www.acc.edu.au/blog/social-media-low-self-esteem/

Bad AIs Eat Their Own Data

Elizabeth Technology December 10, 2021

Poorly optimized algorithmic content is frustrating for users, in more ways than one.

Ads (For Items)

It’s a new, somewhat dystopian warning: look for gifts in Incognito Mode so the ads don’t give away what you were looking at. Unfortunately, in a world run by websites that want you to make an account for your purchase, Incognito Mode is less helpful than it used to be.

Websites take notice of what you look at and buy, and then they distill that into a measure of intent – are you actually planning to buy item X? How many times did you check it, and how long did you look at the listing? Did you look at other listings like it? Did you message the shop owner, or ask a question? Did you ‘heart’ it? If you did, the site will recommend more of it, proportionally to how much you interacted with said item. But what about gifts, you may ask? How does the algorithm know I’m not buying this nurse-themed cup and this teacher-themed lanyard for myself?

Turns out any website using Google tools to track engagement knows what data to leave out in the long-term – they’re gathering so much data that it’s not really a loss! Given enough time to read your patterns, they’ll be able to figure out you’re done looking and will squirrel that knowledge away for the Gift Finder stuff (or whatever Google does with all of the data it stores on you) later. That’s… creepy, but not necessarily worsening your experience.
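To make the ‘measure of intent’ idea concrete, here is a toy version of the kind of scoring a retail site might do. Every signal name and weight below is made up for illustration – it is not any real vendor’s formula:

```python
# Hypothetical "purchase intent" score built from the engagement signals
# described above. Weights and caps are invented for illustration only.

def intent_score(views: int, seconds_on_page: float, similar_items_viewed: int,
                 messaged_seller: bool, hearted: bool) -> float:
    """Combine engagement signals into a rough 0-100 intent score."""
    score = 0.0
    score += min(views, 10) * 4              # repeat visits matter, but cap them
    score += min(seconds_on_page, 300) / 10  # dwell time, capped at 5 minutes
    score += min(similar_items_viewed, 5) * 3
    score += 15 if messaged_seller else 0
    score += 10 if hearted else 0
    return min(score, 100)

# A shopper who checked the listing three times, lingered, and hearted it
# looks far more "serious" than a one-off glance at a gift idea.
print(intent_score(views=3, seconds_on_page=240, similar_items_viewed=4,
                   messaged_seller=False, hearted=True))   # ~58
print(intent_score(views=1, seconds_on_page=30, similar_items_viewed=0,
                   messaged_seller=False, hearted=False))  # ~7
```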

But What About the Ones That Aren’t as Optimized?

What is worsening the user experience is a lack of contextual understanding by other, less developed and less conscientious algorithms. Google Ads was notorious for following you around with an item you’d looked at once, until its own testing showed users didn’t like that – and it was prone to mistakes anyway; companies following Google’s example didn’t always move on when Google did. Target sending out coupons for baby carriers and bottles came across as gauche even when it was right – you hope nothing bad ever happens, but the first trimester can be very scary, which is why it’s tradition to hold off until the second trimester to start sharing that info. Imagine a company butting in with a mailed coupon and effectively telling your household that you’re pregnant before you get to!

And while ‘haunting’ a user with an item they glanced at is still popular, it can make it tough for users to go back to casual browsing without that item appearing, making a website less appealing to casually visit. For example, Etsy – Etsy does not seem to be able to distinguish between items you’d buy once, like musical instruments or coffee tables, and items you’d buy over and over, like soap and other consumables. As a result, if you buy an instrument off Etsy, you don’t necessarily get ads for items related to that instrument – you just get ads for more instruments. Take these screenshots of my Etsy front page:

This was immediately after I bought an instrument from the shop OrientalMusic, and if this were candles or snacks or something, showing me more stuff from the same vendor would be reasonable – as it is, I can’t window-shop for stuff Etsy thinks I might like, because all it thinks I might like right now are more instruments.
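The missing ingredient is a simple flag on the item: is this something people re-buy, or not? Here is a sketch of what that check could look like – the categories, the “repurchasable” flag, and the catalog entries are all invented for the example:

```python
# Hypothetical distinction a recommender could make between one-time
# purchases and consumables. Data and category names are made up.

REPURCHASABLE = {"soap": True, "candles": True, "snacks": True,
                 "instruments": False, "coffee tables": False}

def recommend_after_purchase(purchased_category: str, catalog: list[dict]) -> list[dict]:
    """Recommend related items, but stop pushing the category itself
    once the buyer has made a one-time purchase from it."""
    if REPURCHASABLE.get(purchased_category, False):
        # Consumables: more of the same is fine.
        return [item for item in catalog if item["category"] == purchased_category]
    # One-time purchases: suggest accessories and related goods instead.
    return [item for item in catalog
            if item["category"] != purchased_category
            and purchased_category in item.get("related_to", [])]

catalog = [
    {"name": "replacement strings", "category": "accessories", "related_to": ["instruments"]},
    {"name": "another oud",         "category": "instruments", "related_to": []},
]
print(recommend_after_purchase("instruments", catalog))  # strings, not a second oud
```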

“Shuffle” and Spotify

Spotify allows its users to make playlists of songs, but it also generates separate playlists for the user. “Discover Daily” and “Discover Weekly” are designed to show the user new (or new-to-them) music that they might like. “Release Radar” aims to surface new releases from artists you already listen to. And then there’s the “On Repeat” playlist, which is meant to play the songs you’ve heard most often.

The obvious issue with that: if you’re a free listener, Spotify largely decides which songs you’ve heard most often. If you’re a mobile listener on the free plan, you don’t have the option to turn off shuffle on the playlists you make, so the algorithm determining what song you’ll hear next is also ultimately deciding the On Repeat playlist – not you. The other playlists then learn that you like those same songs more, and Spotify’s algorithms scramble to provide recommendations based on the songs you like the most… or rather, the songs it thinks you like the most, which aren’t necessarily the songs you put in the playlist but the songs you actually heard – which Spotify decided.

Effectively, Spotify is feeding itself its own data, not yours!

Even worse, the shuffle function isn’t truly random – it’s run on an algorithm too. True randomness would be a saving grace for “On Repeat”: if a song you listen to often sits in multiple playlists, statistically it should pop up more than songs you only have in one. But it isn’t actually random – that, too, is decided by an algorithm. If you’re getting the same three or four songs every time you start a playlist, and the same handful most of the time afterwards, even with plenty of other songs in the list, that’s not a coincidence.
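You can watch this feedback loop happen with a toy simulation: a “shuffle” that weights songs by their past play counts quickly concentrates on a handful of tracks, and the play counts it produces then look like preference. This is only an illustration of the mechanism, not Spotify’s actual code:

```python
# Toy rich-get-richer simulation of a play-count-weighted "shuffle".
import random
from collections import Counter

songs = [f"song_{i}" for i in range(20)]
plays = Counter({s: 1 for s in songs})  # everyone starts with one play

def weighted_shuffle_pick() -> str:
    # More plays -> more likely to be picked next. That's the loop.
    return random.choices(songs, weights=[plays[s] for s in songs], k=1)[0]

for _ in range(2000):
    plays[weighted_shuffle_pick()] += 1

print(plays.most_common(5))      # a handful of songs dominate "On Repeat"
print(plays.most_common()[-5:])  # the rest barely got heard at all
```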

OneZero says that Spotify divides its recommendation functions into exploit and explore. When it’s exploiting, it’s easily tricked into a feedback loop of the same music you hear all the time. Explore is in the same boat, but it uses other people’s data to suggest songs that “listeners of X” liked – leading to the same conclusions every time you open a Discover playlist. If you didn’t like those songs last time, it doesn’t care – it’s recommending them again because listeners of X liked them, and you count as a listener of X because the algorithm kept putting X first in shuffle. Wired.com says it can get so stuck on what it thinks you want that trying to break out and get new recommendations in your Discover playlists is better done on a fresh account. Yikes.

Youtube Recommended

Youtube’s recommended page is usually pretty good at picking up what you’d probably want to watch… as long as it has some history about you first, and as long as you don’t stray too far from what you normally consume. Countless Youtubers have filmed themselves opening Youtube in an incognito window to show how few videos it takes to reach some crazy conspiracy theory – turns out the Flat Earth is never more than five or ten clicks away! Some noted that new accounts with no other data would get funneled into a rabbit hole once Youtube had the slightest smidge of data about them – and when conspiracy theory videos have high engagement (i.e. lots of comments arguing) and enough run time for ad breaks, they’re treated as above-average content. Wonder why Youtube puts those little context bars below videos on sensitive topics now? It was forced to reckon with the algorithm’s tendency to feed misinformation to newcomers, nudging people who ‘did their own research’ right into believing the Earth was flat and lizard people were real.
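Here is the shape of the problem in miniature: if a recommender scores videos purely on engagement, a video with thousands of angry comments and a long runtime outranks everything else, no matter what it’s actually saying. The data and scoring formula below are invented for illustration:

```python
# Hypothetical engagement-only ranking. Arguing in the comments and long
# watch times look exactly like "quality" to a metric that can't read content.

videos = [
    {"title": "10-minute gardening tips",   "comments": 40,   "avg_minutes_watched": 6},
    {"title": "FLAT EARTH: what THEY hide", "comments": 3200, "avg_minutes_watched": 22},
    {"title": "Local news recap",           "comments": 15,   "avg_minutes_watched": 3},
]

def engagement_score(video: dict) -> float:
    # Comments (even angry ones) and watch time both count as engagement.
    return video["comments"] * 0.5 + video["avg_minutes_watched"] * 60

for video in sorted(videos, key=engagement_score, reverse=True):
    print(round(engagement_score(video)), video["title"])
```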

Sources:

https://onezero.medium.com/how-spotifys-algorithm-knows-exactly-what-you-want-to-listen-to-4b6991462c5c

https://www.wired.co.uk/article/spotify-feedback-loop-new-music

Deepfakes: Should You be Concerned?

Elizabeth Uncategorized October 22, 2021

You might have seen those videos of “Tom Cruise” on TikTok, or maybe you saw someone’s face superimposed onto Superman. Deepfakes are getting better by the day!

Deepfake Software

Deepfakes are a species of visual edit that uses pictures and video, combined with AI, to create something new. The AI uses a pre-existing video and a library of photos to replace one person’s likeness with another. If you have the pictures for it, you could deepfake your face onto Chris Hemsworth’s body, and other such shenanigans. And deepfakes aren’t just for videos – they can be used to create realistic still images as well. Where Photoshop relies on a human’s touch to make an edit believable, deepfake tech can create a realistic still mostly by itself, given the tools.

That’s the catch: not all deepfake AI has all the tools, so some deepfakes are noticeably worse than others, for a couple of reasons. The tech is still pretty new, so most programs are still ‘learning’ what is and isn’t possible for a human face. The second issue is the quality of the images fed to the model – if the images don’t give it enough information to accurately recreate angles, it’s going to have to get creative. That’s a bad thing when you’re trying to make a believable video.
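For anyone curious how the ‘learning a face’ part works, the classic open-source face-swap approach (the DeepFaceLab/faceswap family) trains one shared encoder on both people and a separate decoder per person, then swaps decoders at generation time. Here is a heavily simplified PyTorch sketch – toy layer sizes, no training loop, purely to show the shape of the idea rather than any specific tool’s architecture:

```python
# Simplified sketch of the shared-encoder / two-decoder face-swap setup.
import torch
import torch.nn as nn

def make_encoder() -> nn.Module:
    # Learns a generic "how faces work" representation from both people.
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU())

def make_decoder() -> nn.Module:
    # Each decoder learns to rebuild one specific person's face.
    return nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())

shared_encoder = make_encoder()
decoder_a = make_decoder()   # trained only on person A's photos
decoder_b = make_decoder()   # trained only on person B's photos

# Training: each decoder reconstructs its own person from the shared latent
# code (loss and optimizer omitted for brevity).
face_a = torch.rand(1, 3, 64, 64)            # stand-in for a frame of person A
latent = shared_encoder(face_a)
reconstruction_a = decoder_a(latent)

# The swap: encode person A's frame, decode it with person B's decoder,
# and person A's expression comes out wearing person B's face.
swapped = decoder_b(shared_encoder(face_a)).reshape(1, 3, 64, 64)
```

This is also why data quality matters so much: each decoder can only reproduce angles and expressions it has actually seen in its training photos.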

Celebrities Vs. The Average Joe

Deepfakes rely on data, so if the software doesn’t have much data to work with, the resulting deepfake looks… uncanny. Even really, really good deepfakes right now, with a ton of data, look a little uncanny. Picture the last movie you saw a dead celebrity in – you probably realized something was wrong even if you didn’t know they were dead, like Grand Moff Tarkin in Rogue One. The actor had his whole head scanned at high quality before he died, and he still looked a little strange on-screen. It was little things, like his neck not moving perfectly with his mouth. Young Carrie Fisher at the very end of Rogue One had a noticeable grain due to the source images, and that same young Carrie Fisher in The Rise of Skywalker looked strangely plastic even in low, indirect light.

The average person doesn’t have enough high-quality video or images from even one angle for deepfake AI to make something believable. It only takes a split-second of slightly misplaced nose or mouth for someone to get creeped out by whatever you’re making and identify it as fake. The uncanny valley is instinctual, but it’s reliable! It takes serious work to overcome that instinct. If Hollywood can’t manage it, is there anything to worry about for the average person? Well… yes. Because the average person has access to it, and the tech is always getting better.

Controlling it

How do you control it? Big stars have to deal with their image being stolen all the time. If anyone’s prepared, it’s the celebs, who have to fight magazines and movies alike to be represented like they want to be. But what about the average folks when it starts to bleed downwards? Minor politicians, or competition for the cheerleading squad? Or explicit images made specifically to harm someone’s image, made by an amateur with juuust enough knowledge to make something that, at first glance, looks believable.

How do you account for that?

Let’s look at the TikTok Tom Cruise account. The creator has gone out of his way to make it clear that Tom Cruise’s likeness there is not real. Even so, the videos are jarringly realistic. He used a Tom Cruise impersonator as the ‘base’ for the deepfake, and the end result barely hits the uncanny valley at all – he just looks a little stiff. That account’s videos are still up, because it’s obviously not really Tom Cruise, no matter how realistic it is.

And then there’s an account putting Charli D’Amelio’s face on their own body in an attempt to impersonate her. TikTok is removing those videos because it’s not obvious that it isn’t Charli, even though the quality is worse. Someone who watches it more than once is going to recognize that it’s not her, but it’s still getting pulled, because it’s not being clear enough. It crosses a line.

There’s also a distinction between the two in intent: ‘Tom Cruise’ is showcasing his technical skill, while the Charli impersonator is trying to be Charli.

Legally, copyright law does have some precedent from the music and art world: if an impersonator is so close in performance to the original that an average person can’t distinguish it from the real thing, they’re violating copyright. Singers use this when covers get a little too close to the original. See Drake songs, for instance: the only covers you’ll find on Youtube are by female singers or men who sound totally different, because he’s very strict about his copyright. When the audience can’t tell them apart, they’re pulled.

The problem is enforcement. The average person is not going to have the time or resources to hunt down impostors and report them all. Charli is famous on TikTok, but if she weren’t, TikTok mods likely wouldn’t actively hunt down these impersonator accounts for her. If someone really, really hated an obscure user, they’d be able to overpower their reporting efforts with fake content, and that fake content only has to be believable enough for someone to scroll past it and think “wow, I can’t believe they’d do that”.

The average person is not equipped to scrutinize every single little bit of media that comes their way; it’s exhausting and unrealistic to expect that of them. That’s how disinformation campaigns work. If the deepfake is believable enough, and the original isn’t aware of it, that deepfake may as well be fact for everyone who sees it and doesn’t realize it’s fake.

Implications

If you’re online a lot, you might have heard of that new Mountain Dew ad featuring Bob Ross’s likeness. This was… weird, to a lot of people, and for good reason. Using a person’s likeness to sell something has been a matter of debate ever since money became mainstream – you’d probably sell more spices if you said the king bought from you back in BC times. But normally the person is able to call them out for it. Now, with deepfakes, you can make celebrities say anything post-mortem, and nobody but the estate will be able to challenge it.

And even if the estate gives permission, how specific do you have to be about that image? Actors struggle with paparazzi images even today – Daniel Radcliffe famously wore the same shirts and pants for weeks while filming a movie so the paparazzi’s images of him were worthless. Imagine having the ability to put Daniel Radcliffe in any pose or outfit you wanted for the front of a magazine. The person wouldn’t make unflattering faces for your pictures before they died? Well. Now they will.

Presumably Bob Ross’s estate allowed the use of his image, but in the same way we don’t take organs from dead bodies without the consent of the deceased, maybe we shouldn’t allow selling dead loved ones’ images for advertising purposes without their consent beforehand – especially now, when it’s this easy to deceive people with the tech.

Is There Good?  

And then there’s the other side of the spectrum, where deepfakes can be used to bring people back to their glory days, or color black-and-white movies. They can be used to de-age actors, as seen in Captain Marvel, Star Wars, etc. Samuel L. Jackson was de-aged by decades thanks to the tech, and Mark Hamill appeared as he did forty years ago for another Star Wars series.

Deepfakes, given the tools, do a better job of recreating someone’s face than human-controlled CGI ever could. They could have been used to make Henry Cavill’s Superman mustache-less in the Justice League reshoots, instead of whatever was done that made his face look unsettling. He couldn’t shave his ‘stache because he was filming Mission: Impossible at the same time, so the only way out was either prosthetic facial hair or CGI-ing over it. They picked the CGI. People noticed. Deepfake tech might have made his mouth’s movement a little less uncanny.

Deepfake tech could be used to disguise facial injuries, like Mark Hamill suffered during the original Star Wars trilogy, or create alien races without the heavy prosthetics traditionally used or sweatshop CGI-studio labor. They could make dubbed movies less visually jarring, and line up actors’ mouths with the words they’re supposed to be saying.

Deepfake technology is a very double-edged sword. All the good it could do doesn’t outweigh the bad. It’s dangerous technology, and in a world that increasingly uses the internet to share information, disinformation is a powerful pollutant.

Sources:

https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them