Posted on May 19, 2022 in Technology

TikTok’s Censorship Makes It Harder to Convey Ideas

“Unaliving” and Other Such Words

TikTok started out pretty rough when it was introduced to the US. Much like the old internet of yore, it was possible to stumble across something disturbing, graphic, or violent just by using the app. However, admission to the Apple App Store required a stringent series of reviews, and the app began censoring. Users, too, began self-censoring on pain of being blocked or simply showered with hate comments. Eventually, the TikTok environment adapted to become more like the pool of the general internet, plus some extra chlorine to stay in Apple’s good graces.

However… this has had some pretty bizarre side effects. The changing of words, for example! TikTok didn’t want to repeat Tumblr’s early mistake of accidentally encouraging the negative mental-health boards common to dark corners of the internet. But moderating such a large userbase is incredibly difficult, so TikTok relied on auto-shadowbanning certain words instead (shadowbanning means banning someone or something without telling them they’ve been banned), even when those words didn’t technically violate guidelines. Two tiers of ‘bad’ words existed, in essence: words you couldn’t say at all, and words you could say but that would keep you off the FYP (For You page) algorithm. However, not every discussion featuring a banned word was encouraging the thing it named – ‘suicide awareness’ has the word ‘suicide’ in it, but the bot couldn’t tell the difference, and the video would be shadowbanned from the algorithm’s front-page queue anyway, with no way to appeal.
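The two-tier, context-blind filtering described above can be sketched in a few lines. To be clear, TikTok’s actual moderation system is not public; the word lists, tier names, and logic below are all invented for illustration only.

```python
# Hypothetical sketch of a naive two-tier keyword filter, the kind of
# context-blind matching described above. TikTok's real system is not
# public; these word lists and rules are invented for illustration.

# Tier 1: words that (hypothetically) get a video removed outright.
BANNED_OUTRIGHT = {"kys"}
# Tier 2: words that merely keep a video off the FYP recommendations.
HIDDEN_FROM_FYP = {"suicide", "murder"}

def moderate(caption: str) -> str:
    words = caption.lower().split()
    if any(w in BANNED_OUTRIGHT for w in words):
        return "removed"
    if any(w in HIDDEN_FROM_FYP for w in words):
        # No notification, no appeal: the uploader never learns the
        # video was quietly excluded -- a "shadowban".
        return "shadowbanned"
    return "visible"

# The filter can't tell awareness content from harmful content:
print(moderate("suicide awareness resources"))   # shadowbanned
# ...while a swapped word sails straight through:
print(moderate("unalive awareness resources"))   # visible
```

A filter this literal punishes awareness content and misses every substitute word, which is exactly the gap ‘unaliving’ grew into.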

Instead, users began swapping words. At first it was “Sewer Slide”, and then the more general “Unaliving” came in to replace killing, murder, suicide, and the rest; every word involving loss of life simply became ‘unalive’. And it worked. Where a metaphor might have been inappropriate, the substitute word got the message through.

And Then It Got Cutesy

If you weren’t on Tumblr or Reddit during the ‘Heckin’ Pupper’ phase, you may be missing some context for how annoying this got – it was a way of baby-talking things no matter what they were, serious or not. One of the Heckin’ subreddits was Heckin’ Chonkers, a place for owners to post pictures of their obese pets. Many people understood this was unhealthy and were posting pictures of their rescues before they started their diets, but an alarmingly large number of people saw that subreddit and thought ‘Wow! See, my pet’s just a chonker, it’s okay!’ when it wasn’t. But instead of having this serious conversation in a serious way, commenters had to fight through an ocean of ‘he’s just heckin’ chubby, lol!’ to get the original poster to understand that this was a problem.

Mixing a joke into something that’s actually serious can really screw up people’s perception of it.

Back to ‘unaliving’. Consider replacing ‘murder’ with ‘unaliving’ or any other euphemism for what that means. When describing a murder, do you want the words said with a wink and a nudge? It didn’t start like that – it started as a way to describe crimes, threats, and real cases without losing too much of a case’s integrity to TikTok’s censorship. But as more people piled in, phrases that were still allowed began getting replaced with ‘unaliving’ too; ‘passed away’ was becoming ‘unalived’. Even worse, some of the people doing the replacing thought it was funny – it was no longer a way to evade a ban and share information, but a way to share information while signaling in-group membership to other TikTok true-crimers. It depersonalized the issue for the people reading it out. Suddenly you’re not describing a murder, you’re describing an ‘unaliving’. A ‘nighty night’. A ‘fishy sleepover’. A stranger’s death becomes entertainment for listeners – a real human life reduced to more words on a page, more audio on a website.

Swapping words for cuter ones when not strictly necessary is a cousin problem to oversharing some details of a case while hiding others to make it seem more mysterious – both are ways of fumbling a sensitive subject for likes and laughs. Who’s to say anybody wants to be described as ‘unalived’ when they die?

Others

Other words, including slurs and targeted swears, were also commonly censored… but some words only function as slurs when they’re used maliciously. Additionally, words relating to the LGBTQ+ community that weren’t slurs at all were censored, which forced people who wanted to talk about the community to swap words or censor themselves in odd ways too. The most egregious example was “Lesbian” being written as “Le$bean” in text, which didn’t trigger the algorithm but couldn’t be read correctly by the auto-generated voices, leading people to jokingly pronounce it ‘Le-Dollar-Bean’, the way the computer reader did.
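A single swapped character like the ‘$’ in ‘Le$bean’ defeats exact word matching while producing the ‘Le-Dollar-Bean’ reading once a speech engine spells out the symbol. Here is a minimal sketch, assuming a literal word-list filter and a text-to-speech front end that reads ‘$’ as ‘dollar’ – both are invented stand-ins, not TikTok’s actual systems.

```python
# Hypothetical word-list filter and TTS behavior, invented for illustration.
FILTERED_WORDS = {"lesbian"}

def passes_filter(text: str) -> bool:
    """True if no word in the text exactly matches the filter list."""
    return not any(word in FILTERED_WORDS for word in text.lower().split())

def naive_tts(text: str) -> str:
    """Sketch of a TTS front end that spells out symbols it can't voice."""
    return " ".join(text.replace("$", " dollar ").split())

print(passes_filter("lesbian"))   # False: exact match, video suppressed
print(passes_filter("le$bean"))   # True: one swapped character evades it
print(naive_tts("le$bean"))       # le dollar bean
```

The same mechanics explain most filter-evasion spellings: the matcher compares literal strings, while human readers (and, imperfectly, speech synthesizers) reconstruct the intended word.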

People tried to cash in on this in a way they hadn’t for ‘unaliving’. Natural crowd movements are something you can market so long as you’re ‘chill’ about it, so it’s not necessarily a horrid idea. However, trying to make a meme localized to a group of people accessible to everyone often kills the meme. People outside the community use it wrong, they use it to be mean, they use it to laugh at the people using the meme, not with them, and the Le-Dollar-Bean song soon became cringe because it was spreading to people who were making fun of the singer and the meme itself in bad faith.

It’s not just because it was LGBT, either, although the meme wouldn’t have happened in the first place if TikTok hadn’t treated the word as controversial. For example, the same thing happened to the phrases ‘smol bean’ and ‘cinnamon roll’ on Tumblr, which were ways of describing characters who were innocent and cute. Eventually, people started using them to describe real people, and characters who didn’t fit the description but were conventionally attractive (mostly men). Stickers of mainstream actors surrounded by the phrases were made, even when the label didn’t apply, and then those phrases became cringe too via overexposure.

There’s a political statement to be made about the censorship of gay issues that led to this whole situation. The Le-Dollar-Bean song is not that statement: it began as a brief mark from people who just wanted to say the word, and ended up co-opted by people who trust corporations that put rainbows on shirts and bracelets with one hand while funneling money into anti-LGBT bills with the other. Somebody got a little too serious about the joke and overused it, and now Le-Dollar-Bean is cringe – and the reason it existed has been forgotten in favor of the song that made actually using Le-Dollar-Bean unironically cringe in the first place.

The Ethics of Censoring Your Captions

The goal of any translation should be for the receiver to get the message as directly as possible, with some nuance allowed for things other languages just don’t have. Japanese, for example, doesn’t use sarcasm the way English does, and a sarcastic comment may be taken literally. Similarly, saying something like “I love you” outright during a quiet moment can come across as bizarrely direct, so a Japanese speaker may instead reference a poem or a common phrase as shorthand, which can be translated either literally or figuratively in media. Spanish does something similar: if you watch Spanish soap operas, you may hear te quiero instead of te amo, but both will be translated as “I love you” in the captions (te quiero is literally “I want you”, but is understood as “I love you”). (This triggered a huge debate in the Supernatural fandom when the international dubs of the final episode came out, but that’s another story.)

So, what does this have to do with English captions on English videos?

Creator-generated captions often censor swear words or change what the creator is saying, and that’s not what captions are for! Captions are not the place to hide jokes – they’re an accessibility feature. While hearing viewers may find the dissonance between the captions and the audio funny, deaf and hard-of-hearing viewers who don’t have that extra context may just be confused. If you can’t swear in the captions for fear of censorship, then the conclusion should be ‘I can’t swear in this video’, not ‘I can’t type this swear in the captions’. It’s not ideal, obviously, to have to censor everything, but that’s TikTok’s problem, and you should be complaining to TikTok about it – not handing your deaf audience a cleaner version of the video without their consent.

It’s not all the creators’ fault – some mistakenly believe the app can’t hear them but can crawl the captions, and censor accordingly so their videos can still be found. Others rely on auto-generated captions, which sometimes mishear a word and transcribe it incorrectly. Still, effort should be made to match the captions to the audio as closely as possible. Don’t baby-talk, don’t misuse captions to hide jokes, and don’t intentionally mistranslate!