The ‘Slap a Teacher’ Trend Wasn’t Real

Elizabeth Technology May 3, 2022

Facebook paid for those ads and it’s weird.

TikTok Trends

Most of the TikTok trends that reach 'the outer world' are pretty harmless. Some have the potential to be dangerous – the one I remember causing the most problems was the 'Kiki' challenge set to Drake's 'In My Feelings', where the participant hops out and dances alongside a moving car, and not everyone figured out that someone else was supposed to be driving the car before attempting it themselves – but most are inoffensive because most people are normal humans who don't want to be annoying. If they don't realize that what they're about to do will be annoying, their comment section will usually call them out on it. If their commenters don't realize or don't care, then the content creator may get a twenty-minute YouTube callout video from an outsider – a treatment reserved for the most annoying individuals on the platform.

This bizarre chain of watchers means that generally, people want to be nice, or at least harmlessly annoying. You don't hit people. You don't shove things in their face. You can startle them, but not too badly, and only if you know them from class or something. You don't do things like grab onto someone's cart and take it from them, or walk up to someone and start moaning in front of them while making eye contact. You don't disrespect teachers unless they disrespect you first, and by extension you generally don't have a good excuse to disrupt class. All of these things are not only rude, they're cringe. You're cringy if you don't know you're not supposed to actually be mean. It is cringy to think that you're the main character, that you can disrupt class for 39 other kids. Believe it or not, many teens do have a grasp on empathy and, as a large group, tend to reject things that aren't harmless fun. (Individually and in small groups is a different story.)

Which is why this ‘Slap a Teacher’ thing was fishy to anybody actually on TikTok, outside of the boomer meme circles.

Are Kids Degenerates?

In TikTok's early days, I watched a creator get told off because he'd brought a microwave to his college class, walked up to the front of a lecture hall full of students when the teacher asked him to stop disrupting class, and then told the teacher his new career was being transgender, for some reason (allegedly – he was retelling this story via voiceover on footage from the incident, and he could have said anything that got him kicked out of class). People in the comments started ragging on him immediately, pointing out that other fully grown adults had paid to be there, and his shenanigans were eating into their class time. This is the video I remember as the turning point from Wild-West TikTok into Social Media TikTok – the final transition from 4chan to Twitter. The antics that used to be funny no longer were, and you'd have to find new ones if you didn't want people criticizing you for your mean-spirited jokes. The general consensus was: we want something funny, not shocking! Any idiot can be shocking!

Of course, shock artists tried to stick around, but few of them were coming up with anything new to say about stereotypes – the jokes were old, they got buried, they were briefly resurfaced for TikTok, and then TikTok got bored of them too. This meant any trend that could be boiled down to "look how mad these people got when I inconvenienced them" or "haha, that's politically incorrect but I said it anyway" was doomed to die out from the start.

Facebook Hates TikTok

Facebook haaaaates TikTok. Facebook recently experienced a real decline in users for the first time in its history, and part of that can be attributed to better entertainment sources elsewhere. People don't trust Facebook, or they trust it too much – Facebook insists on taking as much information about a person as it can, and it has a profile on you even if you've never used it. Facebook is itself an ad company, so it does the same thing any third-party ad company does and follows you around the web. Of course, TikTok is harvesting information on its users and adjacent non-users as well, but the effects of this haven't really been seen yet. It takes time to build a database as extensive and far-seeing as Facebook's.

Plus, the era in which people signed up for each site was different. Users felt betrayed when it came out that Facebook was harvesting their data, but all but the most tech-illiterate knew TikTok was going to be consuming their data when they signed up. Facebook was invited into people's homes as a guest, and it's been behaving like it's at a bar. Meanwhile, TikTok was invited to a bar, and it's behaving like it's at a bar. People expect annoying or creepy behavior from TikTok before they decide to sign up – they didn't for Facebook.

Facebook is Actually Pretty Soft on Terrorists

Facebook has had its fair share of issues relating to genocides in other countries – the Rohingya genocide, for example, can be partly blamed on a combination of inaction and deliberate stoking for engagement on Facebook's part. Facebook didn't delete what it should have in time for it to matter, and a literal genocide took place as a result. Is it entirely the company's fault? No. Is enough of it its fault for a lawsuit to get off the ground? Yes. Facebook is also a notorious source of false and dishonest information. Users had become so accustomed to it that Facebook's labeling of posts matching disinformation criteria was called censorship, even though the posts were allowed to stay up, just with the label! An absurdly small number of people were allowed to circulate tons of disinformation, and nothing stood in their way – least of all the company itself.

While TikTok also has disinformation, it has a better automatic filter and a less forgiving algorithm for controversial videos. Shares and comments reward content, but blocking the user, tapping 'show me fewer videos like this', or otherwise expressing displeasure means the video isn't going to get put in front of everyone on the For You Page. Facebook's algorithms are murkier. Users who post, say, Covid misinformation aren't blasted out to the wider userbase the way they would be on Facebook; instead, they're generally corralled into their own echo chamber. Is that good? No, but it's better than nothing. During the BLM protests in 2020, TikTok also moved very slowly to remove videos of the protests, for better or worse, which earned it a lot of favor from its mostly young userbase – videos of police using extreme methods to disperse crowds who were well within their right to protest went far and wide in part thanks to TikTok.
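To make the contrast concrete, here is a minimal, hypothetical sketch of the difference – written in Python with invented signal names and weights, since neither platform publishes its real ranking formula. It only illustrates the idea that a feed which subtracts heavily for negative feedback behaves very differently from one that counts every interaction as a win:

```python
# Hypothetical feed-scoring sketch. All weights and signal names are made up
# for illustration; this is not TikTok's or Facebook's actual code.

def engagement_only_score(video):
    """Count every interaction as a win, the way a pure engagement ranker
    would - angry comments boost the score as much as delighted ones."""
    return 1.0 * video["shares"] + 0.5 * video["comments"] + 0.1 * video["likes"]

def feedback_aware_score(video):
    """Same positive signals, but explicit negative feedback ('not interested',
    blocks) pulls the score down hard, so a controversial video gets corralled
    instead of amplified."""
    score = engagement_only_score(video)
    score -= 5.0 * video["not_interested"] + 10.0 * video["blocks"]
    return max(score, 0.0)  # never promote below-zero content to the main feed

controversial = {"shares": 200, "comments": 800, "likes": 300,
                 "not_interested": 150, "blocks": 40}

print(engagement_only_score(controversial))  # high: rage-engagement counts as success
print(feedback_aware_score(controversial))   # zero: displeasure throttles its reach
```

Under those made-up weights, the same controversial video scores highly on the engagement-only ranker and gets zeroed out by the feedback-aware one – which is roughly the behavior described above.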

As a result, the younger demographics using social media had a better perception of TikTok than they did of Facebook, and combined with that first loss of users in some time, Facebook turned to an advertising firm to try and 'fix' this. Specifically, a right-leaning one – or at least one that didn't care who its clientele were as long as they could pay the fee. That indifference led to a right-leaning bias: left-leaning ad campaigns sought out different firms, so this firm ended up with a mostly right-leaning portfolio. Facebook may have chosen this firm for many reasons, including a lax attitude toward content (this 'Slap a Teacher' trend is entirely fake news), but we don't know for sure that it was because the firm leaned right – the firm's existing contacts with right-leaning news outlets may have been all Facebook wanted out of it.

Slap A Teacher

The firm Facebook hired seeded reactionary channels with a fake story designed to rile up their most reactionary viewers. Allegedly, slapping a teacher was the hot new trend on TikTok, and this firm was going to make sure everyone watching TV knew it. The firm focused most of its efforts on the farthest-right channels available in the U.S., and the demographics worked out such that TikTok's largely young audience would never see (and therefore never deny) the marketed trend, while the older adults watching those right-leaning channels had never been on TikTok (or had been, but got funneled into the previously mentioned echo chambers). Neither side would challenge the assertion, so parents 'knew' this was happening while kids had never heard of it. The perfect crime to trick gullible or tech-illiterate reactionaries into believing TikTok was a site for violent children, or that it would turn their children violent.

Of course, this was never a thing, as I said earlier.

Since a few months after the peak of the BLM protests, anything depicting violence (even red paint in the wrong spots can trigger the automated filters) is either removed or tagged with a banner at the bottom. Footage of real violence can't be shared on TikTok anymore. A video of any single incident wouldn't have been shown to many people, and even if it somehow escaped the filters, the user would likely have been challenged on it via the watching system described at the very start of this article. The campaign never made sense if you understood anything about the website, and it's unfortunate that that very ignorance was used to Facebook's advantage!

Fake news that amounts to advertising is a new phenomenon on cable TV – it seems disinformation deployed to push consumers toward one product over another has come for us all, with Facebook's (now Meta's) funding.

Sources:

https://www.wavy.com/news/national/facebook-paid-gop-firm-to-run-campaign-against-tiktok-report/

Facebook is Horrible for Information

Elizabeth Uncategorized November 8, 2021

The 2016 and 2020 elections were some of the tensest elections Americans had ever faced, thanks, in part, to Facebook and Twitter.

Why is nobody defending Facebook, even as they begrudgingly defend Twitter?

1) Facebook Knows it Makes You Feel Bad – and it’s Okay with That

Studies show repeatedly that social media makes many people a little unhappier than they were before they opened it – or that a control group that didn’t go to social media sites was happier than the one that did.

If you feel like the internet got negative, and it used to be much easier to use without feeling like garbage after, you're not necessarily wrong. The rise of engagement tracking means that different levers are being pulled to keep your attention. People will spend more time replying to comments if they're emotionally invested in the conversation – even the worst, most inflammatory, keyboard-smashing arguments are 'good' for algorithmically driven social media sites. Additionally, sad or alarming news generates more clicks than good news, and sensationalism has always sold more papers. It might not have been any website's original intent, but because people are drawn to this stuff, it gets weighted more heavily by algorithms that rely on clicks to determine ad value. More clicks mean more money, so the algorithm surfaces more of it. Facebook in particular is worse off for this – even if you go out of your way to block or unfollow things that make you unhappy, your network can still lead to exposure by proxy.
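As a toy illustration of why that weighting drifts toward upsetting content, here is a hypothetical click-maximizing ranker in Python. Every headline, click rate, and dollar figure is invented; it is a sketch of the incentive, not any site's real algorithm:

```python
# Hypothetical example: if the only objective is expected clicks (and therefore
# expected ad revenue), alarming posts win the ranking even though nobody
# explicitly chose to favor them. All figures below are invented.

posts = [
    {"headline": "Local park gets new benches",       "click_rate": 0.02},
    {"headline": "Study finds modest health benefit", "click_rate": 0.04},
    {"headline": "OUTRAGE: you won't believe this",   "click_rate": 0.11},
]

AD_REVENUE_PER_CLICK = 0.03  # dollars per click, made up for the example

def expected_revenue(post, impressions=10_000):
    """Expected ad revenue if the post is shown to `impressions` users."""
    return post["click_rate"] * impressions * AD_REVENUE_PER_CLICK

# Rank purely by expected revenue - the inflammatory post floats to the top.
for post in sorted(posts, key=expected_revenue, reverse=True):
    print(f'{post["headline"]}: ${expected_revenue(post):.2f}')
```

Nothing in that sketch "wants" outrage; the sorting just rewards whatever gets clicked most, which is the dynamic the paragraph above describes.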

When social media only tracked account usage, and not the person using the account, people were encouraged to stick around in healthier ways. The content everywhere but news sites and intentionally dark corners was generally a little less stressful, and users weren’t exposed to it nonstop. MySpace had much less data on you, and image-sharing sites and forums were often civil by agreement even if they were anonymous. Not everything was peachy and friendly, and moderators still had their work cut out for them, but you weren’t deliberately being shown things that upset you by an algorithm.

2) Facebook’s Survival Depends on its Opacity to Outsiders

Facebook cannot let anyone see under the hood. If anyone could, they’d know how Cambridge Analytica happened, and how disinformation is still happening, and – I suspect – how little Facebook is doing to combat it. This opacity permeates every level of Facebook’s organization, so much so that employees feared that they were being listened to outside of work by their company-issued phones. It’s one thing for a tech company to say ‘no pictures, no phones inside the facility’ – it’s another entirely for the employees to fear discussing their employment at home because they don’t trust that spyware stayed at work. Reports of burner phones made the rounds a little while ago for exactly this reason.

It's also difficult to tell exactly what Facebook is doing with valuable personal data – the Cambridge Analytica hearings revealed some things, but left much in the dark. Zuckerberg navigated the questions so carefully and deliberately that memes about how robotic he sounded circulated for weeks. Facebook has trapped itself between a rock and a hard place – continue to be evasive and opaque, or reveal that it's been behaving almost maliciously to keep growing.

Facebook knows this is also a bad look, so Zuckerberg planned to make the company look good to consumers via Project Amplify: effectively positive propaganda disguised as ordinary news in the News Feed. Project Amplify would mix positive articles about Facebook, some of them written by the company itself, in with the regular newsfeed.

The recent whistleblowing by Frances Haugen shows the scale of the issue – more in-depth articles are already out there, and here's a link to one: https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/. This is really an article all by itself, so I won't double the length of this listicle off of this one point – but it is a good read.

3) Facebook still has Troll Farms – And Won’t Get Rid Of Them

Twitter did admit it had bots – its mass-banning of them was a PR win. It was a relief. As of September 29, 2021, YouTube has finally taken action to remove deliberately incorrect information from its platform – not just demonetize it, which would allow these alternate-reality peddlers to continue making content, and potentially money, off of Patreon-like sites. This leaves Facebook as the last member of the triad, but Facebook can't seem to figure out how to do the same on its own platform.

The fastest way to share information is through images and memes. They require no copy-pasting, they're eye-catching and usually short, and basic bots and text crawlers can't read them. Any 'meme'-style infographic is immediately suspect now, because Facebook has allowed troll farms to pump out misinformation-laden memes at high speed. Even if a meme links back to a respectable source, the consumer now has to verify that the source actually says what the meme claims it does – otherwise, they can still be exposed to disinformation. 'It's your fault for not checking' is a common rebuttal. 'You can't trust everything you see online' is another.

And yet we've seen time and time again that people do trust these troll farms over doctors and scientists. They cannot possibly check every single picture they see. If it looks good enough, people will believe it, and the line between 'skewed' and 'false' has never been so tenuous. A comfortable lie with one big, bad enemy is obviously easier to stomach than the reality of the situation, and it's irresponsible for Facebook to keep playing the 'you should have known it was fake' card when the troll farms are manipulative enough, and spread out enough, to build a narrative convincing enough to override reality. Yes, people shouldn't trust everything they see, but they want to trust something. They want something to be stable.

By allowing troll farms – and the dozen or so accounts responsible for roughly two-thirds of the anti-vaccine misinformation on the site – to continue posting, Facebook is tacitly agreeing that it's okay to be so wrong that people die. Even worse – it agreed this was wrong! It planned to post good information in place of the material it was removing. In the first push, before the pandemic was widely politicized, Facebook removed an enormous number of posts containing mis- and disinformation from the platform, only to slow down – for some reason – as it became clear that its options were to take a stand and risk ad revenue, or stay the course and guarantee that certain viewers would keep clinging to the platform later on.

It's one thing to try and protect free speech – every platform is struggling to judge where deliberate, damaging disinformation ends and personal opinion begins. It's another thing entirely to let troll farms keep pumping out disinformation after it becomes clear they're causing measurable harm to the people reposting their memes, especially when the posts are still earning money for the farm behind them. Even YouTube didn't let its disinformation peddlers keep making money off of it.

4) Facebook is Trying to Tie Itself To Everything Else to Make Itself Unremovable and Unavoidable

Oculus Rift. Instagram. Messaging apps. Facebook itself. And now, a metaverse. Facebook knows it would die in a much more undignified manner than MySpace did if it ever stopped trying to grow, and so it is steadily trying to get its foot in the door of as many other industries as it can. It knows it can squeeze a few more drops of data out of its most loyal customers if it just keeps growing. It knows it can earn a few more percentage points of market share if cool, cutting-edge tech requires a Facebook account. Amazon, Nestlé, and other ethically debatable companies are doing the same thing. Once a company is big enough, consumers have to put in special effort to avoid its products, and many won't know how or won't bother.

Branching out in itself is not the problem: Colgate made TV dinners for a while, and Barnes & Noble sells board games and stationery alongside the books. The problem is in building an interconnected network that can stomp out competition, harvest data from multiple angles, and then use its own funding to weather storms that the remaining competition can't.

Sources:

https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election/

https://www.bbc.com/future/article/20140728-why-is-all-the-news-bad

https://www.latimes.com/science/story/2019-09-05/why-people-respond-to-negative-news

https://www.businessinsider.com/facebook-employees-paranoid-burner-phones-2018-12

https://www.consumerreports.org/social-media/facebook-approved-ads-with-coronavirus-misinformation/

https://www.theguardian.com/world/2021/jul/17/covid-misinformation-conspiracy-theories-ccdh-report

https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/