Posts Tagged

AI

TikTok’s Censorship is Bad for Conveying Ideas

Elizabeth Technology May 19, 2022

“Unaliving” and Other Such Words

TikTok started out pretty rough when it was introduced to the US. Much like the old internet of yore, it was possible to stumble across something pretty disturbing, graphic, or violent just by using the app. However, upon introduction to the Apple app store, which required a stringent series of reviews, the app began censoring. Users, too, began self-censoring upon pain of being blocked or simply showered with hate comments. Eventually, the TikTok environment adapted to become more like the pool of the general internet plus some extra chlorine to stay in Apple’s good graces.

However… this has had some pretty bizarre side effects. The changing of words, for example! TikTok doesn’t want to do what Tumblr did when it first started and accidentally encourage the negative mental-health boards common to dark corners online. However, moderating such a large userbase is incredibly difficult. Instead, TikTok relied on auto-shadowbanning certain words (shadowbanning refers to banning someone or something without alerting them that it’s been banned), even if they didn’t technically violate guidelines. Two tiers of ‘bad’ words existed, in essence: words you couldn’t say at all, and words you could say but that would keep your video off the FYP (For You page) algorithm. However, not every discussion featuring a banned word was encouraging it – ‘suicide awareness’ has the word ‘suicide’ in it, but the bot couldn’t tell the difference, and the video would get shadowbanned from the algorithm’s front-page queue anyway, with no way to appeal it.
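
TikTok has never published how this pipeline actually works, but the failure mode is easy to reproduce with a naive substring filter. A minimal sketch, with invented word lists and labels that are not TikTok’s real rules:

```python
# Minimal sketch of naive keyword moderation (hypothetical word lists,
# not TikTok's actual system). A plain substring check can't tell
# "suicide awareness" apart from content that violates the rules.

HARD_BANNED = {"some_slur"}          # video removed outright
SOFT_BANNED = {"suicide", "murder"}  # video kept, but hidden from the FYP

def moderate(caption: str) -> str:
    text = caption.lower()
    if any(phrase in text for phrase in HARD_BANNED):
        return "removed"
    if any(word in text for word in SOFT_BANNED):
        return "shadowbanned"        # the creator is never notified
    return "eligible for FYP"

print(moderate("National Suicide Awareness Week resources"))  # shadowbanned
print(moderate("true crime update on the unaliving case"))    # eligible for FYP
```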

Instead, users began swapping words. At first, it was “Sewer Slide”, and then the more general “Unaliving” came in to replace killing, murder, suicide, etc. Every word that involves loss of life simply became ‘unalive’. And it worked. Where metaphors might have been inappropriate, the different word worked.

And Then It Got Cutesy

If you weren’t on Tumblr or Reddit during the ‘Heckin’ Pupper’ phase, you may be missing some context for how annoying this got – it was a way of baby-talking things no matter what they were, serious or not. One of the Heckin’ subreddits was Heckin’ Chonkers, a place for owners to post pictures of their obese pets. Many people understood this was unhealthy and were posting pictures of their rescues before they started their diets, but an alarmingly large number of people saw that subreddit and thought ‘Wow! See, my pet’s just a ‘chonker’, it’s okay!’ when it wasn’t. But instead of having this serious conversation in a serious way, commenters had to fight through the ocean of ‘he’s just heckin’ chubby, lol!’ to get the original poster to understand that this was a problem.

Mixing a joke into something that’s actually serious can really screw up people’s perception of it.

Back to ‘unaliving’. Consider replacing ‘murder’ with ‘unaliving’ or any other metaphor for what that means. When describing a murder, do you want the words to be said with a wink and a nudge? It didn’t start like that – it started as a way to describe crimes, threats, and real cases without losing too much of the case’s integrity to TikTok’s censorship, but as more people piled in, you saw phrases that were still allowed being replaced with ‘unaliving’. Phrases like ‘passed away’ were getting replaced with ‘unalived’. Even worse, some of the people doing that thought it was funny to do so – it was no longer a way to evade a ban to share info, but a way to share info and also signal in-group membership to other TikTok true-crimers. It depersonalized the issue for the people reading it out. You’re not describing a murder, suddenly, you’re describing an ‘unaliving’. A ‘nighty night’. A ‘fishy sleepover’. This is a stranger who died and, simultaneously, entertainment for the listeners. A real human life, and just more words on a page, just more audio on a website.

Swapping words for cuter ones when not strictly necessary is a cousin-problem to oversharing details while hiding others to make the case seem more mysterious, and otherwise fumbling the handling of a sensitive subject for likes and laughs. Who’s to say anybody wants to be described as ‘unalived’ when they die?

Others

Other words including slurs and targeted swears were also commonly censored… but some slurs aren’t really slurs unless they’re used maliciously. Additionally, words relating to the LGBTQ+ community that weren’t slurs were also censored, and that required people who wanted to talk about the community to swap words or censor weirdly too. The most egregious example was “Lesbian” being converted to “Le$bean” in text, which didn’t trigger the algorithm and couldn’t be read correctly by the autogenerated voices, leading to people pronouncing it like ‘Le-Dollar-Bean’, the way the computer reader did, as a joke.

People tried to cash in on this in a way they hadn’t for ‘unaliving’. Natural crowd movements are something you can market so long as you’re ‘chill’ about it, so it’s not necessarily a horrid idea. However, trying to make a meme localized to a group of people accessible to everyone often kills the meme. People outside the community use it wrong, they use it to be mean, they use it to laugh at the people using the meme, not with them, and the Le-Dollar-Bean song soon became cringe because it was spreading to people who were making fun of the singer and the meme itself in bad faith.

It’s not just because it was LGBT, either, although the meme wouldn’t have happened in the first place if TikTok hadn’t considered that a controversial issue. For example, the same thing happened to the phrases ‘smol bean’ and ‘cinnamon roll’ on Tumblr, which were ways of describing characters who were innocent and cute. Eventually, people started using them to describe real people, and characters who didn’t fit the description but were conventionally attractive (mostly men). Stickers of mainstream actors with the phrases around them were made, even when they didn’t apply, and then those phrases became cringe too via overexposure.

There’s a political statement to be made about the censorship of gay issues that led to this whole situation. The Le-Dollar-Bean song – a brief mark left by people who just wanted to say the word, later co-opted by people who trust corporations that put rainbows on shirts and bracelets with one hand and funnel money into anti-LGBT bills with the other – is not that statement. Somebody got a little too serious about the joke and overused it, and now Le-Dollar-Bean is cringe, and the reason it’s like that has been forgotten in favor of the song that started the cringe around actually using Le-Dollar-Bean unironically.

The Ethics of Censoring Your Captions

The goal of any translation should be for the receiver to get it as directly as possible, with some nuance allowed for things that other languages just don’t have. Japanese, for example, doesn’t lean on sarcasm the way English does, and a sarcastic comment may be interpreted as though you were being literal. Similarly, saying something like “I love you” during a quiet moment comes across as bizarrely direct, so a Japanese speaker may instead reference a poem or a common phrase as shorthand, which can be translated either literally or figuratively in media. Spanish, too, does something similar: if you watch Spanish soap operas, you may hear te quiero instead of te amo, but both will be translated as “I love you” in the captions (te quiero being literally “I want you”, but understood as “I love you”). (This triggered a huge debate in the Supernatural fandom when the international dubs of the final episode came out, but that’s another story.)

So, what does this have to do with English captions on English videos?

Creator-generated captions often censor swear words, or change what the creator is saying, which is not what those are used for! Captions are not the place to hide jokes. It’s an accessibility issue. While hearing viewers may find the dissonance between what’s in the captions and what’s being said funny, the deaf and hard-of-hearing viewers who don’t have that extra context may be confused. If you can’t swear in the captions for fear of censorship, then your interpretation should be ‘I can’t swear in this video’, not ‘I can’t type this swear in the captions’. It’s not ideal, obviously, to have to censor everything, but that’s TikTok’s problem and you should be complaining to TikTok about it, not giving the deaf audience a cleaner version of the video involuntarily.

It’s not all the creators’ fault – some mistakenly believe the app can’t hear them but can crawl the captions, and so they censor the captions so the video can still be viewed. Others rely on the auto-generated captions, and sometimes the software just doesn’t understand the word that’s being said and transcribes it incorrectly. Still, effort should be made to match the captions to the audio as closely as possible. Don’t baby-talk, don’t misuse them to hide jokes, and don’t intentionally mistranslate!

V-Tubers: The Virtual Youtuber

Elizabeth Technology February 7, 2022

You might have seen videos on Youtube’s front page for what looks like anime characters playing games. What’s the deal?

The Human Ones

We all know fans can be insane. The president of Selena’s fan club fatally shot her. A group of teenaged fans targeted celebrities to steal from. Fans surround famous TikTokers’ houses and park in the street, hoping to get a picture or video of them for the app. In Japan, idols are very reluctant to date, because the intense idol culture means that male fans see them as future girlfriends, and a real boyfriend would mean they were ‘cheating’. Superfans seem to think they ‘own’ celebrities. As such, it’s kind of dangerous to actually be out in the wild as a celebrity.

A solution? Make sure people don’t know what you, your house, or your room looks like, and it makes you harder to find. Software can be used to superimpose a 2-D character over a 3-D person, and have it follow their movements. The real person never actually appears on screen, but their facial expressions and gestures are still caught on screen via their avatar. Win-win – the streamer gets to livestream their reaction to their game anonymously.

However, obscuring one’s real identity isn’t the only reason they’re in use. Some streamers use them because they’re fun and colorful, others use them because they can be used to interact with chat without actively interacting with chat – text can scroll across blank spaces on virtual wings or T-shirts. Virtual confetti can rain down on the virtual streamer with some trigger from chat, with no mess to clean up. Sometimes, the person has appeared live before, but just doesn’t want to dress up for their stream – the V-Tuber version of themselves is always perfectly dressed!

The first one, Kizuna Ai, broke ground when she first began streaming. Motion-Capture tech used to be for movies only, as it was prohibitively expensive, and usually required special kinds of suits.

Motion Capture

If you were around for the filming of The Hobbit, you might remember that video of Benedict Cumberbatch flailing around on the ground in a skintight suit covered with white dots. That was the motion-capture process. The footage was used to drive the face and movements of Smaug, the villain of the story.

But why?

CGI artists would eventually hit a wall if they were to only make things move by hand. Yes, in the short term, doing it manually looked better (and was faster) than motion capturing, smoothing the capture out, rendering, adding in shadows, etc. However, in the long term, motion capture provides a much more realistic experience at a fraction of the cost and time of doing it the old way, especially as models got more and more detailed.

It also caught key parts of human expression and human movement better. Grimacing involves many smaller facial movements beyond the mouth turning downwards, for example. The artist used to have to move all those little details by themselves, and then repeat that for each expression or word, over and over. The alternative was an uncanny-valley creation, or one that felt flat – there just wasn’t another way before motion capture.

When filming The Hobbit, Benedict just had to make his expressions at a camera, and then the computer could use key points of the human face to connect to key points of Smaug’s face. It could register his ‘skeleton’ in the footage with those dots on his suit, and use it to create a functioning, moving Smaug shell that followed along. The computer just has to be told where to attach the dots on his suit to the Smaug shell, and voilà!

Science World compares it to three dimensional rotoscoping. Over time, facial recognition software has gotten much better. The Virtual Youtuber doesn’t even need to be wearing a suit for the virtual model to work anymore. It simply understands what a face looks like now, which is incredible. The rigs that streamers use can understand facial expressions, and as long as you tell it where the eyebrows and mouth are, it can mimic them in the virtual shell. This allows for incredible freedom when designing the character – if you want your character to have a tail, all you have to do is tell it what the tail reacts to. Wings? Same deal, you can attach them to your arms’ movements if you want, and they’ll move when you move. Some programs understand clothing physics, and can move capes according to arm movements.
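
To give a rough sense of what those rigs are doing under the hood – whether the input is dots on a suit or landmarks found on a bare face – here’s a conceptual sketch of mapping tracked measurements onto avatar parameters. The landmark fields and avatar parameter names are made up for illustration; real programs have their own rigging formats:

```python
# Conceptual sketch of how a tracked face drives a virtual avatar.
# `landmarks` stands in for the output of any face-tracking library;
# the avatar parameter names are hypothetical.

def drive_avatar(landmarks: dict) -> dict:
    """Map raw face measurements to avatar rig parameters (roughly 0.0-1.0)."""
    mouth_open = landmarks["mouth_gap"] / landmarks["face_height"]
    brow_raise = landmarks["brow_height"] / landmarks["face_height"]
    head_tilt = landmarks["roll_degrees"] / 45.0   # normalize to roughly -1..1

    return {
        "avatar.mouth_open": min(max(mouth_open * 4.0, 0.0), 1.0),
        "avatar.brow_raise": min(max(brow_raise * 3.0, 0.0), 1.0),
        # Accessories just borrow a tracked value: here the tail and wings
        # follow head tilt, the way the article describes.
        "avatar.tail_swing": head_tilt,
        "avatar.wing_spread": min(max(abs(head_tilt), 0.0), 1.0),
    }

frame = {"mouth_gap": 18, "brow_height": 22, "face_height": 160, "roll_degrees": 10}
print(drive_avatar(frame))
```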

Many programs are in use today. CodeMiko on Twitch uses the Unreal Engine software, a program used widely by game studios. FaceRig and Animaze are also popular choices, but freeware programs exist as well. It’s entirely possible to make yourself into a V-Tuber with a little elbow grease, and a willingness to work with the models.

An Opinion: V-Tubing is Friendlier than Virtual Influencing

I like V-Tubers. I don’t like Virtual Influencers. They arrived with a kind of smugness, from both their creators and assorted news outlets: “We’re winning. We’re totally funnier and hotter and more interesting than real people.” Yeah. That’s… not really a revelation. Of course an entire team of people, none of whom have to actually appear in front of the camera, is going to be more successful at being hot than a real person. Lil Miquela doesn’t have pores or acne or feelings. She is a CGI’d doll that doesn’t have to actively respond to the environment like a V-Tuber rig does. The whole draw of influencers is that they create the illusion that effortlessly attractive people exist – real people will photoshop themselves too, but normally they have the decency to hide it.

Meanwhile, V-Tubers have the opposite approach. “We all win. Let’s have fun together with this system.” When people can’t show their faces, they can wear a suit that shows their expressions, allows them to interact with chat, and allows them to communicate nonverbally where they otherwise couldn’t. The rig allows them to connect more organically to their audience, not take advantage of them. They were never meant to replace real people – they’re mostly anime-like characters with big eyes and big heads. The person behind the mask is still playing the games, and talking, too; Lil Miquela barely ever has to ‘appear’ for her audience. 90% of her interaction boils down to text that someone else writes and pictures someone else makes. Meanwhile, a V-Tuber is actually behind the screen. A V-Tuber is ultimately a real person with a tool, not a tool being used to replace a real person.  

Sources:

https://www.theguardian.com/lifeandstyle/2009/oct/27/lindsay-lohan-paris-hilton-robbed

https://www.engadget.com/2014-07-14-motion-capture-explainer.html

http://www6.uniovi.es/hypgraph/animation/character_animation/motion_capture/history1.htm

https://www.sportskeeda.com/esports/what-codemiko-really-like-off-camera

https://virtualyoutuber.fandom.com/wiki/List_of_VTuber-related_software_and_resources

Fast Fashion

Elizabeth Technology January 14, 2022

You’ve likely heard the term before – and for good reason. Fast fashion is bad for the environment, generates a ton of waste and discarded clothing, and more often than not uses sweatshop labor to keep production up and costs down simultaneously. Fast fashion also often rips off clothing from other, more sustainable, smaller brands, and idea theft in the fashion industry is becoming an increasing problem because of it.

Ultimately, well-made clothes don’t need to be replaced very often. Companies want you to replace them (because that’s how they make money). The clothes ‘got old’, so you should want something ‘new’, says advertising, even if there’s nothing wrong with the clothes themselves.

It’s an ugly thing to be a part of, ethically, environmentally, and monetarily. Fast fashion did exist in the past, but not to such extremes, and generally not for singular outfits and bizarre clothing with holes torn in the functional places. This couldn’t have happened without microtrends and the rise of social media.  

Fashion Nova, Shein, and ClickBait Fashion

Fast fashion produces strange results. Strappy sandals that go all the way up the shin combined with swimsuits in a similar fashion, pants with holes up and down the entirety of the leg and combined with ruffles, a denim bikini – not all of these are wearable, but they’re very eye catching on the website. ‘Fashion’ and ‘art’ and ‘clickbait’ all overlap with each other now in a way they didn’t used to.

Know that the website doesn’t need every piece of clothing to sell. It does not make all of these pieces to sell them; it makes them to make the other pieces that are actually wearable ‘pop’. When a fashion brand wants to unveil something controversial and exciting, the traditional play is to do it on the runway, and then tone it down for the actual line. When you see a model in an absurdly big hat, they don’t actually want to sell that hat, they just want to plant the idea of a big hat. The hat is an exaggeration of what they’re actually selling, a sort of caricature for the intended look. Fashion Nova and Shein have essentially started listing the giant hat alongside the real hats. The results are weird.

Of the pieces that are wearable, they often don’t look as good on ordinary people – or they do, but only with a few other specific articles of clothing, meaning you’re always wearing one shirt with one set of pants or one pair of shoes to make it work and look good. This ultimately means that you’re not going to wear that item until the other items that go with it are clean and ready to wear, so it’s going to sit in the closet for much longer between wears – and it may be out of trend before you, the wearer, have truly gotten your money’s worth out of it.

Social Media And Cute Stuff

We know that art tends to get consumed and riffed on into unfamiliarity when there’s clout to snatch and money to make. A popular Mitski song about longing was turned into an anthem for strawberry animals, completely missing the point, and Saturn Devouring His Son, one of Goya’s final paintings before he died (and a painting he did on the wall of his house, meaning he probably didn’t intend for anyone to actually see and document it before he passed), is sold on mugs and masks. Clothing, unfortunately, gets this treatment worse than most.

See the strawberry dress by designer Lirika Matoshi. A $300 dress (which sometimes arrived with broken zippers and hanging threads, but that’s another issue) got passed around social media alongside remixes of that Mitski song. Some people bought it, some tried to recreate it for cheaper, and the strawberry dress held TikTok’s attention for long enough that it started appearing in anime fanart. That’s pretty rare! Marketing-wise, this was bizarre but ultimately welcomed. People had Pavloved others into liking this dress because it was awfully cute and fairly easy to draw, and everyone else was into it. Make art with it, and that art will get likes. Cute Stuff Trends.

A specific item was in the spotlight – not the designer, not the line, just this one item from her, and then when it faded it was barely seen again except for in the art that recirculates every now and again.

This is the essence of a microtrend.

MicroTrends

Microtrend clothing is identifiable by a few different factors:

1) It’s cute and unique – but not so unique it’s shocking

2) It’s reasonably accessible

3) It’s very easy to photograph on almost anyone

4) It has no substitute or ‘dupe’ – only one item will do

5) It comes and goes before outsiders realize it’s ‘in’

Here’s the difference between ‘trending items’ and ‘microtrends’. I would classify the brief flash of half-open Hawaiian shirts on picture platforms like Pinterest, TikTok, and Tumblr as a trend, not a microtrend. If all of those pictures had been of the same shirt, a shirt conveniently available at Target or Amazon, then it would have been a microtrend. As it was, the only point it misses is 4 – if everyone had been seeking out one specific shirt to take pictures in, it would have been a bona fide microtrend. The strawberry dress hits all five – by the time people were working out dupes for it, the item was no longer hot.

Microtrends are all of the issues with fast fashion condensed into singular pieces of clothing. A sweater featuring a hillside with cows grazing on it hit the big time after a TikTok creator wore it for a video. The strawberry dress spawned strawberry button-up shirts that had an even shorter lifespan. Once it’s no longer hot for pictures, and all the buyer bought it for was the pictures, what happens to it? Having it was the trend, and now the trend is over.

Small businesses try their best to keep up with microtrends and make something fashionable that could also be ‘viral’, which is a tip taken from fast fashion’s vice grip on social media. I don’t blame small creators for it, because they often do their best to keep things clean and ethical in their production (not all do, but many try). I do blame the big companies who are trying to spark viral want for specific items.

Wanting Clothing? Or Wanting What the Clothing Represents?

Chasing fashion has always been exhausting, but now it’s even worse because other people are expecting their favorite style influencers to have an item and showcase it, but not too late and not in a way that’s obviously ethically questionable.

While clothes are often props for influencers, microtrends and fast fashion items take it to the extreme. Clothing items are expected to set a scene – that strawberry dress was always out and about, people were twirling in it and frolicking in grass fields. It was a prop for influencers, something especially appealing in color and composition for photos. It wasn’t a very practical item to just… wear. You don’t exist in a $300 dress, you wear it, take pictures, and then hang it back up.

The same went for the half-open Hawaiian shirts – while the style was very flattering, it could also turn very revealing if the wearer moved the wrong way out and about. Just like the stuff on those fast fashion sites, it looked good in photos, but the reality of wearing a shirt like that is that you don’t – you wear it with a few more buttons done up, not the way the model or the influencers have it on.

The dress, the shirt, the sweater does not exist outside of the scene. These things were being sold by the scene, the same way advertisements try to sell you a lifestyle. The difference this time around is that the influencers talked themselves into marketing the item instead of waiting to be sponsored for it, because it was so appealing as a prop.

The dress was a symbol of whimsy, bright pink and red and not casual at all. These weird Fashion Nova items are in the same camp – whimsical, flirty, and not casual at all. Their total unwearability is only a problem when you consider what these microtrends are outside of the internet.

Digital Clothing

While some see it as the next logical slip into NFT territory and a slow descent into The Emperor’s New Clothes digitally, others are excited by the possibility. Nobody owns the idea, after all, so if you can design your own digital clothing, you can wear it.

Many aspects of high fashion are exciting precisely because someone hand-beaded 70,000 Swarovski crystals onto a gown. The excess is what makes the runway. While younger folks with experience in digital art understand how difficult it is to sculpt beads in a program like Blender (or something more proprietary), the old guard often sees digital art as ‘art the computer made’, not ‘art someone made with a computer’. Still, digital clothing prevents people from buying things made with sweatshop labor, and its flexibility means that influencers don’t have to be under size ten to wear trendy, untailored items, so it does have a lot of appeal.

The appearance often also leaves something to be desired. DressX, the latest platform to try its hand at digital clothing, offers one-time-use Photoshopping of clothes onto pictures you submit to them. The effects are anywhere from ‘completely believable’ to ‘obviously edited in’. See Safiya Nygaard, a Youtuber who tried the service. The first few days of wearing tame clothing with pretty designs went really well… and then she bought a hat to wear, and the hat was so poorly executed that her fans realized some of the previous items were also digital. Imagine buying an item that looks so bad that people who see your pics of it begin to question the reality of other stuff you wore – that’s not what an influencer wants!

Another major downside is that the clothes still cost quite a lot of money for being single-picture-use items. On DressX (as of right now, September of 2021), you only get one picture for your purchase. If you want more pictures, you have to pay for them separately. Discovering that you don’t like the pose you struck for the outfit is going to cost money. While trendy and neat, spending $60 every time you want to show off another angle of an item is… not as economical as just buying it, and so this only solves the ethical issues with fast fashion and microtrends for the influencers who can afford to go digital. Any innovation has hard spots, however, so only time will tell if this becomes more accessible and better-looking.

Sources: https://www.thelist.com/418037/what-is-a-micro-trend-and-how-can-it-affect-fashion-sustainability/

Auto-Beauty Filters are a Problem

Elizabeth Technology January 10, 2022

Touchups

People have been faking it since the era of portraiture – you’ll notice the royal or rich subjects of paintings rarely have any blemishes on their skin, even though acne, smallpox, and rashes have always been around. Even when certain features had to be depicted, they were often minimized or altered to make the subject happy. As an extreme example, look at Charles II of Spain – a member of the Habsburg line, which had become notorious for marrying within the family instead of marrying out, a common tactic used to retain power within the bloodline.

Unfortunately, genetic conditions resulted. Charles’s physical deformities made his face difficult to depict – the poor guy had what’s known as ‘The Habsburg Chin’, a genetic condition passed down through the family. You can actually trace who in his family had it (and how severely they had it) via their portraits; the royal painters had to tread carefully between depicting their subject accurately and depicting them without insulting them. They had mirrors – of course he knew what he looked like – but, just like today, the subjects wanted to be remembered for more than their facial scars and the extensive mistakes of the royal family. The king didn’t mind being depicted through the rose-tinted lenses of his artist. When paintings are an expensive luxury, the client wants what they want.

Our records of what he looked like and how he was described in writing differ somewhat – in theory, you could still recognize him from his portrait, and that was good enough. The same goes for any number of royals. Airbrushing has always existed!

Film

Painters weren’t the only ones who took liberties.

Film was also an expensive luxury. If people from the past seemed unusually clear-skinned, they might not have been – a combination of makeup and film retouching removed blemishes like acne scars and wrinkles from the image. Cystic acne can be genetic, syphilis was still incurable, and smallpox survivors were still around, but you’d never guess the subjects of professional portraits suffered from those conditions too! Editing in black-and-white or sepia images isn’t witchcraft: it’s as simple as color-matching the person’s skin on the negative, and then painting over the flaw so it’s invisible on the developed image.

So retouching was definitely still a thing – it just wasn’t digital.

When color film hit the market, retouching could still be done, but the process was more difficult as photographers had to compensate for three colors, not just the one. A combination of special dyes and extremely fine brushes on an oversized negative, combined with better makeup, cameras, and specialty lenses (lenses designed to ‘soften’ the image, for example) allowed photographers to make their magazine cover photo flawless. This took time, and it was expensive, however, so retaking the image was often easier than editing out blemishes in post.  

Digital Smoothing

Beyond film, how did you retouch things in the early days of digital photography?

Doing digital work on a person’s face was reserved for magazines, professionals, and hobbyists – not just anyone could pop the SD card into a computer and start removing things. Ironically, the widespread availability of picture-taking items like digital cameras made the overall quality worse. When professionals took digital pictures, they never showed the client the blurry ones, and thumbs were never over critical parts of the lens during the picture-taking. Digital cameras also had lower stakes – you weren’t wasting film by taking three or four shots of the same thing to be sure you ‘had’ it.

Editing software relies on the strength or power of the computer that’s attempting to edit the image. More powerful computers can handle larger images, and gradually-improving computer strength lessened the reliance on film. As a result, businesses and major voices in the photo and film industries switched over when they could, so there is no exact ‘moment’ where editing surpassed painting – it happened in steps.

This also meant that film – which editors were familiar with and could process faster than the still-developing editing software – still held the upper hand for quite some time. Film can be endlessly upscaled; digital images cannot be. The strange grain you see on shows from the 2000s comes from being recorded digitally before the technology was fully mature. That’s just what they looked like, and fixing it would take some pretty intense AI or editing intervention. Meanwhile, films made during that time don’t look old – the clothing, speaking, and actors date the recording, not the visual grain. See the difference between a show like Lost and a show like Real Housewives. The decision to film Lost on real 35mm film has ensured it’s not as dated as it could be.

The iPhone, and Early Retouch Apps

The iPhone wasn’t the beginning of selfies – people took plenty when cameras (especially film cameras) were cheap. And the iPhone didn’t start the trend of editing, either, as you can see above. What the iPhone did was merge the two and allow them to come together in the hands of laymen. Now, with an ‘app’, anyone can take a pic and retouch it, send it to friends and family, print it, rotate it, crop it, etc. all without expert help.

This is no substitute for professional work (the first iPhone took better pics than many other mobile phones, but worse ones than professional or digital cameras) but it isn’t asked to be – we are far beyond the times when pictures were special occasions. You can track how expensive a picture was by the quality and quantity of selfies taken during the period. Did that person dress nicely for the occasion? Was it taken somewhere special? Are they posed in a way that suggests it wasn’t casual? Are they sitting for the photo, or do they just happen to be sitting when the photo was taken?

Early retouch apps were clumsy and frequently difficult to use subtly. If images from Myspace and early Facebook are any indication, the line tool was about as good as it got for tweens – the phone camera couldn’t compare to the stuff magazines and TV shows were using. ‘Digital smoothing’ available to the average consumer was about as good as a blur filter today, which is not very good. MSPaint was a legitimate option for altering profile pics. It just… all looked sort of bad. But it was passable! It wasn’t ideal, but in an era where people were just beginning to learn about Photoshop, and only experts and hobbyists really had it, any editing done to a photo had to be really brazen not to pass as ‘makeup’ or ‘lighting’ to an inexperienced internet.

Of course, professional photos still look professional, and airbrushing celebrities has only gotten more intense, but the average user is not trapped by this yet. For every smoothed, poreless face on the cover of a magazine, there are programs on MTV and tabloids showing what they look like without the touch-ups. Celebrities are an other, and you and your friends still look normal.

SnapChat Filters

And here is where we begin to see issues. SnapChat filters became a thing, and started acting like mirrors. This is a bigger problem than it sounds, and you’ll know why – when you change your haircut, when you put on or take off glasses, or in pictures, you look a little alien to yourself, but eventually, the change settles in and your internal image adjusts to what your eyes are seeing in the mirror. What if you have two mirrors, and one isn’t telling the whole truth? Which image does your brain adapt to? According to research, it’s the one that exaggerates the features you like about yourself, not the honest one.

Snapchat’s filters almost universally slim the face and lighten it up a little, too, even under ‘goofy’ filters like the animal ear ones. Other versions don’t even bother with the pretense of animal ears, they just slap some butterflies on and call it a beauty filter. The end result is a face that may be perceived as more attractive than a plain selfie. This is a problem for a couple of reasons! Assuming a whiter, thinner person is always better than the default image has troubling implications, and while this could make the pictures more attractive to the user, it does so in a way that changes their idea of their own face so much that they can’t look at the regular mirror without feeling vague dysmorphia.
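
For a sense of how little it takes to do the ‘smoothing’ half of this, here’s a minimal sketch using OpenCV’s bilateral filter (an edge-preserving blur) plus a small brightness lift. This is not Snapchat’s actual filter – just the general shape of the technique, run on random noise standing in for skin texture:

```python
# A minimal sketch of what the "smoothing" part of a beauty filter does:
# an edge-preserving blur flattens fine skin texture while keeping large
# features sharp, and a brightness bump mimics the subtle "lightening"
# described above. Not any app's real pipeline.
import numpy as np
import cv2

# Stand-in for a selfie: random noise plays the role of pores and blemishes.
face = np.clip(np.random.normal(128, 30, (256, 256, 3)), 0, 255).astype(np.uint8)

smoothed = cv2.bilateralFilter(face, d=9, sigmaColor=75, sigmaSpace=75)
brightened = cv2.convertScaleAbs(smoothed, alpha=1.0, beta=15)  # lift brightness a bit

print("texture before:", face.std().round(1), "after:", brightened.std().round(1))
```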

Because these apps aim at teens, tweens, and twenty-somethings, the issue is magnified by developmental steps. They’re right at the age where they begin to notice how they look (and how others may perceive them). Many people get acne as teens, for example, but the Snapchat filters reduce the appearance of red blotches and uneven spots – spots they will still have to look at elsewhere, like mirrors, school photos, photos taken in school clubs, family photos, etc., making special moments more difficult to capture without self-consciousness getting in the way.

Overuse of social media exacerbates the issue, and the baseline for what people really look like is lost. During this time, however, the tech was limited to a select number of apps. It was still possible to avoid it, and the only people really being affected were people who were spending an unhealthy amount of time online anyway. Surely, simply curbing use and being aware of how filters change your face was good enough to combat it, right?

The Rest

It was… until this new generation of apps and phones came out, and all of the visual ones came with some sort of ‘enhancement’ feature. Some users on TikTok report blurring and re-coloring even when no filters are active. The iPhone and many Android devices now come with beauty filters on by default. The new cameras took in so much information that it seemed silly not to try and capitalize with AI. All of this on top of the social media apps, and magazines, and retouching already seen everywhere else.

If you want to use a social app, you will see other people (many of whom you may associate with IRL because it’s a social app) using filters, even if you decide not to. You will also find that adding cool effects without altering your face is difficult-to-impossible, because it all comes built in. Filters to change eye color? Filters to add fire or ice effects? Filters that make the image black and white, or sepia? All of them come with facial smoothing.

If looking up to absurdly skinny and unrealistically ripped folks causes body issues, imagine what auto-smoothing is doing. “You could look like this,” these auto-apps say, “but you don’t really. So keep using us because we’re the only place you look right.” It is a difficult world to navigate. The worst part is that many of them know this, but their solutions or damage-control attempts can’t come with advice to stop using the platform. The best TikTok does is recommend breaks, and Snapchat, with its Streaks deal, doesn’t encourage you to stop at all. Instagram is no better.

Hidden and unremovable beauty filters are posing a bigger threat than their users realize – once you see yourself in the black mirror, the silver one seems inadequate.

Sources: https://www.pbs.org/newshour/nation/making-people-aware-of-their-implicit-biases-doesnt-usually-change-minds-but-heres-what-does-work

https://medium.com/invisible-illness/social-media-filters-adversely-affect-our-mental-health-3eec79db6383

https://www.womenshealthmag.com/beauty/a33264141/face-filters-mental-health-effect/ (the gif at the top of the screen shows how even filters not marketed for ‘smoothing’ or ‘beauty’ smooth features.)

https://www.mentalfloss.com/article/83262/how-photo-retouching-worked-photoshop

https://fixthephoto.com/blog/retouch-tips/history-of-photo-retouching.html

https://www.acc.edu.au/blog/social-media-low-self-esteem/

Bad AIs Eat Their Own Data

Elizabeth Technology December 10, 2021

Poorly optimized algorithmic content is frustrating for users, in more ways than one.

Ads (For Items)

It’s a new, somewhat dystopian warning: look for gifts in Incognito Mode so the ads don’t give away what you were looking at. Unfortunately, in a world run by websites that want you to make an account for your purchase, Incognito Mode is less helpful than it used to be.

Websites take notice of what you look at and buy, and then they juggle that into a measure of intent – are you actually planning to buy X item? How many times did you check it, and how long did you look at the listing? Did you look at other listings like it? Did you message the shop owner, or ask a question? Did you ‘heart’ it? If you did, it’s going to recommend more proportionally to how much you interacted with said item. But what about gifts, you may ask? How does the algorithm know I’m not buying this nurse-themed cup and this teacher-themed lanyard for myself?

Turns out any website using Google tools to track engagement knows what data to leave out in the long-term – they’re gathering so much data that it’s not really a loss! Given enough time to read your patterns, they’ll be able to figure out you’re done looking and will squirrel that knowledge away for the Gift Finder stuff (or whatever Google does with all of the data it stores on you) later. That’s… creepy, but not necessarily worsening your experience.
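
No retailer publishes its scoring model, but the general idea of folding engagement signals into an ‘intent’ number looks something like the sketch below – the signals, weights, and caps are all invented for illustration:

```python
# A toy version of the "intent" guess described above: weight a few
# engagement signals into one score. The weights and caps are invented,
# not any retailer's real model.

def intent_score(views: int, seconds_on_listing: float,
                 hearted: bool, messaged_seller: bool) -> float:
    score = 0.0
    score += min(views, 10) * 0.5               # repeat visits matter, with a cap
    score += min(seconds_on_listing, 300) / 60  # dwell time, capped at 5 minutes
    score += 2.0 if hearted else 0.0
    score += 3.0 if messaged_seller else 0.0
    return score

# A gift shopper might look once and leave; the score stays low, so
# (ideally) the item stops following them around afterward.
print(intent_score(views=1, seconds_on_listing=40, hearted=False, messaged_seller=False))
print(intent_score(views=6, seconds_on_listing=240, hearted=True, messaged_seller=True))
```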

But what About the Ones that Aren’t as Optimized?

What is worsening the user experience is a lack of understanding of context by other, less developed and less conscientious algorithms. Google Ads was notorious for following you around with an item you’d looked at once, until their target-testing showed users didn’t like it – and it was prone to mistakes anyway. Companies following Google’s example didn’t always move on when Google figured that out, though. Target sending out coupons for baby carriers and bottles came across as gauche, even when it was right – you hope nothing bad ever happens, but the first trimester of a pregnancy can be very scary, which is why it’s tradition to hold off until the second trimester to start sharing that news. Imagine a company butting in with a mailed coupon and effectively telling your household that you’re pregnant before you get to!

And where ‘haunting’ a user with an item they glanced at is still popular, it can make it tough for users to go back to casual browsing without that item appearing, making a website less appealing to casually visit. For example, Etsy – Etsy does not seem to be able to distinguish between items you’d buy once, like musical instruments or coffee tables, and items you’d buy over and over, like soap and other consumables. As a result, if you buy an instrument off Etsy, you don’t necessarily get ads for items related to that instrument – you just get ads for more Instruments. Take these screenshots of my Etsy front page:

This was immediately after I bought an instrument from the shop OrientalMusic, and if this was candles or snacks or something, showing me more stuff from the same vendor would be reasonable – as it is, I can’t window-shop for stuff Etsy thinks I might like because all it thinks I might like right now are more instruments.

“Shuffle” and Spotify

Spotify allows its users to make playlists of songs, but it also attempts to generate separate playlists for the user. “Discover Daily” and “Discover Weekly” are designed to show the user new (or new-to-them) music that they might like. “Release Radar” aims to surface new songs from artists like the ones in your playlists. And then there’s the “On Repeat” playlist, which is meant to play you the songs that you’ve heard most often.

The obvious issue with that: if you’re a free listener, Spotify decides which songs you’ve heard most often. If you’re a mobile listener on the free plan, you don’t have the option to turn off shuffle on the playlists you make, so the algorithm determining what song you’re going to listen to next is also ultimately deciding the On Repeat playlist, not you. The other playlists also learn that you like those same songs more, and Spotify’s algorithms scramble to provide recommendations based off of the songs you like the most… or rather, the songs it thinks you like the most, which aren’t the songs you picked, but the songs you heard most often – which Spotify decided.

Effectively, Spotify is feeding itself its own data, not yours!

Even worse, the shuffle function isn’t truly random – it’s run on an algorithm too. True randomness would be a saving grace for “On Repeat” – if you have a song in multiple playlists that you listen to often, statistically it would pop up in On Repeat before songs you only have in one playlist. Unfortunately, the order is decided by an algorithm instead. If you’re getting the same three or four songs every time you start a playlist, and the same handful the majority of the time afterwards, even with plenty of other songs in the list, that’s not a coincidence.
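
Here’s a toy model of that feedback loop – not Spotify’s actual shuffle, just a play-count-weighted pick that then feeds its own statistics. Run it and a handful of songs runs away with the “On Repeat” slot almost immediately:

```python
# A toy model of the feedback loop: a "shuffle" that weights songs by past
# play counts keeps surfacing the same few tracks, and those plays then feed
# the next weighting. Not Spotify's real algorithm - just the shape of the problem.
import random

random.seed(1)
plays = {f"song_{i}": 1 for i in range(20)}   # every song starts equal

for _ in range(500):
    songs, weights = zip(*plays.items())
    picked = random.choices(songs, weights=weights, k=1)[0]  # weighted "shuffle"
    plays[picked] += 1                                       # ...which feeds itself

top = sorted(plays.items(), key=lambda kv: kv[1], reverse=True)[:3]
print("On Repeat would be:", top)   # a few songs run away with the count
```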

OneZero says that Spotify divides its functions into exploit and explore, and when it’s trying to exploit, it’s easily tricked into a feedback loop of the same music you hear all the time. Explore is in the same boat, but it uses other people’s data to suggest songs that Listeners of X liked – leading to the same conclusion every time you open the Discover playlist. If you didn’t like those songs last time, it doesn’t care – it’s recommending them again to you now because Listeners of X liked it, and you listened because the algorithm put it first in line in shuffle, which leads to it thinking you like X a lot. Wired.com says that it can get itself so stuck on what it thinks you want that trying to break out and get new recommendations in your Discover playlists is better done on a fresh account. Yikes.

Youtube Recommended

Youtube’s recommended page is usually pretty good at picking up what you’d probably want to watch… as long as it has some history about you first, and as long as you don’t stray too far from what you normally consume. Countless Youtubers have filmed themselves opening Youtube in an incognito window so they can show how few videos it takes to get into some crazy conspiracy theory videos – turns out the Flat Earth is never more than five or ten clicks away! A phenomenon some noted was that new accounts without any other data would get funneled into a rabbit hole once Youtube had the slightest smidge of data about them – and when conspiracy theory videos have high engagement (i.e. lots of comments arguing) and enough run time for ad breaks, they’re considered above-average content. Wonder why Youtube is putting those little Context bars below videos with sensitive topics now? That’s because it was forced to reckon with the algorithm’s tendency to feed misinformation to newcomers and lead people who ‘did their own research’ right into believing the Earth was flat and lizard people were real.

Sources:

https://onezero.medium.com/how-spotifys-algorithm-knows-exactly-what-you-want-to-listen-to-4b6991462c5c

https://www.wired.co.uk/article/spotify-feedback-loop-new-music

Deepfakes: Should You be Concerned?

Elizabeth Uncategorized October 22, 2021

You might have seen those videos of “Tom Cruise” on TikTok, or maybe you saw someone’s face superimposed onto Superman. Deepfakes are getting better by the day!

Deepfake Software

Deepfakes are a species of visual edits that use pictures and video, combined with AI, to create something new! The AI uses a pre-existing video and a library of photos to replace one person’s likeness with another. If you have the pictures for it, you could deepfake your face onto Chris Hemsworth’s body, and other such shenanigans. And deepfakes aren’t just for videos! They can also be used to create better still images as well. Where Photoshop relies on a human’s touch to make it believable, deepfake tech can create a realistic still mostly by itself given the tools.

That’s the catch: not all deepfake AI has all the tools – some deepfakes are noticeably worse than others, for a couple of reasons. The tech is still pretty new, so most programs are still ‘learning’ what is and isn’t possible for a human face. The second issue is the quality of the images fed to the deepfake – if the images don’t give the deepfake enough information to accurately recreate angles, it’s going to have to get creative. This is a bad thing when you’re trying to make a believable video.
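
The classic hobbyist face-swap setup (the one the original ‘deepfakes’ tools popularized) trains one shared encoder with a separate decoder per identity; swapping means decoding person A’s encoded frame with person B’s decoder. A minimal PyTorch-flavored sketch, with layer sizes chosen arbitrarily and training, alignment, and blending all omitted:

```python
# Minimal sketch of the classic face-swap architecture: one shared encoder
# learns "a face in general", and each person gets their own decoder.
# A real pipeline also needs face alignment, masking, training loops,
# and blending the result back into the frame.
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU())

def decoder():
    return nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())

shared_enc = encoder()
dec_a, dec_b = decoder(), decoder()   # one decoder per identity

frame_a = torch.rand(1, 3, 64, 64)    # stand-in for an aligned face crop of person A
swapped = dec_b(shared_enc(frame_a))  # person A's expression, person B's face
print(swapped.shape)                  # torch.Size([1, 12288]); reshape to 3x64x64 to view
```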

Celebrities Vs. The Average Joe

Deepfakes rely on data, so if the software doesn’t have much data to work with, the resulting deepfake looks… uncanny. Even really, really good deepfakes right now, with a ton of data, look a little uncanny. Picture the last movie you saw a dead celebrity in – you probably realized something was wrong even if you didn’t know they were dead, like Grand Moff Tarkin in Rogue One. The effects team had detailed reference of Peter Cushing’s face, and he still looked a little strange on-screen. It was little things, like his neck not moving perfectly with his mouth. Young Carrie Fisher at the very end of Rogue One had a noticeable grain due to the source images, and that same young Carrie Fisher in The Rise of Skywalker looked strangely plastic even in low, indirect light.

The average person doesn’t have enough high-quality video or images from even one angle for deepfake AI to make something believable. It only takes a split-second of slightly misplaced nose or mouth for someone to get creeped out by whatever you’re making and identify it as fake. The uncanny valley is instinctual, but it’s reliable! It takes serious work to overcome that instinct. If Hollywood can’t manage it, is there anything to worry about for the average person? Well… yes. Because the average person has access to it, and the tech is always getting better.

Controlling it

How do you control it? Big stars have to deal with their image being stolen all the time. If anyone’s prepared, it’s the celebs, who have to fight magazines and movies alike to be represented like they want to be. But what about the average folks when it starts to bleed downwards? Minor politicians, or competition for the cheerleading squad? Or explicit images made specifically to harm someone’s image, made by an amateur with juuust enough knowledge to make something that, at first glance, looks believable.

How do you account for that?

Let’s look at the TikTok Tom Cruise account. The creator has gone out of his way to make it clear that Tom Cruise’s likeness there is not real. Even so, the videos are jarringly realistic. He used a Tom Cruise impersonator as the ‘base’ for the deepfake, and the end result barely catches any uncanny valley at all. He just looks a little stiff. That guy’s videos are still up, because it’s obviously not really Tom Cruise no matter how realistic it is.

And then there’s an account that’s putting Charli D’Amelio’s face on their own body, in an attempt to impersonate her. TikTok is removing these because it’s not obvious that it’s not Charli, even though the quality is worse. Someone who watches it more than once is going to recognize that it’s not Charli, but it’s still getting pulled, because it’s not being clear enough. They are crossing a line.

There’s also a distinction between the two in intent. ‘Tom Cruise’ is showcasing his technical skill; the Charli impersonator is trying to be Charli.

Legally, copyright law does have some precedent from the music and art world: if an impersonator is so close in performance to the original that an average person can’t distinguish it from reality, then they’re violating copyright. Singers use this when covers get a little too close to the original. See Drake songs, for instance: the only covers you’ll find on Youtube are by female singers or men who sound totally different, because he’s very strict about his copyright. When the audience can’t tell them apart, they’re pulled.

The problem is enforcement. The average person is not going to have the time or resources to hunt down impostors and report them all. Charli is famous on TikTok, but if she wasn’t, TikTok mods likely wouldn’t actively hunt down these impersonator accounts for her. If someone really, really hated an obscure user, they’d be able to overpower their reporting efforts with fake content, and that fake content only has to be believable enough for someone to scroll past it and think “wow, I can’t believe they’d do that”.

The average person is not equipped to scrutinize every single little bit of media that comes their way, it’s exhausting and unrealistic to expect that of them. It’s how disinformation campaigns work. If the deepfake is believable enough, and the original’s not aware of it, that deepfake may as well be fact for everyone who sees it and doesn’t realize it’s fake.

Implications

If you’re online a lot, you might have heard of that new Mountain Dew ad featuring Bob Ross’s likeness. This was… weird, to a lot of people, and for good reason. Using a person’s likeness to sell something has been a matter of debate ever since money became mainstream – you’d probably sell more spices if you said the king bought from you back in BC times. But normally the person is able to call them out for it. Now, with deepfakes, you can make celebrities say anything post-mortem, and nobody but the estate will be able to challenge it.

And, even if the estate gives permission, how specific do you have to be about that image? Actors struggle with Paparazzi images even today – Daniel Radcliffe famously wore the same shirts and pants for weeks while filming a movie, so the paparazzi’s images of him were worthless. Imagine having the ability to put Daniel Radcliffe in any pose or outfit you wanted for the front of a magazine. The person wouldn’t make unflattering faces for your pictures before they died? Well. Now they will.

Presumably Bob Ross’s estate allowed the use of his image, but in the same way we don’t take organs from dead bodies without consent of the deceased, maybe we shouldn’t allow the selling of dead loved ones’ images for advertising purposes without their consent beforehand. Especially now, when it’s easy to deceive people with this tech!

Is There Good?  

And then there’s the other side of the spectrum, where deepfakes can be used to bring people back to their glory days, or color black-and-white movies. They can be used to de-age actors, as seen in Captain Marvel, Star Wars, etc. Samuel L Jackson appeared decades younger thanks to de-aging tech, and Mark Hamill appeared as he was forty years ago for another Star Wars series.

Deepfakes, given the tools, do a better job of recreating someone’s face than human-controlled CGI ever could. They could have been used to make Henry Cavill’s Superman mustache-less in Justice League, instead of whatever they did that made his face look unsettling. He couldn’t shave his ‘stache because he was also filming Mission Impossible at the same time, so the only way out was either prosthetic facial hair, or CGI-ing over it. They picked the CGI. People noticed. Deepfake tech might have made his mouth’s movement a little less uncanny.

Deepfake tech could be used to disguise facial injuries, like Mark Hamill suffered during the original Star Wars trilogy, or create alien races without the heavy prosthetics traditionally used or sweatshop CGI-studio labor. They could make dubbed movies less visually jarring, and line up actors’ mouths with the words they’re supposed to be saying.

Deepfake technology is a very double-edged sword. All the good it could do doesn’t outweigh the bad. It’s dangerous technology, and in a world that’s increasingly using the internet to share information, disinformation is a powerful pollutant.

Sources:

https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them

Captchas – How and Why

Elizabeth Uncategorized September 29, 2021

Captcha, which stands for Completely Automated Public Turing test to tell Computers and Humans Apart (what a mouthful), was first conceptualized in the early 2000s. Websites were already struggling with bots, and a website known as iDrive recognized bots’ inability to ‘see’ the way people do. Paypal, also struggling with bot attacks, began using the same method to keep brute-force attacks from getting in. This is the true essence of Captcha – in 1997, the tech was first described as anything that could differentiate robots and humans, but it wasn’t known as ‘Captcha’ until Paypal got in on it. Its more advanced form, reCAPTCHA, was first coined in 2007 and then absorbed into Google in 2009.

Type These Letters, Hear These Sounds

The original style is becoming easier to get past as AI improves, but it’s still better than nothing. An AI would still leave clues that it was ‘reading’ the letters (or trying to) as it tried to decipher the captcha text from the other random lines and fuzz on screen – Cloudflare, a security company, notes that AI couldn’t do much better than keysmashing and hoping to get in when this was first implemented. Now that AI can ‘see’ much better than it used to, thanks to endless training to recognize text out in the real world, it gets more and more accurate with every captcha box it sees. Captchas may be algorithmically generated, but AIs built to account for algorithmically generated content are now capable of deciphering the text – solving captchas is sometimes even used as a test of those AIs!
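
To see why the old style is losing ground, it helps to see how simple the generation side is. Here’s a minimal sketch of a classic text captcha using the Pillow imaging library – jittered characters, strike-through lines, and a blur; real generators warp the glyphs far more aggressively:

```python
# A minimal sketch of how a classic text captcha gets generated: random
# characters drawn with jitter, plus noise lines and a blur. Real
# generators use much stronger distortion than this.
import random
import string
from PIL import Image, ImageDraw, ImageFilter, ImageFont

text = "".join(random.choices(string.ascii_uppercase + string.digits, k=5))
img = Image.new("RGB", (200, 70), "white")
draw = ImageDraw.Draw(img)
font = ImageFont.load_default()

for i, ch in enumerate(text):
    # jitter each character's position so the glyphs don't line up neatly
    draw.text((15 + i * 35 + random.randint(-3, 3), 25 + random.randint(-8, 8)),
              ch, fill="black", font=font)

for _ in range(6):  # random strike-through lines to break up the letter shapes
    draw.line([(random.randint(0, 200), random.randint(0, 70)),
               (random.randint(0, 200), random.randint(0, 70))], fill="gray", width=1)

img = img.filter(ImageFilter.GaussianBlur(0.7))
img.save("captcha.png")
print("answer:", text)
```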

That doesn’t mean they’re obsolete or useless for protection. Just because some people can create AIs that can get past it doesn’t mean that everyone can. Many basic bot creators would much rather go to an easier, less-well-defended site than sit there and try to program an advanced, specific AI for such a simple task. It’s not perfect protection – no protection is.

However, there were problems: unimpaired users often complained that solving them was hard, and for visually impaired or deaf users, the captcha might genuinely be unsolvable. Screen readers, a common tool for blind folks who use the internet, allow them to browse the web by reading the page out loud. Because a captcha is a picture, not a text box, the screen reader doesn’t know it’s there. Accessibility software is often simpler than cutting-edge bots (and incapable of reading images), and so those users were left behind.

Audio versions are a better solution for blind users, but their nature still makes it difficult for screen readers to find the play button. Besides, audio-to-text AI was already more advanced than picture-to-text, because there's a real market for automated captions and auto-transcribed phone calls. Transcription software has been around for ages, and it only gets better at separating noise from information – there is almost nothing a captcha could add to the sound that would stump a machine without also stumping a person. As such, these captchas are less common than the fuzzy text and image ones still seen everywhere today.

“I am Not a Robot”

One of the simplest types of captcha is the "I am Not a Robot" checkbox. It seems like it should be easy to trick – and it sort of is, but it's not a walk in the park. The box works by tracking cursor movement before the user hits the little checkbox. A bot might jump directly to the box it needs to click, with no hesitation, or it might scan the entire page to locate the box visually if it can't detect the clickable element. That's not human behavior – people don't ordinarily select the entire page and contemplate it before clicking the right area, and they're generally unable to jump straight to the clickable element the instant the page loads, even with the tab key or a touchscreen.
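As an illustration of the kind of signal the checkbox might weigh, here's a toy heuristic over recorded cursor positions – real implementations consider far more (timing, browser state, history with the service), and these thresholds are invented for the example:

```python
from math import hypot

def looks_human(points, min_duration_ms=300, min_wobble=5):
    """points: list of (x, y, t_ms) cursor samples leading up to the click.
    A script that teleports to the checkbox leaves few samples, near-zero travel
    time, and a perfectly straight path; people take time and wobble."""
    if len(points) < 5:
        return False  # essentially no movement was recorded before the click

    duration = points[-1][2] - points[0][2]
    if duration < min_duration_ms:
        return False  # clicked faster than a person plausibly could

    # Compare the distance actually travelled with the straight-line distance.
    path_len = sum(hypot(x2 - x1, y2 - y1)
                   for (x1, y1, _), (x2, y2, _) in zip(points, points[1:]))
    straight = hypot(points[-1][0] - points[0][0], points[-1][1] - points[0][1])
    return (path_len - straight) >= min_wobble  # perfectly straight paths look scripted

# A scripted jump straight to the checkbox vs. a meandering human path.
bot_path = [(0, 0, 0), (0, 0, 1), (300, 200, 2), (300, 200, 3), (300, 200, 4)]
human_path = [(0, 0, 0), (60, 5, 120), (110, 80, 300), (260, 150, 520), (300, 200, 700)]
print(looks_human(bot_path), looks_human(human_path))  # False True
```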

This was easily one of the most user-friendly kinds of Captcha out there. No reading. No listening. No selecting blurry images or trying to guess at misshapen letters. As such, it was quicker to use than a number of other types of captcha tests were, even though someone with a lot of time and determination could rig something up to bypass it.

Click These Pics

This was the barrier that previously stopped AI dead in its tracks. Training an AI to see and recognize the way humans do used to be impossible, but now… now it's on the horizon. Self-driving cars will need it. Google uses it for reverse-image search. Facebook uses it to find you in friends' photos. If humanity was going to truly master AI that behaves like people do, AI was going to have to learn how to see – and that meant outsider AI would also be learning how to see.

The pictures are easy – you get an image split into 9 or 16 tiles, and you select the tiles that match the request. An AI might be able to measure 'red' in an image, but the sort of uncomplicated AI most amateur hackers could crank out wouldn't know a fire truck from a stop sign. Even if it gets lucky once, human users are picking all of the right squares nearly every time – so if the bot misses even a sliver of the truck in another tile, or over-selects, it doesn't pass and has to try again.
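The grading side is conceptually simple too. Here's a rough sketch, with the 'consensus' tiles invented for the example – the site compares your picks against the tiles most humans agree on and allows little to no slack:

```python
def grade_selection(selected, consensus, max_misses=0, max_extras=0):
    """selected/consensus: sets of tile indices (0-8 on a 3x3 grid).
    Pass only if the user found every consensus tile and picked nothing extra
    beyond the allowed slack."""
    missed = consensus - selected   # consensus tiles the user skipped
    extras = selected - consensus   # tiles picked that most humans didn't
    return len(missed) <= max_misses and len(extras) <= max_extras

# Suppose most humans agree the fire truck covers tiles 1, 2, 4, and 5.
consensus_tiles = {1, 2, 4, 5}
print(grade_selection({1, 2, 4, 5}, consensus_tiles))     # True  - matches consensus
print(grade_selection({1, 2, 4}, consensus_tiles))        # False - missed a sliver
print(grade_selection({1, 2, 4, 5, 7}, consensus_tiles))  # False - over-selected
```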

Is that… being used for something?

Google is using captchas to crowd-source training for its AI, and doing so gives it a metric ton of training time – Wikipedia claims that people around the globe spend 500 hours completing CAPTCHAs every week. Unlike the text and audio ones, pictures with the features Google needs can't just be generated indefinitely. If you've noticed a decline in picture quality for these captchas, you're not alone – the quality really is getting worse. The sharper pictures are already trained into the database, so all that's left is the blurry, fuzzy, poor-quality ones, the ones that weren't ideal for the initial training.

Now, millions of people every day are telling the computer what a red car or a street sign looks like, instead of just a handful of researchers. Some of this is for smart-car training, some is for reverse-image searching, and some is purely to advance the state of AI – once an AI can recognize things in its environment visually, it can usually operate with less human intervention. The more training it has, the less likely it is to get confused at a really inconvenient time. Tesla has famously struggled with its AI mis-recognizing things, such as the moon, blinking streetlights, and partially graffitied signs, but the more training it gets, on worse and worse quality images, the better it should eventually perform.

Sources:

https://www.cloudflare.com/learning/bots/how-captchas-work/

https://support.google.com/a/answer/1217728?hl=en

https://googleblog.blogspot.com/2009/09/teaching-computers-to-read-google.html

https://elie.net/publication/text-based-captcha-strengths-and-weaknesses/

Car Screens – Is It a Good Idea, Really?

Elizabeth Uncategorized September 10, 2021

We all know how addictive screens are. And yet, after endless campaigns to get teenagers to stop staring at their screens while driving, we’re introducing cars that practically require it. Why?

The Good

Screens exist basically everywhere. They're often a good substitute for analog buttons, as with phone keyboards, and they offer more flexibility and wear-time when they're going to be in front of the public, as with self-serve checkout screens in grocery stores. They're easier to clean and harder to break.

However, screens and analog buttons don't have to be enemies. Some modern cars come with air conditioning that can be set to an exact target temperature, and the number of possible settings means a digital readout is required – so it gets both a screen and a set of buttons.

Other features only necessitate screens if the customer wants them. You don't need a screen for the radio – buttons could handle all of its functions just fine – but if the customer wants to know the temperature outside, that takes a display. Besides, a screenless radio would be annoying to select presets on, so radios almost universally come with some sort of screen or channel indicator, and bigger, more complex car radios with screens can show extra information about the broadcast, too.

Some features that help with safety and ease-of-driving come with screens too – backup cameras need a screen to function. There’s no way for that feature to exist without a screen somewhere, so it may as well be in the dashboard of the car.

The Bad

That being said…

Some things are better suited to buttons and physical inputs. A driver can adjust the volume just by grazing a hand along the dash until they hit the right knob. Doing the same with physical buttons and a digital readout is also workable – they can feel and hear the difference as the stations switch, so they'll eventually land on the right one as long as their fingers are on the buttons. Doing it with no physical feedback at all means taking eyes off the road, because otherwise the user doesn't even know whether their fingers are on the on-screen buttons.

Extra features that are useful can also be distracting on-screen, so it's not totally the screen's fault. GPS hooked into the car's screen makes sense – it's safer than looking down at the cupholder or blocking off a bit of the windshield with a suction-cup phone holder. However, typing on one is usually a nightmare because the positioning is awkward, right in the middle of the dash, even if the screen is top-of-the-line responsive. Syncing a phone and running the GPS there, then sending it to the screen over Bluetooth, fixes that problem but creates new ones in its wake. It's even worse if these things live in separate menus, which means spending time navigating said menus to get to the GPS, the Bluetooth hookup, or whatever other feature in the first place. All of that should happen before driving – but isn't it annoying to have to set all of that up before even leaving? Flipping through the radio was effortless before screens made it more complicated than it needed to be.

Deeply Unnecessary and Largely Unwanted

Bizarrely, automakers also offer options to connect to the internet for reasons beyond simple GPS or music. As The Turn Signal blog points out, the layers upon layers of menus and features offer plenty of distraction inside the car and no hierarchy of features. Radio should probably take fewer steps to reach than GPS, for example, because you won't use GPS on every trip, but the radio is almost always on. Another obvious downside is that if anything goes wrong with the screen itself, you're stuck with whatever settings you had when it broke, which is really annoying.

Part of this isn’t even due to the screens – it’s because the automaker is desperate to stuff as many features as possible into the car. The sheer number of things a car can do now means even if everything were analog, the user would still be glancing down pretty often just to find the right button for the task. Seat warmers, directional AC, GPS, motorized seats, built in chair massagers??, the heater, turbo heating or cooling, the radio, Bluetooth, etc. etc. would all need their own buttons – multiple buttons for each. If automakers were to make these all real, physical buttons, your dashboard would look like something from Star Wars. It’s too late to go back unless the automaker wants to ditch features that other cars (and their previous cars) still have.

Even Worse

Ford announced plans to beam billboard information directly onto the screen via a complicated system of computers and AI. While it's not literally beaming every sign it sees into the car, and it is theoretically possible to shut off, it's still an awfully ugly statement: the dashboard has become advertising space for billboards that used to be ignorable. A big question is how it interacts with other apps on-screen. Does it get priority over the radio, or the GPS? Even assuming that's all sorted and the customer willingly has the ads open, glancing down at a flash on-screen is a little dangerous, is it not? Ford's reasoning is that the consumer may have missed information they could be interested in. If the information is interesting, that's worse – that makes the distraction problem worse! The screens are already plenty distracting, with all the menus and buttons to dig around in; beaming an ad, which by design tries to snatch your attention away from whatever you were doing, into the car while the driver is driving effectively puts revenue above safety. I thought Ford had learned from the Pinto. Apparently not.

Many people jumped on Ford for even suggesting the option, as they should have – billboards themselves have gotten into trouble for being too distracting, and how beaming them directly into the car is supposed to avoid those same issues is anybody's guess.

And then there’s things like games and social media apps built into the system. It’s weird anyway, because most people have phones, but whatever. Assuming it has the most basic of safety features built in, and won’t activate if the car is in drive – what’s to stop the driver from shifting into park at every red light to check up on their accounts?

Phones can at least be stuffed into pockets – this screen would have to be disabled.

Sources:

https://www.motortrend.com/news/ford-billboard-ad-patent-system/

https://www.theturnsignalblog.com/blog/touch-screens/

https://www.motorbiscuit.com/why-are-automakers-replacing-buttons-with-touchscreens/

https://gizmodo.com/get-ready-for-in-car-ads-1846888390

https://newsroom.aaa.com/2017/10/new-vehicle-infotainment-systems-create-increased-distractions-behind-wheel/

Stop Hyping Autopilot

Elizabeth Uncategorized September 8, 2021

It’s not done yet!!

Tesla’s autopilot is really impressive. It’s just not done yet. Between failure to detect real objects and detecting ghost objects, the new Auto-pilot has a lot of really terrifying anecdotal cases.

A Word of Disclaimer

Tesla does tell users not to climb into the back seat or otherwise take their eyes off the road while Autopilot is driving. The company is constantly updating its software to include edge cases discovered on the road, and that's hard to do if the car never gets to use the feature that's causing bugs. Still, I'm not convinced that some of these user-reported issues couldn't have been caught in a testing environment. Elon Musk's consistent belief that people will die for science is not comforting here.

That said, many of the issues below are rare, fringe-case scenarios. They don't represent the cars as a whole; this is more of a warning – you really can't trust the Autopilot 100% yet, because users report a number of different issues stemming from the programming. Nothing most Tesla owners don't already know. Drive without Autopilot, or drive while paying careful attention to it, and a Tesla is as good as any other car.

The irony of using cars out in the wild to 'test' is that a regular car's cruise control is actually less stressful – the driver doesn't have to keep watch over what the car itself might decide to do. Old-style cruise control couldn't make the car suddenly brake or swerve into another car.

The Brakes, the Reads

Speaking of which, the brakes! A car capable of braking on its own can brake itself into an accident in a split second on a busy road if it sees something it thinks is dangerous.

This is a cool feature, but it's not done yet. Reddit's Tesla subreddit has numerous accounts of the brakes engaging for little to no reason: phantom animals, suddenly 'seeing' a stop sign on the highway, misinterpreting special vehicles' rear lights, and more. The biggest one is phantom overpasses, where the car misreads the shadow as a reason to stop (users say this was an older version of the software, and that newer ones don't do it as much unless there are other compounding factors, like tow trucks or construction lights – still not ideal).

Nature published an article detailing how someone could hypothetically trick the car into seeing a speed-limit sign instead of a stop sign and get it to accelerate into an intersection. Specially painting trucks and cars so that the AI misinterprets what it's seeing could turn into a great way to cause accidents. The AI is trying its best to look for hazards, but, as Nature describes it, AI is often 'brittle': the computer isn't totally sure what it's looking at, so it makes its best guess, and its best guess is often pretty bad. A computer's best guess at a food truck with a hot dog on top might be that the truck is actually an overpass, or maybe a deer, while even a small child can tell it's some sort of vehicle. Fringe cases like the hot-dog truck have to be manually added to the computer's repertoire so it doesn't freak out the next time it sees one – and it has to do this for each new 'hot dog truck' it doesn't recognize. Dale Gribble's famous ant-van would confuse it too, for example, and it's not hot-dog-like enough for the AI to snap to that memory. It would be starting from scratch, every time.
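For a feel of what 'brittle' means in practice, here's a toy numpy sketch of the underlying math – a fast-gradient-style nudge against a made-up linear scorer, not anything resembling Tesla's actual vision stack:

```python
import numpy as np

# Toy illustration of that brittleness: against a linear scorer, nudging every
# pixel by a tiny epsilon in the worst-case direction shifts the score by
# epsilon * sum(|w|), which grows with image size. This is the core idea behind
# fast-gradient-style attacks; it is not Tesla's actual model.
rng = np.random.default_rng(0)
n_pixels = 10_000
w = rng.normal(size=n_pixels)        # weights of a made-up "is this a stop sign?" scorer
x = rng.uniform(0.0, 1.0, n_pixels)  # an ordinary image, pixel values in [0, 1]

clean_score = w @ x                  # the model's verdict on the clean image
epsilon = 0.03                       # a per-pixel change far too small to notice
x_adv = x - epsilon * np.sign(w) * np.sign(clean_score)  # push the score past zero

# The push is epsilon * sum(|w|), roughly 240 here, while the clean score is
# typically only a few dozen either way - so the sign (the verdict) flips.
print("clean score:    ", clean_score)
print("perturbed score:", w @ x_adv)
```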

It also occasionally fails to brake or swerve when there really is something there. Commenters theorize that the computer is deliberately programmed to ignore things along its sides, so it doesn't freak out about the railings and concrete barriers that run alongside highways.

The Lights and Cameras

Tesla’s auto-pilot is easily confused by wet road surfaces. One user reported that their Tesla couldn’t understand reflections from signs, or wet ground. It would see it’s own high-beams in the reflected light, and lower them automatically. And then it realizes it’s dark once it’s past the sign, so it flips them back on. It keeps doing this until it has a continuous level of darkness or brightness in-line with what it’s expecting from a dry road with few signs. Unfortunately, that means the car has to make it to an area with streetlights or other cars for it to figure out the low beams should be on, not the high beams. Or the user can flip it manually, which means turning off the autopilot, on some models. Speaking of light, it can’t always tell that lights are lights and not more white lines.

It also struggles with overpasses – it doesn't really understand bridges, and there are so many bridges, overpasses, and assorted vertical shadow-casters that distinguishing them from a regular stoplight pole is a Herculean challenge. As such, it often erred on the side of caution before reprogramming fixed some of its confusion.

The built-in monitor can also display what the camera thinks it's seeing, which gives the user some valuable insight into how it works. Once it pings something as an object, that object is there now, as far as the car is concerned. See the gif of someone driving behind a truck carrying stoplights:

This is a hilarious edge case, and I don't blame the car for not understanding what's happening, but the lights stick to the spot in the road where the Tesla first identified them. Once it's there, it's there – a box or bag in the road that's incorrectly identified might not get re-identified correctly. Of course not! If the Tesla were told to constantly re-check, it might misidentify things it got right the first time, and the more opportunities the programmers give it to do that, the more likely it is to happen. Right now, what Tesla has works well in ideal conditions; the struggle is getting all of it to work in the real world.

The Hardware

The cameras are great. The issues with the Autopilot are purely software-driven. The flash memory used in older models was prone to failure and had to be treated as a warranty item to avoid a total recall, which sucked for users, but otherwise the hardware directly tied to software functions is more or less working as advertised. It's the other parts of being a car where Tesla falls down.

It’s unfortunate, but Tesla’s ‘Model S’ front axels are prone to deforming. It doesn’t happen quite often enough to warrant a recall, but enough for some disgruntled users to post about it online. Something as simple as driving onto the curb bends the front axle, and the user then starts to hear strange noises from around the wheel area when they turn. Many Tesla superfans attribute these complaints to one guy in Australia harping on it, but scattered posts (from various devices, locations, and dates) across the Tesla subreddit as well as Tesla forums suggest this is a bigger issue than those superfans (and Tesla) want to believe. Tesla revolutionized electric cars, but it also re-did a lot of design work itself, from scratch. Is it really that unbelievable that cars across nearly a decade could be suffering from a premature parts failure? It happens to non-electrics all the time!

Design

Also, from a design standpoint, I just… don't think the Cybertruck looks that good. The previous four-door Teslas look great! They're very slick, but they look a lot like some of the hottest cars on the market – a family car, or a commuter car. They blend in with the pack and only stand out in traffic in good ways, like their lack of noise. The Cybertruck looks nothing like the trucks it's meant to compete with. The sides of the bed are raised so they meet the rest of the body in a nice, straight line. That sure looks cool, but it means the driver can't toss anything of real weight in over the side. That's one of those minor-but-annoying things that peeves owners over time.

The glass is also armored, which is cool, but… what for? Who is driving this? Who's afraid of getting hailed on or shot at, but doesn't just want a less conspicuous vehicle? Or the inverse – bougie celebrities with a lot of money and a lot of enemies might want a really conspicuous car with stronger glass. Does the Cybertruck do that? Kinda… but so do many sports cars.

It’s a cool idea, but it’s just that – an idea. The truck of the future, not the truck of right now. An electric truck is a great idea! But it doesn’t look anything like other company’s versions of the same concept does, so people may be reluctant to jump to Tesla, instead of Ford. Differentiation in cars can either give you the VW Beetle, or the Pontiac Aztec. Only time will tell how the cyber truck fares.

Sources:

https://www.tesla.com/cybertruck

https://www.nature.com/articles/d41586-019-03013-5

https://forums.tesla.com/discussion/60330/model-s-axle-problems

https://www.forbes.com/sites/bradtempleton/2020/10/23/teslas-full-self-driving-is-999-there-just-1000-times-further-to-go/?sh=7c7734c32ba6

AI: You Get Out What You Put In

AI needs training to understand what it's meant to do. The quality of the training determines its outcomes.

Tay Fed Tweets

Microsoft's Tay was exposed to the worst of the internet at incredible speed. Once Microsoft announced the project, Tay began forming her own tweets out of the content she was sent, and it went about as well as you'd expect: racist, sexist, anti-Semitic language ruled her feed, and she was shut down shortly after launch. It was an unfortunate experiment, because Tay might not have turned so quickly if she'd just been exposed to open Twitter anonymously. There are plenty of racist tweets on Twitter, but being targeted by a mob produces a disproportionate amount of 'bad' tweets aimed at the target. Then again, if Microsoft hadn't announced Tay's existence, she wouldn't have gotten many messages at all – knowing what the experiment is allows people to screw with it in the same way it allows them to participate.

The experiment was still considered a success: Tay took exactly what she was given and rebuilt it in new, interesting ways. Racist new ways, but still new ways. A machine successfully learned how to make a targeted threat at another Twitter user – that was huge for machine learning. Of course, Microsoft doesn't exactly want that to be the public face of its machine learning program, so Tay was reset and relaunched with filters in place, under different names.

This is a key experiment, not only because of how fast Tay keyed in on how she was 'supposed' to behave, but also because it highlights the problems with reinforcement learning in a live environment: an AI may end up learning things it isn't supposed to, to the detriment of the environment it's meant to be part of.

Google Deep Dream Fed Eyes

Google's Deep Dream software was famous when it first reached the public. It was fed pictures of living things, and so living things were all it could see, anywhere. Everything it touched was coated in eyes and fur texture. It was horrifying. Deep Dream found all sorts of eldritch horrors in everyday items – it was doing its best, but almost all it knew were pictures of dogs.

Google fed Deep Dream a set of images from a database assembled by a university, but it wasn't given all of the images, since that would have been a huge amount of data for the then-small Deep Dream. Instead, Deep Dream consumed the library in smaller pieces, and one of those pieces was a very fine-grained sub-library of images of dogs. Deep Dream's specific instructions were to take the picture, illustrate what it saw in it, and repeat. Little aberrations slowly turned into eyes and whiskers.
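In code, that loop is surprisingly small. Here's a minimal sketch of the 'look, amplify what you see, repeat' idea using a pretrained torchvision classifier as the eye (assuming a recent torchvision for the weights= argument) – the layer index, step size, and iteration count are arbitrary, and Google's real pipeline adds octaves, jitter, and smoothing on top of this:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# A generic image classifier stands in as the "eye" whose activations get amplified.
cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in cnn.parameters():
    p.requires_grad_(False)

def dream(path, layer=20, steps=25, lr=0.05):
    img = Image.open(path).convert("RGB")
    x = transforms.Compose([transforms.Resize(400), transforms.ToTensor()])(img)
    x = x.unsqueeze(0).requires_grad_(True)
    for _ in range(steps):
        act = x
        for i, module in enumerate(cnn):   # run the image partway through the network
            act = module(act)
            if i == layer:
                break
        act.norm().backward()              # "what the layer sees", summed into one number
        with torch.no_grad():
            x += lr * x.grad / (x.grad.abs().mean() + 1e-8)  # nudge pixels to excite the layer more
            x.clamp_(0, 1)
            x.grad.zero_()
    return transforms.ToPILImage()(x.detach().squeeze(0))

# dream("any_photo.jpg").save("dreamed.jpg")  # small aberrations grow into eyes and whiskers
```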

Since then, Deep Dream has added filters that let users pick which database of images they want to use, each of which creates a new, wacky image out of their own uploads, but the dog filter still sits strong in people's favorites. Sometimes things are so creepy they're cute! The story of an AI chugging along and doing its best is one for the ages.

Art-Breeder

Art Breeder is an AI-powered character creation tool, and it's already been covered by some of the largest channels on YouTube. Art Breeder breaks human expressions down into granular emotions, such as happiness or fear. Using sliders, the user can then alter a preexisting image, or create a face out of thin air! Art Breeder uses its database to put together what it thinks happiness or sadness does to the human face. It's difficult to get a perfectly realistic human face – most still look animated or cartoony – but it's also frighteningly easy to accidentally create a blob-monster.
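Under the hood, tools like this typically treat a face as a point in a generative model's latent space, and each slider as a direction through that space. Here's a toy numpy sketch of the slider math, with the 'happiness' direction invented rather than learned:

```python
import numpy as np

# A face is a point in latent space; "happiness" is a direction through it.
# Here the direction is random noise standing in for one learned from labeled faces.
rng = np.random.default_rng(0)
latent_dim = 512
face = rng.normal(size=latent_dim)           # a stand-in latent code for one face
happiness = rng.normal(size=latent_dim)      # in a real tool, learned from examples
happiness /= np.linalg.norm(happiness)

def slide(latent, direction, amount):
    """Move the latent code along a semantic direction; 'amount' is the slider value."""
    return latent + amount * direction

happier_face = slide(face, happiness, 2.5)   # push the slider toward "happy"
blob_monster = slide(face, happiness, 40.0)  # push it far off the data the model knows
# A generator network would then decode these latent codes back into images.
```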

Art Breeder’s AI doesn’t actually know what it’s seeing, it’s just doing it’s best based on pictures it’s been fed. It doesn’t know that glasses are separate from a person’s face, for example, or that mouths don’t have four corners for smiling and frowning at the same time. It also doesn’t necessarily understand freckle patterns, or where blush belongs. Art Breeder’s fascinating. It can make faces, or it can make motorcycle accident victims, all with a single mis-click of a slider.

 

AI-Dungeon Fed Fanfiction

AI Dungeon, a young and up-and-coming Steam game, made some waves when it announced that it was trying to fix issues with its script generation – especially the AI's tendency to get explicit with named characters. Why, you ask? The source AI that AI Dungeon builds on was partially trained on fanfiction, alongside Wikipedia and assorted other text sources. Fanfiction, for those who don't know, is fan-written fiction about popular media.

Fanfiction is great because it can go absolutely buck-wild with the content: characters are put into alternate universes where a certain character’s death doesn’t happen, or maybe the entire cast is working in a coffee shop. Maybe two characters end up in a relationship even though nothing in the canon of the work suggested that could happen. It’s a great place to start for aspiring writers, since the characters are all already written – all that’s left is to put them together in a different way.

Unfortunately, a lot of fanfiction is… explicit, so filtering is very necessary. Feed AI Dungeon explicit content and it will attempt to recreate what it was trained on – which turned out to be an absurd amount of explicit content mixed in with all of the general-audience and PG-13 material the developers wanted to use.

The worst part is not the explicit content itself, which is allowed – it's that the machine didn't know such content was only supposed to involve adult characters, which ended up producing some very awkward, uncomfortable output for the people who discovered the flaw. In response, the developers updated the reporting system to keep that from happening again, and the Dungeon is now allowed to auto-flag itself if it spots content it's not supposed to be making.

Potential for Racism

Unfortunately, training facial recognition software mostly on white faces means the computer only understands white faces well; it doesn't get sufficient training anywhere else. When the only birds you've ever seen are budgies, every budgie looks distinct – but all cockatiels look the same until you've had more experience handling them. The AI isn't being given the necessary experience, and as a result it falsely flags Black and Asian men disproportionately often.

It’s happened before. It will continue to happen unless steps are taken to prevent it. All it takes is a mistake that the human handlers either don’t catch or deliberately ignore, and an innocent person is a suspect where they otherwise wouldn’t have been.

Interestingly enough, the same phenomenon shows up in more primitive AI, the kind that soap dispensers and automatic door openers use. "Racist soap dispensers" sounds like political fluff, but it is a real issue: the dispenser is effectively calibrated to detect white hands. It doesn't know it's supposed to respond to darker palms, so it just… doesn't. Older styles that relied purely on movement actually did a better job than the kind that's supposed to identify a hand before dispensing. The exclusion may be an accident, but the result is still unequal treatment.
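To see how lopsided training data turns into lopsided results, here's a toy numpy sketch – the 'face embeddings', group sizes, and classifier are all invented for illustration:

```python
import numpy as np

# A nearest-centroid "face matcher" trained on 95 examples of group A and only
# 5 of group B. Everything here is synthetic; it only illustrates the mechanism.
rng = np.random.default_rng(42)

def make_faces(center, n):
    """Fake face embeddings: points scattered around a group's true center."""
    return center + rng.normal(size=(n, 16))

center_a, center_b = np.full(16, 0.0), np.full(16, 0.5)
train_a, train_b = make_faces(center_a, 95), make_faces(center_b, 5)

# The model only ever sees its skewed training data, never the true centers.
learned_a, learned_b = train_a.mean(axis=0), train_b.mean(axis=0)

def accuracy(samples, own_centroid, other_centroid):
    d_own = np.linalg.norm(samples - own_centroid, axis=1)
    d_other = np.linalg.norm(samples - other_centroid, axis=1)
    return (d_own < d_other).mean()

test_a, test_b = make_faces(center_a, 1000), make_faces(center_b, 1000)
print("accuracy on group A:", accuracy(test_a, learned_a, learned_b))
print("accuracy on group B:", accuracy(test_b, learned_b, learned_a))
# Group B's centroid is estimated from almost no data, so its members get
# misclassified more often - error inherited from the training set, not
# from the people being classified.
```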

The biases of researchers are reflected in their research, and if they don't notice them, they may believe they had no hand in how the data was collected or how the questions were chosen. That's why it's so critical to test, re-test, and re-re-test experiments. Biases are difficult to rule out, but not impossible. Don't assume a machine is perfectly logical and always right: it was made by humans, after all.

Sources:

https://www.fastcompany.com/3048941/why-googles-deep-dream-ai-hallucinates-in-dog-faces

https://www.artbreeder.com/

https://latitude.io/blog/update-to-our-community-ai-test-april-2021/

https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential

https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

https://www.nature.com/articles/d41586-020-03186-4

https://www.nature.com/articles/d41586-019-03013-5