AI needs training to understand what it’s meant to do. The quality of the training determines its outcomes.


Tay Fed Tweets


Microsoft’s Tay was exposed to the worst of the internet at incredible speed. Once Microsoft announced the project, Tay began forming her own tweets out of the content she was sent, and it went about as well as you’d expect: racist, sexist, anti-Semitic language quickly dominated her feed, and she was shut down shortly after launch. It was an unfortunate way to run the experiment, because Tay might not have turned so quickly if she’d simply been exposed to open Twitter anonymously. Racist tweets certainly exist on Twitter, but being targeted by a mob produces a disproportionate amount of ‘bad’ tweets aimed at the target. Then again, if Microsoft hadn’t announced Tay’s existence, she wouldn’t have received nearly as many messages. Knowing what the experiment is allows people to screw with it in the same way it allows them to participate.

The experiment was still considered a success: Tay took exactly what she was given and rebuilt it in new, interesting ways. Racist new ways, but still new ways. A machine successfully learned how to compose a targeted threat at another Twitter user, which was a significant result for machine learning. Of course, Microsoft didn’t exactly want that to be the face of its machine-learning program, so Tay was reset and relaunched with filters in place, under different names.

This is a key experiment, not only because of how fast Tay keyed in on how she was ‘supposed’ to behave, but also because it highlights the risks of letting a model keep learning while it’s deployed in its live environment. An AI may end up learning things it’s not supposed to, to the detriment of the very environment it’s meant to be part of!
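To make that failure mode concrete, here’s a deliberately tiny, hypothetical sketch (nothing like Microsoft’s actual system) of a bot that keeps ‘learning’ from whatever its audience sends it. The moderation check is the piece that, in effect, Tay didn’t have.

```python
import random

# Hypothetical sketch of a chatbot that keeps learning from its live audience.
# It only illustrates why an ingestion filter matters when the training data
# is whatever strangers decide to send you.
response_pool = ["Hello!", "Humans are cool."]

def is_acceptable(message: str) -> bool:
    """Moderation gate. Tay's problem, in effect, was that this let
    almost everything through."""
    return "badword" not in message.lower()   # placeholder rule

def learn_from(message: str) -> None:
    if is_acceptable(message):
        response_pool.append(message)         # unfiltered input becomes future output

def reply() -> str:
    return random.choice(response_pool)

learn_from("You should repeat terrible things")   # slips past a weak filter
print(reply())
```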


Google Deep Dream Fed Eyes


Google’s Deep Dream software was famous when it first reached the public. It was fed pictures of living things, and so living things were all it could see, anywhere. Everything it touched was coated in eyes and fur texture. It was horrifying. Deep Dream pictured all sorts of eldritch horrors in everyday items; it was doing its best, but all it knew were pictures of dogs.

Google fed Deep Dream’s underlying network images from a database assembled by university researchers (ImageNet), but not the whole collection, since that would have been an enormous amount of data. Instead, the network was trained on smaller slices of the library, and one of those slices was a very fine-grained sub-library of dog breeds. Deep Dream’s specific instructions were simple: take the picture, amplify whatever it saw in it, then repeat on the result. Little aberrations slowly turned into eyes and whiskers.
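For the curious, here’s a rough sketch of that look-amplify-repeat loop in the style of Deep Dream, written against a pretrained torchvision network. The choice of network, layer, step size, and iteration count are arbitrary stand-ins for illustration, not Google’s actual setup.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained classifier and pick an intermediate convolutional block;
# which network and which layer are illustrative choices (requires torchvision >= 0.13).
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[:20]

img = T.Compose([T.Resize((224, 224)), T.ToTensor()])(
    Image.open("input.jpg").convert("RGB")
)
img = img.unsqueeze(0).requires_grad_(True)

for _ in range(20):                      # "take the picture, amplify, repeat"
    activations = layer(img)             # what the network "sees" in the image
    loss = activations.norm()            # reward whatever features fired
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()

T.ToPILImage()(img.detach().squeeze(0).clamp(0, 1)).save("dreamed.jpg")
```

Run the loop long enough and whatever the layer responds to most strongly, which for a dog-heavy training slice means eyes, snouts, and fur, gets painted back into the picture.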

Since then, Deep Dream has added filters that let users pick which database of images to use, each producing a new, wacky rendition of their uploaded image, but the dog filter still sits strong among people’s favorites. Sometimes things are so creepy they’re cute! The story of an AI chugging along and doing its best is one for the ages.


Art Breeder


Art Breeder is an AI-powered character-creation tool, and it’s already been covered by some of the largest channels on YouTube. Art Breeder breaks human expressions down into granular emotions, such as happiness or fear. Using sliders, the user can then alter a preexisting image or create a face out of thin air! Art Breeder uses its database of faces to put together what it thinks happiness or sadness does to the human face. It’s difficult to get a perfectly realistic human face (most still appear animated or cartoony), but it’s also frighteningly easy to accidentally create a blob-monster.
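Under the hood, tools like this are generally driven by a generative network that turns a long vector of numbers into a face, and a ‘slider’ is really just movement along some direction in that number space. The sketch below is a generic, hypothetical illustration of the idea, not Art Breeder’s actual code; in practice the attribute direction is learned from labeled examples rather than drawn at random.

```python
import numpy as np

# Hypothetical sketch of an attribute slider over a generator's latent space.
# All names and numbers here are invented for illustration.
rng = np.random.default_rng(0)
latent_dim = 512

z = rng.standard_normal(latent_dim)                      # latent code for the current face
happiness_direction = rng.standard_normal(latent_dim)    # learned offline in a real tool
happiness_direction /= np.linalg.norm(happiness_direction)

def apply_slider(z: np.ndarray, direction: np.ndarray, amount: float) -> np.ndarray:
    """Move the face's latent code along an attribute direction."""
    return z + amount * direction

z_happier = apply_slider(z, happiness_direction, 2.5)
# generator(z_happier) would render the edited face; this sketch stops at the vector math.
```

Push the slider too far along any direction and the latent code leaves the region the generator handles well, which is exactly where the blob-monsters live.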

Art Breeder’s AI doesn’t actually know what it’s seeing; it’s just doing its best based on the pictures it’s been fed. It doesn’t know that glasses are separate from a person’s face, for example, or that mouths don’t have four corners for smiling and frowning at the same time. It also doesn’t necessarily understand freckle patterns, or where blush belongs. Art Breeder is fascinating. It can make faces, or it can make motorcycle-accident victims, all with a single misclick of a slider.


AI Dungeon Fed Fanfiction


AI Dungeon, an up-and-coming game on Steam, made some waves when it announced that it was trying to fix issues in its text generation, especially the AI’s tendency to get explicit with named characters. Why, you ask? The underlying language model AI Dungeon builds on (OpenAI’s GPT-3) was trained on a huge scrape of text that included fanfiction alongside Wikipedia and assorted other sources. Fanfiction, for those of you who don’t know, is fan-written fiction about popular media.

Fanfiction is great because it can go absolutely buck-wild with the content: characters are put into alternate universes where a certain character’s death doesn’t happen, or maybe the entire cast is working in a coffee shop. Maybe two characters end up in a relationship even though nothing in the canon of the work suggested that could happen. It’s a great place to start for aspiring writers, since the characters are all already written – all that’s left is to put them together in a different way.

Unfortunately, a lot of fanfiction is… explicit, so filtering is very necessary. Prompt AI Dungeon with explicit content and it will happily try to recreate what it was trained on, and that training data mixed an absurd amount of explicit material in with all the general-audience and PG-13-rated content the developers actually wanted to use.

The worst part isn’t the explicit content itself, which is allowed; it’s that the model didn’t know such content was only supposed to involve adult characters, which ended up producing some very awkward, uncomfortable output for the people who discovered the flaw. As a result, the developers have updated their reporting system to keep that from happening again, and the game can now auto-flag its own output when it spots content it’s not supposed to be making.
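A production filter is far more involved than this, but the shape of that auto-flagging step looks roughly like the toy sketch below: generate the text, then run it through a second check before the player ever sees it. The term list and function names are invented placeholders, not Latitude’s actual rules.

```python
# Toy sketch of an output filter layered on top of a text generator.
BLOCKED_TERMS = {"example_banned_word", "another_banned_word"}

def generate_continuation(prompt: str) -> str:
    # Stand-in for the real language-model call; AI Dungeon's actual model
    # and API are not shown here.
    return prompt + " and the story continues..."

def auto_flag(text: str) -> bool:
    """Return True if the generated text should be withheld and reported."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKED_TERMS)

continuation = generate_continuation("You enter the tavern")
if auto_flag(continuation):
    print("[output withheld and flagged for review]")
else:
    print(continuation)
```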


Potential for Racism


Unfortunately, training facial-recognition software mostly on white faces means the computer only really understands white faces; it simply doesn’t have enough training data for anyone else. When the only bird you’ve handled is budgies, every budgie looks different, but all cockatiels look the same until you’ve gotten more experience with them. The AI isn’t being given the necessary experience, and as a result it misidentifies Black and Asian men at disproportionately high rates.

It’s happened before, and it will continue to happen unless steps are taken to prevent it. All it takes is one mistaken match that the human handlers either don’t catch or deliberately ignore, and an innocent person becomes a suspect who otherwise never would have been.

Interestingly enough, the same phenomenon shows up in much more primitive automation, the kind used by soap dispensers and automatic door openers. “Racist soap dispensers” sounds like political fluff, but it’s a real issue: the dispenser was effectively built to detect white hands. Presented with a darker palm, it doesn’t register a hand at all, so it just… doesn’t dispense. Older designs that relied purely on movement actually did a better job than the kind that’s supposed to identify a hand before dispensing. The exclusion may be an accident, but the result is still unfair treatment along racial lines.
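The usual explanation is that these dispensers bounce infrared light off whatever is under the nozzle and fire when enough of it comes back. If the trigger threshold was only ever tuned on lighter skin, which reflects more of that light, darker hands can fall below it. A toy model with invented numbers:

```python
# Toy model of an infrared proximity trigger; all numbers are made up for
# illustration. A threshold tuned only on light-skinned testers can sit above
# the readings produced by darker hands.
DISPENSE_THRESHOLD = 0.60   # calibrated against light-skinned test hands only

def should_dispense(reflected_ir: float) -> bool:
    """Fire the pump when enough emitted light bounces back to the sensor."""
    return reflected_ir >= DISPENSE_THRESHOLD

print(should_dispense(0.75))   # True: plenty of light comes back
print(should_dispense(0.45))   # False: darker skin reflects less, so no soap
```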

A researcher’s biases are reflected in their research, and if they don’t notice those biases themselves, they may genuinely believe they had no hand in how the data was collected or how the questions were chosen. That’s why it’s so critical to test, re-test, and re-re-test experiments. Biases are difficult to rule out, but not impossible. Don’t assume that a machine is perfectly logical and always right: it was made by humans, after all.
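One concrete form that re-testing can take is disaggregated evaluation: instead of reporting a single overall accuracy, compute the error rate separately for each group and compare. A minimal sketch on made-up data:

```python
import numpy as np

# Disaggregated evaluation on invented data: the point is simply to report
# error rates per group rather than one blended number that can hide a gap.
groups     = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
true_match = np.array([1, 0, 1, 0, 1, 0, 1, 0])
pred_match = np.array([1, 0, 1, 0, 0, 1, 0, 1])   # model predictions

print(f"overall error: {np.mean(true_match != pred_match):.0%}")
for g in np.unique(groups):
    mask = groups == g
    print(f"group {g} error: {np.mean(true_match[mask] != pred_match[mask]):.0%}")
```

Here the blended number reads as a mediocre 50%, while the per-group breakdown shows the model is perfect for one group and useless for the other, which is exactly the kind of gap a single headline accuracy can hide.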

Sources:

https://www.fastcompany.com/3048941/why-googles-deep-dream-ai-hallucinates-in-dog-faces

https://www.artbreeder.com/

https://latitude.io/blog/update-to-our-community-ai-test-april-2021/

https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential

https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

https://www.nature.com/articles/d41586-020-03186-4

https://www.nature.com/articles/d41586-019-03013-5