Category Archive


Why do Recipe Websites Give you an Essay Before the Recipe? Or, the ~Magic~ of Ads.


Cookbooks. They’re great. They don’t have to load their contents, and they usually contain tons of helpful technique information on top of recipes. But they can be expensive, and they don’t always have every recipe you want. So recipe-makers turned into recipe-bloggers. Over time, the content got longer, and longer, and longer… and more websites sprang up out of nowhere with recipes.

And are the essays at the top of the recipe really that annoying?

Longer Websites = More Engagement… With Ads


This is the root of the problem. Access to the website doesn’t cost money, but it’s not free. To provide the platform for this recipe, most recipe bloggers use ads. If everything fits onto one scroll-bar’s worth of page, then they only have room for one scroll-bar’s worth of ads. ‘What, so I have to scroll for two minutes because they want more money?’ doubters might say. Well, yeah.

Hosting a website costs money. That doesn’t even include the labor of producing the recipe, taking the photos, and ultimately, creating the content that makes the website tick. Hosting something that other people can anonymously comment on is brutal and often thankless. The essay system allows many websites to keep running even if they’re very small. Recipe bloggers are asking for your time in exchange for free access to a quality recipe, instead of money, like cookbooks would.

Surely, viewers are adaptable enough to understand that, right? Most people are reasonable enough to wait or scroll for content they value… right?

Unfortunately, the end consumer doesn’t know the quality of the recipe before they invest this effort to get to it. It might have five stars but only produce two servings when four are needed, or it might have five stars while all the comments note that ‘it fell apart, but it would be great if it had eggs!’ So it’s frustrating to wait for the ads to load, wait for the page text to load, sit there as the auto-play video buffers so you can close it before it makes noise, scroll down so the recipe itself loads, and wait as the screen jerks around because the top bar ad still had to load. It feels agonizing to wait for something when Google made it seem so easy by scraping the ingredients for the search snippet.

It’s even worse if you don’t know how long it will take for it to finish – unpredictable waiting times make consumers angry!


More Engagement = More Ad Revenue – No Matter the Quality


Try to assume the worst of the recipe blogger, for a second. Assume the story’s obviously made up, or irrelevant to the final recipe. Assume it’s poorly written, and the narrative style doesn’t capture your attention. You only notice this if you’re actually reading these things or if the website sucks so badly you can’t jump to the recipe. Both of these scenarios mean you’re interacting with the site. The motivation to make the site better and shorter is merely “pleasant feelings from consumers”, but the motivation to keep it as-is is ad money. If it’s what’s called a ‘click-farm’, then they don’t even care about consumer feelings. Click-farms don’t care about anything but views; they don’t care how many users hate the site, and they avoid optimizing on purpose because a slow site keeps you on it longer.

You’re more likely to click an ad, accidentally or not, if the website’s laggy, jumpy, or slow. You blame your frustration on that essay, because it’s the only thing you can still see when the site’s lagging, and it’s all totally pointless to you. (My conspiracy theory is that those auto-play videos aren’t meant to actually play a video, they’re there to slow you down. I have no proof of this. Don’t quote me.)

Bad recipe websites make users less tolerant of the ones that don’t cause this kind of suffering. And it’s not about the essay – it’s about formatting!

Determining a recipe’s worth has become harder because of this essay/ad space system, and frustration caused by poorly optimized websites is now transferred to the website’s format, which is a different thing entirely. Furthermore, click-farm websites exploiting the format get mixed in with the real sites run by real people, but the end user can’t tell which is which. Recipe websites didn’t use to be like this, and many still remember the good old days. In fact, the good old days are still here, but because so many people are using mobile phones instead of desktop, this essay issue feels more prevalent than it actually is.


You’ll probably stay unless the website is atrocious – and they know that


I don’t find myself often visiting the same site twice; when I’m looking for a recipe, I usually already know what I want to make, and I’m just looking for a recipe to facilitate that. I, like many people, don’t follow these recipe blogs for ideas. There are so many websites following the same format that they’re all more or less interchangeable. So it would make sense for a good website to try and outcompete the others by optimizing better, right?

That’s the trick: the website has to show up in the results first for that strategy to be effective. But if they’re new (and if they’re one of the millions of sites with a blueberry muffin recipe) they’ll get sorted to the bottom, and the top sites all follow the winning format because the winning format can pay for their ads. The newbies then have to optimize for the limited number of visitors to their website, which – you guessed it – means following the winning strategy. Increased funding means they can now pay for advertising campaigns, and now they’re one of the horde.

Besides, if I click on a website and realize it’s terrible, I’m still going to wait for it to finish loading. I don’t know if the other websites with similar recipes are going to have the same loading time, so I’m not saving any time if I risk it and find out the second result from the top is also poorly optimized. They’re all playing chicken, and they know that aside from standouts like Allrecipes and other crowd-sourced sites, you really don’t have another option. You won’t leave unless the website’s truly, truly horrible.


Personality books and TV – Hope


This whole event is so frustrating that cookbooks have come back into fashion, but with online personalities instead of TV ones. Binging with Babish, Sohla El-Waylly, Claire Saffitz – you might not know these people, but they have a big enough following on YouTube to create and sell their own recipe books.

I know these names because they got big – and because they broke through the format that haunts these smaller recipe bloggers. I don’t worry that Babish’s website is going to suck, because I enjoy his content and I know the quality is going to be there. I know Sohla is an expert in her field, and I know the recipes she films have worked for me in the past, so I know the cookbook’s going to be decent at minimum. I don’t know that for these recipe bloggers. I’m interested in what Claire has to say about technique, because she went to school for it, and she tells her viewers where these techniques came from. Recipe bloggers screw up techniques (or oversimplify them) all the time, so trusting one feels more dangerous than it should.






How do RFID chips work?



RFID stands for Radio Frequency Identification, and it’s usually used in the context of a chip! There are active and passive types: an active RFID chip has a tiny battery with it, while a passive one is powered by the energy of the reader’s signals alone. Active chips can be read from much greater distances, but the battery makes them heavier and more expensive. Meanwhile, passive chips have to be blasted with the RFID signal to be read.


How do they work?


RFID chips are great because they’re small, and they don’t require line-of-sight to read like many other cataloguing techs do.

There are three major parts to an RFID chip: the microchip, an antenna for receiving and broadcasting signals, and a substrate to hold it all together. RFIDs work with radio waves, a form of electromagnetic radiation. They actually got their start at the end of WWII, when a Soviet engineer created a passive listening device activated by radio waves, which would then store a small amount of information about the transmission. It wasn’t really the same as what we use in security tags and inventory systems today, but it was a tiny passive chip with information stored on it, and that’s close enough! 1973 saw a real attempt at the kind we have today, and ever since, they’ve been shrinking in size.

RFID chips can also come with read-only or read/write memory, depending on the style of the chip. Essentially, each chip has a very small amount of memory, just enough to store things like a batch number, a serial number, or an address, in the case of pet tags. They’re not very complex: in the case of an active tag, the reader simply dings the RFID chip, which then responds on a compatible wavelength with the relevant information via that antenna.

Some chips broadcast constantly, while others broadcast on a regular interval, and some wait for the RFID reader to ding them before they send their data. In a passive chip, the RFID reader has to ding the chip so hard that it absorbs enough EM radiation to respond – energy hits the antenna, travels to the chip, and powers it enough to activate the antenna for signalling, which then causes the chip’s signal to travel back up the antenna and transmit to the reader. Neat!
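
That energy handshake can be sketched as a toy simulation. Everything here – the class names, the energy numbers, and the stored data – is invented for illustration; real readers and tags follow standardized air-interface protocols, not this logic.

```python
# Toy model of reading a passive RFID tag. Numbers are made up:
# the point is only that the tag stays silent unless the reader's
# signal delivers enough energy to power the chip.

class PassiveTag:
    """A passive tag: no battery, just stored data and an antenna."""
    WAKE_ENERGY = 5  # hypothetical minimum energy needed to respond

    def __init__(self, tag_id, data):
        self.tag_id = tag_id
        self.data = data  # e.g. batch number, serial number, pet address

    def ding(self, energy):
        # Energy hits the antenna, travels to the chip, and (if there's
        # enough of it) powers the chip long enough to answer back.
        if energy >= self.WAKE_ENERGY:
            return {"id": self.tag_id, "data": self.data}
        return None  # too little energy absorbed: no response


class Reader:
    def __init__(self, power):
        self.power = power  # how hard this reader "dings" the tag

    def scan(self, tag):
        return tag.ding(self.power)


tag = PassiveTag("A13", "batch 4421")
weak_reader = Reader(power=2)     # too far away or too weak
strong_reader = Reader(power=10)  # close enough to energize the chip

print(weak_reader.scan(tag))    # no response
print(strong_reader.scan(tag))  # the stored data comes back
```

This is also why active tags read from farther away: with their own battery, the `WAKE_ENERGY` hurdle effectively disappears.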




An RFID chip’s low profile and small size make it great for inventory management. Since the chip doesn’t need line-of-sight like barcode scanners do, production doesn’t have to worry about maintaining a certain orientation towards cameras for their items; they can just pass them over an RFID scanner and they’re good to go. Radio waves can pass through solid objects!

RFID chips are also good at tracking inventory while it’s in the store: you’ll notice many big box stores have an exit with detectors alongside the doors, which prevent unscanned or still-active chips from getting out the door. The system sometimes triggers on nametags and items the cashier had to scan in the cart, but most of the time it works as intended.

RFID chips are great for livestock and pet chipping – they’re small, and not only are they less painful than a tattoo, the data is also unlikely to blur or fade the way ink can in a pet’s ear. The initial wound is also smaller, which makes infection less likely. That doesn’t mean they’re perfect, but they carry a lot more information for less relative risk to the animal.

On the human side, RFID chips are frequently used in employee identification badges – the theory is that they’re harder to copy and easier to read than a barcode for restricted areas. Some people go so far as to get them implanted, but the ethics of that are… iffy, to say the least, even if they want the implant. The long-term effects in humans just aren’t that well-known, and while pets are a good indicator that nothing should go wrong, pets also don’t have to worry about getting their phone hacked because their pet tag carried a virus along.

RFID chips are now popular in credit cards! The chip in the card is (in theory) safer than the regular magnetic stripe, and it’s supposed to be much harder to copy. Of course, early versions still had their issues, but now they’re difficult to skim from a distance.




RFID chips aren’t free from flaws.

Security can be a problem, especially for active chips, which can be read from hundreds of meters away. Most vendors have some sort of protocol in place, but for a hot minute, RFIDs in cards were a potential security nightmare. Remember all those anti-RFID chip wallets? That’s because readers were able to access the chip as though it were being used for a purchase. It just wasn’t very safe before protocols were established.

Second, a bunch of folks went out of their way to prove that the more complex RFIDs could become transmission sites for computer viruses – one guy had one implanted in his hand, and if a virus could infect that chip, then the virus could get anywhere he could reach wirelessly. The perfect crime! Airgapped networks were no longer safe if RFIDs were on the table.

Incompatible readers can make inventory transfers more painful than they need to be, as well – the ISO sets standards for which channels get used for which purposes, but companies have to comply with them first. They also have to have the right kind of reader – is it scanning for active or passive chips? The two have very different needs. An active reader might not be able to find a passive chip!

There’s also the sticky issue of privacy and destruction. How do you get rid of the tag on a product once it’s no longer needed for inventory? RFIDs can be destroyed by microwaves, but that doesn’t help if they’re attached to an electronic device, which can also be destroyed by microwaves. Wrapping them in a couple of layers of foil stops them from transmitting long distances, but on some objects, that makes them unusable. It takes special equipment and some professional skill to actually scan a building for RFIDs, but it’s not totally impossible.


It just takes work – the kind of work a stalker or government agent might be willing to put in if they needed info on a person so badly that they’d want to see what items they had in their house. This is also more difficult than it sounds, because most chips go by something only vaguely identifying, like a batch or serial number with no product name attached, but it’s not impossible. It would just take quite a lot of effort when stalking via binoculars is much easier.

It’s also still possible to clone RFIDs – passports with RFIDs in them could be an especially large problem for both the original holder and the government of that country. The obvious target, credit cards, is still cloneable too, although with modern banking it’s often not worth the investment for the scammers.

However, with tech improving every day, it may be possible to limit which chips respond to which scanners, which would make it much more difficult to invade privacy. Chips get smaller and smaller every day, so it’s entirely possible a password- or signal-protected RFID may someday become the norm.




Good and Bad Ways to Cool a Computer

Elizabeth humor, Ideas July 7, 2021


Listen, sometimes machines get old, and they work too hard, and then you don’t want to burn yourself by watching Netflix, so you resort to other methods of cooling your computer. There are right ways, and there are wrong ways.


DON’T: Put Your Machine in the Freezer or Fridge


It sounds like a good idea, but it’s really not. Condensation can form on the inside of the machine, which can then permanently break things as said condensation re-melts and drips onto other components inside your device. Plus, if it’s a systemic issue like a broken fan or overworked CPU, this isn’t actually fixing the issue. You’re going to be taking your machine in and out of the freezer forever!

Cold screws up glue over time, too!

As an unrelated hack, freezing gum can usually get it off the bottom of your shoe.


DON’T: Put Ice Packs, Popsicles, or Bags of Ice on or in the Machine


Condensation, once again, can ruin your machine if it drips into the wrong spot. However, ice bags have the added danger of leaking! Ice sometimes has sharp enough points to pierce its own bag. Popsicles, while usually sealed for safety, are not worth the risk of some sharp component in your machine piercing the bag full of sugary dyed liquid. If that doesn’t kill the machine, it will make you wish it had!


DON’T: Run Every Program at Once


You shouldn’t be running Minecraft at maximum render distance alongside high-render Overwatch while also running your internet browser for a live YouTube stream in 4K unless you’ve got a supercomputer. If the machine lets you get those programs open and running, but you notice it’s unusually, abysmally hot, those programs might be contributing. You can overload your CPU! If you can’t identify which program specifically is eating up all your CPU’s power, check the task manager! Windows devices have a Task Manager that lets you see how much of the RAM, the hard drive, and the CPU each program is using. Just hit Ctrl + Alt + Delete and you’ll reach a menu with Task Manager on it. If you can’t narrow your issue down to a specific program, then restarting the computer may fix whatever background program has gotten stuck in the RAM. It’s a good idea to reboot regularly anyway!
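
As a rough sketch of what Task Manager is doing for you, here’s a tiny function that flags the likely culprits from a set of per-program CPU readings. The program names, percentages, and the threshold are all made up for illustration – on a real machine, Task Manager (or a library that polls the OS) supplies these numbers.

```python
# Given per-program CPU readings (invented numbers), flag the hogs.

def find_cpu_hogs(readings, threshold=25.0):
    """Return (program, cpu%) pairs above the threshold, worst first."""
    hogs = [(name, cpu) for name, cpu in readings.items() if cpu > threshold]
    return sorted(hogs, key=lambda pair: pair[1], reverse=True)

readings = {
    "Minecraft": 48.0,
    "Overwatch": 35.5,
    "Browser (4K stream)": 30.2,
    "Background updater": 2.1,
}

for name, cpu in find_cpu_hogs(readings):
    print(f"{name}: {cpu}% CPU")
```

If everything comes back below the threshold but the machine is still hot, that’s the “restart and see” scenario: some background process is misbehaving in a way the per-program numbers don’t show.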


Now that we’re past the don’ts, what should you do? You obviously can’t let it stay hot; that will slowly fry the hard drive. Excessive heat is worse for electronics than cold is, especially the kinds with batteries in them. You should take steps to cool off your machine if it’s getting ridiculously hot.



DO: Use a Fan


There’s a small fan inside of your computer already. If it’s not cutting it, then the next best step is to use a real fan, and just position the intake for your device in front of it. The extra air flow is just doing what the fan inside the device was already doing, but on a bigger scale! You might find that repositioning your computer so the fan will fit by the intake can help cool it down, too – computers in front of windows might be absorbing more heat than you realize.


DO: Use a Specially Designed Cooling Pad


Some companies sell cooling pads, pads that cool the device down externally. These are specially designed to avoid condensation, while still wicking away heat safely. If you can’t get a fan into the area it needs to be, a cooling pad is a solid second option. Unfortunately, due to the shape and size of PC towers, this is generally only feasible for laptops.


DO: Make Sure the Vents Are Clear


If the machine’s pretty young, and the programs on it aren’t too intense for its specs, the reason may be external. Check where its vents are – especially for PCs! If the tower is pushed right up against the wall, it might not be able to generate the airflow it needs. Also, don’t put stickers or decorations over vents; that’s bad for the vents’ venting power.

Speaking of vents, make sure the vents are cleared of dust, too! Clean off the vents, and if you have the technical know-how to clean off the fans when they get dusty, that may also help! Cleaning them improves efficiency.


DO: Restart Every Once in a While


Your computer is doing a lot of things in the background for you. Many programs are still doing things after you close them! Steam, a popular gaming platform, is almost always connected to the internet when users aren’t looking. It connects at start-up and keeps an eye on its own connection to let you know if you’ve lost internet. It’s not the only program to do this! As such, it’s important to occasionally restart, so these programs don’t ‘get stuck’ eating processing power for their own little functions.


DO: Consider a Shop


If the computer’s hot enough to fry eggs, the odds are pretty good that something’s up with the CPU, the fan, or its own internal thermometer, depending on the age of the machine. If you’ve tried everything you can think of to cool it off, or keep it from getting so hot in the first place, it might be time to visit a shop. At the very least, you should be keeping backups of your files. If the heat eventually kills the machine, a backup saves you a lot of money on very expensive data recovery.




When did Brand Twitter get Creepy?

Where is the line, here?

Strange Marketing Decisions

You might have seen a strange tweet from Sunny D, an orange-flavored drink company.

It’s jarring – a bright orange bottle wrapped in a brightly colored label is suffering from a mental health crisis. Other brands reply; a chorus of “we’re here for you!” and “You got this!” floods Sunny D’s mentions. It’s equally strange for Moon Pie, a dry pastry treat filled with marshmallow, to be replying with “stay strong”. How did we stray so far from the original intent of Brand Twitter?

Easy: brands do what people do even when they shouldn’t, for sales.

Brand Twitter: The myth of the underpaid intern

Perhaps it started as a truth – when Twitter was small, many brands put minimum effort into showing up. They’d rather spend their advertising dollars on tried-and-true methods of reaching consumers, and Twitter wasn’t exactly big. When Twitter got bigger, organizations started to take notice – but marketing was busy doing other things, so Twitter advertising turned into a pet project for the person who wanted to fool with it. Sometimes that was an intern.

Thus, the legend is born! “The intern” has basically free rein to do as they please because the higher-ups aren’t looking in their direction. It paints the image of a scrappy youngster doing their best for their company the way only an intern can. Of course, most messages are generic, but whatever – that’s a real person representing the organization, and if you tweet at them, they’ll respond like a real person! You have a direct line of access to the company! The brand is a person after all!

It doesn’t stay that way. Companies take notice and realize that this is a great way to connect with consumers, but they also realize this person has an enormous amount of power over the brand’s image now that Twitter’s gotten bigger. Teams and more experienced people up the line begin to take over, and the interns are relegated to ‘helping write for’ instead of ‘managing’ the account.

And yet, the myth of the underpaid intern persists. Companies deliberately behave like their account is run by one. Even when tweets are traceable to different people, the brand acts like they all came from one person, one very funny, energetic, and quick-to-respond intern who knows just the right things to say at any time of day. (You may notice memes are outdated by the time the brand gets to posting them – that’s a clue that it’s a team, rather than a single person with full control).

Sunny D

All of this makes Sunny D’s post even weirder. “I can’t do this anymore”. It’s so simple, and very concerning – it echoes a suicide note. It implies something that corporations are literally not capable of, and if this weren’t brand Twitter, I’d be seriously concerned about the person behind the tweet. In a way, I still am – was this someone who got control of the account for a moment, or did the main account controller forget to switch to their personal account? Surely, this wasn’t intentional. Surely, Sunny D’s marketing dept didn’t look at that and think “yeah, this is okay.”

If they really didn’t mean to tweet that, it makes sense that Sunny D would then try to salvage this tweet that’s already been seen by thousands – a savvy Twitter user knows deleting a tweet like that could cause serious alarm, and Sunny D would be forced to handle it seriously. By replying as though they meant this to happen, they save face. It’s still in poor taste, but it looks like it’s intentional and not as serious as the initial tweet might suggest.

Digital Overstep

If they did mean for this to happen – it’s scummy. It’s a drink brand. Corporations are not really people in real life, no matter what the law says. There is no person with the identity of Sunny D that you could physically harm, there is no body or brain of Sunny D that could be experiencing “I can’t do this anymore”. There are the people that make up Sunny D – that’s a different matter. Those people are real. Sunny D the brand is entirely fictitious, it exists only as an idea used to sell orange liquid. It is a concept. Not a person.

This wasn’t particularly effective advertising, and it was overall sort of creepy. Sunny D markets to children, right? Children can have these issues, but that’s not what Sunny D brought attention to. A brand twitter behaving like a person is distinctly unsettling, in a world where ads are slowly creeping into every facet of life. Brands aren’t supposed to require this kind of mental burden. You aren’t supposed to see something a brand has tweeted and feel heavier for it. It’s incredibly weird. It oversteps boundaries.

Marketing: A Tangent

But that’s what marketing does. It pushes, and it pushes, and if it pushes too far, other brands take note of how much or what kind of reaction it got. Pepsi’s ill-thought-out ad with Kendall Jenner, for example, was a laughingstock, and as a result many companies quietly pulled back from social issues of the time so they wouldn’t get swept up in the widespread scrutiny that followed.

Even then, had Pepsi dared to throw its weight behind one side or the other, they would have come out better. Their audience would have been polarized, but the folks who stuck around would have been more devoted. See Gillette’s The Best a Man Can Be ad – they split off some of their userbase with that one, but they also came to be regarded more favorably by the ones who appreciated the message of the ad.

Or, you could look at the MyPillow Founder’s fervent support of former president Donald Trump – that, once again, split his audience (and ruined relationships with vendors), but he made more money off of his target audience as a result. It goes to show that it doesn’t matter which direction you push, as long as you pick one. There are people who believe the moon landing was faked, and those people buy products too – you can sell to them as long as you do it wholeheartedly.

 MyPillow picked one. Gillette picked one. Pepsi didn’t.

Sunny D’s undefinable agenda in this tweet is only working against it.

But what is it for? What purpose did this tweet serve? What agenda is Sunny D advancing?


Simultaneously, brands feel enormous pressure from brands like Wendy’s, which has the benefit of being one of the first to be ‘sassy’ on Twitter. Other notable front-runners for this trend included Denny’s, which posted to Tumblr as a singular person before it was cool. Other brands know people remember Wendy’s for being funny. They know Wendy’s is on the consumer’s mind before they are. However, those brands have also noticed the market’s a bit oversaturated with Wendy’s-style content at the moment, and it’s not appropriate for every audience anyway. Tony the Tiger might get some angry replies from parents if he roasted a customer. Wendy’s can afford to be mean because it’s not selling to children as its main audience.

Other brands then have to figure out how to transmute what Wendy’s has with its audience into something they can say is their own.

Steak-Umms owns that it’s not great quality meat, but it is tasty, and that’s endearing in its own way. It jokes online that its set is cheaply made. Denny’s is still Denny’s, and they own the strange atmosphere of their restaurants and turn it into a selling point.

These campaigns are unique, a little edgy, but certainly not tasteless. Unique enough to be remembered, not unique enough to revolutionize the space. Meanwhile, Moon Pie and Frosted Flakes are incredibly milquetoast, but their branding is familiar, and the campaigns aren’t offensive.

All Together

Sunny D, for some reason, strayed from all of this. The tweet’s edgy. It’s too dark for the target audience: if they understand it, a drink brand just threatened suicide; if they don’t, the tweet is meaningless to them. Agenda-less. Not promoting anything except empty words of affirmation towards an entity that only exists on paper. Meaningless chatter that also crosses a line.

The tweet is somehow totally devoid of meaning and wildly too personal at the same time!

Sunny D’s tweet seems so creepy in comparison to other tweets because it’s too much for the average consumer to take in from a stranger even if Sunny D were a person. This is the kind of thing you text to a friend in private. It’s a personal moment. It requires a lot of delicacy and understanding, the kind that a stranger can’t often provide for another stranger. How would you even go about comforting a drink brand?

And keep in mind: a brand always wants to be your friend, because most people listen to their friends. Ask people for favors, and they’ll feel obligated to help – and people like feeling helpful, so generally they’ll think a little better of you. Sunny D is pantomiming a very real, personal interaction with you via this tweet and asking for help it is literally incapable of receiving: once again, it’s a non-sentient drink brand. It’s asking for a favor it can’t use, one you’d never refuse to a real person.

Sunny D is behaving like it’s something it’s not, and in doing so it’s entered a digital uncanny valley. It’s mimicking human interactions in an attempt to sell more orange liquid.

Traditional Storage vs Quantum Storage: What Does it Actually Mean?


Traditional Methods

Traditional storage means a lot of things right now. Magnetic storage is still used pretty consistently, as SSDs aren’t quite at the point where they can replace everything the way hard drives themselves once did.

Now, quantum computing occasionally hits the news when a major breakthrough happens, and for good reason! Quantum computing promises to do more than any major storage advancement before. Quantum computing isn’t just ‘better’ classical computing – it’s a whole new ballpark, assembled with totally new technology.

What is ‘Quantum’?


Quantum mechanics. It’s frequently used by a sci-fi show’s token show-off to demonstrate their knowledge of physics. But what is quantum mechanics, really? As a concept, it’s not that tough to grasp, and you’ve probably witnessed some of its principles in action without even realizing it! For example, have you ever played the game of hiding a coin under one cup, and then shuffling it with two other cups?

Assume someone sits down to pick a cup, and they can’t tell where the coin is based on you, or your observation. Until they pick up a cup, the coin could be under all three cups. Basically, there’s a 33.33% chance the coin is under the cup they choose. However, once you pick up the two cups you know are empty, the odds condense. There’s now 100% certainty the coin is beneath the final cup, and 0% possibility it’s under the other two cups.
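
The condensing of the odds can be sketched in a few lines. The cup names and the collapse function are purely illustrative – this is the classical cup game, standing in for the quantum idea.

```python
import random

# The three-cup game as probabilities: every cup starts at 1/3, and
# revealing the empty cups "condenses" the odds onto what's left.

def reveal_empty_cups(probabilities, coin_cup):
    """Lift every cup except the one hiding the coin, and renormalize."""
    collapsed = {cup: 0.0 for cup in probabilities}
    collapsed[coin_cup] = 1.0  # all the probability lands here
    return collapsed

cups = {"left": 1 / 3, "middle": 1 / 3, "right": 1 / 3}
coin_cup = random.choice(list(cups))  # where the coin actually is

print(cups)                              # each cup: ~0.333
print(reveal_empty_cups(cups, coin_cup)) # coin's cup: 1.0, others: 0.0
```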

In real physics, this example doesn’t work perfectly. Most quantum mechanics, once observed, break down into observable truths, and you’re an observer too. You, the shuffler, have some way of knowing which cup the coin is under. The coin is probably making a sound as it’s dragged around the table, or maybe the coin is so heavy it is obvious which cup is holding it. If you know where the coin might be at all, it means that there is one observable outcome where the coin’s underneath the noisy cup, and not three potential outcomes where the coin is under all the cups. Observing this makes it true for your opponent, as well!

Assuming coins are actually particles, and the cups are really probable locations, you’ve got something that gets close to real quantum mechanics in action!


Make Waves


Quantum mechanics (without any math in the explanation) is just a way to explain the probability of a particle existing somewhere in a real, physical environment when its actual location can only be expressed through that probability – or else it stops behaving the way it’s ‘supposed to’.

This probability breaks down into wave forms, where certain spots are more likely than others to contain a particular particle. For example, the cups each have a 33.3% chance of holding the coin, but the table outside the cups has a 0% chance. In a dark room, where nobody can observe that the surroundings are coinless, but everyone ‘knows’ coins go under cups (like we ‘know’ where electrons tend to be found in an electron shell), the chance of it being on the floor is very, very small – but not 0%.

Out of the places you’d pick a coin to be, though, it’s probably still under one of the cups, and almost certainly still on the table. If you looked at this probability on a chart, you’d see hills of likelihood where the cups are, and dips where they aren’t! In this way, we calculate the probable locations of things like electrons and photons, which behave in ways humans don’t fully understand yet. The coins in the above example are like those particles! A photon is probably in a certain area given what we know about its behavior – but attempting to actually measure it as a wave makes it behave like a particle, breaking its quantum state. Information is lost, and the particle no longer behaves like it did when it wasn’t being observed. Picking up the ‘cup’ to observe fundamentally changes the behavior of the ‘coin’ underneath!
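
Those hills and dips can be roughed out as a tiny, made-up probability landscape. The positions and weights below are invented for the cup analogy, not real physics:

```python
# A discrete "probability landscape" for the coin: big hills at the
# cups, near-zero everywhere else. Weights are invented for illustration.

positions = ["floor", "table edge", "cup 1", "cup 2", "cup 3", "far table"]
weights   = [0.001,   0.01,         1.0,     1.0,     1.0,     0.01]

# Normalize so the probabilities sum to 1 (the coin is *somewhere*).
total = sum(weights)
distribution = {pos: w / total for pos, w in zip(positions, weights)}

for pos, p in distribution.items():
    bar = "#" * int(p * 50)  # crude chart: hills where the cups are
    print(f"{pos:10s} {p:.3f} {bar}")
```

A real wave function is continuous and complex-valued, but the shape of the idea is the same: peaks where the particle is likely, near-zero tails where it almost certainly isn’t.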

How does this turn into a revolutionary computing method?




Quantum entanglement describes items (like particles) being tied to each other in such a way that one item can’t be described without also describing the other items in the system, which causes it to collapse as though you were looking at all of it. For example, say you put two different coins under two cups. Each cup has a coin, but which cup has which coin can’t be accurately described until one cup is lifted.

Once that cup is lifted, the first coin is described. The second coin has now also been described, because there’s no way the coin you’re looking at is under the other cup: each cup now contains (and, in a sense, has only ever contained) its respective coin. But only once you observed it. The probabilistic wave forms have now collapsed into two points with 100% likelihood.

That doesn’t mean that one coin/particle was always, 100%, underneath its specific cup – until you picked up the cup, both were underneath both cups, mathematically speaking (remember, this is a rough example – coins and particles have different laws attached). Entanglement also has a lot to do with superposition, since both coins would have had to share a location for the cup/coin thing to happen.
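The two-cup, two-coin logic can be sketched as ordinary code. To be clear, this is classical shuffling, not genuine entanglement (the coin names, cup labels, and seed are all invented for illustration) – but the ‘lift one cup, learn both’ step is the same:

```python
import random

def hide_coins(rng):
    """Shuffle coins 'A' and 'B' under the left and right cups."""
    coins = ["A", "B"]
    rng.shuffle(coins)
    return {"left": coins[0], "right": coins[1]}

rng = random.Random(7)  # seeded so the sketch repeats identically
cups = hide_coins(rng)

seen = cups["left"]                     # lift ONE cup...
inferred = "B" if seen == "A" else "A"  # ...and the other is now certain
```

No matter how the shuffle lands, observing one cup fully determines the other – the two ‘probabilistic’ positions collapse into two certainties at once.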




Superposition describes things existing in the same space – and it’s not solely tied to quantum mechanics. Two notes played on an instrument at the same time, for example, create a new note out of their superposition. The big thing about superposition is waves. Physical objects can’t be superimposed upon one another, and two particles can’t be in exactly the same location. However… properties of objects can be expressed mathematically, in wave forms, and in that way they can be superimposed. Much like different wavelengths of light can combine to form a new color, the odds of objects being in a certain state, or being in a certain, unobservable spot can combine in superposition!

In the two-cup example, the coins are in a state of superposition until the cup is removed and their options are solidified; before the cups are removed, whatever equations describe one coin’s location can be added to the equations that describe the other coin, and both sets of equations are still valid. Neither is disproven by the existence of the other until one is observed. Until then, the superposition stands.

These concepts, when put together, allow computers to read bits that aren’t yet bits, but could be bits.


Sum Total


All of this sounds really complicated – and it is, mathematically – but conceptually, it just boils down to ‘things can be predicted to be in multiple spots at once’, and ‘things can be a combination of the probabilities of other things, instead of just one thing, until observed’.

A quantum computer looks at probabilistic bits like we look at those coins, and it doesn’t think ‘that’s a 1’ – it thinks ‘this is probably a 1, but if it was a 0, how does that change the data?’ and ‘how does this being a 1 affect later bits?’ The most common path of quantum computing research uses qubits, which stay in a state of superposition.

This means that the qubit is both a zero and a one until the computer looks at it and determines its state via some randomized metric that maintains the quantum state. It could be the state of the electrons at the time the computer reads it, or the magnetic direction the qubit is randomly excited into; it just has to behave in a way that outside observers can’t definitively say leads to one specific outcome. If it can manage that, then it can calculate all the available options at once.




How is this faster, you may ask? Well, the qubit is ‘stacked’ onto other bits. The qubit can be two states, and subsequent qubits can be two states, and… they daisy-chain together to form exponentially larger potential states, which then lead to answers being calculated simultaneously, instead of linearly. In a perfect system, faults are discarded, and then the quantum computer spits out the right answers in a fraction of the time it would have taken a classical computer.
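The daisy-chain scaling above is easy to put numbers on. A quick sketch (the function name is ours, just for illustration):

```python
# Each extra qubit doubles the number of basis states a register can hold
# in superposition, so n qubits span 2**n states.
def state_count(n_qubits):
    """Number of basis states an n-qubit register spans."""
    return 2 ** n_qubits

growth = [state_count(n) for n in range(1, 11)]
# Ten qubits already span 1,024 states; a classical 10-bit register holds
# exactly one of those states at any given moment.
```

This is the ‘exponentially larger potential states’ in action: adding one qubit doubles the space, so the gap between quantum and classical widens with every qubit added.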

For example, let’s say a password is tied directly to the state of a pair of dice in an automatic shaker. A quantum computer will be able to spit out a probabilistic password, but a classical computer won’t be able to compete! Even if it’s a supercomputer, it will have to get lucky if it wants to guess what the shaker’s results are going to be before the dice are shaken again.

While this sounds very futuristic, websites are already using algorithms to convert random footage into protection for their servers: the lava lamp wall used by Cloudflare is one such example. By the time a classical computer has calculated what the algorithm required when lava lamps A-Z were in any position, literally all of them have changed. As a result, the code has changed as well, rendering that math useless. A quantum computer will be able to step up to the plate where the classical computer has struggled!

As Dr. Shohini Ghose puts it, this isn’t the equivalent of several classical computers, or one big classical computer compressed into a smaller state – it’s a totally new technology that will behave differently as it advances. Even a supercomputer would struggle with the lava lamp wall! However, quantum computers may not. Every qubit used to calculate has the potential to lead to a correct answer, or a wrong one. Good quantum computing will kick out incorrect answers as soon as they’re produced, and you’re left with something that the lava-lamp wall algorithm will take as an answer.

Dr. Ghose uses the example of a coin-flip game, where participants face off against a quantum computer. If the computer is told to win, and it goes first, it produces a probabilistic result that only collapses with the other player’s input – the computer is essentially allowing its coin to continue spinning in the air until it can tell what the human player has, and then it catches it, to spit out the answer that it always had. The answer existed in a probabilistic state – and it won, it just needed to be observed to tell the human that. The computer only loses when it mistakes the ‘noise’ answer for the actual result. If it were able to successfully suppress noise, it would win 100% of the time.


Why Not Earlier?


These computers have been seriously considered as a project since the 80s and 90s, and now they’re making a resurgence. What kept them from becoming reality earlier?

Logical faults are a big part. Modern AI can suppress things it knows aren’t ‘really’ part of an equation’s answer, but the coin-flip computer above still lost 7% of the time to bad answer output. In the past, quantum computers wouldn’t have been able to correctly identify their own mistakes even down to 7% without a classical computer running alongside them, which defeats the purpose. Unlike classical computers, where faults like that come from the hardware, quantum computers are getting these errors from the state of the universe itself. Of course that’s difficult to compensate for.

Aside from that, there were also mechanical issues to sort out first. The computer can’t be allowed to turn the qubit into a regular bit, which is called ‘decoherence’. Decoherence happens once the system is connected to something measurable, observable: out of two cups, lifting one solidifies the probability, and the other cup, even though it hasn’t been observed, definitely has the other coin. If it’s solidified into a regular bit, it may as well have not been a qubit at all!

Mechanically, to avoid decoherence, speed and environmental controls are essential. In quantum computing, you aren’t maintaining that quantum state indefinitely – the longer the computer has to maintain it, the worse off the state is, until eventually something collapses in a measurable way. Heat will do it; stray magnetic fields or electrical pulses will do it – flip one qubit, and you screw up the system or collapse it entirely. Decoherence has destroyed the calculations.

Side note: if you’ve heard of the double slit experiment, that’s an example of decoherence! Measuring the particles breaks the system, while deliberately not measuring them allows for that nice waveform. Their final location becomes known, but not the path they took to get there. In computing, measuring the qubit before the computer gets to it breaks it down into a not-qubit, rendering the system decoherent and screwing up the results of the calculations.




Ironically, Schrödinger haaated that his ‘cat experiment’ got big, because folks were taking it too literally. For those of you who haven’t heard of the thought experiment (no cats were ever actually put in a box): radioactive material has a certain chance every second of releasing a radioactive particle, and the material sits next to a particle-sensitive trigger that, when tripped, releases poison into the cat’s box. If there’s no guarantee of poison being released into the box, there’s no mathematical certainty that the cat’s either alive or dead, so it’s both. Just like the coin is under all three cups.

But not really. At the scale the experiment would have to take place, the cat’s as good as already poisoned (a lump of radioactive material has so many individual atoms that the odds of none of them releasing a particle at any one moment are basically zero), but Schrödinger was struggling to explain the concept to laypersons who otherwise had no exposure to physics.

The thought experiment does a great job of breaking down what’s actually occurring with superposition. It’s not about the cat, or poison, it’s about the particles. If the experiment could be particle-sized, it would work the way it’s described.




Shohini Ghose via TED Talk


Public Campaigns Vs. Public Image: A McDonald’s Story

#Farmers of McDonalds


#MeettheFarmers was the initial campaign, the one that McDonald’s had planned for. There was some static; the same issues that apply to the meat industry as a whole apply to McDonald’s, but complaints of cruelty and poor animal husbandry didn’t completely ruin the hashtag. Generally, the farmers were happy, the animals looked happy, and McDonald’s was happy with its campaign.

It’s always risky to get folks involved if the company doesn’t know for sure that they view it positively, but so many farmers rely on McDonald’s that it was basically a slam dunk. Sure, PETA got a little rowdy, and nobody’s going to make themselves look bad by posting pictures of their sad or poorly treated animals, but the crop farmers generally felt that they’d been treated fairly by McDonald’s. This was great! McDonald’s does a lot to support local farming, and it was smart to emphasize how much of their food they bought from US farms. The patties are still packed with things that keep them from molding, but the cow didn’t have to fly overseas to get to its destination packing plant or restaurant.

It’s a genuinely good campaign – they could trust that they’d done right by the people who’d be replying, and they’d retweet specific responses as a form of curation.  Marketing done right!




After the success of #FarmersOfMcDonalds, McDonald’s paid to promote their next big hashtag, #McDStories. This went downhill, very fast. Where McD farmers are a small, controllable group who are generally professional, the public is… not. And it turns out, when a corporation consistently underpays and overworks its workers, they’re going to do things or skip things that customers notice. McDonald’s yoinked the paid promotion slot, but by then it was already out of control.

All they could do was damage control as all sorts of nasty stories rolled in. Violence by staff members, sanitation issues in the bathroom, uncleaned ice tanks, solicitors in the parking lot, solicitors inside the store, solicitors coming up to windows in the drive-through – customers had seen it all! Heck, even workers joined in – McDonald’s was apparently struggling with its management chain in places, and issues that could have been resolved with better training and store support (the shift lead isn’t supposed to be in charge of pest control, for example) just weren’t even getting noticed, until #McDStories forced them to the front of the line. Bugs. Food contamination. Food poisoning. Incorrect cleaners being used for grills and the ice cream machine. Rats. All things that could be controlled or even eliminated with better contact from whoever’s in charge of regional management.

Beyond that, though, the campaign showed that McDonald’s didn’t really know how it looked on the outside. A fast food restaurant universally loved by children, or a fast food restaurant with locations that play it fast and loose with adult customers’ food? The upper management had allowed both to happen, but only one of those demographics is regularly on Twitter.

#McDStories highlighted a critical disconnect between the McDonald’s marketing department and the outside world.


Marketing Mistake


Every company wants to be perfect in the customer’s mind. They all want to be clean, friendly (except for Dick’s), and accessible. However, things start to split when you get specific: pubs cater to adults, so they wouldn’t have the same bright colors as family-friendly restaurants. Therefore, the marketing for a pub is going to be very different than the marketing for a fast-food restaurant for kids. McDonald’s has been trying to shift more towards adults in modern times, and since adults care about different things than they used to, McDonald’s has been struggling to find a common thread among McDonald’s customers. Maybe this was a crowd-sourcing campaign for ad ideas, maybe it was just an attempt to appeal to adults.

Either way, it made a disconnect between ‘McDonald’s the brand’ and ‘McDonald’s the restaurant chain’ pretty obvious. You’d never see Waffle House doing this sort of campaign. Waffle House knows what kind of people stumble into their restaurants at 3 AM for a couple of post-bar waffles, and wild stories of incidents inside Waffle Houses are scattered across the web. A famous Vine shows two employees fighting while a customer asks for a waffle in the background. And yet, Waffle House is well-liked. It knows what it is, and it doesn’t pretend every customer has a great time; they’re there for cheap food and the strange sense of community a 3 AM Waffle House has. Besides, Waffle House’s management style seems to keep customers and workers alike pretty happy!

Denny’s, another cheap diner with 24 hr locations, has incidents, but they rarely go viral. They’ve gotten a cultural image of ‘you ate at Denny’s, you knew what you were getting into’. McDonald’s has unknowingly slipped into the same territory – appealing to adults with a 24hr schedule means you’re going to get some strange customers. They don’t seem to realize that’s where they’re at, so they don’t know how to lean into it yet like other 24 hr restaurants do. They’re very concerned with being family friendly. Management- and Marketing-wise, something’s obviously slipping if these stories were genuinely unexpected.


How Could They Fix It?


The long and the short of it is to listen. Burger King’s rat-bun scandal caused the store to shut down while Burger King corporate handled the issue. McDonald’s has had complaint-tweets before, but somehow they don’t get much traction until the tweet’s got a bunch of retweets, so really, they set themselves up by using a scrollable tag. Companies that won’t respond to anything but highly public tweets about their issue do this to themselves. If a complaint to the manager or to corporate doesn’t change things, then of course the customer is going to resort to what works.

Listen to customers through official channels, and the unofficial ones that everyone can see won’t be flooded with horrible stories of missing quality!



How to make a bad UI – Dropdown Edition


I’ve seen all of these. Witnessed them with my own eyes. How can a fair and just universe allow such crimes?

Anyway… here’s how to make a bad UI for your fillable forms.


1) Be sure to use a dropdown wherever you can.


It is so much easier to just include a type-in text box than it is to try and imagine every. Possible. Response to your question. Exceptions are rare: countries, states, shipping speeds, and other static items are suitable for dropdown menus because their numbers are limited. Anywhere or anything else? A dropdown can be a nuisance, especially because you might still miss options if you’re trying to keep your menu short. There’s nothing more irritating than a question asking me to be specific when they don’t have my option!

Dropdowns should also be avoided if the user could want to select multiple items: dropdowns are for single choices. They aren’t meant to act like checklists. Picture trying to order a burger for online pickup, but the ‘extras’ selection menu is a dropdown. I can have extra onion or extra pickles, but not both without calling in the order. Why?!?


1.5) Mix unrelated items into the same dropdown menu.


Ryanair forces (or at least used to force) their users into adding insurance by putting the opt-out option in the same menu as the location choices. Users who didn’t want the insurance may have already given up on finding the opt-out option by the time they’re picking their insurance, only to discover it near the bottom of the menu. That doesn’t mean they now want the insurance. It just means they’re frustrated by Ryanair.  A yes/no question shouldn’t have the conditionals for ‘yes’ in the same list as a hard ‘no’.

Ryanair’s reputation is garbage because of decisions like these. Is the short-term money worth alienating customers?


2) Make the dropdown incredibly long, and don’t organize it alphabetically.


Once again, including a typed response or blank box ‘other’ option allows a business a lot more flexibility than listing out every. Possible. Response, but if you have to, arrange it in a way that makes sense. Some UI websites even recommend breaking down your menus into submenus, so Greek users looking for expedited service on your site can select the “Expedited” option from service levels, and then “Greece” from there.  Each menu is shorter and easier to navigate when broken down by relevancy. You don’t need to list Greece twice in the same drop-down for first-class and then expedited shipping.

If you want your users to suffer, don’t arrange your inputs in any logical manner! Numbers can be in any order you want! European countries should be mixed in with the Asian and African ones, and there better not be any alphabetizing going on! Don’t forget to put Sealand at the top!


3) No autofill.


We die like men, scrolling through that endlessly long dropdown with two or three options for every state or country, to get to where we want to be. Most good dropdowns allow the user to type the first couple of letters of what they want, so they can skip opening the menu entirely. On eBay, I hit the keys “N V” and I get to NV without needing to even open the menu. However, that’s a good idea. And we’re not doing good ideas today. Bad dropdowns don’t accept typed input. If you’re really feeling evil, make sure the button presses interact with the page in a strange way, like highlighting questions or activating the ‘next page’ button. For some reason.


4) Don’t allow users to leave it on “select an option” or blank.

Listen, websites shouldn’t punish users for accidentally clicking on a drop-down only to realize it doesn’t apply to them. If the ‘leave blank’ option disappears after clicking, then it’s poorly designed – for those ‘leave blank’ slots, users should be able to un-click the box or leave it alone. For example, in a symptom dropdown, users may be coerced into picking ‘sneezing’ if they really can’t leave the question blank.


This also applies to question trees. A user should always be able to go back and select ‘no’ for a question that comes with follow-up questions, without having to still answer the follow-ups. Accidentally selecting ‘yes’ to special instructions and then having to type N/A in the pizza delivery place’s special instructions box makes users feel like they’re being annoying!

Similarly, optional multi-choice questions should either be possible to uncheck or have a “none of the above” option. Don’t make users refresh the page to get their correct answer!


5) For necessary items, let users skim past.


On the other hand, some dropdowns shouldn’t default to an option that allows the computer to continue forward. If you’ve ever checked out from a small website before, you’ll know that sometimes they put the USA and Canada at the top in shipping – their most used options get to go first. However, the USA shouldn’t be in that blank select-a-place box automatically. Locations for shipping are one of the few things absolutely necessary to complete an order, and if the customer misses that, they’re going to have a bad time. After all, there’s a Paris in Texas and an Athens in Georgia! Don’t default to USA just because it’s the most used option, or skimming customers are going to be very angry when their package doesn’t arrive.


6) Don’t include Dialogue Boxes When Doing Negative Actions


Dialogue boxes prevent mis-clicks from destroying your user’s life, patience, and data. Even a single ‘are you sure’ prevents tons of data loss. Remember, by some estimates around 30% of lost data is due to human error – and that’s with verification in place; the number would be much higher without these checks.

If you want to do a bad job, go ahead and skip these altogether! No verification questions! No dialogue boxes! Clicking the little trashcan icon once makes the item disappear forever! Now your user is mad!

Alternatively, if you want to do a terrible job subtly, be as vague as possible when designing your verification question. “Are you sure you want to X?” is too many words. Make that question as confounding as possible.

“Complete Action?” “Don’t Delete X?” “Delete?” “Keep X?” and so on. Only having the options of “Cancel” and “No” is also good for maximum confusion.

6.5) Include too many Dialogue Boxes


You could also use a dialogue box for every action to make a bad site.

Search for X.

Nice, I have results. Add to Cart? Are you sure you want to do that? Oh, you are? Okay. Oh, you want to add this to the cart too? Are you sure? Hmm, okay. How about leaving this page to go to the next one, are you sure you want to do that? No? See, I saved you a click! Oh, I’m sorry, you must have hit the wrong button, you really did want to go to the next page, heh. Do you really want to narrow down your search results? Are you sure? Are you sure, you’re sure? Are you-


7) Make the end user do too many things to access certain information.


If you want to waste everyone’s time, make the end user fill out the entire form to get to critical information, such as when appointments are held or what requirements must be met to enter the building. The end user doesn’t even know if they’re interested in the service yet!

If the business needs to screen appointments, it can do that when the customer is actually going to make the appointment. If the business can only accept certain file types for printing, it needs to make that clear before the customer goes through all the work of selecting the style of banner they want.

If you want your end users to suffer, be completely opaque about whether or not your services will suit them. Heck, you could even go so far as to hide your business hours! “Hah, you’re looking for music lessons, and you want to know what kind of music instructors we have? Too bad, fill out this skill level form before we tell you that we don’t teach the flute here. Or adults, we’re kids-under-14 only.” Do that. If you want your website to cause frustration, do that.




Optical Storage


Optical storage is defined by IBM as any storage medium that uses a laser to read and write information. The use of lasers means that more information can be packed into a smaller space than tape could manage (at the time)! Better quality and longer media time are natural results. A laser burns information into the surface of the media, and then a less powerful reading laser can decipher these burnt areas into usable data. The writable surface is usually some sort of metal or dye that burns easily, sandwiched between protective layers of plastic; burning it produces ‘pits’, or less reflective areas, for the laser to read.

This is why fingerprints and scratches can pose such a problem for reading data; even though you aren’t damaging the actual data storage, like you would be if you scratched a hard drive disk, fingerprints prevent the laser from being able to read the data. Scratch up the plastic layer above the dye, and the data’s as good as destroyed.

Destroying data can be even more complete than that, even. Shredding the disc in a capable paper shredder (ONLY IF IT SAYS IT CAN SHRED DISCS) destroys the data, as does microwaving the disc. Don’t microwave the disc unless you plan on trashing the microwave soon, though. Most discs contain some amount of metal, and that can wear the microwave out faster. Fun!




“Burning a CD” replaced “making a mix tape” when both CDs and downloadable music were available to teenagers, and for good reason. The amount of content may be roughly the same, but the quality is significantly higher.

Most recordable CDs are CD-Rs – discs that can only be written on once but can be read until the end of time. (A CD-ROM, strictly speaking, is a factory-pressed, read-only disc, but a finalized CD-R behaves much the same way.) The average CD-R has room for about an album’s worth of music, and maybe a hidden track or two – about 75-80 minutes, depending on the manufacturer of the disc. Alternatively, if you’d like to store data instead of high-quality audio, you’ll fit about 700 MB of data onto a single disc.
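Those two figures – roughly 80 minutes of audio, roughly 700 MB of data – line up because CD audio is uncompressed 16-bit stereo at 44.1 kHz. Some back-of-envelope arithmetic (exact numbers vary by disc; this is a sketch, not a spec):

```python
# Why ~80 minutes of audio and ~700 MB of data share the same disc.
SAMPLE_RATE = 44_100   # samples per second, per channel (CD audio standard)
BITS_PER_SAMPLE = 16
CHANNELS = 2           # stereo

bytes_per_second = SAMPLE_RATE * BITS_PER_SAMPLE // 8 * CHANNELS
audio_bytes = bytes_per_second * 80 * 60   # 80 minutes of raw audio
audio_mb = audio_bytes / 1_000_000         # roughly 847 MB of raw audio

# ~847 MB of audio vs. ~700 MB of data on the same physical disc:
# audio sectors spend less of each sector on error correction than data
# sectors do, so the disc 'holds more' audio than data.
```

That gap between ~847 MB of audio and ~700 MB of data is the error-correction overhead: computer data needs to come back bit-perfect, while a tiny blip in music is inaudible.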

To burn a CD, you’d need an optical drive that’s capable of also lasering information into the disc, which wasn’t always the standard. The laser will burn the information into the metal-dye mix behind the plastic coating the outside of the disc, which permanently changes how reflective those sections are. This makes it possible to visually tell what has and hasn’t been used on a disc yet, and CD-Rs can be burnt in multiple sessions! Data is typically burnt from the center outwards.

But everybody knows about CD-Rs. What about CD-RWs, their much fussier brethren?




The primary difference between a CD-R and a CD-RW is the dye used in the layers that the optical drive reads. CD-RWs are burnt less deeply than CD-Rs, and as a result, they require a more sensitive reader. Early disc readers sometimes can’t read more modern CD-RWs for this reason!

To reuse the disc, one has to blank it first (the same drive that can write a CD-RW in the first place should also be able to blank it), which takes time. After it’s been wiped, new data can be put onto the disc again. CD-RWs wear out quicker than other storage media as a result. That wafer-thin dye layer can only handle being rearranged so many times before it loses the ability to hold data at all. It’s pretty unlikely that the average user could hit that re-write limit, but it’s more possible than with, say, a hard drive, which has a re-write life about 100 times longer than a CD-RW’s.




DVDs store significantly more data than CDs do, even though they take up about the same space. Where a CD can hold about 700 MB, a DVD can hold up to 4.7 GB. This is enough for most movies, but if the movie is especially long or has a lot of extra features, it has to be double-layered, which can store up to about 8.5 GB. Why can it hold so much more in the same space?

The long answer is that there are a number of small differences that ultimately lead to a DVD having more burnable space, including a closer ‘laser spiral’ (the track a laser burns, like the grooves in a vinyl record), as well as smaller readable pockets. It all adds up into more data storage, but a more expensive product as well.




That double-layering mentioned earlier isn’t present on every disc. Sometime in the later 2000s, double layer discs hit the market at about the same price as single layer discs (although that changed over time). The first layer that the laser can read is made of a semi-transparent dye, so the laser can penetrate it to reach the other layer.

Most modern DVD drives can read dual layer, but if your computer is especially old, it would be wise to check its specs first – DVD readers made before their release might not understand the second layer, and readers that can read them might not be able to write to them. DLs are a great invention; it’s just a struggle to find good disc readers when everything is switching to digital.




CD players aren’t usually also able to play DVDs. CDs came first, and the reader would have to be forwards compatible. Obviously, this would have taken a time machine to actually assemble. Picture expecting a record player to read a CD! The gap between the two is almost that large. Nowadays, the manufacturing standard seems to be a DVD player with CD compatibility tacked on. You should double check before you buy a disc reader to be sure it can do everything you want it to, but it’s less common to see CD-Only tech when a DVD reader is only slightly more expensive to create, and can work backwards.

DVDs also carve pits (or burn marks) into the shiny material of the disc. Just like CDs, a recordable DVD can only be written on once, although DVD-RWs do exist (and struggle like CD-RWs do).


FlexPlay Self-Destructing Entertainment


Remember FlexPlay self-destructing entertainment? The disc that was meant to simulate a rental, and could have generated literal tons of trash per family, per year? The self-destructing medium the disc was coated in turned very dark red to thwart the disc reader’s lasers! The pits aren’t directly on the surface of the DVD; they’re under a couple of layers of plastic. All FlexPlay had to do was sandwich an additional layer of dye between the plastic and the metal/dye that’s being inscribed upon. When that dye obscures the data below it, it’s as good as gone! The laser can no longer get through to the information and read it. Even Blu-Ray tech was thwarted by the dye.




Blu-Ray discs have higher visual quality than DVDs because they hold even more information. Blue-laser technology enables the pits to be even closer together, so more optical data can be crammed into the same space. Blue light has a shorter wavelength than red light, which shrinks the necessary pit size! A single-layer Blu-Ray disc can hold up to 25 GB of information! Blu-Ray discs are most commonly used for entertainment media rather than storage. Disc readers have to be specifically built for that blue laser technology, rather than just programmed for it. An ordinary DVD player may be able to play a CD, but it wouldn’t be able to fully read a pit in a Blu-Ray disc before that pit’s passed the reader.
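The wavelength effect is easy to put rough numbers on. Using the commonly cited laser wavelengths (650 nm for DVD, 405 nm for Blu-Ray) and the single-layer capacities, a simplified ratio check:

```python
# Shorter wavelength -> smaller laser spot -> denser pits.
DVD_NM, BLURAY_NM = 650, 405    # commonly cited laser wavelengths (nm)
DVD_GB, BLURAY_GB = 4.7, 25.0   # single-layer capacities

# Spot area scales roughly with wavelength squared, so the wavelength
# change alone buys about (650/405)^2 ~ 2.6x the pit density...
wavelength_gain = (DVD_NM / BLURAY_NM) ** 2

# ...but the actual capacity jump is ~5.3x, because Blu-Ray also uses a
# higher-aperture lens and a tighter track spiral on top of the new laser.
actual_gain = BLURAY_GB / DVD_GB
```

So the blue laser does a lot of the work, but it isn’t the whole story – the optics and track layout changed alongside it.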

Right now, the state of the art is Blu-Ray: most good Blu-Ray readers are backwards compatible with DVDs and CDs. However, many companies still sell ordinary DVDs alongside their Blu-ray releases due to cost. If you have a DVD player, you can probably hold off on upgrading, at least for a little while longer.




Curb-Cutting Effect: Accessible Software is Good for Everyone


Certain design choices make software easier to use – and there’s no reason not to use them.

What is the curb-cutting effect?

Most sidewalks have a dip in the curb, where the concrete comes down to meet the street. This is designed for accessibility: people in wheelchairs would struggle to cross the street if the curb didn’t do this. They could even be injured trying to get up and down over the curb, or be forced to roll along in the street until the next loading ramp for a business intersected with the road. However, the dip also makes life easier for everyone on wheels – skateboarders, bicyclists, roller-skaters, etc. don’t have to dismount and carry their transportation over the curb. Elderly folks, able-bodied people, and people with conditions like foot drop can now aim for the dipped curb as well, to make tripping less likely. The carved-out curb has benefited everyone, even though it was only put in place for the folks in wheelchairs.

Accessibility features, when done right, can benefit everyone, not just the people they were made for. This is the curb-cutting effect in action!


Menu Design


Good, clean, accessible menus benefit everyone, not just sight- or mobility-impaired people. The faster and easier it is for someone to find information, the less likely they are to leave the site. How many times have you tried to find something specific on a website, given up, and Googled it – only to land back on the same website? For example, say you’re looking for a local museum’s hours. The site has a drop-down menu along the top, but the options only appear when you hover.

You see information about the exhibits, you see information about the team, about the funding, about the history of the museum itself, you see where to buy tickets or make donations – but the hours are nowhere to be found. You give up, turn around, and instead of engaging with the website, you engage with Google to find the hours. This is obviously annoying for the end-user, especially since that info may be outdated! For people with dyslexia, people with cognitive delays, and sight-impaired people, it’s nearly impossible to navigate, and they end up calling instead.

If your website is really, really big, it’s better to include a table-of-contents menu instead of a dropdown. You can get more specific about what exactly is in each section, so users can navigate the site faster. If your website’s pretty small, a menu that just brings the user to the right point on the page may work – the user can get there either by scrolling or through the menu, and they can Ctrl + F to find the museum’s hours. This also helps users in a hurry to find info! If your website’s medium-sized, a solid, non-retreating, clickable menu is more helpful than a ‘hover’ menu. All of these options also have the benefit of making the website easier to use on mobile devices!


Readable Text


Screen readers are a popular choice for blind and sight-impaired folks, but a critical problem they face is images of text rather than plain text the reader can parse. A lot of information is lost when text is baked into an image, like screenshots of tweets or memes with the caption built in – and written captions can give at least some of that information back. Captions help screen readers out tremendously, and make websites like Reddit, Facebook, and Tumblr, which all use a lot of pictures, more accessible to the blind. However…

Readable text is also easier to copy-paste, easier to cite or quote, and easier to search for. If you remember part of a quote from someone on LinkedIn and want to share it elsewhere, for example, you’ll probably search for it on Google first to see if the website crawlers can find it. If that quote was shared as a photo, the crawlers won’t find it – there’s no text for them to read. You’ll be forced to either plagiarize (don’t do that), drop the quote, or scroll through your feed until you find it. All of that could be avoided if the person had captioned their photo.

Closed-Captions and Transcript Options

Captions aren’t just for the blind. Many deaf and hard-of-hearing people can lipread, but many more can’t, and some would rather not when other options, like captions, are available. Besides, lipreading isn’t perfect. Think about shows where someone’s speaking off-screen, or mouths are covered, like Cops or Grey’s Anatomy. Captions make these shows watchable for people who don’t want to spend half a show guessing what the person with the surgical mask is saying.

Captions also help hearing people. If you’ve learned a language in a classroom setting, including captions, even if they’re also in that language, helps comprehension. Captions make it possible to mute a training video in the office and still gain the information, without disturbing coworkers.

Transcription options are another great example. Most adults read faster than they can speak – a transcript that takes five minutes to read may take ten or fifteen to speak out loud. When a website has no textual information about a video it’s hosting, it suddenly takes twenty minutes of video to gain information that could be read in seven, plus the annoying stop-start of rewinding to critical information, watching it forwards again, rewinding when you can’t figure it out, rewatching, watching further to see if they mention it again, etc. etc. It’s so much more efficient to have a transcript. Digital transcripts also let readers use Ctrl + F, so singling out specific lines of information is easy as pie.
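A quick sanity check on those numbers. The rates below are assumed ballpark averages – roughly 140 words per minute for a presenter and 400 for a brisk silent reader – not measurements from any particular video:

```python
SPEAKING_WPM = 140  # assumed average presenter pace
READING_WPM = 400   # assumed brisk silent-reading pace

video_minutes = 20
words_in_video = video_minutes * SPEAKING_WPM        # 2,800 words spoken
transcript_minutes = words_in_video / READING_WPM    # time to read them instead

print(f"{video_minutes} min of video ≈ {transcript_minutes:.0f} min of reading")
# 20 min of video ≈ 7 min of reading
```

Even with generous assumptions for the speaker, the transcript wins by a factor of about three – before counting any time saved by skipping the rewind-and-rewatch loop.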

Beyond video tutorials and learning material, poor audio mixing can make captions mandatory. Tenet was an entirely different movie with captions on. I’m not joking – important information was completely incomprehensible the first time I watched it, and I’m not hard-of-hearing. Captions were the only way to get all the information out of the movie!

Better Mouse-Input Technology

Part of the reason so many programs behave themselves with gaming mice is accessible software. Not everyone has the strength or dexterity to use a mouse and keyboard, so software designers comply with the ADA by letting the software treat any generic input as a mouse click, if the operating system is set up that way. Things like on-screen keyboards further help people with limited movement, and assistive inputs paired with text-to-speech enabled folks like Stephen Hawking to communicate and control their surroundings better.

Game designers took the idea a step further and let users key-bind to any keys they like. Players who only have enough mobility to reach half a keyboard, or who use a thirteen-button gaming mouse – alongside people going for speed records and people with small desks – can now set their buttons to fit on half the board if they so desire!

Those on-screen keyboards also help if you’re using your laptop as your media center and have it semi-permanently hooked up to a bigger screen. Are you going to get up off the couch to look up a video on YouTube? With the on-screen keyboard, all you need is a Bluetooth mouse, and you don’t have to get up anymore!

These software choices are not only good for ADA compliance – they also make your website more pleasant to use!





Moderator Bots: Baby Steps


In a world of ever-growing conversations and large forums, moderating manpower is in high demand, so websites turn to bots. Is that really the best idea?


Children’s MMOs And Overzealous Bots


Poorly configured bots will spot curse words inside other words, so careful configuration is especially important – otherwise kids can reverse-discover curse words from the filter itself. Kids’ games with open chat are notorious for this issue, even though they should have more attention and care put into their bot moderation than anywhere else. That’s the problem: they go to extreme lengths to protect these children! The people programming auto-moderator bots get overaggressive and tell their bots ‘no exceptions. None.’ Context doesn’t matter: if the bot sees a combination of letters that adds up to a curse word, it has to be removed before other children see it. This, however, causes problems.

If someone tries to type ‘assess the situation’, they may end up with a message that says ‘***ess the situation’. Kids can then confirm or deny words their friends claimed were curse words by bouncing them off the chat filter. Children may be naïve, but they aren’t stupid!
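A minimal sketch of the problem in Python – the one-word ban list and function names here are made up for illustration. A naive substring filter mangles ‘assess’, while a filter that only matches whole words leaves it alone:

```python
import re

PROFANITY = ["ass"]  # stand-in ban list; real filters are much larger

def naive_censor(text: str) -> str:
    """Substring replacement -- mangles innocent words like 'assess'."""
    for word in PROFANITY:
        text = text.replace(word, "*" * len(word))
    return text

def word_boundary_censor(text: str) -> str:
    """Only censor the word when it stands alone as a whole word."""
    for word in PROFANITY:
        text = re.sub(rf"\b{re.escape(word)}\b", "*" * len(word),
                      text, flags=re.IGNORECASE)
    return text

print(naive_censor("assess the situation"))          # ***ess the situation
print(word_boundary_censor("assess the situation"))  # assess the situation
```

The word-boundary version isn’t a complete fix – it now misses curse words hidden inside deliberate mashed-together spellings – which is exactly the tension the rest of this section is about.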

Later on, moderator bots were also trained to spot curse words separated by spaces l i k e t h i s. This isn’t a bad idea – it just has to be delicately configured. People will do their best to worm around content filters, and if spaces work, then they’ll use spaces to curse out other players. The problem is that the machine frequently doesn’t understand the context of the surrounding letters, and you get “Ay* **mells weird” instead of “Aya Ssmells weird” from some little kid’s typo.
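That failure mode is easy to reproduce. In this sketch (again with a hypothetical one-word ban list), stripping out spaces before scanning catches ‘s p a c e d’ cursing, but it also merges letters across word boundaries and flags innocent messages:

```python
BANNED = ("ass",)  # stand-in ban list for illustration

def contains_banned(text: str) -> bool:
    # Collapse spaces so 'a s s' can't slip through the filter...
    squashed = text.lower().replace(" ", "")
    # ...but now letters from adjacent words run together too.
    return any(word in squashed for word in BANNED)

print(contains_banned("a s s"))              # True - caught the evasion
print(contains_banned("Aya Ssmells weird"))  # True - but this is a false positive
```

“Aya Ssmells weird” squashes down to “ayassmellsweird”, and the filter finds a curse word spanning two innocent words – exactly how a typo gets a kid censored.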

The irony of all of this is that it creates a reverse censor effect – clean words seem dirty because the bot’s censored them, words like ‘assassinate’ or ‘scattered’, things kids might actually use in a game. Typos under this system turn into a fount of forbidden knowledge. People will worm around bot moderators, but – especially on children’s forums – it’s important that the bot understands context, at least a little. If it can’t do that, a human teammate is necessary to whitelist weird word combinations as they appear.
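One way to wire in that human-maintained whitelist, sketched in Python – the word lists here are hypothetical stand-ins. The filter judges each whole word against the ban list, but an allowlist reviewed by a human wins:

```python
import re

PROFANITY = {"ass"}                                 # stand-in ban list
ALLOWLIST = {"assassinate", "scattered", "assess"}  # human-reviewed exceptions

def censor(text: str) -> str:
    def replace_word(match: re.Match) -> str:
        word = match.group(0)
        if word.lower() in ALLOWLIST:
            return word                # a human already cleared this word
        if any(bad in word.lower() for bad in PROFANITY):
            return "*" * len(word)     # censor the whole offending word
        return word
    # Scan word by word so the filter judges whole words, not letter runs.
    return re.sub(r"[A-Za-z]+", replace_word, text)

print(censor("assess the situation"))  # assess the situation
print(censor("you ass"))               # you ***
```

The allowlist grows as moderators spot new false positives – which is the point: the bot handles the easy calls, and a human handles the weird ones.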


Paleontology and Oversized Profanity Libraries


There are many bones. And if you were going to single out a specific bone (in the context of paleontology) just to cause problems, which bone would you pick? At one paleontology conference, the censor library picked the pubic bone, alongside a host of other totally normal words like ‘stream’ and ‘crack’. There were curse words in the library too, but, as at most professional conferences, the curse words did not come up nearly as often as the other words, which were being used in completely scientific contexts.

As in the children’s MMO example, it wasn’t an innuendo to say ‘the bone was found in a stream’ until the censor library did the equivalent of adding a flirty wink emoji to the end of the statement. Since tone can’t be conveyed over text except by word choice, the computer singling out one definition of ‘stream’ and applying it to all uses is what made it a dirty word. Beyond the words with no connection to actual profanity, pubic bones come up quite a lot when talking about fossils, because they provide information about how the fossilized animal walked. The pubic bone is the ‘front’ bone in the pelvis: two-legged animals have a differently shaped one than four-legged animals, and animals that walk totally upright, like humans, have differently shaped ones than animals that ‘lean forwards’, like birds.

Why make a moderation bot too strict to have conversations around? They didn’t make the bot! The conference organizers were using a pre-made program that included its own profanity library. Buying software with censorship already baked in sounds like a great idea! Applied correctly, it can save everyone time and prevent profanity from appearing where it shouldn’t, even anonymously. However, ask two people what counts as profanity, and you’ll get two different answers. Everyone has a different threshold for professional language, so it’s better to build a library of the ‘obvious’ words and go from there based on the event. The best censoring software is the kind you don’t have to use. Professional events are better off stating their expectations up front, rather than frustrating their attendees with software that causes more harm than good.


Weaponizing Profanity Filters


Twitter had a bit of a kerfuffle involving the city of Memphis. People using the word ‘Memphis’ in a tweet got a temporary ban. Once word got around, a rash of baiting other Twitter users into typing ‘Memphis’ followed. Memphis getting users banned was the result of a bug, but the incident itself highlights a real issue with profanity filters: it’s possible to bait people into using banned words, especially when those words aren’t inherently profane on their own.

For example, some online games will filter out the names of the countries Niger and Nigeria, to prevent misspellings of a racial slur from evading a deserved ban. Why would North Americans ever be discussing African countries over a game set in Russia, after all? But by including them, the developers have created a way to troll other players without saying anything profane (in context). Baiting another user into answering questions about those countries will get the answerer banned, not the question-asker. The person who answered now has to contact the human support line to get unbanned, or wait out their timeout, which is annoying and inconvenient. The anti-profanity filter has been weaponized!

Building a positive culture around a game takes a lot of effort, and profanity filters are an integral part of keeping arsonists and trolls out. Nobody should feel targeted in game chat for reasons outside the game. However, just like with every example mentioned here, humans should be on call to un-ban and un-block users who were genuinely attempting to answer a question. Err on the side of caution, both with the software and customer support.


Are Bots a Cure?


Short answer: no. Most good moderation teams have at least one human on them in case the bot screws up. Preferably, that person can respond to ‘deleted comment’ or ‘banned user’ complaints right away. Better yet, if the bots are configured well, they won’t jump the gun often enough to need a whole team!

It’s just very difficult to make a bot that understands people well enough to understand every instance of bad language.

If you’re running a forum and you don’t want people using profanity, you will censor the profane words. A bot could do that. But then there’s things like leetspeak, where users will spell the colloquial name for a donkey with two fives in place of the ‘s’s. Do you ban that too? Sure, you could add it to the bot’s library. But then they change the ‘a’ to a ‘4’. Do you censor that too? If you do, people will push to figure out what is and isn’t acceptable to your bots, and they will. Not. Stop.
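Catching the obvious substitutions is easy enough to sketch – normalize common leet characters back to letters before checking the ban list. The mapping below is a small, assumed sample; real evasion tactics mutate far faster than any table:

```python
# Map common leetspeak digits back to letters before filtering.
LEET_MAP = str.maketrans({"4": "a", "3": "e", "1": "i",
                          "0": "o", "5": "s", "7": "t"})

def normalize(text: str) -> str:
    """Lowercase the text and undo common digit-for-letter swaps."""
    return text.lower().translate(LEET_MAP)

print(normalize("a55"))  # ass
print(normalize("455"))  # ass
# ...and users respond by inventing the next substitution the table lacks.
```

Every entry added to the table just moves the arms race one step along – which is the point the paragraph above is making.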

And then there’s things like homophones! TikTok, a popular video app, has a fairly robust profanity filter for text. Videos with curse words and sensitive topics in them are noticeably less popular than ones without, thanks to TikTok’s algorithm. However, people making videos on sensitive topics use phrases like ‘Sewer Slide’ and ‘Home of Phobia’ to evade the bots. The bots, then, have not stopped anything. These conversations will happen no matter what TikTok’s moderators want, and banning the word ‘sewer’ only displaces the problem. If you don’t want users discussing these things on your site, you’ll need human moderators at some point.

Language is dynamic, and bots simply can’t keep up. It takes real people to study language – why wouldn’t it take real people to moderate it online?