Category Archive

Innovation

First Computer ‘Bug’

Behold, a common moth. It died in the then-new Harvard Mark II after getting caught in one of the machine’s relays. But there’s a common misconception about whether this is why errors are called ‘bugs’, and the rest of the story is often forgotten. Here’s a brief overview – there are links at the end for more information on each section, if you’d like to know more!

 

Source of the Term

 

The terms ‘bug’ and ‘debugging’ were already in use for computers and similar machines. The moth incident wasn’t the origin of the term – it’s just a funny side note that an actual, literal bug got into the machine. It would be like literally ‘dropping the ball’ during a game.

The term ‘bug’ has been applied to machines since Edison’s day; the word itself may trace back to the Welsh ‘bwg’, and it has long described irritating or vexing things. Over time, machinery problems proved plenty vexing, and insects, spiders, and other members of the ‘bug’ family bit and stung people. As English evolved, the word settled into general use, until it eventually came to mean three things to English speakers: little crawly things, computer problems, and anything persistent and annoying.

Bugbears, someone bugging you, something being buggy, etc. all come from this!

 

Grace Hopper

 

Grace graduated from Yale with her doctorate in math between the World Wars, and worked as a professor before taking a leave of absence to try (several times) to get sworn into the Navy. She was slightly too old, then too thin, then too useful to be allowed to go to war – but eventually her persistence won out, and the military was lucky to have her.

She’d go on to help create UNIVAC as well as the Harvard math machines; the Mark I ran calculations for the atomic bomb. She also continued to write and research in peacetime – COBOL, a programming language still used today (albeit rarely), grew directly out of her work, making computer languages easier to read and write. She wrote papers the entire time she worked for the Navy, constantly pushing forward and improving the tech she worked on.

She was retired out of the Navy at 60, then called back – then she retired again, and got called back again. She never really “retired”: even when she left the service, she was scouted by the private sector. Most of her roles in this period were advisory, helping guide tech companies in the right direction.

 

Harvard Math Machine

 

Harvard’s computer department is where Grace did some of her most memorable work! She helped assemble, program, and test these absolute behemoth machines. The Harvard Mark II is where the bug came in, but given how the machines built on one another, it’s safe to say her programming contributions made it into all four. Each machine contained the best parts of the ones before it.

Anyway, Grace was crucial to testing, re-testing, re-re-testing… and making improvements! Under Grace and a man named Howard Aiken, the department built and programmed the Harvard Mark II, a beast of a machine weighing well over 20 tons, taught to ‘think’ with relay switches, magnetic tapes, and vacuum tubes. Besides her obvious talent in math, she was a pretty decent speaker, too! The duty of explaining the system usually fell to her, and she had to do it in a way upper management would understand.

This wasn’t the only Harvard machine, either. The lab got to Mark IV before the next leap in computing tech moved Harvard University beyond the gigantic fridge-walls it had been using. In short: Grace and the rest of the Harvard team were at the very forefront of new tech at the time, usually either on the cutting edge or barely behind it.

 

Bug Time

 

Relay technology was new, and everybody who could get their hands on it was eager to see just what it could do. The Harvard Mark II was very fast, thanks in part to the relays that replaced the mechanical counters used in the Mark I. The downside to relays is that they need a physical gap between their contact points to work: when the gap is open, no signal is sent (there’s an air gap between the two parts), and when it’s closed, the signal can make it across. Relays are very fast, much faster than the counters they replaced, and they’re still used today in things like light switches and car turn signals.

The moth got into the gap, bridged the contacts, and let the signal flow continuously, which ruined the machine’s results. Luckily, the machine was advanced enough to put up an error message instead of just shutting down or quietly giving an incorrect answer – which would have been significantly more difficult to track down.
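If it helps to picture the failure, here’s a toy model in Python (purely illustrative – not how the Mark II actually represented anything):

```python
class Relay:
    def __init__(self):
        self.energized = False    # the coil pulls the contacts together
        self.obstructed = False   # debris stuck in the air gap

    def closed(self):
        # The contact closes when the coil says so -- or when something
        # bridges the gap regardless of the coil.
        return self.energized or self.obstructed

relay = Relay()
print(relay.closed())    # False: open gap, no signal

relay.obstructed = True  # enter the moth
print(relay.closed())    # True: the signal now flows continuously
```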

It’s a funny coincidence. A moment in time, caught in the relay of the world’s fastest computer. What a way to go. Good for you, moth. Good for you.

 

Sources:

For more on the bug (with pictures):

https://www.nationalgeographic.org/thisday/sep9/worlds-first-computer-bug/

https://www.atlasobscura.com/places/grace-hoppers-bug

For more on the definition of bug:

https://www.computerworld.com/article/2515435/moth-in-the-machine–debugging-the-origins-of–bug-.html (Warning: this website would like you to sign up for their newsletter via pop-up)

https://www.etymonline.com/word/bug

For more on Rear Admiral Grace Hopper:

https://news.yale.edu/2017/02/10/grace-murray-hopper-1906-1992-legacy-innovation-and-service

For more on the Harvard MK II:

https://www.semanticscholar.org/topic/Harvard-Mark-II/296718

https://en.wikipedia.org/wiki/Harvard_Mark_II

 

 

When did Brand Twitter get Creepy?

Where is the line, here?

Strange Marketing Decisions

You might have seen a strange tweet from Sunny D, an orange-flavored drink company.

It’s jarring – a bright orange bottle wrapped in a brightly colored label is suffering from a mental health crisis. Other brands reply; a chorus of “we’re here for you!” and “You got this!” floods Sunny D’s mentions. It’s equally strange for Moon Pie, a dry pastry treat filled with marshmallow, to reply “stay strong”. How did we stray so far from the original intent of Brand Twitter?

Easy: brands do what people do even when they shouldn’t, for sales.

Brand Twitter: The myth of the underpaid intern

Perhaps it started as a truth – when Twitter was small, many brands put minimal effort into showing up. They’d rather spend their advertising dollars on tried-and-true methods of reaching consumers, and Twitter wasn’t exactly big. When Twitter grew, organizations started to take notice – but marketing departments were busy with other things, so Twitter turned into a pet project for whoever wanted to fool with it. Sometimes that was an intern.

Thus, the legend is born! “The intern” has basically free rein to do as they please because the higher-ups aren’t looking in their direction. It paints the image of a scrappy youngster doing their best for their company the way only an intern can. Of course, most messages are generic, but whatever – that’s a real person representing the organization, and if you tweet at them, they’ll respond like a real person! You have a direct line of access to the company! The brand is a person after all!

It doesn’t stay that way. Companies take notice and realize this is a great way to connect with consumers – but they also realize this one person has enormous power over the brand’s image now that Twitter has gotten bigger. Teams and more experienced people up the line begin to take over, and the interns are relegated to helping write for the account instead of managing it.

And yet, the myth of the underpaid intern persists. Companies deliberately behave like their account is run by one. Even when tweets are traceable to different people, the brand acts like they all came from one person: one very funny, energetic, quick-to-respond intern who knows just the right thing to say at any time of day. (You may notice memes are outdated by the time a brand posts them – that’s a clue it’s a team rather than a single person with full control.)

Sunny D

All of this makes Sunny D’s post even weirder. “I can’t do this anymore”. It’s so simple, and very concerning – it echoes a suicide note. It implies something corporations are literally not capable of, and if this weren’t Brand Twitter, I’d be seriously worried about the person behind the tweet. In a way, I still am – was this someone who got control of the account for a moment, or did the account manager forget to switch to their personal account? Surely this wasn’t intentional. Surely Sunny D’s marketing department didn’t look at that and think, “yeah, this is okay.”

If they really didn’t mean to tweet it, it makes sense that Sunny D would try to salvage a tweet that had already been seen by thousands – a savvy Twitter user knows that deleting a tweet like that could cause serious alarm and force Sunny D to address it seriously. By replying as though they meant it to happen, they save face. It’s still in poor taste, but it looks intentional, and not as serious as the initial tweet might suggest.

Digital Overstep

If they did mean for this to happen, it’s scummy. It’s a drink brand. Corporations are not really people, no matter what the law says. There is no person with the identity of Sunny D that you could physically harm; there is no body or brain of Sunny D that could be experiencing “I can’t do this anymore”. The people who make up Sunny D are a different matter – those people are real. Sunny D the brand is entirely fictitious; it exists only as an idea used to sell orange liquid. It is a concept. Not a person.

This wasn’t particularly effective advertising, and it was, overall, sort of creepy. Sunny D markets to children, right? Children can have these issues, but that’s not what Sunny D was drawing attention to. A brand Twitter behaving like a person is distinctly unsettling in a world where ads are slowly creeping into every facet of life. Brands aren’t supposed to demand this kind of mental burden. You aren’t supposed to see something a brand has tweeted and feel heavier for it. It’s incredibly weird, and it oversteps boundaries.

Marketing: A Tangent

But that’s what marketing does. It pushes and pushes, and if it pushes too far, other brands take note of how much, and what kind of, reaction it got. Pepsi’s ill-thought-out ad with Kendall Jenner, for example, was a laughingstock, and as a result many companies quietly pulled back from the social issues of the time so they wouldn’t get swept up in the scrutiny that followed.

Even then, had Pepsi dared to throw its weight behind one side or the other, it would have come out better. Its audience would have been polarized, but the folks who stuck around would have been more devoted. See Gillette’s “The Best Men Can Be” ad – it split off part of the user base, but Gillette came to be regarded more favorably by the people who appreciated the ad’s message.

Or look at the MyPillow founder’s fervent support of former president Donald Trump – that, once again, split his audience (and ruined relationships with vendors), but he made more money off his target audience as a result. It goes to show that it doesn’t matter which direction you push, as long as you pick one. There are people who believe the moon landing was faked, and those people buy products too – you can sell to them as long as you do it wholeheartedly.

 MyPillow picked one. Gillette picked one. Pepsi didn’t.

Sunny D’s undefinable agenda in this tweet is only working against it.

But what is it for? What purpose did this tweet serve? What agenda is Sunny D advancing?

Wendy’s

Simultaneously, brands feel enormous pressure from accounts like Wendy’s, which has the benefit of being one of the first brands to be ‘sassy’ on Twitter. Other notable front-runners for the trend include Denny’s, which posted to Tumblr in the voice of a single person before it was cool. Other brands know people remember Wendy’s for being funny. They know Wendy’s is on the consumer’s mind before they are. But those brands have also noticed the market is a bit oversaturated with Wendy’s-style content at the moment, and it’s not appropriate for every audience anyway. Tony the Tiger would get some angry replies from parents if he roasted a customer. Wendy’s can afford to be mean because children aren’t its main audience.

Other brands then have to figure out how to transmute what Wendy’s has with its audience into something they can say is their own.

Steak-Umms owns that its meat isn’t the greatest quality, but it is tasty, and that’s endearing in its own way; it jokes online that its set is cheaply made. Denny’s is still Denny’s – they own the strange atmosphere of their restaurants and turn it into a selling point.

These campaigns are unique and a little edgy, but certainly not tasteless – unique enough to be remembered, not so unique that they revolutionize the space. Meanwhile, Moon Pie and Frosted Flakes are incredibly milquetoast, but their branding is familiar and their campaigns aren’t offensive.

All Together

Sunny D, for some reason, strayed from all of this. The tweet’s edgy. It’s too dark for the target audience: if kids understand it, a drink brand just threatened suicide; if they don’t, the tweet is meaningless to them. Agenda-less. Not promoting anything except empty words of affirmation toward an entity that only exists on paper. Meaningless chatter that also crosses a line.

The tweet is somehow totally devoid of meaning and wildly too personal at the same time!

Sunny D’s tweet seems so creepy in comparison to other brands’ tweets because it’s too much for the average consumer to take in from a stranger – even if Sunny D were a person. This is the kind of thing you text to a friend in private. It’s a personal moment, and it requires a delicacy and understanding that one stranger can’t often provide for another. How would you even go about comforting a drink brand?

And keep in mind: a brand always wants to be your friend, because most people listen to their friends. Ask someone for a favor and they’ll feel obligated to help – and people like feeling helpful, so they’ll generally think a little better of you afterwards. With this tweet, Sunny D is pantomiming a very real, personal interaction and asking for help it is literally incapable of receiving: once again, it’s a non-sentient drink brand. It’s asking for a favor it can’t use – one you’d never refuse a real person.

Sunny D is behaving like it’s something it’s not, and in doing so it’s entered a digital uncanny valley. It’s mimicking human interactions in an attempt to sell more orange liquid.

Traditional Storage vs Quantum Storage: What Does it Actually Mean?

 

Traditional Methods

Traditional storage means a lot of things right now. Magnetic storage is still in consistent use, since SSDs aren’t yet at the point of replacing everything the way hard drives themselves once did.

Quantum computing, meanwhile, occasionally hits the news when a major breakthrough happens, and for good reason! It promises to do more than any major storage or computing advancement before it. Quantum computing isn’t just ‘better’ classical computing – it’s a whole new ballgame, built with totally new technology.

What is ‘Quantum’?

 

Quantum mechanics. It’s frequently invoked by the sci-fi show’s token show-off to demonstrate their knowledge of physics. But what is quantum mechanics, really? As a concept, it’s not that tough to grasp, and you’ve probably witnessed some of the principles in action without even realizing it! For example, have you ever played the game of hiding a coin under one cup, and then shuffling it around with two other cups?

Assume someone sits down to pick a cup, and nothing about you or your behavior gives the coin’s location away. Until they pick up a cup, the coin is, in a sense, under all three: there’s a 33.33% chance the coin is under whichever cup they choose. However, once you pick up the two cups you know are empty, the odds condense. There’s now 100% certainty the coin is beneath the final cup, and 0% chance it’s under the other two.
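That ‘condensing’ can be sketched in a few lines of ordinary probability – no quantum math involved yet, just the cup game as described:

```python
from fractions import Fraction

# The cup game's odds before anyone looks: an even three-way split.
odds = {"cup 1": Fraction(1, 3), "cup 2": Fraction(1, 3), "cup 3": Fraction(1, 3)}

# Lift the two cups you know are empty: their odds drop to zero...
odds["cup 1"] = odds["cup 2"] = Fraction(0)

# ...and the remaining probability renormalizes onto the final cup.
total = sum(odds.values())
odds = {cup: p / total for cup, p in odds.items()}
print(odds["cup 3"])  # 1 -- total certainty; the odds have 'condensed'
```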

In real physics, this example doesn’t work perfectly, because most quantum systems break down into observable truths once observed – and you, the shuffler, are an observer too. You have some way of knowing which cup the coin is under: the coin makes a sound as it’s dragged around the table, or it’s heavy enough that the cup holding it is obvious. If you can know where the coin is at all, then there’s one observable outcome where the coin is under the noisy cup – not three potential outcomes where the coin is under every cup. Your observation makes it true for your opponent, as well!

If you imagine the coins are actually particles, and the cups are really probable locations, you’ve got something close to real quantum mechanics in action!

 

Make Waves

 

Quantum mechanics (leaving the math out of the explanation) is just a way to describe the probability of a particle existing somewhere in a real, physical environment, when its actual location can only be expressed through that probability – otherwise it stops behaving the way it’s ‘supposed to’.

This probability breaks down into wave forms, where certain spots are more likely than others to contain a particular particle. For example, the cups each have a 33.3% chance of holding the coin, but the table outside the cups has a 0% chance. In a dark room, where nobody can observe that the surroundings are coinless but everyone ‘knows’ coins go under cups (like we ‘know’ where electrons tend to be found in an electron shell), the chance of the coin being on the floor is very, very small – but not 0%.

Out of all the places the coin could be, though, it’s probably still under one of the cups, and almost certainly still on the table. If you graphed this probability, you’d see hills of likelihood where the cups are, and dips where they aren’t! This is how we calculate the probable locations of things like electrons and photons, which behave in ways humans don’t fully understand yet. The coins in the example above are like those particles: a photon is probably in a certain area, given what we know about its behavior – but attempting to actually measure it as a wave makes it behave like a particle, breaking its quantum state. Information is lost, and the particle no longer behaves the way it did when it wasn’t being observed. Picking up the ‘cup’ to look fundamentally changes the behavior of the ‘coin’ underneath!

How does this turn into a revolutionary computing method?

 

Entangled

 

Quantum entanglement describes items (like particles) being tied to each other in such a way that one item can’t be described without also describing the others in the system – so observing one collapses the whole system, as though you were looking at all of it. For example, say you put two different coins under two cups. Each cup has a coin, but which cup has which coin can’t be accurately described until one cup is lifted.

Once that cup is lifted, the first coin is described – and the second coin has now also been described, because there’s no way the coin you’re looking at is under the other cup. Each cup now contains (and, looking back, ‘always contained’) its respective coin. But only once you observed it! The probabilistic wave forms have collapsed into two points of 100% likelihood.

That doesn’t mean one coin/particle was always, definitely underneath its specific cup – until you picked up the cup, both were under both cups, mathematically speaking (remember, this is a rough example: coins and particles obey different laws). Entanglement also has a lot to do with superposition, since both coins would have had to share a location for the cup-and-coin trick to happen.
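As a rough sketch (ordinary Python, using the cup analogy rather than real quantum states), the two-coin system only exists as a pair of joint possibilities until one cup is lifted:

```python
import random

# The two 'entangled' assignments: which coin is under which cup.
joint_states = [
    {"cup 1": "penny", "cup 2": "dime"},
    {"cup 1": "dime", "cup 2": "penny"},
]

# Neither coin's location can be described on its own -- only the joint
# possibilities exist. Lifting one cup 'samples' the whole system:
observed = random.choice(joint_states)

print("cup 1 holds:", observed["cup 1"])
print("cup 2 must hold:", observed["cup 2"])  # fixed by the same observation
```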

 

Superposition

 

Superposition describes things existing in the same space – and it’s not solely a quantum idea. Two notes played on an instrument at the same time, for example, create a new sound out of their superposition. The big thing about superposition is waves. Physical objects can’t be superimposed on one another, and two particles can’t occupy exactly the same location. However… properties of objects can be expressed mathematically, as wave forms, and in that form they can be superimposed. Much like different wavelengths of light combine to form a new color, the odds of objects being in a certain state, or in a certain unobserved spot, can combine in superposition!

In the two-cup example, the coins are in a state of superposition until a cup is removed and their options solidify; before then, whatever equation describes one coin’s location can be added to the equation describing the other coin’s, and both remain valid. Neither is disproven by the existence of the other until one is observed. Until then, the superposition stands.
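The wave half of that idea is easy to demonstrate: two wave forms occupying the same space simply add, point by point. A minimal numpy sketch of the two-notes example:

```python
import numpy as np

t = np.linspace(0, 1, 1000)
note_a = np.sin(2 * np.pi * 440 * t)   # A4, 440 Hz
note_b = np.sin(2 * np.pi * 554 * t)   # roughly C#5, 554 Hz

# The superposition: both notes remain 'valid' inside the combined wave.
combined = note_a + note_b
print(combined[:5].round(3))
```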

These concepts, when put together, allow computers to read bits that aren’t yet bits, but could be bits.

 

Sum Total

 

All of this sounds really complicated – and it is, mathematically – but conceptually, it just boils down to ‘things can be predicted to be in multiple spots at once’, and ‘things can be a combination of the probabilities of other things, instead of just one thing, until observed’.

A quantum computer looks at probabilistic bits like we look at those coins, and it doesn’t think ‘that’s a 1’ – it thinks ‘this is probably a 1, but if it was a 0, how does that change the data?’ and ‘how does this being a 1 affect later bits?’ The most common path of quantum computing research uses qubits, which stay in a state of superposition.

This means the qubit is both a zero and a one until the computer looks at it and determines its state via some randomized mechanism that maintains the quantum state. It could be the state of the electrons at the time the computer reads it; it could be the magnetic direction the qubit randomly settles into when excited. It just has to behave in a way that outside observers can’t definitively trace to one specific outcome. If it can manage that, then the machine can explore all the available options at once.
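A heavily simplified sketch of that read-out, treating a qubit as a pair of amplitudes and measurement as a weighted random choice (this is the standard textbook picture, not any particular machine’s hardware):

```python
import numpy as np

state = np.array([1, 1]) / np.sqrt(2)   # equal superposition of |0> and |1>

probabilities = np.abs(state) ** 2       # Born rule: |amplitude|^2 -> [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probabilities)
print(outcome)  # 0 or 1 -- and once read, the superposition is gone
```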

 

Advancements

 

How is this faster, you may ask? Well, qubits ‘stack’. One qubit can be in two states, and each additional qubit doubles the possibilities – they daisy-chain together into exponentially larger potential states, which lets answers be explored simultaneously instead of linearly. In a perfect system, faults are discarded, and the quantum computer spits out the right answer in a fraction of the time a classical computer would have taken.
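The ‘daisy-chaining’ is just doubling, but doubling runs away fast – a quick illustration:

```python
for qubits in (1, 2, 10, 50):
    print(f"{qubits:>2} qubits -> {2**qubits:,} simultaneous possibilities")
# 50 qubits already describe over a quadrillion states at once.
```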

For example, say a password is tied directly to the state of a pair of dice in an automatic shaker. A quantum computer could spit out a probabilistic password, but a classical computer can’t compete: even a supercomputer would have to get lucky to guess the shaker’s result before the dice are shaken again.

While this sounds very futuristic, websites are already using algorithms that convert random footage into protection for their servers – Cloudflare’s lava lamp wall is one such example. By the time a classical computer has worked out what the algorithm required when lava lamps A through Z were in a given position, literally all of them have changed, the code has changed with them, and that math is useless. A quantum computer may be able to step up to the plate where the classical computer struggles!

As Dr. Shohini Ghose puts it, this isn’t the equivalent of several classical computers, or one big classical computer compressed into a smaller state – it’s a totally new technology that will behave differently as it advances. Even a supercomputer would struggle with the lava lamp wall; a quantum computer may not. Every qubit used in a calculation has the potential to lead to a correct answer or a wrong one. Good quantum computing kicks out incorrect answers as soon as they’re produced, leaving something the lava-lamp-wall algorithm will accept as an answer.

Dr. Ghose uses the example of a coin-flip game, where participants face off against a quantum computer. If the computer is told to win and goes first, it produces a probabilistic result that only collapses with the other player’s input – the computer essentially lets its coin keep spinning in the air until it can tell what the human player has, then catches it and reveals the answer it ‘always had’. The answer existed in a probabilistic state, and it won; it just needed to be observed to prove that to the human. The computer only loses when it mistakes noise for the actual result – if it could suppress noise perfectly, it would win 100% of the time.
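Here’s a small numpy sketch of that game (often called the quantum penny flip) under ideal, noise-free math – the H matrix is the computer’s ‘keep it spinning’ move, and it wins whether or not the human flips:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # computer's move
X = np.array([[0, 1], [1, 0]])                  # human's flip
I = np.eye(2)                                    # human stands pat

heads = np.array([1.0, 0.0])   # coin starts out heads-up

for human_move in (I, X):
    state = H @ human_move @ H @ heads   # computer, human, computer
    print(f"P(heads) = {abs(state[0])**2:.2f}")  # 1.00 both times
```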

 

Why Not Earlier?

 

These computers have been seriously considered since the ’80s and ’90s, and now they’re having a resurgence. What kept them from becoming practical earlier?

Logical faults are a big part of it. Modern AI can suppress output it knows isn’t ‘really’ part of an equation’s answer, but the coin-flip computer above still lost 7% of the time to bad answer output. In the past, quantum computers couldn’t have identified their own mistakes even down to 7% without a classical computer running alongside them, which defeats the purpose. Unlike classical computers, where faults like that come from the hardware, quantum computers get these errors from the state of the universe itself. Of course that’s difficult to compensate for.

Aside from that, there were mechanical issues to sort out first. The system can’t be allowed to turn a qubit into a regular bit prematurely – a failure called ‘decoherence’. Decoherence happens when the system becomes connected to something measurable, observable: with two cups, lifting one solidifies the probability, and the other cup, even though it hasn’t been observed, now definitely holds the other coin. Once a qubit has solidified into a regular bit, it may as well never have been a qubit at all!

Mechanically, avoiding decoherence takes speed and environmental control. A quantum computer doesn’t maintain its quantum state indefinitely – the longer it has to hold that state, the worse off the state is, until eventually something collapses in a measurable way. Heat will do it; stray magnetic or electrical pulses will do it. Flip one qubit and you’ve scrambled the system, or collapsed it entirely. Decoherence destroys the calculation.

Side note: if you’ve heard of the double-slit experiment, that’s an example of decoherence! Measuring the particles breaks the system, while deliberately not measuring them allows that nice waveform: their final location becomes known, but not the path they took to get there. In computing, measuring a qubit before the computer gets to it breaks it down into a not-qubit, rendering the system decoherent and ruining the results of the calculation.

 

Tid-Bit

 

Ironically, Schrodinger haaated that his ‘cat experiment’ got big, because folks were taking it too literally. For those who haven’t heard of the thought experiment (no cats were ever actually put in a box): radioactive material has a certain chance every second of releasing a radioactive particle; put that material next to a particle-sensitive trigger, and the trigger releases poison into the cat’s box. Since there’s no guarantee of poison being released into the box, there’s no mathematical certainty that the cat is either alive or dead – so it’s both. Just like the coin is under all three cups.

But not really. At the scale the experiment would have to take place, the cat’s as good as poisoned already (a lump of radioactive material has so many individual atoms that the odds of none releasing a particle at any given moment are basically zero) – but Schrodinger was struggling to explain the concept to laypersons with no prior exposure to physics.

The thought experiment does a great job of breaking down what’s actually occurring with superposition. It’s not about the cat, or poison, it’s about the particles. If the experiment could be particle-sized, it would work the way it’s described.

 

 

Sources:

https://indianapublicmedia.org/amomentofscience/the-heisenberg-uncertainty-principle.php

https://www.sciencealert.com/quantum-computers

https://jqi.umd.edu/glossary/quantum-superposition

Shohini Ghose via TED Talk (direct link: https://www.youtube.com/watch?v=QuR969uMICM)

https://www.ibm.com/quantum-computing/learn/what-is-quantum-computing/

https://www.nature.com/articles/s41598-020-75730-1

https://newsroom.ibm.com/2015-04-29-IBM-Scientists-Achieve-Critical-Steps-to-Building-First-Practical-Quantum-Computer

 

Public Campaigns Vs. Public Image: A McDonald’s Story

#Farmers of McDonalds

 

#MeetTheFarmers was the initial campaign, the one McDonald’s had planned for. There was some static – the same issues that apply to the meat industry as a whole apply to McDonald’s – but complaints of cruelty and poor animal husbandry didn’t completely ruin the hashtag. Generally, the farmers were happy, the animals looked happy, and McDonald’s was happy with its campaign.

It’s always risky to get folks involved when a company doesn’t know for sure that they view it positively, but so many farmers rely on McDonald’s that it was basically a slam dunk. Sure, PETA got a little rowdy, and nobody’s going to make themselves look bad by posting pictures of sad or poorly treated animals, but the crop farmers generally felt they’d been treated fairly by McDonald’s. This was great! McDonald’s does a lot to support local farming, and it was smart to emphasize how much of its food is bought from US farms. The patties are still packed with things that keep them from molding, but at least the cow didn’t have to fly overseas to reach its packing plant or restaurant.

It was a genuinely good campaign – they could trust they’d done right by the people who’d be replying, and they could retweet specific responses as a form of curation. Marketing done right!

 

#McDStories

 

After the success of #MeetTheFarmers, McDonald’s paid to promote its next big hashtag, #McDStories. This went downhill very fast. Where McDonald’s farmers are a small, controllable, generally professional group, the public is… not. And it turns out that when a corporation consistently underpays and overworks its workers, they’re going to do things – or skip things – that customers notice. McDonald’s yoinked the paid promotion slot, but by then the tag was already out of control.

All they could do was damage control as all sorts of nasty stories rolled in: violence by staff members, sanitation issues in the bathrooms, uncleaned ice machines, solicitors in the parking lot, solicitors inside the store, solicitors coming up to windows in the drive-through – customers had seen it all! Heck, even workers joined in. McDonald’s was apparently struggling with its management chain in places, and issues that could have been resolved with better training and store support (the shift lead isn’t supposed to be in charge of pest control, for example) just weren’t getting noticed until #McDStories forced them to the front of the line. Bugs. Food contamination. Food poisoning. The wrong cleaners used on grills and the ice cream machine. Rats. All things that could be controlled or even eliminated with better contact from regional management.

Beyond that, though, the campaign showed that McDonald’s didn’t really know how it looked from the outside. A fast food restaurant universally loved by children, or a fast food chain whose locations play it fast and loose with adult customers’ food? Upper management had allowed both to exist, but only one of those demographics is regularly on Twitter.

#McDStories highlights a critical disconnect between the McDonald’s marketing department and the outside world.

 

Marketing Mistake

 

Every company wants to be perfect in the customer’s mind. They all want to be clean, friendly (except for Dick’s), and accessible. Things start to split when you get specific, though: pubs cater to adults, so they don’t use the same bright colors as family-friendly restaurants, and the marketing for a pub is going to be very different from the marketing for a kid-oriented fast-food chain. McDonald’s has been trying to shift more toward adults in recent years, and since adults care about different things than they used to, it has struggled to find a common thread among its customers. Maybe this was a crowd-sourcing campaign for ad ideas; maybe it was just an attempt to appeal to adults.

Either way, it made the disconnect between ‘McDonald’s the brand’ and ‘McDonald’s the restaurant chain’ pretty obvious. You’d never see Waffle House running this sort of campaign. Waffle House knows what kind of people stumble into its restaurants at 3 AM for a couple of post-bar waffles, and wild stories of incidents inside Waffle Houses are scattered across the web. A famous Vine shows two employees fighting while a customer asks for a waffle in the background. And yet Waffle House is well-liked. It knows what it is. It doesn’t pretend every customer has a great time; they’re there for cheap food and the strange sense of community a 3 AM Waffle House has. Besides, Waffle House’s management style seems to keep customers and workers alike pretty happy!

Denny’s, another cheap diner with 24-hour locations, has incidents too, but they rarely go viral – it’s acquired a cultural image of ‘you ate at Denny’s, you knew what you were getting into’. McDonald’s has unknowingly slipped into the same territory: appealing to adults on a 24-hour schedule means you’re going to get some strange customers. But McDonald’s doesn’t seem to realize that’s where it is, so it doesn’t know how to lean into it yet the way other 24-hour restaurants do. It’s very concerned with being family-friendly. Management- and marketing-wise, something is obviously slipping if these stories were genuinely unexpected.

 

How Could They Fix It?

 

The long and short of it is: listen. Burger King’s rat-bun scandal shut the offending store down while Burger King corporate handled the issue. McDonald’s had received complaint-tweets before, but they somehow didn’t get attention until they’d racked up retweets – so really, McDonald’s set itself up by promoting a scrollable tag. Companies that won’t respond to anything but highly public complaints do this to themselves. If a complaint to the manager or to corporate doesn’t change things, of course the customer will resort to what works.

Listen to customers through official channels, and the unofficial ones that everyone can see won’t be flooded with horror stories about missing quality!

 

Sources:

https://www.businessinsider.com/mcdonalds-twitter-campaign-goes-horribly-wrong-mcdstories-2012-1

https://www.forbes.com/sites/kashmirhill/2012/01/24/mcdstories-when-a-hashtag-becomes-a-bashtag/?sh=2aeca912ed25

AI: You Get Out What You Put In

AI needs training to understand what it’s meant to do. The quality of the training determines its outcomes.

 

Tay Fed Tweets

 

Microsoft’s Tay was exposed to the worst of the internet at incredible speed. Once Microsoft announced the project, Tay began forming her own tweets out of the content she was sent, and it went about as well as you’d expect: racist, sexist, anti-Semitic language ruled her feed, and she was shut down shortly after launch. It’s an unfortunate experiment, because Tay might not have turned so quickly if she’d simply been exposed to open Twitter anonymously. There are plenty of racist tweets on Twitter, but being targeted by a mob produces a disproportionate amount of ‘bad’ tweets aimed at the target. Then again, if Microsoft hadn’t announced Tay’s existence, she wouldn’t have received as many messages at all – knowing what the experiment is allows people to screw with it in the same way it allows them to participate.

The experiment was still considered a success in one sense: Tay took exactly what she was given and rebuilt it in new, interesting ways. Racist new ways, but still new ways. A machine successfully learned to make a targeted threat toward another Twitter user – that was huge for machine learning. Of course, Microsoft doesn’t exactly want that to be the face of its machine-learning program, so Tay was reset and relaunched with filters in place, under different names.

This is a key experiment, not only because of how fast Tay keyed in on how she was ‘supposed’ to behave, but also because it highlights the risks of learning live from an active environment: an AI may end up learning things it’s not supposed to, to the detriment of the environment it’s meant to be part of!

 

Google Deep Dream Fed Eyes

 

Google’s Deep Dream software was famous when it first reached the public. It had been fed pictures of living things, and so living things were all it could see, anywhere. Everything it touched was coated in eyes and fur texture. It was horrifying. Deep Dream pictured all sorts of eldritch horrors in everyday items – it was doing its best, but so much of what it knew was pictures of dogs.

Google fed Deep Dream a set of images from a database assembled by a university, but it wasn’t given all of the images – that would have been a huge amount of data for the then-small Deep Dream. Instead, Deep Dream consumed the library in smaller pieces, and one of those pieces was a very fine-grained sub-library of dog images. Deep Dream’s instructions were, in effect: take the picture, illustrate what it sees in it, repeat. Little aberrations slowly turned into eyes and whiskers.
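The loop itself is short enough to sketch. This is a bare-bones approximation in PyTorch, not Google’s actual code – the layer choice, step size, and use of googlenet are all stand-ins:

```python
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()

# Grab the output of one mid-level layer as the thing to amplify.
captured = {}
model.inception4c.register_forward_hook(
    lambda mod, inp, out: captured.update(act=out)
)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input

for _ in range(20):
    model(image)
    captured["act"].norm().backward()   # 'how strongly does this layer fire?'
    with torch.no_grad():
        # Nudge the image to make the layer fire harder, then repeat.
        image += 0.01 * image.grad / image.grad.abs().mean()
        image.grad.zero_()
```

Run long enough, those nudges turn faint patterns into eyes and whiskers.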

Since then, Deep Dream has added filters that let users pick which database of images to use, each of which creates a new, wacky image out of the user’s own uploads – but the dog filter still sits strong in people’s favorites. Sometimes things are so creepy they’re cute! The story of an AI chugging along and doing its best is one for the ages.

 

Art-Breeder

 

Art Breeder is an AI-powered character creation tool, and it’s already been covered by some of the largest channels on YouTube. Art Breeder breaks down human expressions into granular emotions, such as happiness or fear. Using sliders, the user can alter a preexisting image, or create a face out of thin air! Art Breeder uses its database to put together what it thinks happiness or sadness does to the human face. It’s difficult to get a perfectly realistic human face – most still look animated or cartoony – but it’s also frighteningly easy to accidentally create a blob-monster.

Art Breeder’s AI doesn’t actually know what it’s seeing; it’s just doing its best based on the pictures it’s been fed. It doesn’t know that glasses are separate from a person’s face, for example, or that mouths shouldn’t have four corners for smiling and frowning at the same time. It doesn’t necessarily understand freckle patterns, or where blush belongs. Art Breeder is fascinating: it can make faces, or it can make motorcycle-accident victims, all with a single mis-click of a slider.
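Under the hood, a slider is (roughly) vector arithmetic in the model’s learned ‘latent space’. A hand-wavy sketch – `generator` here is a hypothetical stand-in for the real network:

```python
import numpy as np

def generator(latent):
    """Hypothetical: the real model turns a latent vector into a face image."""
    return latent  # placeholder

face = np.random.randn(512)               # one face, as a latent vector
happiness_axis = np.random.randn(512)     # a direction the model ties to 'happy'

slider = 0.8                              # user drags the happiness slider
edited = generator(face + slider * happiness_axis)
# Drag too far, and the vector leaves the region of plausible faces --
# which is exactly where the blob-monsters live.
```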

 

AI-Dungeon Fed Fanfiction

 

AI Dungeon, an up-and-coming Steam game, made some waves when it announced it was trying to fix issues in its script generation – especially the AI’s tendency to get explicit with named characters. Why, you ask? The source AI that AI Dungeon builds on was trained partly on fanfiction, alongside Wikipedia and assorted other text sources. Fanfiction, for those who don’t know, is fan-written fiction about popular media.

Fanfiction is great because it can go absolutely buck wild with its content: characters are put into alternate universes where a certain character’s death never happens, or the entire cast works in a coffee shop, or two characters end up in a relationship even though nothing in the canon suggested it could happen. It’s a great place for aspiring writers to start, since the characters are already written – all that’s left is to put them together in a different way.

Unfortunately, a lot of fanfiction is… explicit, so filtering is very necessary. Prompt AI Dungeon with explicit content and it will attempt to recreate what it was trained on – an absurd amount of explicit content mixed in with all the general-audience and PG-13 material the developers wanted to use.

The worst part isn’t the explicit content, which is allowed – it’s that the machine didn’t know such content was only supposed to involve adult characters, which produced some very awkward, uncomfortable output for the people who discovered the flaw. The developers have since updated their reporting system to keep that from happening again, and the Dungeon is now allowed to auto-flag itself when it spots content it’s not supposed to be making.

 

Potential for Racism

 

Unfortunately, training facial recognition software mostly on white faces means the computer only truly understands white faces; it has insufficient training everywhere else. When the only birds you’ve handled are budgies, every budgie looks different – but all cockatiels look the same until you’ve gotten some experience with them. The AI isn’t being given the necessary experience, and as a result it misidentifies Black and Asian men at disproportionate rates.

It’s happened before. It will continue to happen unless steps are taken to prevent it. All it takes is a mistake that the human handlers either don’t catch or deliberately ignore, and an innocent person is a suspect where they otherwise wouldn’t have been.

Interestingly enough, the same phenomenon shows up in far more primitive automation, the kind soap dispensers and automatic door openers use. “Racist soap dispensers” sounds like political fluff, but it’s a real issue: a dispenser tuned to detect white hands doesn’t know whether it’s supposed to respond to darker palms, so it just… doesn’t. Older designs that relied purely on movement actually did a better job than the kind that’s supposed to identify a hand before dispensing. The exclusion may be accidental, but the result is still unfair treatment along racial lines.
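A toy illustration of that failure mode – all numbers invented, but the shape of the problem is real: calibrate a reflectance threshold on light palms only, and darker palms fall below it:

```python
# Reflectance readings from the only hands the designers tested with.
calibration_samples = [0.62, 0.70, 0.66, 0.68]   # light palms only

threshold = min(calibration_samples) * 0.9       # 'works for everyone we tried!'

def dispense(reflectance):
    return reflectance > threshold

print(dispense(0.65))  # True: hands like the test set get soap
print(dispense(0.35))  # False: a darker palm gets nothing -- exclusion by omission
```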

The biases of the researcher are reflected in their research, and if they don’t notice that themselves, they may believe they had no hand in how the data was collected or how the questions were chosen. That’s why it’s so critical to test, re-test, and re-re-test experiments. Biases are difficult to rule out, but not impossible. Don’t assume a machine is perfectly logical and always right: it was made by humans, after all.

Sources:

https://www.fastcompany.com/3048941/why-googles-deep-dream-ai-hallucinates-in-dog-faces

https://www.artbreeder.com/

https://latitude.io/blog/update-to-our-community-ai-test-april-2021/

https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential

https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html

https://www.nature.com/articles/d41586-020-03186-4

https://www.nature.com/articles/d41586-019-03013-5

Optical Storage

 

Optical storage is defined by IBM as any storage medium that uses a laser to read and write information. Lasers allow more information to be packed into a smaller space than tape could manage (at the time)! Better quality and longer media time are natural results. A writing laser burns information into the surface of the media, and a weaker reading laser deciphers those burnt areas into usable data. The recording surface is usually a layer of metal or dye that burns easily, sandwiched between protective layers of plastic; burning produces ‘pits’, or less-reflective areas, for the laser to read.

This is why fingerprints and scratches can pose such a problem for reading data: even though they don’t damage the actual data layer, the way scratching a hard drive platter would, they prevent the laser from reading the data cleanly. Scratch up the plastic above the dye badly enough, and the data is as good as destroyed.

Destruction can be even more thorough than that. Shredding the disc in a capable shredder (ONLY IF IT SAYS IT CAN SHRED DISCS) destroys the data, as does microwaving the disc – though don’t microwave a disc unless you plan on trashing the microwave soon. Most discs contain some amount of metal, and that can wear the microwave out faster. Fun!

 

CDs

 

“Burning a CD” replaced “making a mix tape” once both CDs and downloadable music were available to teenagers, and for good reason: the amount of content is roughly the same, but the quality is significantly higher.

Most writable CDs are CD-Rs – discs that can only be written once but can be read until the end of time. (CD-ROMs, the kind commercial software shipped on, are pressed with their data at the factory rather than burned.) The average CD-R has room for about an album’s worth of music, and maybe a hidden track or two – about 74 to 80 minutes, depending on the manufacturer of the disc. Alternatively, if you’d like to store data instead of high-quality audio, you’ll fit about 700 MB onto a single disc.
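Those capacity numbers come straight out of sector math. A back-of-the-envelope sketch, assuming an 80-minute disc at the standard 75 sectors per second:

```python
sectors = 80 * 60 * 75             # 360,000 sectors on an 80-minute disc

data_bytes = sectors * 2048        # data mode: 2,048 usable bytes per sector
audio_bytes = sectors * 2352       # audio mode: 2,352 bytes per sector

print(round(data_bytes / 2**20))   # ~703 -- the familiar '700 MB'
print(round(audio_bytes / 1e6))    # ~847 million bytes of raw audio = 80 minutes
```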

To burn a CD, you need an optical drive that’s also capable of lasering information into the disc, which wasn’t always standard. The laser burns the information into the metal-and-dye layer behind the disc’s plastic coating, permanently changing how reflective those sections are. This makes it possible to tell visually how much of a disc has been used, and CD-Rs can be burnt across multiple sessions! Data is typically burnt from the center outwards.

But everybody knows about CD-Rs. What about CD-RWs, their much fussier brethren?

 

CD-RW

 

The primary difference between a CD-R and a CD-RW is the dye used in the layers the optical drive reads. CD-RWs are burnt less deeply than CD-Rs, so they take a more sensitive reader – early disc readers sometimes can’t read the more modern CD-RWs at all!

To reuse the disc, one has to blank it first (the same drive that can write a CD-RW should also be able to blank it), which takes time. After it’s been wiped, new data can be written to the disc. CD-RWs do wear out quicker than other memory media as a result of their medium: that wafer-thin dye layer can only be rearranged so many times before it loses the ability to hold data. The average user is unlikely to hit the re-write limit, but it’s far more reachable than, say, a hard drive’s, which has a re-write life about 100 times longer than a CD-RW’s.

 

DVDs

 

DVDs store significantly more data than CDs, even though the discs take up about the same space. Where a CD holds about 700 MB, a DVD holds up to 4.7 GB. That’s enough for most movies, but if a movie is especially long or has a lot of extra features, the disc has to be double-layered, bringing it up to about 8.5 GB. Why can it hold so much more in the same space?

The long answer is that a number of small differences add up to more burnable space: a tighter ‘laser spiral’ (the track a laser burns, like the groove in a vinyl record) and smaller readable pits. It all adds up to more data storage – and a more expensive product as well.
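You can get a feel for it from the published disc geometry alone. Approximate figures, just to show the arithmetic:

```python
cd_track_pitch, cd_min_pit = 1.6, 0.83     # micrometers
dvd_track_pitch, dvd_min_pit = 0.74, 0.40

geometry_gain = (cd_track_pitch / dvd_track_pitch) * (cd_min_pit / dvd_min_pit)
print(round(geometry_gain, 1))   # ~4.5x just from the tighter spiral + smaller pits

print(round(4.7 / 0.7, 1))       # ~6.7x actual gain; the rest comes from
                                 # more efficient encoding and error correction
```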

 

DVD +R DL

 

That double-layering mentioned earlier isn’t present on every disc. Sometime in the later 2000s, double-layer discs hit the market at about the same price as single-layer discs (although that changed over time). The first layer the laser reads is semi-transparent, so the laser can penetrate it to reach the layer below.

Most modern DVD drives can read dual-layer discs, but if your computer is especially old, it’s wise to check its specs first: drives made before DLs were released might not understand the second layer, and drives that can read them still might not be able to write to them. DLs are a great invention; it’s just a struggle to find good disc readers now that everything is switching to digital.

 

Compatibility

 

CD players aren’t usually able to play DVDs too. CDs came first, so a CD player would have had to be forwards-compatible to read DVDs – and that would have taken a time machine to pull off. Picture expecting a record player to read a CD! The gap between the two formats is almost that large. Nowadays, the manufacturing standard seems to be a DVD player with CD compatibility tacked on. You should still double-check before buying a disc reader to be sure it does everything you want, but CD-only tech is increasingly rare when a DVD reader is only slightly more expensive to make and works backwards.

DVDs also carve pits (or burn marks) into the shiny material of the disc. Just like CDs, most DVDs can only be written once, although DVD-RWs do exist (and struggle the same way CD-RWs do).

 

FlexPlay Self-Destructing Entertainment

 

Remember FlexPlay ‘self-destructing’ discs, the ones meant to simulate a rental – and which could have generated literal tons of trash per family, per year? The self-destructing medium coating the disc turned very dark red to thwart the reader’s laser! The pits aren’t on the surface of a DVD; they’re under a couple of layers of plastic. All FlexPlay had to do was sandwich an additional layer of reactive dye between the plastic and the metal/dye layer being inscribed upon. Once that dye darkens and obscures the data below it, the data is as good as gone: the laser can no longer get through to the information to read it. Even Blu-Ray tech was thwarted by the dye.

 

Blu-Ray

 

Blu-Ray discs offer higher visual quality than DVDs because they hold even more information. Blue-laser technology lets the pits sit even closer together, so more optical data can be crammed into the same space: blue light has a shorter wavelength than red light, which shrinks the necessary pit size! A single-layer Blu-Ray disc can hold up to 25 GB of information, and Blu-Ray discs are most commonly used for entertainment media rather than data storage. Disc readers have to be physically equipped for that blue laser, not just programmed for it – an ordinary DVD player might manage a CD, but it couldn’t fully read a pit on a Blu-Ray disc before that pit had already passed the reader.
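The wavelength argument can be put in numbers, too. Using the approximate optical specs of each format (the focused spot scales with wavelength divided by the lens’s numerical aperture):

```python
dvd_spot = 650 / 0.60    # 650 nm red laser, NA 0.60
bd_spot = 405 / 0.85     # 405 nm blue-violet laser, NA 0.85

print(round((dvd_spot / bd_spot) ** 2, 1))   # ~5.2x smaller spot area...
print(round(25 / 4.7, 1))                    # ...vs the ~5.3x capacity jump
```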

Right now, Blu-Ray is the state of the art, and most good Blu-Ray readers are backwards-compatible with DVDs and CDs. However, many companies still sell ordinary DVDs alongside their Blu-Ray releases due to cost. If you have a DVD player, you can probably hold off on upgrading, at least for a little while longer.

 

Sources:

https://www.britannica.com/technology/optical-storage

https://www.dell.com/support/kbdoc/en-us/000149930/what-are-the-different-cd-and-dvd-media-formats-available

http://www.osta.org/technology/cdqa13.htm

https://www.techrepublic.com/article/all-about-cd-r-and-cd-rw/

https://www.scientificamerican.com/article/whats-a-dvd-and-how-does/

https://kodakdigitizing.com/blogs/news/cd-vs-dvd-how-are-they-different

http://recordhead.biz/difference-blu-ray-dvd/

https://www.dell.com/support/kbdoc/en-us/000147805/guide-to-optical-disk-drives-and-optical-discs

 

In Terms of Memory

Elizabeth | History, Innovation | April 23, 2021

The first Bit of Data

 

A bit is a single character in binary – the name comes from shortening “binary digit”. A bit is the simplest piece of data a machine can read: either a 1 or a 0. A yes or a no. True or false. The bit has been around longer than computers, originating with the punch cards that analog machines “read” as far back as the 1700s.
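In code, bits are never far away – any integer will show you its own:

```python
number = 42
print(bin(number))           # '0b101010' -- six bits spell out 42
print(number.bit_length())   # 6
```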

 

Processing

 

If you’ve recently upgraded to Windows 10, you may recall having to check whether your computer is 32-bit or 64-bit. The number describes how much memory the computer’s processor can address: is it equipped to read 32 consecutive bits of data as an address, or 64? A 32-bit computer has fewer possible memory addresses in its CPU registers – not much more than 4 GB’s worth, or 2^32 addresses – while a 64-bit computer can in principle handle 2^64 addresses, around 18 quintillion, far more memory than any real machine installs. This doesn’t mean a 32-bit computer can only store 4 GB of data; it means it can only keep track of 4 GB worth of ‘names’ at once. The files themselves can be nearly any size, as long as there’s storage available for them.
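The arithmetic behind those limits fits in two lines:

```python
print(2**32 / 2**30)   # 4.0  -> a 32-bit register can name 4 GiB of addresses
print(2**64 / 2**60)   # 16.0 -> a 64-bit register could name 16 EiB (exbibytes)
```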

 

Then, a Byte

 

A byte is usually eight bits, in compliance with the international standard – but it didn’t always have to be. A byte used to be however long it needed to be to show a character on screen, usually somewhere between two and ten bits, with exceptions down to one and up to forty-eight bits for certain characters. Eight-bit bytes became the standard through their convenience for the new generation of microprocessors in the ’70s: eight bits in binary give 256 possible arrangements of ones and zeroes. Sixteen bits would give far more possibilities than needed and could slow the computer down, while four bits would only give 16 – you’d end up combining groups of bits anyway just to cover a basic character set.

 

Alphabet

 

Eight sounds like the perfect combination of length and possible complexity, at least with the benefit of hindsight. The government had struggled with incompatible systems across branches due to differing byte sizes before 8-bit bytes came along. ASCII was the compromise, at seven bits per character, and when commercial microprocessors arrived in the 1970s, the two worlds compromised again on Extended ASCII, so that commercial and government systems could communicate.

However, not all Extended ASCII versions contained the same additions, so Unicode was formed later to bridge the gaps between versions. Unicode, a character encoding standard that includes the ASCII characters as its first code points, is one of the most common character encodings out there, typically stored as one to four eight-bit bytes per character. You’ll still run into mismatches, too – if you’ve ever opened an article and seen little boxes where characters should be, the text was written with a bigger character set than the one your reader understood. The reader doesn’t know what goes there, so it puts a blank!
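You can watch the two systems diverge in any Python prompt – ASCII covers its 128 characters and nothing else, while UTF-8 (Unicode’s most common encoding) keeps those and extends past them:

```python
print("A".encode("ascii"))    # b'A' -- one byte, comfortably within ASCII
print("é".encode("utf-8"))    # b'\xc3\xa9' -- two bytes, beyond ASCII's range

try:
    "é".encode("ascii")
except UnicodeEncodeError:
    print("ASCII has no é -- a reader limited to it shows a box instead")
```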

 

Kilobyte

 

1,000 bytes of storage forms a kilobyte, or KB. This is the smallest unit of measure the average computer user is likely to see written out on their device – not much can be done with less than 1,000 bytes. The smallest document I can currently find on my device is an Excel file with two sheets and no equations in it: 9 KB. A downloadable “pen” for an art program on my device takes up 2 KB.

PCs in the DOS days, before Windows, had about 640 KB to work with, not counting memory dedicated to essential operations.

The original Donkey Kong machines had approximately 20 kilobytes of content for the entire game.

 

Megabyte

 

A megabyte is 1 million bytes, or 1,000 kilobytes. By this era, computers had made some progress past relays, moving to hard disks for internal memory. IBM’s first computer containing a megabyte (or two) of storage, the System 355, was huge – it was also one of the first models to use disk drives, which read faster than tapes. In 1970, users who didn’t want a fridge could invest in the now merely desk-sized 3 million bytes of IBM’s Model 165, an improvement over GE’s 2.3 million bytes the year before; the year before that, Univac had unveiled a machine with separate cores tied together to give users between 14 and 58 megabytes of capacity (per Byte Magazine), at the cost of space. IBM’s System 360 could reach up to 233 megabytes with auxiliary storage, but its size was… prohibitive, reminiscent of that first System 355.

Tapes and drums were competitive with the disk format for a while, but ultimately disk and solid-state storage improved faster and won out (and right now it looks more and more like SSDs, those solid-state drives, will out-compete spinning disks too). During the ’80s, the technology improved so much that hard disks became standard – IBM released a home computer with 10 MB of storage in 1983 – and floppy disks handled media transport.

DOOM came out in the 1990s at 2.39 MB for its downloadable file, with smaller, DLC-like packs of fan-created mods coming out along the way.

 

Gigabyte

 

A gigabyte is 1 billion bytes, or 1,000 megabytes. In 1980, IBM released another fridge – but this one stored up to a gigabyte of information! (Back in 1974, IBM had released a 20-foot-long beast of a storage system that held up to 236 GB of data on magnetic tape.) According to the Merriam-Webster Dictionary, you can pronounce gigabyte as “jig-ga-bite”, which just… feels wrong.

In 2000, the first USB sticks (memory sticks, jump drives, etc.) were released to the public with 8-megabyte capacities, and they were so convenient that floppy disk drives began disappearing from computer designs in favor of USB ports. USB sticks improved rapidly from there, soon reaching capacities of one, two, and four gigabytes while floppies struggled to keep up.

Besides being smaller and harder to break, those USB sticks also stored more. Where the first USB sticks held 8 MB, the standard floppy disk of the time could only hold 1.44 MB. Knowing how small DOOM is, it would take two floppy disks to hold all of DOOM, but a USB stick only took one. By 2009, USB sticks with capacities of 256 GB were available on the market. That’s roughly 178,000 floppy disks.
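
The math is easy to sanity-check. A rough sketch in Python, using the same nominal capacities mentioned above (real formatted capacities vary a little):

import math

FLOPPY_MB = 1.44        # standard 3.5-inch floppy disk
DOOM_MB = 2.39          # DOOM's downloadable file
USB_2009_GB = 256       # high-end USB stick, 2009

# DOOM needs two floppies, but fits on one early 8 MB USB stick:
print(math.ceil(DOOM_MB / FLOPPY_MB))            # 2

# A 256 GB stick replaces a small warehouse of floppies:
print(round(USB_2009_GB * 1000 / FLOPPY_MB))     # 177778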

 

Terabyte

 

A terabyte is 1 trillion bytes, or 1,000 gigabytes. The first commercial drive with a capacity of one terabyte was sold in 2007 by Hitachi, a Japanese construction and electronics company. The movie Interstellar, released in 2014, featured a depiction of a black hole known as Gargantua – and became famous again when it turned out to closely resemble the first picture of an actual black hole, captured by the Event Horizon Telescope collaboration in 2019. A ring of light surrounds the black hole in two directions: one due to friction-heated material Gargantua has accumulated, one due to the lensing of light around it. The gravity is so intense that light itself is pulled into orbit around Gargantua’s event horizon and kept there. It took 800 terabytes to fully render the movie and make Gargantua somewhat accurate in terms of light-lensing.

Petabyte

A petabyte is 1 quadrillion bytes, or 1,000 terabytes. This is typically cluster storage, and while it’s available for purchase, it’s very expensive for the average consumer. For comparison, while rendering Interstellar took 800 terabytes, storing it at standard quality takes about 1/200th of a terabyte – roughly 5 GB. You could store approximately 200,000 DVD-quality copies of Interstellar on a single petabyte. It took a little less than 5 petabytes of data to take the picture of the real black hole at the center of M87.
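
Those comparisons are just unit conversions, so they’re easy to verify. A quick sketch, assuming the roughly 5 GB DVD-quality file size described above:

TB_IN_GB = 1000
PB_IN_GB = 1000 * TB_IN_GB

movie_gb = TB_IN_GB / 200          # 1/200th of a terabyte
print(movie_gb)                    # 5.0 GB per copy
print(PB_IN_GB / movie_gb)         # 200000.0 copies per petabyte
print(800 * TB_IN_GB / movie_gb)   # 160000.0 - the rendering data, measured in copies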

 

Sources:

https://en.wikipedia.org/wiki/Bit

https://kb.iu.edu/d/ahfr

http://www.differencebetween.net/technology/software-technology/difference-between-unicode-and-ascii/

https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3155B.html

https://www.pcworld.com/article/127105/article.html

https://www.wired.com/2014/10/astrophysics-interstellar-black-hole/

https://www.merriam-webster.com/dictionary/gigabyte

https://www.nasa.gov/mission_pages/chandra/news/black-hole-image-makes-history

https://www.jpl.nasa.gov/edu/news/2019/4/19/how-scientists-captured-the-first-image-of-a-black-hole/

Magnetic Storage Types

 

Magnetic Tape

 

The most well-known version of tape-based magnetic storage is the kind used for media. When tape-based recording was first introduced (mostly post-WWII), it revolutionized the talk show and DJ scene of the time, because shows could be recorded and played later rather than airing live. Music-recording tech already existed, but it required physical interaction from the DJ, so it wasn’t as hands-off as tape was.

The second-most well-known version is the kind used for computer memory! Data is stored on the tape in the form of little magnetic ‘dots’ that the computer can read as bits. Before each pocket of data dots is a data marker that tells the computer how long that pocket should be, so it knows where one set of data ends and the next begins. The polarity of each dot determines its bit value, and the computer reads all these dots as binary code.
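
As a toy model of that ‘marker, then data’ layout, here’s a short Python sketch. The format is invented for illustration – real tape formats are far more involved – but the idea of a length marker telling the reader where each pocket ends is the same:

# Toy tape: each record starts with a length marker, followed by
# that many data bits (each bit standing in for a dot's polarity).
tape = [3, 1, 0, 1,   # marker '3', then three data bits
        2, 0, 1]      # marker '2', then two data bits

records = []
pos = 0
while pos < len(tape):
    length = tape[pos]                    # read the data marker
    records.append(tape[pos + 1 : pos + 1 + length])
    pos += 1 + length                     # jump to the next marker

print(records)   # [[1, 0, 1], [0, 1]]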

This method of data storage was a massive breakthrough, and other media continue to use the format even today! Tapes are still in use for big stuff – parts of IBM’s library rely on modern tapes, which can now store terabytes of information at a higher density than disks and flash drives alike. Other memory types relying on magnetic domains include hard disks and drums, to name a couple. All that separates them is material and know-how: the better the magnetizing material on the outside, the smaller the domains can get, and the better the insulation between the domains and regular old entropy, the more stable the data is!

 

Carousel Memory

 

Carousel memory was an attempt at shrinking the space magnetic tape took up, taken to the extreme. Instead of one very long piece of magnetic tape on a bobbin, the carousel memory system used several smaller reels of tape arranged in a carousel pattern around a central read mechanism. Getting to the right info was as simple as selecting the right reel! This had some issues, as you might imagine: moving parts add complications and an increased risk of mechanical failure to any device, and a device carrying thin, delicate magnetic tape is an especially bad place to start.

However, it wasn’t all bad. Carousel memory was actually quite fast for the time because it didn’t have to rewind or fast-forward as much to get to the right area of code. It could skip feet of tape at a time! This advantage declined as tape tech improved, but it still helped companies trying to squeeze the most life from their machines. The bobbins and individual ribbons were all replaceable, so the tape wasn’t worthless if it got torn or damaged. The carousel itself was also replaceable, so the many moving parts weren’t as much of a curse as they’d be on, say, the first hard disks, which had irreplaceable heads.

 

Core Rope Memory

 

Core rope memory featured magnetic grommets, or ‘cores’, on metal ‘ropes’, and those ropes were woven into a fabric the computer could read. In ROM (read-only memory) format, if a wire went through a core, it read as a ‘one’, or a ‘yes’; if it bypassed the core, it read as a ‘zero’, or a ‘no’. In this way, the fabric was physically coded into binary the computer could use. Core rope memory in ROM form involved quite a bit of complicated weaving and un-weaving to get the wires through the right cores.
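
A minimal sketch of the idea in Python (purely illustrative – not how the Apollo hardware actually addressed its ropes): each word corresponds to a wire, and a core reads as 1 only if that word’s wire threads it.

# Each word is represented by the set of cores its wire passes through.
WORD_WIRES = {
    "word_a": {0, 2},     # wire threads cores 0 and 2 -> 1010
    "word_b": {1, 2, 3},  # wire threads cores 1, 2, 3 -> 0111
}
NUM_CORES = 4

def read_word(name):
    threaded = WORD_WIRES[name]
    # Threaded core = 'one'; bypassed core = 'zero'.
    return "".join("1" if core in threaded else "0"
                   for core in range(NUM_CORES))

print(read_word("word_a"))   # 1010
print(read_word("word_b"))   # 0111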

Core rope memory was chosen over tape memory for the Apollo missions, mainly for weight reasons. Tape was great, but not nearly dense or hardy enough for the mission yet, and neither were the other core modules available to NASA. A read-only core rope module could store as many as 192 bits per core, where erasable core memory could only manage one bit per core. Each bit in the rope module depended on whether a wire passed through a core, while the erasable kind (core memory) read each core’s magnetic state instead of the threads going through it. The final module sent up to get to the moon weighed 70-ish pounds altogether and read fairly quickly. No tape, core memory, or hard disk available at the time could have matched that weight and speed.

Core rope memory has its place. It’s very sturdy, and since the wiring physically encodes the bits, it’s possible to visually identify bugs before the memory is even used, unlike with core memory. Both are sometimes called ‘software crystallized as hardware’ because of the core system. It isn’t seen much today, since it’s still incredibly bulky, but at the time of its use it was revolutionary.

 

Core Memory

 

Core memory is the older sibling of core rope memory, and it stores less. However, the people who got to work with it call it one of the most reliable forms of memory out there! Core memory works much the same as core rope memory, where the bits are stored in cores.

However, the formats are different. If core rope memory is like a binary-encoded scarf, core memory is more like a rug. Thin threads of conductive material are woven into a grid pattern, with cores suspended where the threads cross each other. The computer treats these threads as address lines, so asking for a specific bit is as simple as locating the core’s X and Y address. A third set of lines, the sense lines, runs through each core on the diagonal, and these do the actual reading.

When asked to read a bit, the computer sends current down the X and Y address lines to push the selected core toward zero, and the sense line detects whether the core flips its magnetic polarity. If it doesn’t flip, it was already a zero. If it does, it was a one – and it has now been flipped to zero by the reading process. This is known as ‘destructive reading’; the computer compensates by flipping the bit back to where it was after the read. Due to its magnetic nature, the core then keeps this info even after power to it is cut!
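
The read-then-restore cycle is easy to mimic in code. Here’s a minimal sketch in Python (names and structure invented for illustration) of a destructive read with automatic write-back:

class CoreMemory:
    """Toy model of core memory with destructive reads."""

    def __init__(self, width, height):
        # One bit per core, addressed by X and Y.
        self.cores = [[0] * width for _ in range(height)]

    def write(self, x, y, bit):
        self.cores[y][x] = bit

    def read(self, x, y):
        # Drive the core toward zero; if it flips, it held a one.
        bit = self.cores[y][x]
        self.cores[y][x] = 0        # the read destroys the stored bit...
        if bit:
            self.write(x, y, 1)     # ...so the computer writes it back
        return bit

mem = CoreMemory(4, 4)
mem.write(2, 1, 1)
print(mem.read(2, 1))   # 1
print(mem.read(2, 1))   # still 1, thanks to the write-back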

The National MagLab’s interactive core memory tutorial (linked in the sources below) is an excellent, interactive diagram of the system.

Even though core memory improved the bit-to-space-taken ratio, it still aged out of the market. With the price of bits decreasing rapidly, core memory got smaller and smaller, but the nature of its assembly meant it was almost always woven by hand – all competitors had to do was match the density and win on labor costs. Soon, its main market was taken over by semiconductor chips, which are still used today.

 

Magnetic Bubbles

 

Magnetic memory has had strange branches grow off the central tree of progress, and magnetic bubble memory is one of those strange shoots. It was developed at AT&T’s Bell Labs by Andrew Bobeck, who also worked on other forms of magnetic memory. Bubble memory never took off the way other magnetic memory styles did, although it was revolutionary for its compact size – before the next big leap in technology, people thought this was the big leap. It was effectively shock-proof! Unfortunately, better DRAM chips hit the market shortly after it did and crushed bubble memory with improved efficiency.

Anyway, bubble memory worked by moving the bit to be read to the edge of the chip via magnetic fields. The magnetic domains themselves are what move, much the same way electrons move along a wire when charge is applied, so nothing physical actually moves within the chip! It was cool tech, and it did reduce space; it just didn’t hold up against semiconductor memory chips. Bubble memory saw a spike in use during a chip shortage, but it was so fiddly that as soon as DRAM chips were available again, it went out of style.
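
Functionally, a bubble memory track behaves like a rotating loop of bits: the chip shifts the whole loop until the requested bit arrives at a detector on the edge. A toy Python model of that access pattern (my own simplification, not anything from a datasheet):

from collections import deque

# A circulating track of bits with a detector at position 0.
track = deque([0, 1, 1, 0, 1, 0, 0, 1])

def read_bit(index):
    # Shift the loop until the requested bit reaches the detector.
    # Only the magnetic domains 'move' - nothing physical does.
    track.rotate(-index)
    bit = track[0]
    track.rotate(index)    # restore the original alignment
    return bit

print(read_bit(3))   # 0
print(read_bit(4))   # 1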

 

Semiconductor DRAM – Honorable Mention

 

DRAM chips are a lot like core memory, in that the device reads the state of a physical object to determine the bit’s value. In semiconductor chips, that physical object is a tiny capacitor, hooked up to a tiny transistor, on semiconductive metal-oxide material. Instead of determining magnetic state, the device checks whether the capacitor is charged or not: no charge = 0, charge = 1. These chips aren’t technically magnetic, but since they’ve killed off so many of the other options, here they are!

DRAM stands for Dynamic Random-Access Memory. ‘Random access’ means the memory can be accessed in any order instead of linearly: as long as the computer knows where data is stored, it can pull it without reading through other data first. ‘Dynamic’ refers to the fact that the capacitors slowly leak their charge, so the chip has to refresh every cell constantly to keep its data. They’re still being sold today!
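
The ‘dynamic’ part is worth a sketch of its own. Below is a loose Python model (the leak rate is made up for illustration): each cell is a leaky capacitor, so the controller has to re-read and re-write every cell on a timer or the data fades away.

class DramCell:
    """Toy DRAM cell: a leaky capacitor read against a threshold."""

    LEAK_PER_TICK = 0.2   # invented leak rate, for illustration only

    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        # Charge slowly drains away between refreshes.
        self.charge = max(0.0, self.charge - self.LEAK_PER_TICK)

    def read(self):
        return 1 if self.charge > 0.5 else 0

    def refresh(self):
        self.write(self.read())   # top the capacitor back up

cell = DramCell()
cell.write(1)
for _ in range(10):
    cell.tick()
    cell.refresh()     # with refreshes, the 1 survives indefinitely
print(cell.read())     # 1 - skip the refreshes and it decays to 0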

 

Magnetic Disk (Hard Disk Drive)

 

Hard drives work more like tape than core memory. A hard drive is a platter (or a stack of platters) with a read-write head hovering above it. When you save data, the head magnetizes tiny areas of the platter to represent that information in binary. When you read or recover that data, the head interprets those areas as bits, where the polarity of each magnetized zone is either a zero or a one.

The zones of magnetization are incredibly tiny, which makes hard drives one of the more demanding memory forms out there, both now and back then.

Early hard drives could suffer from ‘de-magnetization’, where a magnetic disk’s domains were too close and gradually drew each other out of position, slowly erasing the information on the disk. This meant that the disks had to be bigger to hold the data (like everything else at the time) until better materials for data storage came along. Even though they held more capacity at launch, they were passed over for smaller and more stable stuff like tapes and core memory. The very early drives developed by IBM were huge. Like, washing machine huge. They didn’t respond to requests for data very quickly, either, which further pushed reliance on tape and core technology.

Over time, hard disks improved dramatically. Instead of laying magnetic zones flat, end-to-end, standing them vertically next to each other created even denser data storage – dense enough to outcompete other forms of media storage entirely. Especially small hard drives also come with a second layer of non-magnetizable material between the recording layer and a third, reverse-magnetized ‘reinforcement’ layer that keeps the data aligned correctly. This crams even more data capacity into the disks!

Sometime in the 80s, hard drives finally became feasible for personal computers, and they’ve been the standard since. SSDs, which have no moving parts whatsoever, are gaining ground in the market, but due to their different storage techniques, they can’t be as reliably, irrevocably erased as hard drives can. Hard drives are going to stick around a while, especially in the medical and military industries, as a result!

 

Sources:

https://spectrum.ieee.org/tech-history/space-age/software-as-hardware-apollos-rope-memory

https://www.apolloartifacts.com/2008/01/rope-memory-mod.html

https://electronics.howstuffworks.com/vcr.htm

http://www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html

https://www.computerhistory.org/revolution/memory-storage/8/253

https://nationalmaglab.org/education/magnet-academy/watch-play/interactive/magnetic-core-memory-tutorial

https://www.rohm.com/electronics-basics/memory/what-is-semiconductor-memory

https://cs.stanford.edu/people/nick/how-hard-drive-works/

https://psap.library.illinois.edu/collection-id-guide/audiotape

https://www.engadget.com/2014-04-30-sony-185tb-data-tape.html

https://en.wikipedia.org/wiki/Carousel_memory (all primary sources regarding carousel memory are in Swedish)

 

 

 

 

Curb-Cutting Effect: Accessible Software is Good for Everyone

 

Certain design choices make software easier to use – and there’s no reason not to use them.

What is the curb-cutting effect?

Most sidewalks have a dip in the curb, where the concrete comes down to meet the street. This is designed for accessibility: people in wheelchairs would struggle to cross the street if the curb didn’t dip. They could even be injured trying to get up and down over the curb, or be forced to roll along in the street until the next business’s loading ramp met the road. However, the dip also makes life easier for everyone on wheels – skateboarders, bicyclists, roller-skaters, etc. don’t have to dismount and carry their transportation over the curb. Elderly folks, able-bodied people, and people with conditions like drop foot can aim for the dipped curb to make tripping less likely. The carved-out curb has benefited everyone, even though it was only put in place for the folks in wheelchairs.

Accessibility features, when done right, can benefit everyone, not just the people they were made for. This is the curb-cutting effect in action!

 

Menu Design

 

Good, clean, accessible menus benefit everyone, not just sight- or mobility-impaired people. The faster and easier it is to find information, the less likely a visitor is to leave the site. How many times have you tried to find something specific on a website, only to give up and Google it – and have Google point you right back to that same website? For example, say you’re looking for a local museum’s hours. The museum site has a drop-down menu along the top, but the options only appear when you hover.

You see information about the exhibits, the team, the funding, and the history of the museum itself; you see where to buy tickets or make donations – but the hours are nowhere to be found. You give up, turn around, and instead of engaging with the website, you engage with Google to find the hours. This is obviously annoying for the end user, especially since Google’s info may be outdated! For people with dyslexia, cognitive delays, or sight impairments, such a menu is nearly impossible to navigate, and they end up calling instead.

If your website is really, really big, it’s better to include a table-of-contents menu instead of a drop-down. You can get more specific about what exactly is in each section, so users can navigate the site faster. If your website’s pretty small, a menu that just brings the user to the right point on the page may work – the user can get there either through scrolling or the menu, and they can Ctrl + F to find the museum’s hours. This helps users in a hurry, too! If your website’s pretty medium, a solid, non-retreating, clickable menu is more helpful than ‘hover’ menus. All of these options also have the benefit of making the website easier to use on mobile devices!

 

Readable Text

 

Screen readers are a popular tool for blind and sight-impaired folks, but a critical problem they face is images of text rather than plain text the reader can parse. A lot of information is lost when text is baked into an image, like screenshots of tweets or memes with the caption built in – and text captions can give at least some of that information back. Captions help screen readers tremendously, and make image-heavy websites like Reddit, Facebook, and Tumblr more accessible to the blind. However…

Readable text is also easier to copy-paste, easier to cite or quote, and easier to search for. If you remember part of a quote someone shared on LinkedIn and want to find it again, you’ll probably search Google first and see if the web crawlers can find it. If that quote was shared as a photo, the crawlers won’t find it – there’s no text for them to read. You’ll be forced to either plagiarize (don’t do that), drop the quote, or scroll through your feed until you find it. All of that could be avoided if the person had captioned their photo.

Closed-Captions and Transcript Options

Captions aren’t just for the blind. Many deaf and hard-of-hearing people can lipread, but many more can’t, and some would just rather not if other options, like captions, are available. Besides, lipreading isn’t perfect anyway. Think about shows where someone’s saying something off-screen, or mouths are covered, like Cops, or Grey’s Anatomy. Captions make these shows watchable for people who don’t want to spend half a show guessing what the person with the surgical mask is saying.

Captions also help hearing people. If you’ve learned a language in a classroom setting, you know that captions – even captions in that same language – help comprehension. Captions also make it possible to mute a training video in the office and still get the information, without disturbing coworkers.

Transcription options are another great example. Most adults read faster than they can speak – a transcript that takes five minutes to read may take ten or fifteen to speak out loud. When a website hosts a video with no textual version, gaining the information suddenly takes twenty minutes of watching instead of seven minutes of reading, with the annoying stop-start of rewinding to critical information, watching it forwards again, rewinding when you can’t make it out, rewatching, and watching further to see if they mention it again. It’s so much more efficient to have a transcript. Digital transcripts also let readers use Ctrl + F, so singling out specific lines of information is easy as pie.

Besides video tutorials and learning material, poor audio mixing can make captions mandatory. Tenet was an entirely different movie with captions on. I’m not joking: important information was completely incomprehensible the first time I watched it, and I’m not hard-of-hearing. Captions were the only way to get all the information out of the movie!

Better Mouse-Input Technology

Part of the reason so many programs behave themselves with gaming mice is accessible software design. Not everyone has the strength or dexterity to use a mouse and keyboard, so software designers comply with the ADA by letting the operating system treat any generic input as a mouse click, if the user sets it up that way. Things like on-screen keyboards further help people with limited movement, and assistive inputs paired with text-to-speech enabled folks like Stephen Hawking to communicate and control their surroundings better.

Game designers took the idea a step further and let users key-bind actions to any keys they like, as shown in the sketch below. Players who only have enough mobility to reach half the keyboard, or who use a thirteen-button gaming mouse – alongside people going for speed records and people with small desks – can now lay their controls out on a half-board if they so desire!
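
Under the hood, rebindable controls are little more than a lookup table between inputs and actions. A hypothetical Python sketch (every name here is made up):

# Default bindings spread across the whole keyboard...
bindings = {"w": "move_forward", "s": "move_back",
            "e": "interact", "space": "jump"}

# ...but a player limited to half the board (or a 13-button mouse)
# can remap everything to whatever they can comfortably reach:
bindings.update({"q": "jump", "f": "interact",
                 "mouse4": "move_forward", "mouse5": "move_back"})

def handle_input(key):
    action = bindings.get(key)    # unknown keys simply do nothing
    if action:
        print(f"{key!r} -> {action}")

handle_input("mouse4")   # 'mouse4' -> move_forward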

Those on-screen keyboards also help if you’re using your laptop as your media center and have it semi-permanently hooked up to a bigger screen. Are you really going to get up off the couch to type a YouTube search? With the on-screen keyboard, all you need is a Bluetooth mouse, and you don’t have to get up anymore!

These software choices are not only good for ADA compliance – they also make your website more pleasant to use!

 

 

Sources:

https://www.washington.edu/doit/designing-software-accessible-individuals-disabilities

https://medium.com/@mosaicofminds/the-curb-cut-effect-how-making-public-spaces-accessible-to-people-with-disabilities-helps-everyone-d69f24c58785

https://ssir.org/articles/entry/the_curb_cut_effect

https://www.smashingmagazine.com/2017/11/building-accessible-menu-systems/

 

Knockoff Media Sites Are Usually A Bad Idea

Elizabeth, April 12, 2021

 

Every great once in a while, someone gets the idea to make a social media app that’s just their favorite app, but slightly different. Maybe the rules are a little different, maybe reposting old content is encouraged, instead of a faux pas. Maybe the rules about hate speech are nonexistent, maybe there’s nothing in the rules at all about moderators! Or, maybe it’s just like the worst parts of the site’s early days combined with the worst parts of its modern day experience, because none of the founders really know how to code or secure the platform from hackers. Is that what you wanted, when you left? Well, is it? Is it?!

 

Reddit’s Split-Off

 

Reddit has made…choices. When Digg made decisions that users didn’t like, the users could hold up Reddit as the shining example. “I’ll go to Reddit if they change X”. And users did! Reddit slowly accumulated most of the runoff from Digg simply by absorbing features that Digg shed. Digg eventually imploded.

No website can continue this pattern forever, and Reddit was facing pressure to disown certain communities. Even worse, if it wanted to be on the App Store (and it did), it was going to have to ban a lot of content it had let grow through inaction. Even where nothing was legally wrong, Apple’s App Store is notoriously puritanical.

Subreddits (a subreddit is how Reddit divvies up its forum topics) only get deactivated when the general public notices them, and trying to get in the app store had attracted public notice. If the public doesn’t notice, user complaints go unrecognized.

This is important to the story.

Hired a pedophile sympathizer as an admin? “Unnoticed”, until the public noticed. Widespread brigading goes unrecognized. Regular doxxing? Not a problem until the media reports on it. Some subreddits are allowed to ruin other subreddits because the admins simply don’t feel like getting involved, and if it doesn’t go public, users have no recourse. As a result, Reddit’s communities are entirely dependent on the mods that monitor them; beyond that, it’s pretty lax. Admins rarely step in to fix problems.

There was no censorship until a subreddit made itself into a problem, and once Reddit was in the App Store, it went back to its old policy of “don’t touch it until it’s an issue”.

 

The Climb

 

Two big events led to the major migration to VOAT: the banning of the subreddit r/FatPeopleHate, which was exactly what it sounds like, and the hiring/firing of Ellen Pao, who was used as a scapegoat to introduce much-needed changes. Reddit discovered that once it reached a certain size, the things Digg had done to itself made sense. It also realized that people who had been around since Digg would recognize the hypocrisy. This left Reddit in the difficult position of implementing the changes and losing people, or implementing the changes and losing fewer people. It picked fewer, by using Pao. People were outraged! They wanted to leave, and make a statement on the way out!

But there was a problem. Leavers realized Reddit didn’t have any real competition, and that if they wanted their threats to carry weight, they would need another site to champion. A young, up-and-coming website known as VOAT became that champion, and the people who left smugly waited for Reddit to suffer the same fate as Digg. They’d get to watch VOAT turn into Reddit, but better (for them)! After all, Reddit was “censoring speech” and VOAT wasn’t – how could anyone ever enjoy such an oppressive website? Reddit will inevitably start censoring all but the ‘right’ opinions, they said. Reddit’s going to the dumps, they said. Slippery slope, they said.

Yikes.

 

VOAT: A Drain Filter

 

Obviously, Reddit’s still here and VOAT’s not. And even better, Reddit’s age-old policy of “leave it to the mods, as long as it’s not breaching admin rules” works out more often than not. This hands-off approach allowed a lot of troll communities, but they fixed the issues those caused with policy changes. Or… ‘fixed’ it to Reddit standards, which was better than nothing. Generally, communities were allowed to self-police.

As said before, Reddit’s genuinely so lax it was causing issues publicly! It was never going to turn PG-E, and any pretense of being PG-13 were dropped after Apple accepted them. They only had to kick the subreddits that were failing to self-police anyway, and even if those problem subreddits didn’t self-moderate, they could still stay if they were quiet. “Just don’t get the news involved”. No dead bodies, no child-trafficking, no gore or NSFW content, and no actionable threats. Everything else could stay if it didn’t draw negative attention from the press.

There are a plethora of alt-right and conspiracy subreddits still open, right now, because they stayed quiet and didn’t brigade, dox, or harass outside subreddits. Extremists aren’t kicked, as long as they don’t cause problems. If the leavers couldn’t manage to coexist with other subreddits, then VOAT was welcome to have them, as far as Reddit was concerned.

After the purge, Reddit could hand the reins back to subreddit moderators, say “keep it within our admin rules”, and everything was fine for the people who stayed. The same can’t be said for new VOAT members.

 

The…Climb?

 

VOAT was supposed to be Reddit, if Reddit were even less censored. You know, a safe haven for the people who got kicked. ‘Hey, r/FatPeopleHate escapees, come here and continue to hate! Hey, r/Jailbait! Betcha missed posting pictures of underage girls! And r/WatchPeopleDie, guess what? Dead-body pictures are welcome!’

Notice a theme? Anything too hateful, pervy, or gory for Reddit – which had only kicked those communities out because they were becoming a legal liability – got pushed onto VOAT. And VOAT took everyone in. Since regular Redditors weren’t being harassed off the site anymore, the leftovers of banned subreddits were all VOAT had coming. Understandably, VOAT’s advertisers were unhappy.

Hypothetically,

If you and a bully keep getting into conflicts in the cafeteria, and the bully gets kicked out and heads to the gas station parking lot with the other “cool kids” who got kicked out, are you going to follow them to the gas station? Absolutely not. In fact, most of the other kids in your class are going to stay in the cafeteria, even the ones who sometimes agreed with the bully – the gas station parking lot has gotten a reputation, see.

The people who only sometimes agreed with the bully are okay with talking a little quieter, or more peacefully, so they’re not associated with that guy. And that guy is VOAT. VOAT was not a place for moderates. People who headed there on “principle”, to “stand for free speech”, soon discovered they’d have to assimilate into the far right that made up the majority of the platform – or stay quiet, because moderate and centrist opinions were not welcome. Remember, the people who left were too much for Reddit.

That’s VOAT’s claim to fame. A separation of a bigger site’s problem forums to get into App stores. It eventually lost funding and had to shut down.

 

Tumblr’s Split Offs – The Opposite Side

 

Where people migrated to VOAT because Reddit stopped letting them misbehave, Tumblr’s mass-migration was caused by Tumblr re-defining what misbehaving was.

Tumblr’s ex-users left because the app wanted to water itself down to get into Apple’s good graces, much like Reddit. However, unlike Reddit’s exiles, Tumblr’s had few good alternatives – and no “anti-explicit-content” clause had ever existed in Tumblr’s rules the way anti-harassment rules had for Reddit.

Blogging sites had existed forever, and Tumblr had so many users because it mixed all the good features with none of the bad. Tumblr was anonymous, follower counts were hidden, and the content feed was chronological – the absence of each of these being a common complaint about sites like Twitter and Instagram. Unfortunately, Tumblr’s owners felt the platform was difficult to advertise on because of the explicit content it hosted. The App Store could provide more reach, they thought.

Many of the users would have preferred the website stay as un-advertisable as it had always been. Tumblr’s owners decided they wanted to ban explicit content – and it’s a private platform, they can do what they want. However, most of their content creators had some kind of content the auto-filter might consider explicit. Tumblr gave warning, but when the filters and new policies kicked in, people were upset.

‘Poor Filtration – Please Advise’

 

The people who wanted to stay were tasked with digging through years’ worth of their own posts to find and remove “explicit” content – which, according to the filter, was anything with too much beige or pink in it, visible water, art, or too much visible muscle. Artists and bodybuilders alike found every other post they made getting flagged, and it took manual review from an overworked human somewhere in the back end to get a flagged post back up. Tumblr promised art would still be allowed, but the filter couldn’t tell the difference between explicit content and art that happened to feature nudity. Many didn’t bother appealing their entire post history, and just left.

Additionally, posts featuring other content, like owls along the sides of the image, or images where the explicit content wasn’t centered in-frame, weren’t flagged even when they were obviously explicit – the filter hadn’t been trained enough before launch. Even worse, tagging a post #SFW was effective content-filter evasion, and the spam bots that had plagued users for years got a second wind once their programmers realized it. Even more people left. Tumblr wiped out its advertisability without getting rid of the other issues within the website. Believe it or not, Tumblr also had problems with alt-right communities misbehaving – and the ban didn’t get rid of them, only the explicit photos.

Other Art Scenes

 

A lot of Tumblr’s culture revolved around it’s easy-going art scene. They were a legitimate Wild West, as long as it wasn’t illegal, anything went. Just like Twitter, you could get into arguments over whatever you posted, but unlike Twitter, follower counts were hidden. People could weaponize their followers, but nobody on the outside looking in would know that they were “big personalities”. That’s what brought many of the artists to the site! And because it’s Tumblr, those artists produced a mix of content, but they lost the ability to when Tumblr decided it would trade its soul to be in the Apple App Store.

Instagram didn’t allow explicit art. Twitter was less anonymous. Deviantart allowed explicit content, but their poor discovery feature made it difficult to accumulate fans. Pinterest was a non-starter. Nobody could do what Tumblr did, so once Tumblr dropped a feature, nobody had a plan B. People made downloadable mod packs for the site to restore features, that’s how difficult it was to recreate Tumblr.  Splinter sites that were explicit and did blogging like Tumblr were frequently completely explicit, and those fleeing artists were not welcome to make mixed content there, either.

Relative

Just like VOAT, a couple of people made websites specifically to siphon folks from Tumblr, but none of them ever even got close to VOAT’s initial surge and longevity. VOAT wasn’t great, but it was poised and ready to go – brand new websites that wanted to be Tumblr just weren’t. The splinter websites, unless they were already big, gradually plateaued.

Eventually, the users who stuck around settled on a mix of Patreon, Twitter, and Tumblr, which was A) better-supported, B) better-populated, C) easier to use, and D) less invasive than the majority of the alternatives. Tumblr’s value as a site was slashed to less than a quarter of what it had been before the content ban, and the user count plummeted. Fewer viewers meant less advertising revenue, and the website was passed around between buyers trying to ‘fix’ it.

What’s the long and the short of all of this? If a big website is shedding users because of a policy change, then the websites that try to get those users are going to be worse. Reactionaries do not a website make.

 

Sources:

https://www.forbes.com/sites/masonsands/2018/12/20/tumblocalypse-where-tumblr-and-its-users-are-headed-after-the-ban/?sh=7133e3847020

https://www.cnet.com/news/tumblr-porn-ban-goes-into-effect-flags-puppy-and-pokemon-photos/

https://www.independent.co.uk/life-style/gadgets-and-tech/voat-shutdown-christmas-qanon-reddit-b1779890.html

https://www.ionos.com/digitalguide/online-marketing/online-sales/tumblr-alternatives/