Public Campaigns Vs. Public Image: A McDonald’s Story

#Farmers of McDonalds

 

#MeetTheFarmers was the initial campaign, the one McDonald’s had planned for. There was some static – the same issues that apply to the meat industry as a whole apply to McDonald’s – but complaints of cruelty and poor animal husbandry didn’t ruin the hashtag. Generally, the farmers were happy, the animals looked happy, and McDonald’s was happy with its campaign.

It’s always risky to get outsiders involved when a company doesn’t know for sure how they feel about it, but so many farmers rely on McDonald’s that it was basically a slam dunk. Sure, PETA got a little rowdy, and nobody’s going to make themselves look bad by posting pictures of sad or poorly treated animals, but the crop farmers generally felt they’d been treated fairly by McDonald’s. This was great! McDonald’s does a lot to support local farming, and it was smart to emphasize how much of its food it bought from US farms. The patties are still packed with preservatives that keep them from molding, but the cow didn’t have to fly overseas to reach its packing plant or restaurant.

It was a genuinely good campaign – McDonald’s could trust that it had done right by the people who’d be replying, and it retweeted specific responses as a form of curation. Marketing done right!

 

#McDStories

 

After the success of #MeetTheFarmers, McDonald’s paid to promote its next big hashtag, #McDStories. This went downhill very fast. Where the farmers were a small, controllable group who were generally professional, the public is… not. And it turns out that when a corporation consistently underpays and overworks its workers, they’re going to do things – or skip things – that customers notice. McDonald’s yoinked the paid promotion slot, but by then it was already out of control.

All they could do was damage control as all sorts of nasty stories rolled in. Violence by staff members, sanitation issues in the bathrooms, uncleaned ice machines, solicitors in the parking lot, solicitors inside the store, solicitors coming up to windows in the drive-through – customers had seen it all! Heck, even workers joined in – McDonald’s was apparently struggling with its management chain in places, and issues that could have been resolved with better training and store support (the shift lead isn’t supposed to be in charge of pest control, for example) weren’t even getting noticed until #McDStories forced them to the front of the line. Bugs. Food contamination. Food poisoning. The wrong cleaners being used on grills and the ice cream machine. Rats. All things that could be controlled or even eliminated with better contact from regional management.

Beyond that, though, the campaign showed that McDonald’s didn’t really know how it looked on the outside. A fast food restaurant universally loved by children, or a fast food restaurant with locations that play it fast and loose with adult customers’ food? The upper management had allowed both to happen, but only one of those demographics is regularly on Twitter.

#McDStories highlighted a critical disconnect between McDonald’s marketing department and the outside world.

 

Marketing Mistake

 

Every company wants to be perfect in the customer’s mind. They all want to be clean, friendly (except for Dick’s), and accessible. However, things start to split when you get specific: pubs cater to adults, so they don’t use the same bright colors as family-friendly restaurants, and the marketing for a pub is going to be very different from the marketing for a kids’ fast-food chain. McDonald’s has been trying to shift toward adults in recent years, and since adults care about different things than kids do, it has struggled to find a common thread among its customers. Maybe #McDStories was a crowd-sourcing campaign for ad ideas; maybe it was just an attempt to appeal to adults.

Either way, it made the disconnect between ‘McDonald’s the brand’ and ‘McDonald’s the restaurant chain’ pretty obvious. You’d never see Waffle House doing this sort of campaign. Waffle House knows what kind of people stumble into its restaurants at 3 AM for a couple of post-bar waffles, and wild stories of incidents inside Waffle Houses are scattered across the web. A famous Vine shows two employees fighting while a customer calmly asks for a waffle in the background. And yet, Waffle House is well-liked. It knows what it is, and it doesn’t pretend every customer has a great time; they’re there for cheap food and the strange sense of community a 3 AM Waffle House has. Besides, Waffle House’s management style seems to keep customers and workers alike pretty happy!

Denny’s, another cheap diner with 24-hour locations, has incidents too, but they rarely go viral. It has acquired a cultural image of ‘you ate at Denny’s, you knew what you were getting into’. McDonald’s has unknowingly slipped into the same territory – appealing to adults on a 24-hour schedule means you’re going to get some strange customers. The company doesn’t seem to realize that’s where it’s at, so it doesn’t know how to lean into it yet the way other 24-hour restaurants do; it’s still very concerned with being family-friendly. Management- and marketing-wise, something’s obviously slipping if these stories were genuinely unexpected.

 

How Could They Fix It?

 

The long and the short of it is to listen. Burger King’s rat-bun scandal caused that store to shut down while Burger King corporate handled the issue. McDonald’s has fielded complaint-tweets before, but they don’t get much traction until a tweet racks up a bunch of retweets – so really, McDonald’s set itself up by using a scrollable tag. Companies that won’t respond to anything but highly public complaints do this to themselves. If a complaint to the manager or to corporate doesn’t change things, then of course the customer is going to resort to what works.

Listen to customers through official channels, and the unofficial ones that everyone can see won’t be flooded with horrible stories of missing quality!

 

Sources:

https://www.businessinsider.com/mcdonalds-twitter-campaign-goes-horribly-wrong-mcdstories-2012-1

https://www.forbes.com/sites/kashmirhill/2012/01/24/mcdstories-when-a-hashtag-becomes-a-bashtag/?sh=2aeca912ed25

GameStop: Lazarus of the Gaming World

Elizabeth History June 4, 2021

GameStop

 

GameStop began as an offshoot of Babbage’s and launched in 1999 with a small collection of stores in malls. A brief period of ownership by Barnes & Noble gave GameStop an opportunity to separate – once it went public, Barnes & Noble owned the majority of its stock, but distributed those shares to its own stockholders on the way out. As a result, GameStop held the majority of its shares and became independent.

GameStop is obviously different from Best Buy or RadioShack: it’s games-only. It thrived in an era when games were good but internet access was inconsistent. Game consoles were its bread and butter! It didn’t need to worry about anything but discs and game trade-ins, because online distribution just couldn’t compete. It was a profitable model for quite some time.

Other companies focused on rentals over trade-ins – primarily GameFly and Netflix – along with smaller independent stores in places GameStop hadn’t reached yet. GameStop’s growth correlated strongly with the rising popularity of console games: owning a disc was preferable to renting one from a service like GameFly, but it was expensive. Triple-A games could run up to $60, which pushed much of the market toward substitutes – trade-ins and store credit became the work-around. Customers were happy, and GameStop had a good reputation with both vendors and customers.

However, around 2016, things began to change.

 

2016

 

After a series of acquisitions and successful growth years, GameStop’s growth began to slow, then reverse – consoles now shipped with live services, so discs were less necessary. Even worse, big-ticket games were switching from physical discs to downloadable files, and the internet was generally good enough to let players download a game overnight. Rural areas still relied on discs, but the rural market wasn’t nearly big enough to make up for the people leaving discs behind.

GameStop’s trading system encouraged buying, but since any one game was often worth more on eBay than in-store, there was little reason to trade in to GameStop except to get rid of old discs without throwing them out. GameStop got the chaff; eBay got the wheat.

Customers also found that GameStop’s trading system encouraged bad behavior by employees – staff were expected to complete a certain number of trade-ins per period to hit a quota, and employees in slow stores found it difficult or impossible to hit their numbers honestly.

GameStop wasn’t selling enough new discs, and while it made a profit on returned discs via its Circle of Life program, it wasn’t making enough. Consoles and exclusive pre-releases became its primary moneymaker, but with disappointing release after disappointing release (and pre-order after pre-order), it was losing its footing through attrition.

Next best option: start acquiring other companies…? A couple of attempts later, GameStop found itself in debt. Acquiring a company means nothing if there’s no plan to make it profitable – you’re just buying another sinking ship. And if you buy a successful company and oust its leadership, you can turn it into a sinking ship! Think of Yahoo’s acquisition of Tumblr: bought for roughly a billion dollars, sold for about $3 million. No plan to fix issues, combined with a plan to remove features users liked, equals no profit. GameStop shot itself in the foot a couple of times this way, and this pattern has signed death warrants for other companies all by itself.

 

2018

 

Every new announcement of an Xbox or PS4 live service stripped away customers and market share. The Xbox Game Pass announcement alone snipped 8% off GameStop’s stock price in 2017, and it only got worse. Stock prices were depressed, and GameStop couldn’t find a solid buyer – its attempt to get acquired failed because it wasn’t producing enough revenue to fund its own purchase. It would be a loss, and there was no way to convince other companies otherwise. 2018 is also the year GameStop lost over half a billion dollars, the worst in its history, and the company was forced to eliminate its dividend, making the stock even less appealing.

From there it was a downhill slide. GameStop was forced into moves that hurt its future odds of survival. CEOs came and went; personal reasons and illness flare-ups prevented a cohesive line of succession from forming. Customers complained about the trade-in system, and employees complained that the scheduling left them as perma-part-timers despite years of loyalty.

 

2020

 

It’s 2020, and COVID-19 hits. GameStop has an opportunity to claw its way back, and it tries – GameStop-exclusive releases like new Funko Pops and a temporary bump from Nintendo Switch preorders help, and with the right spin, it could perhaps convince the public that its stock is improving. The stock sees little waves, but not the growth the company needs to secure loans and the like.

It wasn’t being artificially suppressed yet, but things still weren’t looking good. The stock price fluctuated some, usually below $20 or so per share. Without a dividend, a poorly aging games company doesn’t look like a good place to invest. And yet, GameStop improved slightly. Losses slowed down a touch – juuuust enough for a retail stock trader to see it and think, “Heh, maybe I’ll buy some GameStop. As a joke.” It’s not a joke anymore.

 

GameStop Now?

 

GameStop hit the news recently because hedge funds attempted to short it to death. The process of ‘shorting’ a stock often involves doing shady things that aren’t quite illegal to lower the stock price. Maybe hedge-fund affiliated folks tell the market that, in their professional opinion, GameStop’s not going to recover. Maybe they automate buying and selling at periods where they know retail traders are going to lose confidence. Or maybe they even get popular stock apps like Robinhood and the like in on it!

The outside public has no idea this is happening; they just see a floundering games company slowly asphyxiating.

By getting certain big buyers and personalities (like Citron) to signal a lack of confidence, short sellers lower the price, then profit when they buy back the shares they’ve borrowed. A stock can be dirt-cheap and the company can still make it, but the price signals a level of faith in the business: stocks that are ‘too cheap’ mean brokers don’t value the company, usually because they don’t think it will bounce back, which in turn makes banks reluctant to lend it money – so it can’t invest in things that might fix its problems.
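The borrow-sell-rebuy arithmetic described above can be sketched in a few lines of Python. The numbers below are made up for illustration – they are not GameStop’s actual figures:

```python
def short_profit(shares, sell_price, buyback_price, borrow_fee=0.0):
    """Profit from a short sale: sell borrowed shares now, repurchase them later.

    Positive when the price falls; losses are unbounded if the price rises,
    because the borrowed shares must be bought back at whatever the market asks.
    """
    return shares * (sell_price - buyback_price) - borrow_fee

# The bet pays off: the price is driven from $20 down to $5.
print(short_profit(1_000, 20.0, 5.0, borrow_fee=500.0))    # 14500.0

# A squeeze: forced to buy back at $300 instead.
print(short_profit(1_000, 20.0, 300.0, borrow_fee=500.0))  # -280500.0
```

The second call is the squeeze scenario: the shares still have to be returned, so the buyback happens at whatever price the market demands, no matter how high.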

Bad stock price = no loans = no fixing problems =  eventual death of the company, barring a miracle.

Luckily for GameStop, a miracle is essentially what happened. Somebody saw through all the garbage back-channel trading and realized that GameStop had been massively oversold. Short selling is legal, and heavy short interest isn’t suspicious in and of itself – but here, the short sellers had collectively borrowed and sold more shares than were actually available to buy back. The stock had been shorted so heavily that a couple of ‘whale buyers’ grabbing a percent or two of the shares made the price jump, and people noticed: it should have gone up, but not that much. By buying an absurd number of dirt-cheap shares, retail traders soaked up the shorted supply – and the price rocketed to a whopping $483 at its peak, up from single digits just months earlier.

Now, GameStop has some time to plan ahead.

And plan, they have! GameStop intends to pivot into selling gamer gear beyond the consoles and games it ordinarily stocks, effectively filling the niche left behind by the deaths of Fry’s Electronics and RadioShack. Amazon is still an issue, but Amazon generally can’t deliver parts the same day. With GameStop’s new commitment to price-matching, suddenly Amazon’s at a disadvantage! Same-day parts, less aggressive sales tactics, and new channels for selling games.

GameStop’s got a chance again. Thanks, retail stock traders.

 

Sources: https://www.bloomberg.com/news/articles/2021-01-29/citron-says-in-tweet-it-will-discontinue-short-selling-research

https://www.nasdaq.com/articles/gamestop-oversold-2019-01-29

https://www.wsj.com/articles/gamestop-eliminates-dividend-as-sales-fall-11559683871

https://screenrant.com/gamestop-price-matching-first-improvement-new-leadership/

 

 

What’s the Deal with NFTs?

Is it Bad for the Environment?

 

Well – that’s complicated. Blockchain technology already takes a lot of energy, but because it’s limited to the people and companies that can afford powerful computers, its impact is limited too. That doesn’t mean it will stay limited. NFTs use a form of blockchain tech that’s less efficient than the kind most cryptocurrencies use, so the power consumption of minting NFT tokens is going to be somewhat more intense than it is for cryptos – and that’s on top of the cryptocurrencies and blockchain tech already in use. What happens when everyone wants a piece of the Bitcoin mining success? Or when everyone wants to create these tokens? The effects trickle down, and the fact that we’re already seeing a power-usage impact from the limited number available now doesn’t bode well. Computers are getting more powerful anyway, but this could create demand for computers that are powerful enough for NFTs and overkill for everything else – and powerful computers consume more power than less powerful ones, all else being equal.

While energy demands are slowly being met with renewables, non-renewables still make up the majority of the energy supply in many places – and some countries are actively resisting the switch. If NFTs need a noticeable amount of power to be feasible, they’re going to produce a noticeable amount of pollution. That’s not to mention the difficulty of finding and mining the rare-earth metals used in some computer parts, several of which are scarce and expensive to extract. The mining needed to supply bottleneck parts could turn into a disaster all on its own, even if the power comes from non-polluting sources, unless suitable substitutes are found.

 

It’s an attempt at re-introducing scarcity

 

Much like signed works retain value, NFTs retain value – in theory. What makes them special is their ability to hold value in digital space as a “rare” item, via a blockchain token attached to the item. This should be impossible in a world like ours, where a downloadable image can be copied limitless times. In fact, some people hate the concept for this very reason – there’s no actual scarcity! The quality doesn’t change, the picture doesn’t change, and owning an NFT doesn’t give you the right to use the piece; it gives you the token and that’s it. It isn’t copyright. It’s an attempt to create a real, solid, theoretically permanent object in a digital, ever-changing world – an object with exactly the same properties as its copies, except for the token. It’s a digital Beanie Baby: scarcity for the sake of it.

Some say the current craze is almost certainly a bubble. If NFTs can be created out of anything, and it’s possible to mint an unlimited number… where does the value come from? A signed copy of an album is expensive because it’s rare; computers are expensive because they’re functional – but what does an NFT do? It’s not like Bitcoin, which is the money itself; an NFT is a “real” object that has to be sold for money first. Hence the “non-fungible” part of the name. NFTs are subject to all the same forces as any one-of-a-kind item, including becoming more common and becoming unpopular. Eventually, the price of an NFT might be tied to the real value of the item it represents (good!), and most real items aren’t worth what that Nyan Cat token sold for (bad!).

This whole system, from the outside, looks like people with supercomputers trying to make something to sell to users who don’t fully understand it.

 

It’s Confusing

 

NFTs don’t do anything but store hypothetical wealth, much like signed pictures do. Unlike signed pictures, which are physical items, other people can still view exact digital copies of what you’ve got – theirs just won’t have the token. And if people don’t care about the token, the token loses value.

Don’t let their newness confuse you – they’re only worth what the market says they’re worth. Right now, NFTs are being used like baseball cards and Beanie Babies, but disguised by the blockchain technology.

Many people conflate Bitcoin with blockchain technology, and Bitcoin costs a lot. Therefore, to investing newcomers, blockchain items must cost a lot. The folks who follow this mistaken train of thought buy in at the top of the bubble and think they’ve gotten a good deal. The second generation of NFTs has already experienced a price dip, and it’s not crazy to think prices will stay down.

NFTs will have to gain value by being sold as unique items instead of relying on their newness to sell.

For example, a tennis player is selling tokens of her arm, the way old-school baseball organizations would sell cards with their players on them. She’s a good tennis player, so people who like her will buy the NFTs at whatever price they find acceptable, considering the rarity of the ‘item’ – her arm. As long as people value her as a player, they’ll value the NFTs they buy from her. The same goes for baseball cards: resellers know popular players with few cards in circulation will sell for more than unpopular players. NFTs like these hold some kind of value – but buying an NFT without looking at what it’s attached to, at what’s supposed to give it its value, is like buying a blank index card at premium baseball-card prices.

 

It’s…Doing Weird Things

 

Where real objects can be used as leverage, digital NFTs are a whole other world, legally and economically. To expand on hooking up NFTs to real objects, you don’t own the tennis player’s arm, just like buying an NFT of a picture doesn’t give you copyright permissions. Buyers own a token that represents ‘something’, not the ‘something’ itself.

Holding NFTs is like holding stock in something without any of the rights that come with holding stock, and all of the liabilities. The price can do whatever it wants, it’s subject to fluctuations, and you have no say in what the artist or tennis player does, even if it reduces the value of your NFT. In essence, it’s a collectibles market, and the huge surge in pricing likely won’t stick around once people get used to the tech being here.

 

Bright Side

 

In a world where faked images are getting better and better, NFT tokens might help prevent some Photoshop fraud if applied correctly, which is valuable. NFTs could also follow contracts and other digital items where all parties should hold only one primary copy. The huge hype around NFTs as an investment overlooks many of these other benefits.

NFT items aren’t like signed lithograph prints or limited-edition cards, where illegitimate copies are of much worse quality; the only thing that increases an NFT art piece’s value is the token itself. There are high-quality pictures of Nyan Cat all over the internet, for example. Once collectors figure that out, all but the first generation of Nyan Cat tokens should lose value: there’s no real scarcity, only imagined scarcity created by people with powerful machines. If someone could have a print of their favorite artist’s work for cheap, at the same quality and definition as the original but without the artist’s signature, would they take it? A fair number of people are going to answer “yes”. In fact, even if the quality is noticeably worse, most people will still take a poor copy over no copy – look at how many people hang flat prints of Van Gogh’s work in their houses!

Even beyond art, NFTs may struggle to keep their footing with the physical items they’re attached to. If an item is really rare, the collector could sell it by itself, without the headache of getting the token back or negotiating with the buyer to keep the already-sold NFT from losing value. A whole new world of property law is on the horizon, and only time will tell how it goes.

 

 

Sources: https://www.bbc.com/news/technology-56371912

https://www.cnn.com/2021/04/05/investing/nft-prices-falling/index.html

https://en.wikipedia.org/wiki/Non-fungible_token (Wikipedia provides a good definition and additional reading on the tech behind NFTs, something other articles don’t do)

Apple Wheels – It’s Wheely About Advertising

Ah, Complaining.

 

Apple Wheels

 

The Apple Mac Pro cost several thousand dollars, and it looked like a cheese grater. The little Apple-branded wheels that let it roll cost about $700 – approximately the launch price of an iPhone 8. Oh, but don’t worry: you can buy the feet for a mere $300, if you just have to have Apple Brand. How did we get here? How did we, as a society, get to $700 computer wheels?

 

Brand = Trustworthy

 

Branding by itself is an interesting mark of human psychology: it’s a shortcut to trusting something! The brand of an item is itself purely imaginary – by itself, it adds no value to the final product except the value the consumer gives it in their mind. “A rose by any other name would smell as sweet.” Look at IKEA: its items come from many different factories, so customers shouldn’t just blindly trust whatever they buy, right? But because IKEA has put its name behind them, consumers buy the cheap shelves with the understanding that IKEA has endorsed them. If those same shelves sat somewhere else, un-branded, consumers wouldn’t trust them as much, and they’d sell less. Branding, in this way, is extremely valuable even though it’s intangible.

And it’s good for the customer, too! It allows them to make a more informed decision. Emotionally, people become loyal to brands that have served them well. Fortunately for the brand, they’ll stay loyal unless something seriously impacts their mental image of that brand.

All of this sounds totally logical and reasonable, right? It’s the way people have done business since cash was invented. It made sense to trust the smith who branded their creations over someone who wouldn’t put their name to what they made.

Strange things start happening when people like the brand more than the products, and we’ll get there.

Even though consumers may know the store brand comes from the same plant as the name brand, they may still pick the name brand. This is part of that trust – trying new things is scary, and keeping one constant (the brand) makes buying big electronics or new foods less scary. When consumers stop showing brand loyalty, or start complaining, the brand can throw in warranties or free spare parts and retain that goodwill. The store brand doesn’t stand a chance even if it’s literally identical.

 

Brand = Money

 

Branding can save a company even when it’s like modern-day Pyrex, whose heat tolerance differs depending on which factory a piece comes from. People love Pyrex so much that they post infographics online to help consumers identify the ‘good’ pieces. A change to the glass manufacturing process means the brand is no longer a reliable indicator of quality, but people still want to like Pyrex – otherwise they wouldn’t go through all this effort to find the right factory; they’d just buy something else. This is where a brand starts to become more important than what it’s selling.

People will pay a premium for a brand they trust, and companies know this. We see this everywhere, from cars to computers. If something was good, some people will believe it’s still good. That’s the business principle of goodwill. Sears might have survived a couple years off of goodwill and nostalgia alone.

Branding, therefore, can become a phylactery in the hands of a new controlling board. As soon as a company starts to rely on goodwill to sell items that they know other companies would have ditched, they become like Apple. Unlike Apple, many of them don’t sell high-ticket items as a luxury.

For Apple, the brand is demand. Where Steve Jobs might have demanded innovation from every item Apple released, the current board doesn’t. They know the reputation he built will sell items because people love Apple, and they know people want to look like they have money; by smearing Android products as ‘cheap’, Apple became a shortcut for ‘expensive’. Apple Wheels are the natural result of a market so hyperfocused on branding that it doesn’t care about functionality – a combination of goodwill and a little psychology gives us items that are overpriced purely for the sake of it.

The irony of all of this is that people will eventually buy the item as a ‘flex’, unironically, and then the product exists in a quantum state of sincerity. How does Apple live where others die?

 

Wheely Worth It

 

Apple sells sincere items alongside its ‘meme’ items. While Apple sells things like wheels and pens for hundreds of dollars, past generations of its phones are still priced about right for what the user gets. Factoring in R&D, factory overhead, and materials, a comparable phone made by a third party would be cheaper, but not by much; Apple phones carry only a small premium over comparable brands for the same computing power, which makes sense given Apple’s well-known tech support. Apple hasn’t gone full ‘Sears’ yet – there’s still real value in the brand, and it still releases ‘worthy’ items alongside the garbage ones. So why risk it all with wheels that cost as much as an iPhone, a genuinely expensive item?

Simple: it’s advertising, and fairly cheap advertising as campaigns go. Either ‘hypebeasts’ (people known for buying branded goods just because they’re expensive) buy the wheels to flex on others, or regular people discuss how out of line Apple is. Either way, Apple’s name is out there. Apple might not actually expect to make money on the wheels, but they’re so cheap to make that a single purchase could finance the production of 50 more sets – and that’s before counting the hypebeasts.

This trend of “flexing” expensive-but-nearly-worthless items has led to the Supreme brick, the Apple Wheels, and all sorts of other tomfoolery that relies on branding. Some brands now use branding as a shortcut to ‘luxury’ instead of ‘trust’. Luxury clothing labels have been doing this for years: the material is thin, the manufacturing costs cents on the dollar, and shipping it en masse costs maybe a couple of dollars, yet the final item is an $800 shirt – not because it’s made of especially good or hardy materials, but because it has a logo on it.

The only reason knockoffs aren’t worth as much is that the original brand has convinced people its product is ‘better’ because it costs more – not that it costs more because it’s better. And people believe it! Anyone self-conscious enough to buy fake AirPods or a third-party Gucci shirt is still pursuing that image of luxury, which is fantastic for the brand. The same goes for Apple Wheels, AirPods, and Supreme clothing: if consumers value them, then they’re worth it. The Apple Wheels are worth $699 to the people who want them, and that’s good enough to keep making them. They’re buying Apple Brand, after all.

Apple Wheel. It’s wheely about the advertising.

 

Sources:

https://www.apple.com/shop/product/MX572ZM/A/apple-mac-pro-wheels-kit

https://www.apple.com/shop/product/MXNM2ZM/A/apple-mac-pro-feet-kit

https://www.thedrum.com/opinion/2020/02/26/shout-or-whisper-dissecting-quiet-and-loud-luxury

 

Blizzard Entertainment’s 2012 Hack: An Example of How to Do It Right

In 2012, game developers were beginning to experiment with a principle known as “always on”. “Always on” had many potential benefits, but the downsides keep the majority of games from ever attempting it. Many of the notable standouts are games that require team play, like Fall Guys or Overwatch; games without main-campaign team play, like Diablo 3 and some of the Assassin’s Creed titles, tend to suffer for it. Lag, security exposure, perpetual updating, and so on are all very annoying to the end user, so players only tolerate them where they’re needed, as in those team games. It’s hard to say this hack wouldn’t have happened if Blizzard hadn’t switched to an “always on” system… but some of its users only had Battle.net accounts because of the always-on requirement.

Blizzard’s account system was designed with its larger team games in mind. It was forward-facing, and internet speeds were getting better by the day – users were just going to have to put up with it, Blizzard figured. Users grumbled, but ultimately Blizzard was keeping their data in good hands at the time. You wouldn’t expect Battle.net accounts created purely to play Diablo 3 to fare better than the user profiles in the Equifax breach, right? Yet Blizzard didn’t drop the ball here. What did Blizzard do right to prevent a mass meltdown?

Hacker’s Lament

 

The long and the short of it was that Blizzard’s systems had multiple redundancies in place to (a) keep hackers out and (b) make the info useless even if it did end up in the wrong hands. Millions of people had lost data in similar events before, and security experts had become more and more crucial to keeping entertainment data safe. Blizzard was preparing for the worst and hoping for the best, so when the worst struck, it was prepared.

The actual hack was described by Blizzard as ‘illegal access to our internal servers’. It released players’ email addresses (excluding China), the answers to security questions, and other identifying account information into the wild. However, because Blizzard protected passwords with the Secure Remote Password (SRP) protocol, the stored values were scrambled so thoroughly that the hackers might as well have been starting from scratch. This is still a problem, but it’s not a world-ending, ‘everyone has your credit card’ problem. Changing the account password and enabling 2FA was considered enough to shore up security.
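SRP is a full challenge-response protocol, but the core idea, that a stolen credential database is worthless without an enormous amount of extra work, can be illustrated with a plain salted-hash sketch. (The password and iteration count below are arbitrary examples, not Blizzard’s actual parameters.)

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Derive a slow, salted digest; stealing it doesn't reveal the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, digest):
    """Re-derive the digest from a candidate password and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000) == digest

salt, digest = hash_password("hunter2")
assert verify("hunter2", salt, digest)       # correct password checks out
assert not verify("hunter3", salt, digest)   # the digest alone is useless
```

An attacker holding only `salt` and `digest` has to grind through guesses one at a time, which is exactly the position Blizzard’s hackers were left in.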

 

Potential Issues

 

Lost email addresses aren’t as big of a problem as lost passwords, but they can still present an issue. Once the hacker knows an email address was used on a particular site, it’s possible to perform a dictionary attack, or regular brute forcing. Given enough time, this strategy can work, but the longer and more complicated the password is, the less likely it is to succeed on your account in particular.
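The arithmetic behind that advice is simple: every extra character multiplies the number of guesses an attacker must try. A quick sketch (the alphabet sizes here are illustrative):

```python
import math

def keyspace_bits(alphabet_size, length):
    """Bits of entropy for a uniformly random password."""
    return length * math.log2(alphabet_size)

# 8 lowercase letters vs. 14 characters drawn from 95 printable ASCII symbols:
assert round(keyspace_bits(26, 8), 1) == 37.6
assert round(keyspace_bits(95, 14), 1) == 92.0
```

Each added bit doubles the search space, so the jump from ~38 to ~92 bits makes brute forcing the longer password astronomically harder.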

A secondary problem is the lost security questions, which act as a weak second factor. Depending on the question asked, guessing a working answer or brute forcing it is dangerously easy. Sparky, Rover, and Spot are very popular names for American dogs, for example. If the hacker can identify that the player is American and then guess the name of their first dog, they’re in, and they can change the password to keep the legitimate player out. (Part of Blizzard’s response was forcing users to change their security questions for this reason.) 2FA that uses email or mobile is generally preferred.

Battle.net acted as an overarching account for all the games, and made the stakes higher for an account breach. All the online Blizzard games went through Battle.net. Losing access could mean losing access to hundreds of hours of game progress. Or worse: credit card data and personal info.

 

Online, Always, Forever

 

The event provided ammunition for anti-always-on arguments. There was no way to play Diablo’s latest game without a Battle.net account, so some users were vulnerable only because of the always-online system. If they’d simply been allowed to play offline, with no special account maintaining that always-online standard, there would have been nothing to hack! Previous Blizzard games didn’t require Battle.net, and people who stopped at Diablo 2 seem to have gotten off scot-free during the hack. That’s galling for users who only wanted to play Diablo 3 and found no value in anything else about the Battle.net system. Why make users go through all this work just to end up less secure?

When discussing always online, there are good arguments on both sides. Generally, always-on is better for the company, while offline gaming is better for the consumer. Always-on helps prevent piracy, and it supplies live data: companies need data on bugs and player drop-off times, which helps them plan their resources and organize fixes without disrupting the player experience.

On the other hand, consumers with poor internet are left out, as lag and bugs caused by poor connection destroy their gaming experience. As games move more and more to pure digital, buying a ‘used game’ only gets more difficult for the consumer. Companies treat purchased games as a ticket to a destination, rather than an object the consumer buys. Games used to be objects, where anybody could play the game on the disc even though save data stayed on the console. Buying access to Diablo 3 via Battle.net means that there’s no way to share that access without also allowing other people to access the Battle.net account, which stores the save data. It’s the equivalent of sharing the console, not just the disc.

 

Handling

 

The response to the stolen, scrambled passwords was for Blizzard to force-reset player passwords and security questions, just in case the hackers somehow managed to unscramble them.

2FA is always a good idea, and Blizzard strongly recommended it too. 2FA does a better job of alerting you than the default ‘your password has been changed’ email, which only arrives after the fact; by the time you’ve received that email, the hacker is already in. Depending on when you notice, they could have harvested all the data and rare skins they wanted before your support ticket is even filed! Setting up 2FA first means you’re notified before that happens.

All in all, Blizzard handled this particular incident well! Companies are required to inform their users about potential online breaches, but some companies do this with less tact than others. Formally issuing an apology for the breach isn’t part of their legal requirements, for example. What made this response possible in the first place was Blizzard’s competent security team, alongside a set of policies that were strictly followed. Logs and audits in the system ensured that Blizzard knew who accessed what and when, which is critical when forming a response. Blizzard was able to determine the extent of the problem and act on it quickly, the ultimate goal of any IT response.

 

 

Sources:

https://us.battle.net/support/en/article/12060

https://us.battle.net/support/en/article/9852

https://www.forbes.com/sites/erikkain/2012/08/09/its-official-blizzard-hacked-account-information-stolen/?sh=2ecadbc955d1

https://comsecglobal.com/blizzards-gaming-server-has-been-hacked/

https://medium.com/@fyde/when-too-much-access-leads-to-data-breaches-and-risks-2e575288e774

https://www.bbc.com/news/technology-19207276

In Terms of Memory

Elizabeth History, Innovation April 23, 2021

The first Bit of Data

 

A bit is a single binary digit; the name itself comes from shortening “binary digit”. A bit is the simplest possible data a machine can read: either a 1 or a 0. A yes or a no. True or false. The bit has been around longer than computers, originating with the punched cards that analog machines “read” as early as the 1700s.

 

Processing

 

If you’ve recently upgraded to Windows 10, you may recall having to check whether your computer is 32-bit or 64-bit. The number describes how much memory the processor’s architecture can address: is it equipped to use up to 32 consecutive bits as an address, or 64? A 32-bit computer has fewer possible memory addresses, not much more than 4 GB’s worth (2^32 addresses), while a 64-bit computer can theoretically address 16 exabytes (2^64 addresses), far more than any machine actually installs. This doesn’t mean a 32-bit computer can only store 4 GB of data; it just means it can only hand out about 4 GB worth of “names” for memory. The files themselves can be nearly any size as long as there’s storage available for them.
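The address-space arithmetic is easy to check directly: with N address bits, a processor can name 2^N distinct byte locations.

```python
def addressable_bytes(bits):
    """How many byte addresses can N address bits distinguish?"""
    return 2 ** bits

assert addressable_bytes(32) == 4 * 1024**3    # 4 GiB for a 32-bit machine
assert addressable_bytes(64) == 16 * 1024**6   # 16 EiB, the 64-bit ceiling
```

The gap between those two numbers is why the 4 GB RAM limit vanished the moment consumer machines went 64-bit.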

 

Then, a Byte

 

A byte is now eight bits, in compliance with international standard, but it didn’t always have to be. Originally a byte was as long as needed to show a character on screen, usually somewhere between two and ten bits, with exceptions down to one and up to forty-eight bits for certain characters. Eight-bit bytes became the standard through their convenience for the new generation of microprocessors in the 70s: eight bits in binary allow 256 possible arrangements of ones and zeroes. Sixteen bits would give far more possibilities than needed and could slow the computer down, while four bits only allow 16 values, so you’d have to combine groups of bits anyway to cover a useful character set.
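The combinatorics behind that trade-off: n bits give 2^n distinct values, and eight bits land in the sweet spot.

```python
# Distinct values an n-bit "byte" can represent:
assert 2 ** 4 == 16       # 4 bits: too few for a full character set
assert 2 ** 8 == 256      # 8 bits: ASCII plus room for extensions
assert 2 ** 16 == 65536   # 16 bits: overkill for 1970s hardware
```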

 

Alphabet

 

Eight sounds like the perfect combination of length and possible complexity, at least with the benefit of hindsight. Before 8-bit bytes came along, the government had struggled with incompatible systems across branches due to differing byte sizes. ASCII was the compromise, a seven-bit code, and when commercial microprocessors arrived in the 1970s, the industry compromised again with extended ASCII so that commercial and government systems could communicate.

However, not all extended-ASCII variants contained the same additions, so Unicode was later formed to bridge the gaps between versions. Unicode, a character encoding standard that includes the ASCII characters as its first 128 code points, is one of the most common encodings out there (usually via UTF-8, which is built on eight-bit units). You’ll still run into mismatches: if you’ve ever opened an article and seen little boxes where characters should be, the text was written with a larger character set than the viewer could handle, so it shows a placeholder where the unknown character belongs.
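A quick demonstration of the relationship: UTF-8 keeps every ASCII byte identical while extending past it, whereas pure ASCII simply has no slot for characters outside its 128 code points.

```python
text = "caf\u00e9"  # 'café' - the é lies outside ASCII's range

# UTF-8 encodes ASCII characters as themselves and é as a two-byte sequence:
utf8_bytes = text.encode("utf-8")
assert utf8_bytes == b"caf\xc3\xa9"

# Pure ASCII cannot represent é at all, so encoding fails outright:
try:
    text.encode("ascii")
    raise RuntimeError("should not get here")
except UnicodeEncodeError:
    pass  # this is the failure a viewer papers over with a box glyph
```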

 

Kilobyte

 

1,000 bytes of storage form a kilobyte, or KB. This is the smallest unit of measure that the average computer user is likely to see written on their device; not much can be done with less than a thousand bytes. The smallest document I can currently find on my device is an Excel file with two sheets and no equations in it, at 9 KB. A downloadable “pen” for an art program takes up 2 KB.

PCs before Windows had about 640 KB of conventional memory to work with, not including memory dedicated to essential operations.

The original Donkey Kong machines had approximately 20 kilobytes of content for the entire game.

 

Megabyte

 

A megabyte is 1 million bytes, or 1,000 kilobytes. Computers had made some progress past relays by this point, moving to hard disks for internal storage. IBM’s first computer containing a megabyte (or two) of storage, the System 355, was huge, and it was also one of the first models to use disk drives, which read faster than tapes. In 1970, users who didn’t want a fridge could invest in the now desk-sized 3 million bytes of IBM’s Model 165 computers, an improvement over GE’s 2.3 million bytes the year before. The year before that, Univac had unveiled a machine (covered in Byte Magazine) with separate cores tied together to give users between 14 and 58 megabytes of capacity, at the cost of space. IBM’s System 360 could reach up to 233 megabytes with auxiliary storage, but its size was… prohibitive, reminiscent of that first System 355.

Tapes and drums were competitive with the disk format for a while, but ultimately disk and solid state improved faster and won out (right now it’s looking more and more like SSDs, those solid-state drives, will outcompete disks in the future too). During the 80s, the technology improved so much that hard disks became standard (IBM released a home computer with 10 MB of storage in 1983) while floppy disks acted as media transport.

DOOM came out in 1993 and took up 2.39 MB for its downloadable file, with smaller, DLC-like packs of fan-created mods following along the way.

 

Gigabyte

 

A gigabyte is 1 billion bytes, or 1,000 megabytes. In 1974, IBM released a 20-foot-long beast of a storage system that stored up to 236 GB of data on magnetic tape. In 1980, IBM released another fridge, but this one stored up to a gigabyte of information! According to the Merriam-Webster dictionary, you can pronounce gigabyte as “jig-ga-bite”, which just… feels wrong.

In 2000, the first USB sticks (memory sticks, jump drives, etc.) were released to the public with 8-megabyte capacities, and they were so convenient that floppy disk drives began disappearing from computer designs in favor of USB ports. USB sticks then improved rapidly, soon reaching capacities of one, two, and four gigabytes while floppies struggled to keep up.

Besides being smaller and harder to break, those USB sticks also stored more. Where the first USB sticks held 8 MB, the standard floppy disk of the time held only 1.44 MB. Knowing DOOM’s size, it would take two floppy disks to hold the whole game, but a USB stick needed only one. By 2009, USB sticks with capacities of 256 GB were available on the market. That’s roughly 178,000 floppy disks.
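The comparison works out like this (decimal units, using the figures from the text):

```python
import math

FLOPPY_MB = 1.44   # standard 3.5-inch floppy capacity
DOOM_MB = 2.39     # DOOM's download size

# DOOM spans two floppies...
assert math.ceil(DOOM_MB / FLOPPY_MB) == 2

# ...while a 2009-era 256 GB USB stick replaces about 178,000 of them.
usb_mb = 256 * 1000
assert round(usb_mb / FLOPPY_MB) == 177_778
```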

 

Terabyte

 

A terabyte is 1 trillion bytes, or 1,000 gigabytes. The first commercial drive with a capacity of one terabyte was sold in 2007 by Hitachi, a Japanese construction and electronics company. The movie Interstellar, released in 2014, featured a depiction of a black hole known as Gargantua, and became famous when it closely resembled the first actual image of a black hole, captured by the Event Horizon Telescope collaboration in 2019. A ring of light surrounds the black hole in two directions: one from friction-heated material Gargantua has accumulated, one from the lensing of light around it. The gravity is so intense that light itself is pulled into orbit around Gargantua’s event horizon and kept there. It took 800 terabytes to fully render the movie and make Gargantua reasonably accurate in terms of light-lensing.

Petabyte

A petabyte is 1 quadrillion bytes, or 1,000 terabytes. This is typically cluster storage, and while it’s available for purchase, it’s very expensive for the average consumer. For comparison, while rendering Interstellar took 800 terabytes, storing it at DVD quality takes about 1/200th of a terabyte, so you could store approximately 200,000 DVD-quality copies of Interstellar on a petabyte. It took a little less than 5 petabytes of data to capture the image of the real black hole, M87.
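The same back-of-the-envelope math, in code (decimal units):

```python
PETABYTE_GB = 1_000_000  # 1 PB expressed in gigabytes
MOVIE_GB = 5             # 1/200th of a terabyte, a DVD-quality copy

# Copies of the movie that fit on one petabyte:
assert PETABYTE_GB // MOVIE_GB == 200_000
```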

 

Sources:

https://en.wikipedia.org/wiki/Bit

https://kb.iu.edu/d/ahfr

http://www.differencebetween.net/technology/software-technology/difference-between-unicode-and-ascii/

https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3155B.html

https://www.pcworld.com/article/127105/article.html

https://www.wired.com/2014/10/astrophysics-interstellar-black-hole/

https://www.merriam-webster.com/dictionary/gigabyte

https://www.nasa.gov/mission_pages/chandra/news/black-hole-image-makes-history

https://www.jpl.nasa.gov/edu/news/2019/4/19/how-scientists-captured-the-first-image-of-a-black-hole/

Attempts at Media Storage That Didn’t Get Big

Elizabeth History April 21, 2021

CEDs: Like Vinyl for Video, but more expensive

 

The CED (or capacitance electronic disc) was a disc that created pictures with the grooves in its surface, like a video/audio form of the vinyl record. It was expensive to produce, however, and just like vinyl it could degrade after being played too many times, since the reader physically touched the disc to read it. CED tech was also extremely sensitive to dust, even more so than vinyl records; it took a specialized caddy just to keep contaminants out. Consumers may have liked the idea, but the upfront cost was simply too much for the average Joe, especially with more affordable media types (like VHS tapes) already on the horizon.

CEDs were still being produced even as the manufacturer announced their cancellation. Profits understandably dipped while manufacturing wound down, and nobody wanted to pick the format back up. CEDs were a fine idea, but much like the eight-track, they were somewhat expensive to make and not very widely demanded.

 

Optical Cards: Like a CD-ROM, But Worse

 

The optical card briefly appeared as an alternative to CD-ROMs (ROM here stands for read-only memory). It’s very cool in theory: it can only be written once, it’s flexible, and it’s sturdy! With a capacity of several megabytes, it could make a perfect ID card, storing info to access right away. However, you don’t see many of them today. Why not? Optical cards seem like a perfect solution for a number of things.

It’s difficult to find a solid answer online, but my theory is that it did stuff that other products already did.

By the time it came out, it was easier to scan a code linked to files on a computer than to manufacture a card with that unchanging data inside it. Take the barcode: there’s a reason barcodes win out over RFID tags for inexpensive goods; building electronics into every item just isn’t worth the price when a computer lookup does the trick by itself. Do you invest in 500 small cards with electronics inside and a machine to read them, or 500 plastic business cards with a barcode and a machine to read them? One is going to be much cheaper.

Besides, magnetic stripe cards were already on the market, and machines could already read them. It was a short jump to include more info on the card that everybody already had a machine for, so magnetic cards dominated over opticals.

The other part of it (which information online does verify) was that storage was getting cheap, so cheap that optical cards fell out of use for other purposes, too. Take cameras, where Canon released an optical card format: SD cards could hold more than even CDs, so the optical card had no chance in that race. That’s not to say optical cards aren’t used at all, but they sit in an intersection that other products can fill with minimal additional effort. Legacy machines and certain companies use them, but they’re not very popular.

Good theory, niche too small.

 

Bubble Memory: Like A Magnetic DRAM Chip, but worse

 

Bubble memory was supposed to be a more compact, sturdier replacement for other memory types. Unfortunately, bubble memory sat at the worst intersection of expensive and power-hungry; even when it outperformed DRAM chips, semiconductor memory, or hard disks in one field or another, everything else wrong with it dragged it down to second-rate competitor status. On top of that, the main producers of bubble memory drives never got manufacturing down to a science, so it was prone to breakage and bugs even in the niches where it should have been competitive.

It got some use because it popped up in the middle of a DRAM chip shortage, and then promptly died back out once DRAM units were back on the shelf alongside other replacements. It was just too fiddly to keep!

 

Eight-Tracks: Like a cassette, but more niche

 

If you’re going purely off legacy, the eight-track is certainly worth mentioning. It’s on this list because other items from the same era survived where the eight-track died. Cassette players in cars are still so common that adaptors sell in drugstores, while eight-track adaptors are a specialty item sold online. The last generation of cars with eight-track players is largely off the road, while cars with cassette players were still made into the early aughts. Vinyl records are still sold in physical locations; eight-track tapes are not. Compact cassettes still feature in teen movies; eight-tracks do not. Eight-tracks hold a lot of nostalgia, but the effort to get one playing in this day and age is a massive pain.

It was a great idea, but it was outlived by other media.

 

ROM Cartridges for Not-Games: Like a floppy disc, but earlier

 

Once, cartridges were used across the board. Of course they were! They were convenient, and the earliest home computers already had a slot for them! Most people recognize them as video game storage, but they were capable of more than just that. Applications, extra RAM, extra storage – the cartridge, even the ROM-only cartridge, was almost as capable as a USB was, except for capacity.

Nothing really had much capacity at the time, though; the computers of the era usually held less memory than a modern cheapo USB stick. Other forms of media outstripped the cartridge for basic storage, but it reigned supreme for a few more years in video game media, before floppy discs started taking over there, too.

Its distant descendant, the CD-ROM, held more data more securely, so cartridges started to become outdated once optical media became available for purchase. Even video game consoles switched from cartridges to discs.

 

Sinclair ZX Microdrive: Like mini-USBs, but too early

 

A teeny-tiny drive with about 200 inches of magnetic tape inside sounded like a piece of spy equipment when it first launched. The Microdrive was especially small for its time and capacity, although it tended to wear out quickly, so it struggled to compete with bigger drives despite its many advantages. Similar drives released by competitors were in much the same position: the thinner the plastic, the faster it wore out. Smaller devices needed either less tape or thinner tape, and most chose thinner. It was the best consumer electronics could do at the time.

 

Magnetic Drums: Like tape, but bigger

 

Magnetic tape came out before magnetic drum tech did, although both saw use at release. The primary difference is in the reading: magnetic tape is moved in front of a single head, while the drum spins beneath several fixed heads. In computers, drum memory was eventually replaced by core memory. In a way, drum memory was the first time hard drives really took shape: hard drives follow many of the same principles, with data rotating beneath read heads and the machine selecting the correct head to find the data it’s looking for. Since it was a single drum instead of a stack of discs, its capacity understandably wasn’t as great as modern drives, even if the idea was there.

Drum memory certainly didn’t fail; the military used it for years, and IBM even used it up until the 90s in certain machines. However, its limited storage capacity made it a less popular choice than the also-widely-used tape, and then core memory, for regular consumers who wanted something smaller.

 

Sources:

https://www.pcmag.com/news/10-bizarre-pc-storage-formats-that-didnt-quite-cut-it

https://gizmodo.com/the-9-weirdest-data-storage-devices-ever-created-1782768448

https://en.wikipedia.org/wiki/Capacitance_Electronic_Disc

https://www.laserfocusworld.com/lasers-sources/article/16556371/optical-memory-cards-provide-secure-identification

https://legacybox.com/blogs/analog/when-did-cassette-tapes-replace-8-track

https://techreport.com/news/17216/will-flash-cards-replace-optical-storage/

https://tidbits.com/1990/10/01/canon-optical-card/

 

Magnetic Storage Types

 

Magnetic Tape

 

The most well-known version of tape-based magnetic storage is the kind used for media. When tape-based recording was first introduced (mostly post-WWII), it revolutionized the talk show and DJ scene of the time, because shows could be recorded and played later rather than broadcast live. Music recording tech already existed, but it required physical interaction from the DJ, so it wasn’t as hands-off as tape.

The second-most well-known version is the kind used for computer memory! Data is stored on the tape in the form of little magnetic ‘dots’ that the computer can read as bits. Before each pocket of data dots is a marker that tells the computer how long that pocket should be, so it knows where one set of data ends and the next begins. The polarity of each dot determines its bit value, and the computer can then read all these dots as binary code.

This method of data storage was a massive breakthrough, and other mediums continue to use the format even today! Tapes are still in use for big stuff – parts of IBM’s library rely on modern tapes, which can now store terabytes of information at a higher density than disks and flash drives alike. Other memory types relying on magnetic domains include hard disks and drums, to name a couple. All that separates them is material and know-how: the better the magnetizing material on the outside, the smaller the domains can get. The better the insulation between the domains and regular old entropy, the more stable the data is!

 

Carousel Memory

 

Carousel memory was an attempt at shrinking the space that magnetic tape took, pushed to the extreme. Instead of one very long piece of magnetic tape on a bobbin, the carousel memory system arranges several smaller reels of tape around a central read mechanism; getting to the right info is as simple as selecting the right reel. This has some issues, as you might imagine. Moving parts add complexity and an increased risk of mechanical failure to any device, and a device carrying thin, delicate magnetic tape is an especially bad place to start.

However, it wasn’t all bad. Carousel memory was actually quite fast for the time because it didn’t have to rewind or fast-forward as much to get to the right area of code. It could skip feet of tape at a time! This advantage declined as tape tech improved, but it still helped companies trying to squeeze the most life from their machines. The bobbins and individual ribbons were all replaceable, so the tape wasn’t worthless if it got torn or damaged. The carousel itself was also replaceable, so the many moving parts weren’t as much of a curse as they’d be on, say, the first hard disks, which had irreplaceable heads.

 

Core Rope Memory

 

Core rope memory featured magnetic grommets, or ‘cores’, strung on wire ‘ropes’, and those ropes were woven into a fabric the computer could read. In this read-only (ROM) format, a wire passing through a core read as a ‘one’, or ‘yes’; a wire bypassing it read as a ‘zero’, or ‘no’. In this way, the fabric was physically coded into binary the computer could use. ROM’d core-rope memory involved quite a bit of complicated weaving and un-weaving to get the wires through the right cores.
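The weave can be modeled as a toy lookup table. (The wire names and bit patterns below are made up for illustration, not taken from any real Apollo listing.)

```python
# Toy model of core-rope ROM: each core is a bit position; each sense wire
# either threads through a core (reads 1) or bypasses it (reads 0).
cores = ["c0", "c1", "c2", "c3"]

# Which cores each wire threads through - this weaving IS the stored data.
wires = {
    "word_a": {"c0", "c2"},        # encodes 1010
    "word_b": {"c1", "c2", "c3"},  # encodes 0111
}

def read_word(wire):
    """Recover a wire's word by checking which cores it passes through."""
    return "".join("1" if core in wires[wire] else "0" for core in cores)

assert read_word("word_a") == "1010"
assert read_word("word_b") == "0111"
```

Changing the program means physically re-threading the wires, which is exactly why weaving the Apollo ropes was so laborious.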

Core rope memory was chosen over tape memory for the Apollo missions, mainly for weight purposes. Tape was great, but not nearly dense or hardy enough for the mission yet, and neither were the other core modules available to NASA. A read-only core-rope module could store as many as 192 bits per core, where erasable core memory could only manage one bit per core. The read-only module determined each bit from the wires threading its cores, while the erasable version (core memory) read each core’s magnetic state instead. The final module sent to the moon weighed around 70 pounds and read fairly quickly; no tape, core memory, or hard disk available at the time could have matched that weight or speed.

Core-rope memory has its place. It’s very sturdy, and since it relies on the cores to act as bits, it’s possible to visually identify bugs before the memory’s even used, unlike core memory. Both are sometimes called ‘software crystallized as hardware’ because of the core system. It isn’t seen much today, since it is still incredibly bulky, but at the time of its use it was revolutionary.

 

Core Memory

 

Core memory is the older sibling of core rope memory, and it stores less. However, the people who got to work with it call it one of the most reliable forms of memory out there! Core memory works much the same as core rope memory, where the bits are stored in cores.

However, the formats are different. If core rope memory is like a binary-encoded scarf, core memory is more like a rug. Thin threads made of conductive material are woven into a grid pattern, with cores suspended where the threads cross. The computer understands these threads as address lines, so asking for a specific bit to be read is as simple as locating the X and Y address of the core. A third line, the sense line, runs through each core on the diagonal, and this is the thread that does the actual reading.

When asked to read, the computer sends a current down the address threads and uses the sense line to see whether the core flips its magnetic polarity. If it doesn’t flip, it held a zero. If it does, it held a one, and the reading process has flipped it to zero. This method is known as ‘destructive reading’ as a result; the computer compensates by flipping the bit back to where it was after the read. Due to its magnetic nature, the core then keeps this info even after power is cut!
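The destructive-read-then-restore cycle can be modeled in a few lines. (A deliberately simplified sketch; a real core plane reads and rewrites a whole word of cores at once.)

```python
class CoreMemory:
    """Minimal model of core memory's destructive read with write-back."""

    def __init__(self, size):
        self.cores = [0] * size

    def write(self, addr, bit):
        self.cores[addr] = bit

    def read(self, addr):
        bit = self.cores[addr]
        self.cores[addr] = 0   # sensing the core forces it to zero...
        self.write(addr, bit)  # ...so the controller restores it immediately
        return bit

mem = CoreMemory(8)
mem.write(3, 1)
assert mem.read(3) == 1
assert mem.read(3) == 1  # the value survives thanks to the write-back
```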

The National MagLab hosts an excellent interactive diagram of the system (linked in the sources below).

Even though core memory improved the bit-to-space-taken ratio, it still aged out of the market. With the price of bits decreasing rapidly, core memory got smaller and smaller, but the nature of its assembly meant it was almost always done by hand; all competitors had to do was match the size and win out on labor. Soon its main market was taken over by semiconductor chips, which are still used today.

 

Magnetic Bubbles

 

Magnetic memory has had strange branches grow off the central tree of progress, and magnetic bubble memory is one of those strange shoots. It was developed at Bell Labs by Andrew Bobeck, who also worked on other forms of memory under AT&T. Bubble memory never took off the way other magnetic memory styles did, although it was revolutionary for its compact size; before the next big leap in technology, people thought this was the big leap. It was effectively shock-proof! Unfortunately, better DRAM chips arrived shortly after it hit the market and crushed bubble memory with improved efficiency.

Bubble memory worked by moving the bit to be read to the edge of the chip with magnetic fields. The magnetic field itself moves the bits, much the way electrons move along a wire when charge is applied, so nothing physically moves within the chip! It was cool tech, and it did save space; it just didn’t hold up against semiconductor memory chips. Bubble memory saw a spike in use during a DRAM shortage, but it was so fiddly that as soon as DRAM chips were available again, it went out of style.

 

Semi-Conductor DRAM – Honorable Mention

 

DRAM chips are a lot like core memory, in that the device reads the state of a physical object to determine the bit. In semiconductor chips, that physical object is a tiny capacitor, hooked up to a tiny transistor, on semiconductive metal-oxide material. Instead of determining magnetic state, the device checks whether the capacitor is charged: no charge = 0, charge = 1. These chips aren’t technically magnetic, but since they’ve killed so many of the other options, here they are!

DRAM stands for Dynamic Random-Access Memory, and it means that the memory can be accessed randomly instead of linearly. As long as the computer knows where the data’s stored, it’s able to pull it without pulling other files first. They’re still being sold today!

 

Magnetic Disk (Hard Disk Drive)

 

Hard drives work more like tape than core memory. A Hard drive is a platter (or a stack of platters) with a read-write head hovering above it. When you want to save data, the hard drive head magnetizes areas in binary to represent that information. When you want to read or recover that data, the head interprets these areas as bits in binary, where the polarity of the magnetized zone is either a zero or a one.

The zones of magnetization are incredibly tiny, which makes hard drives one of the more demanding memory forms out there, both now and back then.

Early hard drives could suffer from ‘demagnetization’, where a disk’s magnetic domains sat too close together and gradually drew each other out of position, slowly erasing the information on the disk. This meant the disks had to be bigger to hold the same data (like everything else at the time) until better recording materials came along. Even though they held more capacity at launch, they were passed over for smaller and more stable media like tape and core memory. The very early drives developed by IBM were huge; washing-machine huge. They didn’t respond to requests for data very quickly, either, which further pushed reliance on tape and core technology.

Over time, hard disks improved dramatically. Arranging the magnetic zones vertically next to each other (perpendicular recording) instead of end-to-end created even denser data storage, enough to outcompete other forms of media storage entirely. Especially small hard drives also add a layer of non-magnetizable material between the recording layer and a reverse-magnetized ‘reinforcement’ layer that keeps the data aligned correctly. This crams even more capacity into the disks!

Some time in the 80s, hard drives finally became feasible for personal computers, and they’ve been the standard ever since. SSDs, which have no moving parts whatsoever, are gaining ground in the market, but due to their different storage techniques they can’t be truly, irrevocably erased the way hard drives can. Hard drives are going to stick around a while, especially in the medical and military industries, as a result!

 

Sources:

https://spectrum.ieee.org/tech-history/space-age/software-as-hardware-apollos-rope-memory

https://www.apolloartifacts.com/2008/01/rope-memory-mod.html

https://electronics.howstuffworks.com/vcr.htm

http://www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html

https://www.computerhistory.org/revolution/memory-storage/8/253

https://nationalmaglab.org/education/magnet-academy/watch-play/interactive/magnetic-core-memory-tutorial

https://www.rohm.com/electronics-basics/memory/what-is-semiconductor-memory

https://cs.stanford.edu/people/nick/how-hard-drive-works/

https://psap.library.illinois.edu/collection-id-guide/audiotape

https://www.engadget.com/2014-04-30-sony-185tb-data-tape.html

https://en.wikipedia.org/wiki/Carousel_memory (all primary sources regarding carousel memory are in Swedish)