
Emulators And The Legal Gray of AbandonWare

Elizabeth Technology April 23, 2024

What is an Emulator?

An emulator is a program that emulates a game console, usually for the purpose of playing a game that is – by price, age, or device – inaccessible. Streamers commonly use emulators to play Pokémon games made for the Game Boy, so they can screen-record their gameplay directly from their computer instead of having to somehow hook the Game Boy up to it. Zelda fans might want to play Ocarina of Time, but they might also find that the console to play it on is awfully expensive for one game, while an emulator is cheap or even free! In certain cases, games are geolocked – countries restrict access to certain forms of art as a means of censorship. Emulators can make those games accessible to people who want to play them in that country.

In the 1990s, consoles were on top when it came to games. Computers were rapidly gaining in power, however, and some folks realized that a console could be recreated on a home computer. The first emulators were born of reverse-engineering console code. They evaded legal action by only recreating devices that were already outdated, but that changed with a major emulator made for the Nintendo 64 while it was still in production. Nintendo pursued legal action to stop the primary creators, but other folks who had already gotten their hands on the source code kept the project going.

Ever since then, emulators have lived in a strange space of both making games available and making them so available that the parent company decides to step in and try to wipe it out, which is nearly impossible once it’s out on the open web. Gamers simply won’t allow a good emulator to die!


Copyrights are crucial to the gaming ecosystem, and it’s a delicate balance of allowing fan art, but disallowing unauthorized gameplay. Allowing game mods, but disallowing tampering that could lead to free copies being distributed against the company’s wishes. Allowing fun, but not theft. Copyright laws are always evolving – new tech comes with new ways to copy, create, and distribute intellectual property. Generally, though, copyright falls back to permission: did the original company intend for their IP to be used in this way?

Emulators and copyright don’t get along very well at all! Emulators are, by their very definition, creating access to the game in a way the original company didn’t intend. As such, it’s unofficial, and if money is exchanged, it’s not normally between the copyright holder company and the customer, it’s the customer and some third unauthorized party.

Games aren't selling you just the physical disc. You're buying a license to play the game. If you take it as far as Microsoft intended back when the Xbox One was coming out, friends are only allowed to come over and play with you on your license because the company can't enforce it any further than that. It's a limitation of the system that they can't keep you from sharing discs or accounts.

Not every company thinks like this (see the PlayStation 5 and a number of more recent cases regarding digital content ownership), but that's the most extreme possible interpretation. You bought a disc so you could play a copy of their game that they have licensed out to you. You own the right to play that copy of the game; you don't own the game itself.

Consider: Death of a Console

When a console dies, it’s taking all of its content with it. There is no more money to be made off of it, and the games are going to slowly disappear into collections and trash bins.

Does art need to exist forever, or is it okay if some art is temporary? Not every Rembrandt sketch is still in circulation – some were just sketches, and he obviously discarded some of his own immature work. Immature art is interesting to see, but it's not what the artist wanted their audience to see; otherwise it would have been better kept. Think about the ill-fated E.T. game that Atari made: they weren't proud of it, they didn't want it seen, and they saw fit to bury it. So they buried it. It was directly against their wishes for people to find this game and then play it. Emulating it is obviously not what the copyright holder wants.

But then consider all the little games included on a cartridge that's just forgotten to the sands of time, made by a programmer who didn't want it to fade away. Acrobat, also for the Atari, isn't very well-remembered, but it still made it onto Atari's anniversary console sold in stores. 97 games on that bad boy, and Acrobat was included. It's not a deep game – it's nearly a single-player Pong. But the programmers who made it didn't ask for it to be excluded from the collection, so some amount of pride must exist over it, right? Does a game have to be good to be emulated? Is only good art allowed to continue existing officially?

Is all art meant to be accessible to everyone?

If some art is made with the intent to last forever, is it disregarding the creator's wishes not to emulate it, even when emulation goes against their production company's wishes? If a corporate exec decides a work of art is better used as a tax write-off than launched even though it's already complete, is it better to listen to that exec, or to the dozens – perhaps hundreds – of people opposing the exec's will?

If art’s made to last forever but the artist (and society) accepts that that’s simply unrealistic, is it weird to emulate it, in the same way it’s weird to make chat-bots out of dead people?

When you get past the copyright, it’s a strange, strange world to be in.

Ethical Dilemma

Stealing goes against the ethics of most societies, modern or not. The case against emulators is that emulation is stealing. It often is! An emulator/ROM combo (ROMs act as the 'disc' or 'cartridge' for the emulator) for Breath of the Wild was ready just a few weeks after the game launched, which could have seriously dampened sales if Nintendo hadn't stepped in to try and stop it. That first emulator, the one for the Nintendo 64, also drew a lot of negative attention for the same reason: potentially siphoning away vital sales.

However, there’s a case to be made for games and consoles that aren’t in production anymore.

Is this a victimless crime, if the original game company really can’t make any more money off of it? It’s one thing to condemn piracy when the company is still relying on that income to make more games and pay their workers, it’s another entirely when the game studio isn’t interested in continuing support, and the console had a fatal fault in it that caused many of them to die after 10 years. That game is as good as gone forever without emulators. With no money to be made, why not emulate it?

In less extreme circumstances, the console's still functioning, but the cartridges that went with it are incredibly rare. The company could potentially make money off of the game if they someday decided to remaster it, but that's unknowable. Licenses could be made available for purchase… but they aren't right now.

Or, even better, the cartridges are still available for purchase in the secondary market. You just don’t happen to have the console, which has now spiked to a cost of 400 dollars due to reduced supply over time. You buy the cartridge – you’re still buying the license, you just don’t have the car, right?

According to copyright, you need a specific car for a specific license, but ethically, you’ve done the best you can as a consumer.

Brand Name

Much like Disney did with Club Penguin's many spinoffs, emulators are kind-of sort-of overlooked up until they start eating into sales. More aggressive companies will go after emulators before they blow up (see Nintendo challenging Yuzu, a beloved emulator), but most companies just don't want to spend money enforcing an issue like emulators – their game is still being played, their brand is still out there, and users are going to be very upset if a big company steps in and ruins the fun when it doesn't need to. It may do more harm than good to try and wipe the emulator out when most people want to do the right thing.

Obviously, they'll need to put a stop to emulating new games – the goal is to spend just enough money to do that effectively without also overstepping and destroying emulators for consoles no longer in production. It takes money to make games, and games should earn money as a result. Removing emulators for games and consoles no longer in production doesn't help them earn money – as such, many are allowed to stay. For now.


How To Handle A Hack: Blizzard in 2012

Elizabeth Technology April 2, 2024

In 2012, game developers were beginning to experiment with a principle known as "always on". "Always on" had many potential benefits, but the downsides kept the majority of games from ever attempting it. Many of the notable standouts are games that require team play, like Fall Guys or Overwatch. Others without main-campaign team play tend to fall behind, like Diablo 3 and some of the Assassin's Creed games. Lag, security concerns, perpetual updating, etc. are all very annoying to the end user, so they'll only tolerate them where they're needed, like in those team games. It's hard to say that this hack wouldn't have happened if Blizzard hadn't switched to an "always on" system… but some of their users only had accounts because of the always-on requirement.

Blizzard's account system was designed with their larger team games in mind. It was forward-thinking, and internet speeds were getting better by the day. Users were just going to have to put up with it, Blizzard figured. Users grumbled, but ultimately Blizzard was keeping data in good hands at the time. You wouldn't expect accounts created purely to play Diablo 3 to come out of a breach in better shape than the user profiles in the Equifax breach, right? But Blizzard kept hold of the ball here! What did Blizzard do right to prevent a mass meltdown?

Hacker’s Lament

The long and the short of it is that Blizzard's storage had multiple redundancies in place to A) keep hackers out and B) make the info useless even if it did end up in the wrong hands. Millions of people had lost data in similar events before, and security experts were becoming more and more crucial to keeping entertainment data safe. Blizzard was preparing for the worst and hoping for the best, so even when the worst struck, they weren't left floundering, telling people they'd lost their credit cards.

The actual hack was defined by Blizzard as 'illegal access to our internal servers'. It released the listed email addresses of players (excluding China), the answers to security questions, and other essential identifying information about accounts into the wild. However, because Blizzard authenticated logins with the Secure Remote Password (SRP) protocol, the stored password material was scrambled so thoroughly that the hackers might as well have been starting from scratch. This is still a problem, but it's not a world-ending, 'everyone has your credit card' problem. Changing the password on the account and enabling 2FA was considered enough to shore up security.

Potential Issues

Lost email addresses aren't as big of a problem as lost passwords, but they can still present an issue. Now that the hacker knows an email address was used on a particular site, it's possible to perform a dictionary attack or regular brute-forcing! Given enough time, this strategy will eventually work, but the longer and more complicated the password is, the less likely it is to succeed on your account in particular.
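Why length and variety matter comes down to arithmetic: exhausting a password space takes alphabet_size ** length guesses, which grows explosively. The guess rate below is a made-up figure for a hypothetical offline cracking rig, purely for illustration:

```python
GUESSES_PER_SECOND = 1e10  # hypothetical cracking rig; real-world rates vary wildly

def seconds_to_exhaust(alphabet_size: int, length: int) -> float:
    """Worst-case time to try every password of a given alphabet and length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters vs. 12 characters drawn from all 94 printable ASCII symbols
print(f"{seconds_to_exhaust(26, 8):.0f} seconds")          # about 21 seconds
print(f"{seconds_to_exhaust(94, 12) / 3.15e7:.1e} years")  # over a million years
```

Adding four characters and a richer alphabet turns "cracked over lunch" into "outlives the hacker" – which is the whole case for long, complicated passwords.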

A secondary problem is the lost security questions. Those are a form of 2FA. Depending on the question asked, guessing something that works or brute-forcing it again is dangerously easy. Sparky, Rover, and Spot are very popular names for American dogs, for example. If the hacker is able to identify that the player's American, and then guess the name of their first dog, they're in! They can change the password to keep the legitimate player out. (Part of Blizzard's response was forcing users to change their security questions for this reason.) 2FA that uses email or mobile is generally preferred.

Making matters worse, acted as an overarching account for all of Blizzard's games, which raised the stakes of an account breach. All the online Blizzard games went through it. Losing access could mean losing hundreds of hours of game progress. Or worse: credit card data and personal info.

Online, Always, Forever

The event provided ammo for anti-always-on arguments. There was no option to skip having a account if you wanted to just play Diablo's latest game. Some users were only vulnerable as a result of the always-online system. If they'd simply been allowed to play offline, with no special account to maintain that always-online standard, there wouldn't have been anything to hack! Previous Blizzard games didn't require – people who stopped at Diablo 2 seem to have gotten off scot-free during the hack. This is annoying to the many users who only wanted to play Diablo 3 and might not find value in anything else about the system. Why bother making users go through all this work just to be less secure?

When discussing always-online, there are good arguments to be made for both sides. Generally, always on is better for the company, while offline gaming is better for the consumer. Always on helps prevent pirating, and it gives the company live data. Companies need data on bugs and player drop-off times, which can help them plan their resources better and organize fixes without disrupting the player experience.

On the other hand, consumers with poor internet are left out, as lag and bugs caused by a poor connection destroy their gaming experience. As games move more and more to pure digital, buying a 'used game' only gets more difficult for the consumer. Companies treat purchased games as a ticket to a destination rather than an object the consumer owns. Games used to be objects: anybody could play the game on the disc, even though save data stayed on the console. Buying access to Diablo 3 via means that there's no way to share that access without also letting other people into the account, which stores the save data. It's the equivalent of sharing the console, not just the disc.


The response to the stolen, scrambled passwords was for Blizzard to force-reset player passwords and security questions, just in case the hackers somehow managed to unscramble them.

2FA is always a good idea, and Blizzard strongly recommended it too. 2FA will do a better job of alerting you than the default 'your password has been changed' email will after the fact. By the time you've received that email, the hacker is already in. Depending on when you notice, they could have already harvested all the data and rare skins they wanted by the time you get your support ticket filed! Setting up 2FA first means you're notified before that happens.

All in all, Blizzard handled this particular incident well! Companies are required to inform their users about potential online breaches, but some companies do this with less tact than others. Formally issuing an apology for the breach isn’t part of their legal requirements, for example. What made this response possible in the first place was Blizzard’s competent security team, alongside a set of policies that were strictly followed. Logs and audits in the system ensured that Blizzard knew who accessed what and when, which is critical when forming a response. Blizzard was able to determine the extent of the problem and act on it quickly, the ultimate goal of any IT response.


Pirating Is a Crime

Elizabeth Technology March 26, 2024

Piracy is a crime. Don't pirate things. Rights holders are serious about it, and there are real reasons for that beyond "big music corps are people too".

Why are the fines so steep?

Piracy seems victimless. In reality, the victims are barely harmed by each individual instance, up until the cumulative effect starts to erode their desire to create. Art has a price, and if folks aren't willing to pay it, art disappears. Not all of it, of course, but the niche, unusual, and otherwise less profitable stuff goes by the wayside.

Fines are a strong motivator for many people – the main goal is to make piracy so undesirable that nobody does it, for fear of the fines if not for fear of being a thief (or "thief", depending on how you define copyright violation). Many people don't see anything actually wrong with stealing content from big-name artists. What would the harm be? They aren't really wrong, but they're not right either – one person won't affect that artist very much by themselves, and the amount missing from their art consumption is maaaybe a couple of pennies.

For example, Pharrell reportedly only made something like $2,000 on Spotify while he was #1 on the Top 40. Pirating that song would cost him maybe a twentieth of a cent – more in potential lost sales if you were intending to buy it on iTunes but went to LimeWire instead. However, now that Spotify doesn't monetize songs under 1,000 listens, skipping the legitimate channel could make a bigger difference to smaller artists. It's like littering: if everyone left their trash at the park, the park would close for cleanup. One person is just an inconvenience to the groundskeeper. One plastic bottle won't ruin the park's water, but dozens will, and the rangers only need to catch one litterer to get some of the others to stop. Fines keep litterers and minor pirates alike in check. If everyone thinks 'my trash won't hurt', you get a trashed park. If every pirate thinks 'my pirating won't hurt', you get musicians and moviemakers on strike.

Besides, fines for piracy are massive. Up to $250,000, and possible jail time!

Who are you actually going to hurt?

Small artists who get ripped off with copyright breaches and stolen songs are the people on the cutting edge of the new. New music, new tech, new art – small artists create things that you won't find in Bed Bath & Beyond or on the Top 40. Cost these people money, and you're destroying a complicated ecosystem of inspiration and passion projects that the Top 40 is not looking to recreate. Layer Ariana Grande songs over each other and you'll discover patterns you didn't notice before – patterns the producers definitely did notice, working down a checklist to get that song out and onto the charts.

Small bands don't have the same resources. When something sounds good, it's because they made it sound good themselves – you're rewarding individual talent by not pirating. Tame Impala's Kevin Parker didn't have access to a recording studio for the project's first album. He wrote the songs himself. He mixed them himself. The same goes for Billie Eilish and any number of other bedroom musicians (musicians who record their music in their bedroom). No disrespect to Ariana Grande, but she can't make albums with the creative freedom that a bedroom band can. The people who invested in her can't afford a flop, so she always gets breathy, poppy, peppy songs with high notes. It's her strength, so it's all she gets to release. She has creative input, but not a lot of control.

Pirating wouldn’t directly affect her unless everybody started pirating. It would take significantly less to accidentally crush something like early (early!!!) Tame Impala, or early Billie Eilish, and you might not hear anything like them ever again.

Don’t pirate the music if you want more of it!

Movies: More Serious

Movies are more serious to pirate. Theaters run on tight margins to keep tickets cheap. That's why a cup of popcorn is six dollars – the concessions cover the operating costs, while the ticket mostly pays the studio for the rental of the film itself.

The studio puts its money towards making back the budget of the film, and if the film does well enough, there may be a sequel. Trolls, for example, did well enough for studios to invest in Trolls: World Tour. The same goes for Tenet, and for Sonic. They made enough money back that the studio wants to keep the gravy train running. Not all sequels are good – and some may say that money shouldn’t be running art – but the world we live in has these rules. More money = more creation. Many talented artists literally cannot afford to create art full-time if they aren’t being paid for it.

However, assume pirating eats into the profit. One guy copies the file and sends it out and around, and a bunch of people see the pirated version on disc or as a download. They don't want to spend money to see it again. Pirating takes the movie off the watchlists of hundreds or thousands without actually funding the movie. That wouldn't necessarily have ruined Sonic or Tenet, but for an indie project, it can be devastating.

Pirating can happen at the theater, too! You think you're watching a legitimate copy of Fast and Furious 8, but the owner pirated it from a connection who got it early for review. That theater makes blockbuster-movie money, and the studio sees none of it. Stuff like that is why the fines are so huge – that owner would gladly do it again for a $2,000 fine. Illegitimate rental places were also a real problem: Blockbuster franchises (and small locally-owned rental stores) making illegal copies of recent hits were a profit-killer.

And as small bands suffer more than big bands, so too do small movie studios. Some of the wildest, most creative movies ever pushed to the big screen come out of small studios. The group that made Coraline, for example, is relatively small compared to Disney or Pixar. Pirating a newly released movie en masse could seriously dampen their funding for the next movie even if it wouldn’t make a dent for Disney.

It’s cumulative. They won’t catch everyone who pirates… but they’ll get enough to be a deterrent. Good art comes from protecting the artists who made it!


Sony’s DRM Nightmare

Elizabeth Technology March 21, 2024

In 2005, an organization was caught covertly installing a program similar to a rootkit onto consumer devices without warning. For those who haven't heard the term before, a rootkit is simply a program designed to remain unfindable on a device. Rootkits aren't all bad, but their difficult-to-detect nature and ability to evade even aggressive anti-virus software makes them a top-of-the-line tool for hackers. Back to the story.

The rootkit was on the lookout for ‘suspicious activity’, and if it detected any, it would quietly alert the parent company. However, even if you had nothing to hide, you still had something to fear: the rootkit left a gaping security hole, and a smart enough hacker could piggyback off of it to get Trojan Horses, Worms, and other nasty bugs in without alerting the computer that “hey, there’s an .exe file doing weird stuff!”

The rootkit was designed to hide itself, and it would hide the bugs behind it. There was no mention of this anywhere in the EULA for the program that carried the rootkit. The parent company hadn't meant to leave a backdoor, but they did, and attempts to fix it without removing their own program just made the problem worse. An "uninstaller" that faked fixing it only hid the program deeper in the system, and genuinely trying to uninstall it could brick the computer, depending on which program you got. They'd really screwed themselves, and they hadn't expected to get caught.

This wasn’t some Russian hacking scheme, or some government overreach – it was Sony, attempting to keep copyrighted material off of pirating websites. Talk about an overreaction.

The History

At some point, a company has to admit it would rather ruin the legitimate user’s experience than let a pirate go unpunished. That’s very understandable: stealing is wrong, and smug pirates behaving like they’ve gotten one over on ‘the system’ are frustrating. Ordinary responses to this can be anything from asking for the license # on the inside of the clear case to more subtly ruining the audio quality of pirated copies. This is a normal level of copyright protection. Very determined pirates could still get around these measures, but hey, you can’t spend all your resources on the fringe cases.

Companies are aware of this, and some begin to factor 'unstoppable piracy' into their calculations – you know, like grocery stores factor in 'lifting loss' and spoiling produce. Companies usually determine they'd be spending more on preventative measures than they'd be keeping on the shelves. Theft is wrong, but so is littering and driving without a license. Somehow, all three still happen anyway. Sony, though, was very mad that pirates were getting away with fresh content, and they wanted to do the equivalent of TSA pat-downs on everybody at the exit of the grocery store to stop a small percentage of thieves. They didn't care anymore; nobody was going to get away with it.

Was it Reasonable?

Napster and LimeWire had made inroads into the music industry's profit. Pirating copyrighted content had only gotten easier with the rise of the internet, and Sony realized it was nigh impossible to find the illegitimate downloaders; uploaders were only marginally easier. So they decided to go for the source – and to hit hard.

“The industry will take whatever steps it needs to protect itself and protect its revenue streams… It will not lose that revenue stream, no matter what… Sony is going to take aggressive steps to stop this. We will develop technology that transcends the individual user. We will firewall Napster at source – we will block it at your cable company. We will block it at your phone company. We will block it at your ISP. We will firewall it at your PC… These strategies are being aggressively pursued because there is simply too much at stake.” – Sony Senior VP Steve Heckler

This quote dates to 2000, years before Sony merged with another company, BMG. BMG had had an incident in Europe in the early 2000s, when they'd released a CD without warning users of the copy protection inside. Apparently, burning money to replace those CDs (and burning goodwill) was not enough of a lesson, and Sony and BMG together prepared to take a stand against pirates.

The Problem

They’re going after the big boys, the folks downloading music to upload everywhere else…for free.

These are the people depressing profits, in theory. Some companies theorize that once these people are gone, the people passively pirating by downloading stuff from them will also disappear and go back to buying the content. They’re somewhat right, and this audience shrinks over time. More on that later.

This is illegal and very annoying! The estimated lost sales from piracy were in the billions, and many companies were beginning to look at more intense DRM: Digital Rights Management.

To some people, DRM is the root of all evil, the seed of the eventual downfall of consumers' rights. After Sony's screw-up, they were right to call it such. John Deere, Apple, Sony, Adobe, etc. are all slowly eating away at their own best features for the sake of pushing users into proprietary software – software users aren't allowed to repair because of DRM. Take Deere: if a new Deere tractor detects a common third-party repairman's diagnostic software, it will stop working until you call out a Deere technician. This obviously drives up demand for Deere technicians, and it's horribly restrictive to the user. Lawsuits are in progress right now over this, because the obvious result is that Deere can cost you your farm by doing it.

To others, DRM is an essential part of the free market. Companies should be allowed to protect what they made, and if users find their methods extreme, they shouldn’t have bought it. And in less extreme circumstances, they’re right! That’s what the EULA, the End User License Agreement, is for. The user can decide if they’re willing to put up with the DRM specified in the Agreement, and if they’re not, they don’t have to buy it. ‘If you pirate this, it will only play static’ is reasonable.

Sure, some super-cheapskate who found a sketchy download off some sketchy site is going to listen to static with a hint of music, but the average user would rather buy the disc and be done with it. If the company can make a ripped upload sound like garbage once it's off its home CD, they've won. The company has successfully used DRM to keep their honest customers honest and would-be pirates away – and they did it without destroying anyone's computer. As Stewart Baker of the Department of Homeland Security put it, "it's your intellectual property – it's not your computer".

Doing it this way means normal consumers still get a high-quality product, and if the DRM is limited entirely to the content itself, there’s no risk of it coming back to bite the company in the butt.

Still, if you really disagree with DRM, there were companies that successfully reduced their piracy problems in other ways. Some found that guilt was enough, others found that once certain websites were gone, their piracy problems disappeared too. Warning folks that piracy was still a crime got the people who didn’t know any better to stop. Fines did a number on the folks who were too bold or too dumb to not get tracked with non-DRM means, and for the people who were doing it because it was more convenient? They reduced their pirating when better paid methods became available. Sony’s problem could have been solved in a lot of ways!

Besides, Sony wasn’t struggling. Lost sales are not the same as losses! Companies are still making profit, just not as much as they’d like. Property is not being damaged, and nobody is experiencing physical harm as a result of pirating.

The Response

Sony's DRM was a severe overreaction to the problem at hand, and it led to several lawsuits. As said at the beginning, Sony had not only installed software without the user's knowledge, but they'd also left a big entry point for security threats to get in undetected. Hundreds of thousands of networks were affected, some of them government. Once someone blew the lid off the DRM, Sony released a cover-up "uninstaller" that just hid the rootkit better and installed more DRM content on the user's device.

This does not help!

The blown cover for the rootkit meant that black-hat hacking organizations could tool around and create something that could get into anything with that rootkit on it, undetected. Eventually Sony was forced to admit this was wrong, but not before screwing over a couple million people who just wanted to listen to Santana or Celine Dion from a CD they paid for. Over pirates.

Yeah, there’s some lost profit – but it doesn’t outweigh the regular customers.

The Aftermath

Sony's first instinct is to hide it. As mentioned above, the available uninstaller didn't actually uninstall the rootkit, and some users reported system crashes and bricked machines when the uninstaller's poor programming interacted with the rest of the device's software.

Their second decision is to lie – ‘the DRM has no backdoors and doesn’t pose a risk to your computer’s security’. This is demonstrably untrue, and given that they were already in the beginning stages of recall, could be considered a deliberate lie.

Sony’s third action is to recall the discs with the DRM on it, but they don’t get all of the discs. Some users aren’t sure if their disc is affected or not, and even non-profit organizations dedicated to maintaining free internet can’t figure out what discs have it and what discs don’t. The best they can do is a partial list. Stores in New York and Boston are still selling the discs three weeks after the recall. However, users do get to swap their disc with an unprotected one through the mail. Sony seems to have acknowledged their screw-up at this point.

Sony's fourth action is more a consequence – they stick a class-action lawsuit sign-up notice on their home website, and affected users can claim damages up until 2006. Class-action lawsuits filed by individual states start to drag down Sony's profits more than the piracy ever did, and the end result is a mandate to put warnings on the covers of discs and to stop using DRM that could damage a user's computer. DRM is still allowed; it just can't be capable of destroying a computer to protect a song license. The feds actually considered this a breach of federal law and stated that Sony was engaging in deceptive and unfair business practices. Sounds about right – consumers wouldn't have bought a disc that installed DRM without their knowledge. From conception to execution, this was a moral, ethical, and legal mistake. While pirating is wrong, it's possible to be more wrong trying to stop it.

Don’t Delete Your System32

Elizabeth Technology March 14, 2024

System 32 is essentially the heart of the computer’s software. Task manager, the boot-up instructions, and hardware-to-software system files are all located in the System 32 file folder. It’s very important. Do not delete it.

This folder is not a secret, but what exactly it’s responsible for wasn’t always public knowledge. After all, Windows keeps everything very neat and tidy; from photos and documents to games and applications, everything stays in its own little cubby hole. The actual System 32 folder is a couple of folders deep already – exploratory digging might result in someone finding it by themselves, but why would they ever delete it if it’s already there? That was Microsoft’s approach: make everything the user wants easy to find so only experts and programmers have to consider System 32. Even better, the system would (usually) keep working if the folder somehow ended up in the recycle bin, and Windows wouldn’t allow deletion with a simple left-click; there was no way a user could delete this folder without serious work. The hope was that most people would never even notice it.

They were right, and this was enough. For a time.

The Beginning

It’s the mid to late 2000s, and anonymous internet message boards are largely unrecognized and somewhat unmoderated. It serves as the Wild West of the internet, the last dark corner in a time where the rest of said internet is at least glimpsable with Google. Computers are expensive, but not Hope Diamond expensive, and the thought that someone would tell an un-monitored kid online to break theirs just for the heck of it was kind of absurd. Keyword: un-monitored. Underage children were getting into all sorts of sites they shouldn’t have, including internet messaging boards.

Knowing this, the people falling for the system32 prank are obviously not all just gullible adults.

Interim Growth

The site responsible for the meme (at the time) made it very clear that this was not a place for children, and the nature of the site’s set-up made it nigh impossible for the average user to be tracked or traced by another user. No username? No IP tracking? Zero consequences. There were mods, but the mods were few in number, and more concerned with activities that were genuinely very illegal and could lead to the site’s shut-down. Users convincing strangers to mix chemicals together or de-magnetize their hard drive was less pressing unless it also resulted in something illegal.

The meme really got going when one user came back to complain that their computer would no longer start after they followed one of the first troll posts. That post gave instructions on how to delete System 32 while framing it as ‘bloatware’ (software that bogs a device down). If you had no idea what makes a computer run, it sounded like good advice.

When users caught on that some versions of Windows would refuse to outright delete System 32, they moved on and started including console commands, something the average user (at the time) had no experience with. Someone with little or no knowledge of the subject wouldn’t know what they were looking at. A button press, some typing, and an @echo command. Easy to follow… too easy.

Mainstream Dilution

Instructions for deleting System 32 to ‘speed up the computer’ or ‘make the computer quieter’ appeared on more public sites some time in 2008. I Can Has Cheezburger was likely the largest at this point – a forum centered around funny images of cats and other assorted animals, with a penchant for memes including advice, good or bad. Soap Ice, the idea that you could freeze Dawn dish soap and water into a puck of ice and then use it to ‘shower’ after a trip to the gym or park, was one of these ‘advice’ memes. It doesn’t work, for the reasons you’d expect, but it’s less likely to kill someone than the bathroom cleaner ‘crystal’ hacks. ‘Advice’ to delete System 32 was a natural fit, and it spread like wildfire.

With the meme’s spread onto bigger, more strictly moderated websites, articles start coming out advising people not to delete System 32. Even better, memes start circulating on websites like I Can Has Cheezburger to warn users directly. It doesn’t stop all of it – no good-advice meme can stop a person determined to try a hack like Soap Ice – but it puts a major dent in the spread. With fewer people taking the bait, and others ready to comment ‘don’t do this!’ on posts where it appears, the meme finally slows, eventually to a crawl. “Delete System 32” is now used ironically, because knowledge of it is so widespread that someone not knowing is rare.

And so the rise and fall of a meme is recorded. This is one of the first of its kind, but it’s far from the last.

Memory Terms

Elizabeth Technology March 7, 2024

The first Bit of Data

A bit is a single character in binary, and actually comes from shortening “Binary Digit”. A bit is the simplest possible data that the machine can read, and is either a 1, or a 0. A yes, or a no. True or false. The bit has been around for longer than computers, originating in punch cards in the 1700s for analog machines to “read”.


If you’ve recently upgraded to Windows 10, you may recall having to check whether your computer is 32 bit or 64 bit. The number determines how much memory the computer’s processor can address by its architecture – is it equipped to read up to 32 consecutive bits of data as an address, or 64? A 32 bit computer has fewer possible memory addresses from its CPU register – 2^32 of them, not much more than 4 GB’s worth – while a 64 bit computer can address 2^64, which works out to around 16 exabytes in theory (operating systems cap it far lower in practice). This doesn’t mean 32 bit computers can only store 4 GB of data; it just means they can only keep 4 GB’s worth of names. The files themselves can be nearly any size as long as there’s storage available for them.
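The address math is easy to sanity-check yourself – a quick sketch in plain Python (binary GiB/EiB units, so the numbers come out clean):

```python
# How many distinct addresses can a 32-bit vs. 64-bit register form?
ADDRESSES_32 = 2 ** 32
ADDRESSES_64 = 2 ** 64

# With byte-addressable memory, each address names one byte.
gib_32 = ADDRESSES_32 // 1024 ** 3   # gibibytes addressable with 32 bits
eib_64 = ADDRESSES_64 // 1024 ** 6   # exbibytes addressable with 64 bits

print(f"32-bit: {ADDRESSES_32:,} addresses, about {gib_32} GB")
print(f"64-bit: {ADDRESSES_64:,} addresses, about {eib_64} EB")
```

Doubling the register width doesn’t double the address space – it squares-and-then-some it, which is why 64-bit limits are effectively theoretical for now.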

Then, a Byte

A byte is usually eight bits in compliance with international standard – but it didn’t always have to be. It used to be as long as needed to show a character on screen, usually somewhere between two and ten bits, with exceptions down to one and up to forty-eight bits for certain characters. Eight-bit bytes became the standard through their convenience for the new generation of microprocessors in the 70s: within 8 bits, there are 256 possible arrangements of ones and zeroes. 16 bits would give far more combinations than early character sets needed, while 4 bits would only allow 16 – meaning phrases of bits would have to be combined anyway to cover even a basic alphabet.


8 sounds like the perfect combination of length and possible complexity, at least with the benefit of hindsight. The government had struggled with incompatible systems across branches due to byte size before 8-bit came along. ASCII was the compromise, at seven bits per byte, and when commercial microprocessors came along in the 1970s, they were forced to compromise again with ASCII Extended, so that commercial and government systems could communicate.

However, not all Extended ASCII versions contained the same additions, so Unicode was formed later to try and bridge the gaps between versions. Unicode, a character encoding standard that includes the ASCII set of characters within it, is built (in its most common form, UTF-8) from eight-bit bytes, and it’s one of the most common character encoding systems out there. You’ll still run into ASCII a lot, too – if you’ve ever opened an article and seen little boxes where characters should be, the text was written with a bigger character set than the one it was viewed with. The viewer doesn’t know what goes there, so it puts a placeholder box!
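You can watch this mismatch happen in a couple of lines of Python – a small sketch using UTF-8 as the ‘bigger library’:

```python
text = "café"

# ASCII and UTF-8 agree on the first 128 characters, which is why
# plain-English text survives almost any encoding mix-up.
assert "cafe".encode("ascii") == "cafe".encode("utf-8")

# The accented character is outside ASCII's 128-character set...
try:
    text.encode("ascii")
except UnicodeEncodeError as err:
    print("ASCII has no code for:", err.object[err.start])

# ...but UTF-8 simply spends two bytes on it instead of one.
print(text.encode("utf-8"))  # b'caf\xc3\xa9'
```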


1000 bytes of storage forms a kilobyte, or KB. This is the smallest unit of measure that the average computer user is likely to see written as a unit on their device – not much can be done with less than 1000 bytes. The smallest document I can currently find on my device is an Excel file with two sheets and no equations in it. That takes up 9 KB. A downloadable “pen” for an art program on my device takes up 2 KB.

Computers before Windows had about 640 KB to work with, not including memory dedicated to essential operations.

The original Donkey Kong machines had approximately 20 kilobytes of content for the entire game.


A megabyte is 1 million bytes, or 1,000 kilobytes. Computers had made some progress post-relays, moving to hard disks for internal memory. IBM’s first computer containing a megabyte (or two) of storage, the System 355, was huge. It was also one of the first models to use disk drives, which read faster than tapes. In 1970, if users didn’t want a fridge, they could invest in the now desk-sized 3 million bytes on IBM’s model 165 computers, an improvement over GE’s 2.3 million bytes the year before – and the year before that, Univac had unveiled a new machine with separate cores tied together to give users between 14 and 58 megabytes of capacity in Byte Magazine, at the cost of space. IBM’s System 360 could reach up to 233 megabytes with auxiliary storage, but its size was…prohibitive, reminiscent of that first System 355.

Tapes and drums were competitive with the disk format for a while, but ultimately disk and solid state improved faster and won out (right now it’s looking more and more like SSDs, those solid state drives, will outcompete disks in the future too). During the 80s, the technology improved so much that hard disks became standard (IBM released a home computer with 10 MBs of storage in 1983) and floppy disks acted as media transport.

DOOM comes out in the 1990s and takes up 2.39 MB for its downloadable file, with smaller, DLC-like packs of fan-created mods coming out along the way.


A gigabyte is 1 billion bytes, or 1,000 megabytes. In 1980, IBM releases another fridge – but it stores up to a gigabyte of information! According to the Merriam-Webster dictionary, you can pronounce gigabyte as “jig-ga-bite”, which just… feels wrong. Earlier, in 1974, IBM had released a 20-foot-long beast of a storage system that stored up to 236 GB of data on magnetic tape.

In 2000, the first USB sticks (memory sticks, jump drives, etc…) are released to the public with 8 megabyte capacities, and they’re so convenient that floppy disk ports begin disappearing from computer designs in favor of USB ports. USB sticks then improve exponentially, and soon have capacities of one, two, and four Gigabytes while floppies struggle to keep up.

Besides being smaller and harder to break, those USB sticks also store more. Where the first USB sticks held 8 MB, the standard-size floppy disk at the time could only hold 1.44 MB. Knowing how small DOOM is, it would take two floppy disks to hold all of DOOM, but a USB stick only took one. By 2009, USB sticks with capacities of 256 GB were available on the market. That’s about 178,000 floppy disks.
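The floppy-versus-USB arithmetic, sketched in plain Python (decimal units, using the capacities quoted above):

```python
import math

FLOPPY_MB = 1.44   # standard 3.5" floppy capacity
DOOM_MB = 2.39     # DOOM's downloadable file, as above
USB_2009_GB = 256  # largest USB stick on the market by 2009

# DOOM spans two floppies but fits easily on even the first 8 MB USB stick.
floppies_for_doom = math.ceil(DOOM_MB / FLOPPY_MB)
print(floppies_for_doom)  # 2

# How many floppies would it take to match a 256 GB stick?
floppies_for_usb = round(USB_2009_GB * 1000 / FLOPPY_MB)
print(f"{floppies_for_usb:,}")  # 177,778
```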


A terabyte is 1 trillion bytes, or 1,000 gigabytes. The first commercial drive with a capacity of one terabyte was sold in 2007 by Hitachi, a Japanese construction and electronics company. The movie Interstellar, released in 2014, featured a depiction of a black hole known as Gargantua – and became famous when it closely resembled an actual picture of a black hole later captured by the Event Horizon Telescope collaboration. A ring of light surrounds the black hole in two directions, one due to friction-heated material Gargantua has accumulated, one due to the lensing of light around it. The gravity is so intense that light itself is pulled into orbit around Gargantua’s event horizon and kept there. It took 800 terabytes to fully render the movie and make Gargantua somewhat accurate in terms of light-lensing.


A petabyte is 1 quadrillion bytes, or 1,000 terabytes. This is typically cluster storage, and while it’s available for purchase, it’s very expensive for the average consumer. For comparison, while rendering Interstellar took 800 terabytes, storing it at standard quality takes about 1/200th of a terabyte. You could store approximately 200,000 DVD-quality copies of Interstellar on a petabyte. It took a little less than 5 petabytes of data to produce the picture of the real black hole, M87.
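The petabyte arithmetic, sketched in plain Python (using the 1/200th-of-a-terabyte figure above):

```python
PETABYTE_TB = 1000        # terabytes per petabyte
RENDER_TB = 800           # what rendering Interstellar reportedly took
DVD_COPY_TB = 1 / 200     # ~5 GB for one standard-quality copy

# A petabyte holds the whole render job with room to spare...
print(PETABYTE_TB - RENDER_TB, "TB left over")  # 200 TB left over

# ...or an enormous number of finished copies.
copies = int(PETABYTE_TB / DVD_COPY_TB)
print(f"{copies:,} DVD-quality copies")  # 200,000
```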


The Train That Breaks Itself

Elizabeth Technology February 15, 2024

If you’ve paid any attention to big tech in the last several years, you’ll probably know that Apple has been pushed into switching to USB-C for phones. It’s easier and more accessible for the average EU citizen to acquire than Apple’s Lightning chargers. It’s not just Apple being forced to change for the sake of the customer – the shareholder system at large is constantly at odds with the end user’s right to buy a complete, sturdy product that wasn’t designed to break a few months down the road so that an official BrandProduct shop can charge over the market rate to fix it. Thanks to the EU’s legal interventions, Apple (and many others) cannot continue to sell a product that only they can make chargers and power supplies for, that only they can update, and that they can choose to brick whenever they feel the user needs to move on to the next phone.

The Newag train scandal is particularly egregious given this context!

Big parts of Europe rely heavily on trains for both passenger and freight transit, and trains are expensive to make and repair; once the state has invested money into infrastructure and the trains themselves, they won’t simply be switching brands on a whim. This already gives the company a massive amount of leverage over their contractors.

Newag is one such train company. Allegedly, much as Apple did, Newag figured that regular repair and maintenance were good places to squeeze a bit more money out of the customer, and set up code within the train’s computer that would cause it to error out and stop working if anyone but a Newag shop touched it. Keep in mind that train repair shops are already incredibly niche, and repairs to trains come out of taxpayer money – going to a thriftier independent shop is practically an obligation when the money isn’t your own. Worse, even if the shop didn’t need to touch the train’s computer at all, Newag’s trains are GPS-enabled, and if a train spent too long at an independent repair station, it would still mysteriously stop working.
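None of Newag’s actual code has been published in this form, so purely as a hypothetical illustration of how little code a lockout like this would need, a geofence check might look something like the sketch below – every name, threshold, and coordinate is invented:

```python
# Hypothetical sketch only -- invented names, thresholds, and coordinates.
# It illustrates the *shape* of a geofence lockout, not Newag's real code.
AUTHORIZED_WORKSHOPS = [(52.40, 16.92)]  # made-up lat/lon of an approved shop
RADIUS_DEG = 0.01                        # crude degree-box "radius"

def near_authorized_shop(lat, lon):
    return any(abs(lat - a) < RADIUS_DEG and abs(lon - o) < RADIUS_DEG
               for a, o in AUTHORIZED_WORKSHOPS)

def should_lock(lat, lon, days_stationary):
    # Parked too long somewhere that isn't an approved shop? "Error out."
    return days_stationary > 10 and not near_authorized_shop(lat, lon)

print(should_lock(52.40, 16.92, 30))  # False: long stay at the approved shop
print(should_lock(50.06, 19.94, 30))  # True: long stay at an independent shop
```

The unsettling part, as the researchers described it, is exactly this banality: a handful of conditionals is all it takes to turn GPS data into a repair monopoly.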

Of course, Newag denies this heavily – they even went as far as trying to sue Dragon Sector, the group that discovered this quirk, into shutting up about it. Then they suggested it was the work of cybercriminals and not Newag itself, which could make sense if this were ransomware stopping the train entirely, rather than only when the train didn’t stop at a Newag shop or get its special unlock code. The odds are stacking up against the company; the evidence points too clearly toward predatory practices for them to avoid an investigation.


Optical Memory

Elizabeth Technology January 30, 2024

Optical storage is defined by IBM as any storage medium that uses a laser to read and write information. The use of lasers means that more information can be packed into a smaller space than magnetic tape could manage (at the time)! Better quality and longer media time are natural results. A powerful laser burns information into the recording surface of the media, and then a weaker reading laser deciphers these burnt areas into usable data. The recording surface is usually some sort of metal or dye sandwiched between protective layers of plastic; it burns easily, producing ‘pits’, or less reflective areas, for the laser to read.

This is why fingerprints and scratches can pose such a problem for reading data; even though you aren’t damaging the actual data storage, like you would be if you scratched a hard drive disk, fingerprints prevent the laser from being able to read the data. Scratch up the plastic layer above the dye, and the data’s as good as destroyed.

Destroying data can be even more complete than that, even. Shredding the disc in a capable paper shredder (ONLY IF IT SAYS IT CAN SHRED DISCS) destroys the data, as does microwaving the disc (don’t do that – most discs contain some amount of metal, and that can damage your microwave badly enough to be dangerous).


“Burning a CD” replaced “making a mix tape” when both CDs and downloadable music were available to teenagers, and for good reason. The amount of content may be roughly the same, but the quality is significantly higher.

Most recordable CDs are CD-Rs – discs that can only be written once but can be read until the end of time. (A pressed, store-bought CD-ROM is different: its pits are stamped into the disc at the factory rather than burnt in, though a finalized CD-R behaves much the same way in a reader.) The average CD-R has room for about an album’s worth of music, and maybe a hidden track or two – about 74 to 80 minutes, depending on the manufacturer of the disc. Alternatively, if you’d like to store data instead of high-quality audio, you’ll get about 700 MB onto a single disc.
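The 700 MB figure isn’t arbitrary – it falls out of the audio CD’s specifications. A quick sketch of the arithmetic in plain Python (the 2048-of-2352-bytes split is the standard data-CD sector layout):

```python
# Red Book audio: 44,100 samples/sec, 16-bit (2-byte) samples, 2 channels.
BYTES_PER_SECOND = 44_100 * 2 * 2       # 176,400 bytes of audio per second
audio_bytes = BYTES_PER_SECOND * 80 * 60

print(f"{audio_bytes / 1e6:.0f} MB of raw audio on an 80-minute disc")  # 847

# A data CD exposes only 2048 of each 2352-byte sector to your files; the
# rest goes to extra error correction. Same physical disc, less usable room:
data_bytes = audio_bytes * 2048 // 2352
print(f"{data_bytes / 1e6:.0f} MB usable as data")  # 737
```

(The familiar “700” on the label counts in binary megabytes: 737,280,000 bytes is 703 MiB.)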

To burn a CD, you’d need an optical drive that’s capable of also lasering information into the disc, which wasn’t always the standard. The laser will burn the information into the metal-dye mix behind the plastic coating the outside of the disc, which permanently changes how reflective those sections are. This makes it possible to visually tell what has and hasn’t been used on a disc yet, and CD-Rs can be burnt in multiple sessions! Data is typically burnt from the center outwards.

But everybody knows about CD-Rs. What about CD-RWs, their much fussier brethren?


The primary difference between a CD-R and a CD-RW is the recording layer the optical drive reads: CD-Rs use a burn-once dye, while CD-RWs use a phase-change material that can be switched back and forth. CD-RW marks are also less reflective than CD-R pits, so they take a more sensitive reader – early disc readers sometimes can’t read more modern CD-RWs as a result!

To reuse the disc, one has to blank it first (the same drive that can write a CD-RW should also be able to blank it), which takes time. After it’s been wiped, new data can be put onto the disc again. CD-RWs wear out more quickly than other memory media as a result of their medium: that wafer-thin recording layer can only handle being rearranged so many times before it loses the ability to actually hold data. It’s pretty unlikely that the average user could hit that re-write limit, but it’s more possible than on, say, a hard drive, which has a re-write life roughly 100 times longer than a CD-RW’s.


DVDs store significantly more data than CDs do, even though they take up about the same space. Where a CD can hold about 700 MB, a single-layer DVD can hold up to 4.7 GB. This is enough for most movies, but if the movie is especially long or has a lot of extra features, it has to be double-layered, which raises the capacity to about 8.5 GB. Why can it hold so much more in the same space?

The long answer is that there are a number of small differences that ultimately lead to a DVD having more burnable space, including a closer ‘laser spiral’ (the track a laser burns, like the grooves in a vinyl record), as well as smaller readable pockets. It all adds up into more data storage, but a more expensive product as well.


That double-layering mentioned earlier isn’t present on every disc. Sometime in the later 2000s, double layer discs hit the market at about the same price as single layer discs (although that changed over time). The first layer that the laser can read is made of a semi-transparent dye, so the laser can penetrate it to reach the other layer.

Most modern DVD drives can read dual layer, but if your computer is especially old, it would be wise to check its specs first – DVD readers programmed before their release might not understand the second layer, and readers that can read them might not be able to write to them. DLs are a great invention, it’s just a struggle to find good disc readers when everything is switching to digital.


CD players aren’t usually also able to play DVDs. CDs came first, and the reader would have to be forwards compatible. Obviously, this would have taken a time machine to actually assemble. Picture expecting a record player to read a CD! The gap between the two is almost that large. Nowadays, the manufacturing standard seems to be a DVD player with CD compatibility tacked on. You should double check before you buy a disc reader to be sure it can do everything you want it to, but it’s less common to see CD-Only tech when a DVD reader is only slightly more expensive to create, and can work backwards.

FlexPlay Self-Destructing Entertainment

Remember FlexPlay self-destructing entertainment? The disc that was meant to simulate a rental and could have generated literal tons of trash per family, per year? The self-destructing medium that the disc was coated in turned very dark red to thwart the disc reader’s lasers! The pits aren’t directly on the surface of the DVD, they’re under a couple of layers of plastic. All FlexPlay had to do was sandwich an additional layer of dye between the plastic and the metal/dye that’s being inscribed upon. When that dye obscures the data below it, it’s as good as gone! The laser can no longer get through to the information and read it. Even Blu-Ray tech was thwarted by the dye.


Blu-Ray discs have higher visual quality than DVDs because they hold even more information. Blue-laser technology enables the pits to sit even closer together, so more optical data can be crammed into the same space: blue light has a shorter wavelength than red light, which shrinks the necessary pit size! A single-layer Blu-Ray disc can hold up to 25 GB of information. Blu-Ray discs are most commonly used for entertainment media rather than storage. Disc readers have to be specifically built for that blue-laser technology, rather than just programmed for it – an ordinary DVD player may be able to play a CD, but it wouldn’t be able to fully read a pit on a Blu-Ray disc before that pit’s passed the reader.

Right now, the state of the art is Blu-Ray: most good Blu-Ray readers are backwards compatible with DVDs and CDs. However, many companies still sell ordinary DVDs alongside their Blu-ray releases due to cost. If you have a DVD player, you can probably hold off on upgrading, at least for a little while longer.


Magnetic Memory

Elizabeth Technology January 25, 2024

Magnetic Tape

The most well-known version of tape-based magnetic storage is the kind used for media. When tape-based recording was first introduced, it revolutionized the talk show and DJ-ing scene of the time (mostly post WWII) because it enabled shows to be recorded and played later, rather than live in front of the audience. Music recording tech already existed of course, but it required physical interaction from the DJ, so it wasn’t as hands-off as tapes were.

The second-most well-known version is the kind used for computer memory! Data is stored on the tape in the form of little magnetic ‘dots’ that the computer can read as bits. Before each pocket of data dots is a data marker that tells the computer how long that pocket should be, so it knows when one set of data ends and the next begins. The polarity of each dot determines its bit value, and the computer can then read all these dots as binary code.
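The dots-to-bits idea can be sketched in a few lines – a toy model only, ignoring the data markers and error correction a real tape format adds:

```python
# Toy model: each magnetic region's polarity ("N" or "S") reads as one bit.
# Real tape formats add markers and error correction; this is just the idea.
def decode(polarities, one="N"):
    bits = "".join("1" if p == one else "0" for p in polarities)
    # Group into 8-bit bytes and interpret them as ASCII characters.
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

regions = list("SNSSNSSSSNNSNSSN")  # two bytes' worth of magnetic domains
print(decode(regions))  # Hi
```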

This method of data storage was a massive breakthrough, and other mediums continue to use the format even today! Tapes are still in use for big stuff – parts of IBM’s library rely on modern tapes, which can now store terabytes of information at a higher density than disks and flash drives alike. Other memory types relying on magnetic domains include hard disks and drums, to name a couple. All that separates them is material and know-how: the better the magnetizing material on the outside, the smaller the domains can get. The better the insulation between the domains and regular old entropy, the more stable the data is!

Carousel Memory

Carousel memory was an attempt at shrinking the space that magnetic tape took up – to the extreme. Instead of one very long piece of magnetic tape on a bobbin, the carousel memory system uses several smaller reels of tape arranged in a carousel pattern around the central read mechanism. Getting to the right info is as simple as selecting the right reel! This has some issues, as you might imagine. Moving parts add complications and an increased risk of mechanical failure to any device, and a device carrying thin, delicate magnetic tape is an especially bad place to start.

However, it wasn’t all bad. Carousel memory was actually quite fast for the time because it didn’t have to rewind or fast-forward as much to get to the right area of code. It could skip feet of tape at a time! This advantage declined as tape tech improved, but it still helped companies trying to squeeze the most life from their machines. The bobbins and individual ribbons were all replaceable, so the tape wasn’t worthless if it got torn or damaged. The carousel itself was also replaceable, so the many moving parts weren’t as much of a curse as they’d be on, say, the first hard disks, which had irreplaceable heads.

Core Rope Memory

Core rope memory featured magnetic grommets, or ‘cores’, on metal ‘ropes’, and those ropes were woven into a fabric the computer could read. In ROM (read-only memory) format, if a wire went through the core, it was a ‘one’, or a ‘yes’. If it didn’t, it was a ‘zero’, or a ‘no’. In this way, the fabric is physically coded into binary that the computer can use. ROM’d core-rope memory involved quite a bit of complicated weaving and un-weaving to get the cores in the right spots.
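The ‘weaving is the program’ idea is simple enough to sketch – a toy model, not the actual Apollo wiring (which packed many wires through each core):

```python
# Toy core-rope ROM: a stored word is defined by which cores its wire threads.
# Threaded core = 1, bypassed core = 0.
CORES = ["c0", "c1", "c2", "c3"]

def weave(threaded):
    """'Weaving' a word: pick which cores the wire physically passes through."""
    return [1 if core in threaded else 0 for core in CORES]

print(weave({"c0", "c2"}))        # [1, 0, 1, 0]
print(weave({"c1", "c2", "c3"}))  # [0, 1, 1, 1]
```

Changing the program means physically re-threading the wire – which is why the weaving and un-weaving was such painstaking work, and why the result was so hard to corrupt once built.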

Core rope memory was chosen over tape memory for the Apollo missions, mainly for weight purposes. Tape was great, but not nearly dense or hardy enough for the mission yet, and neither were the other similar core modules available to NASA. A read-only core-rope memory module could store as many as 192 bits per core, where erasable core memory could only manage one bit per core. Where each core on the final module depended on reading the wires to determine the bit’s state, the erasable model (core memory) read the core’s magnetic state to determine the bit state, not the threads going through it. The final module sent up to get to the moon was a total of 70-ish pounds and read fairly quickly. Tape, core memory, or hard disks available at the time couldn’t have gotten to the same weight or speed.

Core-rope memory has its place. It’s very sturdy, and since it relies on the cores to act as bits, it’s possible to visually identify bugs before the memory’s even used, unlike core memory. Both are sometimes called ‘software crystallized as hardware’ because of the core system. It isn’t seen much today, since it is still incredibly bulky, but at the time of its use it was revolutionary.

Core Memory

Core memory is the older sibling of core rope memory, and it stores less. However, the people who got to work with it call it one of the most reliable forms of memory out there! Core memory works much the same as core rope memory, where the bits are stored in cores.

However, the formats are different. If core rope memory is like a binary-encoded scarf, core memory is more like a rug. Thin threads made of conductive material are woven into a grid pattern, with cores suspended where the threads cross each other. The computer understands these threads as address lines, so asking for a specific bit to be read is as simple as locating the X and Y address of the core. A third set of lines, the sense lines, runs through each core on the diagonal, and this is the thread that does the actual reading.

When asked to read, the computer sends current down the address threads and watches whether the core flips its magnetic polarity. If it doesn’t flip, it was a zero. If it does, it was a one – and it has now been flipped to zero by the reading process. This method is known as ‘destructive reading’ as a result; however, the computer compensates by flipping the bit back to where it was after the reading. Due to its magnetic nature, the core then keeps this info even after power to it is cut!
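The destructive-read-then-rewrite cycle can be sketched as a toy model:

```python
# Core memory's destructive read, sketched: reading drives the core toward 0
# and watches the sense line -- a pulse appears only if the core held a 1.
# The controller then writes any sensed 1 straight back.
class Core:
    def __init__(self, bit):
        self.bit = bit

    def read(self):
        sensed = (self.bit == 1)  # flipping 1 -> 0 induces a sense-line pulse
        self.bit = 0              # the read itself clears the core...
        if sensed:
            self.bit = 1          # ...so the controller restores the 1
        return int(sensed)

core = Core(1)
print(core.read(), core.bit)  # 1 1 -- the value survives thanks to the rewrite
```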


Even though this improved the bit-to-space-taken ratio, core memory still aged out of the market. With the price of bits decreasing rapidly, core memory got smaller and smaller, but the nature of its assembly meant it was almost always done by hand – all competitors had to do was match the size and win out on labor. Soon its main market was taken over by semiconductor chips, which are still used today.

Magnetic Bubbles

Magnetic memory has had strange branches grow off the central tree of progress, and magnetic bubble memory is one of those strange shoots. Andrew Bobeck, who developed several other forms of magnetic memory at AT&T’s Bell Labs, invented bubble memory. It never took off in the same way other magnetic memory styles did, although it was revolutionary for its compact size – before the next big leap in technology, people were thinking this was the big leap. It was effectively shock-proof! Unfortunately, better DRAM chips took off shortly after it hit the market and crushed bubble memory with improved efficiency.

Anyway, bubble memory worked by moving the bit to-be-read to the edge of the chip via magnets. The magnetic charge itself is what moves the bits, much in the same way electrons move along a wire when charge is applied, so nothing physical actually moves within the chip! It was cool tech, and it did save space, it just didn’t hold up to semiconductor memory chips. Bubble memory saw a spike in use during a chip shortage, but it was so fiddly that as soon as DRAM chips were available again, it went out of style.

Semi-Conductor DRAM – Honorable Mention

DRAM chips are a lot like core memory, in that the device is reading the state of a physical object to determine what the bit readout is. In semiconductor chips, that physical object is a tiny capacitor, hooked up to a tiny transistor, on semiconductive metal-oxide material. Instead of determining magnetic state, the device checks whether the capacitor is discharged or not. No charge = 0, yes charge = 1. These chips aren’t technically magnetic, but since they’ve killed so many of the other options, here they are!

DRAM stands for Dynamic Random-Access Memory, and it means that the memory can be accessed randomly instead of linearly. As long as the computer knows where the data’s stored, it’s able to pull it without pulling other files first. They’re still being sold today!
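Core memory keeps its bits without power; DRAM pointedly does not – each capacitor leaks, so the controller has to refresh every cell on a schedule. A toy model of why (the leak rate and sense threshold here are invented purely for illustration):

```python
# DRAM's defining chore: each capacitor leaks, so every cell must be
# refreshed in time or its bit drains away. Numbers invented for illustration.
class DramCell:
    def __init__(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        self.charge *= 0.7            # charge leaks between refreshes

    def read(self):
        return 1 if self.charge > 0.5 else 0

    def refresh(self):
        self.charge = float(self.read())  # re-write whatever was sensed

cell = DramCell(1)
cell.tick()
cell.refresh()
print(cell.read())  # 1 -- refreshed in time

late = DramCell(1)
for _ in range(3):   # 0.7**3 = 0.343, below the sense threshold
    late.tick()
late.refresh()
print(late.read())  # 0 -- refreshed too late; the bit leaked away
```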

Magnetic Disk (Hard Disk Drive)

Hard drives work more like tape than core memory. A hard drive is a platter (or a stack of platters) with a read-write head hovering above it. When you want to save data, the head magnetizes areas in binary to represent that information. When you want to read or recover that data, the head interprets these areas as bits in binary, where the polarity of each magnetized zone is either a zero or a one.

The zones of magnetization are incredibly tiny, which makes hard drives one of the more demanding memory forms out there, both now and back then.

Early hard drives could suffer from ‘de-magnetization’, where a magnetic disk’s domains were too close and gradually drew each other out of position, slowly erasing the information on the disk. This meant that the disks had to be bigger to hold the data (like everything else at the time) until better materials for data storage came along. Even though they held more capacity at launch, they were passed over for smaller and more stable stuff like tapes and core memory. The very early drives developed by IBM were huge. Like, washing machine huge. They didn’t respond to requests for data very quickly, either, which further pushed reliance on tape and core technology.

Over time, hard disks improved dramatically. Instead of magnetic zones being arranged end-to-end, storing them vertically next to each other created even denser data storage, enough to outcompete other forms of media storage entirely. Especially small hard drives also come with a second layer of non-magnetizable material between the first layer and a third layer of reverse-magnetized ‘reinforcement’ which keeps the data aligned right. This enables even more data capacity to be crammed into the disks!

Sometime in the 1980s, hard drives finally became feasible for personal computers, and they’ve been the standard ever since. SSDs, which have no moving parts at all, are gaining ground in the market, but because they store data so differently, they can’t be as verifiably, irrevocably erased as hard drives can. As a result, hard drives are going to stick around a while, especially in the medical and military industries!


The Awareness of Future Cringe Past

Elizabeth Technology December 28, 2023

The Concept of Cringe

What is ‘cringe’? To cringe is to flinch away from a negative stimulus – accidentally getting a papercut between your fingers, or hearing nails on a chalkboard, may make you cringe.

Sometime in the 2000s, a new definition of cringe arose, and forums sprang up trying to catalog it. This new cringe focuses on secondhand embarrassment rather than actual physical discomfort: it’s the awkward text to a crush that gets rejected outright. It’s the kid in a college-level presentation class trying to get their group members to theme the project after an unrelated kid’s TV show. It’s someone wearing something in public that breaks the rules everyone else is quietly following. While shame and embarrassment are useful emotions almost anywhere else, the concept of cringe – in the new panopticon created by modern social media and high-definition phone cameras – is sucking the joy out of memes. The next generation is not ready to be made fun of by people they respect.

“Millennial Humor” and “This is What Gen Alpha Will Make Fun of Us For”

Using social media, Gen Z is effectively building a prison made of cringe and ensuring that nobody will escape it. One comment, one foot, is calling I Can Haz Cheezburger speak annoying and cringe. Another comment, the other foot, is calling someone the Rizzler and spamming fire emojis. Both feet straddle a hole in the ground, an abyss that can’t be looked into because the abyss – Nietzsche’s final, paralyzing frontier of awareness – will look back. That hole contains the phrase “this is what Gen Alpha will make fun of us for”.

Some Gen Zers have looked into the abyss. The abyss looks back. The future looks back. They, themselves, but younger and meaner and willing to make a joke at their older selves’ expense, look back at them and sneer. Their jokes are cringe. Their clothes are cringe and make them look cringe. The way they take their selfies in public is cringe. The easily identifiable way that they speak signals to the next generation that they may say something neocringe if prodded right. There’s no escaping now that phones are everywhere and everyone seems to be filming. They will, one day, have a haircut that turns cringe.

They know all of this because the previous generation, Millennials, are subjected to the same treatment. The introduction of the “Millennial Pause” gave ammo to an audience that cares about age so much that identifying Millennials is now a sport, even for other Millennials. Of course that little pause is no big deal, but it exists. The fire emoji, too, will one day be no big deal, but it will exist, and it will signal to Gen Alpha that they’re talking to someone older than them. There’s some comment to be made about how much Americans love the idea of youth. Now, if someone sticks out with dated humor or an awkward pause, they’re a target – they are expected to look and act young enough to blend in with the next generation (which means understanding the jokes and dressing like them too) or risk being singled out as cringe.

This awareness that trendy things age poorly is so paralyzing that some teens are trying to remove themselves from the memery without fully leaving social media. It’s the final stage of irony poisoning: doing cringey things ironically is still too close to being cringe, and so is just existing (unironically and contemporaneously) with trends in photos or videos. So the people who’d otherwise be having fun making jokes or dancing their meme dances are instead opting to say “this joke won’t be nearly as funny when it’s no longer fresh,” as if that’s a revelation. The other option is posting cringe and making jokes that are only funny for right now; if someone wants to stay young and funny forever, they can’t participate. They try to warn the other people outside their prison that one day they’ll be cringe, as though they can stop the embarrassment of embracing popular trends by stopping the trend itself from manifesting with the power of irony and self-awareness, but it’s always already too late. Mullets are on a comeback, and someday the people who had them will look back at those photos and laugh.

To be cringe is to be free. Embrace the cringe. Pause awkwardly. Say ‘Rizzler’ out loud. Keep an ugly haircut and a sage-colored couch, and enjoy existence freed from the dichotomy of cringe and noncringe.