Laser Vs. Inkjet: How Do They Compare?

Elizabeth Technology June 6, 2024

From our last article, you now know that inkjets are pricier in the long run than laser printers due to ink cost. But why?

Inkjet

Inkjet printers work by spraying the page with a little bit of ink in a very precise jet. There are two kinds, although one is patented by Epson: in the first method, ink is heated to boiling, which causes it to shoot out of the ink nozzle onto the page where it quickly cools down and sticks. The second method, the piezo-electric method, sends an electrical charge to a crystal inside the cartridge, which causes it to vibrate and force ink out of the nozzle. (Watches and some alarms use piezoelectric crystals too – piezoelectricity uses a crystal to convert mechanical energy, like pressing or twisting, into electrical energy, or vice versa. Read more here: https://www.explainthatstuff.com/piezoelectricity.html ).

Inkjets are widely renowned for their color quality and high dpi (dots per inch) resolution, something to consider if you’re printing a lot of pictures or content where exact color is important. While the color range of these printers is great, the issue with spraying liquid ink is that it can smear coming out of the printer! You have to be careful which kind of cartridge you buy to avoid this. Speaking of cartridges, inkjets are rather expensive in the long run: although the machine itself is cheap, the average cost per page printed is 10 to 30 cents depending on color and density, according to How To Geek. This is because the cartridges are generally pretty small and the ink itself is pretty pricey, no matter where you get it.

Laser

Laser printers are notably faster and cheaper per page than inkjets are, although they cost more upfront. A laser printer uses static electricity generated by lasers inside the machine to stick the ink powder (toner) to the drum, which is then rolled onto paper, and heated to melt the toner to the paper in the design required by the computer. While it’s less accurate than inkjets, it’s adequate for most things outside of photo-quality imaging or exact color matches with the benefit of speed. Your smallest possible fonts may come out more blurry on a laser printer than an inkjet, but it will be in your hands much faster. The toner cartridges are also generally cheaper and longer-lasting than their liquid ink counterparts, so the trade may be worth it, especially if you mostly print text anyway.

Dot-Matrix and Character

When printing technology was just starting to miniaturize for office use, the dot matrix printer – which physically touched the paper with the dot printer head when printing – reigned supreme. The wailing noise of a dot-matrix printer in movies, alongside their characteristic impressionist look on the page, has since been replaced by much quieter, faster, and more precise machines. Still, in the early days of printing, such a device was incredibly useful for any number of tasks requiring multiple hard copies of data. Just not pictures or anything too fine in quality. There are still a handful of places where they’re more useful than a newer design – for example, they’re still frequently used to print checks!

Their predecessor was better with numbers and letters, but as a whole not much better than a print press outside of their automation – character printers were the next logical step up from the typewriter, with an array of physical characters that would be pushed up against the ink ribbon and page to ‘type’ the print. Understandably, this printer can’t do much besides text, and font choices are limited to what is already inside the machine.

Sources:

https://www.explainthatstuff.com/laserprinters.html

https://www.howtogeek.com/771348/laser-printer-vs.-inkjet-which-is-better

What is WiFi? How Does WiFi Work?

Elizabeth Technology May 28, 2024

Wi-Fi is older than it may seem, as it spent quite some time at the fringe of new tech. The market was already saturated with dial-up internet, and replacing it was going to take quite a bit of doing. When Wi-Fi first launched, it had an average speed of 2 Mbps, which was actually pretty good – roughly 35 times faster than dial-up's maximum of 56 kbps. However, systems were so heavily dependent on dial-up that it took many years for Wi-Fi to become the standard.

Wi-Fi is widely understood to mean "Wireless Fidelity," but apparently nobody in the labs that studied or made it ever designated Wi-Fi as the official shortening of that term. It just sort of happened, and then licensing and officiating went from there.

Kind of Like Radio

AM and FM radio have been around for decades now, and they work fairly similarly to Wi-Fi – if Wi-Fi did both at the same time. AM radio changes the amplitude of the waves to transmit information across different bands, while FM changes the frequency of the band. However, AM and FM stick to kilohertz and megahertz frequencies, while Wi-Fi operates in the significantly higher gigahertz range.

Electromagnetic radiation is a spectrum: at one end are radio waves, which are extremely low-frequency, and at the other, gamma radiation, which is extremely high-frequency. Visible light falls in between, just above infrared, where red is closer to the low end and violet is closer to the high end. Microwaves fall on the low side, between radio waves and infrared. A 2.4 GHz microwave has a gap between wave crests of roughly twelve centimeters, about the size of a baseball – the waves aren't nearly as close together as they are in visible light. (Note: a microwave oven uses the same frequency, but at much higher power than Wi-Fi. Loud sounds can be the same pitch, or frequency, as quiet sounds; the same goes for microwaves.) Microwaves, just like colors, are broken up into bands, and different frequencies can do different things. For this article, we're focusing on information transmission.
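That crest-to-crest gap is just the wavelength, and it follows directly from the speed of light divided by the frequency. A quick sketch in plain Python (nothing Wi-Fi-specific is assumed here):

```python
# Wavelength = speed of light / frequency
c = 299_792_458  # speed of light, m/s

def wavelength_cm(frequency_hz: float) -> float:
    """Distance between wave crests, in centimeters."""
    return c / frequency_hz * 100  # meters -> centimeters

print(round(wavelength_cm(2.4e9), 1))  # 2.4 GHz Wi-Fi -> 12.5 cm
print(round(wavelength_cm(5.0e9), 1))  # 5 GHz Wi-Fi   -> 6.0 cm
```

The same formula shows why visible light is so different: at around 500 THz, its crests are under a thousandth of a millimeter apart.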

What Can Stop WiFi?

Wi-Fi does get weaker when walls or other obstacles get in the way, although this is usually a good thing – there are only so many viable 'bands' for Wi-Fi to transmit over, just like radio, so crowded buildings would run out of available bands if signals weren't so easily stopped. A microwave oven uses a metal enclosure to contain its much stronger microwaves; Wi-Fi's waves are weak enough that ordinary walls and other solid materials gradually absorb them. Eventually, distance also stops Wi-Fi: the waves lose energy as they travel, and the carried information is lost.

Bluetooth devices can interact poorly with Wi-Fi as well – they work on similar principles, but Bluetooth is much weaker. If your headphones are undetectable to your phone even when they're on, it's possible the Bluetooth signal is being drowned out by local Wi-Fi. Bluetooth typically has a range of about 30 feet, compared to Wi-Fi's much larger 240 feet in ideal conditions.

How Does Protecting WiFi Work?

Wi-Fi transmits over those microwave frequencies to bring information to the computer and send it back out.

How do you protect information if it's just being broadcast like that? Well, a couple of ways. While Wi-Fi is very similar to radio, it's not exactly like it – a station's broadcast blankets the whole city, and all you have to do is tune in. The computer has to find the network first, and as previously stated, both physical objects and distance can keep Wi-Fi from reaching a compatible device. Distance is a solid defense. If a hacker is in the same building, however, how do you protect the network then? Assuming their device is within range of the network, can it intercept information sent over that network?

The second part is encryption: it doesn't matter if the data's intercepted if the interceptor can't unscramble it. Transmitting unencrypted data over unprotected Wi-Fi can get you into trouble – see all the warnings about using public Wi-Fi to do banking – but encrypting it stops most issues before they start. Hence, the rise of VPNs. However, encryption alone won't stop intruders, so the third part is network security.

The next logical step for a hacker is to get into the protected network and then seek out the info they want, skipping the encryption step entirely. The network itself has to be protected as well! Network protection can be passwords, or firewalls, or anything that prevents closed data ports from being opened. An open port, in data security, is just something that allows packets of data to go in or out. A website has open ports so you can access the information on it, for example. If a poorly configured application on a computer has an open port, it's listening for information, and that can be used to get into the network, bypassing the encryption.
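To make "open port" a little more concrete, here is a minimal sketch of a TCP port check using Python's standard library. The host and port in the example comment are placeholders, and a real port scan is considerably more involved than a single connect attempt:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

# Example: a web server normally listens on port 443 (HTTPS).
# is_port_open("example.com", 443)
```

A firewall's job, in these terms, is to make sure that connection attempt fails for every port that shouldn't be answering.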

2.4 GHz vs 5 GHz

Some modems and routers allow two frequencies of Wi-Fi: a faster channel and a farther-reaching one. The 5 GHz channel is what you'll want for your video streaming – the frequency is higher, which means information reaches your device faster. The 2.4 GHz frequency is probably what the printer in the other room is best on: it's better at penetrating solid objects than 5 GHz and it has a larger range, but it's also slower. 2.4 GHz is also more prone to interference, because many devices use that frequency – microwaves, for example. If you've had issues with your Wi-Fi while the microwave is on, get that microwave checked! The odds are good its shielding is faulty.

Modem Vs. Router

What’s the difference? A router routes traffic from your network to the internet. It’s sometimes referred to as a WLAN (or a wireless local area network) device. Most houses have a router because of the number of network-enabled devices in a modern home. Printers are rarely connected by cable to a computer anymore, for example.

A modem, on the other hand, connects directly to the internet provider's line. Modems are hard-wired into cabled data lines, like telephone or coaxial lines – the name comes from "modulator-demodulator," since they translate between the line's signals and digital data. Pure dial-up modems are basically non-existent now, and in many homes the modem and router are combined into a single box.

Routers and Wi-Fi are here to stay, at least until the next big thing comes out!

Sources:

https://www.cisco.com/c/en/us/solutions/small-business/resource-center/networking/how-does-a-router-work.html#~how-routers-route-data

https://medium.com/@syakirharis25/modem-vs-gateway-vs-router-and-its-security-considerations-d853de8b4b31

https://www.scientificamerican.com/article/how-does-wi-fi-work/

https://science.nasa.gov/ems/06_microwaves

https://www.centurylink.com/home/help/internet/wireless/which-frequency-should-you-use.html

Emulators And The Legal Gray of AbandonWare

Elizabeth Technology April 23, 2024

What is an Emulator?

An emulator is a program that emulates a game console, usually for the purpose of playing a game that is – whether by price, age, or device – otherwise inaccessible. Streamers commonly use emulators to play Pokémon games made for the Game Boy, so they can screen-record their gameplay directly from their computer instead of having to somehow hook the Game Boy up to it. Zelda fans might want to play Ocarina of Time, but they might also find that the console to play it on is awfully expensive for one game, while an emulator is pretty cheap! In certain cases, games are geolocked – countries restrict access to certain forms of art as a means of censorship – and emulators can make those games accessible to people in those countries who want to play them.

In the 1990s, consoles were on top when it came to games. Computers were rapidly gaining in power, however, and some folks realized that the console could be recreated using a home computer. The first emulators were born via reverse-engineering console coding. They evaded legal action by only copying devices that were outdated, but that changed too with a major emulator made for the Nintendo 64 while it was still in production. Nintendo pursued legal action to stop the primary creators, but other folks who had already gotten their hands on the source code kept the project going.

Ever since then, emulators have lived in a strange space of both making games available and making them so available that the parent company decides to step in and try to wipe it out, which is nearly impossible once it’s out on the open web. Gamers simply won’t allow a good emulator to die!

Copyright

Copyrights are crucial to the gaming ecosystem, and it’s a delicate balance of allowing fan art, but disallowing unauthorized gameplay. Allowing game mods, but disallowing tampering that could lead to free copies being distributed against the company’s wishes. Allowing fun, but not theft. Copyright laws are always evolving – new tech comes with new ways to copy, create, and distribute intellectual property. Generally, though, copyright falls back to permission: did the original company intend for their IP to be used in this way?

Emulators and copyright don't get along very well at all! Emulators are, by their very definition, creating access to a game in a way the original company didn't intend. As such, it's unofficial, and if money is exchanged, it's normally not between the copyright-holding company and the customer, but between the customer and some unauthorized third party.

When you buy a game, you're not just buying the physical disc – you're buying a license to play the game. Taken as far as Xbox intended back when the Xbox One was coming out, friends are only allowed to come over and play on your license because the company can't enforce otherwise; it's a limitation of the system that they can't keep you from sharing discs or accounts.

Not every company thinks like this (see the PlayStation 5 and a number of more recent cases regarding digital content ownership), but that's the most extreme possible interpretation. You bought a disc so you could play a copy of their game that they licensed out to you. You own the right to play that copy of the game; you don't own the game itself.

Consider: Death of a Console

When a console dies, it’s taking all of its content with it. There is no more money to be made off of it, and the games are going to slowly disappear into collections and trash bins.

Does art need to exist forever, or is it okay if some art is temporary? Not every Rembrandt sketch survives – some were just sketches, and he discarded some of his own immature work himself. Immature art is interesting to see, but it's not what the artist wanted their audience to see; otherwise it would have been better kept. Think about the ill-fated E.T. game that Atari made: they weren't proud of it, they didn't want it seen, and they saw fit to bury it. So they buried it. Finding that game and playing it was directly against their wishes. Emulating it is obviously not what the copyright holder wants.

But then consider all the little games included on a cartridge that's since been lost to the sands of time, made by a programmer who didn't want them to fade away. Acrobat, also for the Atari, isn't very well-remembered, but it still made it onto Atari's anniversary console sold in stores – 97 games on that bad boy, and Acrobat was included. It's not a deep game; it's nearly a single-player Pong. But the programmers who made it didn't ask for it to be excluded from the collection, so some amount of pride must exist over it, right? Does a game have to be good to be emulated? Is only good art allowed to continue existing officially?

Is all art meant to be accessible to everyone?

If some art was made with the intent to last forever, is it disregarding the creator's wishes not to emulate it, even when their production company objects? If a corporate exec decides a completed work of art is better used as a tax write-off than released, is it better to listen to that exec, or to the dozens – perhaps hundreds – of people opposing the exec's will?

If art’s made to last forever but the artist (and society) accepts that that’s simply unrealistic, is it weird to emulate it, in the same way it’s weird to make chat-bots out of dead people?

When you get past the copyright, it’s a strange, strange world to be in.

Ethical Dilemma

Stealing goes against the ethics of most societies, modern or not. The case against emulators is that they enable stealing. They often do! An emulator/ROM combination (ROMs act as the 'disc' or 'cartridge' for the emulator) for Breath of the Wild was ready just a few weeks after the game launched, which could have seriously dampened sales if Nintendo hadn't stepped in to try and stop it. That first emulator, the one for the Nintendo 64, also drew a lot of negative attention for the same reason: potentially siphoning away vital sales.

However, there’s a case to be made for games and consoles that aren’t in production anymore.

Is this a victimless crime, if the original game company really can’t make any more money off of it? It’s one thing to condemn piracy when the company is still relying on that income to make more games and pay their workers, it’s another entirely when the game studio isn’t interested in continuing support, and the console had a fatal fault in it that caused many of them to die after 10 years. That game is as good as gone forever without emulators. With no money to be made, why not emulate it?

In less extreme circumstances, the console is still functioning, but the cartridges that went to it are incredibly rare. The company could potentially make money off the game if they someday decided to remaster it, but that's unknowable. Licenses could be available for purchase… but they aren't right now.

Or, even better: the cartridges are still available for purchase on the secondary market. You just don't happen to have the console, which has now spiked to a cost of 400 dollars due to dwindling supply. You buy the cartridge – if the game is the license and the console is the car, you're still buying the license, you just don't have the car, right?

According to copyright, you need the specific car for that specific license, but ethically, you've done the best you can as a consumer.

Brand Name

Much like Disney did with Club Penguin's many spinoffs, emulators are kind-of sort-of overlooked up until they start eating into sales. More aggressive companies will go after emulators before they blow up (see Nintendo challenging Yuzu, a beloved emulator), but most companies just don't want to spend money enforcing against emulators – their game is still being played, their brand is still out there, and users will be very upset if a big company steps in and ruins the fun when it doesn't need to. It may do more harm than good to try and wipe an emulator out when most people want to do the right thing.

Obviously, they'll need to put a stop to emulating new games – the goal is to spend just enough money to do that effectively without overstepping and destroying emulators for consoles no longer in production. It takes money to make games, and games should earn money as a result. Removing emulators for games and consoles no longer in production doesn't help them earn money – as such, many are allowed to stay. For now.

Sources:

https://www.pcgamer.com/the-ethics-of-emulation-how-creators-the-community-and-the-law-view-console-emulators/

https://scholarlycommons.law.northwestern.edu/njtip/vol2/iss2/3/

How To Handle A Hack: Blizzard in 2012

Elizabeth Technology April 2, 2024

In 2012, game developers were beginning to experiment with a principle known as "always on". "Always on" had many potential benefits, but the downsides keep the majority of games from ever attempting it. Many of the notable standouts are games that require team play, like Fall Guys or Overwatch; others without main-campaign team play, like Diablo 3 and some of the Assassin's Creed games, tend to fall behind. Lag, security holes, perpetual updating, etc. are all very annoying to the end user, so they'll only tolerate them where they're needed, like in those team games. It's hard to say this hack wouldn't have happened if Blizzard hadn't switched to an "always on" system… but some of their users only had Battle.net accounts because of the always-on.

Blizzard's account system was designed with their larger team games in mind. It was forward-facing, and internet speeds were getting better by the day; users were just going to have to put up with it, they figured. Users grumbled about it, but ultimately Blizzard was keeping data in good hands at the time. You wouldn't expect Battle.net accounts created purely to play Diablo 3 to fare better in a breach than the user profiles in the Equifax breach did, right? But Blizzard didn't drop the ball here! What did Blizzard do right to prevent a mass meltdown?

Hacker’s Lament

The long and short of it was that Blizzard's storage had multiple redundancies in place to A) keep hackers out and B) make the info useless even if it did end up in the wrong hands. Millions of people had lost data in similar events before, and security experts were becoming more and more crucial to keeping entertainment data safe. Blizzard had prepared for the worst while hoping for the best, so even when the worst struck, they weren't left floundering, telling people their credit cards were lost.

The actual hack was described by Blizzard as 'illegal access to our internal servers'. It released the listed emails of players (excluding China), the answers to security questions, and other essential identifying information about accounts into the wild. However, thanks to Blizzard's use of the Secure Remote Password (SRP) protocol, the passwords themselves were scrambled so thoroughly that the hackers might as well have been starting from scratch. That's still a problem, but it's not a world-ending, 'everyone has your credit card' problem. Changing the password on the account and enabling 2FA was considered enough to shore up security.

Potential Issues

Lost email addresses aren't as big of a problem as lost passwords, but they can still present an issue. Now that the hacker knows an email address was used on a particular site, they can attempt a dictionary attack, or regular brute-forcing! Given enough time, this strategy will eventually work, but the longer and more complicated the password is, the less likely it is to succeed on your account in particular.
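The math behind that is simple exponentiation: every extra character multiplies the number of candidates an exhaustive attack has to try. A rough sketch (the character-set sizes are illustrative):

```python
def brute_force_space(alphabet_size: int, length: int) -> int:
    """Candidate passwords an exhaustive attack must try in the worst case."""
    return alphabet_size ** length

# 8 lowercase letters vs. 12 characters drawn from letters, digits, and symbols
print(brute_force_space(26, 8))   # 208827064576 (~2.1e11 candidates)
print(brute_force_space(94, 12))  # ~4.8e23 candidates
```

A dictionary attack shrinks that space drastically by trying only likely passwords first, which is why "password123" falls instantly no matter how the math looks.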

A secondary problem is the lost security questions, which are a form of 2FA. Depending on the question asked, guessing an answer that works, or brute-forcing it, is dangerously easy. Sparky, Rover, and Spot are very popular names for American dogs, for example. If the hacker can identify that the player is American and then guess the name of their first dog, they're in! They can change the password to keep the legitimate player out. (Part of Blizzard's response was forcing users to change their security questions for this reason.) 2FA that uses email or mobile is generally preferred.

Battle.net acted as an overarching account for all the games, and made the stakes higher for an account breach. All the online Blizzard games went through Battle.net. Losing access could mean losing access to hundreds of hours of game progress. Or worse: credit card data and personal info.

Online, Always, Forever

The event provided ammo for anti-always-on arguments. There was no option to not have a Battle.net account if you wanted to just play Diablo’s latest game. Some users were only vulnerable as a result of the always-online system. If they’d simply been allowed to play it offline, with no special account to maintain that always-online standard, there wouldn’t have been anything to hack! Previous Blizzard games didn’t require Battle.net. People who stopped at Diablo 2 seem to have gotten off scot-free during the hack. This is annoying to many users who only wanted to play Diablo 3. They might not find value in anything else about the Battle.net system. Why bother making users go through all this work to be less secure?

When discussing always online, there are good arguments to be made for both sides. Generally, always on is better for the company, while offline gaming is better for the consumer. Always on helps prevent pirating, and it gives live data. Companies need data on bugs and player drop-off times, which can help them plan their resources better and organize fixes without disrupting the player experience.

On the other hand, consumers with poor internet are left out, as lag and bugs caused by poor connection destroy their gaming experience. As games move more and more to pure digital, buying a ‘used game’ only gets more difficult for the consumer. Companies treat purchased games as a ticket to a destination, rather than an object the consumer buys. Games used to be objects, where anybody could play the game on the disc even though save data stayed on the console. Buying access to Diablo 3 via Battle.net means that there’s no way to share that access without also allowing other people to access the Battle.net account, which stores the save data. It’s the equivalent of sharing the console, not just the disc.

Handling

The response to the stolen, scrambled passwords was for Blizzard to force-reset player passwords and security questions, just in case the hackers somehow managed to unscramble them.

2FA is always a good idea, and Blizzard strongly recommended it too. 2FA does a better job of alerting you than the default 'your password has been changed' email does after the fact – once you've received that email, the hacker is already in. Depending on when you notice, they could have harvested all the data and rare skins they wanted by the time your support ticket is filed! Setting up 2FA first means you're notified before that happens.

All in all, Blizzard handled this particular incident well! Companies are required to inform their users about potential online breaches, but some companies do this with less tact than others. Formally issuing an apology for the breach isn’t part of their legal requirements, for example. What made this response possible in the first place was Blizzard’s competent security team, alongside a set of policies that were strictly followed. Logs and audits in the system ensured that Blizzard knew who accessed what and when, which is critical when forming a response. Blizzard was able to determine the extent of the problem and act on it quickly, the ultimate goal of any IT response.

Sources:

https://us.battle.net/support/en/article/12060

https://us.battle.net/support/en/article/9852

https://www.forbes.com/sites/erikkain/2012/08/09/its-official-blizzard-hacked-account-information-stolen/?sh=2ecadbc955d1

https://medium.com/@fyde/when-too-much-access-leads-to-data-breaches-and-risks-2e575288e774

https://www.bbc.com/news/technology-19207276

Pirating Is a Crime

Elizabeth Technology March 26, 2024

Piracy is a crime. Don't pirate things. The law is serious about it, and there are real reasons for that beyond "big music corps are people too".

Why are the fines so steep?

Piracy seems victimless. In reality, the victims are just barely affected by each individual instance, up until the cumulative effect starts to erode their desire to create. Art has a price, and if folks aren't willing to pay it, art disappears. Not all of it, of course, but the niche, unusual, and otherwise less profitable stuff goes by the wayside.

Fines are a strong motivator for many people – the main goal is to make piracy so undesirable that nobody does it for fear of the fines, not for the fear of being a thief (or “thief”, depending on how you define copyright violation). Many people don’t see anything actually wrong with stealing content from big name artists. What would the harm be? They aren’t really wrong, but they’re not right – they won’t be affecting that artist very much by themselves, and the amount missing from their art consumption is maaaybe a couple of pennies.

For example, Pharrell only made something like $2,000 on Spotify when he was #1 on the Top 40. Pirating that song would cost him maybe a twentieth of a cent, more in potential lost sales if you were intending to buy it on iTunes but went to LimeWire instead. However, now that Spotify doesn't monetize songs with under 1,000 streams a year, listening outside a legitimate channel can make a bigger difference to smaller artists. It's like littering: if everyone left their trash at the park, the park would close for cleanup. One person is just an inconvenience to the groundskeeper. One plastic bottle won't ruin the park's water, but dozens will, and the rangers only need to catch one to get some of the others to stop. Fines keep litterers and minor pirates alike in check. If everyone thinks 'my trash won't hurt', you get a trashed park. If every pirate thinks 'my pirating won't hurt', you get musicians and moviemakers on strike.

Besides, fines for piracy are massive. Up to $250,000, and possible jail time!

Who are you actually going to hurt?

Small artists who get ripped off with copyright breaches and stolen songs are the people on the cutting edge of new. New music, new tech, new art – small artists create things you won't find in Bed Bath & Beyond, or on the Top 40. Cost these people money, and you're destroying a complicated ecosystem of inspiration and passion projects that the Top 40 is not looking to recreate. Layer Ariana Grande songs over each other, and you'll discover patterns you didn't notice before – patterns the producers definitely did notice, having gone down a checklist to get that song out and onto the charts.

Small bands don't have the same resources. When something sounds good, it's because they made it sound good by themselves – you're rewarding individual talent by not pirating. Tame Impala – effectively one man, Kevin Parker – didn't have access to a recording studio for the first album. He wrote the songs himself. He mixed them himself. The same goes for Billie Eilish and any number of other bedroom musicians (musicians who record their music in their bedroom). No disrespect to Ariana Grande, but she can't make albums with the creative freedom that a bedroom band can. The people who invested in her can't afford a flop, so she always gets breathy, poppy, peppy songs with high notes. It's her strength, so it's all she gets to release. She has creative input, but not a lot of control.

Pirating wouldn’t directly affect her unless everybody started pirating. It would take significantly less to accidentally crush something like early (early!!!) Tame Impala, or early Billie Eilish, and you might not hear anything like them ever again.

Don’t pirate the music if you want more of it!

Movies: More Serious

Pirating movies is even more serious. Theaters run on tight margins to keep tickets cheap – that's why a cup of popcorn is six dollars. Concessions cover the operating costs, while the ticket mostly pays for the theater's rental of the reel from the studio.

The studio puts its money towards making back the budget of the film, and if the film does well enough, there may be a sequel. Trolls, for example, did well enough for studios to invest in Trolls: World Tour. The same goes for Tenet, and for Sonic. They made enough money back that the studio wants to keep the gravy train running. Not all sequels are good – and some may say that money shouldn’t be running art – but the world we live in has these rules. More money = more creation. Many talented artists literally cannot afford to create art full-time if they aren’t being paid for it.

Now assume pirating eats into that profit. One guy copies the file and sends it around, and a bunch of people see the pirated version on disc or as a download – and then don't want to spend money to see it again. Pirating takes the movie off the watchlists of hundreds or thousands of people without actually funding the movie. That wouldn't necessarily have ruined Sonic or Tenet, but for an indie project, it can be devastating.

Pirating can happen at the theater, too! You think you're watching a legitimate copy of Fast and Furious 8, but the owner pirated it from a connection who got it early for review. That theater makes blockbuster-movie money, and the studio sees none of it. Stuff like that is why the fines are so huge – that owner would gladly do it again for a $2,000 fine. Illegitimate rental places were also a real problem: Blockbuster franchises (and small locally-owned rental stores) making illegal copies of recent hits were a profit-killer.

And as small bands suffer more than big bands, so too do small movie studios. Some of the wildest, most creative movies ever pushed to the big screen come out of small studios. The group that made Coraline, for example, is relatively small compared to Disney or Pixar. Pirating a newly released movie en masse could seriously dampen their funding for the next movie even if it wouldn’t make a dent for Disney.

It’s cumulative. They won’t catch everyone who pirates… but they’ll get enough to be a deterrent. Good art comes from protecting the artists who made it!

Sources: https://variety.com/2020/film/news/trolls-world-tour-streaming-theatrical-window-future-1234573263/

Sony’s DRM Nightmare

Elizabeth Technology March 21, 2024

In 2005, word got out that an organization had been covertly installing a program similar to a rootkit onto consumer devices without warning. For those who haven't heard the term before, a rootkit is simply a program designed to remain unfindable on a device. Rootkits aren't all bad, but their difficult-to-detect nature and their ability to evade even aggressive anti-virus software make them a top-of-the-line tool for hackers. Back to the story.

The rootkit was on the lookout for ‘suspicious activity’, and if it detected any, it would quietly alert the parent company. However, even if you had nothing to hide, you still had something to fear: the rootkit left a gaping security hole, and a smart enough hacker could piggyback off of it to get Trojan Horses, Worms, and other nasty bugs in without alerting the computer that “hey, there’s an .exe file doing weird stuff!”

The rootkit was designed to hide itself, and it would hide the bugs behind it. There was no mention of this anywhere in the EULA for the program that carried the rootkit. The parent company hadn't meant to leave a backdoor, but they did, and attempts to fix it without removing their own program only made the problem worse. The "uninstaller" they offered merely hid the program deeper in the system, and actually trying to uninstall it could brick the computer, depending on which version you got. They'd really screwed themselves, and they hadn't expected to get caught.

This wasn’t some Russian hacking scheme, or some government overreach – it was Sony, attempting to keep copyrighted material off of pirating websites. Talk about an overreaction.

The History

At some point, a company has to admit it would rather ruin the legitimate user’s experience than let a pirate go unpunished. That’s very understandable: stealing is wrong, and smug pirates behaving like they’ve gotten one over on ‘the system’ are frustrating. Ordinary responses to this can be anything from asking for the license # on the inside of the clear case to more subtly ruining the audio quality of pirated copies. This is a normal level of copyright protection. Very determined pirates could still get around these measures, but hey, you can’t spend all your resources on the fringe cases.

Companies are aware of this, and some factor 'unstoppable piracy' into their calculations – you know, like grocery stores factor in 'lifting loss' and spoiled produce. Companies usually determine they'd be spending more on preventative measures than they'd be keeping on the shelves. Theft is wrong, but so are littering and driving without a license. Somehow, all three still happen anyway. Sony, though, is very mad that pirates are getting away with fresh content, and it wants to do the equivalent of TSA pat-downs on everybody at the exit of the grocery store to stop a small percentage of thieves. They don't care anymore; nobody is going to get away with it.

Was it Reasonable?

Napster and LimeWire are making inroads into the music industry's profit, with the mid-2000s as the peak. Pirating copyrighted content has only gotten easier with the rise of the internet, and Sony realizes it's nigh impossible to find the illegitimate downloaders; uploaders are only marginally easier. So they decide to go for the source – and they decide to hit hard.

“The industry will take whatever steps it needs to protect itself and protect its revenue streams… It will not lose that revenue stream, no matter what… Sony is going to take aggressive steps to stop this. We will develop technology that transcends the individual user. We will firewall Napster at source – we will block it at your cable company. We will block it at your phone company. We will block it at your ISP. We will firewall it at your PC… These strategies are being aggressively pursued because there is simply too much at stake.” – Sony Senior VP Steve Heckler

Heckler said this in 2005, after Sony had merged with another company, BMG. BMG had had an incident in Europe in the early 2000s, when it released a CD without warning users of the copyright protection inside. Apparently, burning money to replace those CDs (and burning goodwill) was not enough of a lesson, and Sony and BMG together prepared to take a stand against pirates.

The Problem

They’re going after the big boys, the folks downloading music to upload everywhere else…for free.

These are the people depressing profits, in theory. Some companies theorize that once these people are gone, the people passively pirating by downloading stuff from them will also disappear and go back to buying the content. They’re somewhat right, and this audience shrinks over time. More on that later.

This is illegal and very annoying! The estimated lost sales from piracy ran into the billions, and many companies were beginning to look at more intense DRM: Digital Rights Management (or, as its critics prefer, "Digital Restrictions Management").

To some people, DRM is the root of all evil, the seed of the eventual downfall of consumers' rights. After Sony's screw-up, they were right to call it as such. John Deere, Apple, Sony, Adobe, and others are all slowly eating away at their own best features for the sake of pushing users into proprietary software – software users aren't allowed to repair because of DRM. Take Deere: if a new Deere tractor detects a third-party repairman's diagnostic software, it will stop working until you call out a Deere technician. This obviously drives up demand for Deere technicians, and it's horribly restrictive to the user. Lawsuits are in progress right now over this, because the obvious result is that Deere can cost you your farm.

To others, DRM is an essential part of the free market. Companies should be allowed to protect what they made, and if users find their methods extreme, they shouldn’t have bought it. And in less extreme circumstances, they’re right! That’s what the EULA, the End User License Agreement, is for. The user can decide if they’re willing to put up with the DRM specified in the Agreement, and if they’re not, they don’t have to buy it. ‘If you pirate this, it will only play static’ is reasonable.

Sure, some super-cheapskate who found a sketchy download off some sketchy site is going to listen to static with Hint of Music, but the average user would rather buy the disc and be done with it. If the company can make the ripped upload sound like garbage when it’s off its home CD, they won. The company has successfully used DRM here to keep their honest customer honest, and any would-be pirates away. And they did it without destroying either computer! As Stewart Baker of the Department of Homeland Security said, “it’s your intellectual property – it’s not your computer”.

Doing it this way means normal consumers still get a high-quality product, and if the DRM is limited entirely to the content itself, there’s no risk of it coming back to bite the company in the butt.

Still, if you really disagree with DRM, plenty of companies successfully reduced their piracy problems in other ways. Some found that guilt was enough; others found that once certain websites were gone, their piracy problems disappeared too. Warning folks that piracy was still a crime got the people who didn't know any better to stop, and fines did a number on those too bold or too careless to avoid being tracked by non-DRM means. As for the people pirating because it was more convenient? They eased off when better legal options became available. Sony's problem could have been solved in a lot of ways!

Besides, Sony wasn’t struggling. Lost sales are not the same as losses! Companies are still making profit, just not as much as they’d like. Property is not being damaged, and nobody is experiencing physical harm as a result of pirating.

The Response

Sony's DRM was a severe overreaction to the problem at hand, and it led to several lawsuits. As said at the beginning, Sony had not only installed software without the user's knowledge, but had then left a big entry point for security threats to get in undetected. Hundreds of thousands of networks were affected, some of them government. Once someone blew the lid off the DRM, Sony released a cover-up "uninstaller" that just hid the rootkit better and installed more DRM content on the user's device.

This does not help!

The blown cover for the rootkit meant that black-hat hacking organizations could tool around and create something that could get into anything with that rootkit on it, undetected. Eventually Sony was forced to admit this was wrong, but not before screwing over a couple million people who just wanted to listen to Santana or Celine Dion from a CD they paid for. Over pirates.

Yeah, there’s some lost profit – but it doesn’t outweigh the regular customers.

The Aftermath

Sony's first instinct is to hide it. As mentioned above, the available uninstaller didn't actually uninstall anything, and some users reported system crashes and outright bricking when the uninstaller's poor programming interacted with the rest of the device's software.

Their second decision is to lie – ‘the DRM has no backdoors and doesn’t pose a risk to your computer’s security’. This is demonstrably untrue, and given that they were already in the beginning stages of recall, could be considered a deliberate lie.

Sony's third action is to recall the discs with the DRM on them, but they don't get all of the discs back. Some users aren't sure whether their disc is affected, and even non-profit organizations dedicated to maintaining a free internet can't figure out which discs have it and which don't – the best they can do is a partial list. Stores in New York and Boston are still selling the discs three weeks after the recall. However, users do get to swap their disc for an unprotected one through the mail. Sony seems to have acknowledged its screw-up at this point.

Sony's fourth action is more a consequence – they stick a class-action lawsuit sign-up notice on their home website, and affected users can claim damages up until 2006. Class-action lawsuits filed by individual states start to drag down Sony's profits more than the piracy ever did, and the end result is a mandate to put warnings on the covers of discs and to stop using DRM that could damage a user's computer. DRM is still allowed; it just can't be capable of destroying a computer to protect a song license. The feds considered the rootkit a breach of federal law and stated that Sony was engaging in deceptive and unfair business practices. Sounds about right – consumers wouldn't have bought a disc that installed DRM without their knowledge. From conception to execution, this was a moral, ethical, and legal mistake. While pirating is wrong, it's possible to be more wrong trying to stop it.

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal

https://us.norton.com/internetsecurity-malware-what-is-a-rootkit-and-how-to-stop-them.html

https://www.wired.com/2006/12/sony-settles-bm/

https://www.theregister.com/2005/11/01/sony_rootkit_drm/

https://money.cnn.com/2005/06/24/news/international/music_piracy/

https://www.networkworld.com/article/2998251/sony-bmg-rootkit-scandal-10-years-later.html

https://fsfe.org/activities/drm/sony-rootkit-fiasco.en.html

https://digitalscholarship.unlv.edu/cgi/viewcontent.cgi?article=4058&context=thesesdissertations

https://www.networkworld.com/article/2194292/sony-bmg-rootkit-scandal–5-years-later.html

Don’t Delete Your System32

Elizabeth Technology March 14, 2024

System 32 is essentially the heart of the computer’s software. Task manager, the boot-up instructions, and hardware-to-software system files are all located in the System 32 file folder. It’s very important. Do not delete it.

This folder is not a secret, but what exactly it's responsible for wasn't always public knowledge. After all, Windows keeps everything very neat and tidy; everything from photos and documents to games and applications stays in its own little cubby hole. The actual System 32 folder is a couple of folders deep already – exploratory digging might lead someone to find it, but why would they ever delete it? That was Microsoft's approach: make everything the user wants easy to find, so only experts and programmers ever have to think about System 32. Even better, Windows resisted casual deletion – the folder wouldn't delete with a simple left-click, and the system would (usually) keep working even if pieces ended up in the recycle bin; a user couldn't delete this folder without serious work. The hope was that most people would never even notice it.

They were right, and this was enough. For a time.

The Beginning

It’s the mid to late 2000s, and anonymous internet message boards are largely unrecognized and somewhat unmoderated. It serves as the Wild West of the internet, the last dark corner in a time where the rest of said internet is at least glimpsable with Google. Computers are expensive, but not Hope Diamond expensive, and the thought that someone would tell an un-monitored kid online to break theirs just for the heck of it was kind of absurd. Keyword: un-monitored. Underage children were getting into all sorts of sites they shouldn’t have, including internet messaging boards.

Knowing this, the people falling for the system32 prank are obviously not all just gullible adults.

Interim Growth

The site responsible for the meme (at the time) made it very clear that this was not a place for children, and the nature of the site’s set-up made it nigh impossible for the average user to be tracked or traced by another user. No username? No IP tracking? Zero consequences. There were mods, but the mods were few in number, and more concerned with activities that were genuinely very illegal and could lead to the site’s shut-down. Users convincing strangers to mix chemicals together or de-magnetize their hard drive was less pressing unless it also resulted in something illegal.

The meme really got going when one user came back to complain that their computer would no longer start after they followed one of the first troll posts. That post gave instructions on how to delete System 32 while framing it as 'bloatware' (unwanted software that slows a device down). If you have no idea what makes a computer run, it sounded like good advice.

When the trolls caught on that some versions of Windows would refuse to delete System 32 outright, they moved on and started including console commands, something the average user (at the time) had no experience with. Someone with little or no knowledge of the subject wouldn't know what they were looking at: a button press, some typing, and an @echo command. Easy to follow… too easy.

Mainstream Dilution

Instructions for deleting System 32 to ‘speed up the computer’ or ‘make the computer quieter’ appeared on more public sites some time in 2008. I Can Haz Cheezburger is likely the largest at this point, a forum centered around funny images of cats and other assorted animals, with a penchant for memes including advice, good or bad. Soap Ice, the idea that you could freeze Dawn dish soap and water in a puck of ice, and then use it to ‘shower’ after a trip to the gym or park, was one of these ‘advice’ memes. This does not work for the reasons you’d expect, but it’s less likely to kill someone than bathroom cleaner ‘crystal’ hacks. ‘Advice’ to delete System 32 was a natural fit, and it spread like wildfire.

With the meme's spread onto bigger, more strictly moderated websites, articles start coming out advising people not to delete System 32. Even better, memes start circulating on sites like I Can Haz Cheezburger to warn users directly. It doesn't stop all of it – no good-advice meme can stop a person determined to use a hack like Soap Ice – but it puts a major dent in the spread. With fewer people taking the bait, and others ready to comment 'don't do this!' wherever the posts appear, the meme finally slows down, eventually to a crawl. "Delete System 32" is now used ironically, because knowledge of it is so widespread that someone not knowing is rare.

And so the rise and fall of a meme is recorded. This was one of the first of its kind, but it's far from the last.

Memory Terms

Elizabeth Technology March 7, 2024

The first Bit of Data

A bit is a single character in binary; the word comes from shortening "binary digit". A bit is the simplest possible piece of data a machine can read, and is either a 1 or a 0. A yes or a no. True or false. The bit has been around longer than computers, originating with the punched cards that analog machines "read" as far back as the 1700s.

Processing

If you've recently upgraded to Windows 10, you may recall having to check whether your computer is 32-bit or 64-bit. The number describes how much memory the computer's processor can address: is it equipped to use 32-bit memory addresses, or 64-bit ones? A 32-bit computer has fewer possible memory addresses in its CPU registers – 2^32 of them, or not much more than 4 GB's worth – while a 64-bit computer can address 2^64, which works out to about 16 exabytes in theory (in practice, operating systems support far less; some 64-bit Windows editions topped out around 2 TB). This doesn't mean a 32-bit computer can only store 4 GB of data; it just means it can only give out 4 GB worth of addresses at once. The files themselves can be nearly any size as long as there's storage available for them.
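As a quick sanity check on those numbers, the address-space arithmetic can be worked out in a few lines of Python (a sketch; real operating systems reserve chunks of this range, so practical limits are lower):

```python
# How many distinct byte addresses an n-bit-wide address can name.
def addressable_bytes(bits: int) -> int:
    return 2 ** bits

GB = 1024 ** 3  # bytes in a gibibyte
EB = 1024 ** 6  # bytes in an exbibyte

print(addressable_bytes(32) / GB)  # 4.0  -> about 4 GB for 32-bit
print(addressable_bytes(64) / EB)  # 16.0 -> about 16 EB for 64-bit
```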

Then, a Byte

A byte is usually eight bits, in compliance with international standard – but it didn't always have to be. A byte used to be however long it needed to be to show a character on screen, usually somewhere between two and ten bits, with exceptions down to one and up to forty-eight bits for certain characters. Eight-bit bytes became the standard because of their convenience for the new generation of microprocessors in the 70s: eight bits in binary allow 256 possible patterns of ones and zeroes. Sixteen bits would give far more possibilities than needed and could slow the computer down, while four bits would mean combining groups of bits anyway to get more than 16 or so characters.
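The pattern-counting behind that trade-off is just powers of two:

```python
# Distinct values an n-bit unit can represent: 2**n.
for n in (4, 8, 16):
    print(f"{n} bits -> {2 ** n} patterns")
# 4 bits  ->    16 patterns (too few for a full character set)
# 8 bits  ->   256 patterns (plenty for letters, digits, punctuation)
# 16 bits -> 65536 patterns (overkill for 1970s hardware)
```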

Alphabet

Eight sounds like the perfect combination of length and possible complexity, at least with the benefit of hindsight. The government had struggled with incompatible byte sizes across branches before 8-bit came along. ASCII was the compromise, at seven bits per character, and when commercial microprocessors arrived in the 1970s, everyone compromised again with Extended ASCII, so that commercial and government systems could communicate.

However, not all Extended ASCII versions contained the same additions, so Unicode was formed later to bridge the gaps between versions. Unicode is a character encoding standard that includes the ASCII characters as its first 128 code points, and its eight-bit-byte encoding, UTF-8, is one of the most common character encodings out there. You'll still run into encoding mismatches – if you've ever opened an article and seen little boxes or question marks where characters should be, the text was written with a larger character set than the one used to display it. The reader doesn't know what goes there, so it puts a placeholder.
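Python makes the difference easy to see. A minimal sketch, where `?` plays the role of the "box" placeholder:

```python
text = "café"

# UTF-8 can encode any Unicode character; 'é' takes two bytes.
print(text.encode("utf-8"))                    # b'caf\xc3\xa9'

# Strict ASCII has no 'é' at all; a lenient encode substitutes
# a placeholder, much like the boxes you see in a mismatched viewer.
print(text.encode("ascii", errors="replace"))  # b'caf?'
```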

Kilobyte

1,000 bytes of storage form a kilobyte, or KB. This is the smallest unit of measure the average computer user is likely to see written out on their device – not much can be done with less than 1,000 bytes. The smallest document I can currently find on my device is an Excel file with two sheets and no equations in it; it takes up 9 KB. A downloadable "pen" for an art program on my device takes up 2 KB.

Computers before Windows had about 640 KB to work with, not including memory dedicated to essential operations.

The original Donkey Kong machines had approximately 20 kilobytes of content for the entire game.

Megabyte

A megabyte is 1 million bytes, or 1,000 kilobytes. Computers had made some progress post-relays, moving to hard disks for internal memory. IBM's first computer containing a megabyte (or two) of storage, the System 355, was huge. It was also one of the first models to use disk drives, which read faster than tapes. In 1970, users who didn't want a fridge could invest in the now desk-sized 3 million bytes on IBM's Model 165 computers – an improvement over GE's 2.3 million bytes the year before. The year before that, Univac had unveiled a machine with separate cores tied together, giving users between 14 and 58 megabytes of capacity, at the cost of space, per Byte Magazine. IBM's System 360 could reach up to 233 megabytes with auxiliary storage, but its size was… prohibitive, reminiscent of that first System 355.

Tapes and drums were competitive with the disk format for a while, but ultimately disk and solid state improved faster and won out (right now it’s looking more and more like SSDs, those solid state drives, will outcompete disks in the future too). During the 80s, the technology improved so much that hard disks became standard (IBM released a home computer with 10 MBs of storage in 1983) and floppy disks acted as media transport.

DOOM comes out in 1993 and takes up 2.39 MB for its downloadable file, with smaller, DLC-like packs of fan-created mods coming out along the way.

Gigabyte

A gigabyte is 1 billion bytes, or 1,000 megabytes. In 1974, IBM releases a 20-foot-long beast of a storage system that stores up to 236 GB of data on magnetic tape. In 1980, IBM releases another fridge – but this one stores up to a gigabyte of information on disk! According to the Merriam-Webster dictionary, you can pronounce gigabyte as "jig-ga-bite", which just… feels wrong.

In 2000, the first USB sticks (memory sticks, jump drives, etc…) are released to the public with 8 megabyte capacities, and they’re so convenient that floppy disk ports begin disappearing from computer designs in favor of USB ports. USB sticks then improve exponentially, and soon have capacities of one, two, and four Gigabytes while floppies struggle to keep up.

Besides being smaller and harder to break, those USB sticks also store more. Where the first USB sticks held 8 MB, the standard floppy disk at the time could only hold 1.44 MB. Even DOOM, small as it is, spans two floppy disks, but fits on a single USB stick with room to spare. By 2009, USB sticks with capacities of 256 GB were available on the market. That's roughly 178,000 floppy disks.
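Working that comparison out (using the decimal megabytes and gigabytes that storage marketing uses):

```python
import math

FLOPPY_MB = 1.44          # standard 3.5-inch floppy disk
DOOM_MB = 2.39            # DOOM's downloadable file
USB_2009_MB = 256 * 1000  # a 256 GB stick, in decimal MB

print(math.ceil(DOOM_MB / FLOPPY_MB))      # 2 floppies for DOOM
print(math.ceil(USB_2009_MB / FLOPPY_MB))  # 177778 floppies for the stick
```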

Terabyte

A terabyte is 1 trillion bytes, or 1,000 gigabytes. The first commercial drive with a capacity of one terabyte was sold in 2007 by Hitachi, a Japanese construction and electronics company. The movie Interstellar, released in 2014, featured a depiction of a black hole known as Gargantua – and became famous when it closely resembled the first actual picture of a black hole, captured by the Event Horizon Telescope collaboration in 2019. A ring of light surrounds the black hole in two directions: one due to friction-heated material Gargantua has accumulated, one due to the lensing of light around it. The gravity is so intense that light itself is pulled into orbit around Gargantua's event horizon and kept there. It took 800 terabytes to fully render the movie and make Gargantua somewhat accurate in terms of light-lensing.

Petabyte

A petabyte is 1 quadrillion bytes, or 1,000 terabytes. This is typically cluster storage, and while it's available for purchase, it's very expensive for the average consumer. For comparison, while rendering Interstellar took 800 terabytes, storing the finished film at standard quality takes about 1/200th of a terabyte (roughly 5 GB). You could store approximately 200,000 DVD-quality copies of Interstellar in a petabyte. It took a little less than 5 petabytes of raw data to take a picture of the real black hole, M87.
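The copy-count arithmetic, for the skeptical:

```python
PETABYTE_GB = 1_000_000  # decimal gigabytes in a petabyte
MOVIE_GB = 5             # a DVD-quality film: about 1/200th of a terabyte

print(PETABYTE_GB // MOVIE_GB)  # 200000 copies per petabyte
```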

Sources:

https://en.wikipedia.org/wiki/Bit

https://kb.iu.edu/d/ahfr

http://www.differencebetween.net/technology/software-technology/difference-between-unicode-and-ascii/

https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3155B.html

https://www.pcworld.com/article/127105/article.html

https://www.wired.com/2014/10/astrophysics-interstellar-black-hole/

https://www.merriam-webster.com/dictionary/gigabyte

https://www.nasa.gov/mission_pages/chandra/news/black-hole-image-makes-history

https://www.jpl.nasa.gov/edu/news/2019/4/19/how-scientists-captured-the-first-image-of-a-black-hole/

The Train That Breaks Itself

Elizabeth Technology February 15, 2024

If you've paid any attention to big tech in the last several years, you'll probably know that Apple is on the verge of switching to USB-C for phones, since USB-C is easier and more accessible for the average EU citizen to acquire than Apple's Lightning chargers. It's not just Apple being forced to change for the sake of the customer – the shareholder system at large is constantly at odds with the end user's right to buy a complete, sturdy product: one that wasn't designed to break a few months down the road so an official BrandProduct shop can charge over market rate to fix it. Thanks to the EU's legal interventions, Apple (and many others) cannot continue to sell a product that only they can make chargers and power supplies for, that only they can update, and that they can choose to brick whenever they feel the user needs to move on to the next phone.

The Newag train scandal is particularly egregious given this context!

Big parts of Europe rely heavily on trains for both passenger and freight transit, and trains are expensive to make and repair; once the state has invested money into infrastructure and the trains themselves, they won’t simply be switching brands on a whim. This already gives the company a massive amount of leverage over their contractors.

Newag is one such train company. Allegedly, like Apple, Newag figured that regular repair and maintenance were good places to squeeze a bit more money out of the customer, and set up a bit of code within the train's computer that would throw errors and stop the train from working if anyone but a Newag shop touched it. Keep in mind that train repair shops are already incredibly niche, and repairs to trains come out of taxpayer money – being thrifty by going to an independent shop is an obligation when the money isn't your own. Worse, even if the shop didn't need to fix anything in the train's computer, Newag's trains are GPS-enabled, and if a train spent too long at an independent repair station, it would still mysteriously stop working.

Of course, Newag denies this heavily – they even went as far as trying to sue the group that discovered this quirk, Dragon Sector, into shutting up about it. They then suggested it was the work of cybercriminals and not Newag itself, which might make sense if this were ransomware stopping the train entirely, rather than only when the train didn't stop at a Newag shop or get its special unlock code. The odds are stacking up against the company; the evidence points too clearly toward predatory practices for them to get out of an investigation.

Sources:

https://arstechnica.com/tech-policy/2023/12/manufacturer-deliberately-bricked-trains-repaired-by-competitors-hackers-find/

Optical Memory

Elizabeth Technology January 30, 2024

Optical storage is defined by IBM as any storage medium that uses a laser to read and write information. The use of lasers means more information can be packed into a smaller space than magnetic tape could manage (at the time)! Better quality and longer media time are natural results. A powerful laser burns information into the surface of the media, and a weaker reading laser can then decipher the burnt areas into usable data. The recording surface is usually some sort of easily burnt metal or dye sandwiched between protective layers of plastic; burning produces 'pits', less reflective areas, for the laser to read.

This is why fingerprints and scratches can pose such a problem for reading data: even though a fingerprint doesn't damage the actual data layer, the way scratching a hard drive platter would, it prevents the laser from reading the data through it. Scratch up the plastic layer above the dye, though, and the data's as good as destroyed.

Data destruction can be even more thorough than that. Shredding the disc in a capable paper shredder (ONLY IF IT SAYS IT CAN SHRED DISCS) destroys the data, as does microwaving the disc (don't do that – most discs contain some amount of metal, and microwaving it can damage your microwave badly enough to be dangerous).

CDs

“Burning a CD” replaced “making a mix tape” when both CDs and downloadable music were available to teenagers, and for good reason. The amount of content may be roughly the same, but the quality is significantly higher.

Most recordable CDs are CD-Rs – discs that can be written only once but read until the end of time. (A CD-ROM, despite the similar name, is a pressed, read-only disc manufactured with its data already on it.) The average CD-R has room for about an album's worth of music, and maybe a hidden track or two – about 74 to 80 minutes, depending on the manufacturer of the disc. Alternatively, if you'd like to store data instead of high-quality audio, you'll get about 700 MB onto a single disc.
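The 80-minutes-of-audio versus 700-MB-of-data split looks odd until you run the numbers: audio sectors on a CD skip a layer of error correction that data sectors carry, so the same disc holds more raw audio bytes than file bytes. A quick sketch with standard CD audio parameters:

```python
# Red Book CD audio parameters (standard values).
SAMPLE_RATE = 44_100  # samples per second
SAMPLE_BYTES = 2      # 16-bit samples
CHANNELS = 2          # stereo

bytes_per_second = SAMPLE_RATE * SAMPLE_BYTES * CHANNELS
audio_bytes = 80 * 60 * bytes_per_second  # 80 minutes of audio

print(bytes_per_second)         # 176400 bytes per second
print(audio_bytes / 1_000_000)  # 846.72 -> ~847 MB of raw audio
```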

To burn a CD, you’d need an optical drive that’s capable of also lasering information into the disc, which wasn’t always the standard. The laser will burn the information into the metal-dye mix behind the plastic coating the outside of the disc, which permanently changes how reflective those sections are. This makes it possible to visually tell what has and hasn’t been used on a disc yet, and CD-Rs can be burnt in multiple sessions! Data is typically burnt from the center outwards.

But everybody knows about CD-Rs. What about CD-RWs, their much fussier brethren?

CD-RW

The primary difference between a CD-R and a CD-RW is the recording layer the optical drive reads: CD-RWs use a material that can be reset rather than a permanently burnt dye. CD-RW marks are also less pronounced than CD-R pits, so they take a more sensitive reader – early disc readers sometimes can't read more modern CD-RWs as a result!

To reuse the disc, one has to blank it first (the same drive that can write a CD-RW should also be able to blank it), which takes time. After it's been wiped, new data can be written to the disc again. Because of this, CD-RWs wear out quicker than other memory media: that wafer-thin recording layer can only handle being rearranged so many times before it loses the ability to hold data. It's pretty unlikely that the average user would hit the re-write limit, but it's more possible than on, say, a hard drive, which has a re-write life about 100 times longer than a CD-RW's.

DVDs

DVDs store significantly more data than CDs do, even though they take up about the same physical space. Where a CD holds about 700 MB, a DVD can hold up to 4.7 GB. That's enough for most movies, but if a movie is especially long or has a lot of extra features, it has to be double-layered; a dual-layer DVD can store up to 8.5 GB. Why can it hold so much more in the same space?

The long answer is that there are a number of small differences that ultimately lead to a DVD having more burnable space, including a closer ‘laser spiral’ (the track a laser burns, like the grooves in a vinyl record), as well as smaller readable pockets. It all adds up into more data storage, but a more expensive product as well.

DVD +R DL

That double-layering mentioned earlier isn’t present on every disc. Sometime in the later 2000s, double layer discs hit the market at about the same price as single layer discs (although that changed over time). The first layer that the laser can read is made of a semi-transparent dye, so the laser can penetrate it to reach the other layer.

Most modern DVD drives can read dual-layer discs, but if your computer is especially old, it's wise to check its specs first: drives made before dual-layer discs were released may not recognize the second layer, and drives that can read them may still be unable to write to them. DLs are a great invention; it's just a struggle to find good disc readers now that everything is switching to digital.

Compatibility

CD players usually can't also play DVDs. CDs came first, so a CD player would have to be forwards compatible; obviously, that would have taken a time machine to assemble. Picture expecting a record player to read a CD! The gap between the two is almost that large. Nowadays, the manufacturing standard seems to be a DVD player with CD compatibility tacked on. You should double-check before you buy a disc reader to be sure it can do everything you want it to, but CD-only tech is less common now that a DVD reader is only slightly more expensive to make and works backwards.

FlexPlay Self-Destructing Entertainment

Remember FlexPlay self-destructing entertainment? The disc that was meant to simulate a rental, and could have generated literal tons of trash per family per year? The self-destructing medium the disc was coated in turned very dark red to thwart the disc reader's laser. The pits aren't directly on the surface of a DVD; they're under a couple of layers of plastic. All FlexPlay had to do was sandwich an additional layer of dye between the plastic and the metal/dye layer being inscribed upon. Once that dye obscures the data below it, the data is as good as gone: the laser can no longer get through to read it. Even Blu-Ray tech was thwarted by the dye.

Blu-Ray

Blu-Ray discs have higher visual quality than DVDs because they hold even more information. The blue-laser technology lets the pits sit even closer together, so more optical data can be crammed into the same space: blue light has a shorter wavelength than red light, which shrinks the necessary pit size. A single-layer Blu-Ray disc can hold up to 25 GB of information! Blu-Ray discs are most commonly used for entertainment media rather than storage. Disc readers need hardware built for that blue laser, not just updated programming: an ordinary DVD player may be able to play a CD, but its red laser couldn't fully resolve a pit in a Blu-Ray disc before that pit passed the reader.
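The wavelength-to-pit-size relationship above can be made concrete with a standard optics rule of thumb: the smallest spot a laser can focus to is roughly the wavelength divided by twice the lens's numerical aperture. The wavelengths and apertures below are the commonly cited specs for each format, included here as assumptions rather than figures from the article:

```python
# Approximate diffraction-limited laser spot size, d ~ wavelength / (2 * NA).
# Wavelengths and numerical apertures are commonly cited specs (assumptions).

FORMATS = {
    "CD":      {"wavelength_nm": 780, "numerical_aperture": 0.45},  # infrared
    "DVD":     {"wavelength_nm": 650, "numerical_aperture": 0.60},  # red
    "Blu-Ray": {"wavelength_nm": 405, "numerical_aperture": 0.85},  # blue-violet
}

def spot_size_nm(wavelength_nm, numerical_aperture):
    """Smallest focused spot the laser can resolve, in nanometers."""
    return wavelength_nm / (2 * numerical_aperture)

for name, spec in FORMATS.items():
    print(f"{name}: ~{spot_size_nm(**spec):.0f} nm spot")
# The Blu-Ray spot covers roughly a fifth of the area of a DVD spot,
# which is why its pits can sit so much closer together.
```

The shorter wavelength and the stronger focusing lens work together: a Blu-Ray spot is less than half the diameter of a DVD spot, which squares into roughly a fifth of the area.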

Right now, the state of the art is Blu-Ray: most good Blu-Ray readers are backwards compatible with DVDs and CDs. However, many companies still sell ordinary DVDs alongside their Blu-Ray releases due to cost. If you have a DVD player, you can probably hold off on upgrading, at least for a little while longer.

Sources:

https://www.britannica.com/technology/optical-storage

https://www.dell.com/support/kbdoc/en-us/000149930/what-are-the-different-cd-and-dvd-media-formats-available

http://www.osta.org/technology/cdqa13.htm

https://www.scientificamerican.com/article/whats-a-dvd-and-how-does/

https://kodakdigitizing.com/blogs/news/cd-vs-dvd-how-are-they-different

http://recordhead.biz/difference-blu-ray-dvd/

https://www.dell.com/support/kbdoc/en-us/000147805/guide-to-optical-disk-drives-and-optical-discs