

Tricking Apple Customers With A Fake Download

Elizabeth Technology May 21, 2024

Apple’s pretty famous for being difficult to write viruses for. Essentially, for something to slip onto an Apple device, it has to be so small and so powerless that it’s nearly worthless as a virus. Apple takes pride in this. It’s very rare for a virus to infect many devices before Apple notices and puts a stop to it!

What Happened?

A virus dubbed “Silver Sparrow” by security firm Red Canary snuck onto devices via fake “update” downloads. Essentially, it tricked victims into believing that they couldn’t view certain content without updating their Flash player, and the prompt helpfully provided the download so they could update right then and there. This was not a Flash update – it was a .pkg file masquerading as one! This is a common trick, but it’s not the only way these ‘updates’ end up on machines. If a box pops up asking for permission to download something even though you didn’t click anything requesting an update, don’t allow it. Legitimate programs will never do that!

Red Canary also notes that ads and malicious search results may have had a hand in the virus’s extreme reach – unsecured websites can carry viruses in images and ads, so if a hacker figures out a site will host ads for anybody, they can use that as a launch gate.

Beyond the “how”, Silver Sparrow is currently classified as non-specific malware – an “activity cluster”. That just means a set of files contains the code to carry out the attack, but the attack doesn’t fall neatly into one category over others. Identification only goes as far as “not adware” right now, but this may change as more is learned about the virus!

Reason to Fear?

It doesn’t actually look like the new virus did anything. Yet. Unfortunately, viruses like these are usually used to set up a wide-scale attack at a later date. The goal is to infect as many computers as possible without firms like Red Canary noticing, and then kill or encrypt the infected machines all at once. Researchers don’t yet know if this is what Silver Sparrow was going to do, but it certainly seems odd that this incredibly quiet virus was installing itself in places just to sit there indefinitely.

Alternatively, this could have been a sort of ‘test run’. Whoever made Silver Sparrow included a self-destruct that should have triggered by itself. It’s possible the creators were looking to gather some numbers before actually launching a more dangerous malware that could deliver a payload. Red Canary currently has an estimate of just under 30,000 Apple devices infected, but the number may grow as new infection indicators are discovered. After all, something with a self-destruct will occasionally manage to get it right!

Once Apple was alerted to the problem, it revoked the certificates Silver Sparrow had been using illegitimately and began developing an action plan to keep viruses like this one out in the future. Revoking those certificates should be enough to keep Silver Sparrow from infecting more devices. Red Canary currently recommends a solid anti-malware tool on top of what Apple’s OS already has, both to prevent copycat viruses and to boost security generally.

The virus is still pretty scary, even though it didn’t do much more than sit quietly. Its compatibility with the M1 chip, its evasion of Apple’s MRT, and its high infection rate are all reasons to keep an ear to the ground if you’re a Mac owner.

Define “High-Stealth”

The virus had a self-destruct function built in, but it seems like it didn’t actually get to activate it in a lot of cases. The virus was supposed to come into contact with a different part of the library that would contain the code it was looking for to trigger the self-destruct. It’s possible the thing was hiding a little too well, to its own detriment.

Notably, it runs on the M1 chip, something malware’s not supposed to be able to do. That may have contributed to how difficult it was to identify. The chip itself is pretty young, and researchers have determined that the virus may have begun infecting devices as early as three years ago, meaning Silver Sparrow is part of a very exclusive club right now.

No activity that triggered the built in antivirus + self-destruct + small size = high stealth!

What Is MRT?

An MRT, or Malware Removal Tool, is designed to remove threats to the computer in the background without the user noticing. It can create problems with CPU usage, and it means there’s less flexibility in downloading files than Windows gives, but the security the tool gives consumers is worth it – especially for folks who don’t know computers all that well and may not understand how to browse the web safely. The MRT has a library of known viruses, and it combines that knowledge with programming designed to combat new and unknown ones.
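The “library of known viruses” part can be sketched in miniature. This is a hedged, minimal illustration of signature-based detection – hashing a file and checking it against a set of known-bad hashes – not how Apple’s MRT actually works internally; the sample data and function names here are invented for the example.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Fingerprint a file's contents with SHA-256."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature library: hashes of files already known to be bad.
known_bad_sample = b"stand-in bytes for a known malware binary"
SIGNATURES = {sha256_hex(known_bad_sample)}

def looks_known_bad(file_bytes: bytes) -> bool:
    """Signature check: does this file match anything in the library?"""
    return sha256_hex(file_bytes) in SIGNATURES

print(looks_known_bad(known_bad_sample))    # True
print(looks_known_bad(b"a harmless file"))  # False
```

Real scanners pair a library like this with heuristics for unknown threats, since changing even a single byte of the malware defeats a pure hash match.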

As said before, Apple’s pretty difficult to write viruses for. The MRT certainly contributes, but the OS itself boosts this difficulty to a point where most hackers and cyber criminals don’t even try. It’s not impossible, but malware has to be custom-fitted for Macs. Windows viruses are just easier to make, and there are more Windows devices than Macs, especially in the business world.

Don’t Click Random Ads – And Don’t Download Things

It’s unfortunate, but if a website isn’t sourcing its ads from a large, trusted vendor like Google, it likely can’t vet every ad it sells space to. Anti-virus should help protect devices against ad intrusions, but what about everything else?

For other issues, like clicking links, the unfortunate answer is that it comes down to ‘street smarts’ – something employees and regular computer users need some training on. What looks suspicious to one user may not seem suspicious at all to another! Free-to-play games, for instance, might trick a child, while a “recipe.exe” forwarded from a chain email might catch an older adult who doesn’t know what different file extensions mean.

If you’re struggling to separate good downloads from bad, listen to your device and carefully review the download. Is it what it says it should be (i.e., recipe.pdf instead of recipe.exe)? Do the publisher’s credentials match the site you got it from? Does your computer throw a fit when you try to download it, or warn you that the file may be from an unverified third party?
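Checking “is it what it says it should be” can go beyond the extension: most formats begin with fixed “magic” bytes. A minimal sketch – the signature list here is only a small sample of real-world formats:

```python
# Extensions can be renamed freely, but a file's leading "magic" bytes
# usually betray its real format. A few well-known signatures:
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png image",
    b"\xff\xd8\xff": "jpeg image",
    b"%PDF": "pdf document",
    b"MZ": "windows executable",
}

def sniff(header: bytes) -> str:
    """Guess a file's real format from its first bytes."""
    for magic, name in MAGIC.items():
        if header.startswith(magic):
            return name
    return "unknown"

# A "recipe.pdf" that is secretly an executable gives itself away:
print(sniff(b"MZ\x90\x00\x03"))  # windows executable
print(sniff(b"%PDF-1.7\n%"))     # pdf document
```

Plenty of tools do this automatically (it’s how `file` on Unix works), but the principle is worth knowing: the bytes don’t lie even when the name does.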

When in doubt, you can always Google the alert you’re getting – and err on the side of caution!


Being Too Smooth To Use

Elizabeth Technology May 16, 2024

Breaking rank with other companies to make things smoother can certainly set your product apart, but is there a point where something becomes too sleek to use?  

Tesla Handles

Most models of the Tesla car have handles that physically retract into the doors when not in use. Inside the car, the handles operate by a button press, not by a pull. You are not mechanically opening the car door, you are instructing it to open, and that’s an important difference. Both sets of handles require that the car has power; otherwise, they won’t function.

Famously, one man struggled to get out of his car after it caught fire because the handles inside don’t operate like the handles of any other car, and a special release latch hidden behind the door grab is needed to open the car when it has no power. He couldn’t find that latch because it’s hidden (for added sleekness), and as a result he had to crawl through the window. Of course, Twitter commenters pointed out the latch afterward, but if you can’t visually identify the thing that’s going to open your flaming car in a few seconds, is it really a ‘good’ design? Sure, it’s fine if the car dies, the button doesn’t work, and you have time to figure it out – it doesn’t work so well in an emergency. Similarly, the doors of the new Cybertruck stop working if the electronics do, requiring the occupant to manually disassemble part of the door and pull on a specific wire to get out. The same powerless-door problem was implicated in the death of billionaire Angela Chao, who was unable to escape her Tesla after it rolled into a pond.

The handles are also more prone to freezing over in cold climates, which is very annoying. Plenty of car doors freeze shut, and this is far from a Tesla-only problem, but it turns an already annoying problem into an even more annoying one because the handle has to be freed from its pocket in the door before you can even begin to try opening it.

Apple and Its Missing Jack

Apple removed the aux jack from its devices. Did it need to? Maybe – the jack took up quite a bit of space inside the phone thanks to its placement, and removing it let Apple put some more cool stuff inside. But then the phones got bigger, and the storage chips got smaller even as they held more digital storage space. Does this mean Apple will put the jack back in, seeing as it no longer needs to conserve space the way it did when it was trying to make phones that broke technological walls? The phones are flipping huge now; there is space for the jack.

Haha, no!

Removing the aux jack also made it so that any non-Bluetooth headphones the consumers had wouldn’t work without an adaptor. An adaptor that Apple just so happens to sell. An adaptor that has the same problems that all of the cords made by Apple do. This means that a number of accessories are now effectively Bluetooth-only, which is annoying at best and kind of malicious at worst. When carriers pushed the new phone, users had to upgrade everything if they wanted to go to the next model. Apple happens to sell a lot of those accessories, and while Apple may be pricey, the name does still carry weight – it means a defective product could be returned to a physical store or exchanged immediately without waiting for Amazon to retrieve it.

The phone is sleeker. It has fewer ports. It’s closer to being truly waterproof than it ever has been before. It looks cooler than ever. But the minimalist principles in the design of the phone are directly costing consumers both real money and ease of use. Apple knows this – Apple likes it that way. Eventually, there may come a time when Apple removes the USB-C port and expects you to use cordless charging, with its proprietary charging pad.

Windows 10

Windows wants you to use Bing. Windows wants to add functionality to your taskbar. Windows has combined the built-in taskbar search feature with the open web in an effort to do both of these things. Unfortunately, it turns out this configuration combines the worst of both. Have you ever had a relative who doesn’t use computers much? For a long time, you could rest assured that a search on the Windows taskbar wouldn’t somehow end with that relative downloading a browser extension they didn’t need or clicking on an ad they mistook for a file on their computer.

When Windows made it possible to search for both ‘on-web’ and ‘on-computer’ pages in the same search bar, it also created a massive headache and added extra clicks to the search. Trying to search for a file named something like ‘car report’ could bring up search results for sites like Carfax. Suddenly, you’re not in your files digging around for a report that was already made, you’re on the web. That’s annoying, but you can just go back and try again. If you’re really desperate, you can open the file picker and search there. It doesn’t work for everything on the computer (it doesn’t want you to be able to find and delete critical components like System32 or Task Manager, so it won’t show you their file locations, and the file picker isn’t equipped to open them for you the way taskbar search is, even if you do find them), but it’s better than the mess you just got into with the search bar.

But wait – go back to that relative from before. For that relative, this was a linear path that makes sense, and the website must have what they were looking for because it popped up in their search. Every iteration of Windows before this one has worked by only showing the relevant files on the device, so they don’t know that they aren’t meant to be on the Carfax website. If they don’t stop to call in help, they may end up filling out a form on that site they didn’t need to, or giving up information they might not have wanted to. Imagine how much that could suck if it wasn’t the car report – taxes, Social Security, health insurance, any number of things that might be saved on a computer, could simply be confused with an ad on their accidental Bing search.

It should say something about how poorly this worked out that there are dozens of pages on forums and blogs detailing how to disable it so this exact thing won’t happen – or happen again. Windows 11 at least gives you the opportunity to turn it off, and once it’s off, you have to go out of your way to get to web results in the regular taskbar search. A search function where everything can show up in the same place is not always better.

What’s the Difference Between a .jpg and a .png?

Elizabeth Technology May 14, 2024

Loss, mostly.

Picture this: it’s 2005, you’re online, and you go to save a funny image to your family computer so you can send it to a family member later. The image saves, but when you go to open it again, you notice the image is a little grainier than it had appeared on the website. You shrug and brush it off since the image is still clearly legible, but then that family member does the same thing: they save the image from your email to send it to a friend they have across the state. That friend opens it, and it’s a little grainier than before. Repeat. Add grain. Repeat. Add grain. Eventually, the picture is a mess: seemingly random squares of color and gray splotches are everywhere, and the colors in spots that aren’t all glitchy are different.

So what happened?

Under Compression

Data is often compressed before it’s stored or sent somewhere. Compressing a file means it takes up less storage space, which improves response time. However, there are different methods of compression depending on what kind of content you’re dealing with.

Lossless compression replaces long bits of data with shorter bits, while lossy compression deletes bits and pieces outright. If you open a losslessly compressed file, it is put back together exactly as it was; lossy files are still missing pieces.
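The lossless half of that is easy to see in a few lines. Here zlib stands in purely as an illustration of lossless compression (images use different codecs): repeated sequences shrink dramatically, and decompression restores every byte exactly.

```python
import zlib

# Lossless compression: repeated byte sequences are replaced with
# shorter codes, and decompression restores every byte exactly.
original = b"the same phrase over and over, " * 50
packed = zlib.compress(original)

print(len(original) > len(packed))          # repetition compresses well
print(zlib.decompress(packed) == original)  # round-trip is exact
```

A lossy codec, by contrast, would have no way to print `True` on that second line – the discarded pieces are simply gone.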

The Curse of the JPG

Certain image formats are more focused on storage space than on the quality of the image. Generally, most people don’t have a problem with this, since saving an image once to send it somewhere (or hang on to for reference) doesn’t cause too much loss. Loss in photo terms means that some of the information in the photo was, well – lost. Jpgs can normally get away with this at first; lossy compression, after all, looks for unimportant parts to delete first during compression. At worst some of the shadows might get a touch harsher and some of the lines a little blurrier.

If it’s saved again as a .jpg, it’s compressed again and more data is lost from the image, blurring it a little more every time, which leads to that unique “.jpg rainbow” sometimes seen around text that was black but slowly turned red, blue, and green.
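JPEG’s real pipeline (block transforms plus quantization) is more involved, but the cumulative effect of lossy re-saving can be mimicked with a toy lossy step: smooth neighboring samples, then round to a coarse grid. Every number below is invented purely for illustration.

```python
def lossy_save(samples, step=8):
    """Toy lossy 'save': blur neighbors (detail lost), then
    quantize to multiples of `step` (precision lost)."""
    n = len(samples)
    blurred = [
        (samples[max(i - 1, 0)] + samples[i] + samples[min(i + 1, n - 1)]) // 3
        for i in range(n)
    ]
    return [step * round(v / step) for v in blurred]

def total_error(a, b):
    """How far a degraded copy has drifted from the original."""
    return sum(abs(x - y) for x, y in zip(a, b))

image = [(i * 37) % 256 for i in range(64)]  # fake one-row "photo"

saved_once = lossy_save(image)
saved_ten_times = image
for _ in range(10):
    saved_ten_times = lossy_save(saved_ten_times)

# Each re-save drifts further from the original: generation loss.
print(total_error(image, saved_once) < total_error(image, saved_ten_times))
```

Saving once loses a little; saving ten times compounds the damage, which is exactly the grainy-email-chain effect described above.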

The PNG Files

Fun fact: .png files were actually made to replace .gif files, whose compression algorithm was patented by Unisys at the time. A patent dispute gave us a better photo format, as .gif files aren’t fantastic at recreating colors accurately.

A .png is better suited for basically everything except storage space, which is a small trade-off if you’re making graphics for things that you sell. Company logos, professional headshots, images that may need to be resized larger – all of these are better saved as a .png. The format also supports transparency, meaning there’s no white square hiding behind the image if that’s how you saved it, like there always is with .jpg files no matter what you do.

Long story short: .png is better for graphics that have to look a certain way, and .jpg is better for casual photos that are allowed to get a little blurry.


The New Internet Is Full of Bots

Elizabeth Technology May 2, 2024

Ever see a bizarre post with a comments section full of people spamming emotes or otherwise responding in a way that suggests they read a description of the post, but didn’t actually see it? Interaction bots have been here for a while, of course, but now, with AI art (rather than stolen art), it becomes obvious these are actually bots and not people.

What Is An Interaction Bot?

Firstly, in this area, ‘bot’ refers to a bit of code that does something. What the bot does depends on its creator’s goal – some bots sit and ‘watch’ videos to boost view count, others scrape data from websites to analyze it, and some do things like scroll, interact with buttons, and leave simple, plausibly human-sounding comments on posts online. An interaction bot is meant to be a substitute for real human interaction on a post. Since many social media sites now offer moneymaking opportunities based on views or likes, and since everyone likes feeling popular, this is a problem that said social media sites have been fighting since internet points were invented.

Every time some new ‘tell’ makes the bots easier to purge, the bot makers come up with another way to thwart moderators. When bots were getting too specific with likes, the bot makers told them to like a handful of other posts before they started interacting with the desired post, and to stagger when the interactions happened so they didn’t all hit at once. When the comments got too repetitive, a library of  comments scraped from places like Reddit started re-appearing in comment sections. It’s easy to borrow human habits, and we’re at a point where an uninterested user is borderline indistinguishable from a bot pretending to be a human, at least just by looking at their browsing habits.

The goal of some bots is to get a lot of followers to follow one account so that account can then be used to sell the new followers something, whether that be a political belief or an actual product. Even on services where views are not tied to money, those eyes are still useful. The way most algorithms work, a popular post becomes more popular because the website shows those popular posts around to new people who might not have seen it. It does this because the popular post in question created engagement, and if the website can keep you engaged, you’ll stay on longer and see more ads. Having bots enter this ring and artificially boost the popularity of certain posts has resulted in a strange new kind of post dominating Facebook. Where a post had to be written by people, and a picture had to at least be stolen from a real person in the past, the widespread availability of ChatGPT and image generators makes some of these fake posts stick out like a sore thumb.

ChatGPT and Image Generators

You can tell a bot to ask MidJourney or Dall-E to generate an image, and then put that image into a Facebook post with a caption you pre-wrote. Once you set it up, you don’t even have to check on it. Once the post has been put up, other bots show up to comment on it or like it, whether they’re yours or someone else’s.

This has resulted in posts like Spaghetti Jesus or The 130-Year-Old’s Peach Cream and Filling Birthday Cake getting hundreds of comments all saying “Amen!” or “Looks Good!”, with maybe a dozen people asking what everybody is talking about, because the picture usually looks terrible and fake. This isn’t a case of tech-illiterate folks seeing something obviously bizarre and giving it a ‘like’ anyway – these people don’t exist. The better fakes may fool a couple of real people, but the strange ones certainly aren’t (look at the pictures The Verge has collected as examples).

We’ve circled around! This new generation of bots is so advanced that, when given the chance to show off the state-of-the-art tech entering the market, they do it without question and accidentally pull back the curtain in the process.

What To Do?

Unfortunately, managing this issue as a user on the web is basically impossible. Even if you keep bots from following your accounts, you’re not immune to seeing bot-run accounts when you’re searching or scrolling. Instead, the best thing you can do is just refuse to engage with engagement bait – when something asks you to say “Heck yes!” in the comments, or leave a like if you love X hobby, you can ignore it, and avoid accidentally propping up bot accounts trying to get big. As for imagery, the bizarre spaghetti creatures and uncanny peach cake bakers are only going to get better – we’re entering a phase of the internet where pictures must be assumed to be fake and verified before they are treated as real, the opposite of what most internet users are accustomed to. On forums like Reddit or Tumblr, a user must look at the comments before taking a post as fact, because upvotes and comments are not necessarily the sign of quality they used to be when the internet was young and lacked bots. It’s a strange new world out there, and the bots are part of it now, for better or worse.

Emulators And The Legal Gray of AbandonWare

Elizabeth Technology April 23, 2024

What is an Emulator?

An emulator is a program that emulates a game console, usually for the purpose of playing a game that is – whether by price, age, or device – inaccessible. Streamers commonly use emulators to play Pokemon games made for the Gameboy, so they can screen-record their gameplay directly from their computer instead of having to somehow hook the Gameboy up to it. Zelda fans might want to play Ocarina of Time, but they might also find that the console to play it on is awfully expensive for one game, where an emulator is cheap or free! In certain cases, games are geolocked – countries restrict access to certain forms of art as a means of censorship. Emulators can make those games accessible to people in those countries who want to play them.

In the 1990s, consoles were on top when it came to games. Computers were rapidly gaining in power, however, and some folks realized that the console could be recreated using a home computer. The first emulators were born via reverse-engineering console coding. They evaded legal action by only copying devices that were outdated, but that changed too with a major emulator made for the Nintendo 64 while it was still in production. Nintendo pursued legal action to stop the primary creators, but other folks who had already gotten their hands on the source code kept the project going.

Ever since then, emulators have lived in a strange space of both making games available and making them so available that the parent company decides to step in and try to wipe it out, which is nearly impossible once it’s out on the open web. Gamers simply won’t allow a good emulator to die!


Copyrights are crucial to the gaming ecosystem, and it’s a delicate balance of allowing fan art, but disallowing unauthorized gameplay. Allowing game mods, but disallowing tampering that could lead to free copies being distributed against the company’s wishes. Allowing fun, but not theft. Copyright laws are always evolving – new tech comes with new ways to copy, create, and distribute intellectual property. Generally, though, copyright falls back to permission: did the original company intend for their IP to be used in this way?

Emulators and copyright don’t get along very well at all! Emulators are, by their very definition, creating access to the game in a way the original company didn’t intend. As such, it’s unofficial, and if money is exchanged, it’s not normally between the copyright holder company and the customer, it’s the customer and some third unauthorized party.

Games aren’t selling you just the physical disk – you’re buying a license to play the game. If you take it as far as Xbox intended back when the Xbox One was coming out, friends are only ‘allowed’ to come over and play on your license because the company can’t enforce otherwise; it’s a limitation of the system that they can’t keep you from sharing disks or accounts.

Not every company thinks like this (see the Playstation 5 and a number of more recent cases regarding digital content ownership), but that’s the most extreme possible interpretation. You bought a disk so you could play a copy of their game that they have licensed out to you. You own the right to play that copy of the game, you don’t own the game itself.

Consider: Death of a Console

When a console dies, it’s taking all of its content with it. There is no more money to be made off of it, and the games are going to slowly disappear into collections and trash bins.

Does art need to exist forever, or is it okay if some art is temporary? Not every Rembrandt sketch survives – some of them were just sketches, and he obviously discarded some of his own immature art. Immature art is interesting to see, but it’s not what the artist wanted their audience to see; otherwise it would have been better kept. Think about the ill-fated E.T. game that Atari made: they weren’t proud of it, they didn’t want it seen, and they saw fit to bury it. So they buried it. It was directly against their wishes for people to find this game and then play it. Emulating it is obviously not what the copyright holder wants.

But then consider all the little games included on cartridges that are simply forgotten to the sands of time, made by programmers who didn’t want them to fade away. Acrobat, also for the Atari, isn’t very well-remembered, but it still made it onto Atari’s anniversary console sold in stores. 97 games on that bad boy, and Acrobat was included. It’s not a deep game – it’s nearly a single-player Pong. But the programmers who made it didn’t ask for it to be excluded from the collection, so some amount of pride must exist over it, right? Does a game have to be good to be emulated? Is only good art allowed to continue existing officially?

Is all art meant to be accessible to everyone?

If some art is made with the intent to last forever, is it disregarding the creator’s wishes to not emulate it, against their production company’s wishes? If a corporate exec decides a work of art is better used as a tax writeoff than launched even though it’s already complete, is it better to listen to that exec, or the dozens – perhaps hundreds – of people opposing the exec’s will?

If art’s made to last forever but the artist (and society) accepts that that’s simply unrealistic, is it weird to emulate it, in the same way it’s weird to make chat-bots out of dead people?

When you get past the copyright, it’s a strange, strange world to be in.

Ethical Dilemma

Stealing goes against the ethics of most societies, modern or not. The case against emulators is that emulation is stealing. It often is! An emulator/ROM combo (ROMs act as the ‘disc’ or ‘cartridge’ for the emulator) for Breath of the Wild was ready just a few weeks after the game launched, which could have seriously dampened sales if Nintendo hadn’t stepped in to try and stop it. That first emulator, the one for the Nintendo 64, also drew a lot of negative attention for the same reason: potentially siphoning away vital sales.

However, there’s a case to be made for games and consoles that aren’t in production anymore.

Is this a victimless crime, if the original game company really can’t make any more money off of it? It’s one thing to condemn piracy when the company is still relying on that income to make more games and pay their workers, it’s another entirely when the game studio isn’t interested in continuing support, and the console had a fatal fault in it that caused many of them to die after 10 years. That game is as good as gone forever without emulators. With no money to be made, why not emulate it?

In less extreme circumstances, the console’s still functioning, but the cartridges that went to it are incredibly rare. The company could potentially make money off of the game if they someday decided to remaster it, but that’s unknowable. Licenses could be available for purchases… but they aren’t right now.

Or, even better, the cartridges are still available for purchase on the secondary market. You just don’t happen to have the console, which has now spiked to a cost of 400 dollars due to reduced supply over time. You buy the cartridge – you’re still buying the license, you just don’t have the car to drive it in, right?

According to copyright, you need a specific car for a specific license, but ethically, you’ve done the best you can as a consumer.

Brand Name

Much like Disney did with Club Penguin’s many spinoffs, emulators are kind-of sort-of overlooked up until they start eating into sales. More aggressive companies will go after emulators before they blow up, but most companies just don’t want to spend money enforcing an issue like emulation – their game is still being played, their brand is still out there, and users are going to be very upset if a big company steps in and ruins the fun when it doesn’t need to (see Nintendo challenging Yuzu, a beloved emulator). It may do more harm than good to try and wipe an emulator out when most people want to do the right thing.

Obviously, they’ll need to put a stop to emulating new games – the goal is to spend just enough money to do that effectively without also overstepping and destroying emulators for consoles no longer in production. It takes money to make games, games should earn money as a result. Removing emulators for games and consoles no longer in production isn’t helping them earn money – as such, many are allowed to stay. For now.


How To Handle A Hack: Blizzard in 2012

Elizabeth Technology April 2, 2024

In 2012, game developers were beginning to experiment with a principle known as “always on”. “Always on” had many potential benefits, but the downsides keep the majority of games from ever attempting it. Many of the notable standouts are games that require team play, like Fall Guys or Overwatch. Others without main-campaign team play tend to fall behind, like Diablo 3 and some of the Assassin’s Creed games. Lag, security holes, perpetual updating, etc. are all very annoying to the end user, so they’ll only tolerate them where they’re needed, like in those team games. It’s hard to say that this hack wouldn’t have happened if Blizzard hadn’t switched to an “always on” system… but some of their users only had accounts because of the always-on.

Blizzard’s account system was designed with their larger team games in mind. It was forward-facing, and internet speeds were getting better by the day; users were just going to have to put up with it, Blizzard figured. Users grumbled, but ultimately Blizzard was keeping their data in good hands at the time. You might expect accounts created purely to play Diablo 3 to be guarded less carefully than the user profiles in the Equifax breach – yet Blizzard lost far less data. What did Blizzard do right to prevent a mass meltdown?

Hacker’s Lament

The long and the short of it was that Blizzard’s storage had multiple redundancies in place to A) keep hackers out and B) make the info useless even if it did end up in the wrong hands. Millions of people had lost data in similar events before, and security experts were more and more crucial to keeping entertainment data safe. Blizzard was preparing for the worst and hoping for the best, so even when the worst struck here, they weren’t left floundering telling people they lost their credit cards.

The actual hack was described by Blizzard as ‘illegal access to our internal servers’. It released players’ listed email addresses (excluding China), the answers to security questions, and other identifying information about accounts into the wild. However, thanks to Blizzard’s use of the Secure Remote Password (SRP) protocol, the stored passwords themselves were scrambled so thoroughly that the hackers might as well have been starting from scratch. This is still a problem, but it’s not a world-ending, ‘everyone has your credit card’ problem. Changing the password on the account and enabling 2FA was considered enough to shore up security.
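SRP itself is a full challenge–response protocol, but the property that saved Blizzard’s users – the server storing only a salted, one-way verifier instead of the password – can be sketched with an ordinary slow password hash. This is a hedged illustration of that property, not Blizzard’s actual implementation:

```python
import hashlib
import hmac
import os

def make_record(password: str) -> tuple[bytes, bytes]:
    """Store a random salt plus a slow one-way digest - never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the attempt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = make_record("hunter2")
print(check("hunter2", salt, digest))  # True
print(check("hunter3", salt, digest))  # False
```

An attacker who steals `salt` and `digest` must still grind through guesses one at a time, and the per-user salt keeps precomputed tables from helping – which is why stolen “scrambled” passwords are a much smaller disaster than stolen plaintext ones.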

Potential Issues

Lost email addresses aren’t as big of a problem as lost passwords, but they can still present an issue. Once a hacker knows an email address was used on a particular site, they can run a dictionary attack or plain brute forcing against the account! Given unlimited guesses, brute forcing will eventually land on any password, but the longer and more complicated the password is, the less practical the attack becomes.

A secondary problem is the lost security questions. Those are a form of 2FA, and depending on the question asked, guessing something that works or brute forcing it is dangerously easy. Sparky, Rover, and Spot are very popular names for American dogs, for example. If the hacker is able to identify that the player’s American, and then guess the name of their first dog, they’re in! They can change the password to keep the legitimate player out. (Part of Blizzard’s response was forcing users to change their security questions for this reason.) 2FA that uses email or mobile is generally preferred. Battle.net acted as an overarching account for all of Blizzard’s online games, which made the stakes higher for an account breach: losing access could mean losing hundreds of hours of game progress. Or worse: credit card data and personal info.

Online, Always, Forever

The event provided ammo for anti-always-on arguments. There was no option to skip the Battle.net account if you wanted to just play Diablo’s latest game, so some users were only vulnerable as a result of the always-online system. If they’d simply been allowed to play offline, with no special account to maintain that always-online standard, there wouldn’t have been anything to hack! Previous Blizzard games didn’t require one – people who stopped at Diablo 2 seem to have gotten off scot-free during the hack. That’s galling for users who only wanted to play Diablo 3 and found no value in anything else the system offered. Why make users go through all this work just to be less secure?

When discussing always online, there’s good arguments to be made for both sides. Generally, always on is better for the company, where offline gaming is better for the consumer. Always on helps prevent pirating, and it gives live data. Companies need data on bugs or player drop-off times, which can help them plan their resources better and organize fixes without disrupting the player experience.

On the other hand, consumers with poor internet are left out, as lag and bugs caused by poor connection destroy their gaming experience. As games move more and more to pure digital, buying a ‘used game’ only gets more difficult for the consumer. Companies treat purchased games as a ticket to a destination rather than an object the consumer owns. Games used to be objects: anybody could play the game on the disc, even though save data stayed on the console. Buying access to Diablo 3 via Battle.net means there’s no way to share that access without also sharing the account, which stores the save data. It’s the equivalent of sharing the console, not just the disc.


The response to the stolen, scrambled passwords was for Blizzard to force-reset player passwords and security questions, just in case the hackers somehow managed to unscramble them.

2FA is always a good idea, and Blizzard strongly recommended it too. 2FA does a better job of alerting you than the default ‘your password has been changed’ email, which only arrives after the fact. By the time you’ve received that email, the hacker is already in, and depending on when you notice, they could have harvested all the data and rare skins they wanted before your support ticket is even filed! Setting up 2FA first means you’re notified before that happens.

All in all, Blizzard handled this particular incident well! Companies are required to inform their users about potential online breaches, but some companies do this with less tact than others. Formally issuing an apology for the breach isn’t part of their legal requirements, for example. What made this response possible in the first place was Blizzard’s competent security team, alongside a set of policies that were strictly followed. Logs and audits in the system ensured that Blizzard knew who accessed what and when, which is critical when forming a response. Blizzard was able to determine the extent of the problem and act on it quickly, the ultimate goal of any IT response.


Pirating Is a Crime

Elizabeth Technology March 26, 2024

Piracy is a crime. Don’t pirate things. Rights holders are serious about it, and there are real reasons for that beyond “big music corps are people too”.

Why are the fines so steep?

Piracy seems victimless. In reality, the victims are just barely affected with each instance, up until the cumulative effect starts to affect their desire to create. Art has a price, and if folks aren’t willing to pay it, art disappears. Not all of it, of course, but the niche, unusual, and otherwise less profitable stuff goes by the wayside.

Fines are a strong motivator for many people – the main goal is to make piracy so undesirable that nobody does it for fear of the fines, not for the fear of being a thief (or “thief”, depending on how you define copyright violation). Many people don’t see anything actually wrong with stealing content from big name artists. What would the harm be? They aren’t really wrong, but they’re not right – they won’t be affecting that artist very much by themselves, and the amount missing from their art consumption is maaaybe a couple of pennies.

For example, Pharrell only made something like $2,000 on Spotify while he was #1 on the Top 40. Pirating that song would cost him maybe a twentieth of a cent, more in potential lost sales if you were intending to buy it on iTunes but went to LimeWire instead. However, now that Spotify doesn’t monetize songs with fewer than 1,000 annual streams, skipping the legitimate channel can make a bigger difference to smaller artists. It’s like littering: if everyone left their trash at the park, the park would close for cleanup. One person is just an inconvenience to the groundskeeper. One plastic bottle won’t ruin the park’s water, but dozens will, and the rangers only need to catch one to get some of the others to stop. Fines keep litterers and minor pirates alike in check. If everyone thinks ‘my trash won’t hurt’, you get a trashed park. If every pirate thinks ‘my pirating won’t hurt’, you get musicians and moviemakers on strike.
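The back-of-the-envelope math makes the littering analogy concrete. The per-stream figure below is a commonly cited ballpark, not a published Spotify rate, and the artist only keeps a slice of it after label and distributor cuts:

```python
# Assumed average payout per stream -- a widely cited ballpark, not an
# official Spotify figure.
per_stream = 0.004  # dollars

one_pirate = per_stream * 1  # one person pirating one listen
print(f"One skipped stream: ${one_pirate:.4f}")  # fractions of a cent

# The cumulative effect is what actually matters:
a_million = per_stream * 1_000_000
print(f"A million skipped streams: ${a_million:,.2f}")
```

One pirate really is a rounding error; a million of them is a salary.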

Besides, fines for piracy are massive. Up to $250,000, and possible jail time!

Who are you actually going to hurt?

Small artists who get ripped off with copyright breaches and stolen songs are the people on the cutting edge of new. New music, new tech, new art – the small artists create things that you won’t find in Bed, Bath and Beyond, or on the Top 40. Cost these people money, and you’re destroying a complicated ecosystem of inspiration and passion-projects that the Top 40 is not looking to recreate. Layer Ariana Grande songs over each other, and you’ll discover patterns you didn’t notice before – patterns the producers definitely did notice, and they went down a checklist to get that song out and on the charts.

Small bands don’t have the same resources. When something sounds good, it’s because they made it sound good by themselves – you’re rewarding individual talent by not pirating. Tame Impala is essentially one man, Kevin Parker, and he didn’t have access to a recording studio for the first album. He wrote the songs himself. He mixed them himself. The same goes for Billie Eilish, and any number of other bedroom musicians (musicians who record their music in their bedroom). No disrespect to Ariana Grande, but she can’t make albums with the creative freedom that a bedroom band can. The people who invested in her can’t afford a flop, so she always gets breathy, poppy, peppy songs with high notes. It’s her strength, so it’s all she gets to release. She has creative input, but not a lot of control.

Pirating wouldn’t directly affect her unless everybody started pirating. It would take significantly less to accidentally crush something like early (early!!!) Tame Impala, or early Billie Eilish, and you might not hear anything like them ever again.

Don’t pirate the music if you want more of it!

Movies: More Serious

Movies are more serious to pirate. The theater runs on a tight margin to keep tickets cheap; this is why a cup of popcorn is six dollars – that’s where the operating cost goes. The ticket itself is mostly paying for the theater’s rental of the reel from the studio.

The studio puts its money towards making back the budget of the film, and if the film does well enough, there may be a sequel. Trolls, for example, did well enough for studios to invest in Trolls: World Tour. The same goes for Tenet, and for Sonic. They made enough money back that the studio wants to keep the gravy train running. Not all sequels are good – and some may say that money shouldn’t be running art – but the world we live in has these rules. More money = more creation. Many talented artists literally cannot afford to create art full-time if they aren’t being paid for it.

Pirating eats into that profit. One guy copies the file and sends it around, and a bunch of people see the pirated version on disc or as a download. They don’t want to spend money to see it again. Pirating takes the movie off the watchlist of hundreds or thousands without actually funding it. That wouldn’t have ruined Sonic or Tenet necessarily, but for an indie project, it can be devastating.

Pirating can happen at the theater too! You think you’re watching a legitimate copy of Fast and Furious 8, but the owner pirated it from a connection who got it early for review. That theater makes blockbuster movie money, and the studio sees none of it. Stuff like that is why the fines are so huge: that owner would gladly do it again for a $2,000 fine. Illegitimate rental places were also a real problem – BlockBuster franchises (and small locally-owned rental stores) making illegal copies of recent hits were a profit-killer.

And as small bands suffer more than big bands, so too do small movie studios. Some of the wildest, most creative movies ever pushed to the big screen come out of small studios. The group that made Coraline, for example, is relatively small compared to Disney or Pixar. Pirating a newly released movie en masse could seriously dampen their funding for the next movie even if it wouldn’t make a dent for Disney.

It’s cumulative. They won’t catch everyone who pirates… but they’ll get enough to be a deterrent. Good art comes from protecting the artists who made it!


Memory Terms

Elizabeth Technology March 7, 2024

The first Bit of Data

A bit is a single character in binary, and actually comes from shortening “Binary Digit”. A bit is the simplest possible data that the machine can read, and is either a 1, or a 0. A yes, or a no. True or false. The bit has been around for longer than computers, originating in punch cards in the 1700s for analog machines to “read”.


If you’ve recently upgraded to Windows 10, you may recall having to check whether your computer is 32-bit or 64-bit. The number describes how much memory the computer’s processor can address by its architecture – is it equipped to read up to 32 consecutive bits of data as an address, or 64? A 32-bit computer has fewer possible memory addresses in its CPU register – 2^32 of them, not much more than 4 GB’s worth – while a 64-bit computer can address 2^64 bytes, a theoretical 16 exabytes, far more than any machine actually installs. This doesn’t mean a 32-bit computer can only store 4 GB of data; it just means it can only keep 4 GB worth of addresses in play at once. The files themselves can be nearly any size as long as there’s storage available for them.
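Those ceilings fall straight out of the arithmetic – a quick Python check:

```python
# Number of distinct addresses each architecture can form, one byte per address.
addresses_32 = 2**32
addresses_64 = 2**64

print(f"32-bit: {addresses_32 / 2**30:.0f} GiB addressable")  # the famous 4 GB wall
print(f"64-bit: {addresses_64 / 2**60:.0f} EiB addressable")  # 16 exbibytes, in theory
```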

Then, a Byte

A byte is usually eight bits, in compliance with international standard – but it didn’t always have to be. It used to be as long as needed to show a character on screen, usually somewhere between two and ten bits, with exceptions down to one and up to forty-eight bits for certain characters. Eight-bit bytes became the standard because of their convenience for the new generation of microprocessors in the 70s: eight bits in binary give 256 possible combinations of ones and zeroes. Sixteen bits would give far more possibilities than needed and could slow the computer down, while four bits would only give sixteen – you’d have to combine phrases of bits anyway to cover even a basic character set.


Eight sounds like the perfect combination of length and possible complexity, at least with the benefit of hindsight. The government had struggled with incompatible systems across branches due to byte size before 8-bit came along. ASCII was the compromise, at seven bits per character, and when commercial microprocessors came along in the 1970s, they were forced to compromise again with ASCII Extended, so that commercial and government systems could communicate.

However, not all ASCII Extended versions contained the same additions, so Unicode was formed later to bridge the gaps between versions. Unicode, a character encoding standard that includes the ASCII character set within it, is built on eight-bit bytes, and it’s the most common character encoding out there. You’ll run into plain ASCII a lot, too – if you’ve ever opened an article and seen little boxes where characters should be, that’s because it was viewed with ASCII but written with a bigger library. ASCII doesn’t know what goes there, so it puts a blank!
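Python makes the byte-versus-character relationship easy to poke at. An ASCII character fits in one byte; anything outside ASCII needs several bytes in UTF-8, and has no ASCII encoding at all – which is exactly the “blank box” situation:

```python
# One byte = 8 bits = 256 possible values; ASCII only uses 7 of those bits.
assert 2**8 == 256

print(ord("A"))             # 65 -- fits comfortably in 7 bits
print("A".encode("utf-8"))  # b'A' -- one byte, identical to ASCII

# Characters outside ASCII need multiple bytes in UTF-8:
print("€".encode("utf-8"))  # three bytes

try:
    "€".encode("ascii")
except UnicodeEncodeError:
    # ASCII has no slot for this character -- an ASCII-only viewer
    # shows a blank box instead.
    print("no ASCII encoding for €")
```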


1,000 bytes of storage forms a kilobyte, or KB. This is the smallest unit of measure that the average computer user is likely to see written on their device – not much can be done with less than 1,000 bytes. The smallest document I can currently find on my device is an Excel file with two sheets and no equations in it; it takes up 9 KB. A downloadable “pen” for an art program on my device takes up 2 KB.

Computers before Windows had about 640 KB to work with, not including memory dedicated to essential operations.

The original Donkey Kong machines had approximately 20 kilobytes of content for the entire game.


A megabyte is 1 million bytes, or 1,000 kilobytes. Computers had made some progress post-relays, moving to hard disks for internal memory. IBM’s first computer containing a megabyte (or two) of storage, the System 355, was huge. It was also one of the first models to use disk drives, which read faster than tapes. In 1970, if users didn’t want a fridge, they could invest in the now desk-sized 3 million bytes on IBM’s model 165 computers, an improvement over GE’s 2.3 million bytes the year before – and the year before that, Univac had unveiled a new machine with separate cores tied together to give users between 14 and 58 megabytes of capacity in Byte Magazine, at the cost of space. IBM’s System 360 could reach up to 233 megabytes with auxiliary storage, but its size was…prohibitive, reminiscent of that first System 355.

Tapes and drums were competitive with the disk format for a while, but ultimately disk and solid state improved faster and won out (right now it’s looking more and more like SSDs, those solid state drives, will outcompete disks in the future too). During the 80s, the technology improved so much that hard disks became standard (IBM released a home computer with 10 MBs of storage in 1983) and floppy disks acted as media transport.

DOOM comes out in 1993 and takes up 2.39 MB for its downloadable file, with smaller, DLC-like packs of fan-created mods coming out along the way.


A gigabyte is 1 billion bytes, or 1,000 megabytes. In 1974, IBM releases a 20-foot-long beast of a storage system that stores up to 236 GB of data on magnetic tape. In 1980, IBM releases another fridge – but it stores up to a gigabyte of information! According to the Merriam-Webster dictionary, you can pronounce gigabyte as “jig-ga-bite”, which just… feels wrong.

In 2000, the first USB sticks (memory sticks, jump drives, etc…) are released to the public with 8 megabyte capacities, and they’re so convenient that floppy disk ports begin disappearing from computer designs in favor of USB ports. USB sticks then improve exponentially, and soon have capacities of one, two, and four Gigabytes while floppies struggle to keep up.

Besides being smaller and harder to break, those USB sticks also store more. Where the first USB sticks held 8 MB, the standard floppy disk at the time could only hold 1.44 MB. Knowing how small DOOM is, it would take two floppy disks to hold all of DOOM, but a USB stick only took one. By 2009, USB sticks with capacities of 256 GB were available on the market. That’s nearly 178,000 floppy disks.
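The floppy comparisons are just division, and worth checking:

```python
import math

FLOPPY_MB = 1.44   # standard 3.5" high-density floppy
DOOM_MB = 2.39     # DOOM's downloadable file

print(math.ceil(DOOM_MB / FLOPPY_MB))  # floppies needed for DOOM

stick_mb = 256 * 1000  # a 2009-era 256 GB USB stick, in MB
print(f"{stick_mb / FLOPPY_MB:,.0f} floppy disks in one stick")
```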


A terabyte is 1 trillion bytes, or 1,000 gigabytes. The first commercial drive with a capacity of one terabyte was sold in 2007 by Hitachi, a Japanese construction and electronics company. The movie Interstellar, released in 2014, featured a depiction of a black hole known as Gargantua – and became famous when it closely resembled the actual picture of a black hole later captured by the Event Horizon Telescope. A ring of light surrounds the black hole in two directions, one due to friction-heated material Gargantua has accumulated, one due to the lensing of light around it. The gravity is so intense that light itself is pulled into orbit around Gargantua’s event horizon and kept there. It took 800 terabytes to fully render the movie and make Gargantua somewhat accurate in terms of light-lensing.


A petabyte is 1 quadrillion bytes, or 1,000 terabytes. This is typically cluster storage, and while it’s available for purchase, it’s very expensive for the average consumer. For comparison, while rendering Interstellar took 800 terabytes, storing it at standard quality takes 1/200th of a terabyte. You could store approximately 200,000 DVD-quality copies of Interstellar on a petabyte. It took a little less than 5 petabytes of data to take the picture of the real black hole, M87.
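A quick sanity check on that arithmetic, using the 1/200th-of-a-terabyte-per-copy figure above:

```python
petabyte_tb = 1000   # 1 PB = 1,000 TB
copy_tb = 1 / 200    # one DVD-quality copy of Interstellar, i.e. ~5 GB

copies = petabyte_tb / copy_tb
print(f"{copies:,.0f} copies per petabyte")
```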


The World’s Most Specific Shirt

Elizabeth Technology February 29, 2024

You’ve probably seen some variation of the shirt.

You’re wondering how it’s so wildly specific. You click it, and scroll down, and somehow… somehow the company seems to have made shirts specifically for you, the boyfriend of a Registered Nurse who was born in June, who’s a little crazy with a heart of gold.

And then you notice on other channels, people are getting shirts that say ‘Never mess with a Union Welder born in November with Blue Eyes’ or ‘My Boyfriend is a Crazy Libra who loves Fishing and Mountain Biking’. Okay… it’s specific… but no harm, right?

What’s happening?

The Ads

First, some context. Facebook takes information like birth date, gender, likes and dislikes, etc. to hyper-tailor ads directly to specific individuals. On the advertiser’s side, Facebook allows their advertising customers to modify ads depending on group – companies can make multiple ads for their product to better build a brand image for any one customer’s specific demographic profile.

Picture a company that makes hair gel for adolescents as well as young adults. The adult is looking to impress their coworkers, but the kid just wants to prevent helmet hair. The gel does both, but the ad will change the target customer’s view of the product – is it for skateboarders, or is it for professionals? Only a super generic ad could appeal to both, and generic ads do much worse than targeted ones. Luckily, Facebook’s fine-tuned ad program can determine which set of ads a viewer should be seeing, so the company can make two ads: one for skateboarders, and one for young professionals.

However, that’s time consuming, so many ad vendors allow mix-n-match campaigns, where lines are taken from one ad and put in another. An adolescent’s ad would work for most teens if the wording was a little different – see Axe’s body spray ads. Sometimes the company doesn’t even have to make the new lines themselves, they just include a modifiable blank field in the ad space and they’re good to go.

That’s where things go sideways! A blank line in an insurance ad can tell the user they’ll be eligible for a rate as low as $X based on their age and gender. A blank line in a kennel ad knows they’re looking for a medium dog rather than a small cat based on their search history. A blank line in a T-shirt ad tells them that Facebook knows they’re a Gemini, an accountant, of Swedish descent, a regular fisher, an occasional beer-drinker, and more.
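Mechanically, the “world’s most specific shirt” is nothing more than string templating. A sketch of the idea – the profile fields here are hypothetical stand-ins for whatever demographic data the ad platform exposes:

```python
# The template is the only creative work the shirt seller does; the
# platform fills in the blanks per viewer. Field names are invented
# for illustration.
template = "Never mess with a {job} born in {month} with {eyes} Eyes"

profile = {"job": "Union Welder", "month": "November", "eyes": "Blue"}
shirt_text = template.format(**profile)
print(shirt_text)
```

One template, millions of “unique” shirts – which is why the same phrasing keeps showing up with different blanks filled in.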

Art and More

Even worse, bots that work on similar mechanisms have been caught scraping art from artists and slapping it on cheap T-shirts. Since copyright enforcement is dependent on the copyright owner filing for takedown, shirts with that artwork might get sold before the artist even knows something’s amiss. The shirts are frequently poor-quality rips directly from the artist’s social media account, triggered by comments requesting wearable merch or complimenting the work – the bot determines demand and then harvests it, without human intervention, just like the ad T-shirts.

Sure, the artist can request a takedown each and every time the bots snag their art, but it’s a slog, and the company itself never seems to actually do anything meaningful about the violations. It’s also bad for the artist’s reputation: fans complaining to them about the quality of a shirt they bought may be the first time the artist hears about the art theft, and then explaining to someone that they’ve been scammed is only going to make them angrier. It becomes “How could you let this happen” instead of “I’m sorry, I didn’t realize” – everyone loses except for the ad bot’s shirt company.

The ‘Why’

Before companies like ZapTee and CustomInk, getting a custom shirt meant going to a print shop and paying a hefty price for the final product. As such, shirt companies just didn’t make shirts like these ad bots do. It was unfeasible. If it didn’t sell, it was a waste of production. The closest you could get was “I’m a Proud Mom!” or “Rather be Fishin’”. If you were an artist, and your work was too fringe for major manufacturers to work with, you might have had to buy the screen-printing supplies yourself, build your own website or storefront, source blank shirts, and do things the hard way.

Now, all of that is easily outsourced to these printing companies that specialize in customizable products. The tech has improved so much that they can make money on single shirt sales, where before orders had to be in bulk. It’s honestly incredible. However, customers don’t necessarily understand the mechanisms behind these shirts. The specifics on the shirt are just blank space fill-ins, based on information Facebook gives to the ad. They think they’re seeing a unicorn out in the wild when they see something that relates to them. They’re thinking back to the times where companies couldn’t do this, where everything was geared towards two or three consumer profiles. “Wow, a shirt for Peruvians!” instead of “Oh, Facebook knows I’m Peruvian”.

Or in the case of the art-rippers, they see merch from an artist they really like and respect, and buy it without wondering if it’s official because – once again – they’re thinking back to a time when companies didn’t steal art (not officially, anyway) for shirts. Independent artists had to beg, barter, and network their way onto the front of a T-shirt, there wasn’t any other way to sell art-shirts en masse before silk-screen tech got cheap. Therefore, there’s no way unofficial or stolen art merch exists, it just doesn’t happen!

The Marketing

A company named Signal decided to take out ads mocking Facebook’s hyper-specific targeting by simply filling in a MadLib with demographic spots.

The result is, shockingly, just like the T-shirts! Facebook already knows you pretty well. A trend of ‘hyper-targeting’ took over once social media websites realized that people guard their info from companies but share it willingly with friends, publicly. As a result, it can pinpoint things like your favorite movie, your favorite color, what items you’ve bought online (and post about), your perfect vacation, and how dark you like your coffee, to name a few, all harvested from comments and posts you share with your friends. Ads then generate shirts out of what the site gathers. You can turn off targeted advertising in Google, but that doesn’t mean they’re not gathering information. It just means you’re not seeing the direct results of that. The only way to fight the hyper-targeting is to be vague and lie to the platforms, or stay off of them altogether.

If you or an artist you know gets their work ripped by bots, combatting it is unfortunately pretty difficult. The best you can do is sometimes just cave and make your own branded products via something like RedBubble or FanJoy. Give customers an official way to support their favorite artist, and most of the time, they’ll take it! Making your social media work obnoxiously and obviously watermarked helps, as does making the preview pic low-quality. Fans need to know that you have official channels, and if they buy from anywhere else, they’re not supporting you. If they like it so much that they want to wear it, they should want the artist to keep making more of it! Make that link between your official purchasing channels and their support of your work clear.


Optical Memory

Elizabeth Technology January 30, 2024

Optical storage is defined by IBM as any storage medium that uses a laser to read and write information. The use of lasers means that more information can be packed into a smaller space than magnetic tape could manage (at the time)! Better quality and longer media time are natural results. A laser burns information into the surface of the media, and then a less powerful reading laser deciphers the burnt areas into usable data. The recording surface is usually some sort of easily burnt metal or dye sandwiched between protective layers of plastic; burning produces ‘pits’, or less reflective areas, for the laser to read.

This is why fingerprints and scratches can pose such a problem for reading data; even though you aren’t damaging the actual data storage, like you would be if you scratched a hard drive disk, fingerprints prevent the laser from being able to read the data. Scratch up the plastic layer above the dye, and the data’s as good as destroyed.

Destroying data can be even more complete than that, even. Shredding the disc in a capable paper shredder (ONLY IF IT SAYS IT CAN SHRED DISCS) destroys the data, as does microwaving the disc (don’t do that – most discs contain some amount of metal, and that can damage your microwave badly enough to be dangerous).


“Burning a CD” replaced “making a mix tape” when both CDs and downloadable music were available to teenagers, and for good reason. The amount of content may be roughly the same, but the quality is significantly higher.

Most burnable CDs are CD-Rs – discs that can only be written on once but can be read until the end of time. (Commercially pressed CD-ROMs look similar, but their pits are stamped in at the factory rather than burnt.) The average CD-R has room for about an album’s worth of music, and maybe a hidden track or two – about 75-80 minutes, depending on the manufacturer of the disc. Alternatively, if you’d like to store data instead of high-quality audio, you’ll get about 700 MB onto a single disc.
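The 80-minute and 700 MB figures describe the same physical disc: audio sectors skip a layer of error correction that data sectors carry, so an audio sector holds 2,352 usable bytes where a data sector holds 2,048. The arithmetic, using the published Red Book rate of 75 sectors per second:

```python
SECTORS_PER_SECOND = 75        # Red Book audio CD standard
seconds = 80 * 60              # an 80-minute disc

audio_bytes = seconds * SECTORS_PER_SECOND * 2352  # audio-mode sectors
data_bytes = seconds * SECTORS_PER_SECOND * 2048   # data-mode sectors

print(f"Audio: {audio_bytes / 1e6:.0f} MB")  # raw audio on the disc
print(f"Data:  {data_bytes / 1e6:.0f} MB")   # the familiar '~700 MB'
```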

To burn a CD, you’d need an optical drive that’s capable of also lasering information into the disc, which wasn’t always the standard. The laser will burn the information into the metal-dye mix behind the plastic coating the outside of the disc, which permanently changes how reflective those sections are. This makes it possible to visually tell what has and hasn’t been used on a disc yet, and CD-Rs can be burnt in multiple sessions! Data is typically burnt from the center outwards.

But everybody knows about CD-Rs. What about CD-RWs, their much fussier brethren?


The primary difference between a CD-R and a CD-RW is the dye used in the layers that the optical drive reads. CD-RWs are burnt less deeply than CD-Rs and reflect less light, so they need a more sensitive reader – early disc readers sometimes can’t read more modern CD-RWs as a result!

To reuse the disc, one has to blank it first (the same drive that can write a CD-RW in the first place should also be able to blank it), which takes time. After it’s been wiped, new data can be put onto the disc again. CD-RWs wear out quicker than other memory media as a result of their medium. That wafer-thin dye layer can only handle being rearranged so many times before it loses the ability to actually hold the data. It’s pretty unlikely that the average user could hit that re-write limit, but it’s more possible than, say, a hard drive, which has a re-write life about 100 times longer than the re-write life of a CD-RW.


DVDs store significantly more data than CDs do, even though they take up about the same space. Where a CD can hold about 700 MB, a DVD can hold up to 4.7 GB. This is enough for most movies, but if the movie is especially long or has a lot of extra features, it has to be double-layered, which brings the capacity up to about 8.5 GB. Why can it hold so much more in the same space?

The long answer is that there are a number of small differences that ultimately lead to a DVD having more burnable space, including a closer ‘laser spiral’ (the track a laser burns, like the grooves in a vinyl record), as well as smaller readable pockets. It all adds up into more data storage, but a more expensive product as well.
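The two biggest factors can be quantified from the published disc specs: the spiral’s track pitch shrinks from 1.6 µm on a CD to 0.74 µm on a DVD, and the minimum pit length from 0.83 µm to 0.40 µm. Multiplying the two ratios gives most of the capacity jump:

```python
# Published spiral and pit dimensions, in micrometers:
cd_track_pitch, dvd_track_pitch = 1.6, 0.74
cd_min_pit, dvd_min_pit = 0.83, 0.40

density_gain = (cd_track_pitch / dvd_track_pitch) * (cd_min_pit / dvd_min_pit)
print(f"~{density_gain:.1f}x more pits in the same area")

# A more efficient sector layout and error-correction scheme covers the
# rest of the jump from ~700 MB to 4.7 GB.
```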


That double-layering mentioned earlier isn’t present on every disc. Sometime in the later 2000s, double layer discs hit the market at about the same price as single layer discs (although that changed over time). The first layer that the laser can read is made of a semi-transparent dye, so the laser can penetrate it to reach the other layer.

Most modern DVD drives can read dual layer, but if your computer is especially old, it would be wise to check its specs first – DVD readers programmed before their release might not understand the second layer, and readers that can read them might not be able to write to them. DLs are a great invention, it’s just a struggle to find good disc readers when everything is switching to digital.


CD players aren’t usually also able to play DVDs. CDs came first, and the reader would have to be forwards compatible. Obviously, this would have taken a time machine to actually assemble. Picture expecting a record player to read a CD! The gap between the two is almost that large. Nowadays, the manufacturing standard seems to be a DVD player with CD compatibility tacked on. You should double check before you buy a disc reader to be sure it can do everything you want it to, but it’s less common to see CD-Only tech when a DVD reader is only slightly more expensive to create, and can work backwards.

FlexPlay Self-Destructing Entertainment

Remember FlexPlay self-destructing entertainment? The disc that was meant to simulate a rental and could have generated literal tons of trash per family, per year? The self-destructing medium that the disc was coated in turned very dark red to thwart the disc reader’s lasers! The pits aren’t directly on the surface of the DVD, they’re under a couple of layers of plastic. All FlexPlay had to do was sandwich an additional layer of dye between the plastic and the metal/dye that’s being inscribed upon. When that dye obscures the data below it, it’s as good as gone! The laser can no longer get through to the information and read it. Even Blu-Ray tech was thwarted by the dye.


Blu-Ray discs have higher visual quality than DVDs because they hold even more information. Blue-laser technology enables the pits to be even closer together, so more optical data can be crammed into the same space: blue light has a shorter wavelength than red light, which shrinks the necessary pit size! A single-layer Blu-Ray disc can hold up to 25 GB of information. Blu-Ray discs are most commonly used for entertainment media rather than storage. Disc readers have to be specifically built with that blue laser, rather than just programmed for it – an ordinary DVD player may be able to play a CD, but it wouldn’t be able to fully read a pit in a Blu-Ray disc before that pit’s passed the reader.
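The wavelength argument can be put in numbers. A drive’s readable spot size scales with wavelength divided by the lens’s numerical aperture (NA), and areal density scales with one over the spot size squared. Using the published specs (DVD: 650 nm at NA 0.60; Blu-ray: 405 nm at NA 0.85):

```python
# Spot size ~ wavelength / NA; density ~ 1 / spot^2.
dvd_spot = 650 / 0.60   # red laser, wider lens spot
bd_spot = 405 / 0.85    # blue-violet laser, tighter spot

density_gain = (dvd_spot / bd_spot) ** 2
print(f"~{density_gain:.1f}x denser than DVD")
print(f"Predicted capacity: ~{4.7 * density_gain:.0f} GB per layer")
```

The prediction lands right next to the actual 25 GB single-layer spec, which is a nice check that the pit-size explanation really is the whole story.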

Right now, the state of the art is Blu-Ray: most good Blu-Ray readers are backwards compatible with DVDs and CDs. However, many companies still sell ordinary DVDs alongside their Blu-ray releases due to cost. If you have a DVD player, you can probably hold off on upgrading, at least for a little while longer.