What’s the deal with Google.amp links?

Google And Fast Loading

If a mobile site takes even a second too long to load, users navigate away. This is a well-studied phenomenon, and all companies can do is try to optimize loading so the user gets some feedback before they bounce.

Facebook created Instant Articles, a format that was easier to read and faster to load than the old method of simply copying and pasting a link to your wall, which worked fine on desktop and not so well on mobile. Ads, videos, and assorted other tidbits really slow loading down on mobile devices, even on WiFi. Consumers agreed via engagement: Instant Articles took off. After all, who likes autoplay videos? Google saw a fantastic channel for improving loading times, pictured how it could be monetized, and assembled the Accelerated Mobile Pages project, or AMP for short, introducing Google.amp links. You search something on mobile, you find it, and instead of being taken directly to the site, you’re taken to a Google.amp page that serves an optimized version of the site.

How does it work?

How does .amp make things load faster? Well, firstly, dynamic content doesn’t show up. Everything on that .amp version of the page is as simple and easy-to-load as possible.

That means if you’ve mistagged a menu, the consumer might not be able to see it. The same goes for embedded videos and music clips. If your site is really reliant on those things being present to function, allowing .amp links is a bad move!

Secondly, the website is stripped down to its bare bones: website creators are given a small selection of tags to build out their pages, which usually results in something plain, but quick-loading. If the website is really, really insistent on keeping all of its content, .amp links are unfortunately unable to help. .amp pages are a trade-off!
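
To make the trade-off concrete, here’s a toy sketch of the whitelist idea in Python – not the real AMP validator, and the allowed-tag list here is invented and far shorter than AMP’s actual one:

```python
# Toy illustration of AMP's core idea: a short whitelist of permitted tags,
# with everything dynamic or heavy flagged for removal. This is NOT the real
# AMP validator, just a sketch of the concept.
from html.parser import HTMLParser

ALLOWED = {"html", "head", "body", "h1", "h2", "p", "a", "ul", "li", "amp-img"}

class TagAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rejected = set()

    def handle_starttag(self, tag, attrs):
        # Anything outside the whitelist would be stripped or replaced.
        if tag not in ALLOWED:
            self.rejected.add(tag)

page = "<html><body><h1>Post</h1><video src='clip.mp4'></video><script>track()</script></body></html>"
audit = TagAudit()
audit.feed(page)
print("Stripped by the whitelist:", audit.rejected)  # {'video', 'script'}
```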

And Results

It makes some websites downright ugly. People using .amp links have very limited tags in their toolbox, so the end websites almost always look really similar. Sometimes that’s a good thing, sometimes it’s bad. After all, if you, as a business owner, spent however many hours going back and forth with a designer (or designing a site yourself) only to have to cut most of it when signing up for those .amp links, you might be a little mad, right? Menus, color options, images – if all of it goes missing, it may as well be written in plaintext. One critic complains that this makes it easier for fake news and disinformation to squeak into the regular news stream, because when all pages look the same, all pages receive the same quality assessment from readers who don’t know better, whether they deserve it or not.

.amp links can negatively impact search rankings and deprive the client website of valuable data, as well. People see the page via Google, not the host’s website. As a result, the brand gets out there and impressions improve, but the website itself can’t track that data as effectively. If you’re trying to navigate the complicated world of SEO, that’s a major issue.

It also has the potential to limit ad revenue. If an ad takes too long to load, the end user never actually sees it. Most Google ads pay per click – customers clicking or tapping the ad is the only way the website gets money from them. As a result, unloaded ads mean lost potential revenue.

Good Results?

However, the ability to load the website so quickly is often worth it to small business owners. Customers are impatient and often expect instant feedback – with Google.amp links, owners can provide that instant feedback, usually more cheaply than other speed-up options like redesigning the site or removing certain content features.

Besides, many users actually like the lack of ads. The mobile web is riddled with annoying popups and other assorted garbage that makes sketchy websites even more of a chore to navigate. Of course customers are going to pick a .amp page if it means not having to struggle with jerky, autoloading videos and jumpy ads. Not to mention that stripping out scripts also closes off some windows for malware!

Google and… Data

It’s not a secret anymore: Google is always gathering data. It knows what device you’re using, it has some understanding of who you are as a person, and it’s using that information to build ads that people like you are more likely to click on.

Google primarily started the .amp project as a way to compete with other data hogs like Facebook and Messenger. Why? Data, valuable data. You clicked on X? We’ll show you more articles about X! You clicked on a fashion article? Why, we just so happen to have ads from Calvin Klein’s newest collection.

Now, sometimes this is good – many people find new and interesting things via algorithms. Sometimes it’s bad for the consumer, like when they get ad after ad about dog food, despite not having a dog, because they clicked an article about dogs. And sometimes it’s bad for society at large, as when conspiracy theorists get more and more misinformation funneled to them via the algorithm. Nothing tells Google to stop. Once you start down a path, it takes some serious effort to get algorithmically pushed content out of your feeds.

.amp links are obviously not the only things tracking you. Anything with Google anywhere in it is tracking you. AdSense is tracking you. But .amp links are part of the problem, and Google gets your info before it ever filters down to the actual website’s owner.

Turn It Off!

While turning off customized ads won’t stop the data collection, it will mean you’re less likely to see oddly specific, creepily accurate ads when you’re just trying to browse. As for the .amp links, turn those off too. .amp links give a lot of power to Google, and some of the information you generate during normal browsing may very well be sucked up by Google.

Google’s ad settings page lets you control how you’re seeing ads.

Sources:

https://www.discovertec.com/blog/amp-speed-page-the-good-and-bad-of-faster-load-times

https://www.theverge.com/2019/4/16/18402628/google-amp-url-problem-signed-exchange-original-chrome-cloudflare

Small Sites Vs. A Big Internet

Art projects

Some little art project websites deliberately avoid indexing their pages, so they’re well-hidden from traffic. Web development classes, modern art classes, and all sorts of other classes will ask students to make something online. They don’t necessarily want those websites getting shared outside of the class. Keeping a page un-indexed makes it much harder to stumble upon, but it’s not a perfect cure – people with the direct link can still post it elsewhere. If a classmate retains the link after leaving the class, remembers how cool it was, and it ends up on Reddit… suddenly it’s a curse, especially if identifying information like names is left on-site.

Websites made as a joke in the first place can turn into a curse too! Youtuber Drew Gooden’s “Hot Dog” website was made as part of an advertising campaign for Wix, but it’s unclear if he actually wants to maintain it. It’s still in his ownership today. This is a unique problem to have! It may cause him more issues to close the site, now that its address has been immortalized in videos. Besides, fans have come to expect the website to function, they’ve bookmarked it, and they’re demanding that their entertainer dance.

Real Retail Hours

Tiny DIY shopping websites sometimes get cratered by the ‘hug of death’ (more on that below), especially if they accidentally go ‘viral’. Look at TikTok advertisers, for example: anyone can post, and because of the app’s algorithm, it’s possible for a creator with no followers to suddenly end up with 100,000+ views on a particularly entertaining video. No ad dollars were spent; the creator was just super funny that day, and it spread. This is great! Until their traffic jumps from an expected 500 visitors a day to 20,000 a day, because their product has gotten much more reach than they could have prepared for. Sellouts are inevitable, and so are frustrated users.

In fact, a broken or slow website will even push away people who did get to make a purchase. Unpleasant shopping experiences steer consumers away from online retailers at a horrifying rate! The same goes for lag – mobile users are unwilling to wait for an item they don’t really want, or don’t really need from that specific store. I could get a hat anywhere, for instance – why should I wait five seconds to get through to a store on mobile when I could go to a different store? Obviously it’s not that simple, but big websites have resources that little ones don’t, and especially wishy-washy buyers will be put off by the difference in experience.

There are ways to handle this, but unfortunately many businesses don’t have the chance to prepare.

Welcome, But…

And then there are websites that are really hoping for growth, and it suddenly happens. It’s rare to have a site blow up overnight – most repeat visits are the result of hard work and consistent effort to capture the visitor’s attention. Unfortunately, in this era of social media, it’s very easy to accidentally blow a website out of the water. Yay, Growth! turns into Oh No, They Aren’t Stopping. The server for the website crashes, and a lot of potential viewers are shut out. If the website’s lucky, the interested folks will bookmark the page and come back, so the load is better distributed the second time around.

Some websites go offline a few hours after Reddit’s discovered them, to recover. The ‘hug of death’ is a well-known phenomenon – nobody’s DDoSing the website on purpose!

Lonesome Town

Single-person websites are often hoping not to be discovered by somewhere huge. Think about it: if they haven’t paid for advertising, if they don’t get revenue from hosting ads, and if they don’t sell anything on their site, then they don’t make money from page views. They’re probably not looking for a giant spike in page views out of nowhere, with some exceptions like ‘public service’ projects made by civilians, or ARGs.

Tiny websites and tiny forums alike struggle to handle being “discovered” by websites like Reddit, Digg, or Youtube. Famously, a Buffy the Vampire Slayer superfan’s website (which I’m deliberately not linking here) was crashed by new visitors after forums made it a spectacle. Sure, the superfan posted a lot – as is their right. The information they posted helped other fans find information about meetups and appearances by the actors. The flood of people showing up on other social media to comment on and harass the single poster was unfortunate, and it could have been avoided if people hadn’t dogpiled. Even deeper, maybe people wouldn’t have dogpiled if the website hadn’t shown up on blogs. The sole commentator, maintainer, and moderator made the website private after people showed up to screw around.

Similarly, small sites get flooded when a big site ‘discovers’ them, and then suffer from community collapse and site breakdown. A forum with 200 or so regular posters isn’t going to be able to moderate new conversations from other, bigger sites – and even worse, newcomers who might have been interested in the topic get the idea that the website’s a total dumpster fire when it’s just understaffed. These sites want traffic, yeah, but they want the right kind of traffic. Well-intentioned traffic. On-topic traffic. If a community behaves itself, there’s no reason to have a team of 20 moderators. People showing up to flame the forum are going to stretch resources thin.

Don’t go spreading news about some wacky website on big forums without getting to know the site first. The consequences may be greater than you could imagine!

Sources:

https://queue-it.com/blog/how-high-online-traffic-can-crash-your-website/

https://www.siteuptime.com/blog/2019/09/26/the-top-8-reasons-behind-a-website-crash/

https://www.inmotionhosting.com/blog/my-website-crashed-now-what/

Wildly Specific T-Shirts: Why?

You’ve probably seen some variation of the shirt.

You’re wondering how it’s so wildly specific. You click it, and scroll down, and somehow… somehow the company seems to have made shirts specifically for you, the boyfriend of a Registered Nurse who was born in June, who’s a little crazy with a heart of gold.

And then you notice on other channels, people are getting shirts that say ‘Never mess with a Union Welder born in November with Blue Eyes’ or ‘My Boyfriend is a Crazy Libra who loves Fishing and Mountain Biking’. Okay… it’s specific… but no harm, right?

What’s happening?

The Ads

First, some context. Facebook takes information like birth date, gender, likes and dislikes, etc. to hyper-tailor ads directly to specific individuals. On the advertiser’s side, Facebook allows their advertising customers to modify ads depending on group – companies can make multiple ads for their product to better build a brand image for any one customer’s specific demographic profile.

Picture a company that makes hair gel for adolescents as well as young adults, for example. The adult is looking to impress their coworkers, but the kid just wants to prevent helmet hair. The gel does both, but the ad will change the target customer’s view of the product – is it for skateboarders, or is it for professionals? Only a super generic ad could appeal to both, and generic ads do much worse than targeted ones. Luckily, Facebook’s fine-tuned ad program can determine which set of ads the viewer should be seeing, and the company can make two ads: one for skateboarders, and one for young professionals.

However, that’s time consuming, so many ad vendors allow mix-n-match campaigns, where lines are taken from one ad and put in another. An adolescent’s ad would work for most teens if the wording was a little different – see Axe’s body spray ads. Sometimes the company doesn’t even have to write the new lines themselves; they just include a modifiable blank field in the ad space and they’re good to go.

That’s where things go sideways! A blank line in an insurance ad can tell the user that they’ll be eligible for a rate as low as $X based on their age and gender. A blank line in a kennel ad knows they’re looking for a medium dog over a small cat based on their search history. A blank line in a T-shirt ad tells them that Facebook knows they’re a Gemini, an accountant, of Swedish descent, a regular fisher, an occasional beer-drinker, and more.
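
Mechanically, the ‘wildly specific’ shirt is nothing more than string substitution. A toy sketch, with field names invented for illustration (this is not Facebook’s actual ad API):

```python
# The "wildly specific" shirt is just a template plus profile fields the ad
# network already holds. These field names are made up for illustration.
profile = {"sign": "Gemini", "job": "accountant", "hobby": "fishing"}

template = "Never mess with a {sign} {job} who loves {hobby}"
print(template.format(**profile))
# -> Never mess with a Gemini accountant who loves fishing
```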

Art and More

Even worse, bots that work on similar mechanisms have been caught scraping art from artists and slapping it on cheap T-shirts. Since copyright enforcement is dependent on the copyright owner filing for takedown, shirts with that artwork might get sold before the artist even knows something’s amiss. The shirts are frequently poor-quality rips directly from the artist’s social media account, triggered by comments requesting wearable merch or complimenting the work – the bot determines demand and then harvests it, without human intervention, just like the ad T-shirts.

Sure, the artist can request a takedown each and every time the bots snag their art, but it’s a slog, and the company itself never seems to actually do anything meaningful about the violations. It’s also bad for the artist’s reputation: fans complaining to them about the quality of a shirt they bought may be the first time the artist hears about the art theft, and then explaining to someone that they’ve been scammed is only going to make them angrier. It becomes “How could you let this happen” instead of “I’m sorry, I didn’t realize” – everyone loses except for the ad bot’s shirt company.

The ‘Why’

Before companies like ZapTee and CustomInk, getting a custom shirt meant going to a print shop and paying a hefty price for the final product. As such, shirt companies just didn’t make shirts like these ad bots do. It was unfeasible. If it didn’t sell, it was a waste of production. The closest you could get was “I’m a Proud Mom!” or “Rather be Fishin’”. If you were an artist, and your work was too fringe for major manufacturers to work with, you might have had to buy the screen-printing supplies yourself, build your own website or storefront, source blank shirts, and do things the hard way.

Now, all of that is easily outsourced to these printing companies that specialize in customizable products. The tech has improved so much that they can make money on single shirt sales, where before orders had to be in bulk. It’s honestly incredible. However, customers don’t necessarily understand the mechanisms behind these shirts. The specifics on the shirt are just blank-space fill-ins, based on information Facebook gives to the ad. They think they’re seeing a unicorn out in the wild when they see something that relates to them. They’re thinking back to the times when companies couldn’t do this, when everything was geared towards two or three consumer profiles. “Wow, a shirt for Peruvians!” instead of “Oh, Facebook knows I’m Peruvian”.

Or in the case of the art-rippers, they see merch from an artist they really like and respect, and buy it without wondering if it’s official because – once again – they’re thinking back to a time when companies didn’t steal art (not officially, anyway) for shirts. Independent artists had to beg, barter, and network their way onto the front of a T-shirt, there wasn’t any other way to sell art-shirts en masse before silk-screen tech got cheap. Therefore, there’s no way unofficial or stolen art merch exists, it just doesn’t happen!

The Marketing

A company named Signal decided to take out ads mocking Facebook’s hyper-specific targeting by simply filling in a MadLib with demographic spots.

The result is, shockingly, just like the T-shirts! Facebook already knows you pretty well. A trend of ‘hyper-targeting’ took over once social media websites realized that people guard their info from companies but share it willingly with friends, publicly. As a result, it can pinpoint things like your favorite movie, your favorite color, what items you’ve bought online (and post about), your perfect vacation, and how dark you like your coffee, to name a few, all harvested from comments and posts you share with your friends. Ads then generate shirts out of what the site gathers. You can turn off targeted advertising in Google, but that doesn’t mean they’re not gathering information. It just means you’re not seeing the direct results of that. The only way to fight the hyper-targeting is to be vague and lie to the platforms, or stay off of them altogether.

If you or an artist you know gets their work ripped by bots, combatting it is unfortunately pretty difficult. The best you can do is sometimes just cave and make your own branded products via something like RedBubble or FanJoy. Give customers an official way to support their favorite artist, and most of the time, they’ll take it! Making your social media work obnoxiously and obviously watermarked helps, as does making the preview pic low-quality. Fans need to know that you have official channels, and if they buy from anywhere else, they’re not supporting you. If they like it so much that they want to wear it, they should want the artist to keep making more of it! Make that link between your official purchasing channels and their support of your work clear.

Sources:

Reddit.com/r/TargetedShirts

https://www.vox.com/2018/4/11/17177842/facebook-advertising-ads-explained-mark-zuckerberg

https://www.bbc.com/news/technology-50817561

https://www.chowdaheadz.com/products/lunatic-million-words-t-shirt

https://thehustle.co/who-makes-those-insanely-specific-t-shirts-on-the-internet/

Bitcoin’s Dip is Affecting GPU Prices

Cryptocurrency affects the price of hardware IRL now. There’s an entire legion of computers that spend their whole lives solving hashes and producing rewards for their owners. So when the reward crashes a little, the market reacts strangely. Some people buy, because Bitcoin always bounces back, and some people sell, because Bitcoin might not this time. On top of that, China has renewed its ban on parts of crypto trading!

Bitcoin Crash

Bitcoin has nearly halved in value over the past few months. The ‘why’ is everything from a general decline in the stock market, to celebrities tweeting about Bitcoin’s fall, to other cryptocurrencies establishing themselves on the market. It’s truly wild how many different things come into play for an untethered asset’s price, but Bitcoin enthusiasts remain as optimistic as ever that Bitcoin will return better than ever. It did in the 2010s. It did after the first crash. Surely it will this time, too!

Like I said, many things, some material, some not, affect Bitcoin’s price. As such, many businesses and countries are becoming increasingly skeptical of it. Receiving a fraction of a Bitcoin for $700 of repairs, only to have it drop to $300 in value? Too bad! The business is forced to ride waves of inflation and deflation until they can use those coins at their desired value or trade them for real money. This will eventually stabilize the price, but until then, the leaps and drops are bad for businesses. Imagine getting a cash payment, only to have to hold onto it until its worth recovers enough for you to deposit it in savings or use it elsewhere – your business operations could come to a halt while you wait for your liquid cash to replenish itself. Bartering would be safer at that point.

Governments see many issues with this system, and understandably, a country like China can’t afford to have business owners upset in a time of serious unrest. Plus, taxes! Bitcoin was created primarily to avoid third parties, and no third parties means difficult-to-collect taxes.

Confounding Factors

The epicenter of the cheapening GPUs is China, although Europe is also seeing some major dips in the resellers’ market. But why? China’s partial ban on trading or accepting Bitcoin has put a serious damper on consumers’ desire to mine for it. It’s not illegal to own Bitcoin, but when transactions to convert that Bitcoin to ‘real money’ are stifled, what’s the point? Miners have no promise of when, or if, the Chinese government will lift the restrictions.

Aside from what officials call ‘speculation risk’, which is what I’ve described in the section above, certain regions of China are also trying to limit energy consumption, and Bitcoin’s heavy consumption makes it an easy target. Mining Bitcoin involves a lot of complicated math, and it’s math that has to be done fast. Only the first miner to solve the current block gets any reward, so it’s a constant race to make the computer better and faster. Better computers eat more energy. GPUs, the common bottleneck part, got siphoned up by Bitcoin miners everywhere.
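
At its heart, that ‘complicated math’ is a brute-force hash race. Here’s a toy proof-of-work loop in Python – real Bitcoin mining hashes block headers with double SHA-256 at vastly higher difficulty, but the shape of the work is the same:

```python
# Toy proof-of-work: find a nonce whose SHA-256 digest starts with N zeros.
# Each extra zero of difficulty multiplies the expected work by 16, which is
# why miners chase faster hardware and burn so much energy.
import hashlib

def mine(block_data: str, difficulty: int = 5) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

print(mine("example block"))  # whoever finds a valid nonce first gets the reward
```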

Now, China has fewer Bitcoin miners looking to upgrade immediately, but Bitcoin’s low price is also convincing folks in other countries that upgrades can wait a bit. Europe’s slowly improving prices are a good sign – maybe the US will finally get some GPUs in! Right?

The Market

Turns out, demand doesn’t always behave as expected! Official reports say that the prices of graphics cards are falling, while many people have also noticed prices going up even on ‘ancient’, less powerful cards on eBay. Is it just a failure of the buyer/seller market to catch on to the news? Is it a sign of an incoming rebound? Or could it be because the shortage in the US hasn’t actually been resolved, in spite of less demand from overseas? With international shipping in such disarray, a dip in China and Europe doesn’t have to mean a dip in the US!

As for the future, who knows? Cards might go down. They might also go up. GPUs are expensive to make and buy ordinarily, and given perfect conditions, a new one could still be worth a thousand-plus dollars. It’s difficult to say what exactly waits for the gamers and workers hoping for GPUs to come down in price, although market watchers like Tom’s Hardware can establish patterns based on the past.

Will Bitcoin go back up? It’s very hard to tell, given the nature of cryptocurrency and what we’ve seen from it so far. Sometimes a coin drops and crashes so hard it may as well have died – Bitcoin once had a dip so severe, down to the high four digits, that people doubted it would ever come back up. Is this downwards trend permanent? Will China’s ban influence the end results? I have no idea! Experts in similar fields can’t tell either; crypto is a wild, wild West compared to stocks. What they do say is generally along the lines of ‘we can’t tell, but it could dip very badly’. It’s akin to gambling.

If it does recover, European card prices will almost certainly follow, although the depressed prices in China likely won’t until restrictions are lifted. American cards, having shown no sign of going down in price despite a clear dip in Bitcoin’s value, may not be as tied to crypto-mining as they formerly were, so Bitcoin’s movement may have no impact. America is a big country with a lot of people in it, so ordinary demand for the currently out-of-stock GPUs may be holding prices high all by itself.

Sources:

https://www.scmp.com/tech/tech-trends/article/3137128/chinas-bitcoin-crackdown-fourth-largest-bitcoin-producing-province

https://www.scmp.com/tech/policy/article/3138130/bitcoin-crackdown-sends-graphics-cards-prices-plummeting-china-after

https://www.tomshardware.com/news/gpu-pricing-index

https://www.tomshardware.com/reviews/best-gpus,4380.html

https://www.reuters.com/technology/chinese-financial-payment-bodies-barred-cryptocurrency-business-2021-05-18/

Portable Phones: A Brief History and Selection

The smartphone of today has a lot of ancestors, and some of them were pretty bulky.

Translated to Modern Times

The first couple of portable phones were… weird.

The brick phone: the brick phone’s pretty well-known! It was sometimes used in movies to demonstrate how high-tech the star’s organization was. Portable phones were huge for a long time, so a brick phone being slightly less huge than the suitcase phone or the car phone was cool. Motorola’s DynaTAC was the most popular version when it launched in 1984, and it took 10 hours to charge enough for 30 minutes of talk time.

The car phone: it was new! It was innovative! It was stylish! Car phones were installed primarily in luxury cars, as the phone itself was both expensive and heavy, and relied on the car’s power to actually run. This came before the brick phone, ran concurrently with it for quite a while, and gradually faded out of manufacturing as mobile technology got better and better.

The suitcase phone: the suitcase phone carried all of its important bits and bobs in a suitcase, discreetly, so the caller didn’t have to hold the weight of the battery up to their ear to talk. The original case for the suitcase phone was kind of ugly, but having the tech at the time made it worth it.

Pay Phones: This wasn’t technically mobile, but it was possible to find one out and about in public, and that was usually good enough. Most pay phones even in earlier days took payment first and then would place the call, an artifact of the quick-call stations and cashiers that they replaced.

Slides and Other Moves

Manufacturers got pretty wild when designing devices for children in the 2000s.

The Razr phone opened with one screen pivoting away from the other on a joint, so the top screen was upside down when it was closed. The keyboard inside it was tragically small – texting was even more of a hassle, and the screen was tiny. It was also sometimes awkward to hold while on a call, but the unique design meant it took up very little pocket real estate, so it all evened out. Not a good phone for games or texting, but fine for the kids it was marketed to.

If you were fancy, you’d get a slider phone: the top screen would slide upwards to reveal a full keyboard with tiny, tiny little keys underneath. Back in this era, the screen being exposed was considered a real risk: what if your keys or coin change scratched it? Then what? If only they could have seen what we’d have now. These phones were a little more expensive than other phones on the market, but they made texting a little easier. Txt speak was still more efficient because the keys were so tiny it sometimes took the edge of your nail to actually press them, but it was the thought that counted.

And if your parents had business obligations that sometimes meant emailing from their phone, you might have had access to a hand-me-down BlackBerry: all the buttons were on the same plane as the screen. It also had the crude beginnings of a ‘free roam’ cursor for phones – a trackball that behaved the same way arrow keys did, but faster.

Flips N Such

The flip phone I had also had the 9-button keypad: it was designed for calling first and texting second. It could open a browser… kinda… if all you wanted to see was broken graphic boxes and white squares. Internet access over a 3G-or-below network was slow and expensive, so you weren’t exactly meant to read the news on it. The ringtone store also didn’t work all of the time. It was great. An ordinary flip phone was difficult to break and easy to answer calls on, so it was perfect for kids of the time. You could play games on it, but not very many, and you could text, but only slowly. It wasn’t an active distraction.

However, that slow texting could get annoying when the phone was allowed out. That’s the source of modern text speak: if you wanted to type out Good Morning, for example, you’d hit the 6 button a total of 14 times to get all the letters stored on that key. Shortening to Gd Mrng only made sense – now the 6 key takes only 3 taps, a much better ratio. To compare it to telegramming, the fewer taps there are, the faster the message gets out. Only using the number was often faster, too: “Before” takes a total of 15 keystrokes, while B4 takes three. Three.
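
If you want to check the tap math yourself, here’s a rough counter in Python, assuming the standard letter layout (2=ABC through 9=WXYZ) and charging one press for anything that isn’t a letter:

```python
# Rough keystroke counter for old multi-tap keypads. Assumes the standard
# layout and charges one press for spaces, digits, and punctuation.
KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
TAPS = {ch: i + 1 for letters in KEYS.values() for i, ch in enumerate(letters)}

def taps(message: str) -> int:
    return sum(TAPS.get(ch, 1) for ch in message.lower())

print(taps("before"), taps("b4"))            # 15 vs 3 presses
print(taps("good morning"), taps("gd mrng")) # 24 vs 10 presses
```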

Decorative casings for smartphones

For a brief period of time, pocket-unfriendly phone cases hit the market. From the hamburger phone to the rotary phone purse, all sorts of weird add-ons and cases designed to make the phone look like something else graced Hot Topic and other ‘teen’ stores. Nowadays, most cases are pretty close-fitting to the phone, likely because people realized bulky phones had been shrinking over the years for a reason. Still, quite a few of these oversized cases were popular, particularly with classmates who always had a bag and/or lacked pockets in the summer. They were cute, they offered better corner cushioning (dropping an iPhone used to be catastrophic), and quite frankly I wouldn’t be too sad to see them come back.

Modern Times

Nowadays, phones come in many colors and sizes, but generally one shape. The smartphone’s ease of storage and use makes it the clear winner in telecommunications. Cases tend to be close-fitting now, but a lack of character cases is a small price to pay for full access to the internet, email, phone, texting – all sorts of telecommunications that the first sci-fi writers could have only imagined. Flip phones are still around, and they’re a great option for a lot of people, but generally they’re not a first choice, for the reasons mentioned above. That being said, they are a lot tougher overall than an average un-cased smartphone, and significantly cheaper. Just look at how many memes there are about the Nokia!

Samsung’s new folding phone is a loop back to the olden days, but it’s obviously not exactly the same as the flip phones of yore. Screens cover both halves of the folding body, and the fold is there more to prevent scratching than to provide convenience, unlike the first flip phones.

A Sidenote: The first “Txt Speak”

Telegraph operators used abbreviations anyone might recognize – BRB, GTG, etc. – all for the sake of cutting time. Morse code and push-button flip phone strategies are alike in a lot of ways: shorten the message to make transmitting it easier. Whether it’s taps or clicks, shortening speech by cutting vowels has always been around.

Abbreviations can cause confusion, yes, but telegraph operators were paid to go fast, not to be perfectly understandable.

Sources:

https://www.pcmag.com/news/the-golden-age-of-motorola-cell-phones

https://thenewswheel.com/history-of-the-car-phone/

https://en.wikipedia.org/wiki/Motorola_Bag_Phone (Wikipedia does a fine job at an overall explanation)

https://slate.com/human-interest/2015/05/history-of-telegraph-operators-abbreviations-used-by-telegraphers.html

Traditional Storage vs Quantum Storage: What Does it Actually Mean?

Traditional Methods

Traditional storage means a lot of things right now. Magnetic storage is still used pretty consistently, as SSDs aren’t quite at the point where they replace everything the way hard drives themselves once did.

Now, quantum computing occasionally hits the news when a major breakthrough happens, and for good reason! Quantum computing promises to do more than any major storage advancement before. Quantum computing isn’t just ‘better’ classical computing – it’s a whole new ballpark, assembled with totally new technology.

What is ‘Quantum’?

Quantum mechanics. It’s frequently used by the sci-fi show’s token show-off to demonstrate their knowledge of physics. But what are quantum mechanics, really? As a concept, they’re not that tough to grasp, and you’ve probably witnessed some of the principles in action without even realizing it! For example, have you ever played the game of hiding a coin under one cup, and then shuffling it with two other cups?

Assume someone sits down to pick a cup, and they can’t tell where the coin is from watching you. Until they pick up a cup, the coin could be under all three cups at once. Basically, there’s a 33.33% chance the coin is under the cup they choose. However, once you pick up the two cups you know are empty, the odds condense. There’s now 100% certainty the coin is beneath the final cup, and 0% possibility it’s under the other two.

In real physics, this example doesn’t work perfectly. Most quantum mechanics, once observed, break down into observable truths, and you’re an observer too. You, the shuffler, have some way of knowing which cup the coin is under. The coin is probably making a sound as it’s dragged around the table, or maybe the coin is so heavy it is obvious which cup is holding it. If you know where the coin might be at all, it means that there is one observable outcome where the coin’s underneath the noisy cup, and not three potential outcomes where the coin is under all the cups. Observing this makes it true for your opponent, as well!

If you assume the coins are actually particles, and the cups are really probable locations, you’ve got something that gets close to real quantum mechanics in action!

Make Waves

Quantum mechanics (without any math in the explanation) is just a way to describe the probability of a particle existing somewhere in a real, physical environment when its actual location can only be expressed through that probability, or else it stops behaving the way it’s ‘supposed to’.

This probability breaks down into wave forms, where certain spots are more likely than others to hold a particular particle. For example, the cups all have a 33.3% chance of coin, but the table outside the cups has a 0% chance of coin. In a dark room, where nobody can observe that the surroundings are coinless, but everyone ‘knows’ coins go under cups (like we ‘know’ where electrons tend to be found in an electron shell), the chance of the coin being on the floor is very, very small – but not 0%.

Out of the places you’d pick a coin to be, though, it’s probably still under one of the cups, and almost certainly still on the table. If you looked at this probability on a chart, you’d see hills of likelihood where the cups are, and dips where they aren’t! In this way, we calculate the probable locations of things like electrons and photons, which behave in ways humans don’t fully understand yet. The coins in the above example are like those particles! A photon is probably in a certain area given what we know about its behavior – but attempting to actually measure it as a wave makes it behave like a particle, breaking its quantum state. Information is lost, and the particle no longer behaves like it did when it wasn’t being observed. Picking up the ‘cup’ to observe fundamentally changes the behavior of the ‘coin’ underneath!

How does this turn into a revolutionary computing method?

Entangled

Quantum entanglement describes items (like particles) being tied to each other in such a way that one item can’t be described without also describing the other items in the system – describing one causes the whole system to collapse as though you were looking at all of it. For example, say you put two different coins under two cups. Each cup has a coin, but which cup has which coin can’t be accurately described until one cup is lifted.

Once that cup is lifted, the first coin is described. The second coin has now also been described, because there’s no way the coin you’re looking at is under the other cup, and each cup has now only ever contained its respective coin. But only once you observed it. The probabilistic wave forms have collapsed into two points with 100% likelihood.

That doesn’t mean that one coin/particle was always, 100%, underneath its specific cup – until you picked up the cup, both were underneath both cups, mathematically speaking (remember, this is a rough example – coins and particles have different laws attached). Entanglement also has a lot to do with superposition, since both coins would have had to share a location for the cup/coin thing to happen.
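
For the mathematically curious, the two-coin system maps onto what physicists call a Bell state. Here’s a minimal numpy sketch – it simulates the measurement statistics, it is not a real quantum computer:

```python
# The entangled 'two coins, two cups' system as a Bell state:
# |psi> = (|00> + |11>) / sqrt(2). Sampling one qubit fixes the other.
import numpy as np

rng = np.random.default_rng()
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # amplitudes for |00>,|01>,|10>,|11>
probs = np.abs(bell) ** 2                   # [0.5, 0, 0, 0.5]

samples = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
print(samples)  # only ever "00" or "11": observe one coin, you've described both
```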

Superposition

Superposition describes things existing in the same space – and it’s not solely tied to quantum mechanics. Two notes played on an instrument at the same time, for example, create a new note out of their superposition. The big thing about superposition is waves. Physical objects can’t be superimposed upon one another, and two particles can’t be in exactly the same location. However… properties of objects can be expressed mathematically, in wave forms, and in that way they can be superimposed. Much like different wavelengths of light can combine to form a new color, the odds of objects being in a certain state, or being in a certain, unobservable spot can combine in superposition!

In the two-cup example, the coins are in a state of superposition until the cup is removed and their options are solidified; before the cups are removed, whatever equations are used to describe a coin’s location can be added to the equation to describe the other coin, and both equations are still valid. Neither is disproven by the existence of the other until one is observed. Until one is observed, the superposition stands.

These concepts, when put together, allow computers to read bits that aren’t yet bits, but could be bits.

Sum Total

All of this sounds really complicated – and it is, mathematically – but conceptually, it just boils down to ‘things can be predicted to be in multiple spots at once’, and ‘things can be a combination of the probabilities of other things, instead of just one thing, until observed’.

A quantum computer looks at probabilistic bits like we look at those coins, and it doesn’t think ‘that’s a 1’ – it thinks ‘this is probably a 1, but if it was a 0, how does that change the data?’ and ‘how does this being a 1 affect later bits?’ The most common path of quantum computing research uses qubits, which stay in a state of superposition.

This means that the qubit is both a zero and a one until the computer looks at it and determines its state via some randomized metric that maintains the quantum state. It could be the state of the electrons at the time the computer reads it, or the magnetic direction the qubit is randomly excited into, etc.; it just has to behave in a way that outside observers can’t definitively say leads to one specific outcome. If it can manage that, then it can calculate all the available options at once.
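
As a sketch of that ‘both until looked at’ idea, here’s a single simulated qubit in equal superposition – again, ordinary numpy reproducing the statistics, not actual quantum hardware:

```python
# One qubit in equal superposition: |psi> = (|0> + |1>) / sqrt(2).
# Measurement probabilities are the squared amplitudes.
import numpy as np

rng = np.random.default_rng()
psi = np.array([1, 1]) / np.sqrt(2)

def measure(state, shots=10_000):
    probs = np.abs(state) ** 2          # [0.5, 0.5]
    return rng.choice([0, 1], size=shots, p=probs)

print(measure(psi).mean())  # ~0.5: each read collapses to 0 or 1 at random
```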

Advancements

How is this faster, you may ask? Well, the qubit is ‘stacked’ onto other bits. The qubit can be two states, and subsequent qubits can be two states, and… they daisy-chain together to form exponentially larger potential states, which then lead to answers being calculated simultaneously, instead of linearly. In a perfect system, faults are discarded, and then the quantum computer spits out the right answers in a fraction of the time it would have taken a classical computer.

For example, let’s say a password is tied directly to the state of a pair of dice in an automatic shaker. A quantum computer will be able to spit out a probabilistic password, but a classical computer won’t be able to compete! Even a supercomputer will have to get lucky if it wants to guess what the shaker’s results are going to be before the dice are shaken again.

While this sounds very futuristic, websites are already using algorithms to convert random footage into protection for their servers: the lava lamp wall used by Cloudflare is one such example. By the time a classical computer has calculated what the algorithm required when lava lamps A-Z were in any position, literally all of them have changed. As a result, the code has changed as well, rendering that math useless. A quantum computer will be able to step up to the plate where the classical computer has struggled!

As Dr. Shohini Ghose puts it, this isn’t the equivalent of several classical computers, or one big classical computer compressed into a smaller state – it’s a totally new technology that will behave differently as it advances. Even a supercomputer would struggle with the lava lamp wall! However, quantum computers may not. Every qubit used to calculate has the potential to lead to a correct answer, or a wrong one. Good quantum computing will kick out incorrect answers as soon as they’re produced, and you’re left with something that the lava-lamp-wall algorithm will take as an answer.

Dr. Ghose uses the example of a coin-flip game, where participants face off against a quantum computer. If the computer is told to win, and it goes first, it produces a probabilistic result that only collapses with the other player’s input – the computer is essentially allowing its coin to continue spinning in the air until it can tell what the human player has, and then it catches it, to spit out the answer that it always had. The answer existed in a probabilistic state – and it won, it just needed to be observed to tell the human that. The computer only loses when it mistakes the ‘noise’ answer for the actual result. If it were able to successfully suppress noise, it would win 100% of the time.

Why Not Earlier?

These computers have been seriously considered as a project since the 80s and 90s, and now they’re making a resurgence. What kept them from being realized earlier?

Logical faults are a big part. Modern AI can suppress things it knows aren’t ‘really’ part of an equation’s answer, but the coin-flip computer above still lost 7% of the time to bad answer output. In the past, quantum computers wouldn’t have been able to correctly identify their own mistakes even down to 7% without a classical computer running alongside them, which defeats the purpose. Unlike classical computers, where faults like that come from the hardware, quantum computers get these errors from the state of the universe itself. Of course that’s difficult to compensate for.

Aside from that, there were also mechanical issues to sort out first. The computer can’t be allowed to turn the qubit into a regular bit, which is called ‘decoherence’. Decoherence happens once the system is connected to something measurable, observable: out of two cups, lifting one solidifies the probability, and the other cup, even though it hasn’t been observed, definitely has the other coin. If it’s solidified into a regular bit, it may as well have not been a qubit at all!

Mechanically, avoiding decoherence comes down to speed and environmental controls. In quantum computing, you aren’t maintaining that quantum state indefinitely – the longer the computer has to maintain it, the worse off the state is, until eventually something collapses in a measurable way. Heat will do it; stray magnetic or electric pulses will do it – flip one qubit, and you screw up the system or collapse it entirely. Decoherence has destroyed the calculations.

Side note: if you’ve heard of the double slit experiment, that’s an example of decoherence! Measuring the particles breaks the system, while deliberately not measuring them allows for that nice waveform. Their final location becomes known, but not the path they took to get there. In computing, measuring the qubit before the computer gets to it breaks it down into a not-qubit, rendering the system decoherent and screwing up the results of the calculations.

Tid-Bit

Ironically, Schrodinger haaated that his ‘cat experiment’ got big, because folks were taking it too literally. For those of you who haven’t heard of the thought experiment (no cats were ever actually put in a box), the set-up was that radioactive material has a certain % chance every second to release a radioactive particle; put that material next to a particle-sensitive trigger, and the trigger would release poison into the cat’s box. If there’s no guarantee of poison being released into the box, there’s no mathematical certainty that the cat’s either alive or dead, so it’s both. Just like the coin is under all three cups.

But not really. At the scale the experiment would have to take place, the cat’s as good as already poisoned (a lump of radioactive material has so many individual atoms that the odds of none releasing a particle at any one moment are basically zero), but Schrodinger was struggling to explain the concept to laypersons who otherwise had no exposure to physics.

The thought experiment does a great job of breaking down what’s actually occurring with superposition. It’s not about the cat, or poison, it’s about the particles. If the experiment could be particle-sized, it would work the way it’s described.

Sources:

https://indianapublicmedia.org/amomentofscience/the-heisenberg-uncertainty-principle.php

https://www.sciencealert.com/quantum-computers

https://jqi.umd.edu/glossary/quantum-superposition

Shohini Ghose via TED Talk (direct link: https://www.youtube.com/watch?v=QuR969uMICM)

https://www.ibm.com/quantum-computing/learn/what-is-quantum-computing/

https://www.nature.com/articles/s41598-020-75730-1

https://newsroom.ibm.com/2015-04-29-IBM-Scientists-Achieve-Critical-Steps-to-Building-First-Practical-Quantum-Computer

Internet Of Things: Network Vulnerability

Internet of Things items are convenient – otherwise they wouldn’t be selling, at least not next to regular, non-WiFi-enabled items. But many of them don’t actually have to be connected to the internet to do their jobs, and they should stay that way!

An Internet of Things item, or IoT item, is a device with a WiFi- or network-enabled computer in it to make the consumer’s use of it easier. This includes things like WiFi-enabled or networked washing and drying machines, ovens, fridges, mini-fridges, coffee makers, lamps, embedded lights, etc. Anything can be an IoT item if it’s got WiFi capability.

Network Entry Point

Internet of Things items, when connected to WiFi, represent a weak link in the chain. They’re poorly protected, they’re designed to favor user friendliness over all else, and they’re usually always on. You likely don’t unplug your fridge or washing machine when you go to bed – that computer may sleep, but it’s not off. You probably don’t disconnect the internet when you go to bed, either. Some devices take advantage of this, and only schedule updates for late at night so you don’t notice any service interruptions. Unfortunately, their strengths are their weaknesses, and an always-open port is a dream for hackers.

Outdated Password Policies

Internet of Things items are rarely password protected, and if they are, many users don’t bother actually changing the password from the factory default. This makes them excellent places to start probing for weaknesses in the network!
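
A trivial first line of defense is simply checking for factory defaults. A toy sketch – the default list below is illustrative, not exhaustive:

```python
# Sanity check in the spirit of "change the factory default": real IoT botnets
# like Mirai spread largely by trying short lists of default credentials.
COMMON_DEFAULTS = {"admin", "password", "12345", "123456", "default", "root", ""}

def is_factory_default(password: str) -> bool:
    return password.lower() in COMMON_DEFAULTS

print(is_factory_default("admin"))      # True: this device is a free entry point
print(is_factory_default("t8!vQ2#kL"))  # False
```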

Assuming someone’s hacking into a place to ding it with ransomware, there are a number of worthy targets: corporate offices, nuclear facilities, hospitals, etc. are all staffed by people, and people like their coffee. A well-meaning employee bringing in an internet-enabled coffee machine for their coworkers suddenly becomes the source of a critical network vulnerability – an open port in an otherwise well-defended network!

If the coffee machine, the vending machine, or the lights are IoT items, they need to be air-gapped and separated from the main network. They don’t need to be on the same network that supplies critical data within the building. These devices simply can’t protect themselves the way a PC or phone can! There’s no way to download a suitable antivirus onto a coffeemaker. If something gets past the firewall, and that password’s still default or nonexistent, there’s no second layer of protection for IoT devices.

Malware

Hacking into a fridge, for example, is not nearly as hard as hacking into an old PC, and even great antivirus can struggle with traffic coming from inside the network. Even worse, IoT devices are often missed in security checkups anyway. When McAfee or Norton or Kaspersky recommends you scan your computer, are they offering to scan your lightbulbs as well?

Once they’re in, the entire network is vulnerable. Ransomware events with no obvious cause, malware that’s suddenly deleted all the files on a server, stolen data and stolen WiFi – all of it’s possible with IoT devices. There’s more to gain than just bots for the botnet, which is why hackers keep going after these IoT items.

IoT devices are also much easier to overwhelm, even behind firewalls and effective load balancing. DoSing an IoT item can be as simple as scanning it. No, really: a team in the UK found that they could shut down turbines in a wind farm just by scanning them. The computers inside weren’t equipped to handle both a network scan and their other computing duties at the same time. Many consumer devices are in the same spot, or worse!

Security

Besides turbines, items like cameras and door locks probably shouldn’t be connected to the internet just yet. A terrifying string of hacks let strangers view doorbell and baby monitoring cameras, for example. The cameras themselves were difficult to defend even though the network was protected by a router. This is terrible for obvious reasons and class action suits were filed soon after. It even happened accidentally; Nest users would occasionally end up viewing other people’s cameras unintentionally, a bug in the system that was only fixed after complaints were made.

A consistent pattern is forming, here: security patches are only issued after vulnerabilities are discovered by the consumer! Any other type of programming wouldn’t get away with this without some public outcry. You shouldn’t have to become a victim of a security flaw to get it fixed.

And then there are things that physically interact with the security features of a house, like electronic locks. There’s nothing wrong in theory with a password lock. However, electronics are not inherently more secure than physical locks, and adding in WiFi only gives lockpickers another ‘in’. Hacking the lock could lead to being locked out of your own home, or worse. Besides, a regular lock will never unlock itself because its battery died, or because you sat down on the fob while getting on your bike or into your car. If you do want a password lock, it’s better to get one that’s not network-enabled.

We aren’t quite at the point where hacked self-driving cars are a legitimate issue, although the danger is growing on the horizon. Cars are also poorly protected, computer-wise.

Botnets

The fridge doesn’t need a quad-core processor and 8 GB of RAM to tell you that it’s at the wrong temperature, or that the door’s been left open and you should check the milk. The voice-controlled lightbulbs only need enough power to cycle through colors. IoT items are weak – but not too weak to be used for things like botnets, even if your main PC wards off botnet software.

Botnets are networks of illegitimately linked computers used to do things like DDoSing, brute-forcing passwords, and all other kinds of shenanigans that a single computer can’t do alone. By combining the computing ability of literally thousands of devices, a hacker can turn a fridge into part of a supercomputer. No one ant can sustain an attack on another colony, but an entire swarm of ants can!

This is another reason tech experts are worried about IoT items becoming widely used. Their basic vulnerabilities give skilled hackers the ability to ding well-protected sites and fish for passwords even if the network they’re targeting doesn’t have any IoT items on it. It’s a network of weaponizable computers just waiting to be exploited. Remember: password protect your devices, and leave them disconnected if you can!

Sources:

https://eandt.theiet.org/content/articles/2019/06/how-to-hack-an-iot-device/

https://danielelizalde.com/iot-security-hacks-worst-case-scenario/

https://cisomag.eccouncil.org/10-iot-security-incidents-that-make-you-feel-less-secure/

https://www.courtlistener.com/docket/16630199/1/orange-v-ring-llc/

Blizzard Entertainment’s 2012 Hack: An Example of How to Do It Right

In 2012, game developers were beginning to experiment with a principle known as “always on”. “Always on” had many potential benefits, but the downsides keep the majority of games from ever attempting it. Many of the notable standouts are games that require team play, like Fall Guys or Overwatch. Others without main-campaign team play tend to fall behind, like Diablo 3 and some of the Assassin’s Creed games. Lag, security holes, perpetual updating, etc. are all very annoying to the end user, so they’ll only tolerate them where they’re needed, like in those team games. It’s hard to say that this hack wouldn’t have happened if Blizzard hadn’t switched to an “always on” system… but some of their users only had Battle.net accounts because of the always-on.

Blizzard’s account system was designed with their larger, team games in mind. It was forward-facing, and internet speeds were getting better by the day. Users were just going to have to put up with it, Blizzard figured. Users grumbled about it, but ultimately Blizzard was keeping data in good hands at the time. You wouldn’t expect Battle.net accounts created purely to play Diablo 3 to fare better than the user profiles in the Equifax breach, right? But Blizzard didn’t drop the ball here. What did Blizzard do right to prevent a mass meltdown?

Hacker’s Lament

The long and the short of it was that Blizzard’s systems had multiple redundancies in place to A) keep hackers out and B) make the info useless even if it did end up in the wrong hands. Millions of people had lost data in similar events before, and security experts were becoming more and more crucial to keeping entertainment data safe. Blizzard was preparing for the worst and hoping for the best, so even when the worst struck, they were prepared.

The actual hack was described by Blizzard as ‘illegal access to our internal servers’. It released players’ listed emails (excluding China), the answers to security questions, and other essential identifying information about accounts into the wild. However, thanks to Blizzard’s use of the Secure Remote Password (SRP) protocol, the passwords themselves were scrambled so thoroughly that the hackers might as well have been starting from scratch. This is still a problem, but it’s not a world-ending, ‘everyone has your credit card’ problem. Changing the password on the account and enabling 2FA was considered enough to shore up security.
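
Blizzard credited the Secure Remote Password (SRP) protocol. As a simpler stand-in for why ‘scrambled’ passwords are so hard to reverse, here’s a generic salted slow-hash sketch in Python – not Blizzard’s actual scheme:

```python
# Generic salted, slow password hashing. A unique salt per user means stolen
# hashes can't be attacked with one precomputed table, and the high iteration
# count makes every single guess expensive for the attacker.
import hashlib, hmac, os

ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, stored = hash_password("hunter2")
print(verify("hunter2", salt, stored))  # True
print(verify("letmein", salt, stored))  # False
```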

Potential Issues

Lost email addresses aren’t as big of a problem as lost passwords, but they can still present an issue. Now that the hacker knows an email address was used on a particular site, it’s possible to perform a dictionary attack, or regular brute forcing! This strategy will eventually work, but the longer and more complicated the password is, the less likely it is to succeed on your account in particular.
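
Some back-of-the-envelope numbers show why length matters so much. Assuming 95 printable characters and a generous billion guesses per second:

```python
# Keyspace growth: every extra character multiplies the attacker's work by 95.
GUESSES_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3600 * 24 * 365

for length in (6, 8, 10, 12):
    keyspace = 95 ** length
    years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{length} chars: ~{years:,.4f} years to exhaust")
```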

A secondary problem is the lost security questions. Those are a form of 2FA, and depending on the question asked, guessing something that works, or brute-forcing it again, is dangerously easy. Sparky, Rover, and Spot are very popular names for American dogs, for example. If the hacker is able to identify that the player’s American, and then guess the name of their first dog, they’re in! They can change the password to keep the legitimate player out. (Part of Blizzard’s response was forcing users to change their security questions for this very reason.) 2FA that uses email or mobile is generally preferred.

Battle.net acted as an overarching account for all of Blizzard’s online games, which raised the stakes of any account breach. Losing access could mean losing hundreds of hours of game progress – or worse, credit card data and personal info.

 

Online, Always, Forever

 

The event provided ammunition for anti-always-on arguments. There was no way to play Diablo’s latest installment without a Battle.net account, so some users were vulnerable only because of the always-online system. If they’d been allowed to play offline, with no special account propping up that always-online standard, there would have been nothing to hack. Previous Blizzard games didn’t require Battle.net; people who stopped at Diablo 2 seem to have gotten off scot-free. That’s galling for users who only wanted to play Diablo 3 and found no value in anything else the Battle.net system offered. Why force users through all this work just to be less secure?

When discussing always online, there are good arguments on both sides. Generally, always on is better for the company, while offline gaming is better for the consumer. Always on helps prevent piracy, and it provides live data: companies want telemetry on bugs and player drop-off times, which helps them plan resources and roll out fixes without disrupting the player experience.

On the other hand, consumers with poor internet are left out, as lag and bugs caused by poor connection destroy their gaming experience. As games move more and more to pure digital, buying a ‘used game’ only gets more difficult for the consumer. Companies treat purchased games as a ticket to a destination, rather than an object the consumer buys. Games used to be objects, where anybody could play the game on the disc even though save data stayed on the console. Buying access to Diablo 3 via Battle.net means that there’s no way to share that access without also allowing other people to access the Battle.net account, which stores the save data. It’s the equivalent of sharing the console, not just the disc.

 

Handling

 

The response to the stolen, scrambled passwords was for Blizzard to force-reset player passwords and security questions, just in case the hackers somehow managed to unscramble them.

2FA is always a good idea, and Blizzard strongly recommended it too. A 2FA prompt alerts you the moment someone tries to log in; the default ‘your password has been changed’ email only warns you after the fact, when the hacker is already in. Depending on when you notice, they could have harvested all the data and rare skins they wanted by the time your support ticket is filed. Setting up 2FA first means you’re notified before that happens.
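
The authenticator apps behind this kind of 2FA typically implement TOTP (RFC 6238): a shared secret plus the current time yields a short-lived code. Below is a minimal sketch of that standard algorithm, not of Blizzard’s specific Authenticator, whose internals aren’t public:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period        # 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# 'JBSWY3DPEHPK3PXP' is a common demo secret, not a real credential.
print(totp("JBSWY3DPEHPK3PXP"))
```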

All in all, Blizzard handled this particular incident well. Companies are generally required to inform users about breaches, but some do so with less tact than others; formally apologizing, for instance, isn’t part of any legal requirement. What made this response possible in the first place was Blizzard’s competent security team, alongside a set of policies that were strictly followed. Logs and audits ensured Blizzard knew who accessed what and when, which is critical when forming a response. Blizzard could determine the extent of the problem and act on it quickly – the ultimate goal of any IT response.

 

 

Sources:

https://us.battle.net/support/en/article/12060

https://us.battle.net/support/en/article/9852

https://www.forbes.com/sites/erikkain/2012/08/09/its-official-blizzard-hacked-account-information-stolen/?sh=2ecadbc955d1

https://comsecglobal.com/blizzards-gaming-server-has-been-hacked/

https://medium.com/@fyde/when-too-much-access-leads-to-data-breaches-and-risks-2e575288e774

https://www.bbc.com/news/technology-19207276

Optical Storage

 

Optical storage is defined by IBM as any storage medium that uses a laser to read and write information. Lasers can pack more information into a smaller space than tape could manage (at the time), with better quality and longer media time as natural results. A writing laser burns information into the recording layer, and a weaker reading laser deciphers those burnt areas back into usable data. That recording layer is usually a thin film of metal or dye, sandwiched between protective layers of plastic, that burns easily, producing ‘pits’ – less reflective areas for the laser to read.

This is why fingerprints and scratches pose such a problem for reading data: even though you aren’t damaging the recording layer itself, the way you would by scratching a hard drive platter, fingerprints keep the laser from reading it cleanly. Scratch up the plastic layer above the dye, and the data’s as good as destroyed.
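
As for how pits become bits: the drive doesn’t simply read pit = 1, land = 0. Each transition between pit and land clocks in as a 1, and steady stretches read as 0s, with real discs adding further modulation layers (CD’s eight-to-fourteen modulation, for instance) on top. A toy model of just that transition rule:

```python
def decode_transitions(surface: str) -> str:
    """Toy model: 'P' = pit, 'L' = land, one symbol per clock tick.
    A change in reflectivity reads as 1; steady reflectivity reads as 0."""
    bits = []
    for prev, cur in zip(surface, surface[1:]):
        bits.append("1" if prev != cur else "0")
    return "".join(bits)

print(decode_transitions("PPPLLPLLLL"))  # -> '001011000'
```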

Deliberate destruction can be even more thorough. Shredding the disc in a capable paper shredder (ONLY IF IT SAYS IT CAN SHRED DISCS) destroys the data, as does microwaving the disc. Don’t microwave a disc unless you plan on trashing the microwave soon, though – most discs contain some metal, which wears the microwave out faster. Fun!

 

CDs

 

“Burning a CD” replaced “making a mix tape” when both CDs and downloadable music were available to teenagers, and for good reason. The amount of content may be roughly the same, but the quality is significantly higher.

Most blank CDs are CD-Rs – discs that can only be written once but can be read until the end of time. (A CD-ROM, by contrast, is pressed at the factory and read-only from the start; a finalized CD-R ends up behaving much the same way.) The average CD-R has room for about an album’s worth of music, maybe with a hidden track or two – about 74–80 minutes depending on the manufacturer. Alternatively, if you’d rather store data than audio, you’ll fit about 700 MB onto a single disc.
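
Those two figures, 80 minutes or 700 MB, describe the same disc. CD audio is fixed at 44,100 16-bit samples per second on two channels, while data discs give up part of every sector to extra error correction. A quick back-of-the-envelope check:

```python
# CD audio format: 44.1 kHz, 2 channels, 16-bit (2-byte) samples
bytes_per_second = 44_100 * 2 * 2           # 176,400 B/s
audio_bytes = bytes_per_second * 80 * 60    # 80 minutes of audio
print(f"raw audio: {audio_bytes / 1e6:.0f} MB")      # ~847 MB

# Data mode keeps only 2,048 of each 2,352-byte sector; the rest
# goes to extra error correction, giving the familiar ~700 MB.
data_bytes = audio_bytes * 2048 / 2352
print(f"data capacity: {data_bytes / 1e6:.0f} MB")   # ~737 MB
```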

To burn a CD, you need an optical drive that can also write information into the disc, which wasn’t always standard equipment. The laser burns the information into the dye layer behind the disc’s plastic coating, permanently changing how reflective those sections are – you can often see, by eye, which part of a disc has been used. CD-Rs can also be burnt in multiple sessions, and data is typically written from the center outwards.

But everybody knows about CD-Rs. What about CD-RWs, their much fussier brethren?

 

CD-RW

 

The primary difference between a CD-R and a CD-RW is the recording layer itself: instead of a dye that’s permanently burnt, CD-RWs use a phase-change material that can be switched back and forth. A CD-RW’s marks are also fainter – less reflective overall – so they demand a more sensitive reader, and early disc readers sometimes can’t read CD-RWs at all.

To reuse the disc, you have to blank it first (the same drive that writes a CD-RW should also be able to blank it), which takes time; after it’s been wiped, new data can be written. The trade-off is endurance: that wafer-thin recording layer can only be rearranged so many times before it loses the ability to hold data at all. The average user is unlikely to hit the rewrite limit, but it’s far more reachable than on, say, a hard drive, which by most estimates tolerates at least a hundred times as many rewrites.

 

DVDs

 

DVDs store significantly more data than CDs do, even though they occupy the same physical footprint. Where a CD holds about 700 MB, a single-layer DVD holds up to 4.7 GB. That’s enough for most movies, but an especially long film, or one with lots of extra features, needs a double-layered disc, which stores about 8.5 GB. Why can a DVD hold so much more in the same space?

The long answer is that a number of small differences add up to more burnable space: the ‘laser spiral’ (the track the laser follows, like the groove in a vinyl record) is wound more tightly, and the pits themselves are smaller. It all adds up to more data storage – and a more expensive product.
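
Rough numbers bear this out. Using commonly published geometry figures (track pitch of 1.6 µm versus 0.74 µm, minimum pit lengths of roughly 0.83 µm versus 0.40 µm; treat these as assumptions), tighter tracks and shorter pits account for most of the observed capacity gain, with leaner error correction and modulation making up the rest:

```python
# Areal density scales inversely with track pitch and pit length.
cd_pitch, dvd_pitch = 1.60, 0.74   # micrometres between track turns
cd_pit, dvd_pit = 0.83, 0.40       # micrometres, minimum pit length

geometry_gain = (cd_pitch / dvd_pitch) * (cd_pit / dvd_pit)
print(f"geometry alone: ~{geometry_gain:.1f}x")      # ~4.5x

# Observed gain is ~6.7x (700 MB -> 4.7 GB); leaner error
# correction and modulation on DVD account for the remainder.
print(f"observed: ~{4.7e9 / 700e6:.1f}x")
```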

 

DVD+R DL

 

That double-layering mentioned earlier isn’t present on every disc. In the mid-2000s, double-layer discs hit the market at close to the price of single-layer discs (although that changed over time). The first layer the laser reaches is made of a semi-transparent dye, so the beam can pass through it to reach the layer underneath.

Most modern DVD drives can read dual-layer discs, but if your computer is especially old, check its specs first: drives made before DL discs were released might not understand the second layer, and drives that can read them still might not be able to write to them. DLs are a great invention; it’s just a struggle to find good disc drives now that everything is switching to digital.

 

Compatibility

 

CD players usually can’t play DVDs. CDs came first, so a CD player would have needed to be forwards compatible with a format that didn’t exist yet – building one would have taken a time machine. Picture expecting a record player to read a CD; the gap between the two is almost that large. Nowadays the manufacturing standard seems to be a DVD player with CD compatibility included. Double-check before you buy a disc reader to make sure it does everything you want, but CD-only hardware is increasingly rare when a DVD reader costs only slightly more to make and is backwards compatible.

DVDs also carve pits (or burn marks) into the disc’s reflective layer. Just like a CD-R, a standard writable DVD can only be written once, although DVD-RWs do exist (and struggle in the same ways CD-RWs do).

 

FlexPlay Self-Destructing Entertainment

 

Remember FlexPlay self-destructing entertainment? The disc that was meant to simulate a rental and could have generated literal tons of trash per family, per year? Its trick exploited the layered construction described above. The pits aren’t directly on the surface of a DVD; they sit under a couple of layers of plastic. All FlexPlay had to do was sandwich an extra layer of reactive dye between the plastic and the recording layer – once exposed to air, that dye turned a very dark red and blocked the reader’s laser. When the dye obscures the data beneath it, the data is as good as gone: the laser can no longer reach the information to read it. Even Blu-Ray tech was thwarted by the dye.

 

Blu-Ray

 

Blu-Ray discs have higher visual quality than DVDs because they hold even more information. Blue-laser technology lets the pits sit even closer together, so more optical data can be crammed into the same space: blue light has a shorter wavelength than red light, which shrinks the minimum pit size. A single-layer Blu-Ray disc can hold up to 25 GB, and Blu-Ray discs are most commonly used for entertainment media rather than data storage. Disc readers need the blue-laser hardware itself, not just new programming: an ordinary DVD player may manage a CD, but it can’t fully read a Blu-Ray pit before that pit has already passed the read head.
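
The arithmetic roughly checks out: the spot a drive can focus scales with wavelength divided by the lens’s numerical aperture, and areal density scales with the inverse square of that spot. Plugging in the commonly published figures (650 nm at NA 0.60 for DVD, 405 nm at NA 0.85 for Blu-Ray; taken here as assumptions):

```python
# Focused spot size ~ wavelength / numerical aperture;
# areal density scales with the inverse square of spot size.
dvd_spot = 650 / 0.60   # nm, red laser
bd_spot = 405 / 0.85    # nm, blue-violet laser

density_gain = (dvd_spot / bd_spot) ** 2
print(f"density gain: ~{density_gain:.1f}x")   # ~5.2x
# 4.7 GB x ~5.2 lands near the nominal 25 GB per layer
print(f"4.7 GB x {density_gain:.1f} = ~{4.7 * density_gain:.0f} GB")
```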

Right now, the state of the art is Blu-Ray: most good Blu-Ray readers are backwards compatible with DVDs and CDs. However, many companies still sell ordinary DVDs alongside their Blu-ray releases due to cost. If you have a DVD player, you can probably hold off on upgrading, at least for a little while longer.

 

Sources:

https://www.britannica.com/technology/optical-storage

https://www.dell.com/support/kbdoc/en-us/000149930/what-are-the-different-cd-and-dvd-media-formats-available

http://www.osta.org/technology/cdqa13.htm

https://www.techrepublic.com/article/all-about-cd-r-and-cd-rw/

https://www.scientificamerican.com/article/whats-a-dvd-and-how-does/

https://kodakdigitizing.com/blogs/news/cd-vs-dvd-how-are-they-different

http://recordhead.biz/difference-blu-ray-dvd/

https://www.dell.com/support/kbdoc/en-us/000147805/guide-to-optical-disk-drives-and-optical-discs

 

Preventing Piracy Is Hard

It’s frustrating to have someone else steal your work, and piracy is one of the biggest scourges of entertainment today. Bootlegs and copyright infringement happen constantly, sometimes undetectably. So if the person pirating is outside your legal reach, how do you keep them from enjoying your work for free?

Create anti-piracy measures, of course.

Tainting the Well

Cher briefly released songs on LimeWire that played very quietly, tempting the listener to jack up their volume. A little way in, she’d shout at you to stop stealing, recorded at normal levels – which, with the volume cranked, was now deafening. The trick didn’t last long, since downloads on the site were labeled and the fakes quickly got called out, but it shows there was no limit to what artists would try to keep their intellectual property in their own hands. Ironically, LimeWire’s own worst users protected that property more effectively than the artists did: trolls spliced strange things into otherwise normal tracks, and plenty of people decided they’d rather buy from iTunes than play download lottery. The trolls tainted the well themselves.

Shame

People tend to be more embarrassed that they got caught with their hand in the cookie jar than they are about the pirating itself. Asking about the bizarre version of the song you downloaded would out you as a pirate. And music wasn’t the only industry to do this.

In fact, a whole bunch of games would throw strange errors or messages designed to make pirates ask about them online. Since pirates were the only players who ever saw those messages, creators and fellow fans alike instantly knew who had pirated the software. That was the punishment: everybody on the game’s Steam page knew you were a pirate. The outed pirates then either self-exiled or doubled down on pirating.

Anti-Piracy software

Games offer great examples of anti-piracy in action. Piracy detection used to be pretty hard: in the early days, copying a game took nothing more than a blank disc and a PC that already had the game on it, so publishers relied on code wheels and other physical artifacts inside the packaging to prove you had a legitimate copy. Then, as computers improved and games could take up more space, programmed anti-piracy kicked into a higher gear. Anything and everything went – it was the pirate’s problem if they didn’t like it. Earthbound, a game that was already difficult, would crash at the final screen of a pirated copy and then delete all your save data. So would Spyro, although Spyro at least warned you that it thought you were playing a bootleg before you reached the end.

The goal was to frustrate the pirate, which prevents piracy in its own roundabout way. Some developers opted for guilt instead: Alan Wake simply slaps a Jolly Roger eyepatch on your character to remind you that you’re playing a pirated copy and should feel bad.
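
The modern descendant of those tricks usually boils down to an integrity check: hash the shipped files, compare against known-good values baked into the build, and flip the punishment switch on any mismatch. A toy sketch of the pattern; the file name and digest here are stand-ins, not from any real game:

```python
import hashlib
from pathlib import Path

# Hypothetical known-good digest, baked in at build time
# (this value is just sha256(b"test"), a placeholder).
EXPECTED = {
    "game.dat": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def tampered(install_dir: str) -> bool:
    """Return True if any shipped file is missing or altered."""
    for name, digest in EXPECTED.items():
        path = Path(install_dir, name)
        if not path.is_file():
            return True
        if hashlib.sha256(path.read_bytes()).hexdigest() != digest:
            return True
    return False

if tampered("./install"):
    # Degrade the experience rather than crash outright, Alan Wake style.
    print("Enjoy the eyepatch, matey.")
```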

Business Software License Checks

There are obvious downsides to pirating something like Excel. If something goes wrong, what are you going to do – contact the vendor with your illegitimate copy? Good luck with that. It doesn’t help that Microsoft runs audits, too: if they detect a license or product key out of line with what they expect, they’ll know you’re pirating. If another copy of Word interacts with an illegitimate copy, they’ll know you’re pirating. Basically, if a cracked copy of Office ever touches the internet, they’ll know. With so many free alternatives available, pirating Word seems foolish.
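
Real product-key algorithms are kept secret, but the classic offline check is a key whose characters must satisfy a checksum, so mistyped or made-up keys fail instantly while the heavyweight verification happens server-side during activation. A purely hypothetical sketch of that pattern (the format and weights are invented for illustration):

```python
# Hypothetical key scheme: 20 base-36 characters in four groups of
# five, where the final character is a checksum of the rest.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def check_char(body: str) -> str:
    """Position-weighted checksum over the key body."""
    total = sum(ALPHABET.index(c) * (i + 1) for i, c in enumerate(body))
    return ALPHABET[total % 36]

def looks_valid(key: str) -> bool:
    body = key.replace("-", "")
    return len(body) == 20 and body[-1] == check_char(body[:-1])

body = "ABCDE12345FGHIJ6789"                   # 19 characters
print(looks_valid(body + check_char(body)))    # True  (valid key)
print(looks_valid(body + "0"))                 # False (wrong check digit)
```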

Microsoft is doing this for more than the money, too. A growing host of online scammers would love to trick businesses into downloading malicious software bundled with illegitimate copies of Word. If the business owner genuinely believes they bought real copies of Office, it’s Microsoft’s good name that gets tainted when things go wrong.

CAP Software

Pirated early-release discs destroy studios’ faith in reviewers. But early reviewers also provide a lot of free advertising, so cutting them all off would be financially foolish. Instead, studios use CAP software, which stores an identifying code in each reviewer’s file. If the file is leaked or copied, the code comes along with it, and the studio knows exactly which reviewer to cut off. Variants that mix identifying tones into the movie’s audio, or stamp visual watermarks onto the frame, are also common. Everyone benefits: the studio still gets its promotion, the reviewer still gets to review the movie, and the viewer gets early, legitimate information about what they want to watch. The leaker is slapped with a fine and everyone moves on.
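
Commercial watermarking schemes are proprietary, but the core idea is simple: hide an identifying code in bits of the media humans can’t perceive, such as the least significant bit of audio samples. A toy illustration, far less robust to re-encoding than the real systems:

```python
def embed_id(samples: list[int], reviewer_id: int, bits: int = 16) -> list[int]:
    """Hide reviewer_id in the least significant bit of the first
    `bits` samples. Inaudible, but recoverable from a leaked copy."""
    out = list(samples)
    for i in range(bits):
        bit = (reviewer_id >> i) & 1
        out[i] = (out[i] & ~1) | bit
    return out

def extract_id(samples: list[int], bits: int = 16) -> int:
    return sum((samples[i] & 1) << i for i in range(bits))

audio = [1000 + i for i in range(32)]        # stand-in for PCM samples
leaked = embed_id(audio, reviewer_id=0x2A5)  # watermark this copy
print(hex(extract_id(leaked)))               # 0x2a5 -> that reviewer's copy
```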

 

 

Sources:

https://www.thegamer.com/clever-anti-piracy-techniques-in-gaming/

http://jolt.law.harvard.edu/articles/pdf/v07/07HarvJLTech377.pdf

https://www.forbes.com/sites/forbestechcouncil/2019/06/27/four-data-driven-ways-to-combat-software-piracy/?sh=58e23a84320e