Google recently released some new top-level domains for purchase.
What is a Top-Level Domain?
A top-level domain is one of the most important parts of a website’s internet address, sitting just below the ‘root’ zone. A URL (Uniform Resource Locator) is made up of several pieces. The first part, usually http:// or https://, is the ‘scheme’, which tells your device which application it should use to open the URL. ‘www’, the part right after the scheme, is a subdomain: it gives your device additional information about the website, and can even be swapped out depending on the website being used (although www is very common). After that comes the domain. In a website name like www.example.com, ‘example’ is the domain; in www.google.com, ‘google’ is the domain.
Last comes the top-level domain, the final part of the address: endings like .com, .org, and .gov. If you type in the wrong top-level domain, you won’t land on the correct website, just as if you’d mistyped the domain name itself. Some top-level domains are restricted (only U.S. government bodies can use the .gov ending, according to CISA, and .uk is reserved for registrants connected to the United Kingdom), but others are open and available to whoever wants to use them. They don’t have to be three letters or fewer, either: .pizza, .tube, and .online are just some of the top-level domains one can buy. Truly, the world is your oyster!
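That breakdown can be sketched in a few lines of Python. This is a naive split for illustration only: real top-level-domain extraction consults the Public Suffix List, because endings like .co.uk span two labels.

```python
from urllib.parse import urlparse

def breakdown(url):
    """Naively split a URL into scheme, subdomain, domain, and TLD.

    Illustration only: proper TLD extraction needs the Public
    Suffix List, since suffixes like '.co.uk' span two labels.
    """
    parts = urlparse(url)
    labels = parts.hostname.split(".")
    return {
        "scheme": parts.scheme,                      # e.g. https
        "subdomain": ".".join(labels[:-2]) or None,  # e.g. www
        "domain": labels[-2],                        # e.g. example
        "tld": labels[-1],                           # e.g. com
    }

print(breakdown("https://www.example.com/page"))
```

Running this on the example above prints each named piece, matching the anatomy described in the paragraph.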
Trouble Afoot
With all that out of the way, what has Google done this time?
The thing about top-level domains is that someone has to offer them for sale first! There are a limited number of domain registrars, and not every registrar can sell every type of top-level domain. However, any established organization in the world, public or private, can apply to create and then operate a new top-level domain. They have to prove their capability, because running one takes a lot of money and server capacity, but it’s feasible for large companies like Google.
The problem is that a few of Google’s cool new top-level domains are A) already in existence elsewhere, and B) exist in a place where they can overlap. Google released eight new top-level domains, and two among them are also file types: .zip and .mov.
For convenience, many programs will turn written-out web addresses into hyperlinks. Typing www.google.com into Word, for example, will create a hyperlink. The same goes for Outlook and Teams. This is the core of the problem: referencing a file you’ve saved elsewhere in online communication channels now creates an opportunity for the recipient to click a link they didn’t mean to.
If you mean to tell someone that they should check out the photos[dot]zip file attached to the email you’re sending, and they mistakenly click the auto-hyperlink instead of downloading the file attachment, they’re left visiting an unknown (potentially malicious) website. Or, if someone in a Teams chat group says the new photos are ready in the photos[dot]zip file in the company OneDrive, then they’ve opened their team up to accidentally clicking a link thinking it leads to the shared files. Simple statements that weren’t issues before are now security risks! A particularly clever scammer could set up auto-downloads for .zip files named the same as the website, so the victim doesn’t even realize they’re downloading malware. If their browser throws a warning, they’re likely to trust the source if they don’t know that this is a possibility. The same goes for .mov files, but those aren’t as common as .zips are.
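A hedged sketch of how a chat client or filter might spot the ambiguity: scan outgoing text for tokens that read as file names but are now also valid domains. The RISKY_TLDS set below covers only the two file-type TLDs mentioned above; a real filter would need a fuller list.

```python
import re

# File extensions that are now also live top-level domains
# (per Google's 2023 release; not an exhaustive list).
RISKY_TLDS = {"zip", "mov"}

def ambiguous_names(text):
    """Find tokens like 'photos.zip' that an auto-linker could
    turn into a clickable link to the photos.zip domain."""
    hits = []
    for match in re.finditer(r"\b([\w-]+)\.(\w+)\b", text):
        if match.group(2).lower() in RISKY_TLDS:
            hits.append(match.group(0))
    return hits

print(ambiguous_names("The new photos are in photos.zip on OneDrive"))
# ['photos.zip']
```

Anything this flags is a candidate for either rewriting (e.g. “photos dot zip”) or suppressing the auto-hyperlink.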
Google has basically opened the door to a new kind of scamming, and their reasons for doing so are unclear.
Before the age of built-in antivirus and user-friendly web design, it was entirely possible to wander onto a webpage that would just start downloading something malicious out of nowhere. Popups that did this were a serious problem, and many browsers responded by adopting a sort of zero-trust philosophy. Firefox, for example, will tell you when a site has tried to open a popup and ask whether you still want to open it. This does occasionally catch honest secondary windows (like payment portals and the like), but because the browser asked first, you can say ‘yes, I wanted that to open’ instead of being stuck with some horrid flashing popup dominating your screen.
Aside from popups, some websites would either trick users into downloading things by mimicking a real website, or simply start downloading things themselves as soon as they were visited. Separate antivirus programs were needed to combat these drive-by and phishing downloads alongside other website trash, as browsers can’t always differentiate between intentional and unintentional downloads. In that era of the internet, misclicking or accidentally misspelling a website URL could be catastrophic for the computer. Big companies now protect their websites by defensively registering domains that are almost the target URL, but not quite (guarding against a form of domain squatting known as typosquatting), but this wasn’t always the case.
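The near-miss idea can be illustrated with Python’s standard difflib. The trusted list and the 0.85 similarity threshold here are made-up illustrative values, not anything registrars or browsers actually use.

```python
from difflib import SequenceMatcher

# Hypothetical list of domains we want to protect against look-alikes.
TRUSTED = ["google.com", "paypal.com", "microsoft.com"]

def looks_like_typosquat(domain, threshold=0.85):
    """Return the trusted domain a candidate suspiciously resembles,
    or None. The threshold is a rough illustrative value."""
    for trusted in TRUSTED:
        ratio = SequenceMatcher(None, domain, trusted).ratio()
        if domain != trusted and ratio >= threshold:
            return trusted
    return None

print(looks_like_typosquat("gooogle.com"))  # matches google.com
```

Exact matches pass through untouched; only almost-but-not-quite spellings get flagged, which is precisely the shape of a typosquat.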
Furthermore, hyperlinks can be used to trick people into clicking things they’d otherwise have avoided. Remember Rick Rolling? Every trick that anyone has ever used to Rick Roll you can also be used to get you to click on, and download, something you don’t want on your computer. Disguised hyperlinks. Obfuscated URLs that re-route a couple of times to get you to lower your guard. Clickable buttons, in place of links. Social engineering. The list goes on!
The False Sense of Security
The modern web as most people browse it is a safer place than it used to be. Google’s search ranking is partly responsible: users who report unpleasant website experiences, or demonstrate that a website isn’t good by leaving within seconds of it loading, will cause that website to appear lower in the search results, until eventually Google stops letting it pop up near the top at all. Hosting services also play a part: they have a monetary interest in keeping their hosted websites in good standing, and malicious websites screw that up for them. Plus, it’s sort of scummy. Would you want to do business with a company that passively allowed one of its clients to wreck another potential client’s car? Probably not!
Antivirus and default browser settings take care of much of the rest. But these things don’t mean the nastier parts of the web have stopped existing; they just mean it’s harder to get there without doing so intentionally. Users don’t fear clicking on links that lead to sources or Ko-fi pages because it’s been so long since that was a problem. Forum users click through links with no fear. While this isn’t a perfect breeding ground for the return of old-style scam links (most people still know and remember the warning signs), it is a perfect breeding ground for something new built on old foundations: QR code scams.
QR Codes
A QR code is a sort of bar code that’s recorded in two dimensions (vertical and horizontal) instead of one. Almost every modern phone (and many outdated ones) comes with a QR-reading feature built in. QR codes and code readers have a high tolerance for missing or damaged information, making them a fantastic resource for quick and easy link-loading: where a barcode is unreadable if a bar is missing, a QR code can often still be read if squares are missing or obscured. Advertisements, verification texts, digital menus, libraries, virtual queues, and more all benefit from how simple it is to whip out a phone and point the camera at a black-and-white square for a few seconds. It’s even easier than typing in a link, and you can direct users to specific pages with gangly URLs without worrying how those URLs will look on printed material; the user isn’t going to see the URL anymore, they’re going to see the QR code!
This led to things like QR code stickers out in public that would lead to individual GIFs or art project websites, a form of easy-to-remove graffiti that still showed off some art in today’s hyper-online world. QR codes gave restaurants and their diners an easy way to see a digital menu without having to type in a URL. They also made Rick Rolling easy again.
You’re probably already seeing the issue here: when users can’t see the URL, they have no way of knowing where they’re going to end up when they scan it. A hyperlink’s true destination is visible to a user who presses and holds it on mobile, or hovers a mouse pointer over it on desktop; the same is not universally true for QR codes (some phones and apps show the link before asking you to continue, but many do not). The scam potential of these codes is off the charts because many people understand them not as ‘links’ but as ‘scannable objects’.
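Once a scanner has decoded a QR code into a plain string, the same checks you’d do by hovering over a hyperlink can be applied in code. This sketch assumes a hypothetical allow-list of expected hosts; the host names are illustrative, not an official list.

```python
from urllib.parse import urlparse

# Hypothetical allow-list: the hosts we expect a legitimate code to use.
EXPECTED_HOSTS = {"discord.com", "discord.gg"}

def vet_qr_url(decoded):
    """Vet a URL string decoded from a QR code before opening it,
    the way you'd inspect a hyperlink's true destination."""
    parts = urlparse(decoded)
    if parts.scheme != "https":
        return "refuse: not HTTPS"
    if parts.hostname not in EXPECTED_HOSTS:
        return f"warn: unexpected host {parts.hostname}"
    return "ok"

# A look-alike host buried in a longer domain is caught by the checks:
print(vet_qr_url("http://discord.gg.evil.example/join"))
```

Note how `discord.gg.evil.example` fails even though it starts with a familiar name: the hostname comparison looks at the whole host, not the prefix.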
Discord Scam
For example, the recent slew of Discord scams! Essentially, a scammer compromises an account, either by brute-forcing the password or by social engineering, and sends messages to everyone on that person’s friend list saying things like “ummm idk if this is really you or not but it was your name and it says you sent a girl gross stuff like wtf? Check the #shame tag and you’ll see it. I’m blocking you just in case, I can’t be friends with a predator”. They then send an invitation to join the Discord server mentioned in the message, and block you so you can’t continue to chat with them. Because this is a compromised account, possibly belonging to someone you actually speak to regularly, this can be very alarming. The first instinct is to join the server so you can defend yourself against whatever allegations have supposedly been made there! The invitation presents you with a QR code to scan, so you can clear your name and get your friend to unblock you, but scanning it abuses the QR-based login flow to hand your Discord login session over to the scammer, compromising your account and continuing the scam.
This is the sort of scam that happened all the time before people grew wary of random DM’ed links! Here we are again, re-learning not to trust people that talk like bots and the things those bot-people/compromised accounts send us.
If you’re getting sick of having to, say, embolden and italicize words in your program over and over, have no fear – you can reduce the number of steps you have to take to do that (and many other tasks) using macros!
How To Make a Macro
The process is simple! To add a macro to a button on your mouse for use across the computer, follow these steps as listed by Microsoft (this document has pictures): https://support.microsoft.com/en-us/topic/how-do-i-create-macros-bd0f29dc-5b89-3616-c3bf-ddeeb04da2fb
To do so in Word, here: https://support.microsoft.com/en-us/office/create-or-run-a-macro-c6b99036-905c-49a6-818a-dfb98b7c3c9c
And Excel, here: https://support.microsoft.com/en-us/office/quick-start-create-a-macro-741130ca-080d-49f5-9471-1e5fb3d581a8
As with anything you do that could change the functionality of a button or mouse click, be very careful when assigning buttons certain actions! You don’t want to remove your ability to do something important (like right-clicking) by adding a macro that closes Word every time you try to paste something without using the keyboard.
Macros as a Malicious Entity
Programs like Word and Excel can open documents that carry macros designed to run as soon as the document is opened, and not every macro is harmless. Some do things like creating hundreds of new documents, some can corrupt files on your drive, and many spread themselves to the other documents on the computer when those are opened. This is why recent editions of Microsoft Office products open untrusted documents in Protected View and warn you not to enable macros unless you trust the source. An ordinary-looking .xlsm document can wreak havoc on your machine if it comes with the macros to do it!
This is also why you should always verify the sender of an attachment before you open it, even a .pdf. Malicious attachments using macros can steal the contents of the target’s email address book and send those addresses malicious emails too, continuing the cycle and spreading the document until it gets somewhere with valuable information. An early version of this, a macro virus called “Melissa”, would bait users into opening the document in Word, and then hijack their Outlook to send its bait email to the first fifty contacts in the victim’s address book, posing as the victim (read more at the FBI’s site: https://www.fbi.gov/news/stories/melissa-virus-20th-anniversary-032519). Melissa itself may be obsolete, but the technique sure isn’t.
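As a rough first-pass filter, you could flag attachments whose extensions mark them as macro-enabled Office formats. The list below is illustrative and not exhaustive, and a clean extension proves nothing on its own, since macros can also hide in older formats like .doc.

```python
from pathlib import Path

# Office extensions that can carry executable macros (illustrative,
# not exhaustive; legacy formats like .doc can carry them too).
MACRO_ENABLED = {".docm", ".xlsm", ".pptm", ".dotm", ".xltm"}

def risky_attachments(filenames):
    """Flag attachments that can run macros on open and therefore
    deserve extra sender verification."""
    return [f for f in filenames if Path(f).suffix.lower() in MACRO_ENABLED]

print(risky_attachments(["report.xlsx", "invoice.XLSM", "notes.pdf"]))
# ['invoice.XLSM']
```

Anything flagged is worth a phone call or a separate message to the apparent sender before opening.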
Worse, because the macro runs inside the application, it’s already compatible with any system running that application. Macs are not spared this time: a malicious macro can open hundreds of garbage Word docs on a Mac, too!
Alongside 2FA, a difficult-to-guess password can stop a staggering number of cyberattacks, both brute-forced and socially engineered. But how exactly do you make one? The common length recommendation for a password has jumped from 8 characters to 10, or 12 if you really want to play it safe, and a scrambled set of characters that meets all of a decent administrator’s password requirements is going to be difficult to remember almost no matter what! If you do make a good, memorable one, you shouldn’t reuse it anywhere else. It’s also infeasible to just reset your password every time you need access to a site. What can you do?
Get a Password Manager
Password managers bridge the gap between the passwords you want to make, the ones you can remember, and the password that meets all of the site’s requirements. This is such a common problem that it’s even built into some browsers! Firefox will save your passwords securely for you, although you can always download the third-party extensions of your choice in the Mozilla add-ons page (https://addons.mozilla.org/en-US/firefox/extensions/). While Chrome also has a built-in password manager, if your Google account gets hacked, all of your passwords just went with it, so in their case it’s better to go third-party.
You can download reputable password managers such as LastPass or 1Password just as easily, and perhaps more securely. For most of LastPass’s existence its password database went unbreached, although a more recent incident that began in its development environment did ultimately expose copies of customers’ encrypted vault backups, so no manager is beyond scrutiny.
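Most managers will also generate passwords for you, and the idea is simple to sketch. Python’s secrets module (designed for security-sensitive randomness, unlike the plain random module) makes a length-12 scramble a few lines; the length and character set here just follow the recommendations above.

```python
import secrets
import string

def make_password(length=12):
    """Generate a random password from letters, digits, and
    punctuation using the cryptographically secure secrets module."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())    # e.g. a 12-character scramble
print(make_password(16))  # longer if you want to play it safe
```

Of course, a password like this is exactly the kind you can’t memorize, which is the whole argument for letting a manager store it.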
DO NOT “Just Write It Down”
If you think just writing the password down on a Post-It is good enough, don’t be so sure! Social engineering is probably the easiest way to get into someone’s computer. If someone wanders into your office when you’re not there, and they spy your password written on a Post-It stuck to your desk, then boom – they’re in.
Similarly, writing passwords down isn’t a great way to keep track of them even if nobody else ever sees the note. For example: if you keep a Word doc with a bunch of passwords in it, assuming nobody is going to be able to
A) find it or
B) identify which passwords you used where (assuming you didn’t write down your usernames with them), you can also assume you’re not going to remember which is which either!
If you don’t use them frequently, you’re far more likely to forget what goes where. Oh, good: a random bunch of numbers and letters just titled ‘game account’ on the front of a Post-it that’s lost all its sticky powers. Where does it go? What is the username? Does it need a username, or just your email? Good luck figuring that out!
But the Manager is Always On!
Yes, these password managers are always prepared to fill in a blank on a webform. If you leave your office without locking your computer, then hypothetically someone could use one to access an account of yours. However, this is easy to fix. If you’re not putting your computer to sleep or locking the screen when you leave for extended periods of time, you should! If you’re not doing that because your password is too long to type in every time you get up, consider setting up a login PIN instead to remove that barrier: a regular person isn’t going to be able to guess every permutation of a four-to-six-digit PIN (sometimes letters too, depending on your admin’s settings!) in a reasonable amount of time. By locking the desktop, the manager’s convenience can’t be used against you. It’s more secure anyhow, and locking unattended workstations is a requirement for companies that follow HIPAA standards!
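The “every permutation” claim is easy to put into numbers: a digit-only PIN of length n has 10**n possibilities, so even the shortest common length leaves a casual snoop with thousands of guesses.

```python
# Size of the keyspace for digit-only PINs of length 4 through 6.
for length in (4, 5, 6):
    print(f"{length}-digit PINs: {10 ** length:,}")

# Even at one manual guess per second, nonstop, exhausting the
# six-digit space takes 10**6 seconds:
print(f"{10 ** 6 / 86400:.1f} days")  # about 11.6 days
```

Lockout policies (a forced pause or wipe after a few wrong tries) make the practical odds far worse for the guesser than this raw arithmetic suggests.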
Note: this is not meant to act as a buyer’s guide.
If you’ve been on YouTube in the past couple of years, you might have noticed an uptick in sponsorships from VPNs, making all sorts of claims. But what does a VPN actually do?
Location Services
Sometimes content published online is kept exclusive to certain countries. Canada, for example, requires that a certain percentage of broadcast entertainment be made by Canadian artists, and Germany’s copyright enforcement is notoriously strict. VPNs can let people access this content as though they were located where it was made, rather than where they actually are. American music videos and uncut Hulu channels for everyone!
Privacy
VPNs are usually advertised for privacy purposes. And most work pretty well! Instead of your traffic passing through the ISP in readable form, the VPN encrypts it on your device and tunnels it to the VPN’s server. From that server, your request goes through to the content you wanted, and the response comes back to you through the same encrypted tunnel. The ISP, which is usually responsible for restricting content, can’t read this traffic and therefore can’t filter it. For privacy concerns centered on the ISP, that’s great.
It doesn’t stop there, either: If the VPN is encrypting the data coming to and from a coffee shop’s WiFi for your computer, it’s hiding it from anyone who has access to that network – which might be more than the ISP. If all it takes is the password on the receipt to get into the network, then in theory almost anyone who finds a receipt or buys a drink can access the network. This could become a problem if that person knows more about WiFi than the owners of the shop do.
But Branding?
How is it possible for there to be so many? Don’t they all do the same thing? Kinda. That’s also why ads for VPNs have been so incredibly suffocating. The barrier to entry for selling one as a service is actually pretty low. Depending on where the host buys their server space, they’re also low-maintenance. Given those two conditions, the only thing that keeps someone from making money off of one is visibility. The market’s flooded, so right now the winner of the race is the one with the most advertising dollars.
Does it do Everything?
When it comes to advertising, a VPN is not the be-all end-all of privacy. There are so many devices in the average house (your phone, your WiFi-enabled washer, your computer, your smart TV, your gaming console…) that advertisers will still have an idea of who you are, and that’s before you even get to things like cookies. When you’re using Google, every Google service knows what you’re interested in unless you’re signed out and incognito, so searches you made could be used to tweak the content that appears on your YouTube ‘recommended’ page. Google allows you to turn off ad customization, but that doesn’t mean they aren’t keeping the info.
Accounts
If you have an account with, say, Amazon, they already know what you’re looking at on their site because it’s linked to the account. And if you have a digital assistant that you regularly search or browse with, the VPN can’t help you there either. If you’re really interested in browsing privacy rather than accessing geo-locked content, you could switch to something like DuckDuckGo or Ecosia (this is not a buyer’s guide; products are only used as examples). These services say they don’t store data on your search habits. Privacy-focused search engines aren’t foolproof, but if your main concern is privacy from advertisers and you don’t want to spend money on a subscription, they’re worth a look.
Where’s The Data?
There are also concerns about the many different VPNs themselves: you are partially anonymous to your ISP (they still know you’re using a VPN, and approximately how much data you’re moving), but you are not anonymous to the VPN. In some cases, the website on the other end expects unencrypted data, which means the VPN literally cannot connect you without decrypting your traffic for that last leg. To be fair, most browsers will warn you about unencrypted websites. But if you click through because you think the VPN’s keeping you safe, this is important information to know. Beyond that, the VPN itself can sell your data. Or get hacked! The barrier to entry is very low, which is exactly why this is a problem.
Long story short, when YouTubers are trying to sell this service, they don’t tell you why you might not need it. It’s not a good idea to connect to public WiFi without some sort of protection, and VPNs can help. VPNs are a good service if you really want to watch the UK version of The Office. However, VPNs are not an invincible shield, and they’re not always capable of end-to-end encryption. They’re a security tool, not a comprehensive solution to your privacy woes.
As always, do your research on the brands you’re considering before jumping into it headfirst.
Remember, this is an overview of VPNs as a service, not a buyer’s guide!
It’s frustrating to have someone else steal your work. That’s why piracy is one of the biggest scourges of entertainment today. Yet bootlegs and copyright infringement still happen, and sometimes undetectably. So, if the person pirating is outside your legal reach, how do you keep them from enjoying your work for free?
Create anti-piracy measures, of course.
Tainting the Well
Cher reportedly released decoy songs on LimeWire that played very quietly, in an effort to get the listener to jack up their volume; after a little bit, the track would shout at you to stop stealing, recorded at normal volume, which was now blasting at your maxed-out setting. This didn’t last very long, because downloads on the site carried user names, but there was seemingly no limit to what artists would do to keep their intellectual property in their own hands. Ironically, the worst LimeWire users did more to protect that property than the artists did! Trolls would put some strange things on otherwise normal tracks, and some people would rather go to iTunes than play download lottery. The trolls tainted the well themselves.
Shame
People tend to be more embarrassed about getting caught with their hand in the cookie jar than about the pirating itself. Asking about the bizarre version of the song you downloaded would out you as a pirate. And music wasn’t the only industry to use this.
A whole bunch of games would display strange errors or messages specifically to get pirates to ask about them online. Of course, only pirates got these messages, so creators and other fans alike would instantly know the asker had pirated the software. That was the punishment: everybody on the game’s Steam page knew you were a pirate! Pirates would then either self-exile, removing themselves from the forum to avoid the shaming, or double down on the pirating.
Anti-Piracy software
Games offer great examples of anti-piracy in action. Piracy detection used to be pretty hard; in the early days, all it took to make copies was a blank disc and a PC that already had the game on it. Games would use code wheels or other physical artifacts inside the game’s packaging to be sure you had a legit copy: if you couldn’t answer a question pre-programmed into the game, you didn’t have the original package, and you couldn’t play. Then, as computers got better and games could take up more space, programmed anti-piracy kicked into a higher gear. Anything and everything went; it was the pirate’s problem if they didn’t like it. Earthbound, a game that was already difficult, would crash at the final screen of a pirated copy and then delete all your save data. So would Spyro, although Spyro would warn you first that it thought you were playing a bootleg copy.
The goal was to frustrate the pirate, which would eventually prevent piracy in its own way. Some developers went for guilt instead: Alan Wake just slaps a Jolly Roger eyepatch on your character to remind you that you’re playing a pirated copy and should feel bad. So does Quantum Break.
Business Software License Checks
There are many obvious downsides to pirating something like Excel. Namely, if something goes wrong, what are you going to do? Contact the vendor? With your illegitimate copy? Good luck with that. It doesn’t help that Microsoft runs license checks, too: if it detects a license or a product key out of line with what it’s expecting, Microsoft will know you’re pirating. If another copy of Word interacts with an illegitimate copy, they’ll know you’re pirating. Basically, if you’re ever connected to the internet with a cracked copy of Office software, assume they’ll know. With so many free alternatives available, pirating Word seems foolish.
Microsoft is doing it for more than the money, too. There’s a growing host of people online who would just love to scam some businesses into downloading malicious software alongside illegitimate copies of Word. If a business owner genuinely believes they’re getting real copies of Office, it’s Microsoft’s good name that gets tainted!
CAP Software
Pirated early-release discs destroy faith in reviewers. However, early reviewers also provide a lot of free advertisement, so it wouldn’t be very smart financially to just cut them all off. Instead, studios use CAP (Coded Anti-Piracy) style software, which stores an identifying code in each reviewer’s copy. If the file is leaked or copied, the code comes with it, and the studio knows exactly which reviewer to cut off. Versions of this using tones mixed into the movie’s audio, or visual watermarks, are also common! Everyone benefits: the studio still gets its promotion, the reviewer gets to review the movie, and the viewer gets some early, legitimate information about what they want to watch. The leaker is slapped with a fine and everyone moves on.
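The per-reviewer code idea can be sketched with a keyed hash. The studio key and reviewer IDs below are invented for illustration, and real CAP-style watermarks are embedded in the picture or audio rather than appended as text; the point is only that each copy carries a distinct, reproducible mark.

```python
import hashlib
import hmac

STUDIO_KEY = b"studio-secret"  # hypothetical signing key

def watermark(reviewer_id):
    """Derive a short, deterministic per-reviewer code to embed in a
    screener copy. If the code later shows up in a leaked file, it
    identifies whose copy was shared."""
    digest = hmac.new(STUDIO_KEY, reviewer_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

print(watermark("reviewer-042"))  # same reviewer, same code, every time
```

Because the code is derived with a keyed hash rather than stored in a lookup table, anyone who leaks a copy can be identified later without the studio needing to keep the marked files around.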
Twitter’s meltdown should serve as a warning – while it’s possible to coast off of minimal support for a little bit, it’s not actually all that easy to keep things running on a skeleton crew. And even if Twitter still had all of its staff, would it still be standing after all those changes?
For those of you who don’t use Twitter, Musk’s purchase of the company has been a pretty huge mess for the people working under him. He fired a large percentage of the staff (more than half of the company was laid off) and encouraged those not laid off to leave by insisting Twitter was going to go ‘hardcore’ and they’d have to return to their physical offices for long hours if they valued their jobs. Many simply sent a salute emoji in the company’s big Slack town square and jumped ship. The people left behind are a mixed bag – engineers who like Musk a lot, people trapped in Twitter’s employment by work visas, and everybody in between. They’re not the company’s second-choice team, by any means, but there are fewer of them. A lot fewer. Some might even say too few for the site to function.
Broken New Features
The blue checkmark fiasco, where Twitter’s CEO promised that being able to simply buy verification would definitely not result in fraud, is one of a number of bad rollouts. A common mantra for startups is to ‘move fast and break things’, a strategy formulated when delaying choices or rollouts to make them not-broken could be the difference between receiving investor money (and customers count as investors here) or not. The iPhone, for example, famously did not work when Steve Jobs first demoed it. It crashed a lot, and it didn’t have great reception. But by demonstrating that everyone was super into the idea, he was able to rally and put out a better, more complete version of the device for customers to buy! Importantly, the iPhone wouldn’t crush the rest of Apple if it didn’t work, so they could afford to play fast with it.
However. Twitter is not a startup, is it? Nor is it releasing a fenced-in product totally unseen before – paid content tiers are new to Twitter, but pretty common everywhere else. (Had Twitter not downsized, it might have even still had the necessary expertise onboard to roll this feature out gracefully.) When a startup moves fast and breaks things, it’s forgivable, because the team might be creating something so groundbreaking that they can’t even keep up with the scope of their idea. When a big company does it, it looks… embarrassing. A team working out of a garage may not have multiple test environments for their app or product. What kind of billion-dollar company doesn’t have test environments?
What kind of billion-dollar company couldn’t see the potential for abuse, especially on a platform dedicated to discussion? People were tweeting about misusing this verification shortcut as soon as the announcement was made, and Twitter went through with it anyway! This new, fast, broken feature shut down a valuable communication channel between big companies and their clients until moderation was put into place. The lack of moderation was supposed to be a feature, you see – Twitter’s previous verification system meant that verified accounts were actually verified by Twitter, not by money, and if Twitter moderated the new checkmarks, it would be like Twitter was doing the verifying again. Again, this is an almost understandable mistake on a smaller platform with fewer people champing at the bit to abuse it, but not for multi-billion-dollar Twitter. It looked like official pharmaceutical companies were finally breaking good, and like the official channel for Nintendo USA had posted a picture of Mario flipping the bird. Customer support lines on Twitter were strangled by fakes. The response from some of those big companies was understandably angry. Musk attempted to smooth this over by bringing back the individually assigned verification checkmarks, but in gray, and then finally just dropped the idea.
Breaking Old Features
Twitter disabled the service that sent out 2-factor authentication texts in an attempt to prune microservices. Later, while restricting API access, it broke the service that allowed users to tweet directly to their own page, meaning only scheduled tweets would go through. In theory, both breakages were unfortunate side effects of trying to streamline the user experience: by shutting down what Musk felt was bloat, Twitter would run faster. That makes sense on paper. However, Twitter runs on miles and miles of code, and it now has perhaps a quarter (maybe even less) of the team it had at the start of Musk’s takeover. The resultant breaking of microservices like 2FA texts, and the over-restriction of tweet permissions, is a direct result of losing the engineers who handled those features before deciding to tinker with them.
Musk’s choice to prune Twitter’s team down to the roots means that every update, every security hole patch, every choice affecting the infrastructure of the site, is now ten times more likely to result in bugs, and those bugs are going to take much longer to fix now.
But hey – at least there’s less overhead. That’s going to be important, because advertisers are not exactly pleased.
Making Simply Existing in the Space A Total Nightmare
The CEO’s promise to ‘stop stifling free speech’ on a platform that’s honestly pretty permissive (a side effect of being an official channel of communication for a U.S. president, a role that comes with a huge number of responsibilities) certainly earned him brownie points with people who were decidedly not going to use this new, even looser set of rules kindly. People who’d been, say, banned over the use of certain words in certain targeted circumstances. At the rate Musk was suggesting they loosen moderation, Twitter could have easily turned into 2 Kiwi 2 Farms, where the targets are actually on the same platform where the harassment campaigns are planned.
Ultimately, the changes he actually made didn’t matter, because the mere promise of maybe loosening the rules a bit brought a ton of vitriol to the surface anyway, and the moderators remaining at Twitter after Musk’s big ultimatum were not equipped to handle it. Discourse on Twitter was already a horrible, rotten place where nuance goes to die, but people just existing on the site, promoting their wares or keeping up with their favorite singers and actors, now got a worse version of it, one where slurs were part of the discourse.
Every step of this is an absolute nightmare for advertisers who don’t want an ad for Sunny-D appearing next to a tweet telling someone to off themselves. Musk’s total reign over Twitter combined with his unpredictable behavior means that he can’t even promise he’ll change, because yeah, he might – and what if he makes it even more of a nightmare?
Musk Himself is Part of The Problem
Stephen King declaring that he wasn’t going to pay $20 to hang around on Twitter as a verified user led to Musk very publicly changing the price point to $8 – the price that stuck for rollout. What an absolutely insane business choice! A single celebrity says ‘this costs too much’ (and because he’s a celebrity, you know it’s not because he’s incapable of paying it, says the tech-sphere) and the price actually changes. Can you imagine almost any other service just… going for it, like that? This is a perfect example of behavior that would have been funny if Musk had not burned away all his goodwill on stupid stuff: lobbying against California’s high-speed rail in favor of his hyperloop, calling an account that uses publicly available info on jets a ‘stalker’, calling that cave diver who saved those kids a very mean name with no evidence, subjecting his staff to inhumane work hours, that thing with the horse, the Cybertruck delay, threatening to shut off Ukraine’s Starlink internet even though the US Government paid for it – the list goes on.
When Musk made a flamethrower available for sale, it was funny! He talks directly to the people! Look, he’s reinventing cars from the ground up! He named his son a bunch of letters and numbers! When Musk said “both sides are making good points”, it was scary. He has so much money that if he decided to fund an ad campaign for a candidate, that candidate could win. When he appeared behind Dave Chappelle to shout “I’m rich, bitch!” at a show, it was… bizarre. The CEO of Twitter has such an investment in looking cool that he appeared on Rick and Morty as a version of himself with tusks. To his remaining fans, he’s a maverick! To advertisers who’d normally buy Twitter adspace, he’s a nightmare. To car owners, his investment in linking his reputation to Tesla makes Teslas unattractive – a nice electric Ford doesn’t come with all the baggage, and the quality control is more consistent. He could appear anywhere, any time, and nobody can stop him from embarrassing himself and all of the people invested in his brands.
Musk himself is a huge problem for Twitter. A bad CEO can destroy a company as readily as any disaster. People within his other companies report that, allegedly, orders from him get filtered a couple of times so they actually make sense by the time they get where they’re going. While that might be hearsay, comparing Twitter’s past few months to Musk’s more successful companies suggests there’s some truth to it somewhere. Twitter is not filtering his requests – it wasn’t an organization built around an impulsive leader, so orders generally made sense as they left the head office. Tesla was built around Musk, so the buffers were there the whole time.
For Twitter to survive Musk, it has to essentially remove him from himself.
DoS stands for ‘Denial of Service’. What this means is that someone plans to deny service to and from a website by crashing it, or by making it run so poorly that it may as well be offline. As for ‘why’, there are many reasons – someone could ‘disagree’ with the content of the website or its discussions, they may be attempting to drive viewers elsewhere, it may be political, it may be simple trolling – the list goes on.
So, how is it done?
The How
Denial of Service is just that: a denial of service. Any means may be used to get to that point. If it’s a poorly secured website, getting in via hacking or credential stuffing and changing the contents on-site could be a DoS. If it’s a poorly balanced website that allows the posting of pictures and memes, sending an image too large for the website to handle could do it. Similarly, sending too much text, animated GIFs, or other content the website wasn’t prepared for could shut it down. Even requesting too much data – say, opening several tabs at once of a big image that did load – can simulate an HTTP flood attack, although that may be equally hard on the computer doing the requesting. It’s possible to DoS a site accidentally!
Inputting code into poorly made text entry fields can also crash the website, if the owner didn’t know how to prevent SQL injections. Hitting the website with too many requests in one go can crash some websites too, although that usually requires things like botnets, which turns it from a DoS into a DDoS.
In that same family, SYN flood attacks can also deny service by requesting information over and over until the website is so overloaded that it can’t respond. In a SYN flood, the computer sends requests to connect to the server repeatedly, but never actually completes them. If it’s done right, the server runs out of ports to take the requests, and legitimate requests mixed in with the faulty ones now have to wait much longer.
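The mechanics above can be illustrated with a toy model rather than real networking code: a server with a fixed backlog of half-open handshakes, where stale entries only age out after a timeout. The backlog size and timeout below are made-up numbers for illustration, not real kernel defaults.

```python
BACKLOG_SIZE = 128        # toy limit on pending (half-open) handshakes
HANDSHAKE_TIMEOUT = 30.0  # seconds before a stale SYN is dropped

pending: dict[int, float] = {}  # source "port" -> time its SYN arrived

def receive_syn(src: int, now: float) -> bool:
    """Model a server accepting a SYN. Returns False when the backlog is full."""
    # Age out half-open entries that never completed the handshake.
    for stale in [k for k, t in pending.items() if now - t > HANDSHAKE_TIMEOUT]:
        del pending[stale]
    if len(pending) >= BACKLOG_SIZE:
        return False  # this is where legitimate clients get refused
    pending[src] = now
    return True
```

If an attacker fills all 128 slots with handshakes they never finish, a legitimate client is refused until the stale entries time out – which is exactly the waiting the paragraph above describes.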
Preventing it
Many of these attacks come down to two things: out-of-format content, and too many requests at once. If a posting box has a hard limit of 10,000 characters, a DoSer could still whip up a bot to post over and over, but the website owner would be able to tell something was going on before it crashed the website. Many picture-printing places won’t accept photos over a certain size or resolution over the web, because they can clog the intake – especially places like drugstores that aren’t set up for large, high-quality images. If the network isn’t prepared, it’s entirely possible for photographers to DoS them (at least the photo station) by accident! It’s much easier to keep these incidents out at the gate: enforcing size requirements on comment sections and image uploads is a bare minimum.
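As a minimal sketch of that gate-keeping, here’s what hard input limits might look like server-side; the specific limits are illustrative, not recommendations.

```python
# Reject oversized input before it reaches any expensive processing.
MAX_COMMENT_CHARS = 10_000          # the hard cap from the example above
MAX_IMAGE_BYTES = 8 * 1024 * 1024   # hypothetical 8 MB upload ceiling

def validate_comment(text: str) -> bool:
    """Accept a comment only if it fits within the posting limit."""
    return len(text) <= MAX_COMMENT_CHARS

def validate_upload(payload: bytes) -> bool:
    """Accept an upload only if it fits within the size ceiling."""
    return len(payload) <= MAX_IMAGE_BYTES
```

A real site would enforce these limits at the web server or framework layer (before the body is even fully read), but the principle is the same: out-of-format content never gets past the door.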
As far as SQL injections go, we have a whole article on sanitizing inputs (here) – the essence of prevention is keeping data inputs separate from the command that delivers them to the database. This prevents a number of issues by itself, and it’s good advice for avoiding DoSing via SQL as well.
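A minimal sketch of that separation, using Python’s built-in sqlite3 module: the `?` placeholders hand user data to the database driver separately from the SQL command, so a would-be injection payload is stored as inert text instead of being executed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, bio TEXT)")

def save_bio(name: str, bio: str) -> None:
    # Placeholders keep the data out of the command string entirely,
    # so input like "'; DROP TABLE users; --" is just a value, not SQL.
    conn.execute("INSERT INTO users (name, bio) VALUES (?, ?)", (name, bio))

save_bio("mallory", "'; DROP TABLE users; --")
# The table survives; the "attack" is sitting in a row as plain text.
row = conn.execute("SELECT bio FROM users WHERE name = ?", ("mallory",)).fetchone()
```

The same placeholder idea exists in essentially every database library; only the placeholder syntax varies.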
For SYN floods and other brute-force attacks, configuring the firewall and installing an IPS (Intrusion Prevention System) are what security vendor PurpleSec recommends. In the olden days, attacks like these may not have crashed the site, but they could still drive the hosting costs through the roof – the owner is then incentivized to pull the plug themselves so they don’t drown in fees from their server company.
To prevent breaches, use two-factor authentication when building your site. Please. Microsoft reports that it stops 99.9% of fraudulent login attempts. It is one of the easiest ways to improve your security.
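Most authenticator-app 2FA codes are TOTP, built on the HOTP construction standardized in RFC 4226 and RFC 6238. As a sketch of how little magic is involved, here is a minimal standard-library implementation (a real deployment should use a vetted library and handle clock skew):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code per RFC 4226: HMAC-SHA1 over the counter, truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """Time-based variant (RFC 6238): the counter is the current 30 s window."""
    return hotp(secret, int(time.time()) // period)
```

The server and the authenticator app share the secret, so both can derive the same six digits for the current time window – no network round-trip required.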
How is it different from DDoSing?
DDoSing relies on multiple computers to get the desired effect; DoSing takes far fewer. This has many benefits for the person trying to wreck a website. Firstly, DoSing doesn’t involve gathering other computers to attack with – you already have all your resources at your fingertips! However, that’s a double-edged sword, as you can’t attack with more than you have.
DoSing is also easier to coordinate as other people are (usually) only minimally involved. Getting other people to DDoS a site organically is difficult because it requires organizing strangers, and doing it with a botnet requires buying a virus or making one yourself and then distributing it. DoSing with a SYN flood or with SQL injections is hard – but it might be easier than trying to get ever-more-wary strangers to click a suspicious link. Outsourcing to a hacker group, of course, is easier than both unless the malicious party lacks the funds to do so.
On the other hand, hacking into a website that’s only password-protected with a credential stuffer (or doing it semi-manually by guessing passwords yourself) is probably easier than any other method. While this carries some risk (if they can tell where the login came from, they may be able to find the attacker), it also has a lot of potential for damage if the website owner hasn’t backed up the website. The catch is that the website has to be poorly secured for it to work – 2FA stops the vast majority of these attacks, and being smart about who gets admin permissions can limit the damage.
Apple devices are slightly harder to weasel into from outside, but that doesn’t mean it’s impossible. A virus has to be crafted differently to even function on an Apple computer. For the same reason that Apple needs its own versions of browsers and games, it needs its own versions of viruses, and with Windows being the default for most ‘sensitive’ systems, like pharmacies, school networks, and hospitals, hackers and other malicious individuals just don’t seem to care that much about Mac devices.
But not caring that much is not the same as not caring at all.
Apple’s known virus count is slowly creeping up, although viruses that use weaknesses in the system to get in are quickly made obsolete by updates. Apple viruses are a special kind of pain to deal with, because the person who made them surely made them out of spite – as said previously, macOS is not compatible with Windows software, so viruses must be custom-tailored to it.
Apple’s recommendation is to completely avoid third-party apps – for good reason. The primary way that malware ends up in the computer’s system is via scam downloads. Those can look like a couple of different things. Everybody (or almost everybody) knows not to click those flashing banners at the top of blog sites that advertise “FREE iPAD! CLICK NOW!” because that used to be the most common way to steal information from non-tech-savvy people.
“Free Flash Player!” and “Free Game! Connect With Friends! Download Now!” are its equally outdated cousins. Anything that tells a Mac user that they need to download it has the potential to be a virus, and if the user is unlucky enough to get a virus prepared for a Mac, they’re in for a headache. But it’s tough to trick people with those flashing banners anymore, right? So…
The next easiest way is to fake an email from an app publisher, or even from Apple itself! This still won’t get a lot of people, but the people who fell for the flashing banners the first go-round might fall for an email that looks juuuuust official enough to make them doubt themselves.
One version of this scam involves sending an email with a downloadable attachment to ‘fix’ a ‘virus’ that ‘Apple’ has detected on the device. That’s not Apple, and there’s no virus until the recipient downloads the attachment. That was the goal! And now the virus is on the computer. Oh no!
Alternatively, if you’ve downloaded some game or another that you trusted, even though it was third-party, and then received an email about a big patch that needs to be downloaded, you might fall for it! Depending on the game, they could have your email to send patches to, right? Official platforms like Steam certainly have their users’ email addresses.
And that’s before you even consider the game download itself! Downloading a game off a third-party website can lead to some nasty results, which is why Apple goes out of its way to warn you at every step of the download, and warns you off third-party downloads in every help forum. The risk that what you downloaded could be malware just isn’t worth skipping the inconvenience of waiting for that game to come out on an Apple-licensed platform.
Long story short: it’s very possible, albeit difficult, to get viruses on a Mac computer. Don’t download attachments from strangers!
In 2005, it came to light that an organization had been covertly installing a program similar to a rootkit onto consumer devices without warning. For those who haven’t heard the term before, a rootkit is simply a program designed to remain unfindable on a device. They aren’t all bad, but their difficult-to-detect nature and ability to evade even aggressive anti-virus software make them a top-of-the-line tool for hackers.
The rootkit was on the lookout for ‘suspicious activity’, and if it detected any, it would quietly alert the parent company. However, even if you had nothing to hide, you still had something to fear: the rootkit left a gaping security hole, and a smart enough hacker could piggyback off of it to get Trojan Horses, Worms, and other nasty bugs in without alerting the computer that “hey, there’s an .exe file doing weird stuff!”
The rootkit was designed to hide itself, and it would hide the bugs behind it. There was no mention of this anywhere in the EULA for the program that carried the rootkit. The parent company hadn’t meant to leave a backdoor, but they did, and attempts to fix it without removing their own program just made the problem worse. Faking a fix with an uninstaller only hid the program deeper in the system, and trying to remove it could brick the computer, depending on which version you got. They’d really screwed themselves, and they hadn’t expected to get caught.
This wasn’t some Russian hacking scheme, or some government overreach – it was Sony, attempting to keep copyrighted material off of pirating websites. Talk about an overreaction.
The History
At some point, a company has to admit it would rather ruin the legitimate user’s experience than let a pirate go unpunished. That’s very understandable: stealing is wrong, and smug pirates behaving like they’ve gotten one over on ‘the system’ are frustrating. Ordinary responses to this can be anything from asking for the license # on the inside of the clear case to more subtly ruining the audio quality of pirated copies. This is a normal level of copyright protection. Very determined pirates could still get around these measures, but hey, you can’t spend all your resources on the fringe cases.
Companies are aware of this, and some begin to factor ‘unstoppable piracy’ into their calculations – you know, like grocery stores factor in ‘lifting loss’. Companies usually determine they’d be spending more on preventative measures than they’d be keeping on the shelves. Theft is wrong, but so are littering and driving without a license. Somehow, all three still happen no matter how huge the fines get. Sony is very mad that pirates are getting away with fresh content, and they want to do the equivalent of TSA pat-downs on everybody at the exit of the grocery store to stop a small percentage of thieves. They don’t care anymore; nobody is going to get away with it.
Was it Reasonable?
Napster and LimeWire were making inroads into the music industry’s profits, and 2005 was the peak. Pirating copyrighted content had only gotten easier with the rise of the internet, and Sony realized it was nigh impossible to find the illegitimate downloaders; uploaders were only marginally easier. They decided to go for the source, and they decided to hit hard.
“The industry will take whatever steps it needs to protect itself and protect its revenue streams… It will not lose that revenue stream, no matter what… Sony is going to take aggressive steps to stop this. We will develop technology that transcends the individual user. We will firewall Napster at source – we will block it at your cable company. We will block it at your phone company. We will block it at your ISP. We will firewall it at your PC… These strategies are being aggressively pursued because there is simply too much at stake.” – Sony Senior VP Steve Heckler
This quote was said in 2005, after Sony had merged with another company, BMG. BMG had had an incident in Europe in the early 2000s, when they’d released a CD without warning users of the copyright protection on the inside. Apparently, burning money to replace those CDs (and burning goodwill) was not enough of a lesson, and Sony and BMG together prepared to take a stand against pirates.
The Problem
They’re going after the big boys, the folks downloading music to upload everywhere else…for free.
These are the people depressing profits, in theory. Some companies theorize that once these people are gone, the people passively pirating by downloading stuff from them will also disappear and go back to buying the content. They’re somewhat right, and this audience shrinks over time. More on that later.
This is illegal and very annoying! The estimated lost sales from piracy were in the billions, and many companies were beginning to look at more intense DRM: Digital Rights Management.
To some people, DRM is the root of all evil, the seed of the eventual downfall of consumers’ rights. After Sony’s screw-up, they had a point. John Deere, Apple, Sony, Adobe, etc. are all slowly eating away at their own best features for the sake of pushing users into proprietary software – software users aren’t allowed to repair because of DRM. Take Deere: if a new Deere tractor detects a common third-party diagnostic tool, it stops working until you call out a Deere technician. This obviously drives up demand for Deere technicians, and it’s horribly restrictive to the user. Adobe recently announced it was making Pantone’s color book a subscription after offering it for free initially, and to really hammer home how annoying they were going to be about it, they’d black out any design element using those Pantone colors, ruining it. Consumers who want to keep their colors in Pantone’s library now have to pay twice for the same service.
To others, DRM is an essential part of the free market. Companies should be allowed to protect what they made, and if users find their methods extreme, they shouldn’t have bought it. And in less extreme circumstances, they’re right! That’s what the EULA, the End User License Agreement, is for. The user can decide if they’re willing to put up with the DRM specified in the Agreement, and if they’re not, they don’t have to buy it. ‘If you pirate this, it will only play static’ is reasonable.
Sure, some super-cheapskate who found a sketchy download off some sketchy site is going to listen to static with Hint of Music, but the average user would rather buy the disc and be done with it. If the company can make the ripped upload sound like garbage when it’s off its home CD, they won. The company has successfully used DRM here to keep their honest customer honest. And they did it without destroying either computer!
Doing it this way means normal consumers still get a high-quality product, and if the DRM is limited entirely to the content itself, there’s no risk of it coming back to bite the company in the butt.
Still, if you really disagree with DRM, there were companies that successfully reduced their piracy problems in other ways. Some found that guilt was enough, others found that once certain websites were gone, their piracy problems disappeared too. Warning folks that piracy was still a crime got the people who didn’t know any better to stop. Fines did a number on the folks who were too bold or too dumb to not get tracked with non-DRM means, and for the people who were doing it because it was more convenient? They reduced their pirating when better paid methods became available. Sony’s problem could have been solved in a lot of ways!
Besides, Sony wasn’t struggling. Lost sales are not the same as losses! Companies are still making profit, just not as much as they’d like. Property is not being damaged, and nobody is experiencing physical harm as a result of pirating.
The Response
Sony’s DRM was a severe overreaction to the problem at hand, and it led to several lawsuits. As said at the beginning, Sony had not only installed software without the user’s knowledge, but had then left a big entry point for security threats to get in undetected. Hundreds of thousands of networks were affected, and some of them were government networks. Once someone blew the lid off the DRM, Sony released a cover-up “uninstaller” that just hid the rootkit better and installed more DRM content on the user’s device.
This does not help!
The blown cover for the rootkit meant that black-hat hacking organizations could tool around and create something that could get into anything with that rootkit on it, undetected. Eventually Sony was forced to admit this was wrong, but not before screwing over a couple million people who just wanted to listen to Santana or Celine Dion. Over pirates.
Yeah, there’s some lost profit – but it doesn’t outweigh the regular customers.
As Stewart Baker of the Department of Homeland Security said, “it’s your intellectual property – it’s not your computer”.
The Aftermath
Sony’s first instinct is to hide it. As mentioned above, the available uninstaller didn’t actually uninstall it, and some users reported system crashes and bricked machines when the uninstaller’s poor programming interacted with the rest of the device’s software.
Their second decision is to lie: ‘the DRM has no backdoors and doesn’t pose a risk to your computer’s security.’ This was demonstrably untrue, and given that they were already in the beginning stages of a recall, it could be considered a deliberate lie.
Sony’s third action is to recall the discs with the DRM on them – but they don’t get all of the discs. Some users aren’t sure whether their disc is affected, and even non-profit organizations dedicated to maintaining a free internet can’t figure out which discs have it and which don’t. The best they can do is a partial list. Stores in New York and Boston are still selling the discs three weeks after the recall. However, users do get to swap their disc for an unprotected one through the mail. Sony seems to have acknowledged their screw-up at this point.
Sony’s fourth action is more a consequence – they stick a class-action lawsuit sign-up notice on their home website, and affected users can claim damages up until 2006. Class-action lawsuits filed by individual states start to drag down Sony’s profits more than the piracy ever did, and the end result is a mandate to put warnings on the covers of discs and to stop using DRM that could damage a user’s computer. DRM is still allowed; it just can’t be capable of destroying a computer to protect a song license. The feds considered this a breach of federal law and stated that Sony was engaging in deceptive and unfair business practices. Sounds about right – consumers wouldn’t have bought a disc that installed DRM without their knowledge. From conception to execution, this was a moral, ethical, and legal mistake. Way to go.