Note: this is not meant to act as a buyer’s guide.
If you’ve been on YouTube in the past couple of years, you might have noticed an uptick in sponsorships from VPN companies, making all sorts of claims. But what does a VPN actually do?
Location Services
Sometimes content published online is kept exclusive to certain countries. Canada, for example, requires that a certain percentage of broadcast entertainment be made by Canadian artists, and Germany’s copyright laws are notoriously strict. A VPN lets people access this content as though they were in the country where it was made, instead of where they actually are. American music videos and uncut Hulu channels for everyone!
Privacy
VPNs are usually advertised for privacy purposes. And most work pretty well! Instead of sending your traffic straight through your ISP to its destination, the VPN encrypts it and routes it through the VPN provider’s server. From that server, your request goes on to the content you wanted, and the content comes back to you through the same encrypted tunnel. The ISP, which is usually the one responsible for restricting content, can’t read this data and therefore can’t restrict it. For privacy concerns around the ISP, that’s great.
It doesn’t stop there, either: if the VPN is encrypting the data traveling between your computer and a coffee shop’s WiFi, it’s hiding that data from anyone with access to that network – which might be more people than just the ISP. If all it takes is the password on the receipt to get into the network, then in theory almost anyone who finds a receipt or buys a drink can get on. That becomes a problem if one of those people knows more about WiFi than the owners of the shop do.
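The idea behind the tunnel can be sketched in a few lines of Python. This is a toy illustration only – real VPNs use vetted protocols like WireGuard or OpenVPN, and the “cipher” below is invented purely to show why a snooper on the coffee-shop network sees only gibberish:

```python
import hashlib

# Toy illustration only -- real VPNs use vetted protocols (WireGuard,
# OpenVPN, IPsec), never a hand-rolled keystream like this one.

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from a shared key (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def tunnel_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the request with the keystream before it touches the cafe WiFi."""
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

tunnel_decrypt = tunnel_encrypt  # XOR is symmetric

request = b"GET /secret-page HTTP/1.1"
key = b"shared-session-key"
on_the_wire = tunnel_encrypt(key, request)

# Anyone sniffing the coffee-shop network sees only ciphertext...
assert on_the_wire != request
# ...but the VPN server, which holds the key, recovers the original request.
assert tunnel_decrypt(key, on_the_wire) == request
```

The point of the sketch is the asymmetry: everyone on the shared network sees `on_the_wire`, but only the two ends of the tunnel hold `key`.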
But Branding?
How is it possible for there to be so many? Don’t they all do the same thing? Kinda. That’s also why ads for VPNs have become so suffocatingly common. The barrier to entry for selling one as a service is actually pretty low, and depending on where the host buys their server space, they’re low-maintenance too. Given those two conditions, the only thing standing between someone and VPN money is visibility. The market’s flooded, so right now the winner of the race is the one with the most advertising dollars.
Does it do Everything?
When it comes to advertisers, a VPN is not the be-all end-all of privacy. There are so many devices in the average house (your phone, your WiFi-enabled washer, your computer, your smart TV, your gaming console…) that advertisers will still have an idea of who you are – and that’s before you even get to things like cookies. When you’re using Google, every Google service knows what you’re interested in unless you’re signed out and incognito, so searches you made could be used to tweak the content that appears on your YouTube ‘recommended’ page. Google allows you to turn off ad customization – that doesn’t mean they aren’t keeping the info.
Accounts
If you have an account with, say, Amazon, they already know what you’re looking at on their site because it’s linked to the account. And if you have a digital assistant that you regularly search or browse with, the VPN can’t help you there either. If you’re really interested in browsing privacy rather than accessing geo-locked content, you could switch to something like DuckDuckGo or Ecosia (this is not a buyer’s guide; these are only examples). These services don’t store data on your search habits. Privacy-focused search engines aren’t foolproof, but if your main concern is privacy from advertisers and you don’t want to spend money on a subscription…
Where’s The Data?
There are also concerns about the many VPNs themselves: you are partially anonymous to your ISP (they still know you’re using a VPN, and roughly how much data you’re moving), but you are not anonymous to the VPN. In some cases, the website on the other end expects unencrypted data, which means the VPN literally cannot connect you without decrypting your traffic first. To be fair, most browsers will warn you about unencrypted websites. But if you press on because you think the VPN’s keeping you safe, this is important information to know. Beyond that, the VPN itself can sell your data. Or get hacked! The barrier to entry is very low, which is exactly why this is a problem.
Long story short: when YouTubers are trying to sell this service, they don’t tell you why you might not need it. It’s not a good idea to connect to public WiFi without some sort of protection, and VPNs can help. VPNs are a good service if you really want to watch the UK version of The Office. However, VPNs are not an invincible shield, and they’re not always capable of end-to-end encryption. They’re a security tool, not a comprehensive solution to your privacy woes.
As always, do your research on the brands you’re considering before jumping into it headfirst.
Remember, this is an overview of VPNs as a service, not a buyer’s guide!
It’s frustrating to have someone else steal your work. That’s why piracy is one of the biggest scourges of entertainment today. Yet bootlegs and copyright infringement still happen, and sometimes undetectably. So, if the person pirating is outside your legal reach, how do you keep them from enjoying your work for free?
Create anti-piracy measures, of course.
Tainting the Well
Cher briefly released songs on LimeWire that played very quietly, in an effort to get the listener to jack up their volume. After a little bit, she’d shout at you to stop stealing – at normal volume, which on your maxed-out speakers was now very loud indeed. The trick didn’t last long, because downloads on the site had identifiable names, but there seemed to be no limit to what artists would do to keep their intellectual property in their own hands. Ironically, LimeWire’s own trolls did more to protect that property than the artists ever did: they would slip strange things into otherwise normal tracks, and some people decided they’d rather go to iTunes than keep playing download lottery. The trolls tainted the well themselves.
Shame
People tend to be more embarrassed about getting caught with their hand in the cookie jar than about the pirating itself. Asking about the bizarre version of the song you downloaded would out you as a pirate. And music wasn’t the only industry to lean on this.
A whole bunch of games would show strange errors or messages designed to get pirates to ask about them online. Of course, only pirates ever saw these messages, so creators and other fans alike knew immediately who had pirated the software. That was the punishment: everybody on the game’s Steam page knew you were a pirate! The accused then either self-exiled from the forum to escape the shaming or doubled down on the piracy.
Anti-Piracy software
Games have great examples of anti-piracy in action. Piracy detection used to be pretty hard – in the early days, all it took was a blank disc and a PC that already had the game on it to make copies. Games would use physical code wheels or other artifacts inside the game’s packaging to be sure you had a legit copy – if you couldn’t answer a question pre-programmed into the game, you didn’t have the original package, and you couldn’t play. Then, as computers got better and games could take up more space, programmed anti-piracy kicked into a higher gear. Anything and everything went – it was the pirate’s problem if they didn’t like it. Earthbound, a game that was already difficult, would crash at the final screen of a pirated copy and then delete all your save data. So would Spyro, although Spyro would at least warn you that it thought you were playing a bootleg before you got to the end.
The goal was to frustrate the pirate, which would eventually prevent piracy in its own way. Some developers opted for guilt instead: Alan Wake just slaps a Jolly Roger eyepatch on your character to remind you that you’re playing a pirated copy and you should feel bad. So does Quantum Break.
Business Software License Checks
There are many obvious downsides to pirating something like Excel. Namely, if something goes wrong, what are you going to do? Contact the vendor? With your illegitimate copy? Good luck with that. It doesn’t help that Microsoft runs audits, too – if they detect a license or a product key not in line with what they’re expecting, they’ll know you’re pirating. If another copy of Word tries to interact with an illegitimate copy, they’ll know you’re pirating. Basically, if you’re ever connected to the internet with a cracked copy of Office software, they’ll know. There are so many free alternatives that pirating Word seems foolish.
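The general shape of such an audit can be sketched on the vendor’s side. To be clear, none of this is Microsoft’s actual mechanism – the key format, limits, and function names below are invented for illustration:

```python
# Hypothetical sketch of a server-side activation audit -- NOT Microsoft's
# actual mechanism, just the general shape of one.

ISSUED_KEYS = {"AAAAA-BBBBB-CCCCC"}   # keys the vendor actually sold
MAX_ACTIVATIONS = 3                    # arbitrary example limit

activations: dict[str, set[str]] = {}  # product key -> machines seen using it

def activate(product_key: str, machine_id: str) -> str:
    if product_key not in ISSUED_KEYS:
        return "rejected: key was never issued"       # cracked / generated key
    seen = activations.setdefault(product_key, set())
    seen.add(machine_id)
    if len(seen) > MAX_ACTIVATIONS:
        return "flagged: key used on too many machines"  # likely shared copy
    return "activated"
```

A key generated by a crack fails the first check outright; a legitimate key passed around an office trips the machine-count check instead.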
Microsoft is doing it for more than the money, too. There’s a growing host of people online who would just love to scam some businesses into downloading malicious software, alongside illegitimate copies of Word. Assuming the business owner genuinely believes they’re getting real copies of Office, Microsoft’s good name is tainted!
CAP Software
Pirating early-release discs destroys faith in reviewers. But early reviewers also provide a lot of free advertisement, so it wouldn’t be very smart financially to just cut them all off. Instead, studios use CAP software, which stores a code in the file. If the file is leaked or copied, the code comes with it, and the studio knows exactly which reviewer to cut off. Versions of this using tones mixed into the movie’s audio, or visual watermarks, are also common! Everyone benefits: the studio still gets its promotion, the reviewer gets to review the movie, and the viewer gets some early, legitimate information about what they want to watch. The pirate is slapped with a fine and everyone moves on.
Twitter’s meltdown should serve as a warning – while it’s possible to coast off of minimal support for a little bit, it’s not actually all that easy to keep things running on a skeleton crew. And even if Twitter still had all of its staff, would it still be standing after all those changes?
For those of you who don’t use Twitter, Musk’s purchase of the company has been a pretty huge mess for the people working under him. He fired a large percentage of the staff (more than half of the company was laid off) and encouraged those not laid off to leave by insisting Twitter was going to go ‘hardcore’ and they’d have to return to their physical offices for long hours if they valued their job. Many simply sent a salute emoji in the company’s big Slack town square and jumped ship. The people left behind are a mixed bag – engineers who like Musk a lot, people trapped under Twitter’s employment due to work visas, and everybody in between. They’re not the company’s second-choice team, by any means, but there are fewer of them. A lot fewer. Some might even say too few for the site to function.
Broken New Features
The blue checkmark fiasco, where Twitter’s CEO promised that being able to simply buy verification would definitely not result in fraud, is one of a number of bad rollouts. A common mantra for startups is to ‘move fast and break things’ – a strategy from an era when delaying a choice or a rollout to polish it could be the difference between receiving investor money (and customers count as investors here) or not. The iPhone, for example, famously did not work when Steve Jobs first demoed it. It crashed a lot, and it didn’t have great reception. But by demonstrating that everyone was super into the idea, he was able to rally and put out a better, more complete version of the device for customers to buy! Importantly, the iPhone wouldn’t crush the rest of Apple if it didn’t work, so they could afford to play fast with it.
However. Twitter is not a startup, is it? Nor is it releasing a fenced-in product totally unseen before – paid content tiers are new to Twitter, but pretty common everywhere else. (Had Twitter not downsized, it might have even still had the necessary expertise onboard to roll this feature out gracefully.) When a startup moves fast and breaks things, it’s forgivable, because the team might be creating something so groundbreaking that they can’t even keep up with the scope of their idea. When a big company does it, it looks… embarrassing. A team working out of a garage may not have multiple test environments for their app or product. What kind of billion-dollar company doesn’t have test environments?
What kind of billion-dollar company couldn’t see the potential for abuse, either – especially on a platform dedicated to discussion? People were tweeting about misusing this verification shortcut as soon as the announcement was made, and Twitter still went through with it! This new, fast, broken feature shut down a valuable communication channel between big companies and their clients until moderation was put into place. The lack of moderation was supposed to be a feature, you see – Twitter’s previous verification system meant that verified accounts were actually verified by Twitter, not by money, and if they moderated the new one, it would be like Twitter was doing the verifying again. Again, this is an almost understandable mistake on a smaller platform with fewer people champing at the bit to abuse it, but not for multi-billion-dollar Twitter. It looked like official pharmaceutical companies were finally breaking good, and like the official channel for Nintendo USA had posted a picture of Mario flipping the bird. Customer support lines on Twitter were strangled by fakes. The response from some of those big companies was understandably angry. Musk attempted to smooth this over by bringing back the individually assigned verification checkmarks, but in gray, and then finally just dropped the idea.
Breaking Old Features
Twitter disabled the service that sent out two-factor authentication texts in an attempt to prune down microservices. Later, while restricting API access, it broke the service that let users tweet directly to their own page, meaning only scheduled tweets would go through. In theory, both actions were unfortunate side effects of trying to streamline the user experience: by shutting down what Musk felt was bloatware, Twitter would run faster. That makes sense. However, Twitter runs on miles and miles of code, and it has only a quarter or so (maybe less) of the team it had at the start of Musk’s takeover. The ‘breaking’ of microservices like 2FA, and the over-restriction of tweet permissions, is a direct result of losing the engineers who handled those features before deciding to tinker with them.
Musk’s choice to prune Twitter’s team down to the roots means that every update, every security hole patch, every choice affecting the infrastructure of the site, is now ten times more likely to result in bugs, and those bugs are going to take much longer to fix now.
But hey – at least there’s less overhead. That’s going to be important, because advertisers are not exactly pleased.
Making Simply Existing in the Space A Total Nightmare
The CEO’s promise to ‘stop stifling free speech’ on a platform that’s honestly pretty permissive (a side effect of being an official channel of communication for a U.S. president, a role that comes with a huge number of responsibilities) certainly earned him brownie points with people who were decidedly not going to use this new, even looser set of rules kindly. People who’d been, say, banned over the use of certain words, in certain targeted circumstances. At the rate Musk was suggesting they loosen moderation, Twitter could have easily turned into 2 Kiwi 2 Farms, where the targets are on the same platform where the harassment campaigns are planned.
Ultimately, what changes he actually made didn’t matter, because the mere promise of maybe loosening the rules a bit brought a ton of vitriol to the surface anyway, and the remaining moderators at Twitter after Musk’s big ultimatum were not equipped to handle it. Discourse on Twitter was already a horrible, rotten place where nuance goes to die, but people just existing on the site, promoting their wares or keeping up with their favorite singers and actors, were now experiencing a worse version of the site where slurs were now part of the discourse.
Every step of this is an absolute nightmare for advertisers who don’t want an ad for Sunny-D appearing next to a tweet telling someone to off themselves. Musk’s total reign over Twitter combined with his unpredictable behavior means that he can’t even promise he’ll change, because yeah, he might – and what if he makes it even more of a nightmare?
Musk Himself is Part of The Problem
Stephen King declaring that he wasn’t going to pay $20 to hang around on Twitter as a verified user led to Musk very publicly changing the price point to $8 – the price that stuck for rollout. How absolutely insane of a business choice! A single celebrity says ‘this costs too much’ (and because he’s a celebrity, you know it’s not because he’s incapable of paying it, the tech-sphere says) and then the price is actually changed. Can you imagine almost any other service just… going for it like that? This is a perfect example of behavior that would have been funny if Musk had not burned away all his goodwill on stupid stuff: getting the California high-speed rail canceled in favor of his hyperloop, calling an account that uses publicly available info on jets a ‘stalker’, calling the cave diver who saved those kids a very mean name with no evidence, subjecting his staff to inhumane work hours, that thing with the horse, the Cybertruck delay, threatening to shut off Ukraine’s new Starlink internet even though the US government paid for it – the list goes on.
When Musk made a flamethrower available for sale, it was funny! He talks directly to the people! Look, he’s reinventing cars from the ground up! He named his son a bunch of letters and numbers! When Musk said “both sides are making good points”, it was scary. He has so much money that if he decided to fund an ad campaign for a candidate, that candidate could win. When he appeared behind Dave Chappelle to shout “I’m rich, bitch!” at a show, it was… bizarre. The CEO of Twitter has such an investment in looking cool that he appeared on Rick and Morty as a version of himself with tusks. To his remaining fans, he’s a maverick! To advertisers who’d normally buy Twitter ad space, he’s a nightmare. To car owners, his investment in linking his reputation to Tesla makes Teslas unattractive – a nice electric Ford doesn’t come with all the baggage, and the quality control is more consistent. He could appear anywhere, any time, and nobody can stop him from embarrassing himself and all of the people invested in his brands.
Musk himself is a huge problem for Twitter. A bad CEO can destroy a company as readily as any disaster. People within his other companies report that, allegedly, orders from him get filtered a couple of times so they actually make sense by the time they get where they’re going. While that might be hearsay, comparing Twitter’s past few months to Musk’s more successful companies suggests there’s some truth to it. Twitter doesn’t filter his requests – it wasn’t an organization built around an impulsive leader, so orders generally made sense as they left the head office. Tesla was built around Musk, so the buffers were there the whole time.
For Twitter to survive Musk, it essentially has to build those buffers around him.
DoS stands for ‘Denial of Service’. What this means is that someone plans to deny service to and from a website by crashing it, or making it run so poorly that it may as well be offline. As for ‘why’, there are many reasons – someone could ‘disagree’ with the content of the website or its discussions, they may be attempting to drive viewers elsewhere, it may be political, it may be simple trolling; the list goes on.
So, how is it done?
The How
Denial of service is just that: a denial of service. Any means may be used to get to that point. If it’s a poorly secured website, getting in via hacking or credential stuffing and changing the contents on-site could be a DoS. If it’s a poorly balanced website that allows posting of pictures and memes, sending an image too large for the website to handle could do it. Similarly, sending too much text, animated GIFs, or other content the website wasn’t prepared for could shut it down. Even requesting too much data – say, opening several tabs at once of a big image that did load – can simulate an HTTP flood, although that may be equally hard on the computer doing the requesting. It’s possible to DoS a site accidentally!
Entering code into poorly made text-entry boxes can also crash a website, if the owner didn’t know how to prevent SQL injection. Hammering the website with too many requests in one go can crash some sites too, although that usually requires something like a botnet – which turns it from a DoS into a DDoS.
In that same family, SYN flood attacks can also deny service by requesting information over and over until the website is so overloaded that it can’t respond. In a SYN flood, the computer sends requests to connect to the server repeatedly, but never actually completes them. If it’s done right, the server runs out of ports to take the requests, and legitimate requests mixed in with the faulty ones now have to wait much longer.
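The backlog problem can be shown with a toy model – the class and the numbers here are invented for illustration, not taken from any real server:

```python
from collections import deque

# Toy model of a SYN backlog: the server can hold only so many half-open
# connections. A flood of SYNs that never complete the handshake fills the
# table and crowds out legitimate clients.

class Server:
    def __init__(self, backlog: int):
        self.half_open = deque()
        self.backlog = backlog

    def receive_syn(self, client: str) -> bool:
        if len(self.half_open) >= self.backlog:
            return False            # table full: the SYN is dropped
        self.half_open.append(client)
        return True                 # server sent SYN-ACK, now waits on an ACK

    def receive_ack(self, client: str) -> None:
        self.half_open.remove(client)   # handshake completed, slot freed

server = Server(backlog=128)

# The attacker sends 128 SYNs and never ACKs any of them...
for i in range(128):
    server.receive_syn(f"spoofed-{i}")

# ...so a legitimate client's SYN is dropped.
assert server.receive_syn("legit-user") is False
```

Real TCP stacks mitigate this with timeouts and tricks like SYN cookies, but the basic resource exhaustion works just like the model.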
Preventing it
Many of these are simple issues of filtering out out-of-format content and slowing down users requesting to visit. If a posting box has a hard limit of 10,000 characters, the DoSer could whip up a bot to post over and over, but the website owner would be able to tell something was going on before it crashed the site. Many picture-printing places won’t accept photos over a certain size or resolution over the web, because they can clog the intake – especially at places like drugstores that aren’t set up for large, high-quality images. If the network isn’t prepared, it’s entirely possible for photographers to DoS them (at least the photo station) by accident! It’s much easier to keep these incidents out at the gate: configuring comment sections and image-size requirements is the bare minimum.
As far as SQL injection goes, we have a whole article on sanitizing inputs (here) – the essence of prevention is keeping the data input separate from the command that delivers it to the database. This prevents a number of issues by itself, and it’s good advice for avoiding SQL-based DoSing as well.
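In practice, “keeping them separate” means parameterized queries. A minimal sketch with Python’s built-in sqlite3 module – the `?` placeholder does the separating:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")

def add_comment(body: str) -> None:
    # The ? placeholder keeps the input as pure data -- the database never
    # interprets it as part of the SQL command itself.
    conn.execute("INSERT INTO comments (body) VALUES (?)", (body,))

add_comment("nice article!")
add_comment("'); DROP TABLE comments; --")   # classic injection attempt

# The table survives, and the "attack" is stored as a harmless string.
rows = [r[0] for r in conn.execute("SELECT body FROM comments")]
assert rows == ["nice article!", "'); DROP TABLE comments; --"]
```

Had the insert been built by string concatenation instead, that second comment could have executed the `DROP TABLE` and taken the comment section with it.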
For SYN floods and other brute-force attacks, configuring the firewall and installing an IPS (Intrusion Prevention System) are what security vendor PurpleSec recommends. In the olden days, attacks like these might not have crashed the site, but they could still drive hosting costs through the roof – the owner is then incentivized to pull the plug themselves so they don’t drown in fees from their server company.
To prevent breaches, use two-factor authentication when building your site. Please. Microsoft reports that it stops 99.9% of fraudulent login attempts. It is one of the easiest ways to improve your security.
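For the curious, the one-time codes used by most 2FA apps follow a published standard – TOTP, from RFC 6238 – and fit in a dozen lines of standard-library Python:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 8) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 flavor)."""
    counter = struct.pack(">Q", timestamp // step)        # 30-second window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test vector: this secret at t=59 yields "94287082".
assert totp(b"12345678901234567890", 59) == "94287082"
```

The server and the authenticator app share the secret, so both can compute the same code for the current 30-second window – and a stolen password alone is no longer enough to log in.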
How is it different from DDoSing?
DDoSing relies on multiple computers to get the desired effect; DoSing takes far fewer – sometimes just one. This has some benefits for the person trying to wreck a website. Firstly, DoSing doesn’t involve gathering other computers to attack with – the attacker already has all their resources at their fingertips! However, that’s a double-edged sword: they can’t attack with more than they have.
DoSing is also easier to coordinate, since other people are (usually) only minimally involved. Getting other people to DDoS a site organically is difficult because it requires organizing strangers, and doing it with a botnet requires buying a virus or writing one yourself and then distributing it. DoSing with a SYN flood or SQL injection is hard – but it might be easier than trying to get ever-warier strangers to click a suspicious link. Outsourcing to a hacker group is, of course, easier than both – unless the malicious party lacks the funds.
On the other hand, breaking into a website that’s only password-protected – with a credential stuffer, or semi-manually by guessing passwords – is probably easier than any other method. While this carries some risk (if the site can tell where the login came from, it may be able to identify the attacker), it also has a lot of potential for damage if the owner hasn’t backed up the website. The catch is that the website has to be poorly secured for it to work: 2FA stops the vast majority of these attacks, and being careful about who gets admin permissions limits the damage.
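One common counter to password guessing is a lockout: after a handful of failures, stop accepting attempts entirely. A minimal sketch – the names and thresholds are invented for the example, and a real system would also expire the lock and throttle per IP:

```python
# Minimal lockout sketch. Real systems also expire locks after a cooldown
# and rate-limit per source IP; this just shows the core idea.

MAX_FAILURES = 5

failures: dict[str, int] = {}   # username -> consecutive failed attempts

def try_login(username: str, password: str, real_password: str) -> str:
    if failures.get(username, 0) >= MAX_FAILURES:
        return "locked"                  # the guesser is cut off here
    if password == real_password:
        failures[username] = 0
        return "ok"
    failures[username] = failures.get(username, 0) + 1
    return "wrong password"

# An attacker guesses wrong five times...
for guess in ["123456", "letmein", "qwerty", "password", "hunter1"]:
    try_login("victim", guess, "hunter2")

# ...and now even the correct password is refused until the lock clears.
assert try_login("victim", "hunter2", "hunter2") == "locked"
```

Five guesses is nowhere near enough to stuff a leaked password list through, which is exactly the point.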
Apple devices are slightly harder to weasel into from outside, but that doesn’t mean that it’s impossible. A virus has to be crafted differently to even function on an Apple computer. For the same reason that Apple needs its own version of browsers and games, it needs its own version of viruses, and with Microsoft being the default for most ‘sensitive’ systems, like pharmacies, school networks, and hospitals, hackers and other malicious individuals just don’t seem to care that much about Mac devices.
But not caring that much is not the same as not caring at all.
Apple’s known virus count is slowly creeping up, although viruses that exploit weaknesses in the system are quickly made obsolete by updates. Apple viruses are a special kind of pain to deal with, because whoever made them surely made them out of spite – as said previously, Mac’s system is not compatible with Microsoft’s, so the viruses are custom-tailored.
Apple’s own recommendation is to completely avoid third-party apps – for good reason. The primary way malware ends up on a computer is via scam downloads. Those can look like a couple of different things. Everybody (or almost everybody) knows not to click the flashing banners at the top of blog sites advertising “FREE iPAD! CLICK NOW!”, because that used to be the most common way to steal information from non-tech-savvy people.
“Free Flash Player!” and “Free Game! Connect With Friends! Download Now!” are its equally outdated cousins. Anything that tells a Mac user they need to download it has the potential to be a virus, and if the user is unlucky enough to get one prepared for a Mac, they’re in for a headache. But it’s tough to trick people with those flashing banners anymore, right? So…
The next easiest way is to fake an email from an app publisher, or even from Apple itself! This still won’t get a lot of people, but the people who fell for the flashing banners the first go-round might fall for an email that looks juuuuust official enough to make them doubt themselves.
One version of this scam involves sending an email with a downloadable attachment to ‘fix’ a ‘virus’ that ‘Apple’ has detected on the device. That’s not Apple, and there’s no virus until the recipient downloads the attachment. That was the goal! And now the virus is on the computer. Oh no!
Alternatively, if you’ve downloaded some game or another that you trusted, even though it was third-party, and then received an email about a big patch that needs to be downloaded, you might fall for it! Depending on the game, they could plausibly have your email to send patches to, right? Official platforms like Steam certainly have their users’ email addresses.
And that’s not even the game download itself! Downloading a game off a third-party website can lead to some nasty results, which is why Apple goes out of its way to warn you at every step of the download, and warns you off third-party downloads in every help forum. Skipping the inconvenience of waiting for a game to come out on an Apple-licensed platform just isn’t worth the risk that what you downloaded is malware.
Long story short: it’s very possible, albeit difficult, to get viruses on a Mac computer. Don’t download attachments from strangers!
In 2005, an organization was caught covertly installing a program similar to a rootkit onto consumer devices, without warning. For those who haven’t heard the term before, a rootkit is simply a program designed to remain unfindable on a device. Rootkits aren’t all bad, but their difficult-to-detect nature and ability to evade even aggressive antivirus software make them a top-of-the-line tool for hackers.
The rootkit was on the lookout for ‘suspicious activity’, and if it detected any, it would quietly alert the parent company. However, even if you had nothing to hide, you still had something to fear: the rootkit left a gaping security hole, and a smart enough hacker could piggyback off of it to get Trojan Horses, Worms, and other nasty bugs in without alerting the computer that “hey, there’s an .exe file doing weird stuff!”
The rootkit was designed to hide itself, and it would hide the bugs behind it. There was no mention of this anywhere in the EULA for the program that carried the rootkit. The parent company hadn’t meant to leave a backdoor, but they did, and attempts to fix it without removing their own program just made the problem worse. A fake fix in the form of an uninstaller only hid the program deeper in the system, and actually trying to uninstall it could brick the computer, depending on which version you got. They’d really screwed themselves, and they hadn’t expected to get caught.
This wasn’t some Russian hacking scheme, or some government overreach – it was Sony, attempting to keep copyrighted material off of pirating websites. Talk about an overreaction.
The History
At some point, a company has to admit it would rather ruin the legitimate user’s experience than let a pirate go unpunished. That’s very understandable: stealing is wrong, and smug pirates behaving like they’ve gotten one over on ‘the system’ are frustrating. Ordinary responses to this can be anything from asking for the license # on the inside of the clear case to more subtly ruining the audio quality of pirated copies. This is a normal level of copyright protection. Very determined pirates could still get around these measures, but hey, you can’t spend all your resources on the fringe cases.
Companies are aware of this, and some factor ‘unstoppable piracy’ into their calculations – you know, like grocery stores factor in ‘lifting loss’. They usually determine they’d be spending more on preventative measures than they’d be keeping on the shelves. Theft is wrong, but so are littering and driving without a license; somehow, all three still happen no matter how big the fine gets. But Sony was mad that pirates were getting away with fresh content, and wanted to do the equivalent of TSA pat-downs on everybody at the grocery store’s exit to stop a small percentage of thieves. They didn’t care anymore; nobody was going to get away with it.
Was it Reasonable?
Napster and LimeWire were making inroads into the music industry’s profits, and 2005 was the peak. Pirating copyrighted content had only gotten easier with the rise of the internet, and Sony realized it was nigh impossible to find the illegitimate downloaders; uploaders were only marginally easier. So they decided to go for the source – and to hit hard.
“The industry will take whatever steps it needs to protect itself and protect its revenue streams… It will not lose that revenue stream, no matter what… Sony is going to take aggressive steps to stop this. We will develop technology that transcends the individual user. We will firewall Napster at source – we will block it at your cable company. We will block it at your phone company. We will block it at your ISP. We will firewall it at your PC… These strategies are being aggressively pursued because there is simply too much at stake.” – Sony Senior VP Steve Heckler
This quote was said in 2005, after Sony had merged with another company, BMG. BMG had already had an incident in Europe in the early 2000s, when it released a CD without warning users of the copyright protection inside. Apparently, burning money to replace those CDs (and burning goodwill) was not enough of a lesson, and Sony and BMG together prepared to take a stand against pirates.
The Problem
They’re going after the big boys, the folks downloading music to upload everywhere else…for free.
These are the people depressing profits, in theory. Some companies theorize that once these people are gone, the people passively pirating by downloading stuff from them will also disappear and go back to buying the content. They’re somewhat right, and this audience shrinks over time. More on that later.
This is illegal and very annoying! The estimated lost sales from piracy ran into the billions, and many companies began looking at more intense DRM: Digital Rights Management.
To some people, DRM is the root of all evil, the seed of the eventual downfall of consumers’ rights, and after Sony’s screw-up they had every reason to call it that. John Deere, Apple, Sony, Adobe, etc. are all slowly eating away at their own best features for the sake of pushing users into proprietary software – software users aren’t allowed to repair because of DRM. Take Deere: if a new Deere tractor detects a common tractor repairman’s diagnostic software, it will stop working until you call out a Deere technician. This obviously drives up demand for Deere technicians, and it’s horribly restrictive to the user. Adobe recently announced it was going to make using Pantone’s color book a subscription after offering it for free initially, and to really hammer home how annoying it was going to be about it, it would black out any design aspect using those Pantone colors, ruining it. Consumers who want to keep their colors in Pantone’s library now have to pay twice for the same service.
To others, DRM is an essential part of the free market. Companies should be allowed to protect what they made, and if users find their methods extreme, they shouldn’t have bought it. And in less extreme circumstances, they’re right! That’s what the EULA, the End User License Agreement, is for. The user can decide if they’re willing to put up with the DRM specified in the Agreement, and if they’re not, they don’t have to buy it. ‘If you pirate this, it will only play static’ is reasonable.
Sure, some super-cheapskate who found a sketchy download off some sketchy site is going to listen to static with a Hint of Music, but the average user would rather buy the disc and be done with it. If the company can make the ripped upload sound like garbage once it’s off its home CD, they’ve won. The company has successfully used DRM to keep its honest customers honest – and did it without destroying anyone’s computer!
Doing it this way means normal consumers still get a high-quality product, and if the DRM is limited entirely to the content itself, there’s no risk of it coming back to bite the company in the butt.
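To make the content-only DRM idea concrete, here’s a toy sketch – not Sony’s actual scheme, and the license check is entirely made up. The point is that the protection lives in the player and the disc’s content, so an unlicensed copy degrades to static while the listener’s computer is left completely alone.

```python
import random

def play_track(samples, license_ok):
    """Toy content-only DRM: a licensed copy plays normally, an
    unlicensed rip comes out as static. The check never touches the
    rest of the system -- no drivers, no rootkits, no uninstaller."""
    if license_ok:
        return samples  # legitimate copy plays untouched
    rng = random.Random(0)  # deterministic "static" for the demo
    return [rng.randint(-32768, 32767) for _ in samples]

music = [0, 100, 200, 100, 0]  # stand-in for real audio samples
print(play_track(music, license_ok=True))   # untouched audio
print(play_track(music, license_ok=False))  # same length, pure noise
```

Static with a Hint of Music, exactly as described above – and nothing left behind to come back and bite the company later.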
Still, if you really disagree with DRM, there were companies that successfully reduced their piracy problems in other ways. Some found that guilt was enough; others found that once certain websites were gone, their piracy problems disappeared too. Warning folks that piracy was still a crime got the people who didn’t know any better to stop. Fines did a number on the folks too bold or too careless to cover their tracks, and the people pirating out of convenience pirated less once better paid options became available. Sony’s problem could have been solved in a lot of ways!
Besides, Sony wasn’t struggling. Lost sales are not the same as losses! Companies are still making profit, just not as much as they’d like. Property is not being damaged, and nobody is experiencing physical harm as a result of pirating.
The Response
Sony’s DRM was a severe overreaction to the problem at hand, and it led to several lawsuits. As said at the beginning, Sony had not only installed software without users’ knowledge, but had then left a big entry point for security threats to slip in undetected. Hundreds of thousands of networks were affected, some of them government networks. Once someone blew the lid off the DRM, Sony released a cover-up “uninstaller” that just hid the rootkit better and installed more DRM content on the user’s device.
This does not help!
The blown cover for the rootkit meant that black-hat hacking organizations could tool around and create something that could get into anything with that rootkit on it, undetected. Eventually Sony was forced to admit this was wrong, but not before screwing over a couple million people who just wanted to listen to Santana or Celine Dion. Over pirates.
Yeah, there’s some lost profit – but it doesn’t outweigh the regular customers.
As Stewart Baker of the Department of Homeland Security said, “it’s your intellectual property – it’s not your computer”.
The Aftermath
Sony’s first instinct is to hide it. As mentioned in the article above, the available uninstaller doesn’t actually uninstall the rootkit, and some users report system crashes and even bricked machines when the uninstaller’s poor programming interacts with the rest of the device’s software.
Their second decision is to lie – ‘the DRM has no backdoors and doesn’t pose a risk to your computer’s security’. This is demonstrably untrue and, given that they were already in the beginning stages of a recall, could be considered a deliberate lie.
Sony’s third action is to recall the discs with the DRM on them, but they don’t catch all of the discs. Some users aren’t sure whether their disc is affected, and even non-profit organizations dedicated to maintaining a free internet can’t figure out which discs have it and which don’t. The best they can do is a partial list. Stores in New York and Boston are still selling the discs three weeks after the recall. However, users do get to swap their disc for an unprotected one through the mail. Sony seems to have acknowledged its screw-up at this point.
Sony’s fourth action is more a consequence – they stick a class-action lawsuit sign-up notice on their home website, and affected users can claim damages up until 2006. Class-action lawsuits filed by individual states start to drag down Sony’s profits more than the piracy ever did, and the end result is a mandate to put warnings on the covers of discs and to stop using DRM that could damage a user’s computer. DRM is still allowed; it just can’t be capable of destroying a computer to protect a song license. The feds considered the rootkit a breach of federal law and stated that Sony was engaging in deceptive and unfair business practices. Sounds about right – consumers wouldn’t have knowingly bought a disc that installed DRM behind their backs. From conception to execution, this was a moral, ethical, and legal mistake. Way to go.
If you’ve been online in the past few years, you’ve likely seen this warning already from VPN ads and security experts: don’t connect straight to public WiFi if you can help it, and if you do, don’t do your online banking on it. If the hacker gains special access to the WiFi network without the actual owners knowing, they can see the data that travels to and from the systems attached to it.
2) Juice Jacking
There was a window of time, between the phone security we see today and the arrival of teeny-tiny tech like micro-cameras, when hackers could wire chips into public USB charging ports and steal data – either directly through the port or by planting malware designed to send that info out after a certain amount of time. Pictures, app passwords, saved files, audio recordings: anything you wouldn’t want to share over USB. Luckily, a security conference revealed a lot of these issues before they became an epidemic, and between Android adopting a white-list system and Apple shipping security patches, juice jacking is less and less common. If you’re still worried, there are a number of precautions that don’t rely on programming, like using the cable/adaptor that came with your device or using a charge-only cable with no data wires.
3) Illegitimately Named HotSpots
In this case, the hacker renames a WiFi source (which could be a phone hotspot or something similar) to something you’re looking for. Maybe it’s the free WiFi for the hotel, and you don’t notice that there are two of them before you go through the effort of logging in with your room key and the password the front desk gave you – which the hacker doesn’t need, but asking for it gives the fake network an air of legitimacy. Now the hacker can see your online traffic, whether it goes to apps on your phone or to websites on your laptop. Private information is no longer private.
This is different from the previously mentioned public WiFi: in this method, the hacker owns the fake network, whereas on public WiFi they don’t. The legitimate admin of a WiFi network the hacker doesn’t own might eventually notice and kick them off, but a WiFi source the hacker owns would have to be shut down entirely to keep people off of it, since the hacker is the source.
Renaming networks to get phones to auto-connect can also be a problem, but if it’s not done exactly right, data invisible to the user tips the phone off that HomeNetwork1 isn’t really the network it’s supposed to auto-connect to. That makes this hack more complicated than the method above; most people would also pause for a second if their phone asked for permission to connect to their home network from miles away, without a password. Social engineering a connection to a network the device doesn’t know is an easier, more efficient way to steal data.
Be sure to turn off WiFi seeking features until you’re ready to connect to a specific network of your choosing, which removes this possibility altogether.
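One cheap sanity check against the duplicate-hotspot trick is to notice when a scan shows the same network name twice. A real scan would come from your OS’s WiFi tools; the sketch below just takes a plain list of (SSID, BSSID) pairs, with all the names invented for the example.

```python
from collections import Counter

def suspicious_ssids(scan_results):
    """Flag network names that appear more than once in a scan.

    scan_results: list of (ssid, bssid) pairs as an OS WiFi scan might
    report them. Two access points sharing one SSID is normal for large
    venues, but an unexpected duplicate of the hotel's network is worth
    a second look before you type in any credentials."""
    counts = Counter(ssid for ssid, _ in scan_results)
    return sorted({ssid for ssid, n in counts.items() if n > 1})

scan = [("Hotel_Guest_WiFi", "aa:bb:cc:11:22:33"),
        ("Hotel_Guest_WiFi", "de:ad:be:ef:00:01"),  # second AP -- legit or rogue?
        ("CoffeeShop", "aa:bb:cc:44:55:66")]
print(suspicious_ssids(scan))  # ['Hotel_Guest_WiFi']
```

A duplicate isn’t proof of an attack, but it’s exactly the moment to ask the front desk which network is theirs.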
4) Over the Shoulder
The simplest method of gaining illegitimate access to your accounts is watching over your shoulder – a low-tech form of social engineering. Now, it’s not easy: if you’ve ever tried out of curiosity, you’ll know most people type too fast for your eyes to follow, and that’s before the shift key, numbers, and punctuation come in. So it’s simple – not easy. But difficult is not impossible, and if your password is especially simple, or they watch you glance at a sticky note you’ve stuck somewhere to remember it, the chance that they’ll successfully catch or find your password goes up. Remember, the best passwords are long and decently complicated!
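“Long and decently complicated” can be put into numbers. Assuming a password is chosen truly at random, its strength in bits is its length times log2 of the character-set size – a rough sketch:

```python
import math

def entropy_bits(length, charset_size):
    """Entropy of a truly random password: length * log2(charset size)."""
    return length * math.log2(charset_size)

# 8 lowercase letters vs a 16-character mix of upper/lower/digits/symbols
print(round(entropy_bits(8, 26), 1))   # 37.6 bits -- weak
print(round(entropy_bits(16, 94), 1))  # 104.9 bits -- strong
```

Note that length does most of the work: doubling the length doubles the bits, while adding symbols to the character set only nudges the log2 term up.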
DDoS stands for Distributed Denial of Service, and it’s usually used to mean that a website is pushed to its capacity limit and forcibly closed. It’s entirely possible to do this accidentally, and spikes in a website’s popularity can actually be more of a problem than a success!
What causes it?
The internet has been described as a series of tubes, a network of roads, and a whole host of other metaphors. At its most basic interpretation, a DDoS attack is like causing a traffic jam, or a clogged pipe, by sending or asking for so much data that the road lights stop working, or the valves have to shut. The server holding the website gets so many false requests for data that it can’t sort the legitimate ones from the weaponized ones, and everything slows to a crawl while it tries to catch up.
Different websites have different thresholds to reach before this becomes a problem, however, and this changes the way the attack is carried out. The resources needed to DDoS a site change depending on where the website’s infrastructure is weakest.
Modern Times
Websites allocate time, money, and resources differently depending on what they need. A website with a lot of far-flung servers may invest heavily in load balancing and firewalls, so someone attempting to DDoS them is going to have a heck of a time actually getting through that way. A video hosting website that’s recently switched to 4K is going to invest in more server space, so a SYN flood may be unsuccessful.
And then there are small websites that sit somewhere in the middle – they don’t host a lot of videos, and nothing’s really demanding much bandwidth except their content library. These are the most vulnerable to DDoSing.
DDoS-ability is entirely based on the website’s resources. It would be nearly impossible to successfully DDoS Google, for example. They have the capacity to withstand a sudden influx of several million computers, all trying to access their services. That’s just a Friday night for them! However, if a celebrity posts a link to a home-run recipe blog, that blog’s about to come under heavy strain they might not have expected.
Forum websites like Digg and Reddit have a term for accidentally DDoSing a small website: the “Hug of Death”. So many users are directed from a cool post on the front page to the website that it crashes and loses service. This is DDoSing, even if it’s completely accidental. Sometimes popularity is the worst thing that can happen to a website! Repeat visits to a website tend to grow very slowly and are the result of a lot of hard work and careful ad placement. Insane success doesn’t happen overnight… until it does, and a DDoS event happens.
Malicious uses
DDoSing a site used to be a pretty popular way to harass a website creator or organization. It’s simple, it’s cheap, it’s effective, and it doesn’t take much to successfully DDoS the tiny sites that content creators make to separate themselves from things like BlogSpot or Wix.
There are multiple roads to the desired goal of a crash! One method is simply coordinating other users via social media to repeatedly ding a website until it starts slowing, and then crashes. This is the easiest, fastest method, but it requires a pre-existing platform to rally bad actors. It also happens accidentally all the time! Someone will point to a cool website and then crash it when their followers hit it too hard all at once.
Inhuman Causes
The second option is to create a botnet, a network of internet-accessible devices that can request access to a website. This sounds expensive, but the real secret is that the hacker’s using other people’s computers to carry out this kind of attack. They get their malicious software onto the machine by exploiting social engineering or poor network security, and then they send a command to the device to attempt to access the website they’re DDoSing.
People affected may notice their own computers slowing down because the command is taking up computing power! This method requires more programming knowledge than the first, but it delivers a lot of power anonymously. However, identifying it as an actual DDoS attack rather than a spike in popularity is easier: the visits come in unnatural waves that the website host can pick up on. Many hosting services offer analytics to help website builders sell ads; using the same data to spot DDoS patterns is a natural extension.
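Those unnatural waves are detectable with surprisingly little math. A minimal sketch (the numbers and thresholds here are invented for illustration): flag any minute whose request count dwarfs the rolling average of the minutes before it.

```python
def flag_spikes(requests_per_minute, window=5, factor=3.0):
    """Flag minutes whose request count exceeds `factor` times the
    average of the preceding `window` minutes -- a crude stand-in for
    the anomaly detection a hosting provider's analytics might run."""
    flagged = []
    for i in range(window, len(requests_per_minute)):
        baseline = sum(requests_per_minute[i - window:i]) / window
        if requests_per_minute[i] > factor * baseline:
            flagged.append(i)
    return flagged

# steady ~100 requests/minute, then a sudden 50x wave
traffic = [100, 110, 95, 105, 100, 98, 5000, 5200, 102]
print(flag_spikes(traffic))  # [6, 7] -- minutes 6 and 7 stand out
```

Real detection is fancier, but the principle is the same: compare now against recent normal.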
However, done right, this kind of attack is the most difficult to ward off. It includes things like HTTP floods, which are what they sound like: HTTP, the hypertext transfer protocol, is flooded with requests to connect. SYN floods fall into this category too, but instead of HTTP requests, the attacker abuses the TCP handshake – the initial request to open a connection is sent over and over and never completed. Either way, the website can’t tell who’s legit. It can reroute traffic to a stopgap page or a black-hole page (where the traffic is simply told ‘there’s nothing here!’), but that still gives the DDoSer the desired result – service is denied.
Location
DDoSing can attack the upstream and downstream of a site, too. One example is a DNS amplification attack, where the malicious user sends small, simple requests that generate much larger responses. The website itself might cope, but the upstream servers supplying the requested data can be forced to cut service to protect themselves. Protocol attacks aim to overburden a site’s firewall and load balancers by repeatedly dinging them until they’re too busy and shut down. Both are easier to handle than HTTP floods, but they’re still used today against unprepared and poorly written websites.
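The “small request, big answer” trick is easy to put into numbers. The sizes below are illustrative, but the arithmetic is the whole attack: response size divided by request size is how much the attacker’s bandwidth gets multiplied.

```python
def amplification_factor(request_bytes, response_bytes):
    """How much an attacker's bandwidth is multiplied when a small
    spoofed request triggers a large response aimed at the victim."""
    return response_bytes / request_bytes

# Illustrative sizes: a ~60-byte DNS query whose answer runs ~3000 bytes
print(amplification_factor(60, 3000))  # 50.0
```

At a 50x factor, every megabit per second the attacker spends becomes fifty aimed at the target – which is why amplification attacks punch far above the attacker’s own connection.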
Botnets don’t have to be made entirely of high-powered user devices, like laptops or desktops. Internet of Things items can be used in a DDoS attack too! IoTs are usually poorly protected and have juuust enough power to request data from a website. They make perfect botnet fodder. Plus, it’s much harder to tell that a fridge has been hacked, so it tends to fly under the radar.
How to Stop it?
The best way to avoid being sucked into a botnet is to follow the same security advice used against viruses. IoT items are computers too, and they should be treated with the same caution as PCs! Don’t download sketchy things from sketchy sites, don’t click malicious attachments, and so on. And for those IoT devices, change the default password! Use a router password that isn’t the factory default! This should keep your devices from being drafted into botnets without your knowledge.
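A quick self-audit for the default-password problem can be as simple as checking a device’s login against well-known factory credentials. The list below is a tiny illustrative sample; real audit lists (like the one the Mirai botnet famously used) run to dozens of username/password pairs.

```python
# A few illustrative factory defaults -- real lists are much longer.
COMMON_DEFAULTS = {("admin", "admin"), ("admin", "password"),
                   ("root", "root"), ("admin", "1234")}

def uses_default_credentials(username, password):
    """True if the device still has a well-known factory login."""
    return (username, password) in COMMON_DEFAULTS

print(uses_default_credentials("admin", "admin"))         # True -- change it!
print(uses_default_credentials("admin", "k3ttle-Fj0rd"))  # False
```

Botnet malware tries exactly these pairs first, so a device that fails this check is low-hanging fruit.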
As far as preventing an attack on your site, the answer is much more difficult.
Some websites defend against this by checking each request before actually letting it in. As mentioned above, analytics will pick up on unnatural waves, and having a program in place to shut out the peaks of those waves can help. Real users will refresh the page and wait to be allowed in – bots may not. Some older websites use a form of this by routing new visitors to a ‘check’ page before allowing them onto the site; this confuses botnets, which may be expecting instant entry.
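One common shape for that “check before letting the request in” idea is a token bucket: each client gets a refillable allowance of requests, and a flood drains it while a human refreshing the page barely notices. A minimal sketch, with rates and capacities invented for the demo:

```python
import time

class TokenBucket:
    """Per-client token bucket: each request costs one token; tokens
    refill at `rate` per second up to `capacity`. A burst drains the
    bucket and gets refused while normal visitors are barely affected."""
    def __init__(self, rate=5.0, capacity=10):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # refill based on elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=10)
results = [bucket.allow() for _ in range(25)]  # 25 back-to-back requests
print(results.count(True))  # roughly the 10-token burst allowance
```

A real deployment keeps one bucket per client IP (or per session), which is what lets it punish the flooder without slowing everyone else.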
Also, be sure that firewalls and other web protections are up to date and running as they should be. This will keep DDoS attacks that rely on bugs and bad-faith data requests from successfully denying service.
What does the future of work really look like? Unlock the full potential of cloud-based solutions.
The first lockdown, back in March last year, forced most companies into an abrupt digital setting, and the two that have since followed only solidified it. Companies have now become significantly, if not completely, reliant on remote, digital solutions to remain functional. As a result, the need for robust remote-working solutions has surged, causing traditional and antiquated workplace solutions to fall by the wayside.
Embracing digital transformation and unified workplaces
Digital transformation has fast-tracked the online revolution, meaning elaborate predictions about the future of work are no longer so distant. The boundaries between working from home and in the office have blurred completely as we march through 2021. Physical office space now seems almost redundant: we can work just as well from home, if not better, and still exceed productivity and collaboration standards.
A far stronger focus is now on the availability of IT tools, as workforces rely on remote solutions to stay as collaborative with their colleagues as possible. Proprietary business communication platforms, for example, have revolutionised the way we communicate, collaborate and work as the majority of the population pivoted to remote working. State-of-the-art interactive virtual meetings via a browser promote efficient collaboration and strengthen the performance of organizations, while commutes can be reduced or in some cases avoided altogether.
What’s more, the capability to provide quality engagement between employer and employees is now a top priority. As we navigate a more digitized year than ever before, employees should be equipped with the most efficient solutions – ones IT managers can source within minutes instead of days or even weeks – so that internal communication benefits too.
This ‘future of working’ model can be achieved by introducing personalized digital workspaces accessed through a browser on any device, anywhere in the world. Perfectly suited to the new home and office split, innovative cloud technology enables organizations and their staff to instantly access any of their applications, whether hosted on-premise or in the cloud, as well as internal and external web applications.
Understanding the challenges
The sudden pivot to mass remote working, however, has not been all smooth sailing. For companies still operating traditional virtual environments, remote-working solutions often lack the flexibility to include legacy or GPU-intensive applications that traditionally run on a desktop or on-premise. It is not too late, though, to innovate and take the first step towards cloud-based technologies. It cannot be stressed enough that cloud computing is here to stay, and it can offer these businesses a lifeline before they fall completely behind the current wave of digital transformation.
Additionally, let us not forget that the internet is a dangerous place. A world now operating mostly online puts companies built on traditional IT infrastructure even more at risk. In fact, several emerging cyber threats carry impacts that have never been seen before. Because much enterprise software protection is decades old and vulnerable, many businesses are left exposed and ‘easy’ to attack. And now, with the entire UK workforce being told to work from home where possible, investing in secure and reliable solutions has never been so crucial for the online safety of not only a business, but its workforce.
Companies can look for intelligent cloud-based solutions that combine the convenience of streaming an online workplace with confidence that exposure to hackers is reduced. For example, once systems are migrated to the cloud, client-to-site VPN connections are no longer required, removing a point of attack for trojans. Furthermore, no end device within the organization can reach an application server directly, as communication between the user and the target system is fully mediated by the cloud software.
Yet businesses of all types and sizes can be affected. Even multinational companies with over 1,000 employees fall victim to cyberattacks, often due to vulnerabilities in outdated architecture. Investing in state-of-the-art cloud solutions that include cyber insurance will become a new box on the IT checklist in 2021 and beyond.
What’s more, thanks to the influx of home working, new cloud technologies such as Everything-as-a-Service (XaaS) have emerged and seen accelerated adoption. XaaS enables all IT services to be offered in the cloud for remote workforces. It not only provides remote workers with advanced flexibility but also enhances security, since it encompasses solutions such as IaaS, PaaS and SaaS.
Implementing an efficient cloud adoption strategy
If the multiple lockdowns have taught us one thing, it is that cloud adoption is proving to be one of the most efficient ways to secure and sustain the demands of a digital workforce. Now in 2021, as the dust hopefully settles on the Covid-19 pandemic, we can aim for some kind of normality. Remote working is here to stay, and it will be up to business leaders to make sure they have the right, most efficient cloud adoption strategy in place for their employees. Armed with the right cloud solutions, businesses can simplify their IT ecosystems and procure solutions without committing to large upfront investments.
We’re Elixis Technology
In the ever-changing, technology-centric world we live in, it’s vital to have an I.T. solution source you can count on. At Elixis Technology, it is our mission to help businesses — big and small — produce the results their customers demand, with technology that actually works.
If humanity ever needed reminding of our interdependence, the pandemic has brought that home. As we scale up our response to the crisis, through largely digital means, our interconnectedness grows exponentially. And with it our vulnerability to the risk exposures of the virtual world. In fact, businesses of the future are evolving to be more digital and more shared. The need to prepare to avert a cyber pandemic – with potential even more than the coronavirus to upend our lives – has never been more urgent.
For a moment, let’s think the unthinkable: a world without phones and internet; trucks, trains and planes idling because fuel pumps and charging stations are incapacitated; banks shuttered; food supply chains broken; emergency services all but unavailable. This bleak vision would be inevitable if electricity supplies were cut off by a cyberattack.
In a scenario such as this, we know the ensuing swift blackout would be crippling. Unfortunately, we also know that a crisis of this scope, sophistication and impact is not just possible but something we are currently dealing with – albeit in a different context.
Last month, a group, believed to be Russian, gained access to over 18,000 systems – belonging to government and corporations – through a compromised update to SolarWinds’ Orion software. We were unprepared to prevent the attack because the bad actors slipped through the exact whitelisted software supply chain we trust. Even more regrettably, the software supply chain allowed them to access the network of FireEye – the US-based cybersecurity giant known for investigating and remedying some of the world’s most high-profile breaches.
While FireEye’s customers remained largely unimpacted this time, the moral of the story is that no one and nothing is immune. Our very sources of cyber-protection – software updates or defending partners – can be the Trojan horse through which everything around us devolves into chaos.
Well before we learnt these tough lessons in the final weeks of a rather challenging 2020, the World Economic Forum questioned whether our individual and collective approach to managing cyber risks is sustainable in the face of the major technology trends taking place.
Although there’s an array of resources to manage cyberattacks, we still have a long way to go before we can, as a whole, effectively counter these threats. We need to strengthen our strategic response to the risks before we invest in tactics. Our plans must work harder and smarter to address capability gaps in three areas:
More coordination
Consider the SolarWinds attack. It did not directly hit its intended targets. Instead, the attackers surreptitiously built a chain of offence that included non-government agencies, security and technology firms, and educational institutions, inching unnoticed towards their real espionage targets.
They knew they’d find their mark through our digital interconnectedness. We can turn that same intertwining of infrastructure to our advantage. Research tells us that hackers attack Internet-connected computers every 39 seconds on average. If we all shared threat intelligence – across borders, across the private and public sectors, across industries and competitors – the collective intelligence could only move us forward faster.
An invaluable first step would be to develop more open systems while adopting common standards and taxonomy in cybersecurity. This will better position us to integrate and train our teams to drive holistic security. Global spending on cybersecurity solutions is projected to exceed $1 trillion cumulatively over the five-year period from 2017 to 2021. We must reprioritize these budgets to align with shared goals, including collaborating to overpower organized cybercrime and the nexus between nation-state attackers and private-sector technology.
More sophistication
The Global Risks Report 2020 articulated how the digital nature of Fourth Industrial Revolution technologies is making our landscapes vulnerable to cyberattacks. For example, it is estimated that there are already over 21 billion IoT devices worldwide, a number slated to double by 2025. Attacks on IoT devices increased by more than 300% in the first half of 2019 alone.
The report observes how “using ‘security-by-design’ principles to integrate cybersecurity features into new products continues to be secondary to getting products quickly out into the market.” Our current approach of bolt-on security needs to be reimagined in favour of stronger built-in standards, including SDLC security-quality certification, which makes software partners more accountable for security assurance. Along with securing the supply chain as meticulously as we secure our products, we need better design architecture to tackle the challenges at hand.
More human capital
AI is growing ever more useful in cyber defence, but at the same pace it is enabling cybercriminals to use deep learning to breach security systems and to harness data sets to improve their response to our defences.
While we can battle machine with machine, nurturing a strong pipeline of cybersecurity talent will give our defence an edge. We need better problem-finders in greater numbers to work with our problem-solving machines, and this time they need to be embedded in the complete lifecycle of our processes. Every person in the ecosystem must understand his or her role in cybersecurity and be accountable for delivering to cybersecurity-quality metrics and standards. As of 2019, there were an estimated 2.8 million cybersecurity professionals worldwide, against a need for over 4 million.
If there is one lesson from dealing with the pandemic, it is the need to take each other along as we move forward into a more secure future. The very nature of a pandemic is such that no one is really safe unless everyone is safe. A cyber pandemic is no different. It is in shared trust and a common agenda that we can build the confidence and competence to achieve the resilience we need.