DoS stands for ‘Denial of Service’. What this means is that someone plans to deny service to and from a website by crashing it, or making it run so poorly that it may as well be offline. As for ‘why’, there are many reasons – someone could be ‘disagreeing’ with the content of the website or its discussions, they may be attempting to drive viewers elsewhere, it may be political, it may be simple trolling – the list goes on.
So, how is it done?
Denial of Service is just that: a denial of service. Any means may be used to get to that point. If it’s a poorly secured website, getting in via hacking or credential stuffing and changing the contents on-site could be a DoS. If it’s a poorly balanced website, and one that allows for posting of pictures and memes, sending an image that’s too large for the website to handle could do it. Similarly, sending too much text, animated GIFs, or other content that the website wasn’t prepared for could shut it down. Even requesting too much data at once – say, opening several tabs of a big image that did load – can approximate an HTTP flood, although that may be equally hard on the computer doing the requesting. It’s possible to DoS a site accidentally!
Inputting code into poorly sanitized text entry fields can also crash the website, if the owner didn’t know how to prevent SQL injections. Dinging the website too many times in one go can crash some websites too, although that usually requires things like botnets – which turns it from a DoS into a DDoS.
In that same family, SYN flood attacks can also deny service by requesting connections over and over until the website is so overloaded that it can’t respond. In a SYN flood, the attacking computer repeatedly asks to connect to the server, but never actually completes the handshake. If it’s done right, the server runs out of room in its backlog of half-open connections, and legitimate requests mixed in with the faulty ones have to wait much longer – or get dropped entirely.
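The mechanics can be sketched with a toy simulation (purely illustrative – real servers use a kernel-level backlog, timeouts, and defenses like SYN cookies): a server has a fixed number of slots for half-open connections, and once fake handshakes fill them all, real visitors get turned away.

```python
# Toy simulation of SYN-flood backlog exhaustion (illustrative only).
class ToyServer:
    def __init__(self, backlog=5):
        self.backlog = backlog   # slots available for half-open connections
        self.half_open = []      # SYNs still waiting on the final ACK

    def syn(self, client):
        """Client sends SYN; server reserves a slot and replies SYN-ACK."""
        if len(self.half_open) >= self.backlog:
            return "dropped"     # backlog full: the request is refused
        self.half_open.append(client)
        return "syn-ack"

    def ack(self, client):
        """Client completes the handshake, freeing its slot."""
        self.half_open.remove(client)
        return "connected"

server = ToyServer(backlog=5)

# The attacker sends SYNs but never ACKs, filling every slot...
for i in range(5):
    server.syn(f"attacker-{i}")

# ...so a legitimate visitor is turned away.
print(server.syn("legit-user"))  # dropped
```

Real SYN floods are messier (spoofed source addresses, retransmission timers), but the core failure mode is exactly this: a finite table of half-finished handshakes, filled by someone who never intends to finish.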
Many of these are simple issues of rejecting out-of-format content and slowing down incoming visitors. If a posting box has a hard limit of 10,000 characters, a DoSer could whip up a bot to post over and over, but the website owner would be able to tell something was going on before it crashed the website. Many picture-printing places won’t allow photos over a certain size or resolution to be sent over the web, because it can clog the intake – especially places like drugstores that aren’t set up for large high-quality images. If the network isn’t prepared, it’s entirely possible for photographers to DoS them (at least in the photo station) by accident! It’s much easier to keep these incidents out at the gate: configuring comment sections and image size requirements is a bare minimum.
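Gatekeeping of this kind is cheap to implement. A minimal sketch (the limits here are made-up examples, not recommendations):

```python
MAX_POST_CHARS = 10_000          # hard cap on comment length
MAX_UPLOAD_BYTES = 8 * 1024**2   # e.g. reject files over 8 MB

def validate_post(text: str) -> bool:
    """Reject comments longer than the site was built to handle."""
    return len(text) <= MAX_POST_CHARS

def validate_upload(data: bytes) -> bool:
    """Reject oversized files before they ever hit the processing pipeline."""
    return len(data) <= MAX_UPLOAD_BYTES

print(validate_post("hello"))                     # True
print(validate_upload(b"\x00" * (9 * 1024**2)))   # False: too large
```

The checks run before any expensive work happens, which is the whole point: a rejected request costs the server almost nothing.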
As far as SQL injections go, we have a whole article on sanitizing inputs (here) – the essence of prevention is keeping data inputs and the command to get it to the database separate from each other. This prevents a number of issues by itself, but is good advice to avoid DoSing via SQL as well.
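That separation is exactly what parameterized queries provide. A minimal sketch using Python’s built-in sqlite3 module (table and values are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# UNSAFE: user input is pasted into the command itself, so input like
# "x' OR '1'='1" rewrites the query's meaning.
user_input = "x' OR '1'='1"
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(len(conn.execute(unsafe).fetchall()))  # 1 - the injection matched every row

# SAFE: the ? placeholder sends the input as data, never as SQL.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
print(len(safe.fetchall()))  # 0 - no user is literally named "x' OR '1'='1"
```

The safe version never concatenates user text into the command string, so there is nothing for an attacker’s quotes to break out of.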
For SYN floods and other brute-force attacks, configuring the firewall and installing an IPS (Intrusion Prevention System) are what security vendor PurpleSec recommends. In the olden days, attacks like these may not have crashed the site, but they could still drive the hosting costs through the roof – the owner is then incentivized to pull the plug themselves so they don’t drown in fees from their server company.
To prevent breaches, use two-factor authentication when building your site. Please. Microsoft reports that it stops 99.9% of fraudulent login attempts. It is one of the easiest ways to improve your security.
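For the curious, the time-based one-time codes most 2FA apps generate follow RFC 6238, and can be reproduced with nothing but the standard library. This is a sketch for understanding, not a production implementation (the secret below is the RFC’s published test key, not a real one):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, at=None, digits=6, step=30) -> str:
    """RFC 6238 time-based one-time password (educational sketch)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is just "number of 30-second steps since the epoch".
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# The RFC's SHA-1 test secret is the ASCII bytes "12345678901234567890".
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59))  # "287082" per the RFC 6238 test vectors
```

Because both your phone and the server derive the code from a shared secret plus the current time, a stolen password alone isn’t enough to log in.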
How is it different from DDoSing?
DDoSing relies on multiple computers to get the desired effect; DoSing takes far fewer. This has many benefits for the person trying to wreck a website. Firstly, DoSing doesn’t involve gathering other computers to attack with – you already have all your resources at your fingertips! However, that’s a double-edged sword, as you can’t attack with more than you have.
DoSing is also easier to coordinate as other people are (usually) only minimally involved. Getting other people to DDoS a site organically is difficult because it requires organizing strangers, and doing it with a botnet requires buying a virus or making one yourself and then distributing it. DoSing with a SYN flood or with SQL injections is hard – but it might be easier than trying to get ever-more-wary strangers to click a suspicious link. Outsourcing to a hacker group, of course, is easier than both unless the malicious party lacks the funds to do so.
On the other hand, hacking into a website that’s only password-protected with a password stuffer (or doing it semi-manually by guessing passwords yourself) is probably easier than any other method. While this carries some risk (if they can tell where the login came from, they may be able to find the attacker), it also has a lot of potential for damage if the website owner hasn’t backed up the website. The problem with this method is that the website has to be poorly secured for it to work – 2FA stops the vast majority of these attacks, and being smart with who gets admin permissions can limit the effectiveness of the attack.
You may have heard by now that LastPass suffered a pretty big breach, although thankfully no user passwords were outright compromised (you should still enable 2FA on anything LastPass touched though – the URLs of the websites were lost in the breach so hackers could attempt to brute-force into an account they know you’ve visited).
The attack in question wasn’t a hack in the traditional sense, but a social engineering attack. The hacker(s) created a fake account in order to fool a developer into sharing credentials, at which point they stole data from the development environment to use later in a large-scale attack.
Change your passwords and enable 2FA wherever you can – if you’ve been using 8-character passwords, or anything that’s not truly random, you should upgrade to something longer. 8-character passwords are no longer the impenetrable wall they used to be, so longer, more random passwords – or even passphrases several words long – are better!
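If you want a quick way to generate something stronger, Python’s secrets module (designed for security-sensitive randomness, unlike the random module) can do it. The word list here is a tiny stand-in for illustration; a real passphrase generator would use a large list like EFF’s diceware list:

```python
import secrets

# Tiny stand-in word list; use a list of thousands of words in practice.
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "velvet", "cactus", "lantern"]

def passphrase(n_words: int = 4) -> str:
    """Pick words with a cryptographically secure RNG."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

def random_password(length: int = 16) -> str:
    """A fully random password, for when a manager is storing it anyway."""
    alphabet = ("abcdefghijklmnopqrstuvwxyz"
                "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*")
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(passphrase())       # e.g. "velvet-staple-orbit-cactus"
print(random_password())  # 16 random characters
```

Each extra word or character multiplies the number of guesses an attacker needs, which is why length beats cleverness.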
Remember: don’t listen to accounts claiming to be people you know if you don’t recognize the email address, and always check the address carefully for spelling mistakes. It does not matter if they know your name or if their email tag says ‘mom’ – both of these things are easy to make happen. Social engineering attacks include phishing attacks, so following the protocols for phishing can help avoid social engineering breaches as well. If someone calls you and demands you take immediate action, either by threatening or promising a gift that will disappear if you don’t respond ASAP, remember to pause a minute to really think about what they’re asking for, and verify thoroughly.
Similarly, if your work uses badges to control who enters the building, you shouldn’t hold the door open for people you don’t recognize as coworkers – this is known as ‘piggybacking’ and it takes advantage of the fact that most people want to be polite and will hold the door.
This is far from a comprehensive list, so always keep your eyes peeled – identity theft and social engineering can come from any direction!
In 2005, it came to light that an organization had been covertly installing a program similar to a rootkit onto consumer devices without warning. For those who haven’t heard the term before, a rootkit is simply a program designed to remain undetectable on a device. They aren’t all bad, but their difficult-to-detect nature and ability to evade even aggressive antivirus software makes them a top-of-the-line tool for hackers.
The rootkit was on the lookout for ‘suspicious activity’, and if it detected any, it would quietly alert the parent company. However, even if you had nothing to hide, you still had something to fear: the rootkit left a gaping security hole, and a smart enough hacker could piggyback off of it to get Trojan Horses, Worms, and other nasty bugs in without alerting the computer that “hey, there’s an .exe file doing weird stuff!”
The rootkit was designed to hide itself, and it would hide the bugs behind it. There was no mention of this anywhere in the EULA for the program that carried the rootkit. The parent company hadn’t meant to leave a backdoor, but they did, and attempts to fix it without removing their own program just made the problem worse. Attempting to fake a fix with an uninstaller only hid the program deeper in the system, and trying to actually uninstall it could brick the computer, depending on which version you got. They’d really screwed themselves, and they hadn’t expected to get caught.
This wasn’t some Russian hacking scheme, or some government overreach – it was Sony, attempting to keep copyrighted material off of pirating websites. Talk about an overreaction.
At some point, a company has to admit it would rather ruin the legitimate user’s experience than let a pirate go unpunished. That’s very understandable: stealing is wrong, and smug pirates behaving like they’ve gotten one over on ‘the system’ are frustrating. Ordinary responses to this can be anything from asking for the license # on the inside of the clear case to more subtly ruining the audio quality of pirated copies. This is a normal level of copyright protection. Very determined pirates could still get around these measures, but hey, you can’t spend all your resources on the fringe cases.
Companies are aware of this, and some begin to factor ‘unstoppable piracy’ into their calculations – you know, like grocery stores will factor in ‘lifting loss’. Companies usually determine they’d be spending more on preventative measures than they’d be keeping on the shelves. Theft is wrong, but so is littering and driving without a license. Somehow, all three still happen anyway no matter how huge the fine gets. Sony is very mad that pirates are getting away with fresh content, and they want to do the equivalent of TSA pat-downs on everybody at the exit of the grocery store to stop a small percentage of thieves. They don’t care anymore; nobody is going to get away with it.
Was it Reasonable?
Napster and LimeWire were making inroads into the music industry’s profits, and the mid-2000s were the peak. The pirating of copyrighted content was only made easier by the rise of the internet, and Sony realized it was nigh impossible to find the illegitimate downloaders; uploaders were only marginally easier. They decided to go for the source – and they decided to hit hard.
“The industry will take whatever steps it needs to protect itself and protect its revenue streams… It will not lose that revenue stream, no matter what… Sony is going to take aggressive steps to stop this. We will develop technology that transcends the individual user. We will firewall Napster at source – we will block it at your cable company. We will block it at your phone company. We will block it at your ISP. We will firewall it at your PC… These strategies are being aggressively pursued because there is simply too much at stake.” – Sony Senior VP Steve Heckler
Sony Senior VP Steve Heckler actually said this back in 2000, years before Sony’s 2004 merger with BMG. BMG, for its part, had an incident in Europe in the early 2000s, when it released a CD without warning users of the copyright protection inside. Apparently, burning money to replace those CDs (and burning goodwill) was not enough of a lesson, and Sony and BMG together prepared to take a stand against pirates.
They’re going after the big boys, the folks downloading music to upload everywhere else…for free.
These are the people depressing profits, in theory. Some companies theorize that once these people are gone, the people passively pirating by downloading stuff from them will also disappear and go back to buying the content. They’re somewhat right, and this audience shrinks over time. More on that later.
This is illegal and very annoying! The estimated lost sales from piracy were in the billions, and many companies were beginning to look at more intense DRM: Digital Rights Management.
To some people, DRM is the root of all evil, the seed of the eventual downfall of consumer rights. After Sony’s screw-up, they were right to call it as such. John Deere, Apple, Sony, Adobe, etc. are all slowly eating away at their own best features for the sake of pushing users into proprietary software – software users aren’t allowed to repair because of DRM. Take Deere: if a new Deere tractor detects a common third-party diagnostic tool, it will stop working until you call out a Deere technician. This obviously drives up demand for Deere technicians, and it’s horribly restrictive to the user. Adobe recently announced it was going to make using Pantone’s color books a subscription after offering them for free initially, and to really hammer home how annoying they were going to be about it, they’d black out any design element using those Pantone colors, ruining it. Consumers who want to keep their colors in Pantone’s library now have to pay twice for the same service.
To others, DRM is an essential part of the free market. Companies should be allowed to protect what they made, and if users find their methods extreme, they shouldn’t have bought it. And in less extreme circumstances, they’re right! That’s what the EULA, the End User License Agreement, is for. The user can decide if they’re willing to put up with the DRM specified in the Agreement, and if they’re not, they don’t have to buy it. ‘If you pirate this, it will only play static’ is reasonable.
Sure, some super-cheapskate who found a sketchy download off some sketchy site is going to listen to static with Hint of Music, but the average user would rather buy the disc and be done with it. If the company can make the ripped upload sound like garbage when it’s off its home CD, they won. The company has successfully used DRM here to keep their honest customer honest. And they did it without destroying either computer!
Doing it this way means normal consumers still get a high-quality product, and if the DRM is limited entirely to the content itself, there’s no risk of it coming back to bite the company in the butt.
Still, if you really disagree with DRM, there were companies that successfully reduced their piracy problems in other ways. Some found that guilt was enough, others found that once certain websites were gone, their piracy problems disappeared too. Warning folks that piracy was still a crime got the people who didn’t know any better to stop. Fines did a number on the folks who were too bold or too dumb to not get tracked with non-DRM means, and for the people who were doing it because it was more convenient? They reduced their pirating when better paid methods became available. Sony’s problem could have been solved in a lot of ways!
Besides, Sony wasn’t struggling. Lost sales are not the same as losses! Companies are still making profit, just not as much as they’d like. Property is not being damaged, and nobody is experiencing physical harm as a result of pirating.
Sony’s DRM was a severe overreaction to the problem at hand, and it led to several lawsuits. As said at the beginning, Sony had not only installed software without the user’s knowledge, but had also left a big entry point for security threats to get in undetected. Hundreds of thousands of networks were affected, some of them government. Once someone blew the lid off the DRM, Sony released a cover-up “uninstaller” that just hid the rootkit better and installed more DRM content on the user’s device.
This does not help!
The blown cover for the rootkit meant that black-hat hacking organizations could tool around and create something that could get into anything with that rootkit on it, undetected. Eventually Sony was forced to admit this was wrong, but not before screwing over a couple million people who just wanted to listen to Santana or Celine Dion. Over pirates.
Yeah, there’s some lost profit – but it doesn’t outweigh the regular customers.
As Stewart Baker of the Department of Homeland Security said, “it’s your intellectual property – it’s not your computer”.
Sony’s first instinct is to hide it. As mentioned above, the available uninstaller didn’t actually uninstall the rootkit, and some users reported system crashes and outright bricking when the uninstaller’s poor programming tried to interact with the rest of the device’s software.
Their second decision is to lie – ‘the DRM has no backdoors and doesn’t pose a risk to your computer’s security’. This is demonstrably untrue, and given that they were already in the early stages of a recall, could be considered a deliberate lie.
Sony’s third action is to recall the discs with the DRM on it, but they don’t get all of the discs. Some users aren’t sure if their disc is affected or not, and even non-profit organizations dedicated to maintaining free internet can’t figure out what discs have it and what discs don’t. The best they can do is a partial list. Stores in New York and Boston are still selling the discs three weeks after the recall. However, users do get to swap their disc with an unprotected one through the mail. Sony seems to have acknowledged their screw-up at this point.
Sony’s fourth action is more a consequence – they stick a class-action lawsuit sign-up notice on their homepage, and affected users can claim damages up until 2006. Class-action lawsuits filed by individual states start to drag down Sony’s profits more than the piracy ever did, and the end result is a mandate to put warnings on the covers of discs and to stop using DRM that could damage a user’s computer. DRM is still allowed – it just can’t be capable of destroying a computer to protect a song license. The feds considered this a breach of federal law and stated that Sony was engaging in deceptive and unfair business practices. Sounds about right – consumers wouldn’t have bought a disc that installed DRM without their knowledge. From conception to execution, this was a moral, ethical, and legal mistake. Way to go.
1) Public WiFi

If you’ve been online in the past few years, you’ve likely seen this warning already from VPN ads and security experts: don’t connect straight to public WiFi if you can help it, and if you do, don’t do your online banking on it. If a hacker gains special access to the WiFi network without the actual owner knowing, they can see the data that travels to and from the systems attached to it.
2) Juice Jacking
There was a period of time, between the phone security we see today and the teeny-tiny tech found in things like micro-cameras, where hackers could connect chips to public USB charging ports and steal data. This happened either directly through the port or via malware designed to send that info out after a certain amount of time. Pictures, app passwords, saved files, audio recordings – anything you wouldn’t want to share over USB. Luckily, a security conference revealed a lot of these issues before they became an epidemic, and between Android adding a whitelist system and Apple shipping security patches, juice jacking is less and less common. If you’re still worried, there are a number of defenses that don’t rely on programming, like using the cable/adapter that came with your device or using a charge-only cable with no data wires.
3) Illegitimately Named HotSpots
In this case, the hacker renames a WiFi source (which could be a phone hotspot or something similar) to something that you’re looking for. Maybe it’s the free WiFi for the hotel, and you don’t notice that there’s two of them before you go through the effort of logging in with your room’s key and the password they gave you – which the hacker doesn’t need, but it gives an air of legitimacy to the fake network. Now the hacker can see your online traffic, whether it be to apps on your phone or to websites on your laptop. Private information is no longer private.
This is different from the previously mentioned public WiFi attack: in this method, the hacker owns the fake network, whereas on public WiFi, they don’t. The legitimate admin of a WiFi network the hacker doesn’t own might eventually notice and kick them off it, but a WiFi source the hacker owns would need to be shut down entirely to keep people off of it, since the hacker is the source.
Renaming networks to get phones to auto-connect can also be a problem, but if it’s not done right, data the user never sees alerts the phone that HomeNetwork1 isn’t really the network it’s supposed to auto-connect to. That makes this hack more complicated than the method listed above; most people would probably pause for a second if their phone asked for permission to connect to their home network from miles away, without a password. Social-engineering a connection to a network the device is unfamiliar with anyway is an easier, more efficient way to steal data.
Be sure to turn off WiFi seeking features until you’re ready to connect to a specific network of your choosing, which removes this possibility altogether.
4) Over the Shoulder
The simplest method of gaining illegitimate access to your accounts is simply watching you type your password – sometimes called ‘shoulder surfing’. Now, it’s not easy – if you’ve ever tried it out of curiosity, you’ll know that most people type too fast for your eyes to actually follow, and that’s before hitting the shift key and adding in numbers or punctuation. So it’s simple – not easy. But difficult is not impossible, and if your password is especially simple, or they watch you glance at a sticky note you’ve stuck somewhere to remember the password, the chance that they’ll successfully remember or find your password goes up. Remember, the best passwords are long and decently complicated!
DDoS stands for Distributed Denial of Service, and it’s usually used to mean that a website is pushed to its capacity limit and forcibly closed. It’s entirely possible to do this accidentally, and spikes in a website’s popularity can actually be more of a problem than a success!
What causes it?
The internet has been described as a series of tubes, a network of roads, and a whole host of other metaphors. At its most basic interpretation, a DDoS attack is like causing a traffic jam, or a clogged pipe, by sending or asking for so much data that the road lights stop working, or the valves have to shut. The server holding the website gets so many false requests for data that it can’t sort the legitimate ones from the weaponized ones, and everything slows to a crawl while it tries to catch up.
Different websites have different thresholds to reach before this becomes a problem, however, and this changes the way the attack is carried out. The resources needed to DDoS a site change depending on where the website’s infrastructure is weakest.
Websites allocate time, money, and resources differently depending on what they need. A website with a lot of far-flung servers may invest heavily in load balancing and firewalls, so someone attempting to DDoS them is going to have a heck of a time actually getting through that way. A video hosting website that’s recently switched to 4K is going to invest in more server space, so a SYN flood may be unsuccessful.
And then there’s small websites who sit somewhere in the middle – they don’t host a lot of videos, and nothing’s really demanding of bandwidth except their content library. These are the most vulnerable to DDoSing.
DDoS-ability is entirely based on the website’s resources. It would be nearly impossible to successfully DDoS Google, for example. They have the capacity to withstand a sudden influx of several million computers, all trying to access their services. That’s just a Friday night for them! However, if a celebrity posts a link to a home-run recipe blog, that blog’s about to come under heavy strain they might not have expected.
Forum websites like Digg and Reddit have a term for accidentally DDoSing a small website: the “Hug of Death”. So many users are directed from a cool post on the front page to the website that it crashes and loses service. This is DDoSing, even if it’s completely accidental. Sometimes popularity is the worst thing that can happen to a website! Repeat visits to a website tend to grow very slowly and are the result of a lot of hard work and careful ad placement. Insane success doesn’t happen overnight… until it does, and a DDoS event happens.
DDoSing a site used to be a pretty popular way to harass a website creator or organization. It’s simple, it’s cheap, it’s effective, and it doesn’t take much to successfully DDoS the tiny sites that content creators make to separate themselves from things like BlogSpot or Wix.
There are multiple roads to the desired goal of a crash! One method is simply coordinating other users via social media to repeatedly ding a website until it starts slowing, and then crashing. This is the easiest, fastest method, but it requires a pre-existing platform to rally bad actors. This also happens accidentally all the time! Someone will point to a cool website and then crash it when their followers hit it too hard all at once.
The second option is to create a botnet, a network of internet-accessible devices that can request access to a website. This sounds expensive, but the real secret is that the hacker’s using other people’s computers to carry out this kind of attack. They get their malicious software onto the machine by exploiting social engineering or poor network security, and then they send a command to the device to attempt to access the website they’re DDoSing.
People affected may notice their own computers slowing down because the command is taking up computing power! This method requires more programming knowledge than the other method, but it delivers a lot of power anonymously. However, identifying it as an actual DDoS attack and not a spike in popularity is easier. The visits come in unnatural waves that the website host will pick up on! Many hosting services offer analytics as an option to help website builders sell ads. Using it for DDoS data gathering is a natural extension!
However, assuming it’s done right, this kind of attack is the most difficult to ward off. This method includes things like ‘HTTP floods’, which are what they sound like – the hypertext transfer protocol is flooded with requests to connect. SYN floods also fall into this category, but instead of HTTP requests, they target the TCP handshake that comes before them. Either way, the website can’t tell who’s legit. The website can reroute traffic to a stopgap page or a black-hole page (where the traffic is just told ‘there’s nothing here!’) to stop it, but that still gives the DDoSer the desired result – service is denied.
DDoSing can attack the upstream and downstream of the site, too. One example is a DNS amplification attack, where the malicious user makes simple requests that take a lot of data to complete. The website can handle it, but the upstreams supplying the info requested might be forced to cut service to protect themselves. Protocol attacks aim to over-burden the firewall and load balancers of the site by repeatedly dinging them until they’re too busy and shut down. Both of these are easier to handle than http floods, but they’re still used today against unprepared and poorly written websites.
Botnets don’t have to be made entirely of high-powered user devices like laptops or desktops. Internet of Things devices can be used in a DDoS attack too! IoT devices are usually poorly protected and have juuust enough power to request data from a website. They make perfect botnet fodder. Plus, it’s much harder to tell that a fridge has been hacked, so it tends to fly under the radar.
How to Stop it?
The best way to avoid being sucked into a botnet is the same security advice used against viruses. IoT items are computers too, and they should be treated with the same suspicion as PCs! Don’t download sketchy things off sketchy sites, don’t click malicious attachments, etc. And for those IoT devices, change the default password! Use a password on your router that isn’t the factory default! This should keep your devices from being used in botnets without your knowledge.
As far as preventing an attack on your site, the answer is much more difficult.
Some websites defend against this by using something to check each request before actually allowing it in. As mentioned above, analytics will pick up on unnatural waves of traffic, and having a program in place to shut out the peaks of those waves can help. Real users will refresh the page and wait to be allowed in – bots may not. Some older websites use a form of this by routing new visitors to a ‘check’ page before allowing them access to the site; this confuses botnets, which may be expecting instant entry.
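One common shape for that kind of gatekeeping is a token-bucket rate limiter: each client gets a budget of requests that refills slowly, so a real user refreshing a page sails through while a bot hammering the site runs dry. A minimal per-client sketch (capacity and refill rate are made-up numbers):

```python
import time

class TokenBucket:
    """Allow short bursts but cap the sustained request rate for one client."""
    def __init__(self, capacity: float = 10, refill_per_sec: float = 1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the bucket's capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True      # request admitted
        return False         # over budget: serve an error page instead

bucket = TokenBucket(capacity=5, refill_per_sec=1)
results = [bucket.allow() for _ in range(6)]
print(results)  # [True, True, True, True, True, False]
```

A real deployment would keep one bucket per client IP (or per session) and tune the numbers to the site’s normal traffic, but the burst-then-throttle behavior is the same.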
Also, be sure that firewalls and other web protections are up to date and running as they should be. This will keep DDoS attacks that rely on bugs and bad-faith data requests from successfully denying service.
With the recent announcement that Chrome is gutting ad blockers, it’s never been a better time to switch to Edge (which we recommend because it is especially easy to use) or Firefox. Edge is better for business – but if you want a smoother, less ad-riddled home browsing experience, why not check out Firefox?
Because Google sells quite a few of the ad slots you see online, it’s become disincentivized to let you avoid them on its browser – so Chrome will no longer support full-strength ad blocking, because that would block Google from making that sweet, sweet ad money off of your views. And ads are everywhere. You scroll past them in between posts on TikTok and Tumblr. They appear on the sidebars and banners of news websites. They autoplay when you open YouTube, and speckle the progress bar with yellow. They’re obnoxious – and simultaneously insidious. You may watch a clip of a seemingly normal Instagram video only to realize, after they begin pitching the product hard, that it’s not a recommendation, it’s an ad, and you simply missed the little sponsor logo in the corner. Ads track you. Ad companies watch you view their ads and then determine from your behavior whether or not you’re interested. They watch the content you watch, and infer your age, gender, nationality, political affiliation, hobbies, and more from your online behavior. Even if you don’t mind ads, this tracking is often enough to justify an ad blocker in and of itself.
That said, ads can be pretty annoying. Especially if it’s disguising itself as regular content. Edge, a popular alternative to Chrome, still has an ad blocker, but does it have a sponsored post blocker? Because Firefox has both! Firefox can filter out sponsored posts from your websites alongside the normal ads you see everywhere. If you’re sick of sponsored content making up an unfair percentage of your feeds, Firefox has you covered.
Edge, Chrome, and Firefox all have versions of their own ad blockers as well as third party versions that can be downloaded to the browser – Firefox, however, will allow you to synchronize this across devices without a fee. While we like and recommend LastPass, it’s only free if you’re using it on one device, and you have to pay to sync it on multiple devices, which can be a bummer.
Built-in password storage is a mixed bag of a tool. On one hand, having all this stuff stored safely inside your Google account sounds great and convenient, and usually it is – except in the case of hacking. If someone socially engineers their way into your Google account, suddenly all of your other passwords are stolen too. Nightmare! A Firefox account, which does not have its own email service, is less likely to get hacked if only because it’s less immediately valuable. By dividing your email service from your browser’s password service, you’re not putting all of your eggs in one basket.
As far as security, a really good fake webpage that trips your browser or password manager to auto-fill the password would get almost any password service, built in or not! Turn off auto-fill if you can.
Firefox has tons of other useful add-ons as well! Tired of getting distracted on Reddit, but can't seem to stop typing in the URL almost unconsciously? Download Impulse Control and wrest your attention back on task. Trying to keep cookies under control? Download the extension that shortens the path to deleting your browser history right from your window. Ads still squeezing in, or threatening to break your page if you don't turn off your ad blocker? A browser extension called DeCentralEyes promises to serve more local content that won't slow down your page or give a ton of info to bigger third-party ad sites. You can remove 'recommended' content on YouTube to see only the people you're subscribed to on your front page, and skip YouTube sponsorships with a separate extension. Overall, you can completely tailor your experience on Firefox, and you'll have quite a bit of privacy from the business running the browser itself while doing it.
If Chrome isn't going to offer you privacy, ad-free browsing, or a customizable experience, consider Firefox!
Before the age of built-in antivirus and user-friendly web design, it was entirely possible to wander onto a webpage that would just start downloading something malicious out of nowhere. Popups that did this were a serious problem, and many browsers responded by adopting a sort of zero-trust philosophy. Firefox, for example, will tell you when a site has tried to open a pop-up, and asks you if you still want to open it. This does occasionally catch honest secondary windows (like payment portals and the like), but the great thing is that because it asked, you can say 'yes, I wanted that to open', and you're not stuck with some horrid flashing popup dominating your screen every other time.
Aside from popups, some websites were able to either trick users into downloading things by mimicking a real website, or simply start downloading things themselves as soon as they were clicked. Separate antivirus programs were needed to combat phishing downloads alongside other website trash, as browsers can't always differentiate between intentional and unintentional downloads. In this era of the internet, misclicking or accidentally misspelling a website URL could be catastrophic for the computer. Big hosting companies protect their hosted websites now by preventing others from registering domains that are almost the target URL, but not quite (a form of domain squatting known as typosquatting) – but this wasn't always the case.
Furthermore, hyperlinks can be used to trick people into clicking things they’d otherwise have avoided. Remember Rick Rolling? Every trick that anyone has ever used to Rick Roll you can also be used to get you to click on, and download, something you don’t want on your computer. Disguised hyperlinks. Obfuscated URLs that re-route a couple of times to get you to lower your guard. Clickable buttons, in place of links. Social engineering. The list goes on!
The False Sense of Security
The modern web as most people browse it is a safer place than it used to be. Google's SEO is partly to thank – users who report unpleasant website experiences, or who demonstrate that a website isn't good by leaving within seconds of it loading, will push that website lower in the search results, until eventually Google stops letting it pop up near the top at all. Hosting services are also partly responsible – they have a monetary interest in keeping their websites whitelisted, and malicious websites screw that up for them. Plus, it's sort of scummy. Would you want to do business with a company that passively allowed one of its clients to wreck another potential client's car? Probably not!
Antivirus and default browser settings take care of much of the rest. But these things don't mean the nastier parts of the web have stopped existing – they just mean it's harder to get there without doing so intentionally. Users don't fear clicking on links that lead to sources or Ko-fi pages, because it's been so long since that was a problem. Forum users click through links with no fear. While this isn't a perfect breeding ground for scam links to come back (most people still know and remember the warning signs), it is a perfect breeding ground for something new built on old foundations – QR code scams.
A QR code is a sort of bar code that's recorded in two dimensions (vertical and horizontal) instead of one. Almost every modern phone (and many of the outdated ones) comes with a QR-reading feature built in. QR codes and code readers have a high tolerance for missing or damaged information, making them a fantastic resource for quick and easy link-loading – where a barcode is unreadable if a bar is missing, a QR code can often still be read if squares are missing or obscured. Advertisements, verification texts, digital menus, libraries, virtual queues, etc. all benefit from how simple it is to whip out a phone and point the camera at a black and white square for a few seconds. It's even easier than typing in a link, and you can direct users to specific pages with gangly URLs without worrying how that URL is going to look on printed material – the user isn't going to see the URL anymore, they're going to see the QR code!
This led to things like QR code stickers out in public that would lead to individual GIFs or art-project websites, a form of easy-to-remove graffiti that still showed off some art in today's hyper-online world. QR codes gave restaurants and their diners an easy way to see a digital menu without having to type in a URL. It also made Rick Rolling easy again.
You’re probably already seeing the issue here: when users can’t see the URL, they have no way of knowing where they’re going to end up when they scan it. A hyperlink’s true destination is visible to a user when they press and hold on mobile, or hover their mouse pointer over it on desktop – the same is not universally true for QR codes (some phones and programs show the link before asking you to continue, but many do not). The scam potential for these codes is off the charts because many do not understand them as ‘links’ but as ‘scannable objects’.
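The 'press and hold' check can be approximated in code. Below is a minimal Python sketch of surfacing the parts of a scanned URL that a hover would normally reveal – the `TRUSTED_HOSTS` list is an invented example, not a real allowlist:

```python
from urllib.parse import urlparse

# Invented example allowlist - in practice this would be hosts YOU recognize.
TRUSTED_HOSTS = {"discord.com", "example-restaurant.com"}

def describe_scanned_url(url: str) -> str:
    """Show where a decoded QR link actually points before opening it."""
    parts = urlparse(url)
    host = parts.hostname or ""
    verdict = "known host" if host in TRUSTED_HOSTS else "UNRECOGNIZED host - be careful"
    return f"{parts.scheme}://{host}{parts.path} ({verdict})"

print(describe_scanned_url("https://discord.com/invite/abc123"))
print(describe_scanned_url("http://d1scord-gift.example/login"))
```

Nothing fancy – the point is simply that the destination host is knowable before you open the link, which is exactly what a raw QR scan skips past.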
For example, take the recent slew of Discord scams! A scammer compromises an account, either by brute-forcing the password or by social engineering, and sends messages to everyone on that person's friend list saying things like "ummm idk if this is really you or not but it was your name and it says you sent a girl gross stuff like wtf? Check the #shame tag and you'll see it. I'm blocking you just in case, I can't be friends with a predator". They then send an invite to the Discord server mentioned in the message, and block you so you can't continue to chat with them. Since this is a compromised account, possibly pretending to be someone you actually speak to regularly, this can be very alarming – the first instinct is to join the server so you can defend yourself against whatever allegations have supposedly been made there. The invite presents you with a QR code to join the server so you can clear your name and get your friend to unblock you, but scanning it tricks your phone into handing over your Discord login, compromising your account and continuing the scam.
This is the sort of scam that happened all the time before people grew wary of random DM’ed links! Here we are again, re-learning not to trust people that talk like bots and the things those bot-people/compromised accounts send us.
Cryptocurrencies generally work off a blockchain which records their movements. This has both pros and cons, but the biggest pro and con is that there's no centralized agency that monitors the coins. They monitor themselves instead! Assuming the base coin technology was made correctly, you can kind of just set it and forget it, and transactions using secure, well-made cryptocurrencies will work out as they should, so long as both parties are being honest and not trying to scam each other. That's not always the case, but in a perfect world, the flaws belong to the people and not to the tech. You can't hack a Bitcoin, for example; it has to be deliberately sent. Almost all Bitcoin scams involving theft are social engineering attacks for this reason – if a scammer can get into a Bitcoin wallet, either by brute-forcing the password or tricking the owner into giving it to them, they can still steal the coin by sending it elsewhere, and it can't be called back.
However, this really applies best to Bitcoin and older cryptocurrencies that have had a minute to mature and improve the tech. Newer tech built on blockchains is riddled with flaws. Take NFTs, for example – on some of the platforms hosting them, a security flaw allowed 'smart contracts' to be planted in someone's wallet, which would then move the real NFTs out of the wallet once the owner clicked them. NFT chains can't show if something was paid for, only if it was moved, and so those NFTs would be sold on as though they'd never been stolen, because nobody would be able to tell. It's sort of ridiculous.
The coins are impenetrable – everything else is not.
The Nomad Bridge Hack
Bridges, in cryptocurrency speak, are like currency exchanges. They allow people with one type to spend it like another by depositing the crypto they have to be used as collateral for the one they want. Blockchain technology is difficult to break when it’s one continuous piece, but when it’s not, it’s just like any other kind of banking technology. Meaning it also needs layers and layers of security so a failure on one layer doesn’t mean total system failure.
The problem is that typical banks have had a ton of time to work out security, but crypto is new, and it always wants to build itself something bespoke, just for crypto, because that makes it more special than all the other payment methods that came before. As a result, they're rediscovering issues that banks have already worked out, like the exploit that drained Nomad of all of its money. Or the different exploit that drained Wormhole. Or the other different exploit that drained the Ronin bridge.
In Nomad's case, a bad update allowed any transaction carrying the default value to go through as though it were valid. Once one person figured it out, others began copy-pasting his transaction info and swapping in their own destination address. This allowed them to transfer currency to their own wallet without having to put up any collateral, like they normally would. A handful of people tried to altruistically take money so it'd be safe in a wallet and they could give it back later, but the vast majority was snatched before the platform could react.
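To make the flaw concrete, here's a toy Python model of that class of bug – this is not Nomad's actual contract code, just an illustration of how pre-marking a default value as 'proven' lets never-proven messages sail through:

```python
# Toy model of a "default value counts as valid" bridge bug (illustrative only).
DEFAULT_ROOT = "0x00"

class BuggyBridge:
    def __init__(self):
        # The bug: the bad upgrade marks the DEFAULT root as already proven.
        self.proven = {DEFAULT_ROOT: True}
        self.message_root = {}  # message -> the root it was actually proven under

    def process(self, message: str) -> bool:
        # A message that was never proven falls back to DEFAULT_ROOT...
        root = self.message_root.get(message, DEFAULT_ROOT)
        # ...and DEFAULT_ROOT is marked proven, so the check passes anyway.
        return self.proven.get(root, False)

bridge = BuggyBridge()
# This withdrawal was never proven, yet it's accepted - and anyone who
# copy-pastes the pattern with their own address gets the same result.
print(bridge.process("send 100 tokens to attacker-address"))  # True
```

That's the whole exploit in miniature: no cryptography was broken, a lookup just returned a default that the code treated as success.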
Currently, Nomad is attempting to trace the coins and get them back, but this is the major disadvantage of cryptocurrency – they can’t just reverse the transaction, and the coins don’t record whether a movement was legal or not. There’s also no central body to make the thieves give the coins back, because the currency was made specifically so it wouldn’t need that. It’s unclear if Nomad is actually going to be able to get those coins back. Right now, 9 million dollars’ worth of the stolen coins have been returned (probably due to the 10% bounty that Nomad set trying to encourage people to give the money back) but the rest is still up in the air.
It’s a message that bears repeating – you shouldn’t click on links or attachments in emails you weren’t expecting, didn’t sign up for, or otherwise don’t entirely trust. For example, say you get an email from Target, but there are several typos in the header. That’s a really easy tell that the email is likely a fake! A real business the size of Target has several sets of eyes on their marketing materials.
A harder tell is checking the email sender each time. Say you open an email from ‘Tagret’, and it’s not loading right. If you don’t normally have that issue, it might be a fake trying to get you to click a ‘view in browser’ link that actually leads to a download page set up for a virus. You might have missed the fraudulent sender if you didn’t double check!
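Checking the sender programmatically is straightforward. Here's a small sketch using Python's standard-library `parseaddr`, with 'target.com' standing in for whatever domain the real retailer actually uses:

```python
from email.utils import parseaddr

def sender_domain_matches(from_header: str, expected_domain: str) -> bool:
    """Pull the real address out of a From header and compare its domain."""
    _, address = parseaddr(from_header)
    domain = address.rpartition("@")[2].lower()
    return domain == expected_domain.lower()

# The display name says "Target" in both cases - only the domain tells the truth.
print(sender_domain_matches("Target <deals@target.com>", "target.com"))
print(sender_domain_matches("Target <deals@tagret-mail.biz>", "target.com"))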
But what about attachments? You should approach attachments with a zero-trust philosophy. Verify the sender, verify the email itself isn't riddled with typos and easy-to-fix mistakes, and verify that the attachment itself is titled appropriately for what it says it is. While you could easily accidentally open a phishing email, realize it's a phishing email, and then close it before you click any links or type anything in (you should still report that incident to your IT department), clicking on a malicious attachment is harder to recover from! PDF attachments, which are normally pretty inert, are a possible highway into your network or computer. Keep the following things in mind when you open attachments.
It Might Not Be A PDF
Not all that glitters is gold! That attachment from someone you don't remember hiring might be something like an executable file (a .exe file) that's just named Invoice307.pdf. When you name a file, only certain characters are excluded from possible names – characters like slashes (/) and question marks (?) – because they'd interfere with the way the file is stored. Periods are not excluded, and that makes it easy to fake a name! It won't get everyone (invoice.pdf.exe looks pretty strange, right?), but it might get the kind of person who doesn't spend much time on computers, or doesn't see this kind of scam regularly. If that sounds like you, it only takes a second to double-check the extension before you download, and that second can prevent a lot of pain! Most desktops will also show you a file's full name if you hover your mouse cursor over said file – to hover, you just move your mouse cursor over the file without clicking it, and wait a second or two for your email program to show the full name. This is nice if the name is too long for the thumbnail and you're not sure whether you trust the sender.
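A quick Python sketch of that double-check – the list of risky extensions here is a small illustrative sample, not an exhaustive one:

```python
# Small illustrative sample of dangerous extensions - real lists are longer.
RISKY = {".exe", ".scr", ".bat", ".js", ".vbs"}

def looks_spoofed(filename: str) -> bool:
    """Flag names whose REAL extension (the last one) is risky,
    even if an earlier '.pdf' makes them look safe."""
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 2:
        return False  # no extension at all
    return "." + parts[-1] in RISKY

print(looks_spoofed("Invoice307.pdf.exe"))  # True - the real extension is .exe
print(looks_spoofed("Invoice307.pdf"))      # False
```

The rule of thumb the code encodes: only the final extension counts, and everything before it is just part of the name.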
A similar tactic is hyperlinking some text to open a website which will begin downloading malware instantly. The scammer puts in some ordinary-looking links, like a Shop Now! or Click Here! button, and then uses the hyperlink feature available in most email applications to hide a malicious link inside. If it successfully tricks you into clicking it, you're in for a bad time. The hover trick from before works here too, and it should show you where the address actually goes in the bottom left corner. Remember – don't click while you're using the hover trick! At least until you're sure it's safe.
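The hover trick boils down to comparing a link's visible text with its actual href. Here's a sketch using Python's standard-library HTMLParser, fed an invented example snippet:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs so mismatches are easy to spot."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

auditor = LinkAuditor()
# Invented example: friendly button text hiding an unfriendly destination.
auditor.feed('<a href="http://malware.example/dl">Shop Now!</a>')
for href, text in auditor.links:
    print(f'"{text}" actually points at {href}')
```

Email clients do the same comparison for you when you hover – the code just makes explicit what that status-bar preview is showing.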
However, there are ways to mess up your computer without overtly malicious software. Consider the 'zip bomb', for example! A zip bomb is a huge amount of junk files packed into a .zip file, which compresses them down to almost nothing. When you, the receiver, download and open the .zip, it slows or even crashes your computer with the huge amount of information it's trying to decompress. Since the files themselves don't have to be malicious to achieve this (they can be, but they don't have to be), many consumer antiviruses will just ask you if you trust the source – and if the sender has done a really good job of social engineering by sounding plausible and writing without typos, you might click yes without thinking twice. To recap: if it doesn't end in .pdf, and someone you don't know sent it to you, it might cause problems for your computer.
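One defensive measure is to look at an archive's expansion ratio before extracting it. This Python sketch builds a tiny 'bomb' out of highly compressible junk and measures it – the idea that benign archives rarely expand more than ~10x is a rule of thumb here, not a standard:

```python
import io
import zipfile

def expansion_ratio(zip_bytes: bytes) -> float:
    """Uncompressed size divided by compressed size, without extracting."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        packed = sum(i.compress_size for i in zf.infolist()) or 1
        unpacked = sum(i.file_size for i in zf.infolist())
    return unpacked / packed

# Build a miniature "bomb": 10 MB of zeros deflates to a few kilobytes.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("junk.txt", b"\x00" * 10_000_000)

ratio = expansion_ratio(buf.getvalue())
print(f"expands {ratio:,.0f}x - refuse to auto-extract")
```

Real zip bombs nest archives inside archives to reach million-fold expansion, but even this flat version shows why checking declared sizes before extracting is cheap insurance.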
Even If It Is, It Might Have Something Nasty
If you've ever struggled to get Word to allow you to open a document and edit it, that's because some malware can be hidden inside otherwise innocuous-looking documents. It's rare, but it happens – it's usually something called a macro virus, or a virus that uses 'macros' to download itself. A macro in Microsoft's products is a command that groups several keystrokes into one, and they have many legitimate uses, but they can be used maliciously to lead you somewhere you don't necessarily want to go, or to download/unzip another file contained within the file you're working with. A much simpler version just combines macros with the hyperlink trick from before to get you to bring the document out of safe mode by disguising said hyperlink as something innocuous, but other, more complicated ways to get your PC to download something nasty can be hidden too.
Once again, double-checking the file extension might help you determine whether or not you really want to click something. Microsoft Office products save differently if they contain macros or ‘active content’ – for example, instead of a .docx file, a Word document with macros in it will save as a .docm file. If you download one, most recent versions of Office products will ask you to verify you trust the place you downloaded from, adding further security.
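That extension check is easy to automate. A minimal Python sketch (the list covers the common macro-enabled Office formats, but isn't exhaustive):

```python
# Office's macro-enabled formats end in "m" - the extension alone signals
# that active content may be inside. Common examples, not an exhaustive list.
MACRO_ENABLED = {".docm", ".dotm", ".xlsm", ".xltm", ".pptm", ".potm"}

def may_contain_macros(filename: str) -> bool:
    ext = "." + filename.lower().rsplit(".", 1)[-1]
    return ext in MACRO_ENABLED

print(may_contain_macros("Q3_report.docm"))  # True - treat with extra caution
print(may_contain_macros("Q3_report.docx"))  # False - plain format, no macros
```

This doesn't prove a .docm is malicious (macros have legitimate uses), it just tells you which files deserve the extra "do you trust this source?" scrutiny.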
Don’t Forward Emails You’re Suspicious of to Anyone but Your IT
If you forward a suspicious email to your manager, and your manager is in a rush, doesn't read what you wrote about the message, and clicks the attachment… you've just moved the problem! Don't forward something suspicious to another member of your organization – if the scammer had your info, odds are your coworkers are targets too! Instead, if you get an email you're not sure about, forwarding it to your IT department is a safe bet. If it's nothing? Then you sent your IT guys an email with a legit attachment, and you know for sure it's safe to open. If it's malicious? IT should be able to handle it on a quarantined computer. They may even be able to tell it's malicious without opening it! This could potentially save you and your organization from ransomware or other malware that can completely halt your business.
A shared email box has plenty of utility, but it has to be set up right to reach its full potential. A shared mailbox should allow all its members to see the content, and can usually be set up so that members can send emails under the mailbox's address. Essentially, the box is just a box that they have permission to access. Microsoft Outlook allows you to add your users to specific shared mailboxes, but only you, the admin, decide who gets to see it, who gets to be a member, who has the ability to send as the box, and where forwards go automatically, if that's even desired. And members don't need a Microsoft license for the mailbox to function!
A shared account, on the other hand, is an easy path to disaster! A shared box shouldn’t be a fully-fledged account that your users can log into using a password and username that you gave them, generally speaking. If your box is set up so that users are in the account instead of in the box only, they have way too many permissions!
For example – a user decides they want full control of the shared email account and simply logs in, changes the password, and doesn’t share it. Now what? You can do a lot of things to the user, up to and including firing them, but that might not be enough to get the email account back, especially if they left on bad terms. Or, an employee mistakenly believes that everyone in the company is meant to have access to a shared account, and gives the login credentials to an unauthorized employee when they ask. Or, an employee writes down the shared credentials somewhere, loses that, and then the company’s support or information mailbox is hacked and totally out of their control. If the account is set up as part of a security group, everything in that group is then put in jeopardy, because accounts can access shared drives. Accounts also take a license to keep functional, so that’s an added expense over a simple shared email box. The issues go on and on!
While some of this can be mitigated with steps such as two-factor authentication, the vast majority of it can only be stopped by making a box that has layers of separation between the account controlling it and the accounts allowed to use it. Microsoft’s system allows users to be added to a shared mailbox without giving them total control over it – that’s the ideal, as user permissions can be revoked without having to go through the song and dance of giving the login info back out to everyone still authorized to use it. As shared mailboxes can’t be signed in to, they’re also much less likely to be ‘hacked’ via a stolen password (although someone could still access it via someone else’s account).
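The difference can be modeled in a few lines. This toy Python class is not the Microsoft API, just an illustration of why revocable grants beat shared credentials: removing one user doesn't require rotating a password and redistributing it to everyone else.

```python
class SharedMailbox:
    """Toy model: access is a revocable grant, not a secret everyone holds."""

    def __init__(self, address: str):
        self.address = address
        self.members: set = set()

    def grant(self, user: str) -> None:
        self.members.add(user)

    def revoke(self, user: str) -> None:
        # No password to rotate, nothing to redistribute - just drop the grant.
        self.members.discard(user)

    def can_read(self, user: str) -> bool:
        return user in self.members

box = SharedMailbox("support@example.com")
box.grant("alice")
box.grant("bob")
box.revoke("bob")  # bob leaves the company; alice is unaffected
print(box.can_read("alice"), box.can_read("bob"))  # True False
```

Contrast that with a shared account, where 'revoking' bob means changing the password and re-telling it to alice and everyone else – the exact song and dance the mailbox model avoids.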
Group Accounts – Social Media
On the other hand, there are social media accounts for the company. Almost no website lets multiple people run an account with separation from that account the way Microsoft does – LinkedIn is a rare exception, and Facebook pages allow people to post to them, but the page can't post to itself; the company account has to post to it. In cases like that, a shared account is still not ideal, but it becomes easier to manage if only a handful of people have the password and only one person has the 2FA number. In a pinch, that makes it slightly easier to reclaim the account if the person in control decides to go rogue – but even then, some sites will allow the 2FA number to be changed without verifying with the current 2FA contact first, making all of the issues above issues here as well. That makes it extraordinarily difficult to truly, properly bombproof a social media account! Limiting the total number of people with access, and monitoring when it's used, is the best solution. Instead of a group shared account, make it a two-person account – or less!
Alternatively, websites like Buffer and Hootsuite can provide some barriers, but for a fee. They may not stop an employee going rogue, but they can at least identify when and which one was responsible if something happens to the company Instagram.