What is a VPN?

Elizabeth Technology March 23, 2023

Note: this is not meant to act as a buyer’s guide. 

If you’ve been on YouTube in the past couple of years, you might have noticed an uptick in VPN sponsorships making all sorts of claims. But what does a VPN actually do?

Location Services

Sometimes content published online is kept exclusive to certain countries. Canada, for example, has a rule that a certain percentage of its entertainment has to be made by Canadian artists, and Germany’s copyright laws are notoriously strict. VPNs can allow people to access this content as though they were from where it was made, instead of where they actually are. American music videos and uncut Hulu channels for everyone!


VPNs are usually advertised for privacy purposes, and most work pretty well! Instead of sending your traffic to the ISP in readable form, the VPN encrypts it and routes it through the VPN’s server. From that server, your request goes out to the content you wanted, and the response comes back to you through the same encrypted tunnel. The ISP, which is usually the party responsible for restricting content, can only see that you’re connected to the VPN server – it can’t see what you’re looking at, and therefore can’t restrict it. For privacy concerns around the ISP, that encryption is great.
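The routing described above can be sketched in a few lines. This is a toy illustration only – the hostnames, packet format, and XOR "cipher" are all made up for the example, and real VPNs use actual cryptography, not XOR:

```python
# Toy illustration of why an ISP can't inspect VPN traffic.
# The XOR "cipher" stands in for real tunnel encryption -- never
# use it for anything real. All names here are invented.

from itertools import cycle

VPN_SERVER = "vpn.example.com"
SHARED_KEY = b"not-a-real-key"

def encrypt(data: bytes, key: bytes) -> bytes:
    """Stand-in for the tunnel's encryption (toy XOR)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

decrypt = encrypt  # XOR is its own inverse

def send_without_vpn(destination: str, payload: bytes):
    # The ISP routes this packet and can read both fields.
    return {"to": destination, "data": payload}

def send_with_vpn(destination: str, payload: bytes):
    # The real destination travels *inside* the encrypted payload;
    # the ISP only sees an opaque blob addressed to the VPN server.
    inner = destination.encode() + b"|" + payload
    return {"to": VPN_SERVER, "data": encrypt(inner, SHARED_KEY)}

packet = send_with_vpn("example-streaming-site.com", b"GET /video")
print(packet["to"])  # vpn.example.com -- all the ISP sees
inner = decrypt(packet["data"], SHARED_KEY)
dest, _, data = inner.partition(b"|")
print(dest.decode())  # the VPN server recovers the real destination
```

The key point the sketch shows: with the VPN, the only address visible on the wire is the VPN server’s, while the real destination rides inside the encrypted payload.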

It doesn’t stop there, either: If the VPN is encrypting the data coming to and from a coffee shop’s WiFi for your computer, it’s hiding it from anyone who has access to that network – which might be more than the ISP. If all it takes is the password on the receipt to get into the network, then in theory almost anyone who finds a receipt or buys a drink can access the network. This could become a problem if that person knows more about WiFi than the owners of the shop do.

But Branding?

How is it possible for there to be so many? Don’t they all do the same thing? Kinda. That’s also why VPN ads have become so inescapable. The barrier to entry for selling one as a service is actually pretty low, and depending on where the host buys their server space, they’re low maintenance too. Given those two conditions, the only thing keeping someone from making money off of one is visibility. The market’s flooded, so right now the winner of the race is whoever has the most advertising dollars.

Does it do Everything?

As far as advertisers are concerned, a VPN is not the be-all end-all of privacy. There are so many devices in the average house (your phone, your WiFi-enabled washer, your computer, your smart TV, your gaming console…) that advertisers will still have an idea of who you are, and that doesn’t even include things like cookies. When you’re using Google, every Google service knows what you’re interested in unless you’re signed out and incognito – searches you made could be used to tweak the content that appears on your YouTube ‘recommended’ page. Google allows you to turn off ad customization, but that doesn’t mean they aren’t keeping the info.


If you have an account with, say, Amazon, they already know what you’re looking at on their site because it’s linked to the account. And if you have a digital assistant that you regularly search or browse with, the VPN can’t help you. If you’re really interested in browsing privacy and not in accessing geo-locked content, you could download something like DuckDuckGo or Ecosia (this is not a buyer’s guide; products are only used as examples). These services don’t store data on your search habits. Privacy-focused search engines aren’t foolproof, but if your main concern is privacy from advertisers and you don’t want to spend money on a subscription, they’re a good place to start.

Where’s The Data?

There are also concerns about the many different VPNs themselves: you are partially anonymous to your ISP (they still know you’re using a VPN, and approximately how much data you’re moving), but you are not anonymous to the VPN. In some cases, the website on the other end expects unencrypted data, which means the VPN literally cannot connect you without decrypting that data first. To be fair, most browsers will warn you about unencrypted websites – but if you click through because you think the VPN’s keeping you safe, this is important information to know. Beyond that, the VPN itself can sell your data. Or get hacked! The barrier to entry is very low, which is exactly why this is a problem.

Long story short, when YouTubers are trying to sell this service, they don’t tell you why you might not need it. It’s not a good idea to connect to public WiFi without some sort of protection, and VPNs can help. VPNs are a good service if you really want to watch the UK version of The Office. However, VPNs are not an invincible shield, and they’re not always capable of end-to-end encryption. They’re a security tool, not a comprehensive solution to your privacy woes.

As always, do your research on the brands you’re considering before jumping into it headfirst.

Remember, this is an overview of VPNs as a service, not a buyer’s guide!


Preventing Piracy Is Hard

Elizabeth Technology March 21, 2023

It’s frustrating to have someone else steal your work, which is why piracy is one of the biggest scourges of entertainment today. Yet bootlegs and copyright infringement still happen, sometimes undetectably. So if the person pirating is outside your legal reach, how do you keep them from enjoying your work for free?

Create anti-piracy measures, of course.

Tainting the Well

Cher briefly released songs on LimeWire that played very quietly, in an effort to get the listener to crank up their volume. After a little bit, she’d shout at you to stop stealing at normal recording volume – which was now at max. This didn’t last very long, because downloads on the site were labeled with names, but there was no limit to what artists would do to keep their intellectual property in their own hands. Ironically, the worst LimeWire users did more to protect intellectual property than the artists did! Trolls would put some strange things on otherwise normal tracks, and some people would rather go to iTunes than play download lottery. The pirates tainted the well themselves.


People tend to be more embarrassed that they got caught with their hand in the cookie jar than they are about the pirating itself. Asking about the bizarre version of the song you downloaded would out you as a pirate. And music wasn’t the only industry to do this.

A whole bunch of games would give strange errors or messages to get pirates to ask about them online. Of course, only pirates got these messages, so creators and other fans alike knew they’d pirated the software. That was the punishment: everybody on the game’s Steam page knew you were a pirate! The accused then either self-exiled from the forum to avoid the shaming or doubled down on the pirating.

Anti-Piracy software

Games have great examples of anti-piracy in action. Piracy detection used to be pretty hard – in the early days, all it took to make copies was a blank disc and a PC that already had the game on it. Games would use physical code wheels or artifacts inside the game’s packaging to be sure you had a legit copy – if you couldn’t answer a question pre-programmed into the game, you didn’t have the original package, and you couldn’t play. Then, as computers got better and games could take up more space, programmed anti-piracy kicked into a higher gear. Anything and everything went – it was the pirate’s problem if they didn’t like it. Earthbound, a game that was already difficult, would crash at the final screen and then delete all your save data. So would Spyro, although Spyro would warn you that it thought you were playing a bootleg copy before you got to the end.

The goal was to frustrate the pirate, which would eventually prevent piracy in its own way. Some developers went to guilt, instead: Alan Wake just slaps an eyepatch with the Jolly Roger on your character to remind you that you’re playing a pirated copy and you should feel bad. So does Quantum Break.

Business Software License Checks

There are many obvious downsides to pirating something like Excel. Namely, if something goes wrong, what are you going to do? Contact the vendor? With your illegitimate copy? Good luck with that. It doesn’t help that Microsoft runs audits, too – if they detect a license or a product key not in line with what they’re expecting, they’ll know you’re pirating. If another copy of Word tries to interact with an illegitimate copy, they’ll know you’re pirating. Basically, if you’re ever connected to the internet with a cracked copy of Office software, they’ll know. There are so many free alternatives that pirating Word seems foolish.

Microsoft is doing it for more than the money, too. There’s a growing host of people online who would just love to scam some businesses into downloading malicious software, alongside illegitimate copies of Word. Assuming the business owner genuinely believes they’re getting real copies of Office, Microsoft’s good name is tainted!

CAP Software

Pirating early-release discs destroys faith in reviewers. However, early reviewers also provide a lot of free advertising, so it wouldn’t be very smart financially to just cut them all off. Instead, studios use CAP software, which stores a unique code in each reviewer’s file. If the file is leaked or copied, the code comes with it, and the studio knows exactly which reviewer to cut off. Versions of this that mix identifying tones into the movie’s audio or add visual watermarks are also common! Everyone benefits: the studio still gets its promotion, the reviewer gets to review the movie, and the viewer gets some early, legitimate information about what they want to watch. The pirate is slapped with a fine and everyone moves on.
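The tracing idea can be sketched like this. It’s a deliberately simplified, hypothetical version: real systems hide the mark in audio tones or subtle visual patterns, whereas here the code is just appended to the file’s bytes for illustration, and the reviewer names are invented:

```python
# Simplified sketch of watermark-based leak tracing. Appending the
# code to the raw bytes is purely illustrative; real CAP-style marks
# are hidden inside the audio or video itself.

import hashlib

REVIEWERS = ["ACME Reviews", "Film Weekly", "Indie Critic"]

def reviewer_code(reviewer):
    """Derive a short code unique to each reviewer."""
    return hashlib.sha256(reviewer.encode()).hexdigest()[:12]

def watermark(master, reviewer):
    """Produce that reviewer's personalized copy of the file."""
    return master + b"|WM:" + reviewer_code(reviewer).encode()

def identify_leaker(leaked_file):
    """Scan a leaked file for any known reviewer's code."""
    for reviewer in REVIEWERS:
        if reviewer_code(reviewer).encode() in leaked_file:
            return reviewer
    return None

copy = watermark(b"<screener bytes>", "Film Weekly")
print(identify_leaker(copy))               # Film Weekly
print(identify_leaker(b"<clean master>"))  # None
```

Because every reviewer’s copy carries a different code, a single leaked file is enough to point back at its source.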


Magnetic Storage Types

Elizabeth Technology March 16, 2023

Magnetic Tape

The most well-known version of tape-based magnetic storage is the kind used for media. When tape-based recording was first introduced, it revolutionized the talk show and DJ-ing scene of the time (mostly post-WWII) because it enabled shows to be recorded and played later, rather than live. Music recording tech already existed, but it required physical interaction from the DJ, so it wasn’t as hands-off as tape was.

The second-most well-known version is the kind used for computer memory! Data is stored on the tape in the form of little magnetic ‘dots’ that the computer can read as bits. Before each pocket of data dots is a marker that tells the computer how long that pocket should be, so it knows where one set of data ends and the next begins. The polarity of each dot determines its bit value, and the computer can then read all these dots as binary code.
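As a loose sketch of that layout – with a format invented for the example, not any real tape standard – reading the blocks off such a tape might look like:

```python
# Each block on the "tape" starts with a length marker, followed by
# that many magnetic dots whose polarity (+1 or -1) encodes a 1 or 0.

def read_tape(tape):
    """Walk the tape, splitting it into blocks of bits."""
    blocks, i = [], 0
    while i < len(tape):
        length = tape[i]                 # marker: how many dots follow
        dots = tape[i + 1 : i + 1 + length]
        blocks.append([1 if d > 0 else 0 for d in dots])
        i += 1 + length                  # jump to the next marker
    return blocks

# two pockets of data: 4 dots, then 3 dots
tape = [4, +1, -1, -1, +1,  3, +1, +1, -1]
print(read_tape(tape))  # [[1, 0, 0, 1], [1, 1, 0]]
```

The length markers are what let the reader skip cleanly from one pocket of data to the next without guessing where the boundaries are.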

This method of data storage was a massive breakthrough, and other mediums continue to use the format even today! Tapes are still in use for big stuff – parts of IBM’s library rely on modern tapes, which can now store terabytes of information at a higher density than disks and flash drives alike. Other memory types relying on magnetic domains include hard disks and drums, to name a couple. All that separates them is material and know-how: the better the magnetizing material on the outside, the smaller the domains can get. The better the insulation between the domains and regular old entropy, the more stable the data is!

Carousel Memory

Carousel memory was an attempt at shrinking the space that magnetic tape took up, taken to the extreme. Instead of one very long piece of magnetic tape on a bobbin, the carousel memory system uses several smaller reels of tape arranged in a carousel pattern around the central read mechanism. Getting to the right info is as simple as selecting the right reel! As you might imagine, this has some issues. Moving parts add complications and an increased risk of mechanical failure to any device, and a device carrying thin, delicate magnetic tape is an especially bad place to start.

However, it wasn’t all bad. Carousel memory was actually quite fast for the time because it didn’t have to rewind or fast-forward as much to get to the right area of code. It could skip feet of tape at a time! This advantage declined as tape tech improved, but it still helped companies trying to squeeze the most life from their machines. The bobbins and individual ribbons were all replaceable, so the tape wasn’t worthless if it got torn or damaged. The carousel itself was also replaceable, so the many moving parts weren’t as much of a curse as they’d be on, say, the first hard disks, which had irreplaceable heads.

Core Rope Memory

Core rope memory featured magnetic grommets, or ‘cores’, on metal ‘ropes’, and those ropes were woven into a fabric the computer could read. In ROM (read-only memory) format, if a wire went through a core, that bit was a ‘one’, or a ‘yes’; if it didn’t, it was a ‘zero’, or a ‘no’. In this way, the fabric is physically coded into binary the computer can use. ROM’d core-rope memory involved quite a bit of complicated weaving and un-weaving to get the cores in the right spots.
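A minimal sketch of that read-only scheme – the core names, word names, and weave layout here are all invented for the example:

```python
# Read-only core rope: each word is a wire, each core a bit position,
# and a bit is 1 only if that wire threads through that core.

CORES = ["core0", "core1", "core2", "core3"]

# Which cores each wire (word) was woven through:
WEAVE = {
    "word_a": {"core0", "core3"},
    "word_b": {"core1", "core2", "core3"},
}

def read_word(wire):
    """A core threaded by the wire reads as 1, otherwise 0."""
    return [1 if core in WEAVE[wire] else 0 for core in CORES]

print(read_word("word_a"))  # [1, 0, 0, 1]
print(read_word("word_b"))  # [0, 1, 1, 1]
```

Since the data is literally the weave, it can’t be rewritten – which is also why bugs could be spotted by inspecting the threads before the module was ever powered on.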

Core rope memory was chosen over tape memory for the Apollo missions, mainly for weight purposes. Tape was great, but not nearly dense or hardy enough for the mission yet, and neither were the other core modules available to NASA. A read-only core-rope memory module could store as many as 192 bits per core, where erasable core memory could only manage one bit per core. Where each bit in the read-only module depended on which wires ran through a core, the erasable model (core memory) read the core’s magnetic state to determine the bit, not the threads going through it. The final module sent to the moon weighed around 70 pounds and read fairly quickly. No tape, core memory, or hard disk available at the time could have matched that weight or speed.

Core-rope memory has its place. It’s very sturdy, and since it relies on the cores to act as bits, it’s possible to visually identify bugs before the memory’s even used, unlike core memory. Both are sometimes called ‘software crystallized as hardware’ because of the core system. It isn’t seen much today, since it is still incredibly bulky, but at the time of its use it was revolutionary.

Core Memory

Core memory is the older sibling of core rope memory, and it stores less. However, the people who got to work with it call it one of the most reliable forms of memory out there! Core memory works much the same as core rope memory, where the bits are stored in cores.

However, the formats are different. If core rope memory is like a binary-encoded scarf, core memory is more like a rug. Thin threads made of conductive material are woven into a grid pattern, with cores suspended on where the threads cross each other. The computer understands these threads as address lines, so asking for a specific bit to be read is as simple as locating the X and Y address of the core. A third set of lines, the sense lines, runs through each core on the diagonal, and this is the thread that does the actual reading.

When asked to read a bit, the computer sends current down that core’s address threads, forcing the core toward the ‘zero’ polarity, and the sense thread detects whether the core flipped. If it didn’t flip, it was already a zero. If it did, it was a one – and it has now been flipped to zero by the reading process. This is known as ‘destructive reading’; the computer compensates by flipping the bit back to where it was after the read. Due to its magnetic nature, the core then keeps this info even after power is cut!
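The read-restore cycle can be sketched like so, with made-up dimensions. The point is simply that the read forces the core to zero and the controller immediately writes the old value back:

```python
# Sketch of core memory's destructive read: select a core by its X/Y
# address, read it by forcing it to 0, then restore the old value.

class CoreMemory:
    def __init__(self, width, height):
        # each core holds one bit; 0 means "polarized to zero"
        self.cores = [[0] * width for _ in range(height)]

    def write(self, x, y, bit):
        self.cores[y][x] = bit

    def read(self, x, y):
        # Driving the address lines forces the core to 0. The sense
        # line only pulses if the core *flipped*, i.e. if it held a 1.
        sensed = self.cores[y][x] == 1
        self.cores[y][x] = 0        # the read destroyed the bit...
        bit = 1 if sensed else 0
        self.write(x, y, bit)       # ...so write it straight back
        return bit

mem = CoreMemory(4, 4)
mem.write(2, 1, 1)
print(mem.read(2, 1))  # 1
print(mem.read(2, 1))  # still 1, thanks to the rewrite step
```

Without that rewrite step, every read would wipe the bit it returned – the restore is what makes the destructive read usable as general-purpose memory.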

This link here is an excellent, interactive diagram of the system.

Even though core memory improved the bit-to-space-taken ratio, it still aged out of the market. With the price of a bit dropping rapidly, core memory got smaller and smaller, but the nature of its assembly meant it was almost always strung by hand – all competitors had to do was match the size and win on labor. Soon, its main market was taken over by semiconductor chips, which are still used today.

Magnetic Bubbles

Magnetic memory has had strange branches grow off the central tree of progress, and magnetic bubble memory is one of those shoots. One AT&T engineer (who also developed other forms of memory there) created bubble memory. It never took off the way other magnetic memory styles did, although it was revolutionary for its compact size – before the next big leap in technology arrived, people thought this was the big leap. It was effectively shock-proof! Unfortunately, better DRAM chips hit the market shortly after and crushed bubble memory with improved efficiency.

Anyway, bubble memory worked by moving the bit to be read to the edge of the chip via magnetic fields. The magnetic charge itself is what moves the bits, much in the same way electrons move along a wire when a charge is applied, so nothing physically moves within the chip! It was cool tech, and it did save space – it just didn’t hold up to semiconductor memory chips. Bubble memory saw a spike in use during a chip shortage, but it was so fiddly that as soon as DRAM chips were available again, it went out of style.

Semi-Conductor DRAM – Honorable Mention

DRAM chips are a lot like core memory, in that the device reads the state of a physical object to determine the bit. In semiconductor chips, that physical object is a tiny capacitor, hooked up to a tiny transistor, on semiconductive metal-oxide material. Instead of determining magnetic state, the device checks whether the capacitor is discharged or not. No charge = 0, charge = 1. These chips aren’t technically magnetic, but since they’ve killed off so many of the other options, here they are!

DRAM stands for Dynamic Random-Access Memory, and it means that the memory can be accessed randomly instead of linearly. As long as the computer knows where the data’s stored, it’s able to pull it without pulling other files first. They’re still being sold today!

Magnetic Disk (Hard Disk Drive)

Hard drives work more like tape than core memory. A hard drive is a platter (or a stack of platters) with a read-write head hovering above it. When you save data, the head magnetizes tiny zones on the platter to represent that information in binary. When you read or recover that data, the head interprets those zones as bits, where the polarity of each magnetized zone is either a zero or a one.

The zones of magnetization are incredibly tiny, which makes hard drives one of the more demanding memory forms out there, both now and back then.

Early hard drives could suffer from ‘de-magnetization’, where a disk’s magnetic domains sat too close together and gradually drew each other out of position, slowly erasing the information on the disk. This meant the disks had to be bigger to hold the same data (like everything else at the time) until better storage materials came along. Even though they held more capacity at launch, they were passed over in favor of smaller, more stable options like tape and core memory. The very early drives developed by IBM were huge – washing machine huge. They didn’t respond to requests for data very quickly either, which further pushed reliance on tape and core technology.

Over time, hard disks improved dramatically. Instead of magnetic zones being arranged end-to-end, storing them vertically next to each other created even denser data storage, enough to outcompete other forms of media storage entirely. Especially small hard drives also come with a second layer of non-magnetizable material between the first layer and a third layer of reverse-magnetized ‘reinforcement’ which keeps the data aligned right. This enables even more data capacity to be crammed into the disks!

Sometime in the 80s, hard drives finally became feasible for personal computers, and they’ve been the standard since. SSDs, which have no moving parts whatsoever, are gaining ground in the market, but because of how they store data they can’t be truly, irrevocably erased the way hard drives can. As a result, hard drives are going to stick around a while, especially in the medical and military industries!


It’s Easier Than Ever to Accidentally Kill a Song Online

Elizabeth Technology March 14, 2023

Trying to Build Hype

It’s actually painfully easy to disappoint potential fans by teasing a sample of a song ahead of its full release. The latest to do this was Body Shop, by Sam Smith. The singer released a clip of the song’s drop, and it was bassy, scratchy, and sort of grungy – partially because it wasn’t the final mix, and partially because Smith was playing the clip on a phone with the speakers at their limit. People loved it! When the actual song came out, it felt more dubstep-esque than it had in the clip, because that scratch had been removed and the bass re-balanced before release. A lot of listeners, especially on TikTok, wondered why it didn’t sound as good. Still, it worked out for Smith: the initial stumble didn’t wreck the song’s impact like it had the potential to.

Releasing teasers like this is a semi-common way to build hype for a song, for artists both big and small. Artists will leak a little bit of the song they’re working on, usually the ‘drop’ if a song has one, or some other interesting tidbit of it, and hope it inspires their audience to listen to the full thing once it comes out as a way of building anticipation and excitement.

“Girlfriend” vs. “If We Ever Broke Up”

TikTok has changed the playing field, however. It’s fairly common on TikTok for one artist to spin the app to their favor with a trend, while everyone after them has great difficulty recreating that success – especially if it’s clear they’re trying to leech off the trend because they didn’t have their own idea. A song called ABCDEFU, for example, did really well (it hit the top 40!) because the app liked the artist singing it, and liked that she was ‘one of them’. Copycats, including the singer of a song called Twinkle Twinkle Little B**** (yes, it uses that melody), did not have nearly as much success. The app was bored with them. It became clear that a number of artists trying to piggyback off of ABCDEFU thought it would be easy to get the people of TikTok to stream anything mindlessly as long as it ‘belonged to TikTok’. Begging for listens feels… embarrassing, now. If a song is good, it would have blown up on the app organically, right? ABCDEFU came out before saying ‘let’s get together and make this song big!’ became uncool. It’s also not a bad song in its own right – the artist at least bothered to write it its own melody.

So it seems to be fate that a songwriter who released a teaser specifically on TikTok – a song called Girlfriend by Hemlocke Springs – would shut the door on most of the songs behind it trying the same trick, at least for a bit. Hemlocke didn’t release an unfinished or differently mixed teaser; they just took one of the most interesting parts of the bridge and published a number of videos under that sound clip. This trick works pretty consistently if the song is good, but it comes with risks. If the most interesting part of the song is one that might get annoying fast, the song won’t get a good footing when it comes out, because listeners may have heard it too much to enjoy it. If the clip is of the most exciting part of the song, that part won’t be so exciting in the finished track, because it won’t be new anymore – and that reduces re-listens. And if the teaser is released too early, interested listeners might not be interested anymore by the time the song comes out for real.

If We Ever Broke Up by Mae Stephens ran into several of these sample traps. The sample came out too far ahead of the song, and the fun part was run into the ground by the time the full track arrived. For those unfamiliar with TikTok’s algorithm for its “For You” page: sounds, creators, and content the viewer interacts with tend to show up more often there – if you like a video using a certain song, and you physically tap the like button to indicate that, you’re signaling that you want more of that content. You can get stuck in a loop of the same content over and over if you’re not careful with what you’re liking! The same happened to this song – it was new and interesting, then the For You page kept pushing it forward, and then it was overplayed for the people who liked it most by the time the song was fully available for streaming.

To avoid short-changing a song with this trick, the best bet is to not alter the mixing for this particular sound clip, to release the song soon after the clip is released, and to mean it when you create things. ABCDEFU has something honest to it that Twinkle Twinkle Little B**** is lacking.

The Industry Plant Genre

There’s a whole other subsection of TikTok music written for people who may have connections with a real label and enough money to pay for professional production. It’s usually not very good, and it’s almost never the genre it claims to be (often they go for punk, for some reason, a genre that calls out posers like almost no other).

All of the issues these previews cause for their full-length versions are somehow even worse for these songs. They clip the best snippet of the song and slap it onto TikTok in an attempt to manufacture a viral moment. Even when they succeed, ironically or not, it rarely translates to streams or song purchases. By the nature of plants, the song itself is usually pretty simplistic, with obvious rhymes and a bridge that doesn’t do anything for the song – if there even is one. Releasing a clip of an obviously crowd-sourced, committee-written song before the song is out is pure kryptonite for that song’s success. It doesn’t even have to have suitable-for-work lyrics anymore! Industry plants can use crass language that would bar them from being played at Target, and many do in hopes of appearing more organic. It doesn’t work.

Nobody likes being advertised to anymore, and if listeners are able to figure it out before the song’s got its rotation in their Spotify playlists, it may as well be dead on the launchpad.

Why Does it Feel Like CGI is Getting Worse?

Elizabeth Technology March 9, 2023

CGI, or Computer-Generated Imagery, is exactly what it says on the tin. A lot more qualifies as CGI than you might think – even phone filters could be considered CGI. So why, when there’s so much of it, does so much of it look bad? Especially by theoretically multi-million-dollar movie studios? Have we passed its peak?

More of It

Firstly, there’s just more of it than there used to be. CGI was once a sparing, usually pretty expensive supplement, so matte paintings, painting the film itself, and practical effects used to be the way to go. This wasn’t always ideal: practical effects are often expensive, and can be very frustrating for actors on set! Dave Bautista’s Drax makeup (from Guardians of the Galaxy) could take two hours or more to put on. Jim Carrey’s Grinch was much the same. The process is so agonizing that it often takes special expertise to coach actors through it until filming is done so they don’t literally go insane. To minimize the suffering, CGI can be used in scenes where the actors aren’t especially prominent, or where a scene may damage the makeup and the studio is unwilling or unable to put in a double.

Even better, CGI allows for insane visuals that couldn’t be made with practical effects. The Mummy series features things that can’t be done with practical effects – scarab beetles, for example, don’t move as fast as the movie portrays, nor do they run in floods together. Huge sandstorms don’t exactly have human faces in them all the time either. 

Not to mention backgrounds! Where previous generations of moviemakers were forced to use matte paintings and real sets, greenscreen technology has gotten so good that it’s almost difficult to tell when a movie is using a greenscreen as a background – assuming the background could be a real place and not outer space.

CGI also opens the door for newer, more dangerous-looking stunts. Before, clever editing of the film was necessary to make a larger-than-life stunt possible. Now there’s CGI! The end of John Wick 3 featured a sequence too dangerous for Keanu Reeves or a stunt double to perform, so CGI was used to make it look like John Wick flopping around. CGI may also win over actors where stunt doubles can’t. While some admire Tom Cruise’s willingness to do his own stunts, others call it too risky: during the shooting of one of the more recent Mission Impossible movies, Cruise broke his ankle after landing a jump wrong. This delayed production, even though the release date wasn’t changed, and he revealed that his ankle still wasn’t quite right after the movie was completed. By saying the studio doesn’t want anyone doing a particular stunt, a studio may convince such actors to stop trying. Similarly, when animals are featured in movies, it’s unethical to put them in situations where they feel like they’re in danger, because they don’t have the cognizance to know a stunt is just a stunt. A Dog’s Purpose rightfully got a lot of flack when it used its real dog actor in a simulated white-water river (simulated as in a fake river with real water) to get the shot. CGI animals may not always look the best, but that’s better than potentially traumatizing a real, living creature.

Overusing It

All of this has become faster and cheaper than doing it the ‘real’ way. Obviously filming in space would be prohibitively expensive, and a number of movies are downright impossible to make with an actor in makeup or a puppet instead of a CGI rig. The technology has improved quite a bit! However, that doesn’t make CGI a fix-all, even though studios often treat it as one. Dr. Strange: Multiverse of Madness, for example, received criticism online for its cheesy-looking CGI in places where a quality prosthetic would have sufficed. Screenshots of the movie vary between Marvel-worthy and something out of Spy Kids. A member of one of the CGI teams working on Endgame reported that the studio filmed an actor in the wrong suit, and instead of refilming the scene, the team was asked to CGI the right suit over it. In fact, a huge number of scenes with suits in them have the suits CGI’d over to make them look more polished! The entire movie is shot knowing there’s going to be CGI assisting with difficult tricks of lighting and with iffy props the actor isn’t allowed to see, to prevent spoiler leakage. The final filming of Infinity War is known for how hard the studio worked to keep actors in the dark: green screens with limited direction, actors standing alone to recite lines, key plot components shot out of order to keep said actors from piecing the story together. A common theory holds that by doing this, Marvel makes it impossible to tell who has a bit part and who’s carrying the movie, and thus harder for actors to negotiate better pay while the contract can still be negotiated. Speaking of pay…

More of It, For Cost Savings

CGI studios are not often unionized. They make less money, so they cost less money, and because they don’t have to be physically on-set, they can be physically located anywhere, including countries with significantly lower cost of living. Hiring a CGI studio located outside of the U.S. can free up additional funding to spend on the actors themselves, or on the things the studio positively cannot outsource.

The problem with this is that it shortchanges every party, forcing them all to work for less money than they’d be paid in a fair environment. Stunt workers require insurance. Real fireworks for explosions require insurance and a lot of specialized expertise so nobody goes deaf or dies. Do you know what costs less? CGI! CGI explosions often don’t look as good as the real thing, but they cost less. If the story is good, fans are often willing to overlook an over-reliance on CGI where it’s unnecessary, which encourages the studio to lean on it more. They’ll put fake dogs into spots where real dogs could have gone safely, because it saves money on animal handling in the long term! To protect animals from accidents and cruelty, each animal has an agent meant to vouch for it – and that agent makes the real animal cost more. The studio sees this as a cost to be cut, not a part of doing business. When every corner is cut, it starts to affect the way the movie feels, for the worse. Actors talking to empty air don’t have the same presence as actors talking to other actors, handling real props, or wearing something real that’s going to be digitized in post.

Again – you cannot film in actual outer space, and Rocket Raccoon could not be recreated with a real raccoon, but Sebastian the Rat is so endearing in The Suicide Squad because he’s a real rat. He’s not CGI. The same goes for Marley in Marley & Me. Even if it makes the animal’s appearance cost more, their onscreen charisma is often worth it – if only the studio can be convinced.

More of It, Faster

The studios have become so accustomed to cheap, quick CGI that they consistently give the CGI houses less time than they really need to render something to perfection. The CGI in that new Dr. Strange movie could have looked better if the CGI professionals had been given more time to polish it, but releasing in the right season for the right price kept them from doing their best work. The CGI is meant to shortcut the practical effects specialists and assorted animal or explosives handlers out of the equation, and once it’s out of slack, it starts cutting into itself. It starts looking worse!

CGI doesn’t look worse because the technology has somehow gotten worse – it looks worse because big movie studios are using it to take shortcuts where no more shortcuts can realistically be had.

Twitter: A Case Study of how Modern Websites Break Down

Elizabeth Technology March 7, 2023

Gutting is Not Always the Solution

Twitter’s meltdown should serve as a warning – while it’s possible to coast off of minimal support for a little bit, it’s not actually all that easy to keep things running on a skeleton crew. And even if Twitter still had all of its staff, would it still be standing after all those changes?

For those of you who don’t use Twitter, Musk’s purchase of the company has been a pretty huge mess for the people working under him. He fired a large percentage of the staff (more than half of the company was laid off) and encouraged those not laid off to leave by insisting Twitter was going to go ‘hardcore’ and they’d have to return to their physical offices for long hours if they valued their job. Many simply sent a salute emoji in the company’s big Slack town square and jumped ship. The people left behind are a mixed bag – engineers that like Musk a lot, people trapped under Twitter’s employment due to work visas, and everybody in between. They’re not the company’s second-choice team, by any means, but there are fewer of them. A lot fewer. Some might even say it’s too few for the site to function with.

Broken New Features

The blue checkmark fiasco, where Twitter’s CEO promised that being able to simply buy verification would definitely not result in fraud, is one of a number of bad rollouts. A common mantra for startups is to ‘move fast and break things’, a strategy formulated when delaying choices or rollouts to make them not-broken could be the difference between receiving investor money (and customers count as investors here) or not. The iPhone, for example, famously did not work when Steve Jobs first demoed it. It crashed a lot, and it didn’t have great reception. But by demonstrating that everyone was super into the idea, he was able to rally and put out a better, more complete version of the device for customers to buy! Importantly, the iPhone wouldn’t crush the rest of Apple if it didn’t work, so they could afford to play fast with it.

However. Twitter is not a startup, is it? Nor is it releasing a fenced-in product totally unseen before – paid content tiers are new to Twitter, but pretty common everywhere else. (Had Twitter not downsized, it might have even still had the necessary expertise onboard to roll this feature out gracefully.) When a startup moves fast and breaks things, it’s forgivable, because the team might be creating something so groundbreaking that they can’t even keep up with the scope of their idea. When a big company does it, it looks… embarrassing. A team working out of a garage may not have multiple test environments for their app or product. What kind of billion-dollar company doesn’t have test environments?

What kind of billion-dollar company couldn’t see the potential for abuse, especially on a platform dedicated to discussion, either? People were tweeting about misusing this verification shortcut as soon as the announcement was made, and they still went through with it! This new, fast, broken feature shut down a valuable communication channel between big companies and their clients until moderation was put into place. The lack of moderation was supposed to be a feature, you see – Twitter’s previous verification system meant that verified accounts were actually verified by Twitter, not by money, and if they moderated it, it would be like Twitter was doing the verifying again. Again, this is an almost understandable mistake on a smaller platform with fewer people chomping at the bit to abuse it, but not for multi-billion dollar Twitter. It looked like official pharmaceutical companies were finally breaking good, and like the official channel for Nintendo USA had posted a picture of Mario flipping the bird. Customer support lines on Twitter were strangled by fakes. The response from some of those big companies was understandably angry. Musk attempted to smooth this over by bringing back the individually assigned verification checkmarks, but in gray, and then finally just dropped the idea.

Breaking Old Features

Twitter disabled the service that sent out the 2-Factor Authentication texts in an attempt to prune down microservices. Later, it broke the service that allowed users to tweet directly to their page, meaning only scheduled tweets would go through, when restricting API access. In theory, both actions were unfortunate side effects of trying to streamline user experience: by shutting down what Musk felt was bloatware, Twitter would run faster upon startup. That makes sense. However, Twitter runs on miles and miles of code. And they only have a quarter or so (maybe even less) of the team they had at the start of Musk’s takeover. The resultant ‘breaking’ of microservices like 2FA, and the over-restricting of Tweet permissions, is a direct result of losing the engineers who handled those features before deciding to tinker with them.

Musk’s choice to prune Twitter’s team down to the roots means that every update, every security hole patch, every choice affecting the infrastructure of the site, is now ten times more likely to result in bugs, and those bugs are going to take much longer to fix now.

But hey – at least there’s less overhead. That’s going to be important, because advertisers are not exactly pleased.

Making Simply Existing in the Space A Total Nightmare

The CEO’s promise to ‘stop stifling free speech’ on a platform that’s honestly pretty permissive (a side-effect of being an official channel of communication for a U.S. president, a role that comes with a huge number of responsibilities) certainly earned him brownie points with people who were decidedly not going to use this new, even looser set of rules kindly. People who’d been, say, banned over the use of certain words, in certain targeted circumstances. At the rate Musk was suggesting they loosen moderation, Twitter could have easily turned into 2 Kiwi 2 Farms, where the targets are actually on the same platform where the harassment campaigns are planned.

Ultimately, what changes he actually made didn’t matter, because the mere promise of maybe loosening the rules a bit brought a ton of vitriol to the surface anyway, and the remaining moderators at Twitter after Musk’s big ultimatum were not equipped to handle it. Discourse on Twitter was already a horrible, rotten place where nuance goes to die, but people just existing on the site, promoting their wares or keeping up with their favorite singers and actors, were now experiencing a worse version of the site where slurs were now part of the discourse.

Every step of this is an absolute nightmare for advertisers who don’t want an ad for Sunny-D appearing next to a tweet telling someone to off themselves. Musk’s total reign over Twitter combined with his unpredictable behavior means that he can’t even promise he’ll change, because yeah, he might – and what if he makes it even more of a nightmare?

Musk Himself is Part of The Problem

Stephen King declaring that he wasn’t going to pay $20 to hang around on Twitter as a verified user led to Musk very publicly changing the price point to $8 – the price that stuck for rollout. What an absolutely insane business choice! A single celebrity says ‘this costs too much’ (and because he’s a celebrity, you know it’s not because he’s incapable of paying it, the tech-sphere says) and then the price is actually changed. Can you imagine almost any other service just… going for it, like that? This is a perfect example of behavior that would have been funny if Musk had not burned away all his goodwill on stupid stuff: getting the California high-speed rail canceled in favor of his hyperloop, calling an account that uses publicly available info on jets a ‘stalker’, calling that cave diver who saved those kids a very mean name with no evidence, subjecting his staff to inhumane work hours, that thing with the horse, the Cybertruck delay, threatening to shut off Ukraine’s new Starlink internet even though the US Government paid for it – the list goes on.

When Musk made a flamethrower available for sale, it was funny! He talks directly to the people! Look, he’s reinventing cars from the ground up! He named his son a bunch of letters and numbers! When Musk said “both sides are making good points”, it was scary. He has so much money that if he decided to fund an ad campaign for a candidate, that candidate could win. When he appeared behind Dave Chappelle to shout “I’m rich, bitch!” at a show, it was… bizarre. The CEO of Twitter has such an investment in looking cool that he appeared on Rick and Morty as a version of himself with tusks. To his remaining fans, he’s a maverick! To advertisers who’d normally buy Twitter adspace, he’s a nightmare. To car owners, his investment in linking his reputation to Tesla makes Teslas unattractive – a nice electric Ford doesn’t come with all the baggage, and the quality control is more consistent. He could appear anywhere, any time, and nobody can stop him from embarrassing himself and all of the people invested in his brands.

Musk himself is a huge problem for Twitter. A bad CEO can destroy a company as readily as any disaster. People within his other companies report that allegedly, orders from him get filtered a couple of times so they actually make sense when they get where they’re going. While that might be hearsay, comparing Twitter’s past few months to Musk’s more successful companies suggests it’s got some truth to it somewhere. Twitter is not filtering his requests – it wasn’t an organization built with impulsive leaders, so orders generally made sense as they left the head office. Tesla was built around Musk, so the buffers were there the whole time.

For Twitter to survive Musk, it has to essentially remove him from himself.

Setting a Default Program to Open Your PDFs

Elizabeth Technology March 2, 2023

Adobe Acrobat Reader DC

Adobe Reader is a free program offered by Adobe that allows users to read PDFs. However, it cannot edit them – at least not for free!

Adobe Acrobat DC

Adobe Acrobat is a paid-subscription product that allows users to read and edit PDFs.

How Do You Set One As Your Default Program?

For Windows Users (10 and 11):

Right-click the PDF, choose ‘Open with’, click ‘Choose another app’, and then select your preferred PDF program from the list it provides. (If you have both Adobe Acrobat Reader DC and Adobe Acrobat DC, you should generally select Adobe Acrobat DC – it has more features!) If you want Windows to always use the program you’ve picked, select the checkbox at the bottom that says “Always use this app to open .pdf files”, and hit ‘OK’.

 For Apple Users:

Click the PDF file’s icon in the Finder to select it. Go to ‘File’ in the top left, and select ‘Get Info’. Click the triangle next to ‘Open with’ to open the dropdown menu of programs, and select your desired program from the list (if it’s not there, it may be available under ‘Other’). Click ‘Change All’ so the choice applies to every PDF, not just this one.

Moderator Bots: Do They Work?

Elizabeth Technology February 28, 2023

In a world of ever-growing conversations and large forums, moderating manpower is in high demand. Websites turn to bots. Is that really the best idea?

Children’s MMOs And Overzealous Bots

Poorly configured bots will spot curse words inside other, innocent words, so bot configuration is especially important to prevent kids from reverse-discovering a curse word. Kids’ games with open chat are notorious for this issue, even though they should have more attention and care put into their bot moderation than anywhere else. That’s the problem: they’ll go to extreme lengths to protect these children! The people programming auto-moderator bots get overaggressive and tell their bots ‘no exceptions. None.’ Context doesn’t matter: if the bot sees a combination of letters that adds up to a curse word, it has to be removed before other children see it. This, however, causes problems.

If someone tries to type ‘assess the situation’ they may end up with a message that says ‘***ess the situation’. They can confirm or deny words their friends told them were actually curse words by bouncing it off the chat filter. Children may be naïve, but they aren’t stupid!
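That failure mode is easy to reproduce. Here is a minimal sketch of a naive substring filter – the one-word banned list is a stand-in for any real game’s list:

```python
import re

BANNED = ["ass"]  # stand-in for a real profanity list

def naive_censor(text: str) -> str:
    # Replaces every occurrence of a banned string, even when it is
    # just a fragment of a longer, perfectly innocent word.
    for word in BANNED:
        text = re.sub(re.escape(word), "*" * len(word), text, flags=re.IGNORECASE)
    return text

print(naive_censor("assess the situation"))  # prints "***ess the situation"
```

The filter has no idea that ‘assess’ isn’t a curse word – it only sees the letters.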

Moderator bots were later also trained to spot curse words separated by spaces ‘l i k e t h i s’. This isn’t a bad idea – it just has to be more delicately configured. People will do their best to worm around content filters, and if spaces work, then they’ll use spaces to curse out other players. The problem is that such filters frequently don’t understand the context of the letters surrounding a match, and you get “Ay* **mells weird” instead of “Aya Ssmells weird” from some little kid’s typo.

The irony of all of this is that it creates a reverse censor effect – clean words seem dirty because the bot’s censored them, words like ‘Assassinate’, or “Scattered”, things kids might use in a game. Typos under this system turn into a fount of forbidden knowledge. People will worm around bot moderators, but – especially on children’s forums – it’s important that the bot understands context, at least a little. If it can’t do that, a human teammate is necessary to whitelist weird word combinations as they appear.
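One common partial fix for this (sometimes called the ‘Scunthorpe problem’) is whole-word matching, so fragments inside clean words pass through. A sketch, again with a toy one-word banned list:

```python
import re

BANNED = {"ass"}  # toy list; a real one would be much larger

# \b anchors matches to word boundaries, so the banned string inside
# longer words ('assess', 'assassinate') is left alone.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, BANNED)) + r")\b", re.IGNORECASE
)

def boundary_censor(text: str) -> str:
    return PATTERN.sub(lambda m: "*" * len(m.group()), text)

print(boundary_censor("assess the situation"))  # unchanged
print(boundary_censor("you ass"))               # "you ***"
```

This doesn’t solve context (or evasion), but it stops the filter from manufacturing forbidden-looking words out of clean ones.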

Paleontology and Oversized Profanity Libraries

There are many bones. And if you were going to single out a specific bone (in the context of paleontology) just to cause problems, which bone would you pick? When a virtual paleontology conference’s Q&A software arrived with a built-in censor library, that library picked the pubic bone, alongside a host of other totally normal words like ‘stream’ and ‘crack’. There were curse words in the library too, but, of course, as at most normal, professional conferences, the curse words did not appear nearly as much as the other words used in completely scientific contexts.

As in the children’s MMO example, it wasn’t an innuendo to say ‘the bone was found in a stream’ until the censor library did the equivalent of adding the flirty wink emoji to the end of the statement. Since tone can’t be conveyed over text except by word choice, the computer choosing to single out one definition of ‘stream’ and apply it to all uses is what made it a dirty word. Besides the words with no connection to actual profanity, pubic bones do come up quite a lot when talking about fossils, because they provide information about how fossilized animals would have walked. The pubic bone is the ‘front’ bone in the pelvis: two-legged animals have a differently shaped one than four-legged ones, and animals that walk totally upright like humans have differently shaped ones than animals that ‘lean forwards’, like birds.

Why make a moderation bot too strict to have conversations around? They didn’t make the bot! The conference organizers were using a pre-made program that included its own profanity library. Buying software that includes censorship already baked in sounds like a great idea! If applied correctly, it can save everyone time and prevent profanity from appearing where it shouldn’t, even anonymously. However, ask two people what profanity is, and you’ll get two different answers. Everyone has a different threshold for professional language, so it’s better to build a library of the ‘obvious’ ones and go from there based on the event. The best censoring software is the kind you don’t have to use. Professional events are better off stating their expectations before frustrating their attendees with software that causes more harm than good.

Weaponizing Profanity Filters

Twitter had a bit of a kerfuffle involving the city of Memphis. People using the word Memphis in a tweet got a temporary ban. Then, a rash of baiting other Twitter users into using Memphis hit once word got around. Memphis getting users banned was the result of a bug, but the incident itself highlights issues with profanity filters. It’s possible to bait people into using banned words, especially if they aren’t inherently a profane word when used out of context.

For example, some online games will filter out the very real countries of Niger and Nigeria, to prevent misspellings of a racial slur from evading a deserved ban. Why would North Americans ever be discussing African countries over a game set in Russia, after all? But, by including them, they’ve created a way to troll other players without saying anything profane (in context). Baiting another user into answering questions about the countries will result in them getting banned, not the question-asker. The person who answered now has to contact the human support line to get unbanned, or wait for their timeout to end, which is annoying and inconvenient for them. The anti-profanity filter has been weaponized!
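One way to defuse this particular trap is an allowlist checked before the ban fires. This is a hedged sketch: the catch-all pattern below is a hypothetical stand-in for a misspelling-catching rule, and both lists are illustrative, not taken from any real game:

```python
import re

# Hypothetical overbroad pattern meant to catch misspellings; on its
# own it also catches real country names and ordinary words.
BANNED_PATTERN = re.compile(r"\bnig\w+\b", re.IGNORECASE)

# Legitimate words the pattern would otherwise flag.
ALLOWLIST = {"niger", "nigeria", "nigerian", "nigerians", "night", "nights"}

def should_flag(message: str) -> bool:
    # Only flag matches that are NOT on the allowlist, so answering a
    # geography question no longer earns an automatic ban.
    return any(
        m.group().lower() not in ALLOWLIST
        for m in BANNED_PATTERN.finditer(message)
    )

print(should_flag("Nigeria is in West Africa"))  # False
```

The allowlist still needs human upkeep – which is rather the point.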

Building a positive culture around a game takes a lot of effort, and profanity filters are an integral part of keeping arsonists and trolls out. Nobody should feel targeted in game chat for reasons outside the game. However, just like with every example mentioned here, humans should be on call to un-ban and un-block users who were genuinely attempting to answer a question. Err on the side of caution, both with the software and customer support.

Are Bots a Cure?

Short answer: no. Most good moderation teams have at least one human on them in case the bot screws up. Preferably, they’ll be able to respond to ‘deleted comment’ or ‘banned user’ complaints right away. Even better, if the bots are configured well enough, they won’t be jumping the gun often enough to need a whole team!

It’s just very difficult to make a bot that understands people well enough to understand every instance of bad language.

If you’re running a forum and you don’t want people using profanity, you will censor the profane words. A bot could do that. But then there’s things like LeetSpeak, where users will spell the colloquial name for a donkey with two fives in place of the ‘s’s. Do you ban that too? Sure, you could add that to the bot’s library. But then they change the A to a 4. Do you censor that too? If you do, people will push to figure out what is and isn’t acceptable to your bots, and they will. Not. Stop.
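Chasing every variant by hand is a losing game. A slightly more maintainable sketch (toy banned list again) normalizes common substitutions back to letters before checking, instead of enumerating ‘a55’, ‘4ss’, ‘455’, and so on one at a time:

```python
# Map common LeetSpeak substitutions back to letters before checking
# the banned list.
LEET_MAP = str.maketrans({
    "4": "a", "@": "a", "3": "e", "1": "i",
    "0": "o", "5": "s", "$": "s", "7": "t",
})

BANNED = {"ass"}  # toy list

def contains_banned(text: str) -> bool:
    normalized = text.lower().translate(LEET_MAP)
    # Whole-word check, so 'pass' doesn't trip the filter.
    return any(word in BANNED for word in normalized.split())

print(contains_banned("a55"))   # True: '5' normalizes to 's'
print(contains_banned("455"))   # True: '4' and '5' both normalize
print(contains_banned("pass"))  # False
```

Even this only buys time – users will invent substitutions the map doesn’t cover, which is exactly the arms race described above.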

And then there’s things like homophones! TikTok, a popular video app, has a fairly robust profanity filter for text. Videos with curse words and sensitive topics in them are noticeably less popular than ones without those words, due to TikTok’s algorithm.  However, people making videos on sensitive topics use phrases like ‘Sewer Slide’ and ‘Home of Phobia’ to evade the bots. The bots, then, have not stopped anything. These conversations will happen no matter what TikTok’s moderators want, and banning the word ‘sewer’ is only displacing the problem. If you don’t want users discussing these things on your site, you’ll have to have human moderators at some point.

Language is dynamic, and bots simply can’t keep up. It takes real people to study language – why wouldn’t it take real people to moderate it online?


Curb-Cutting Effect In Software

Elizabeth Technology February 23, 2023

Certain design choices make software easier to use – and there’s no reason not to use them.

What is the curb-cutting effect?

Most sidewalks have a dip in the curb, where the concrete comes down to meet the street. This is designed for accessibility: people in wheelchairs would struggle to cross the street if the curb didn’t do this. They could even be injured trying to get up and down over the curb, or be forced to roll along in the street until the next business’s loading ramp met the road. However, the dip also makes life easier for everyone on wheels – baby carriages, skateboarders, bicyclists, roller-skaters, etc. don’t have to dismount and carry their transportation over the curb. Elderly folks, able-bodied people, and people suffering from conditions like drop-foot can now aim for the dipped curb as well to make tripping less likely. The curb being carved out has benefited everyone, even though it was only put in place for the people in wheelchairs.

Accessibility features, when done right, can benefit everyone, not just the people they were made for. This is the curb-cutting effect in action!

Menu Design

Good, clean, accessible menus benefit everyone, not just sight- or mobility-impaired people. The faster and easier it is for someone to find information, the less likely they are to leave the site. How many times have you tried to find something specific on a website, only to give up and Google it again to find it on the same website? For example, say you’re looking for a local museum’s hours. They have a drop-down menu along the top, but the options are only there when you hover.

You see information about the exhibits, you see information about the team, about the funding, about the history of the museum itself, you see where to buy tickets or make donations – but the hours are nowhere to be found. You give up, turn around, and instead of engaging with the website, you engage with Google to find the hours. This is obviously annoying for the end-user, especially since that info may be outdated! For people with dyslexia, people with cognitive delays, and sight-impaired people, it’s nearly impossible to navigate, and they end up calling instead.

If your website is really, really big, it’s better to include a table of contents menu instead of a dropdown. You can get more specific about what exactly is in each section, so users can navigate the site faster. If your website’s pretty small, a menu that just brings the user to the right point on the page may work – the user can get there either through scrolling or the menu, so they can Ctrl + F to find the museum’s hours. This also helps users in a hurry to find info!  If your website’s pretty medium, a solid, non-retreating, clickable menu is more helpful than ‘hover’ menus. All of these options also have the benefit of making the website easier to use on mobile devices!

Readable Text

Screen readers are a popular choice for blind and sight-impaired folks, but a critical problem they face is text embedded in images rather than plain text the reader can parse. A lot of information is lost when the post is a screenshot of a tweet or a meme with the caption built in, and a text caption can give at least some of that information back. Captions help screen readers out tremendously, and make websites like Reddit, Facebook, and Tumblr, which all use a lot of pictures, more accessible to the blind. However…

Readable text is also easier to copy-paste, easier to cite or quote, and easier to search for. If you remember part of a quote from someone on LinkedIn and want to share it elsewhere, for example, you’ll probably search for it on Google and see if the website crawlers can find it. If that quote was shared in a photo, the website crawlers won’t be able to find it – there’s no text for them to read. You’ll be forced to either plagiarize (don’t do that), drop the quote, or scroll through your feed til you find it. All of that could be avoided if the person had captioned their photo.

Closed-Captions and Transcript Options

Text captions aren’t just for the blind with screenreaders. Many deaf and hard-of-hearing people can lipread, but many more can’t, and some would just rather not if other options, like captions, are available. Besides, lipreading isn’t perfect anyway. Think about shows where someone’s saying something off-screen, or mouths are covered, like Cops, or Grey’s Anatomy. Captions make these shows watchable for people who don’t want to spend half a show guessing what the person with the surgical mask is saying.

Captions also help hearing people. If you’ve learned a language in a classroom setting, including captions, even if they’re also in that language, helps comprehension. Captions make it possible to mute a training video in the office and still gain the information, without disturbing coworkers.

Transcription options are another great example. Most adults read faster than they can speak – a transcript that takes five minutes to read may take ten or fifteen to speak out loud. When a website has no textual information about a video they’re hosting, it suddenly takes 20 minutes of video to gain information that could be read in seven minutes, without the annoying stop-start of rewinding to critical information, watching it forwards again, rewinding when you can’t figure it out, rewatching, watching further to see if they mention it again, etc. etc. It’s so much more efficient to have a transcript. Transcripts also allow readers to use Ctrl + F if the transcript is digital, so singling out specific lines of information is easy as pie.

Besides video tutorials and learning material, poor audio mixing can make captions mandatory. Tenet was an entirely different movie with the captions on. I’m not joking – important information was completely incomprehensible the first time I watched it, and I’m not hard-of-hearing. Captions were the only way to get all the information out of the movie!

Better Mouse-Input Technology

Part of the reason so many things behave themselves with gaming mice is accessible software. Not everyone has the strength or dexterity to use a mouse and keyboard, so software designers comply with the ADA by letting the software treat any generic input as a mouse click, if the operating system has been set up that way. Things like on-screen keyboards further help people with limited movement, and voice-to-text and voice commands enable folks like Stephen Hawking to communicate and control their surroundings better.

Game designers took the idea a step further and allowed users to key-bind to any set of keys they like. Players that only have enough mobility to reach half a keyboard, or use a thirteen-button gaming mouse, alongside people going for speed records and people on small desks, can now set the buttons to be on a half-board if they so desire!

Those onscreen keyboards also help if you’re using your laptop as your media center and have it semi-permanently hooked up to a bigger screen. Are you going to get up off the couch to look up a video on Youtube? With the onscreen keyboard, all you need is a Bluetooth mouse, and you don’t have to get up anymore!

These software choices are not only good for ADA compliance – they also make your website more pleasant to use!


Wii: A Masterpiece of its Time

Elizabeth Technology February 21, 2023

The Wii, a motion-controller game console, used a combination of things to make sure it read your movements. The Wii was a truly special device!

Hardy Equipment

If you could only look at consoles to compare them, the Wii is at an advantage. It stands straight up, like a book on the shelf! It’s also much smaller. Other consoles can be stood up straight, but it’s not advisable – if doing so blocks the vent, the console can overheat and then die. The Playstation 5 recently advised against flipping the device on its side because the cooling system could break down and leak, which is not good.  

Aside from configuration, the Wii is the weakest of its generation of consoles, but that’s actually still a selling point – the device was so cheap because almost all of the interior computing hardware came ‘off the shelf’, which made it weaker, but meant the consumer was paying less for a device like no other on the market.

The Wii could sense motion in a way that other consoles simply had not dared to try – no doubt an Xbox or Playstation attempt would have resulted in a machine/controller pack that cost three times as much as the Wii did.

Differing Technologies

The Kinect took a more ambitious approach to motion detection, but it was also more complex and more expensive. And Xbox’s mishandling of the new ‘always on’ era of gaming made it pretty contentious. Playstation had the most success by simply trying to emulate what the Wii had going for it.

And what did the Wii have going for it? It used a sensor bar in conjunction with the console itself to sense where the controller was pointing. The sensor bar itself didn’t actually do anything but light up!

This meant that in a pinch, you could simulate a missing Wii bar with a couple of candles – the machine is using the sensor bar as a frame of reference for where the controller is pointing at any given time. Within the controller itself was an accelerometer, which allowed the machine to tell if you were spinning, shaking, swinging, or otherwise moving the remote. Nintendo even later produced an optional set of control enhancers (the Wii Motion Plus) for games that required even finer tuning. The only downside was that controllers sometimes went through TVs or windows, which eventually stopped happening once users adjusted to the unfamiliar motions of bowling. 
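The geometry behind that pointing trick is simple enough to sketch. This is a simplified illustration, not Nintendo’s actual code: the remote’s IR camera (which reports blob positions on a 1024×768 grid) sees the sensor bar’s two lights, and the midpoint of the two blobs, inverted, maps to a cursor position on screen:

```python
# The Wii remote's IR camera reports positions on a 1024x768 grid.
CAM_W, CAM_H = 1024, 768

def pointer_position(blob_a, blob_b, screen_w=1920, screen_h=1080):
    # Midpoint of the two sensor-bar lights as seen by the remote.
    mid_x = (blob_a[0] + blob_b[0]) / 2
    mid_y = (blob_a[1] + blob_b[1]) / 2
    # Invert: when the remote pans right, the lights drift left in its view.
    x = (1 - mid_x / CAM_W) * screen_w
    y = (1 - mid_y / CAM_H) * screen_h
    return x, y

# Blobs centered in the camera's view -> cursor at the screen's center.
print(pointer_position((492, 384), (532, 384)))  # (960.0, 540.0)
```

The real console also uses the spacing between the two blobs to estimate distance, and the accelerometer fills in the motions the camera can’t see – but the candle trick works precisely because the bar is just two reference lights.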

Good Games

One of the biggest deciders of a console’s fate back in the 2000s was what games would be available on launch day. Wonder why so many consoles come with games already downloaded to them? It’s because that system benefits every party involved, and may swing the purchasing party on whether or not to get the special edition of a particular console. Outside of built-ins, the console has to attract studios to make games, otherwise you end up with a catalogue full of repeats, sometimes even made by the console developers themselves. The Stadia, the Ouya, and a number of other small consoles made great platforms that didn’t have any games on them – none attractive enough to swing the purchaser, anyway.

The Wii, because it was made by Nintendo, was already hand-in-hand with a number of games from a brand known for being family friendly. For families looking for a new console that a child of any age could play, this was a fantastic option. It had zombie games alongside party games and sport simulators. It really was a game-changer.

Bad Sequel

Given all of this, the most disappointing part of the Wii is the Wii U, the next console in the line. Not enough was done to ensure users knew the Wii U was a different console. It sounds ridiculous, but it was a real problem! The Wii U looked just like the Wii to someone who didn’t have either, and the game cases didn’t do a great job of telling users which console they were buying for, so once it came out, there was always the chance that a well-meaning relative would buy the wrong edition of a game.

Similarly, Nintendo (as with all its products) didn’t make enough Wiis for the first run… and then broke pattern by drastically overproducing the Wii U, a business decision that haunts the choices made by execs to this day (it was impossible to get a Switch for a good three or so months after launch).

Still – the Wii did set standards for what AR really could be, even without a helmet or anything too fancy. In a way, it’s got tons of sequels. The Playstation started using motion controls after the Wii proved it was not only possible, it was fun! And it opened the door to gameplay mechanics that engineers and programmers could have only dreamed of.