Sony’s DRM Nightmare

Elizabeth Technology March 21, 2024

In 2005, an organization was covertly installing a program similar to a rootkit onto consumer devices, without warning. For those who haven’t heard the term before, a rootkit is simply a program designed to remain undetectable on a device. They aren’t all bad, but their difficult-to-detect nature and ability to evade even aggressive anti-virus software make them a top-of-the-line tool for hackers. Back to the story.

The rootkit was on the lookout for ‘suspicious activity’, and if it detected any, it would quietly alert the parent company. However, even if you had nothing to hide, you still had something to fear: the rootkit left a gaping security hole, and a smart enough hacker could piggyback off of it to get Trojan Horses, Worms, and other nasty bugs in without alerting the computer that “hey, there’s an .exe file doing weird stuff!”

The rootkit was designed to hide itself, and it hid the bugs behind it too. There was no mention of any of this in the EULA for the program that carried the rootkit. The parent company hadn’t meant to leave a backdoor, but they did, and attempts to fix it without removing their own program just made the problem worse. The “uninstaller” they offered only buried the program deeper in the system, and actually trying to remove it could brick the computer, depending on which version you got. They’d really screwed themselves, and they hadn’t expected to get caught.

This wasn’t some Russian hacking scheme, or some government overreach – it was Sony, attempting to keep copyrighted material off of pirating websites. Talk about an overreaction.

The History

At some point, a company has to admit it would rather ruin the legitimate user’s experience than let a pirate go unpunished. That’s very understandable: stealing is wrong, and smug pirates behaving like they’ve gotten one over on ‘the system’ are frustrating. Ordinary responses to this can be anything from asking for the license # on the inside of the clear case to more subtly ruining the audio quality of pirated copies. This is a normal level of copyright protection. Very determined pirates could still get around these measures, but hey, you can’t spend all your resources on the fringe cases.

Companies are aware of this, and some begin to factor ‘unstoppable piracy’ into their calculations – you know, like grocery stores factor in ‘lifting loss’ and spoiled produce. They usually determine they’d be spending more on preventative measures than they’d be keeping on the shelves. Theft is wrong, but so are littering and driving without a license; somehow, all three still happen anyway. Sony, though, is very mad that pirates are getting away with fresh content, and they want to do the equivalent of TSA pat-downs on everybody at the exit of the grocery store to stop a small percentage of thieves. They don’t care anymore; nobody is going to get away with it.

Was it Reasonable?

Napster and LimeWire are making inroads into the music industry’s profits, and 2005 is around the peak. Pirating copyrighted content has only gotten easier with the rise of the internet, and Sony realizes it’s nigh impossible to find the illegitimate downloaders; uploaders are only marginally easier. So they decide to go for the source, and they decide to hit hard.

“The industry will take whatever steps it needs to protect itself and protect its revenue streams… It will not lose that revenue stream, no matter what… Sony is going to take aggressive steps to stop this. We will develop technology that transcends the individual user. We will firewall Napster at source – we will block it at your cable company. We will block it at your phone company. We will block it at your ISP. We will firewall it at your PC… These strategies are being aggressively pursued because there is simply too much at stake.” – Sony Senior VP Steve Heckler

This quote dates back to 2000, several years before Sony merged with another label, BMG. BMG had already had an incident in Europe in the early 2000s, when they’d released a CD without warning users of the copy protection inside. Apparently, burning money to replace those CDs (and burning goodwill) was not enough of a lesson, and Sony and BMG together prepared to take a stand against pirates.

The Problem

They’re going after the big boys, the folks downloading music to upload everywhere else…for free.

These are the people depressing profits, in theory. Some companies theorize that once these people are gone, the people passively pirating by downloading stuff from them will also disappear and go back to buying the content. They’re somewhat right, and this audience shrinks over time. More on that later.

This is illegal and very annoying! The estimated lost sales from piracy were in the billions, and many companies were beginning to look at more intense DRM: Digital Rights Management.

To some people, DRM is the root of all evil, the seed of the eventual downfall of consumers’ rights. After Sony’s screw-up, they were right to call it as such. John Deere, Apple, Sony, Adobe, etc. are all slowly eating away at their own best features for the sake of pushing users into proprietary software – software users aren’t allowed to repair because of DRM. Take Deere: if a new Deere tractor detects a common independent repairman’s diagnostic software, it will stop working until you call out a Deere technician. This obviously drives up demand for Deere technicians, and it’s horribly restrictive to the user. Lawsuits over the practice are in progress right now, because a lockout at the wrong time can cost a farmer their harvest – and with it, their farm.

To others, DRM is an essential part of the free market. Companies should be allowed to protect what they made, and if users find their methods extreme, they shouldn’t have bought it. And in less extreme circumstances, they’re right! That’s what the EULA, the End User License Agreement, is for. The user can decide if they’re willing to put up with the DRM specified in the Agreement, and if they’re not, they don’t have to buy it. ‘If you pirate this, it will only play static’ is reasonable.

Sure, some super-cheapskate who found a sketchy download off some sketchy site is going to listen to static with a Hint of Music, but the average user would rather buy the disc and be done with it. If the company can make the ripped upload sound like garbage once it’s off its home CD, they’ve won. The company has successfully used DRM to keep its honest customers honest and any would-be pirates away – and it did so without destroying anyone’s computer. As Stewart Baker of the Department of Homeland Security said, “it’s your intellectual property – it’s not your computer”.

Doing it this way means normal consumers still get a high-quality product, and if the DRM is limited entirely to the content itself, there’s no risk of it coming back to bite the company in the butt.

Still, if you really disagree with DRM, there were companies that successfully reduced their piracy problems in other ways. Some found that guilt was enough; others found that once certain websites were gone, their piracy problems disappeared too. Warning folks that piracy was still a crime got the people who didn’t know any better to stop. Fines did a number on the folks who were too bold or too careless to avoid being tracked by non-DRM means. And the people who were pirating because it was more convenient? They eased off when better paid options became available. Sony’s problem could have been solved in a lot of ways!

Besides, Sony wasn’t struggling. Lost sales are not the same as losses! Companies are still making profit, just not as much as they’d like. Property is not being damaged, and nobody is experiencing physical harm as a result of pirating.

The Response

Sony’s DRM was a severe overreaction to the problem at hand, and it led to several lawsuits. As said at the beginning, Sony had not only installed software without the user’s knowledge, but had then left a big entry point for security threats to get in undetected. Hundreds of thousands of networks were affected, some of them government networks. Once someone blew the lid off the DRM, Sony released a cover-up “uninstaller” that just hid the rootkit better and installed more DRM software on the user’s device.

This does not help!

The blown cover for the rootkit meant that black-hat hacking organizations could tool around and create something that could get into anything with that rootkit on it, undetected. Eventually Sony was forced to admit this was wrong, but not before screwing over a couple million people who just wanted to listen to Santana or Celine Dion from a CD they paid for. Over pirates.

Yeah, there’s some lost profit – but it doesn’t outweigh the regular customers.

The Aftermath

Sony’s first instinct is to hide it. As mentioned above, the uninstaller they made available didn’t actually uninstall anything, and some users reported system crashes and outright bricked machines when the uninstaller’s poor programming interacted with the rest of the device’s software.

Their second decision is to lie – ‘the DRM has no backdoors and doesn’t pose a risk to your computer’s security’. This is demonstrably untrue, and given that they were already in the beginning stages of recall, could be considered a deliberate lie.

Sony’s third action is to recall the discs with the DRM on it, but they don’t get all of the discs. Some users aren’t sure if their disc is affected or not, and even non-profit organizations dedicated to maintaining free internet can’t figure out what discs have it and what discs don’t. The best they can do is a partial list. Stores in New York and Boston are still selling the discs three weeks after the recall. However, users do get to swap their disc with an unprotected one through the mail. Sony seems to have acknowledged their screw-up at this point.

Sony’s fourth action is more a consequence – they stick a class-action lawsuit sign-up notice on their home website, and affected users can claim damages up until 2006. Class-action suits and lawsuits filed by individual states start to drag down Sony’s profits more than the piracy ever did, and the end result is a mandate to put warnings on the covers of discs and to stop using DRM that could damage a user’s computer. DRM is still allowed; it just can’t be capable of wrecking a computer to protect a song license. The FTC considered the whole affair a breach of federal law, stating that Sony had engaged in deceptive and unfair business practices. Sounds about right – consumers wouldn’t have bought a disc that installed a rootkit without their knowledge. From conception to execution, this was a moral, ethical, and legal mistake. While pirating is wrong, it’s possible to be more wrong trying to stop it.

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal

https://us.norton.com/internetsecurity-malware-what-is-a-rootkit-and-how-to-stop-them.html

https://www.wired.com/2006/12/sony-settles-bm/

https://www.theregister.com/2005/11/01/sony_rootkit_drm/

https://money.cnn.com/2005/06/24/news/international/music_piracy/

https://www.networkworld.com/article/2998251/sony-bmg-rootkit-scandal-10-years-later.html

https://fsfe.org/activities/drm/sony-rootkit-fiasco.en.html

https://digitalscholarship.unlv.edu/cgi/viewcontent.cgi?article=4058&context=thesesdissertations

https://www.networkworld.com/article/2194292/sony-bmg-rootkit-scandal–5-years-later.html

Don’t Delete Your System32

Elizabeth Technology March 14, 2024

System 32 is essentially the heart of the computer’s software. Task manager, the boot-up instructions, and hardware-to-software system files are all located in the System 32 file folder. It’s very important. Do not delete it.

This folder is not a secret, but what exactly it’s responsible for wasn’t always public knowledge. After all, Windows keeps everything very neat and tidy; everything from photos and documents to games and applications stays in its own little cubby hole. The actual System 32 folder is a couple of folders deep already – exploratory digging might result in someone finding it by themselves, but why would they ever delete it? That was Microsoft’s approach: make everything the user wants easy to find so only experts and programmers ever have to think about System 32. Even better, it would (usually) still work even from the recycle bin, and it couldn’t be deleted with a simple click; there was no way a user could delete this folder without serious work. The hope was that most people would never even notice it.

They were right, and this was enough. For a time.

The Beginning

It’s the mid to late 2000s, and anonymous internet message boards are largely unrecognized and somewhat unmoderated. It serves as the Wild West of the internet, the last dark corner in a time where the rest of said internet is at least glimpsable with Google. Computers are expensive, but not Hope Diamond expensive, and the thought that someone would tell an un-monitored kid online to break theirs just for the heck of it was kind of absurd. Keyword: un-monitored. Underage children were getting into all sorts of sites they shouldn’t have, including internet messaging boards.

Knowing this, the people falling for the system32 prank are obviously not all just gullible adults.

Interim Growth

The site responsible for the meme (at the time) made it very clear that this was not a place for children, and the nature of the site’s set-up made it nigh impossible for the average user to be tracked or traced by another user. No username? No IP tracking? Zero consequences. There were mods, but the mods were few in number, and more concerned with activities that were genuinely very illegal and could lead to the site’s shut-down. Users convincing strangers to mix chemicals together or de-magnetize their hard drive was less pressing unless it also resulted in something illegal.

The meme really got going when one user came back to complain that their computer would no longer start after they followed one of the first troll posts. That post gave instructions on how to delete System 32 while framing it as ‘bloatware’ (unnecessary pre-installed software that slows a device down). If you have no idea what makes a computer run, it sounded like good advice.

When users caught on that some versions of Windows would refuse to outright delete System 32, they moved on and started including console commands, something the average user (at the time) had no experience with. Someone with little or no knowledge of the subject wouldn’t know what they were looking at. A button press, some typing, and an @echo command. Easy to follow… too easy.

Mainstream Dilution

Instructions for deleting System 32 to ‘speed up the computer’ or ‘make the computer quieter’ appeared on more public sites some time in 2008. I Can Haz Cheezburger is likely the largest at this point, a forum centered around funny images of cats and other assorted animals, with a penchant for memes including advice, good or bad. Soap Ice, the idea that you could freeze Dawn dish soap and water in a puck of ice, and then use it to ‘shower’ after a trip to the gym or park, was one of these ‘advice’ memes. This does not work for the reasons you’d expect, but it’s less likely to kill someone than bathroom cleaner ‘crystal’ hacks. ‘Advice’ to delete System 32 was a natural fit, and it spread like wildfire.

With the meme’s spread onto bigger, more strictly moderated websites, articles start coming out advising people not to delete System 32. Even better, memes start circulating on websites like I Can Haz Cheezburger to warn users directly. It doesn’t stop all of it – no good-advice meme can stop a person determined to use a hack like Soap Ice – but it puts a major dent in the spread. With fewer people taking the bait, and others ready to comment ‘don’t do this!’ wherever the post appears, the meme finally slows down, eventually to a crawl. “Delete System 32” is now used ironically, because knowledge of it is so widespread that someone not knowing is rare.

And so the rise and fall of a meme is recorded. This is one of the first of its kind, but it’s far from the last.

Memory Terms

Elizabeth Technology March 7, 2024

The first Bit of Data

A bit is a single character in binary; the word actually comes from shortening “binary digit”. A bit is the simplest possible piece of data a machine can read: either a 1 or a 0. A yes or a no. True or false. The bit has been around longer than computers, originating with the punch cards that analog machines “read” as far back as the 1700s.

Processing

If you’ve recently upgraded to Windows 10, you may recall having to check whether your computer is 32 bit or 64 bit. The number describes how much memory the computer’s processor can address: is it equipped to use 32-bit addresses or 64-bit ones? A 32 bit computer has fewer possible memory addresses in its CPU registers – 2^32 of them, which works out to not much more than 4 GB of addressable memory – while a 64 bit computer can, in theory, use 2^64 addresses, far more than any machine actually ships with (in practice, operating systems and hardware cap the usable amount at a few terabytes). This doesn’t mean a 32 bit computer can only store 4 GB of data; it just limits how much memory it can give names to at once. The files themselves can be nearly any size as long as there’s storage available for them.
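
For a rough sense of scale, here’s a minimal sketch (plain Python, not tied to any real machine or operating system) of how those limits fall out of the pointer width:

```python
# Back-of-envelope address-space math for 32-bit vs 64-bit pointers.
# Real machines and operating systems impose lower practical limits
# than the theoretical maximum shown here.

def addressable_bytes(bit_width: int) -> int:
    """Number of distinct byte addresses a pointer of this width can name."""
    return 2 ** bit_width

for width in (32, 64):
    total = addressable_bytes(width)
    print(f"{width}-bit: {total:,} addresses (~{total / 2**30:,.0f} GiB)")

# 32-bit: 4,294,967,296 addresses (~4 GiB)
# 64-bit: 18,446,744,073,709,551,616 addresses (~17,179,869,184 GiB)
```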

Then, a Byte

A byte is usually eight bits, in compliance with international standard – but it didn’t always have to be. It used to be however long was needed to show a character on screen, usually somewhere between two and ten bits, with exceptions down to one and up to forty-eight bits for certain characters. Eight-bit bytes became the standard because they were convenient for the new generation of microprocessors in the 70s: eight bits gives 256 possible arrangements of ones and zeroes. Sixteen bits would give far more possibilities than needed and could slow the computer down, while four bits only gives sixteen – not even enough for the alphabet – so phrases of bits would have had to be combined anyway.

Alphabet

8 sounds like the perfect combination of length and possible complexity, at least with the benefit of hindsight. The government had struggled with incompatible systems across branches due to byte size before 8-bit came along. ASCII was the compromise, at seven bits per byte, and when commercial microprocessors came along in the 1970s, they were forced to compromise again with ASCII Extended, so that commercial and government systems could communicate.

However, not all extended ASCII versions contained the same additions, so Unicode was formed later to bridge the gaps between versions. Unicode, a character encoding standard that includes the ASCII characters within it, is one of the most common character sets out there, and its most popular encoding, UTF-8, is built on eight-bit units. You’ll still run into encoding mismatches – if you’ve ever opened an article and seen little boxes where characters should be, it’s because the text was written with a bigger character set than the one being used to display it. The viewer doesn’t know what goes there, so it puts a placeholder!
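
A quick way to see the ASCII/Unicode split in practice is with Python’s built-in string handling (just an illustration of the encodings themselves, not of any particular system mentioned above):

```python
# ASCII covers plain English characters; anything beyond that needs a
# larger character set such as Unicode (shown here via UTF-8).

plain = "Hello"
fancy = "Héllo ✓"

print([ord(c) for c in plain])        # [72, 101, 108, 108, 111] -- all below 128
print(plain.encode("ascii"))          # b'Hello' -- fits in 7-bit ASCII

print(fancy.encode("utf-8"))          # é and ✓ become multi-byte sequences
try:
    fancy.encode("ascii")
except UnicodeEncodeError as err:
    # An ASCII-only viewer has no code for these characters, which is why
    # you see boxes or '?' placeholders instead.
    print("Not representable in ASCII:", err)
```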

Kilobyte

1,000 bytes of storage forms a kilobyte, or KB. This is the smallest unit of measure that the average computer user is likely to see written out on their device – not much can be done with less than 1,000 bytes. The smallest document I can currently find on my device is an Excel file with two sheets and no equations in it; that takes up 9 KB. A downloadable “pen” for an art program on my device takes up 2 KB.

Computers before Windows had about 640 KB to work with, not including memory dedicated to essential operations.

The original Donkey Kong machines had approximately 20 kilobytes of content for the entire game.

Megabyte

A megabyte is 1 million bytes, or 1,000 kilobytes. Computers had made some progress post-relays, moving to hard disks for internal memory. IBM’s first computer containing a megabyte (or two) of storage, the System 355, was huge. It was also one of the first models to use disk drives, which read faster than tapes. In 1970, if users didn’t want a fridge, they could invest in the now desk-sized 3 million bytes on IBM’s model 165 computers, an improvement over GE’s 2.3 million bytes the year before – and the year before that, Univac had unveiled a new machine with separate cores tied together to give users between 14 and 58 megabytes of capacity in Byte Magazine, at the cost of space. IBM’s System 360 could reach up to 233 megabytes with auxiliary storage, but its size was…prohibitive, reminiscent of that first System 355.

Tapes and drums were competitive with the disk format for a while, but ultimately disk and solid state improved faster and won out (right now it’s looking more and more like SSDs, those solid state drives, will outcompete disks in the future too). During the 80s, the technology improved so much that hard disks became standard (IBM released a home computer with 10 MBs of storage in 1983) and floppy disks acted as media transport.

DOOM comes out in the 1990s and takes up 2.39 MB for its downloadable file, with smaller, DLC-like packs of fan-created mods coming out along the way.

Gigabyte

A gigabyte is 1 billion bytes, or 1,000 megabytes. In 1980, IBM releases another fridge – but this one stores up to a gigabyte of information! According to the Merriam-Webster dictionary, you can pronounce gigabyte as “jig-ga-bite”, which just… feels wrong. Back in 1974, IBM had released a 20-foot-long beast of a storage system that stored up to 236 GB of data on magnetic tape.

In 2000, the first USB sticks (memory sticks, jump drives, etc…) are released to the public with 8 megabyte capacities, and they’re so convenient that floppy disk ports begin disappearing from computer designs in favor of USB ports. USB sticks then improve exponentially, and soon have capacities of one, two, and four Gigabytes while floppies struggle to keep up.

Besides being smaller and harder to break, those USB sticks also store more. Where the first USB sticks held 8 MB, the standard floppy disk at the time could only hold 1.44 MB. Knowing how small DOOM is, it would take two floppy disks to hold all of DOOM, but a USB stick only took one. By 2009, USB sticks with capacities of 256 GB were available on the market. That’s roughly 178,000 floppy disks.

Terabyte

A terabyte is 1 trillion bytes, or 1,000 gigabytes. The first commercial hard drive with a capacity of one terabyte was sold in 2007 by Hitachi, a Japanese construction and electronics company. The movie Interstellar, released in 2014, featured a depiction of a black hole known as Gargantua – and became famous again when it closely resembled the first real picture of a black hole, captured by the Event Horizon Telescope collaboration in 2019. A ring of light surrounds the black hole in two directions: one from the friction-heated material Gargantua has accumulated, and one from the lensing of light around it. The gravity is so intense that light itself is pulled into orbit around Gargantua and kept there. It took 800 terabytes to fully render the movie and make Gargantua reasonably accurate in terms of light-lensing.

Petabyte

A petabyte is 1 quadrillion bytes, or 1,000 terabytes. This is typically cluster storage, and while it’s available for purchase, it’s very expensive for the average consumer. For comparison: rendering Interstellar took 800 terabytes, but storing the finished film at DVD quality takes about 1/200th of a terabyte, roughly 5 GB. You could fit approximately 200,000 DVD-quality copies of Interstellar into a single petabyte. It took a little less than 5 petabytes of raw data to capture the picture of the real black hole, M87.
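
Those comparisons are easy to sanity-check. Here’s a quick sketch using round decimal units (1 GB = 1,000 MB, 1 PB = 1,000 TB), so the figures are approximate:

```python
# Rough checks of the storage comparisons above, in decimal units.

FLOPPY_MB = 1.44          # standard 3.5-inch floppy disk
USB_2009_GB = 256
print(round(USB_2009_GB * 1000 / FLOPPY_MB))   # ~177,778 floppies per 256 GB stick

COPY_TB = 1 / 200          # one DVD-quality copy of Interstellar, ~5 GB
PETABYTE_TB = 1000
print(round(PETABYTE_TB / COPY_TB))            # 200,000 copies per petabyte
```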

Sources:

https://en.wikipedia.org/wiki/Bit

https://kb.iu.edu/d/ahfr

http://www.differencebetween.net/technology/software-technology/difference-between-unicode-and-ascii/

https://www.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3155B.html

https://www.pcworld.com/article/127105/article.html

https://www.wired.com/2014/10/astrophysics-interstellar-black-hole/

https://www.merriam-webster.com/dictionary/gigabyte

https://www.nasa.gov/mission_pages/chandra/news/black-hole-image-makes-history

https://www.jpl.nasa.gov/edu/news/2019/4/19/how-scientists-captured-the-first-image-of-a-black-hole/

The Train That Breaks Itself

Elizabeth Technology February 15, 2024

If you’ve paid any attention to big tech in the last several years, you’ll probably know that Apple has been pushed into switching its phones to USB-C, which is easier and cheaper for the average EU citizen to get hold of than Apple’s Lightning chargers. It’s not just Apple being forced to change for the sake of the customer – the shareholder system at large is constantly at odds with the end user’s right to buy a complete, sturdy product, one that wasn’t designed to break a few months down the road so an official BrandProduct shop can charge over the market rate to fix it. Thanks to the EU’s legal interventions, Apple (and many others) cannot continue to sell a product that only they can make chargers and power supplies for, that only they can update, and that they can choose to brick whenever they feel the user needs to move on to the next phone.

The Newag train scandal is particularly egregious given this context!

Big parts of Europe rely heavily on trains for both passenger and freight transit, and trains are expensive to make and repair; once a state has invested money into the infrastructure and the trains themselves, it won’t simply switch brands on a whim. This already gives a train manufacturer a massive amount of leverage over its customers.

Newag is one such train manufacturer. Allegedly, much as Apple did, Newag figured that regular repair and maintenance were good places to squeeze a bit more money out of the customer, and set up code within the trains’ onboard computers that would throw errors and stop the train from working if anyone but a Newag shop touched it. Keep in mind that train repair shops are already incredibly niche, and repairs to trains come out of taxpayer money – being thrifty and going to an independent shop is practically an obligation when the money isn’t your own. Worse, even if the shop didn’t need to fix anything in the train’s computer, Newag’s trains are GPS-enabled, and if a train spent too long at an independent repair station, it would still mysteriously stop working.
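
To make the alleged mechanism concrete, here is a purely hypothetical sketch of that kind of geofence-plus-timer lockout. Every name, coordinate, and threshold below is invented for illustration – Dragon Sector’s findings concern Newag’s proprietary train control software, not anything resembling this code:

```python
# Hypothetical illustration only: all coordinates, names, and thresholds
# are made up. This is not Newag's code or logic, just the general shape
# of a geofence-plus-idle-timer lockout as described in reporting.

from math import hypot

COMPETITOR_WORKSHOPS = [(52.40, 16.92), (50.06, 19.94)]   # invented (lat, lon) pairs
NEARBY_THRESHOLD_DEG = 0.01                               # crude "close enough" radius
LOCKOUT_AFTER_DAYS = 10                                   # invented idle-time threshold

def should_lock_out(lat: float, lon: float, days_parked: float) -> bool:
    """Has the train sat too long near a workshop that isn't on the approved list?"""
    near_competitor = any(
        hypot(lat - w_lat, lon - w_lon) < NEARBY_THRESHOLD_DEG
        for w_lat, w_lon in COMPETITOR_WORKSHOPS
    )
    return near_competitor and days_parked > LOCKOUT_AFTER_DAYS

print(should_lock_out(52.401, 16.921, days_parked=14))   # True -- refuses to start
print(should_lock_out(52.401, 16.921, days_parked=2))    # False
```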

Of course, Newag denies this heavily – they even went as far as trying to sue Dragon Sector, the research team that discovered the behavior, into keeping quiet about it. Then they suggested it was the work of cybercriminals rather than Newag itself, which might make sense if this were ransomware stopping the train outright, not something that only triggered when the train hadn’t stopped at a Newag shop or gotten its special unlock code. The odds are stacking up against the company: the evidence points too clearly toward predatory practices for them to dodge an investigation.

Sources:

https://arstechnica.com/tech-policy/2023/12/manufacturer-deliberately-bricked-trains-repaired-by-competitors-hackers-find/

Optical Memory

Elizabeth Technology January 30, 2024

Optical storage is defined by IBM as any storage medium that uses a laser to read and write information. The use of lasers means that more information can be packed into a smaller space than magnetic tape could manage (at the time)! Better quality and longer media time are natural results. A laser burns information into the recording surface of the media, and then a weaker reading laser deciphers those burnt areas as usable data. The recording surface is usually some sort of easily burnt metal or dye sandwiched between protective layers of plastic; the burning produces ‘pits’, or less reflective areas, for the laser to read.

This is why fingerprints and scratches can pose such a problem for reading data; even though you aren’t damaging the actual data storage, like you would be if you scratched a hard drive disk, fingerprints prevent the laser from being able to read the data. Scratch up the plastic layer above the dye, and the data’s as good as destroyed.

Destroying the data deliberately can be even more thorough than that. Shredding the disc in a capable paper shredder (ONLY IF IT SAYS IT CAN SHRED DISCS) destroys the data, as does microwaving the disc (don’t do that – most discs contain some amount of metal, which can damage your microwave badly enough to be dangerous).

CDs

“Burning a CD” replaced “making a mix tape” when both CDs and downloadable music were available to teenagers, and for good reason. The amount of content may be roughly the same, but the quality is significantly higher.

Most recordable CDs are CD-Rs – discs that can only be written on once but can be read until the end of time. (A true CD-ROM, by contrast, is pressed at the factory and was never writable at all, though the term gets used loosely for any finished disc.) The average CD-R has room for about an album’s worth of music, and maybe a hidden track or two – about 74 to 80 minutes depending on the manufacturer of the disc. Alternatively, if you’d like to store data instead of high-quality audio, you’ll get about 700 MB onto a single disc.
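
Those two numbers – roughly 80 minutes or roughly 700 MB – describe the same disc. A drive at 1x speed reads 75 sectors per second, and audio sectors simply carry a bigger payload (2,352 bytes) than data sectors (2,048 bytes, with the rest spent on extra error correction). A rough back-of-envelope check:

```python
# Why "80 minutes" and "~700 MB" describe the same CD: both count the
# same sectors, just with different payload sizes per sector.

SECTORS_PER_SECOND = 75
MINUTES = 80
sectors = MINUTES * 60 * SECTORS_PER_SECOND     # 360,000 sectors on an 80-minute disc

AUDIO_BYTES_PER_SECTOR = 2352    # Red Book audio payload
DATA_BYTES_PER_SECTOR = 2048     # Mode 1 data payload (rest is error correction)

print(sectors * AUDIO_BYTES_PER_SECTOR / 1e6)   # ~846.7 MB of raw audio samples
print(sectors * DATA_BYTES_PER_SECTOR / 1e6)    # ~737.3 MB of data (~703 MiB)
```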

To burn a CD, you’d need an optical drive that’s capable of also lasering information into the disc, which wasn’t always the standard. The laser will burn the information into the metal-dye mix behind the plastic coating the outside of the disc, which permanently changes how reflective those sections are. This makes it possible to visually tell what has and hasn’t been used on a disc yet, and CD-Rs can be burnt in multiple sessions! Data is typically burnt from the center outwards.

But everybody knows about CD-Rs. What about CD-RWs, their much fussier brethren?

CD-RW

The primary difference between a CD-R and a CD-RW is the dye used in the layers that the optical drives can read. CD-RWs are burnt less deeply than CD-Rs, but that means they need a more sensitive reader – early disc readers sometimes can’t read more modern CD-RWs at all!

To reuse the disc, one has to blank it first (the same drive that can write a CD-RW in the first place should also be able to blank it), which takes time. After it’s been wiped, new data can be put onto the disc again. CD-RWs wear out quicker than other memory media as a result of their medium. That wafer-thin dye layer can only handle being rearranged so many times before it loses the ability to actually hold the data. It’s pretty unlikely that the average user could hit that re-write limit, but it’s more possible than, say, a hard drive, which has a re-write life about 100 times longer than the re-write life of a CD-RW.

DVDs

DVDs store significantly more data than CDs do, even though they take up about the same physical space. Where a CD can hold about 700 MB, a DVD can hold up to 4.7 GB. This is enough for most movies, but if the movie is especially long or has a lot of other extra features, it has to be double layered, which brings the capacity up to about 8.5 GB. Why can it hold so much more in the same space?

The long answer is that there are a number of small differences that ultimately lead to a DVD having more burnable space, including a closer ‘laser spiral’ (the track a laser burns, like the grooves in a vinyl record), as well as smaller readable pockets. It all adds up into more data storage, but a more expensive product as well.

DVD +R DL

That double-layering mentioned earlier isn’t present on every disc. Sometime in the later 2000s, double layer discs hit the market at about the same price as single layer discs (although that changed over time). The first layer that the laser can read is made of a semi-transparent dye, so the laser can penetrate it to reach the other layer.

Most modern DVD drives can read dual layer, but if your computer is especially old, it would be wise to check its specs first – DVD readers programmed before their release might not understand the second layer, and readers that can read them might not be able to write to them. DLs are a great invention, it’s just a struggle to find good disc readers when everything is switching to digital.

Compatibility

CD players aren’t usually also able to play DVDs. CDs came first, and the reader would have to be forwards compatible. Obviously, this would have taken a time machine to actually assemble. Picture expecting a record player to read a CD! The gap between the two is almost that large. Nowadays, the manufacturing standard seems to be a DVD player with CD compatibility tacked on. You should double check before you buy a disc reader to be sure it can do everything you want it to, but it’s less common to see CD-Only tech when a DVD reader is only slightly more expensive to create, and can work backwards.

FlexPlay Self-Destructing Entertainment

Remember FlexPlay self-destructing entertainment? The disc that was meant to simulate a rental and could have generated literal tons of trash per family, per year? The self-destructing medium that the disc was coated in turned very dark red to thwart the disc reader’s lasers! The pits aren’t directly on the surface of the DVD, they’re under a couple of layers of plastic. All FlexPlay had to do was sandwich an additional layer of dye between the plastic and the metal/dye that’s being inscribed upon. When that dye obscures the data below it, it’s as good as gone! The laser can no longer get through to the information and read it. Even Blu-Ray tech was thwarted by the dye.

Blu-Ray

Blu-Ray discs have higher visual quality than DVDs because they hold even more information. The blue-laser technology lets the pits sit even closer together, so more optical data can be crammed into the same space – blue light has a shorter wavelength than red light, which shrinks the minimum pit size. A single-layer Blu-Ray disc can hold up to 25 GB of information! Blu-Ray discs are most commonly used for entertainment media rather than data storage. Disc readers have to be built for that blue laser technology, not just programmed for it – an ordinary DVD player may be able to play a CD, but it wouldn’t be able to fully read a pit in a Blu-Ray disc before that pit has passed the reader.

Right now, the state of the art is Blu-Ray: most good Blu-Ray readers are backwards compatible with DVDs and CDs. However, many companies still sell ordinary DVDs alongside their Blu-ray releases due to cost. If you have a DVD player, you can probably hold off on upgrading, at least for a little while longer.

Sources:

https://www.britannica.com/technology/optical-storage

https://www.dell.com/support/kbdoc/en-us/000149930/what-are-the-different-cd-and-dvd-media-formats-available

http://www.osta.org/technology/cdqa13.htm

https://www.scientificamerican.com/article/whats-a-dvd-and-how-does/

https://kodakdigitizing.com/blogs/news/cd-vs-dvd-how-are-they-different

http://recordhead.biz/difference-blu-ray-dvd/

https://www.dell.com/support/kbdoc/en-us/000147805/guide-to-optical-disk-drives-and-optical-discs

Magnetic Memory

Elizabeth Technology January 25, 2024

Magnetic Tape

The most well-known version of tape-based magnetic storage is the kind used for media. When tape-based recording was first introduced, it revolutionized the talk show and DJ-ing scene of the time (mostly post WWII) because it enabled shows to be recorded and played later, rather than live in front of the audience. Music recording tech already existed of course, but it required physical interaction from the DJ, so it wasn’t as hands-off as tapes were.

The second-most well-known version is the kind used for computer memory! Data is stored on the tape in the form of little magnetized ‘dots’ that the computer can read as bits. Before each pocket of data dots is a marker that tells the computer how long that pocket should be, so it knows where one set of data ends and the next begins. The polarity of each dot determines its bit value, and the computer then reads all these dots as binary code.
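
That marker-then-data layout is essentially a length-prefixed record scheme. Here’s a toy sketch of the general idea (real tape formats add blocking, parity, and inter-record gaps, so treat this as an illustration only):

```python
# Toy model of length-prefixed records: each "pocket" of data is preceded
# by a marker saying how long it is, so a reader knows where the next
# record begins. Not a real tape format.

def write_records(records: list[bytes]) -> bytes:
    """Prefix each record with a 2-byte length marker and lay them end to end."""
    tape = b""
    for rec in records:
        tape += len(rec).to_bytes(2, "big") + rec
    return tape

def read_records(tape: bytes) -> list[bytes]:
    """Walk the tape, using each marker to find where the next record starts."""
    records, pos = [], 0
    while pos < len(tape):
        length = int.from_bytes(tape[pos:pos + 2], "big")
        records.append(tape[pos + 2:pos + 2 + length])
        pos += 2 + length
    return records

tape = write_records([b"HELLO", b"WORLD!!"])
print(read_records(tape))   # [b'HELLO', b'WORLD!!']
```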

This method of data storage was a massive breakthrough, and other mediums continue to use the format even today! Tapes are still in use for big stuff – parts of IBM’s library rely on modern tapes, which can now store terabytes of information at a higher density than disks and flash drives alike. Other memory types relying on magnetic domains include hard disks and drums, to name a couple. All that separates them is material and know-how: the better the magnetizing material on the outside, the smaller the domains can get. The better the insulation between the domains and regular old entropy, the more stable the data is!

Carousel Memory

Carousel memory was an attempt at shrinking the space that magnetic tape took, but to the extreme. Instead of one very long piece of magnetic tape on a bobbin, the carousel memory system uses several smaller reels of tape arranged in a carousel pattern around the central read mechanism. To get to the right info is as simple as selecting the right reel! This has some issues with it, as you might imagine. Moving parts add complications and an increased risk of mechanical failure to any device, but a device carrying thin, delicate magnetic tape on it is an especially bad place to start.

However, it wasn’t all bad. Carousel memory was actually quite fast for the time because it didn’t have to rewind or fast-forward as much to get to the right area of code. It could skip feet of tape at a time! This advantage declined as tape tech improved, but it still helped companies trying to squeeze the most life from their machines. The bobbins and individual ribbons were all replaceable, so the tape wasn’t worthless if it got torn or damaged. The carousel itself was also replaceable, so the many moving parts weren’t as much of a curse as they’d be on, say, the first hard disks, which had irreplaceable heads.

Core Rope Memory

Core rope memory featured magnetic grommets, or ‘cores’, on metal ‘ropes’, and those ropes were then woven into a fabric the computer could read. In ROM (read-only memory) format, if a wire went through a core, it read as a ‘one’, or a ‘yes’. If it didn’t, it was a ‘zero’, or a ‘no’. In this way, the fabric is physically coded into binary the computer can use. ROM’d core-rope memory involved quite a bit of complicated weaving and un-weaving to get the cores in the right spots.

Core rope memory was chosen over tape memory for the Apollo missions, mainly for weight purposes. Tape was great, but not nearly dense or hardy enough for the mission yet, and neither were the other similar core modules available to NASA. A read-only core-rope memory module could store as many as 192 bits per core, where erasable core memory could only manage one bit per core. Where each core on the final module depended on reading the wires to determine the bit’s state, the erasable model (core memory) read the core’s magnetic state to determine the bit state, not the threads going through it. The final module sent up to get to the moon was a total of 70-ish pounds and read fairly quickly. Tape, core memory, or hard disks available at the time couldn’t have gotten to the same weight or speed.

Core-rope memory has its place. It’s very sturdy, and since it relies on the cores to act as bits, it’s possible to visually identify bugs before the memory’s even used, unlike core memory. Both are sometimes called ‘software crystallized as hardware’ because of the core system. It isn’t seen much today, since it is still incredibly bulky, but at the time of its use it was revolutionary.

Core Memory

Core memory is the older sibling of core rope memory, and it stores less. However, the people who got to work with it call it one of the most reliable forms of memory out there! Core memory works much the same as core rope memory, where the bits are stored in cores.

However, the formats are different. If core rope memory is like a binary-encoded scarf, core memory is more like a rug. Thin threads made of conductive material are woven into a grid pattern, with cores suspended on where the threads cross each other. The computer understands these threads as address lines, so asking for a specific bit to be read is as simple as locating the X and Y address of the core. A third set of lines, the sense lines, runs through each core on the diagonal, and this is the thread that does the actual reading.

When asked to read a core, the computer drives that core’s address lines as though it were writing a zero, and the sense wire watches for the core to flip its magnetic polarity. If nothing flips, the core held a zero. If it does flip, the core held a one – and the read has just erased it back to zero. This method is known as ‘destructive reading’; the computer compensates by writing the one back immediately after the read. Thanks to its magnetic nature, the core then keeps this info even after power to it is cut!
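
A tiny simulation makes that destructive-read-then-restore cycle easier to picture. This is only a model of the behavior described above, not of any particular machine:

```python
# Toy model of core memory reads: reading a core wipes it to zero, so the
# controller immediately rewrites the old value to preserve the data.

class CorePlane:
    def __init__(self, rows: int, cols: int):
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, x: int, y: int, bit: int) -> None:
        self.cores[x][y] = bit

    def read(self, x: int, y: int) -> int:
        sensed = self.cores[x][y]   # the sense wire only pulses if the core flips
        self.cores[x][y] = 0        # the read itself forces the core to zero
        self.write(x, y, sensed)    # ...so the controller writes the value back
        return sensed

plane = CorePlane(4, 4)
plane.write(2, 3, 1)
print(plane.read(2, 3))   # 1
print(plane.read(2, 3))   # still 1, thanks to the rewrite step
```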

The National MagLab’s interactive core memory tutorial (linked in the sources below) is an excellent interactive diagram of the system.

Even though this improved the bit-to-space-taken ratio, core memory still aged out of the market. With the price of bits decreasing rapidly, core memory got smaller and smaller, but the nature of its assembly means it was almost always done by hand – all competitors had to do was match the size and win out on labor. Soon, its main market was taken over by semi-conductor chips, which are still used today.

Magnetic Bubbles

Magnetic memory has had strange branches grow off the central tree of progress, and magnetic bubble memory is one of those strange shoots. Andrew Bobeck, an engineer at Bell Labs (AT&T’s research arm, where he also developed other forms of magnetic memory), invented bubble memory. It never took off in the same way other magnetic memory styles did, although it was revolutionary for its compact size – before the next big leap in technology, people were thinking this was the big leap. It was effectively shock-proof! Unfortunately, better DRAM chips took off shortly after it hit the market and crushed bubble memory with improved efficiency.

Anyway, bubble memory worked by moving the bit to be read to the edge of the chip via magnetic fields. The magnetic domains themselves are what move, much the way electrons move along a wire when charge is applied, so nothing physical actually moves within the chip! It was cool tech, and it did save space; it just didn’t hold up to semiconductor memory chips. Bubble memory saw a brief spike in use during a chip shortage, but the modules were so fiddly that as soon as DRAM chips were available again, they went out of style.

Semi-Conductor DRAM – Honorable Mention

DRAM chips are a lot like core memory, in that the device reads the state of a physical object to determine what the bit readout is. In semiconductor chips, that physical object is a tiny capacitor, hooked up to a tiny transistor, on semiconductive metal-oxide material. Instead of determining magnetic state, the device checks whether the capacitor is charged or not: no charge = 0, charge = 1. These chips aren’t technically magnetic, but since they’ve killed off so many of the other options, here they are!

DRAM stands for Dynamic Random-Access Memory. The ‘random access’ part means the memory can be read in any order instead of linearly – as long as the computer knows where the data is stored, it can pull it without reading through everything else first. The ‘dynamic’ part refers to the capacitors slowly leaking their charge, which is why the chip has to refresh them constantly. They’re still being sold today!

Magnetic Disk (Hard Disk Drive)

Hard drives work more like tape than core memory. A Hard drive is a platter (or a stack of platters) with a read-write head hovering above it. When you want to save data, the hard drive head magnetizes areas in binary to represent that information. When you want to read or recover that data, the head interprets these areas as bits in binary, where the polarity of the magnetized zone is either a zero or a one.

The zones of magnetization are incredibly tiny, which makes hard drives one of the more demanding memory forms out there, both now and back then.

Early hard drives could suffer from ‘de-magnetization’, where a magnetic disk’s domains were too close and gradually drew each other out of position, slowly erasing the information on the disk. This meant that the disks had to be bigger to hold the data (like everything else at the time) until better materials for data storage came along. Even though they held more capacity at launch, they were passed over for smaller and more stable stuff like tapes and core memory. The very early drives developed by IBM were huge. Like, washing machine huge. They didn’t respond to requests for data very quickly, either, which further pushed reliance on tape and core technology.

Over time, hard disks improved dramatically. Instead of magnetic zones being arranged end-to-end, storing them vertically next to each other created even denser data storage, enough to outcompete other forms of media storage entirely. Especially small hard drives also come with a second layer of non-magnetizable material between the first layer and a third layer of reverse-magnetized ‘reinforcement’ which keeps the data aligned right. This enables even more data capacity to be crammed into the disks!

Some time in the 80s, hard drives finally became feasible to use in personal computers, and since then they’ve been the standard. SSDs, which don’t have any moving parts whatsoever, are beginning to gain ground in the market, but they can’t be truly, irrevocably erased like hard drives can due to different storage techniques. Hard drives are going to stick around a while, especially for the medical and military industries, as a result!

Sources:

https://spectrum.ieee.org/tech-history/space-age/software-as-hardware-apollos-rope-memory

https://www.apolloartifacts.com/2008/01/rope-memory-mod.html

https://electronics.howstuffworks.com/vcr.htm


http://www.righto.com/2019/07/software-woven-into-wire-core-rope-and.html

https://www.computerhistory.org/revolution/memory-storage/8/253

https://nationalmaglab.org/education/magnet-academy/watch-play/interactive/magnetic-core-memory-tutorial

https://www.rohm.com/electronics-basics/memory/what-is-semiconductor-memory

https://cs.stanford.edu/people/nick/how-hard-drive-works/

https://psap.library.illinois.edu/collection-id-guide/audiotape

https://www.engadget.com/2014-04-30-sony-185tb-data-tape.html?guce_referrer=aHR0cHM6Ly9lbi53aWtpcGVkaWEub3JnLw&guce_referrer_sig=AQAAAC5GC2YOKsvhOs9l4Z2Dt1oHX3-YxjPyJC60qfkq6_6h8zyckkBK9V9JJC9vce3rCmcgyehT-RB6aORBfzB9b5oiBoF1Fbic_3653XVM8fsUTHHnTgxKx4piCeEl65Lp54bkbMcebEEddwlq-EDnAcM7zuv49TXYHcgq9lmnrBln

https://en.wikipedia.org/wiki/Carousel_memory (all primary sources regarding carousel memory are in Swedish)

The Awareness of Future Cringe Past

Elizabeth Technology December 28, 2023

The Concept of Cringe

What is ‘cringe’? To cringe is to jerk away from a negative stimulus – accidentally getting a papercut between your fingers, or hearing the sound of nails on a chalkboard, may make you cringe.

Sometime in the 2000s, a new definition of cringe arose, and forums sprung up trying to catalog it. This new cringe focuses on secondhand embarrassment over actual, physical discomfort: it’s the awkward text to a crush that gets rejected outright. It’s the kid in a college-level presentation class trying to get their group members to theme the project after an unrelated kid’s TV show. It’s someone wearing something in public that breaks rules everyone else is trying to follow. While shame and embarrassment are useful emotions almost anywhere else, the concept of cringe in the new panopticon created by modern social media and high-definition phone cameras is sucking the joy out of memes. The next generation is not ready to be made fun of by people who they respect.

“Millennial Humor” and “This is What Gen Alpha Will Make Fun of Us For”

Gen Z is effectively building a prison made of cringe and ensuring that nobody will escape it, using social media. One comment, one foot, is calling I Can Haz Cheezburger speak annoying and cringe. Another comment, the other foot, is calling someone the Rizzler, and spamming fire emojis. Both feet are straddling a hole in the ground, an abyss that can’t be looked into because the abyss – Nietzsche’s final, paralyzing frontier of awareness – will look back. That hole contains the phrase “this is what gen Alpha will make fun of us for”.

Some Gen Zers have looked into the abyss. The abyss looks back. The future looks back. They, themselves, but younger and meaner and willing to make a joke at their older selves’ expense, looks back at them and sneers. Their jokes are cringe. Their clothes are cringe and make them look cringe. The way they take their selfies in public is cringe. The easily identifiable way that they speak signals to the next generation that they may say something neocringe if prodded right. There’s no escaping now that phones are everywhere, and everyone seems to be filming. They will, one day, have a haircut that turns cringe. They know all of this because the previous generation, Millennials, are subjected to the same treatment. The introduction of the “Millennial Pause” gave ammo to an audience that cares about age so much that identifying Millennials is a sport now, even for other Millennials. Of course that little pause is no big deal, but it exists. The fire emoji, too, will one day be no big deal, but exist, and signal out to Gen Alpha that they’re talking to someone older than them. There’s some comment to be made about how much Americans love the idea of youth. Now, if someone sticks out with dated humor or an awkward pause, they’re a target – they are expected to look and act young enough to blend in with the next generation (which means understanding the jokes and dressing like them too) or risk being singled out as cringe.

This awareness that trendy things age poorly is so paralyzing that some teens are trying to remove themselves from the memery without fully leaving social media. It’s the final stage of irony poisoning, where doing cringey things ironically is still too close to being cringe, and so is just existing (unironically and contemporaneously) with trends in photos or videos, so the people who’d otherwise be having fun making jokes or dancing their meme dances are instead opting to say “this joke won’t be nearly as funny when it’s no longer fresh” as if that’s a revelation. The other option is posting cringe and making jokes that are only funny for right now; if someone wants to stay young and funny forever, they can’t participate. They try to warn the other people outside their prison that one day they’ll be cringe, as though they can somehow stop the embarrassment of embracing popular trends by stopping the trend itself from manifesting with the power of irony and self-awareness, but it’s always already too late. Mullets are on a comeback, and some day the people who had them will look back at those photos and laugh.

To be cringe is to be free. Embrace the cringe. Pause awkwardly. Say ‘Rizzler’ out loud. Keep an ugly haircut and a sage-colored couch, and enjoy existence freed from the dichotomy of cringe and noncringe.

VHS Tapes and Analog Horror

Elizabeth Technology December 12, 2023

What is it about the humble VHS tape that inspires such magnetism from the horror community?

Distortion

It’s no secret that VHS tapes are prone to degrading over time. The tape inside loses its magnetic charge, and the plastic it’s made of starts to dry-rot. If you’ve tried to replay a particularly old VHS movie, you might have gotten part of the way through it only to have it crumble on you, never to play again. Even old-fashioned film reels aren’t as fragile.

A number of strange effects can be pulled out of the tape and the machine just by treating it poorly, even fresh out of the box – if the tape is exposed to radiation, it develops a distinctive ‘snow’ to it; if it’s rewound or played too fast, the voices and visuals onscreen get weird, high-pitched, and anxiety-inducing. Colorful graphical glitches and brief audio cutouts are eerie, no matter what movie they happen to, and the classic abrupt cut, as though the tape inside has been cut and reunited minus a scene, can jerk anyone out of a Disney movie or war film alike. Tapping, dropping, or shaking the VHS player is an easy way to distort the viewing experience without necessarily breaking the tape or the machine, too, making it super easy for kids to get the funny colors they like to appear onscreen.

For the artists who can catch it juuust right, exactly how it used to happen to them, it’s really something to behold.

Irreplaceable

But you’d think the nostalgia of the casually-creepy VHS system would fade, the same way other trends in media do – Westerns dominated the film landscape for years before slowly sliding off the map, and slasher films are nowhere near as dominant a horror style as they used to be. In that vein, you’d think the sort of skips you see from CDs and other optical storage methods would be getting the attention that VHS glitches are getting from analog horror, a recent online trend in horror that’s only getting more mainstream. Analog horror gets its very name from the style of filming that came to define the genre. Popular projects like the Mandela Catalogue or Angel Hare are purposefully designed to look like they are recovered from VHS tapes and analog TV tech, helpfully uploaded to Youtube by someone trying to get answers. The glitching is used to great effect: when something too horrifying to look at on-screen is due to enter, the VHS tape glitches and clips over the horror, a clever way of hiding the monster from view while amplifying the terror of the unknown.

The corruption itself represents a strange flavor of nostalgia, an additional ingredient thrown into the horror of the scenario. After all, new VHS tapes are rare now. Old VHS tapes didn’t look creepy or monstrous when they were new. What the best analog horror projects capture with this stylistic choice are childhood memories of VHS tapes revisited as an adult, only to discover those tapes have been irrevocably changed by the passage of time. The ultimate premise of trying to share these tapes with the next generation only to have them rot away in one’s hands, blinking all sorts of strange colors and textures on-screen before it fails to warn them of the danger it’s trying to capture, is itself a powerful metaphor.

For millennials and the oldest members of Gen Z, the comfortable becomes a source of horror, a haunted childhood home. For the younger members who never had those VHS tapes, it’s an alien technology that behaves irrationally and unpredictably. The fuzz of a VHS video is not a comfort to kids who grew up with 1080p60 resolution videos. Modern videos don’t skip, either. Optical tech skipped for obvious reasons, like scratches on the disk. VHS tapes seem to choose arbitrarily when to skip.

In this way, VHS is a perfect medium for horror. Everyone around today is a little put off by it for a host of different reasons. When it flubbed up, it wasn’t always obvious why. It doesn’t age gracefully and it’s easy to cause problems within it on purpose. Even when it’s a little broken, the VHS player will still try to play it, where optical drives refuse if too much data is missing. When an optical drive stops, it just freezes on a frame, it doesn’t distort what it’s trying to play like VHS players sometimes do. Of course, all analog horror is just a recreation of the effects of old and damaged machinery. Some method-purists go out of their way to get ahold of real VHS tapes to do what they want to do, but in the end, it’s still getting uploaded to Youtube, an entirely digital platform. The mystique of the VHS haunts us today where CD players and digital files don’t because when the newer two corrupt, you’re spared the horror of the corrupted footage. VHS is the only one capable of the level of jank required to be horrific.

Varieties of Screens

Elizabeth Technology December 7, 2023

There are many different screens. From gigantic vacuum-tube TVs to the flattest of the flat home theater displays, TVs come in all shapes and sizes.

LCD: Liquid Crystal Display – Big Screen, Little Equipment

LCDs, or liquid crystal displays, are what they sound like: a layer of material with traits of both liquids and crystals is manipulated with an electric current, and a backlight (usually an LED panel) behind it lights it up so the colors are visible. LCD displays don’t handle heat well, and they’re fragile. You can’t put them next to or above a fireplace, and as a rule you can’t clean them with most regular cleaners. You especially can’t drop them. Videos of people running into their TVs with an AR headset on, or throwing a Wii remote into the TV during a virtual bowling game, demonstrate the spiderweb effect even minor impacts can cause on-screen.

But the screens are getting massive. A more delicate device is a tradeoff many people are fine with making, if the trend of larger, sleeker smartphones is any indication. For comparison, a projection-screen TV was probably the closest someone in the 1980s could get to the modern flat screen TV. At 50 inches, and adjusted for inflation, it cost about $3,100 in today’s money.

An 82-inch TV from LG currently costs about $1,500 on Amazon. Technology!
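To put those two figures side by side, here’s a quick back-of-the-envelope sketch in Python using only the prices quoted above (diagonal inches are a crude yardstick, since screen area grows faster than the diagonal, but it makes the gap obvious):

# Rough cost-per-diagonal-inch comparison, using only the figures above.
price_per_inch_1980s = 3100 / 50   # inflation-adjusted dollars per inch (~$62)
price_per_inch_today = 1500 / 82   # dollars per inch (~$18)
print(f"1980s projection TV: ~${price_per_inch_1980s:.0f} per diagonal inch")
print(f"82-inch LG today:    ~${price_per_inch_today:.0f} per diagonal inch")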

LED: Light-Emitting Diodes

The Sphere here in Las Vegas is currently the largest LED display in the world! LED displays are a common choice for external signs. They’re cheap and easy to manage outdoors, so they’re a great choice for light-up billboards – most casinos on the Strip have one outside for their advertising. However, since the individual components making up each ‘pixel’ – each little cluster of colored diodes – are pretty large, they’re not usually the first choice for indoor TVs: the gaps between each diode cluster are big enough to be visible, and they put out a lot of light.

OLEDs are becoming more popular as a screen choice because those gaps are eliminated, but if an image is going to be displayed long term, they can be prone to ‘burn-in’, where the image becomes permanently etched into the screen. As a result, LCD displays are more popular in cases like digital menus and airport queues.

LEDs don’t have many weaknesses that aren’t also shared by LCD screens – the major one is that screen burning, but for big displays like the casino signs, that’s not an issue. Panels going out and creating wrong-colored squares in the middle of the board are a problem, but thanks to the modular design of LED panels, minor failures don’t kill the entire screen.

Plasma Screen

A plasma screen TV works by exciting little pockets of gas until they ionize into plasma, which makes phosphors on the screen glow in different colors. These were all the rage for a while, but they’re also sensitive to heat – and when LCDs caught up, they were cheaper to make and easier to dispose of, so plasma screens dipped in popularity. They’re still high-definition and still turn up for sale, so nowadays it comes down to a matter of preference, not price or size.

Rear Projection TV: Big Screen, Big Equipment

These screens were huge, and the speakers were built in to face the viewer at the bottom of the screen. Rear projection TVs were the intermediate step between CRTs and LCDs, and they worked by beaming light from the source of choice to the screen using a system of lenses, magnifying the image. CRTs had reached their max size, but LCD panels weren’t anywhere near large enough by themselves yet – the rear projection TV smoothed the transition between the two while also providing a larger screen than previous TVs. The one I grew up with was gigantic, even at the time we had it. Scratches in the fabric covering the speaker area were the only worry. The TV itself was nigh indestructible, and impossible to knock over.

Over time, the screen we had became outdated. It didn’t have enough ports for all the adaptors it would have taken to keep it in line with new plugins: VCRs and DVRs had different requirements, and so did the Xbox and the Xbox 360. Eventually a smaller (but much thinner) screen won out. Everything could just be directly plugged into the TV instead of screwing with the jack hydra the rear-projection required. The price of progress.

CRTs and Degaussing

With the development of iron ships, navigators discovered a problem – large quantities of iron could mess with the compass and other tools that rely on the Earth’s magnetic field to function. Even worse, with WWII on the horizon, the magnetic signature of the ship meant that weapons could be designed around it. Underwater mines, specifically, were geared to detect the field and then go off. Degaussing was invented! De-magnetizing the ship meant mines could no longer rely on it as a trigger.

Cathode ray tube displays (or CRT displays for short) are easily disturbed by magnets: the colors turn funny shades when you hold a magnet too close. The same technology used to protect ships was then used to degauss the CRT display and return it to its former full-color glory. Eventually, degaussing coils were built into the device itself, which is what causes that “thunk” and hum when the screen is flipped on. The coil fires every time the device is turned on, which keeps the image from gradually degrading if the screen sits near other devices with magnetic fields.

That doesn’t mean CRTs are immune to breakage: flicking the switch on and off repeatedly and too quickly may break the mechanism that does the degaussing, and you’re back to using an external degausser.
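To make that “thunk and hum” a little more concrete: a degaussing coil applies an alternating, mains-frequency magnetic field whose strength is allowed to die away, sweeping the tube’s stray magnetization back down toward zero. Here’s a minimal sketch of that decaying field in Python – the peak strength, decay time, and 60 Hz frequency are illustrative placeholders, not measurements from any particular set.

import math

def degauss_field(t, peak=1.0, decay=0.4, mains_hz=60):
    # Alternating field whose amplitude fades exponentially toward zero;
    # the fading sweep is what leaves the tube demagnetized.
    return peak * math.exp(-t / decay) * math.sin(2 * math.pi * mains_hz * t)

for ms in (0, 4, 8, 12, 401, 1201):
    t = ms / 1000
    envelope = math.exp(-t / 0.4)
    print(f"t={t:.3f}s  envelope={envelope:.3f}  field={degauss_field(t):+.3f}")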

Sources: https://www.doncio.navy.mil/Chips/ArticleDetails.aspx?ID=3031

https://computer.howstuffworks.com/monitor5.htm

http://www.thepeoplehistory.com/80selectrical.html

https://www.prnewswire.com/news-releases/worlds-largest-single-video-screen-illuminates-fremont-street-experience-with-fully-immersive-content-301017215.html

https://www.pcmag.com/news/led-vs-plasma-which-hdtv-type-is-best

Public Internet Access Terminals

Elizabeth Technology November 21, 2023

In the early days of the internet, the average computer was still bulky and often pretty pricey. Most electronics were! Some people still have the brick phones or old CRT monitor computers they used before the size of transistors and chips shrank, and finding those old models in movies or on eBay isn’t hard.

Bringing the internet out of designated places (colleges, libraries, the home, etc.) and into other spots where it might be useful was difficult. One of the wackier ideas of the time, the Public Internet Access Terminal, foresaw a world, circa 2003, where computers would be like payphones.

American Terminal Public Internet Access Portal

Sources on this company are incredibly limited. A single YouTube video, https://www.youtube.com/watch?v=DASfwrCjICg&t=5s&ab_channel=RicDatzman, pops up when the exact name of the kiosk is searched. One very old commercial proves the existence of the internet’s first tendrils encroaching into public space.

The commercial itself is a perfect snapshot of how people viewed the web in its early days. You might need it on the go. It might be like a payphone someday. The computer inside the terminal will make you money and won’t need to be replaced by the next, more powerful model anytime soon, because it’s good enough as it is (as a reminder, Moore’s Law stated that the number of transistors on a circuit would double about every two years, and up until recently, it largely held). People in the comments remember using them to check online game accounts and send last-minute emails before hopping onto planes or buses.
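For a sense of what that doubling pace implies, here’s a minimal sketch in Python – the 2003 starting figure of 50 million transistors is a placeholder for illustration, not a number tied to these kiosks or any specific chip.

def moores_law_estimate(start_count, start_year, year, doubling_period=2.0):
    # Moore's Law as a rule of thumb: transistor counts double roughly
    # every two years.
    return start_count * 2 ** ((year - start_year) / doubling_period)

# A hypothetical chip with 50 million transistors in 2003...
print(f"{moores_law_estimate(50e6, 2003, 2013):,.0f}")  # ~1.6 billion by 2013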

And yet, no other source aside from this commercial seems to exist! Despite their confidence in their product, they weren’t confident enough to build a website for prospective franchisees.

I Just Need To Send An Email

Internet usage at the time was limited – Amazon was still selling primarily books, computers were still pretty large, and while email was much more convenient than snail mail or phone calls thanks to its traceable, info-dense nature, not everyone had an email address. For the lighter users of the internet, stopping by the library to check their digital mailboxes was a cheap and easy way of keeping up with the times without committing to a full-blown computer. After all, the dot-com crash had just ruined the internet’s most aggressive investors; if the whole thing somehow didn’t pan out, these casual users wouldn’t be out too much money.

The problem was that while that crash was disastrous, the internet still had plenty of use! And people who didn’t want to invest in the equipment were being pulled further and further into it, whether for work or for recreation.

In the midst of this, a particularly enterprising company thought to put together internet terminals that could be placed in spots like airports and run by outside franchisees, the way vending machines often are. To the people trying to sell these kiosks, the pace of computing seemed to be slowing down post-crash. They may well have anticipated that the machines would be fully depreciated by the time an owner recouped the investment and maintenance costs (as with any free-money scheme, if it were actually as low-risk as advertised, they would have kept it to themselves), but they likely didn’t picture a world where the very thought of one of these things sitting freely, unmonitored, in public – paid for by the minute and not the gigabyte – would seem outdated. Like a payphone.