

Remember the Apple Wheels?

Elizabeth Technology March 12, 2024

Ah, Complaining.

Apple Wheels

The Apple Mac Pro cost several thousand dollars, and it looked like a cheese grater. The little Apple-branded wheels to make it move cost about $700, or approximately the launch price of the iPhone 8. Oh, but don't worry – you can buy the feet for a mere $300, if you just have to have Apple Brand. How did we get here? How did we, as a society, get to $700 computer wheels?

Brand = Trustworthy

Branding by itself is an interesting mark of human psychology. It's a shortcut to trusting something! The brand of an item is itself purely imaginary – the brand, by itself, does not produce value for the final product except the value the consumer gives it in their mind. "A rose by any other name would smell as sweet." Look at IKEA: all those items come from different factories, so customers shouldn't just blindly trust whatever they buy, right? But because IKEA has put its name behind them, consumers still buy the cheap shelves with the understanding that IKEA has endorsed them. If those same shelves were sold somewhere else, unbranded, consumers wouldn't trust them as much, and they'd sell fewer. You see copycat items on Amazon all the time, but mostly people buy brand names they recognize. Branding, in this way, is extremely valuable even though it's intangible.

And it’s good for the customer, too! It allows them to make a more informed decision. Emotionally, people become loyal to brands that have served them well. Fortunately for the brand, they’ll stay loyal unless something seriously impacts their mental image of that brand.

All of this sounds totally logical and reasonable, right? It’s the way people have done business since cash was invented. It made sense for people to trust the smithy, who branded their creations, over someone who wouldn’t put their name to what they made.

Strange things start happening when people like the brand more than the products, and we’ll get there.

Even though consumers may know the store brand comes from the same plant the name brand does, they may still pick the name brand. This is part of that trust – it is scary to try new things, and keeping one thing constant (the brand) makes buying big electronics or new foods less scary. When consumers stop showing brand loyalty, or start complaining, the brand can throw in warranties, coupons, or free spare parts and retain that goodwill. The store brand doesn't stand a chance even if it's literally identical.

Brand = Money

Branding can save a company even if it's like modern-day Pyrex, which has a different heat tolerance depending on which factory you get it from. People post infographics online so consumers can identify the 'good' pieces, because they love Pyrex that much. A change to the glass manufacturing process means the brand is no longer a reliable indicator of quality, but people still want to like Pyrex. Otherwise they wouldn't go through all this effort to find the right Pyrex factory – they'd buy from somewhere else. This is where the brand starts to become more important than what it's selling: the nostalgia for a good brand present in one's formative years carries it along past its loss of quality.

People will pay a premium for a brand they trust, and companies know this. We see this everywhere, from cars to computers. Sears might have survived a couple years off of goodwill and nostalgia alone.

Branding, therefore, can become a phylactery in the hands of a new controlling board. As soon as a company starts to rely on goodwill to sell items that they know other companies would have ditched, they become like Apple. Unlike Apple, many of them don’t sell high-ticket items as a luxury.

For Apple, the brand is demand. Where Steve Jobs might have demanded innovation out of every item they released, the controlling board doesn’t. They know that the brand reputation he built will sell items because people love Apple, people already have Apple, and they know people want to look like they have money! And by smearing Android products as ‘cheap’, Apple became a shortcut for ‘expensive’. Apple wheels are a natural result of a market that’s so hyperfocused on branding that it doesn’t care about functionality. A combination of goodwill and a little psychology gives us these overpriced items that are only overpriced for the sake of it.

The irony of all of this is that people will eventually buy the item as a ‘flex’, unironically, and then the product exists in a quantum state of sincerity. How does Apple live where others die?

Wheely Worth It

Apple sells sincere items alongside their 'meme' items. While Apple sells things like wheels and pens for hundreds of dollars, the past generations of phones are still about the right price for what the user gets. Factoring in things like R&D, factory overhead, and materials, a comparable phone (including camera and storage space) made by a third party would be cheaper, but not by much. Apple's phones sit at only a small premium over comparable brands for the same computing power, which makes sense given Apple's well-known tech support. They haven't gone full 'Sears' yet: there's still some value in the idea of their brand, and they still release 'worthy' items alongside the garbage ones. So why risk it with wheels that cost as much as an iPhone, a genuinely expensive item?

Simple: it’s for advertising, and it’s fairly cheap as far as campaigns go. Either ‘hype beasts’ (people known for buying branded clothing just because it’s expensive) buy it to flex on others, or regular people discuss how out-of-line Apple is. Either way, Apple’s name is out there. Apple might not actually expect to make money with these wheels, but the items are so cheap to make that a single purchase could finance the production of 50 more sets. Not to forget hype beasts!

This new trend of "flexing" expensive-but-nearly-worthless items has led to the creation of the Supreme Brick, the Apple wheels, and all sorts of other tomfoolery that relies on branding. Now, some brands use branding as a shortcut to 'luxury' instead of 'trust'. Luxury clothing brands have been doing this for years: the material is thin, the manufacturing costs cents on the dollar, and shipping it en masse costs maybe a couple of dollars, yet the final item is an $800 shirt. Not because it's made of especially good or hardy materials – because it has a logo on it.

The only reason knockoffs are not worth as much is that the original brand has convinced people their product is 'better' because it cost more, not that it cost more because it was better. And people believe it! Anyone self-conscious enough to get fake AirPods or a third-party Gucci shirt is still pursuing that image of luxury, which is fantastic for the brand. The same goes for Apple Wheels, and AirPods, and Supreme clothing… if the consumer values it, then it's worth it. The Apple Wheels are worth $699 to the people who want them, and that's good enough to keep making them. They're buying Apple Brand, after all.

Apple Wheel. It’s wheely about the advertising.


Memory Terms

Elizabeth Technology March 7, 2024

The first Bit of Data

A bit is a single character in binary; the word comes from shortening "binary digit". A bit is the simplest possible piece of data a machine can read: either a 1 or a 0. A yes or a no. True or false. The bit has been around for longer than computers, originating with the punch cards that analog machines "read" as far back as the 1700s.


If you’ve recently upgraded to Windows 10, you may recall having to check whether your computer is 32-bit or 64-bit. The number describes the processor’s architecture – is it equipped to read addresses up to 32 bits long, or 64? A 32-bit processor has fewer possible memory addresses – 2^32 of them, which works out to about 4 GB of addressable memory – while a 64-bit processor can in theory address 2^64 bytes, far more than any machine ships with (operating systems cap the usable amount well below that, often in the terabyte range). This doesn’t mean a 32-bit computer can only store 4 GB of data; the limit applies to the memory the processor can address directly, not to disk storage. Files can be nearly any size as long as there’s storage available for them.
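The arithmetic behind those limits fits in a few lines. A rough sketch (note the 64-bit figure is a theoretical ceiling, not what real machines support):

```python
def max_addresses(register_bits: int) -> int:
    # Each additional address bit doubles the number of distinct addresses.
    return 2 ** register_bits

# 32-bit: 2^32 addresses -> 4 GiB of directly addressable memory
print(f"32-bit: {max_addresses(32) / 2**30:.0f} GiB addressable")

# 64-bit: 2^64 addresses -> 16 EiB in theory; operating systems cap it far lower
print(f"64-bit: {max_addresses(64) / 2**60:.0f} EiB addressable (theoretical)")
```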

Then, a Byte

A byte is usually eight bits, in compliance with international standard – but it didn’t always have to be. Instead, it used to be as long as needed to show a character on screen, usually somewhere between two and ten bits, with exceptions down to one and up to forty-eight bits for certain characters. Eight-bit bytes became the standard through their convenience for the new generation of microprocessors in the 70s: within 8 bits there are 256 possible combinations of ones and zeroes. 16 bits would give far more possibilities than needed and could slow the computer down, while 4 bits would only allow 16 – not even enough for the alphabet – so phrases of bits would have to be combined anyway.


8 sounds like the perfect combination of length and possible complexity, at least with the benefit of hindsight. The government had struggled with incompatible systems across branches due to byte size before 8-bit came along. ASCII was the compromise, at seven bits per byte, and when commercial microprocessors came along in the 1970s, they were forced to compromise again with ASCII Extended, so that commercial and government systems could communicate.

However, not all extended-ASCII versions contained the same additions, so Unicode was formed later to bridge the gaps between versions. Unicode, a character encoding standard that includes the ASCII characters as its first code points, is one of the most common encodings out there. You’ll still see the fallout of mismatched encodings, too – if you’ve ever opened an article and seen little boxes where characters should be, the text was written with a bigger character set than the one used to display it. The viewer doesn’t know what goes there, so it puts a placeholder!
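You can see the ASCII/Unicode relationship directly in Python: UTF-8 (a common Unicode encoding) produces the exact same bytes as ASCII for the first 128 characters, and only diverges beyond them.

```python
# For plain ASCII text, the ASCII and UTF-8 encodings are byte-identical.
text = "cafe"
assert text.encode("ascii") == text.encode("utf-8")

# Add one accented character and ASCII can no longer represent it,
# while UTF-8 spends two bytes on it.
accented = "café"
print(accented.encode("utf-8"))        # b'caf\xc3\xa9'
try:
    accented.encode("ascii")
except UnicodeEncodeError:
    print("ASCII has no code for 'é'")  # the 'little box' situation
```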


1,000 bytes of storage forms a kilobyte, or KB. This is the smallest unit of measure that the average computer user is likely to see written out on their device – not much can be done with less than 1,000 bytes. The smallest document I can currently find on my device is an Excel file with two sheets and no equations in it, at 9 KB. A downloadable “pen” for an art program on my device takes up 2 KB.

Computers before Windows had about 640 KB to work with, not including memory dedicated to essential operations.

The original Donkey Kong machines had approximately 20 kilobytes of content for the entire game.


A megabyte is 1 million bytes, or 1,000 kilobytes. Computers had made some progress post-relays, moving to hard disks for internal memory. IBM’s first computer containing a megabyte (or two) of storage, the System 355, was huge. It was also one of the first models to use disk drives, which read faster than tapes. In 1970, users who didn’t want a fridge-sized machine could invest in the now desk-sized 3 million bytes on IBM’s Model 165 computers, an improvement over GE’s 2.3 million bytes the year before. The year before that, Univac had unveiled a machine with separate cores tied together to give users between 14 and 58 megabytes of capacity (as reported in Byte Magazine), at the cost of floor space. IBM’s System 360 could reach up to 233 megabytes with auxiliary storage, but its size was… prohibitive, reminiscent of that first System 355.

Tapes and drums were competitive with the disk format for a while, but ultimately disk and solid state improved faster and won out (right now it’s looking more and more like SSDs, those solid state drives, will outcompete disks in the future too). During the 80s, the technology improved so much that hard disks became standard (IBM released a home computer with 10 MBs of storage in 1983) and floppy disks acted as media transport.

DOOM came out in 1993 and took up 2.39 MB for its downloadable file, with smaller, DLC-like packs of fan-created mods coming out along the way.


A gigabyte is 1 billion bytes, or 1,000 megabytes. In 1974, IBM released a 20-foot-long beast of a storage system that stored up to 236 GB of data on magnetic tape, and in 1980 it released another fridge – but this one stored up to a gigabyte of information on disk! According to the Merriam-Webster Dictionary, you can pronounce gigabyte as “jig-ga-bite”, which just… feels wrong.

In 2000, the first USB sticks (memory sticks, jump drives, etc…) are released to the public with 8 megabyte capacities, and they’re so convenient that floppy disk ports begin disappearing from computer designs in favor of USB ports. USB sticks then improve exponentially, and soon have capacities of one, two, and four Gigabytes while floppies struggle to keep up.

Besides being smaller and harder to break, those USB sticks also store more. Where the first USB sticks held 8 MB, the standard floppy disk at the time could only hold 1.44 MB of data. Knowing how small DOOM is, it would take two floppy disks to hold all of DOOM, but a USB stick only took one. By 2009, USB sticks with capacities of 256 GB were available on the market. That’s roughly 178,000 floppy disks.
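A quick back-of-the-envelope check of those numbers – 2.39 MB of DOOM split across 1.44 MB floppies, and a 2009-era 256 GB stick measured in floppies:

```python
import math

MB = 1_000_000
GB = 1_000_000_000

doom = 2.39 * MB       # DOOM's download size
floppy = 1.44 * MB     # standard 3.5" floppy capacity
usb_2009 = 256 * GB    # a large USB stick circa 2009

print(math.ceil(doom / floppy))    # floppies needed for DOOM: 2
print(round(usb_2009 / floppy))    # floppies in one 256 GB stick: ~177,778
```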


A terabyte is 1 trillion bytes, or 1,000 gigabytes. The first commercial drive with a capacity of one terabyte was sold in 2007 by Hitachi, a Japanese construction and electronics company. The movie Interstellar, released in 2014, featured a depiction of a black hole known as Gargantua – and became famous when it closely resembled the picture of an actual black hole later taken by the Event Horizon Telescope collaboration. A ring of light surrounds the black hole in two directions, one from the friction-heated material Gargantua has accumulated, one from the lensing of light around it. The gravity is so intense that light itself is pulled into orbit around Gargantua’s event horizon and kept there. It took 800 terabytes to fully render the movie and make Gargantua reasonably accurate in terms of light-lensing.


A petabyte is 1 quadrillion bytes, or 1,000 terabytes. This is typically cluster storage, and while it’s available for purchase, it’s very expensive for the average consumer. For comparison, while rendering Interstellar took 800 terabytes, storing it at standard quality takes about 1/200th of a terabyte – you could fit approximately 200,000 DVD-quality copies of Interstellar in a petabyte. It took a little less than 5 petabytes of data to produce the picture of the real black hole at the center of M87.
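The petabyte comparison is easy to verify with the article's own figures (1/200th of a terabyte, i.e. about 5 GB, per DVD-quality copy):

```python
TB = 10**12
PB = 10**15

dvd_copy = TB / 200            # ~5 GB per DVD-quality copy of the film
print(int(PB / dvd_copy))      # copies per petabyte: 200000
```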


What is an RFID Chip?

Elizabeth Technology February 27, 2024


RFID stands for Radio Frequency Identification, and it’s usually used in the context of a chip! There are active and passive types: an active RFID chip has a tiny battery with it, while a passive one is powered by the energy of the reader’s signals alone. Active chips can be read from much greater distances, but the battery makes them heavier and more expensive. Meanwhile passive chips have to be blasted with the RFID signal to be read.

How do they work?

RFID chips are great because they’re small, and they don’t take line-of-sight to read like many other cataloguing techs do.

There are three major parts to an RFID chip: the microchip, an antenna for receiving and broadcasting signals, and a substrate to hold it all together. RFIDs work with radio waves, a form of electromagnetic radiation. They got their start at the end of WWII, when Soviet engineer Léon Theremin created a passive listening device activated by radio waves. It wasn’t really the same as what we use in security tags and inventory systems today, but it was a tiny device powered passively by a radio signal, and that’s close enough! 1973 saw a real attempt at the kind we have today, and they’ve been shrinking in size ever since.

RFID chips can also come with read-only or read/write memory, depending on the style of that chip. Essentially, it has a very small amount of memory on it, just enough to store things like batch number, serial number, or address, in the case of pet tags. They’re not very complex: in the case of an active tag, the reader simply dings the RFID chip, which then responds on a compatible wavelength with the relevant information via that antenna.
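That exchange – reader dings tag, tag answers with its tiny stored payload, reader looks the payload up – can be sketched as a toy model. All the class and field names here are hypothetical; real tags speak binary air-interface protocols, not Python dictionaries.

```python
from dataclasses import dataclass

@dataclass
class PassiveTag:
    # A tag's memory is tiny: just identifiers like serial and batch numbers.
    serial: str
    batch: str

    def respond(self) -> dict:
        # A passive tag only answers once the reader's signal energizes it.
        return {"serial": self.serial, "batch": self.batch}

def read_tag(tag: PassiveTag, inventory: dict) -> str:
    # The reader maps the tag's serial number onto its own inventory records.
    payload = tag.respond()
    return inventory.get(payload["serial"], "unknown item")

inventory = {"SN-0042": "wireless mouse"}
tag = PassiveTag(serial="SN-0042", batch="B-17")
print(read_tag(tag, inventory))   # wireless mouse
```

The point the sketch makes: the intelligence lives in the reader's database, not the tag. The tag is little more than a radio-accessible serial number.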

Some chips broadcast constantly, while others broadcast on a regular interval, and some wait for the RFID reader to ding them before they send their data. In a passive chip, the RFID reader has to ding the chip so hard that it absorbs enough EM radiation to respond – energy hits the antenna, travels to the chip, and powers it enough to activate the antenna for signalling, which then causes the chip’s signal to travel back up the antenna and transmit to the reader. Neat!


An RFID chip’s low profile and small size make it great for inventory management. Since the chip doesn’t need line-of-sight the way barcode scanners do, production lines don’t have to worry about keeping items oriented toward a camera – they can just pass them over an RFID scanner and they’re good to go. Radio waves can pass through solid objects!

RFID chips are also good at tracking inventory while it’s in the store: you’ll notice many big-box stores have detectors alongside the exit doors, which catch unscanned, still-active chips heading out the door. They sometimes trigger on nametags and items the cashier scanned in the cart, but most of the time the system works as intended.

RFID chips are great for livestock and pet chipping – they’re small, and not only are they less painful than a tattoo, the data is also unlikely to migrate or blur like ink could in a pet’s ear. The initial wound is also smaller, which makes infection less likely. That doesn’t mean they’re perfect, but they carry a lot more information for less relative risk to the animal.

On the human side, RFID chips are frequently used in employee identification badges – the theory is that it’s harder to copy and easier to read than a barcode scanner for restricted areas. Some people go so far as to get them implanted, but the ethics of that are… iffy, to say the least, even if they want the implant. The long-term effects in humans just aren’t that well-known, and while pets are a good indicator that nothing should go wrong, pets also don’t have to worry about getting their phone hacked because their pet tag carried a virus along.

RFID chips are now popular in credit cards! The chip in the card is (in theory) safer than the regular magnetic stripe, and it’s supposed to be much harder to copy. Of course, early versions still had their issues, but now they’re difficult to signal from a distance.


RFID chips aren’t free from flaws.

Security can be a problem, especially for active chips, which can be read from hundreds of meters away. Most vendors have some sort of protocol in place, but for a hot minute, RFIDs in cards were a potential security nightmare. Remember all those anti-RFID chip wallets? That’s because readers were able to access the chip as though they were being used for a purchase. It just wasn’t very safe before protocols were established.

Second, a bunch of folks went out of their way to prove that the more complex RFIDs could become transmission sites for computer viruses – one researcher had a chip implanted in his hand, and if a virus could infect that chip, then the virus could go anywhere he could reach wirelessly. The perfect crime! Airgapped networks were no longer safe if RFIDs were on the table.

Incompatible readers can make inventory transfers more painful than they need to be, as well – the ISO sets standards for which channels get to be used for what purposes, but the companies have to comply with them first. They also have to have the right kind of reader – is it scanning for active or passive chips? The two have very different needs. An active reader might not be able to find a passive chip!

There’s also the sticky issue of privacy and destruction. How do you get rid of the tag on the product once it’s no longer needed for inventory? RFIDs can be destroyed by microwaves, but that doesn’t help if they’re attached to an electronic, which can also be destroyed by microwaves. They can be wrapped in foil a couple of times, and stop transmitting long distances – on some objects, that makes them unusable. It takes special equipment and some professional skill to actually scan a building for RFIDs, but it’s not totally impossible.

It just takes work, the kind of work a stalker or government agent might be willing to put in if they needed info on a person so badly that they’d want to see what items they had in their house. This is also more difficult than it sounds because most chips go by something vaguely specific, like a batch or serial number with no product name attached, but it’s not impossible. It would just take quite a lot of effort when stalking via binoculars is much easier.

It’s also still possible to clone RFIDs – passports with RFIDs in them could be an especially large problem for both the original holder and the government of that country. The obvious option, credit cards, are still cloneable too, although with modern banking it’s often not worth the investment for the scammers.

However: with tech improving every day, it may become possible to limit which chips respond to which scanners, which would make it much more difficult to invade privacy. Chips get smaller and smaller every day, so it’s entirely possible a password- or signal-protected RFID may someday come into common use.


Reasons to Recycle Your Phone

Elizabeth Technology February 22, 2024

1. Lithium batteries are not biodegradable.

In general, modern materials don’t really degrade much. When was the last time something you owned rusted away completely? And if it did – did it really? The spot below the cheap, neglected grill in my friend’s back yard has no grass growing on it. The rust is still there to interfere with that grass’s growth, even though it’s technically degraded. That grass may eventually come back if the rain ever washes enough of the contaminated dirt away, but until then, the ground is inhospitable. Now picture that with metal that’s not designed to spoil, and chemicals that are much harsher. Batteries are by far one of the most concerning items to trash. They tend to corrode and release acid if not disposed of properly, and the bigger the battery, the bigger the concern for acid to leach into whatever it’s laying on top of. You don’t want something you threw away to make a mini-superfund site, surely?

2. They also don’t behave well when the internals are exposed to air.

You cannot just dump a phone in the trash when you’re buying a new one. Besides the environmental effects (which can be anywhere from acid leaching to heavy metal poisoning, depending on battery type) there’s also a real danger of starting an unquenchable fire in a garbage truck. If it’s one of the fancy ones that can compact garbage as it picks it up, the battery being punctured can set off a fire inside the bin. If you’re unlucky, and others have thrown out paper trash or flammables, you’ve got a serious problem on your hands. Recycle the phone! If not the phone, then at least the battery!

3. The phone contains rare earth metals.

These are metals that are actually fairly common in the Earth’s crust, but very difficult to mine out and purify economically. After a point, mining them might make phones too expensive for the average consumer – so it’s important to harvest what parts are harvestable! Besides that, the phone itself isn’t going to biodegrade, because it’s completely inorganic. Rather than let all those precious minerals and non-degrading materials go to waste, recycle!

4. The hard drive may not be wiped the way you hope it is.

It’s very possible to recover deleted documents off of a hard drive months after ‘wiping’ it. Wiping a traditional hard drive completely is difficult, and solid-state drives only make it harder. The drive keeps a phantom copy of the old document until it’s written over with something else (or, for magnetic drives, degaussed with a strong magnet). Doing this thoroughly is difficult, which is why you should recycle through a reputable hardware recycler. This is especially important for things like email apps, which frequently don’t ask users to log in again after the first time they’re used on the phone!
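The "delete isn't erase" point can be illustrated with a minimal sketch: overwriting a file's bytes before unlinking it. This is only an illustration of the idea – real secure-erase tools do much more, because journaling filesystems and SSD wear-leveling can keep stale copies in places a simple overwrite never touches.

```python
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    # Overwrite the file's contents with random bytes, flush to disk,
    # then remove the directory entry. (Not sufficient on SSDs!)
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# Demo on a throwaway file:
with open("secret.txt", "w") as f:
    f.write("old password list")
overwrite_and_delete("secret.txt")
print(os.path.exists("secret.txt"))   # False
```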

5. Having a secondary market is essential for the health of the industry.

If the number of workable phones is low, people are forced to buy the new model because it’s all they can find. This is why planned obsolescence is so insidious: manufacturers deliberately cut down the second-hand market so they can sell more new phones at a high price. If this were a perfectly efficient world where consumers had perfect information, this would lead to the company dying, because nobody wants to pay $700 for something that breaks in three years. But it’s not – it’s a world where people drop an extra $200 on a phone for its camera. It’s a world where the phone carrier forces you to upgrade as part of their contract. It’s a world where branding is the fashion. It is not perfectly efficient, and as long as the manufacturers recognize this, they will keep money-grubbing.

Keep those second-hand phones in the market and force manufacturers to keep making phones at least as well as their old products. This is still recycling! It keeps the phone from its final death in a landfill and extends its life for as long as possible.

6. Broken Phones Still have Valuable Parts

If the phone’s so broken that it’s not possible to re-sell it, consider recycling it anyway – lithium batteries have many uses, and as mentioned before, those rare earth metals aren’t getting any less rare. Recycling the phone by sending it somewhere to get it broken down is also valid recycling. If you can squeeze just a little bit more use out of a device by dropping it off or passing it on – why wouldn’t you?

Besides, the facility will know how to handle that battery!


The Train That Breaks Itself

Elizabeth Technology February 15, 2024

If you’ve paid any attention to big tech in the last several years, you’ll probably know that Apple is on the verge of switching to USB-C for phones, since USB-C is easier and more accessible for the average EU citizen to acquire than Apple’s Lightning chargers. It’s not just Apple being forced to change for the sake of the customer – the shareholder system at large is constantly at odds with the end user’s right to buy a complete, sturdy product that wasn’t designed to break a few months down the road so that an official BrandProduct shop can charge over the market rate to fix it. Thanks to the EU’s legal interventions, Apple (and many others) cannot continue to sell a product that only they can make chargers and power supplies for, that only they can update, and that they can choose to brick whenever they feel the user needs to move on to the next phone.

The Newag train scandal is particularly egregious given this context!

Big parts of Europe rely heavily on trains for both passenger and freight transit, and trains are expensive to make and repair; once the state has invested money into infrastructure and the trains themselves, it won’t simply switch brands on a whim. This already gives the manufacturer a massive amount of leverage over its customers.

Newag is one such train company. Allegedly, much as Apple did, Newag figured that regular repair and maintenance were good places to squeeze a bit more money out of the customer, and set up code within the train’s computer that would cause it to error out and stop working if anyone but a Newag shop touched it. Keep in mind that train repair shops are already incredibly niche, and repairs to trains come out of taxpayer money – being thrifty by going to an independent shop is an obligation when the money isn’t your own. Worse, even if the shop didn’t need to fix anything in the train’s computer, Newag’s trains are GPS-enabled, and if a train spent too long at an independent repair station, it would still mysteriously stop working.

Of course, Newag denies this heavily – they even went as far as trying to sue Dragon Sector, the group that discovered this quirk, into shutting up about it. Then they suggested it was the work of cybercriminals and not Newag itself, which might make sense if this were ransomware stopping the train entirely, rather than something that triggered only when the train didn’t stop at a Newag shop or receive its special unlock code. The odds are stacking up against the company; the evidence points too clearly toward predatory practices for them to escape an investigation.


Why Is Everyone So Disappointed in the CyberTruck?

Elizabeth Technology February 1, 2024

Safety Concerns

You’ve seen the body. You’ve probably seen the interior dash. If this thing gets into an accident, it will win against any other vehicle – and against any passengers, including its own. It is a return to the old steel-body cars but worse, with small crumple zones and a 3 mm thick steel plate for a shell. It’s undeniably sturdy. The car itself could survive a lot, which sounds cool and futuristic for a car that one day might not need passengers, but in a crash, anything inside of it is at the mercy of that design.

Secondly, pinch points! One video of the CyberTruck’s motor-driven hood-closing mechanism shows it cleaving through a carrot without stopping (Out Of Spec Reviews on TikTok and YouTube). There is no pinch detection in the front, which wouldn’t be a problem if the hood weren’t motorized! Isn’t this the car of the future? Why is there no pinch detection? Yes, human fingers are tougher than carrots – which is exactly why, in a safety demo, you don’t want the mechanism cutting clean through the carrot. The guy in that video then put his fingers further down into the seam (so the sharp point at the window area wouldn’t break the skin) and the hood simply shut over them, leaving him stuck; he couldn’t reach the button on the front that would tell the truck to open again. It didn’t break anything, but getting unstuck alone would have been painful. At least there is a phone app allowing you to remotely free yourself – but without proper pinch protection, the hood is at best capable of trapping someone, and at worst capable of biting off the fingers of someone who has the keys but not the app.

One major and one minor complaint for safety seems about right for a major deviation from the norm for cars, and the pinch detection should be relatively easy to include in later models – that’s not all people are concerned about, though.


While stringing electronics together in series seems to make sense, it’s actually not a great idea! We stopped wiring Christmas string lights that way because when one bulb died, the whole strand went out, and you had to hunt down that one dead bulb to get the string working again. Unfortunately, judging from consumer complaints online, this philosophy is playing out similarly in the Cybertruck, alongside another major issue – dead screens!

Dead-screen errors on these Cybertrucks make them impossible to drive, thanks to the huge number of functions that rely on the screen as a controller. In fairness to Tesla, most cars that use a screen instead of a proper center console with buttons have this problem, so it isn’t Tesla-specific – but when your car is called The Cybertruck, some level of advancement beyond the competition is expected. The truck of the future shouldn’t resemble the Star Wars zone at the Disney parks quite so much in appearance, wait times, and spotty functionality.

Lastly, the battery life. This is the biggest sticking point, which is unfortunate because it’s tough to tell who is reporting reliably and who’s just pointing and laughing at the car because they don’t like the brand. Trucks are, generally speaking, meant for hauling stuff. The design of the Cybertruck’s truck bed is already straining supporter goodwill given how little it holds, but the truck has a front trunk to boost its storage space, so as long as what you’re hauling can be split into two separate loads, this isn’t such a huge point of contention. A lot of people don’t buy trucks out of a need to haul stuff every weekend, after all.

What really has fans ticked off is the inconsistent range of the battery under load! The car itself is heavy, but so is every electric vehicle. Unlike the screen issue above, Tesla sells itself on being the better electric car, so failing to beat Ford and Rivian’s range while hauling or towing makes its marketing feel like fudged numbers. Especially in the cold! Electric vehicles run out of juice faster in the cold thanks to a number of chemical processes slowing down under the hood – and according to early reports from consumers, the Cybertruck can’t hit 80% of its expected range in the cold if it’s hauling too. Those are not ideal conditions, obviously, but if we’re going to keep getting freak snowstorms as far south as Texas, car manufacturers must adapt, or at least do some additional cold-weather testing to avoid stranding customers who took ‘470 miles in ideal conditions’ as simply ‘470 miles’, full stop.

All in all, the Cybertruck is simply another electric truck. It looks cool, it's made (perhaps too) tough, and it suffers many of the same issues as its competitors. For fans who were used to Tesla leading the way, this is a letdown.

Optical Memory

Elizabeth Technology January 30, 2024

Optical storage is defined by IBM as any storage medium that uses a laser to read and write information. The use of lasers means that more information can be packed into a smaller space than magnetic tape could manage (at the time)! Better quality and longer media time are natural results. A powerful laser burns information into the surface of the media, and a less powerful reading laser can then decipher those burnt areas into usable data. The recording surface is usually some sort of easily burnt metal or dye sandwiched between protective layers of plastic; burning produces 'pits', or less reflective areas, for the laser to read.

This is why fingerprints and scratches can pose such a problem for reading data: even though you aren't damaging the actual data storage layer, the way you would be if you scratched a hard drive platter, fingerprints prevent the laser from reading the data. Scratch up the plastic layer above the dye, and the data's as good as destroyed.

Deliberate destruction can go even further than that. Shredding the disc in a capable paper shredder (ONLY IF IT SAYS IT CAN SHRED DISCS) destroys the data, as does microwaving the disc (don't do that – most discs contain some amount of metal, which can damage your microwave badly enough to be dangerous).


“Burning a CD” replaced “making a mix tape” when both CDs and downloadable music were available to teenagers, and for good reason. The amount of content may be roughly the same, but the quality is significantly higher.

Most recordable CDs are CD-Rs – discs that can only be written once but can be read until the end of time. (Commercially pressed CD-ROMs work the same way, except their pits are stamped into the disc at the factory rather than burnt in.) The average CD-R has room for about an album's worth of music, and maybe a hidden track or two – about 74-80 minutes, depending on the manufacturer of the disc. Alternatively, if you'd like to store data instead of high-quality audio, you'll fit about 700 MB onto a single disc.
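The 80-minute and 700 MB figures are two views of the same disc, and a quick back-of-the-envelope check (a sketch using the standard CD audio and data-sector numbers) shows why they differ:

```python
# Audio CDs: 44,100 samples/sec, 2 channels, 16 bits (2 bytes) per sample.
bytes_per_second = 44_100 * 2 * 2       # 176,400 B/s of raw audio
seconds = 80 * 60                       # an 80-minute disc

audio_bytes = bytes_per_second * seconds
print(f"raw audio: ~{audio_bytes / 1_000_000:.0f} MB")   # ~847 MB

# Data CDs spend part of each 2,352-byte sector on extra error correction,
# leaving 2,048 usable bytes per sector, at 75 sectors per second.
data_bytes = seconds * 75 * 2048
print(f"data mode: ~{data_bytes / 1_000_000:.0f} MB")    # ~737 MB
```

That 737-million-byte figure is about 703 binary megabytes, which marketing rounds down to the familiar '700 MB'.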

To burn a CD, you need an optical drive that can write to the disc as well as read it, which wasn't always standard. The laser burns the information into the metal-dye mix behind the plastic coating of the disc, permanently changing how reflective those sections are. This makes it possible to visually tell what has and hasn't been used on a disc, and CD-Rs can be burnt in multiple sessions! Data is typically burnt from the center outwards.

But everybody knows about CD-Rs. What about CD-RWs, their much fussier brethren?


The primary difference between a CD-R and a CD-RW is the recording layer the optical drive reads: CD-Rs use an organic dye, while CD-RWs use a phase-change material that can be flipped back and forth. CD-RW marks also reflect less light than CD-R pits, so they take a more sensitive reader – early disc readers sometimes can't read CD-RWs as a result!

To reuse the disc, one has to blank it first (the same drive that can write a CD-RW should also be able to blank it), which takes time. After it's been wiped, new data can be put onto the disc again. CD-RWs wear out quicker than most other memory media as a result: that wafer-thin recording layer can only handle being rearranged so many times before it loses the ability to hold data at all. It's pretty unlikely that the average user could hit that re-write limit, but it's far more possible than on, say, a hard drive, which has a re-write life about 100 times longer than a CD-RW's.


DVDs store significantly more data than CDs do, even though they take up about the same space. Where a CD can hold about 700 MB, a DVD can hold up to 4.7 GB. This is enough for most movies, but if the movie is especially long or has a lot of other extra features, it has to be double layered, which can store up to 9 GB. Why can it hold so much more in the same space?

The long answer is that a number of small differences ultimately add up to a DVD having more burnable space, including a tighter 'laser spiral' (the track a laser burns, like the grooves in a vinyl record) as well as smaller readable pits. It all adds up to more data storage, but a more expensive product as well.
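To put rough numbers on it, the published track pitch and minimum pit length for each format give a first-order estimate of the density gain (the remaining gap up to 4.7 GB comes from more efficient error correction and modulation):

```python
# Approximate published geometry, in micrometres.
cd_track_pitch, cd_min_pit = 1.6, 0.83
dvd_track_pitch, dvd_min_pit = 0.74, 0.40

# The spiral tightens AND the pits shrink, so the gains multiply.
density_gain = (cd_track_pitch * cd_min_pit) / (dvd_track_pitch * dvd_min_pit)
print(f"DVD packs roughly {density_gain:.1f}x more pits per unit area")
```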


That double-layering mentioned earlier isn’t present on every disc. Sometime in the later 2000s, double layer discs hit the market at about the same price as single layer discs (although that changed over time). The first layer that the laser can read is made of a semi-transparent dye, so the laser can penetrate it to reach the other layer.

Most modern DVD drives can read dual layer discs, but if your computer is especially old, it's wise to check its specs first – readers made before dual layer's release might not understand the second layer, and readers that can read them still might not be able to write to them. DLs are a great invention; it's just a struggle to find good disc readers when everything is switching to digital.


CD players aren't usually able to play DVDs as well. CDs came first, so a CD player would have to be forwards compatible to read DVDs – which would have taken a time machine to actually assemble. Picture expecting a record player to read a CD! The gap between the two formats is almost that large. Nowadays, the manufacturing standard seems to be a DVD player with CD compatibility tacked on. You should double-check before you buy a disc reader to be sure it can do everything you want it to, but it's less common to see CD-only tech when a DVD reader is only slightly more expensive to make and can read CDs too.

FlexPlay Self-Destructing Entertainment

Remember FlexPlay self-destructing entertainment? The disc that was meant to simulate a rental, and could have generated literal tons of trash per family, per year? The self-destructing layer the disc was coated in turned very dark red to thwart the disc reader's lasers. The pits aren't directly on the surface of a DVD; they're under a couple of layers of plastic. All FlexPlay had to do was sandwich an additional layer of dye between the plastic and the metal/dye being inscribed upon. Once that dye obscures the data below it, the data is as good as gone – the laser can no longer get through to read it. Even Blu-Ray tech was thwarted by the dye.


Blu-Ray discs have higher visual quality than DVDs because they hold even more information. Blue-laser technology enables the pits to be packed even closer together, so more optical data can be crammed into the same space: blue light has a shorter wavelength than red light, which shrinks the necessary pit size. A single-layer Blu-Ray disc can hold up to 25 GB of information! Blu-Ray discs are most commonly used for entertainment media rather than storage. Disc readers have to be specifically built for that blue laser technology, rather than just programmed for it – an ordinary DVD player may be able to play a CD, but it wouldn't be able to fully read a pit on a Blu-Ray disc before that pit's passed the reader.
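The wavelength argument can be made concrete: the smallest spot a lens can focus scales roughly with wavelength divided by numerical aperture. Using each format's published laser wavelength and lens NA (a first-order optics estimate, not an exact pit dimension):

```python
# Diffraction-limited spot size ~ wavelength / (2 * NA).
formats = {
    "CD":      (780e-9, 0.45),   # infrared laser
    "DVD":     (650e-9, 0.60),   # red laser
    "Blu-ray": (405e-9, 0.85),   # blue-violet laser
}
spots = {name: wl / (2 * na) * 1e9 for name, (wl, na) in formats.items()}
for name, nm in spots.items():
    print(f"{name}: ~{nm:.0f} nm spot")
```

A focused spot roughly a third the size of a CD's, combined with a tighter track spiral, is where the jump from 700 MB to 25 GB comes from.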

Right now, the state of the art is Blu-Ray: most good Blu-Ray readers are backwards compatible with DVDs and CDs. However, many companies still sell ordinary DVDs alongside their Blu-ray releases due to cost. If you have a DVD player, you can probably hold off on upgrading, at least for a little while longer.


Magnetic Memory

Elizabeth Technology January 25, 2024

Magnetic Tape

The most well-known version of tape-based magnetic storage is the kind used for media. When tape-based recording was first introduced, it revolutionized the talk show and DJ-ing scene of the time (mostly post WWII) because it enabled shows to be recorded and played later, rather than live in front of the audience. Music recording tech already existed of course, but it required physical interaction from the DJ, so it wasn’t as hands-off as tapes were.

The second-most well-known version is the kind used for computer memory! Data is stored on the tape in the form of little magnetic 'dots' that the computer can read as bits. Before each pocket of data dots is a data marker that tells the computer how long that pocket should be, so it knows when one set of data ends and the next begins. The polarity of each dot determines its bit value, and the computer reads all these dots as binary code.
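As a toy model (not any real tape format), you can picture each dot as a polarity of +1 or -1 and read the stream back as bytes:

```python
# Hedged sketch: each magnetic 'dot' is a polarity, +1 or -1,
# and reading maps +1 -> 1 and -1 -> 0.
def write_tape(data: bytes) -> list[int]:
    bits = []
    for byte in data:
        for i in range(7, -1, -1):               # most significant bit first
            bits.append(+1 if (byte >> i) & 1 else -1)
    return bits

def read_tape(polarities: list[int]) -> bytes:
    out = bytearray()
    for i in range(0, len(polarities), 8):       # 8 dots per byte
        byte = 0
        for p in polarities[i:i + 8]:
            byte = (byte << 1) | (1 if p > 0 else 0)
        out.append(byte)
    return bytes(out)

tape = write_tape(b"hi")
print(read_tape(tape))   # round-trips back to b'hi'
```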

This method of data storage was a massive breakthrough, and other media continue to use the format even today! Tapes are still in use for big stuff – parts of IBM's library rely on modern tapes, which can now store terabytes of information at a higher density than disks and flash drives alike. Other memory types relying on magnetic domains include hard disks and drums, to name a couple. All that separates them is material and know-how: the better the magnetizable material on the outside, the smaller the domains can get, and the better the insulation between the domains and regular old entropy, the more stable the data is!

Carousel Memory

Carousel memory was an attempt at shrinking the space that magnetic tape took up, taken to the extreme. Instead of one very long piece of magnetic tape on a bobbin, the carousel memory system used several smaller reels of tape arranged in a carousel pattern around the central read mechanism. Getting to the right info was as simple as selecting the right reel! This had some issues, as you might imagine. Moving parts add complications and an increased risk of mechanical failure to any device, and a device carrying thin, delicate magnetic tape is an especially bad place to start.

However, it wasn’t all bad. Carousel memory was actually quite fast for the time because it didn’t have to rewind or fast-forward as much to get to the right area of code. It could skip feet of tape at a time! This advantage declined as tape tech improved, but it still helped companies trying to squeeze the most life from their machines. The bobbins and individual ribbons were all replaceable, so the tape wasn’t worthless if it got torn or damaged. The carousel itself was also replaceable, so the many moving parts weren’t as much of a curse as they’d be on, say, the first hard disks, which had irreplaceable heads.

Core Rope Memory

Core rope memory featured magnetic grommets, or 'cores', on metal 'ropes', and those ropes were woven into a fabric the computer could read. In ROM (read-only memory) format, if a wire went through a core, it was a 'one', or a 'yes'; if it didn't, it was a 'zero', or a 'no'. In this way, the fabric is physically coded into binary that the computer can use. ROM'd core rope memory involved quite a bit of complicated weaving and un-weaving to get the cores in the right spots.
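A hypothetical sketch of the idea (word names and sizes invented for illustration): each word of ROM is just the pattern of cores its wire threads through, so the weave itself is the data.

```python
# Toy core rope ROM: 1 = the word's wire threads through that core,
# 0 = it bypasses that core.
WEAVE = {
    "word0": [1, 0, 1, 1],
    "word1": [0, 1, 0, 1],
}

def read_word(name: str) -> int:
    # Reading is just observing which cores the wire passed through.
    value = 0
    for bit in WEAVE[name]:
        value = (value << 1) | bit
    return value

print(bin(read_word("word0")))   # the physical weave, read as binary
```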

Core rope memory was chosen over tape memory for the Apollo missions, mainly for weight purposes. Tape was great, but not nearly dense or hardy enough for the mission yet, and neither were the other similar core modules available to NASA. A read-only core-rope memory module could store as many as 192 bits per core, where erasable core memory could only manage one bit per core. Where each core on the final module depended on reading the wires to determine the bit’s state, the erasable model (core memory) read the core’s magnetic state to determine the bit state, not the threads going through it. The final module sent up to get to the moon was a total of 70-ish pounds and read fairly quickly. Tape, core memory, or hard disks available at the time couldn’t have gotten to the same weight or speed.

Core-rope memory has its place. It’s very sturdy, and since it relies on the cores to act as bits, it’s possible to visually identify bugs before the memory’s even used, unlike core memory. Both are sometimes called ‘software crystallized as hardware’ because of the core system. It isn’t seen much today, since it is still incredibly bulky, but at the time of its use it was revolutionary.

Core Memory

Core memory is the older sibling of core rope memory, and it stores less. However, the people who got to work with it call it one of the most reliable forms of memory out there! Core memory works much the same as core rope memory, where the bits are stored in cores.

However, the formats are different. If core rope memory is like a binary-encoded scarf, core memory is more like a rug. Thin threads made of conductive material are woven into a grid pattern, with cores suspended where the threads cross each other. The computer understands these threads as address lines, so asking for a specific bit to be read is as simple as locating the X and Y address of the core. A third set of lines, the sense lines, runs through each core on the diagonal, and this is the thread that does the actual reading.

When asked to read, the computer sends a current down the address lines to force the selected core to zero, and watches the sense line to see if the core flips its magnetic polarity. If it doesn't flip, it was holding a zero; if it does, it was holding a one – and it has now been flipped to zero by the reading process. This method is known as 'destructive reading' as a result; the computer compensates by flipping the bit back to where it was after the read. Due to its magnetic nature, the core then keeps this info even after power to it is cut!
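The read-then-restore cycle can be sketched as a toy model (not a hardware simulation):

```python
# Destructive read: reading forces a core to 0; if it flipped, it held
# a 1, and the controller writes the 1 back afterwards.
class CoreMemory:
    def __init__(self, rows: int, cols: int):
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, x: int, y: int, bit: int) -> None:
        self.cores[x][y] = bit

    def read(self, x: int, y: int) -> int:
        bit = self.cores[x][y]
        self.cores[x][y] = 0          # the read itself zeroes the core
        if bit:
            self.write(x, y, 1)       # compensate: restore the 1
        return bit

mem = CoreMemory(4, 4)
mem.write(2, 3, 1)
print(mem.read(2, 3), mem.read(2, 3))   # the write-back keeps the bit alive
```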

This link here is an excellent, interactive diagram of the system.

Even though this grid design improved the bit-to-space ratio, core memory still aged out of the market. With the price of bits decreasing rapidly, core memory got smaller and smaller, but the nature of its assembly meant it was almost always done by hand – all competitors had to do was match the size and win out on labor. Soon, its main market was taken over by semiconductor chips, which are still used today.

Magnetic Bubbles

Magnetic memory has had strange branches grow off the central tree of progress, and magnetic bubble memory is one of those strange shoots. Andrew Bobeck (who also developed other forms of memory at AT&T's Bell Labs) invented bubble memory. It never took off in the same way other magnetic memory styles did, although it was revolutionary for its compact size – before the next big leap in technology, people thought this was the big leap. It was effectively shock-proof! Unfortunately, better DRAM chips took off shortly after it hit the market and crushed bubble memory with improved efficiency.

Anyway, bubble memory worked by moving the bit to be read to the edge of the chip via magnets. The magnetic charge itself is what moves the bits, much in the same way electrons move along a wire when charge is applied, so nothing physically moves within the chip! It was cool tech, and it did reduce space; it just didn't hold up to semiconductor memory chips. Bubble memory saw a spike in use during a chip shortage, but it was so fiddly that as soon as DRAM chips were available again, it went out of style.

Semi-Conductor DRAM – Honorable Mention

DRAM chips are a lot like core memory, in that the device reads the state of a physical object to determine what the bit readout is. In semiconductor chips, that physical object is a tiny capacitor, hooked up to a tiny transistor, on semiconductive metal-oxide material. Instead of determining magnetic state, the device checks whether the capacitor is discharged or not. No charge = 0, yes charge = 1. These chips aren't technically magnetic, but since they've killed so many of the other options, here they are!

DRAM stands for Dynamic Random-Access Memory, and it means that the memory can be accessed randomly instead of linearly. As long as the computer knows where the data’s stored, it’s able to pull it without pulling other files first. They’re still being sold today!
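Unlike core memory, though, the charge on those capacitors leaks away, which is why DRAM has to be refreshed constantly. A toy model (thresholds and leak rates invented for illustration, not real silicon):

```python
# A DRAM cell as a leaky capacitor: it must be refreshed before its
# charge decays below the read threshold, or the bit is lost.
class DramCell:
    THRESHOLD = 0.5

    def __init__(self):
        self.charge = 0.0

    def write(self, bit: int) -> None:
        self.charge = 1.0 if bit else 0.0

    def leak(self, factor: float = 0.8) -> None:
        self.charge *= factor          # charge bleeds away over time

    def refresh(self) -> None:
        self.write(self.read())        # read and write back, like real refresh

    def read(self) -> int:
        return 1 if self.charge > self.THRESHOLD else 0

cell = DramCell()
cell.write(1)
cell.leak()
cell.refresh()                         # refreshed in time: the bit survives
print(cell.read())
for _ in range(5):
    cell.leak()                        # no refresh: 0.8**5 ~ 0.33 < 0.5
print(cell.read())                     # the bit has decayed to 0
```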

Magnetic Disk (Hard Disk Drive)

Hard drives work more like tape than core memory. A hard drive is a platter (or a stack of platters) with a read-write head hovering above it. When you save data, the head magnetizes areas in binary to represent that information. When you read or recover that data, the head interprets those areas as bits, where the polarity of each magnetized zone is either a zero or a one.

The zones of magnetization are incredibly tiny, which makes hard drives one of the more demanding memory forms out there, both now and back then.

Early hard drives could suffer from ‘de-magnetization’, where a magnetic disk’s domains were too close and gradually drew each other out of position, slowly erasing the information on the disk. This meant that the disks had to be bigger to hold the data (like everything else at the time) until better materials for data storage came along. Even though they held more capacity at launch, they were passed over for smaller and more stable stuff like tapes and core memory. The very early drives developed by IBM were huge. Like, washing machine huge. They didn’t respond to requests for data very quickly, either, which further pushed reliance on tape and core technology.

Over time, hard disks improved dramatically. Instead of magnetic zones being arranged end-to-end, storing them vertically next to each other created even denser data storage, enough to outcompete other forms of media storage entirely. Especially small hard drives also come with a second layer of non-magnetizable material between the first layer and a third layer of reverse-magnetized ‘reinforcement’ which keeps the data aligned right. This enables even more data capacity to be crammed into the disks!

Sometime in the 80s, hard drives finally became feasible for personal computers, and since then they've been the standard. SSDs, which have no moving parts whatsoever, are beginning to gain ground in the market, but due to their different storage techniques they can't be truly, irrevocably erased the way hard drives can. Hard drives are going to stick around a while as a result, especially in the medical and military industries!

Sources: (all primary sources regarding carousel memory are in Swedish)

Varieties of Screens

Elizabeth Technology December 7, 2023

There are many different kinds of screens. From gigantic vacuum-tube TVs to the flattest of flat home theater displays, TVs come in all shapes and sizes.

LCD: Liquid Crystal Display – Big Screen, Little Equipment

LCDs, or liquid crystal displays, are what they sound like: a material that has traits of both liquids and crystals is manipulated with an electric current applied by a panel behind the crystal layer. An LED backlight behind that lights it all up so the colors are visible. LCD displays don't handle heat well, and they're fragile: you can't put them next to or above a fireplace, you can't clean them with most regular cleaners, and you especially can't drop them. Videos of people running into their TVs with an AR headset on, or throwing a Wii remote into the TV during a virtual bowling game, demonstrate the spiderweb effect even minor impacts can cause on-screen.

But the screens are getting massive. A more delicate device is a tradeoff many people are fine with making, if the trend of larger, sleeker smartphones is any indication. For comparison, a projection screen TV is probably the closest someone in the 1980s could get to the modern flat screen TV. At 50 inches, and adjusted for inflation, it cost about $3,100.

An 82 inch TV from LG currently costs about $1,500 on Amazon. Technology!

LED: Light-Emitting Diodes

The Sphere in our local Las Vegas is currently the largest LED display in the world! LED displays are a common choice for external signs. They’re cheap and easy to manage outdoors, so they’re a great choice for light-up billboards – here in Las Vegas, most casinos on the Strip have one outside for their advertising. However, since the individual components making up each ‘pixel’ or each little square of colors are pretty large, they’re not usually the first choice for indoor, TV electronics – the gaps between each diode cluster are big enough to be visible, and they put out a lot of light.

OLEDs are becoming a more popular screen choice because the gaps are eliminated, but if an image is displayed on them long-term, they're prone to 'burn-in', where the image becomes permanently etched into the screen. As a result, LCD displays are more popular in cases like digital menus and airport queues.

LEDs don't have many weaknesses that aren't also shared by LCD screens – the major one is that burn-in, but for big displays like casino signs, that's not an issue. Panels going out and creating wrong-colored squares in the middle of the board are an issue, but thanks to the modular design of LED panels, minor problems don't kill the entire screen.

Plasma Screen

A plasma screen TV works by exciting little pockets of ionized gas into plasma, which produces colors. These were all the rage for a while, but they're also sensitive to heat – and when LCDs caught up, they were cheaper to make and easier to dispose of, so plasma screens dipped in popularity. They're still high-definition, and while new plasma TVs are no longer widely manufactured, they're easy to find secondhand – so nowadays it comes down to a matter of preference, not price or size.

Rear Projection TV: Big Screen, Big Equipment

These screens were huge, and the speakers were built in to face the viewer at the bottom of the screen. Rear projection TVs were the intermediate step between CRTs and LCDs, and they worked by beaming light from the source of choice to the screen using a system of lenses, magnifying the image. CRTs had reached their max size, but LCD panels weren’t anywhere near large enough by themselves yet – the rear projection TV smoothed the transition between the two while also providing a larger screen than previous TVs. The one I grew up with was gigantic, even at the time we had it. Scratches in the fabric covering the speaker area were the only worry. The TV itself was nigh indestructible, and impossible to knock over.

Over time, the screen we had became outdated. It didn’t have enough ports for all the adaptors it would have taken to keep it in line with new plugins: VCRs and DVRs had different requirements, and so did the Xbox and the Xbox 360. Eventually a smaller (but much thinner) screen won out. Everything could just be directly plugged into the TV instead of screwing with the jack hydra the rear-projection required. The price of progress.

CRTs and Degaussing

With the development of iron ships, navigators discovered a problem: large quantities of iron could mess with the compass and other tools relying on the Earth's magnetic poles to function. Even worse, with WWII on the horizon, the magnetic signature of a ship meant that weapons could be designed around it – underwater mines, specifically, were geared to detect the field and then go off. So degaussing was invented! De-magnetizing the ship meant mines could no longer rely on its field as a trigger.

Cathode ray tube displays (or CRT displays for short) are easily disturbed by magnets – the colors turn funny shades when you hold a magnet too close. The same technology used to protect ships was used to degauss CRT displays and return them to their former full-color glory. Eventually, degaussing coils were included within the device itself, which causes that "thunk" and then hum when the screen is flipped on. The coil fires every time the device is turned on, which keeps the image from gradually degrading if the screen is kept near other devices with magnetic fields.

That doesn’t mean CRTs are immune to breakage: flicking the switch on and off repeatedly and too quickly may break the mechanism that does the degaussing, and you’re back to using an external degausser.


What Is Short-Circuiting?

Elizabeth Technology December 5, 2023

What’s a Short Circuit?

A short circuit is when something comes into contact with an electric current and unintentionally gives it a shorter path. For example, sticking a fork into an active toaster will redirect the electricity through the fork, because it's an easier path. Fun fact: the reason the coils inside the toaster get hot is that the path the electricity normally travels through has a lot of resistance. The fork is a much shorter path, but the other end is usually connected to a human body – which doesn't handle electricity running through it very well. Old-fashioned incandescent lightbulbs work on the same resistance principle, but they're juiced up so much that they produce both heat and light.
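Ohm's law makes the danger concrete: current is voltage divided by resistance, so a low-resistance path draws enormously more current. (The values below are illustrative, not measurements of any particular toaster.)

```python
# I = V / R: the lower the resistance of the path, the more current flows.
VOLTS = 120
heater_coil_ohms = 14.4    # roughly a 1,000 W toaster element
short_path_ohms = 0.5      # hypothetical low-resistance short (e.g. a fork)

coil_amps = VOLTS / heater_coil_ohms    # ~8.3 A of intended current
short_amps = VOLTS / short_path_ohms    # 240 A: far beyond a 15 A breaker
print(f"coil: {coil_amps:.1f} A, short: {short_amps:.0f} A")
```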

So why does this break things inside the computer? Well, not everything in the computer is meant to run off of 120V of pure outlet-supplied house power. If that flow of electricity comes into contact with a piece of hardware not made to handle it, the hardware fails, sometimes irreparably.

Static shock on the very delicate parts can short circuit them too. Be sure you’re grounded and charge-free before touching the computer’s insides!

What’s a Hardware Failure?

The difference between hardware and software is, basically, the difference between the machine and the things it does. The CD tray is hardware, but the disc it holds contains software. A video game is software; the controller you use to interact with it is hardware! Music files on your computer? Software. Speakers to listen to them? Hardware! Hardware failure can be anything from the Wi-Fi receiver getting bent out of shape, to the CD tray getting stuck, to pins or solder coming loose inside the machine. And while software failure can be caused by hardware trouble, it can also happen spontaneously. If the computer blue screens? That's a software failure. If the fan starts to sound like it's hitting something repeatedly? Hardware!

The really fun issues are the ones that could be both: is your screen showing funny colors because a cord is pinched, or because something important in the software for the display was deleted?

What’s a Kernel Failure?

A kernel failure is when things happen to confuse the kernel, the essential bit of software that keeps programs from trying to pull the same file all at once. If you’ve ever worked with Microsoft Word, you might’ve experienced trying (and being unable) to move a file in the file selector while the document is still open. The kernel doesn’t control that, but it’s a similar principle. The file selector won’t move the doc as long as it’s open because you’re also using it – more specifically, Word is using it and can’t share. You have to close the document to let the file selector ‘use’ it to move it.

The kernel operates on similar principles: when one thing is using essential software, the kernel keeps other things from using that software until the first program is done with it. It keeps a schedule of what programs get to use which assets and when. While that might make it sound like it’s a software thing, both hardware and software can cause kernel panics. Incompatible driver extensions and incompatible or corrupted RAM are two of the most common causes.
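That scheduling idea can be sketched with an ordinary lock. Here Python's `threading.Lock` plays the kernel's role, making the second "program" wait its turn on a shared resource (an analogy for the principle, not how a real kernel is written):

```python
import threading

# The lock stands in for the kernel's bookkeeping on one shared asset.
resource_lock = threading.Lock()
log = []

def program(name: str) -> None:
    with resource_lock:               # blocks until the resource is free
        log.append(f"{name} start")
        log.append(f"{name} done")    # nothing else ran in between

threads = [threading.Thread(target=program, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(log)   # each program's start/done pair always stays together
```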

What’s a Blue Screen Error, and What’s a Crash?

If you've ever owned or worked long-term on a Windows computer, you might have seen this one before: the Blue Screen of Death. A blue screen happens when an illegal operation confuses the computer and causes it to shut down. For example, if a program's permissions are read-only, but another program tries to write to it, the computer may bluescreen and restart. If a program tries to use a piece of software reserved for the operating system, the computer blue-screens and restarts. Permissions violations and illegal operations both cause blue-screening, but hardware failure is sometimes a cause too.

A crash is a pretty broad term. Anything that brings the computer to a stop can be a crash. Overloading it? Crash. Blue screen? Considered a crash. It shuts off randomly, and the tech can’t tell what caused it? Crash!


(original page no longer available – web archiving service is fundraising as of December 2020, you have been warned)