In Terms of Memory

Elizabeth History, Innovation April 23, 2021

The First Bit of Data


A bit is a single binary digit; the word itself is a contraction of "binary digit". A bit is the simplest possible data a machine can read, and is either a 1 or a 0. A yes or a no. True or false. The bit has been around for longer than computers, originating with the punched cards that analog machines "read" as far back as the 1700s.




If you’ve recently upgraded to Windows 10, you may recall having to check whether your computer is 32 bit or 64 bit. The number describes how much memory the processor can address: is it equipped to use 32 consecutive bits of data as an address, or 64? A 32 bit computer has fewer possible memory addresses in its CPU registers, 2^32 of them, which works out to not much more than 4 GB of addressable memory. A 64 bit computer can address 2^64 locations, which in theory is around 16 exabytes, far more than any real machine actually installs. This doesn’t mean 32 bit computers can only store 4 GB of data; the limit applies to memory addresses, not disk space. The files themselves can be nearly any size as long as there’s storage available for them.
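The arithmetic behind those limits is easy to check. Here's a quick sketch in Python (a rough illustration using binary gigabytes; real machines reserve chunks of the address space for other things):

```python
# Rough sketch: how many bytes a 32-bit vs 64-bit address width can name.

def addressable_bytes(bits):
    """Number of distinct byte addresses with the given address width."""
    return 2 ** bits

gib = 1024 ** 3  # bytes in one binary gigabyte

print(addressable_bytes(32) / gib)   # 4.0 -> the familiar ~4 GB ceiling
print(addressable_bytes(64) / gib)   # ~1.7e10 GB: far beyond any real machine
```

The 64-bit figure is why nobody actually ships a computer with a "full" 64-bit memory; the address space is simply bigger than any hardware needs.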


Then, a Byte


A byte is usually eight bits, in compliance with international standard – but it didn’t always have to be. Early on, a byte was as long as it needed to be to represent a character, usually somewhere between two and ten bits, with exceptions down to one and up to forty-eight bits for certain characters. Eight-bit bytes became the standard because of their convenience for the new generation of microprocessors in the 70s: eight bits give 256 possible arrangements of ones and zeroes. Sixteen bits would give far more combinations than the character sets of the day needed, while four bits would only give 16, meaning phrases of bits would have to be combined anyway to cover even a basic alphabet.




Eight sounds like the perfect balance of length and possible complexity, at least with the benefit of hindsight. The government had struggled with incompatible systems across branches due to byte size before 8-bit came along. ASCII, a seven-bit code, was the compromise, and when commercial microprocessors arrived in the 1970s, extended ASCII became a second compromise so that commercial and government systems could communicate.

However, not all extended ASCII versions contained the same additions, so Unicode was formed later to try and bridge all the gaps between versions. Unicode, a character encoding standard that includes the ASCII characters as its first 128 code points, stores text in eight-bit units in its most common form (UTF-8), and it’s the most widespread character encoding out there. You’ll still run into ASCII’s limits, too – if you’ve ever opened an article and seen little boxes where characters should be, that’s because the text was written with a bigger character set than the reader understood. The reader doesn’t know what goes there, so it puts a placeholder!
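As a small illustration (Python makes a handy sandbox here), this is what happens when text stored as UTF-8 is read back by a strict ASCII decoder – the replacement characters below play the role of those little boxes:

```python
# Why a document written in one encoding breaks when read as plain ASCII.

text = "naïve café"            # contains non-ASCII characters
raw = text.encode("utf-8")     # Unicode text stored as 8-bit bytes

# Decoding the same bytes as strict ASCII fails outright...
try:
    raw.decode("ascii")
except UnicodeDecodeError as err:
    print("ASCII can't read it:", err.reason)

# ...so lenient readers substitute a placeholder for each bad byte instead.
print(raw.decode("ascii", errors="replace"))
```

Each accented letter takes two bytes in UTF-8, so a strict seven-bit reader trips over every one of them.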




1,000 bytes of storage form a kilobyte, or KB. This is the smallest unit of measure that the average computer user is likely to see written as a unit on their device – not much can be done with less than 1,000 bytes. The smallest document I can currently find on my device is an Excel file with two sheets and no equations in it. That takes up 9 KB. A downloadable “pen” for an art program on my device takes up 2 KB.

PCs before Windows had about 640 KB of conventional memory to work with, not including memory dedicated to essential operations.

The original Donkey Kong machines had approximately 20 kilobytes of content for the entire game.




A megabyte is 1 million bytes, or 1,000 kilobytes. Computers had made some progress post-relays, moving to hard disks for storage. IBM’s first system containing a megabyte or two of disk storage, built around the IBM 355 disk unit, was huge. It was also one of the first to use disk drives, which read faster than tapes. In 1970, if users didn’t want a fridge, they could invest in the now desk-sized 3 million bytes on IBM’s Model 165 computers, an improvement over GE’s 2.3 million bytes the year before – and the year before that, Univac had unveiled a new machine with separate cores tied together to give users between 14 and 58 megabytes of capacity, at the cost of space. IBM’s System/360 could reach up to 233 megabytes with auxiliary storage, but its size was…prohibitive, reminiscent of that first disk-based system.

Tapes and drums were competitive with the disk format for a while, but ultimately disk and solid state improved faster and won out (right now it’s looking more and more like SSDs, those solid state drives, will outcompete disks in the future too). During the 80s, the technology improved so much that hard disks became standard (IBM released a home computer with 10 MB of storage in 1983) and floppy disks acted as media transport.

DOOM comes out in 1993 and takes up 2.39 MB for its downloadable file, with smaller, DLC-like packs of fan-created mods coming out along the way.




A gigabyte is 1 billion bytes, or 1,000 megabytes. In 1980, IBM releases another fridge – but it stores up to a gigabyte of information! According to the Merriam-Webster dictionary, you can pronounce gigabyte as “jig-ga-bite”, which just… feels wrong. Back in 1974, IBM had released a 20-foot-long beast of a storage system that stored up to 236 GB of data on magnetic tape.

In 2000, the first USB sticks (memory sticks, jump drives, etc.) are released to the public with 8 megabyte capacities, and they’re so convenient that floppy drives begin disappearing from computer designs in favor of USB ports. USB sticks then improve rapidly, and soon have capacities of one, two, and four gigabytes while floppies struggle to keep up.

Besides being smaller and harder to break, those USB sticks also store more. Where the first USB sticks held 8 MB, the standard floppy disk of the time could only hold 1.44 MB of data. Small as DOOM is, it would take two floppy disks to hold all of it, but a USB stick only took one. By 2009, USB sticks with capacities of 256 GB were available on the market. That’s nearly 178,000 floppy disks.
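The floppy comparison is easy to verify with a quick back-of-the-envelope sketch (treating the 256 GB stick as 256,000 MB for round numbers):

```python
import math

# Sizes in megabytes, from the figures above.
doom_mb = 2.39
floppy_mb = 1.44
usb_2009_mb = 256_000

# DOOM doesn't quite fit on one floppy, so round up.
floppies_for_doom = math.ceil(doom_mb / floppy_mb)
print(floppies_for_doom)            # 2 disks

# How many floppies a 2009-era 256 GB stick replaces.
floppies_per_stick = usb_2009_mb / floppy_mb
print(round(floppies_per_stick))    # ~177,778 disks
```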




A terabyte is 1 trillion bytes, or 1,000 gigabytes. The first commercial drive with a capacity of one terabyte was sold in 2007 by Hitachi, a Japanese construction and electronics company. The movie Interstellar, released in 2014, featured a depiction of a black hole known as Gargantua – and became famous when it closely resembled a later picture of an actual black hole taken by the Event Horizon Telescope. A ring of light surrounds the black hole in two directions, one due to friction-heated material Gargantua has accumulated, one due to the lensing of light around it. The gravity is so intense that light itself is pulled into orbit around Gargantua’s event horizon and kept there. It took 800 terabytes to fully render the movie and make Gargantua somewhat accurate in terms of light-lensing.


A petabyte is 1 quadrillion bytes, or 1,000 terabytes. This is typically cluster storage, and while it’s available for purchase, it’s very expensive for the average consumer. For comparison, while rendering Interstellar took 800 terabytes, storing it at standard quality takes about 1/200th of a terabyte, roughly 5 GB. You could store approximately 200,000 DVD-quality copies of Interstellar on a petabyte. It took a little less than 5 petabytes of raw data to take a picture of the real black hole, at the center of M87.
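Those numbers check out with a one-liner (decimal units assumed throughout, matching the definitions above):

```python
# Back-of-the-envelope for the petabyte comparison: 1 PB = 1,000 TB.

petabyte_tb = 1000
movie_tb = 1 / 200        # ~5 GB for one DVD-quality copy of Interstellar

copies_per_pb = petabyte_tb / movie_tb
print(copies_per_pb)      # 200000.0 copies
```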



Attempts at Media Storage That Didn’t Get Big

Elizabeth History April 21, 2021

CEDs: Like Vinyl for Video, but more expensive


The CED (or Capacitance Electronic Disc) was a disc that stored pictures in the grooves of its surface, like a video/audio form of the vinyl record. It was expensive to produce, however, and just like vinyl it could degrade after being played too many times, since the reader physically touched the disc to read it. CED tech was also extremely sensitive to dust, even more so than vinyl records – it took a specialized caddy just to keep contaminants out! Consumers may have thought the idea was great, but the upfront cost was just too much for the average Joe, especially with more affordable media types (like VHS tapes) already on the horizon.

CEDs were still being produced even as their manufacturer, RCA, announced the format’s cancellation. That understandably led to a dip in profits while manufacturing wound down, and nobody wanted to pick the format back up. CEDs were a fine idea, but much like the eight-track, they were expensive to make and not widely demanded.


Optical Cards: Like a CD-ROM, But Worse


The optical card briefly appeared as an alternative to CD-ROMs (ROM here stands for Read-Only Memory). It’s very cool in theory – it’s flexible, it’s sturdy, and its write-once design makes the data tamper-resistant! With a capacity of several megabytes, it could make a perfect ID card, storing info for instant access. However, you don’t see many of them today. Why not? Optical cards seem like a perfect solution for a number of things.

It’s difficult to find a solid answer online, but my theory is that it did stuff that other products already did.

By the time it came out, it was easier to just scan a code linked to files in the computer than it was to manufacture a card with that unchanging data inside of it. For example: a barcode. There’s a reason barcodes win out over things like RFIDs for inexpensive(!) goods – adding in all that tech is just not worth the price when the computer can also do the trick by itself. Do you invest in 500 small cards with electronics inside and a machine to read them, or 500 plastic business cards with a barcode, and a machine to read them? One’s going to be much cheaper.

Besides, magnetic stripe cards were already on the market, and machines could already read them. It was a short jump to include more info on the card that everybody already had a machine for, so magnetic cards dominated over opticals.

The other part of it (which information online will verify) was that storage was getting cheap! So cheap that optical cards fell out of use for other forms of storage, too. Cameras are one example: Canon experimented with optical cards there, but SD cards could soon hold more than even CDs, so an optical card had no chance in the race. That’s not to say optical cards aren’t used at all, but they sit in an intersection that other products can fill with minimal additional effort. Legacy machines and certain companies use them, but they’re not very popular.

Good theory, niche too small.


Bubble Memory: Like A Magnetic DRAM Chip, but worse


Bubble memory was supposed to be a more compact, sturdier replacement for other memory types. Unfortunately, bubble memory sat at the worst intersection of expensive and power-hungry – even where it outperformed DRAM chips, other semiconductor memory, or hard disks in one field or another, everything else wrong with it dragged it down to second-rate competitor status. Not to mention, the main producers of bubble memory drives never got manufacturing down to a science, so it was prone to breakage and bugs even in the niches where it should have been competitive, before the others came along.

It got some use because it popped up in the middle of a DRAM chip shortage, and then promptly died back out once DRAM units were back on the shelf alongside other replacements. It was just too fiddly to keep!


Eight-Tracks: Like a cassette, but more niche


If you’re going purely off of legacy, the eight-track is certainly worth mentioning. It’s in this list because other items from the same era survived where the eight-track died. Cassette players in cars are still so widely present that adaptors sell in drugstores, while eight-track adaptors are a specialty item sold online. The last generation of cars with eight-track players is largely off the road, while cars with cassette players were still made into the early aughts. Vinyl records are still sold in physical locations; eight-track tapes are not. Compact cassettes are still sometimes featured in teen movies… eight-tracks are not. Eight-tracks still hold a lot of nostalgia, but getting one playing in this day and age is a massive pain.

It was a great idea, but it was outlived by other media.


ROM Cartridges for Not-Games: Like a floppy disc, but earlier


Once, cartridges were used across the board. Of course they were! They were convenient, and the earliest home computers already had a slot for them! Most people recognize them as video game storage, but they were capable of more than just that. Applications, extra RAM, extra storage – the cartridge, even the ROM-only cartridge, was almost as capable as a USB stick, except for capacity.

Nothing really had that much capacity at the time, though. The computers of the day usually held less memory than a modern bargain-bin USB stick. Other forms of media outstripped the cartridge for basic storage, but it reigned supreme for a few more years in video game media, before floppy discs started taking over there, too.

Its distant descendant, the CD-ROM, held more data more securely, so cartridges started to become outdated when optical media became available for purchase. Even video game consoles switched from cartridges to discs.


Sinclair ZX Microdrive: Like mini-USBs, but too early


A teeny-tiny drive with about 200 inches of magnetic tape inside sounded like a piece of spy equipment when it first launched. The Microdrive was especially small for its time and capacity, although it tended to wear out quickly. As a result, it still struggled to compete with bigger drives despite its many advantages. Other, similar drives released by the competition were in much the same position: the thinner the plastic, the faster it wore out. Smaller devices had either less tape or thinner tape, and most chose thinner. It was the best consumer electronics could do at the time.


Magnetic Drums: Like tape, but bigger


Magnetic tape came out before magnetic drum tech did, although both saw use on release. The primary difference is in the reading: magnetic tape is moved past a single head, while the drum spins beneath several fixed heads. In computers, drum memory was eventually replaced by core memory. In a way, drum memory was the first time hard drives really took shape: hard drives follow many of the same principles, in that the heads stay still while the magnetic (or capacitive) item rotates beneath them, and the machine picks the correct head to see the data it’s looking for. Instead of a stack of discs, it was a single drum, so its capacity understandably wasn’t as great as modern drives’, even if the idea was there.

Drum memory certainly didn’t fail – the military used it for years! IBM even used it up until the 90s in certain machines. However, its limited storage capacity made it a less popular choice than the also-widely-used tape, and then core-memory, for regular consumers who wanted something smaller.




Magnetic Storage Types


Magnetic Tape


The most well-known version of tape-based magnetic storage is the kind used for media. When tape-based recording was first introduced, it revolutionized the talk show and DJ-ing scene of the time (mostly post WWII) because it enabled shows to be recorded and played later, rather than live. Music recording tech already existed, but it required physical interaction from the DJ, so it wasn’t as hands-off as tapes were.

The second-most well-known version is the kind used for computer memory! Data is stored on the tape in the form of little magnetic ‘dots’ that the computer can read as bits. Before each pocket of data dots is a marker that tells the computer how long that pocket should be, so it knows where one set of data ends and the next begins. The polarity of each dot determines its bit value, and the computer can then read all these dots as binary code.
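That "marker before each pocket" idea survives today as length-prefixed records. Here's a toy Python sketch of the scheme (the function names are invented for illustration; real tape formats are more elaborate):

```python
# Toy version of the "pocket marker" scheme: each record on the tape is
# preceded by a one-byte length, so the reader knows where it ends.

def write_tape(records):
    tape = bytearray()
    for rec in records:
        tape.append(len(rec))   # the marker: how long this pocket is
        tape.extend(rec)        # the data 'dots' themselves
    return bytes(tape)

def read_tape(tape):
    records, pos = [], 0
    while pos < len(tape):
        length = tape[pos]                                # read the marker...
        records.append(tape[pos + 1 : pos + 1 + length])  # ...then the pocket
        pos += 1 + length
    return records

tape = write_tape([b"hello", b"tape"])
print(read_tape(tape))   # [b'hello', b'tape']
```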

This method of data storage was a massive breakthrough, and other mediums continue to use the format even today! Tapes are still in use for big stuff – parts of IBM’s library rely on modern tapes, which can now store terabytes of information at a higher density than disks and flash drives alike. Other memory types relying on magnetic domains include hard disks and drums, to name a couple. All that separates them is material and know-how: the better the magnetizing material on the outside, the smaller the domains can get. The better the insulation between the domains and regular old entropy, the more stable the data is!


Carousel Memory


Carousel memory was an attempt at shrinking the space that magnetic tape took, to the extreme. Instead of one very long piece of magnetic tape on a bobbin, the carousel memory system used several smaller reels of tape arranged in a carousel around the central read mechanism. Getting to the right info was as simple as selecting the right reel! This has some issues, as you might imagine: moving parts add complications and an increased risk of mechanical failure to any device, and a device carrying thin, delicate magnetic tape is an especially bad place to start.

However, it wasn’t all bad. Carousel memory was actually quite fast for the time because it didn’t have to rewind or fast-forward as much to get to the right area of code. It could skip feet of tape at a time! This advantage declined as tape tech improved, but it still helped companies trying to squeeze the most life from their machines. The bobbins and individual ribbons were all replaceable, so the tape wasn’t worthless if it got torn or damaged. The carousel itself was also replaceable, so the many moving parts weren’t as much of a curse as they’d be on, say, the first hard disks, which had irreplaceable heads.


Core Rope Memory


Core rope memory featured magnetic grommets, or ‘cores’, on metal ‘ropes’, which were then woven into a fabric the computer could read. In ROM (read-only memory) format, if a wire went through a core, it read as a ‘one’, or a ‘yes’. If it didn’t, it read as a ‘zero’, or a ‘no’. In this way, the fabric is physically coded into binary that the computer can use. ROM’d core-rope memory involved quite a bit of complicated weaving and un-weaving to get the cores in the right spots.
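A toy model of the weave, heavily simplified (one word per wire, names invented for illustration; real Apollo ropes multiplexed many sense wires through each core):

```python
# Toy core-rope ROM: the bit for a given core is 1 if the word's wire
# threads through that core, 0 if it bypasses it.

# Which cores each wire passes through -- fixed at "weaving" time.
weave = {
    "word0": {0, 2, 3},   # wire for word0 threads cores 0, 2, and 3
    "word1": {1, 3},
}

def read_word(wire, n_cores=4):
    """Read a word as bits: 1 where the wire threads the core, else 0."""
    return [1 if core in weave[wire] else 0 for core in range(n_cores)]

print(read_word("word0"))   # [1, 0, 1, 1]
print(read_word("word1"))   # [0, 1, 0, 1]
```

Because the data is literally woven in, it can't be changed after manufacture, which is exactly what made it so sturdy.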

Core rope memory was chosen over tape memory for the Apollo missions, mainly for weight purposes. Tape was great, but not nearly dense or hardy enough for the mission yet, and neither were the other core modules available to NASA. A read-only core-rope module could store as many as 192 bits per core, where erasable core memory could only manage one bit per core. Where each core in the rope module depended on the wires threaded through it to determine a bit’s state, the erasable kind (core memory) read the core’s own magnetic state, not the threads going through it. The final module sent to the moon weighed around 70 pounds in total and read fairly quickly. No tape, core memory, or hard disk available at the time could have matched that weight and speed.

Core-rope memory has its place. It’s very sturdy, and since it relies on the cores to act as bits, it’s possible to visually identify bugs before the memory’s even used, unlike core memory. Both are sometimes called ‘software crystallized as hardware’ because of the core system. It isn’t seen much today, since it is still incredibly bulky, but at the time of its use it was revolutionary.


Core Memory


Core memory is the older sibling of core rope memory, and it stores less. However, the people who got to work with it call it one of the most reliable forms of memory out there! Core memory works much the same as core rope memory, where the bits are stored in cores.

However, the formats are different. If core rope memory is like a binary-encoded scarf, core memory is more like a rug. Thin threads of conductive material are woven into a grid, with cores suspended where the threads cross each other. The computer treats these threads as address lines, so asking for a specific bit is as simple as locating the core’s X and Y address. A third set of lines, the sense lines, runs through each core on the diagonal, and these are the threads that do the actual reading.

When asked to read, the computer sends current down a core’s X and Y address wires, driving it toward the ‘zero’ polarity, and the sense wire detects whether the core flips. If it doesn’t flip, it held a zero. If it does, it held a one, and the reading process has just flipped it to zero. This method is known as ‘destructive reading’ as a result; the computer compensates by writing the one back after the read. Due to its magnetic nature, the core then keeps this info even after power to it is cut!


Even though this improved the bits-per-space ratio, core memory still aged out of the market. With the price of bits decreasing rapidly, core memory got smaller and smaller, but the nature of its assembly meant it was almost always done by hand – all competitors had to do was match the size and win on labor. Soon, its main market was taken over by semiconductor chips, which are still used today.


Magnetic Bubbles


Magnetic memory has had strange branches grow off the central tree of progress, and magnetic bubble memory is one of those strange shoots. It was developed by Andrew Bobeck at Bell Labs (who also worked on other forms of magnetic memory under AT&T). Bubble memory never took off the way other magnetic memory styles did, although it was revolutionary for its compact size – before the next big leap in technology, people thought this was the big leap. It was effectively shock-proof! Unfortunately, better DRAM chips took off shortly after it hit the market and crushed bubble memory with improved efficiency.

Anyway, bubble memory worked by moving the bit to be read to the edge of the chip via magnetic fields. The magnetic field is what moves the bubbles, much the same way charge moves electrons along a wire, so nothing physically moves within the chip! It was cool tech, and it did reduce space; it just didn’t hold up to semiconductor memory chips. Bubble memory saw a spike in use during the DRAM shortage, but it was so fiddly that as soon as DRAM chips were available again, it went out of style.


Semi-Conductor DRAM – Honorable Mention


DRAM chips are a lot like core memory, in that the device reads the state of a physical object to determine the bit. In a semiconductor chip, that physical object is a tiny capacitor, hooked up to a tiny transistor on semiconducting metal-oxide material. Instead of checking magnetic state, the device checks whether the capacitor is charged. No charge = 0, charge = 1. These chips aren’t technically magnetic, but since they’ve killed so many of the other options, here they are!

DRAM stands for Dynamic Random-Access Memory. ‘Random access’ means the memory can be read in any order instead of linearly: as long as the computer knows where the data is stored, it can pull it without reading past other data first. The ‘dynamic’ part refers to the capacitors slowly leaking their charge, which is why the chip has to refresh every cell constantly. They’re still being sold today!
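A tiny simulation of the idea (the leak rate and threshold here are made-up numbers purely for illustration, not real device physics):

```python
# Sketch of the DRAM idea: one capacitor per bit, charge = 1, no charge = 0.
# Real capacitors leak, so DRAM must be refreshed constantly -- the
# "dynamic" part of the name.

cells = [0.0] * 8                 # charge level of 8 capacitor cells

def write(addr, bit):
    cells[addr] = 1.0 if bit else 0.0

def leak():
    """Charge slowly drains away between refreshes."""
    for i, charge in enumerate(cells):
        cells[i] = charge * 0.5

def read(addr, threshold=0.4):
    """Random access: go straight to the cell, no scanning past others."""
    return 1 if cells[addr] > threshold else 0

write(5, 1)
leak()
print(read(5))    # 1 -- still above threshold after one leak step
leak()
print(read(5))    # 0 -- the bit decayed: time to refresh!
```

A real DRAM controller rewrites every cell thousands of times a second so the stored ones never decay this far.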


Magnetic Disk (Hard Disk Drive)


Hard drives work more like tape than core memory. A hard drive is a platter (or a stack of platters) with a read-write head hovering above it. When you save data, the head magnetizes tiny areas of the platter to represent that information in binary. When you read or recover that data, the head interprets those areas as bits, where the polarity of each magnetized zone is either a zero or a one.

The zones of magnetization are incredibly tiny, which makes hard drives one of the more demanding memory forms out there, both now and back then.

Early hard drives could suffer from ‘demagnetization’, where a disk’s magnetic domains were packed too close and gradually pulled each other out of position, slowly erasing the information on the disk. This meant the disks had to be bigger to hold the same data (like everything else at the time) until better storage materials came along. Even though they held more capacity at launch, they were passed over for smaller and more stable tech like tapes and core memory. The very early drives developed by IBM were huge. Like, washing-machine huge. They didn’t respond to requests for data very quickly, either, which further pushed reliance on tape and core technology.

Over time, hard disks improved dramatically. Instead of magnetic zones lying end-to-end along the platter, standing them up perpendicular to the surface created even denser data storage, enough to outcompete other forms of media storage entirely. Especially small hard drives also add a layer of non-magnetizable material between the recording layer and a reverse-magnetized ‘reinforcement’ layer beneath it, which keeps the data aligned correctly. This enables even more capacity to be crammed into the disks!

Sometime in the 80s, hard drives finally became feasible for personal computers, and since then they’ve been the standard. SSDs, which have no moving parts whatsoever, are gaining ground in the market, but because of how they store data they can’t be wiped as irrevocably as hard drives can. Hard drives are going to stick around a while, especially in the medical and military industries, as a result!

