Y2K, for folks who don’t know, was the idea that the world’s computers would attempt to roll over from 1999 to 2000 on New Year’s Eve and fail, because most computers stored the year as two digits instead of four – the computer would see 99 turn to 00 and flip out. Date and time were critical for a lot of automation even back then, so the fear wasn’t unfounded… but the panic might have been a little excessive. Why didn’t it happen?
The Bug Was Real
Programmers, with limited space and time, would look to optimize anywhere they could. The first commercial computers stored years as two digits – memory was incredibly limited, and telling a 1970s computer it had to track four digits instead of two would have shortened its already tiny attention span. Programmers cut the “19” clean off the front and told the computer to take “19XX” as a given. This becomes a problem when the year is A) about to turn to 20XX and B) about to have two zeroes at the end, because of how linear time works. The computer might then freak out, not knowing what to do with 2000. Automations relying on dates could react unpredictably unless programmers got there first.
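To see why two-digit storage is so fragile, here’s a hypothetical sketch (the function name and scenario are mine, not from any real system) of the kind of arithmetic a program might do with only the last two digits of the year:

```python
def years_elapsed(start_yy: int, end_yy: int) -> int:
    """Elapsed years using only the last two digits of each year,
    the storage-saving shortcut described above."""
    return end_yy - start_yy

# A loan opened in 1995, checked in 1999: works fine.
print(years_elapsed(95, 99))  # 4

# The same loan checked in 2000 ("00"): suddenly negative time.
print(years_elapsed(95, 0))   # -95
```

That negative result is exactly the kind of value that turns into a century of late fees or a wildly miscalculated mortgage downstream.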
It was a real bug. Hospital systems relying on date and time, stock systems, automated manufacturing – everyone used the date for something. Documents could become chronologically disorganized. Banking systems could wildly miscalculate mortgage payments. Video rental stores could charge a century’s worth of late fees, not understanding ‘negative time’. Some computers might fall into a death loop, crashing over and over as they tried to understand what year it was.
Echoes of Y2K were seen during the first months of the Covid pandemic: people created the shortages, not the event. Y2K caused an actual water shortage in some spots because people were preparing for the apocalypse by filling up their tubs last minute, assuming their water supply might be interrupted. For what it’s worth now, that’s not a great idea unless you have something to store the water in besides the tub. Water is treated, yes, but if something like an earthquake ruptures the water line, you might end up with contaminated water in the tub, or no water at all. If it’s something that won’t rupture the line, the tub itself is likely harboring some bacteria that could breed in the water over a long period of time – the tub’s where feet go, after all.
Buy your water far ahead of a potential disaster (as in before you can even tell an emergency is going to occur) in a stable, sealed container, because exposure to the open air can make still water go bad over time. Purdue has a good article on the subject here: https://engineering.purdue.edu/SafeWater/drinkinfo/y2kwater.html
And then it was gone. The new year came and went, and suddenly people and media outlets acted like they’d never given in to the panic at all. Things returned to normal. Futurama said it best: “When you do something right, people won’t be sure you’ve done anything at all”. Engineers had fixed the problem so well that people became convinced Y2K was never going to happen in the first place.
How They Fixed It
As with most things, the truth is somewhere between tabloid and denialist. Y2K really could have screwed some stuff up, but a plane wouldn’t have fallen out of the sky because its GPS said it was time traveling. And on the flip side, it didn’t turn into a crisis, not because it wasn’t one, but because software experts fixed it. The sudden public interest isn’t what alerted software engineers to the problem, either. They generally knew, and they’d been working on it for years.
There were a couple of options for fixing the Y2K bug: windowing, which meant the computer would interpret two-digit years within a shifted 100-year window (reading low values as 20XX and the rest as 19XX) instead of assuming 19XX for everything, and full-out reprogramming for four digits. Most programmers went with windowing when they had the option because it was much, much easier (and therefore faster and cheaper) than trying to reprogram a legacy system to understand four digits. It’s important to note here that computers are in a lot of things that don’t have very much memory: parking meters, some cash registers, old gaming systems, etc. But they all need to know what the date and time are to issue receipts and count time properly for calculations. And when I say they don’t have much memory, I mean some of these legacy systems had been in use since the initial programming that led to Y2K – the 1970s. Windowing was sometimes the only viable option.
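A minimal sketch of windowing, assuming a fixed pivot year (the pivot value of 50 here is an illustrative choice; real systems picked whatever split made sense for their data):

```python
PIVOT = 50  # two-digit years below this are treated as 20XX

def window_year(yy: int) -> int:
    """Expand a stored two-digit year into a full year using a
    fixed 100-year window instead of always assuming 19XX."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(window_year(99))  # 1999
print(window_year(0))   # 2000 -- no more rollback to 1900
print(window_year(25))  # 2025
```

The stored data never changes – only the interpretation does – which is why this was so much cheaper than reworking every record and field for four digits.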
This did kick the can down the road, but it bought time for memory storage tech to catch up. Businesses now had time to find an alternative to their legacy system, or decide to just keep windowing the problem until it was no longer feasible, which might be a while. Ultimately, it was fixed. Not perfect, not infallible, but fixed well enough to prevent a mass computer meltdown. Windowed systems may go out intermittently, but they’re not failing all at once and causing a choke.
For devices that needed to switch to four digits anyway for futureproofing (like the computers found in nuclear plants, banks, and utilities), reprogramming took longer. Again, this was a project in the works for months, if not years, and it came with a very hard deadline. When the issue was described, most organizations jumped to fix it. A total of approximately $100 billion was spent by the US alone to prevent the potential collapse.
Smaller issues popped up around date and time at the millennium, but they weren’t as potentially catastrophic as actual Y2K. For example, the leap year problem: in some computer logic, 2000 wasn’t supposed to be a leap year, because every 100 years the leap year is skipped – unless the year is also divisible by 400. Some systems accounted for that first part but not the second, leading to more programming work. Ironically, if they’d just ignored the 100-year rule, the 400-year exception would never have come into play for 2000, but developers were doing their best to avoid having to patch systems again after installing them so close to a new millennium. Being off by a single day in the correct year wasn’t as critical (it meant paperwork being dated incorrectly rather than systems crashing), but it was still kind of annoying to fix on top of the other crises happening at the same time.
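The full Gregorian rule next to the half-implemented version described above – a sketch, with the buggy variant reconstructed from the description rather than from any particular system:

```python
def is_leap(year: int) -> bool:
    """Full Gregorian rule: divisible by 4, except century years,
    except century years divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_buggy(year: int) -> bool:
    """The partial rule some systems shipped with: skips every
    century year, forgetting the 400-year exception."""
    return year % 4 == 0 and year % 100 != 0

print(is_leap(2000))        # True  -- 2000 really was a leap year
print(is_leap_buggy(2000))  # False -- the off-by-one-day bug
print(is_leap(1900))        # False -- both rules agree here
```

Note that a naive “every 4 years” check happens to get 2000 right, which is the irony mentioned above: only the half-fixed systems were wrong.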
Another minor problem: some systems stored the year as a running count from 1900 (so 99 for 1999) and displayed it by tacking “19” onto the front, without limiting the number of characters in the slot. When the count hit 100, this led to some websites displaying “19100” until it was fixed later – alongside the more common failure of simply rolling over to display 1900.
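This counting scheme is similar to how C’s `tm_year` (and early JavaScript’s `getYear()`) measure years since 1900. A hypothetical Python sketch of the string-gluing failure:

```python
def display_year(years_since_1900: int) -> str:
    """Build a display string by prepending '19' to a running count
    of years since 1900 -- fine until the count reaches 100."""
    return "19" + str(years_since_1900)

print(display_year(99))   # "1999" -- fine through the 20th century
print(display_year(100))  # "19100" -- the year 2000, as some pages showed it
```

The count itself was correct; only the display logic assumed it would never grow past two digits.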
Previous Date Failures
Y2K was the most famous potential date failure because of the potential consequences. But it wasn’t the first! This is actually a good thing, as it gave engineers and software experts a good idea of what can and can’t be shoehorned or windowed on a large scale. Windows Vista? No. Registers? Yes.
In the 1960s, storage space was incredibly limited. This led to a miniature Y2K: some computers stored only the final digit of the year, so starting from 1960, 1969 was the highest they could count to. Computers were significantly less widespread then, so it wasn’t as critical a problem as the Y2K described above. Again, it was done to save every scrap of memory possible. By the time 1970 rolled around, memory storage had improved enough for the second digit.
The date 9/9/99 was another mini Y2K: strings of 0000 or 9999 are frequently used in programming as markers telling the computer to take a closer look, since they sit at the lower and upper limits of what a four-digit field can hold. As a result, when a computer sees one, it may react by shutting down the process. 0/0/00 wasn’t a possibility, because the calendar recognized that day and month can’t be 0, so 9999 became the standard for clock errors! 9/9/99 is a real date, though, and thankfully – much like Y2K – programmers of affected systems caught it before it became a problem.
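A hypothetical sketch of how that collision happens when unpadded dates get packed into digit strings (the record format, function names, and messages here are all invented for illustration):

```python
SENTINEL = "9999"  # in-band marker meaning "bad record, stop here"

def packed(date_str: str) -> str:
    """Pack a date by stripping separators, with no zero padding --
    so '9/9/99' collapses to '9999'."""
    return date_str.replace("/", "")

def process_record(date_field: str) -> str:
    """Halt when the sentinel shows up in the packed date field."""
    if SENTINEL in date_field:
        return "halt: bad record"
    return "ok"

print(process_record(packed("12/5/98")))  # ok
print(process_record(packed("9/9/99")))   # halt: bad record
```

A perfectly valid date ends up indistinguishable from the error marker – the hazard of using in-band sentinel values at all.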