Who is this Apple Gadget Built For?

Elizabeth Technology July 11, 2023

Apple’s newest VR headset, the Apple Vision Pro, is technologically impressive – but it’s priced out of reach for many VR enthusiasts. Who is it for, exactly?

The Device Costs As Much as a Used Car

Apple’s Vision Pro headset costs $3,499 before tax. It’s such a gigantic price point that it’s hard not to picture what an average consumer could get elsewhere with that money. Many high schoolers drive cars that cost less than this device – before the pandemic completely destroyed both the new and used car markets, finding a used car that still drove reliably for under $2,000 was possible. Less, even, if that high schooler had family looking to get rid of a beater. You could get four touchscreen Dell computers with i5 processors, 16 GB of RAM, and Windows 11 included for the same price as one Vision Pro headset as of this article’s writing.

This is the latest in a long line of prohibitively priced Apple products. Apple devices were always costly, but they were a sort of costly that made sense – if you really needed a photography-grade camera in your phone, you could save up for an Apple device and be set for years. Lately, though, every device demands a significant investment, whether ‘better’ equipment exists for the task or not.

Technologically Impressive

It was always going to be expensive, though, even without Apple branding. The price reflects what might be a breakthrough in wearable tech, and the device is certainly impressive. It can do the things an iPhone can do as well as simulate virtual reality around the wearer. Images of the device hint that you may be able to walk around with it on without being completely blind behind the visor – the presentation states that this is the first Apple product you look through, not at. The ‘screen’ isn’t literally transparent; instead, cameras pass the outside world through to the displays, so you see reality with the augment layered over it, the way Google wanted to do it years ago but couldn’t make look natural.

It isn’t small or discreet, but “ski goggles” that can run intensive apps without another computer attached are the stuff science fiction writers have been dreaming of! It’s goofy now, sure, but the iPhone was goofy too – the first iPhone was four or five good products mushed into one okay-ish product before it found its footing and started doing things well.

While this is goofy, and expensive, and for now useful mostly for entertainment, it is still impressive. Someday it might be a regular piece of techwear. That depends on what people find it useful for – the true question that determines any product’s life.

How Much Use Are Headsets Anyway?

Most of the advertising seems to suggest this is best for filming videos and consuming content. That’s certainly an increasing part of everyday life for many people, but for the same price, those people could buy a quality TV, a soundbar, and a couch, and have a decent home entertainment system that shows content to more than one person at a time. Even if they just wanted the apps, they could buy at least two of the newest iPhones for the same price as one set of ski goggles. Nobody can agree on whether headsets, augmented reality glasses, and metaverses have real value beyond entertainment.

Potential monetary gain is getting in the way of real assessments! If augmented reality or metaverses ever find their footing, the money made by the people who establish themselves first will be completely insane the same way the first NFT sales were insane. They have a motivation besides advancement of the technology to push this stuff. It’s why Facebook bought Oculus, and then never seemed to do anything with it. Zuckerberg saw the potential for purchasable avatar clothing and virtual storefronts that would have to pay ‘rent’ for the virtual space, which Facebook/Meta could sell for massively inflated prices compared to website domains.

Worse, some of the people pushing hardest for Metaverse successors don’t even think that potential money will last, they just figure a boom-bust cycle is inevitable – the sooner the boom, the sooner they can extract money from people and then bounce before it all comes crashing down.

While that attitude is everywhere within these companies, it gets in the way of making an enjoyable experience for the end user. There is no money to be made until the consumer is having enough fun to spend a couple of dollars on a virtual arcade game, or uses their avatar enough to buy it a funny hat. The only reason so many of the crypto metaverses exist at all is that the funding comes in before they have time to set up all the little microtransactions designed to bleed consumer wallets dry. Once those are in place, the metaverse’s player count usually drops sharply. Not even the worst arcades in the world steal quarters like these places plan to, and every single one thinks it’s the first to have the idea.

 Metaverses are often painted as a sort of cyberpunk wonderland, the future, the inevitable next step in technology, but they never seem to end up getting there because ‘visionaries’ and ‘early adopters’ make promises they can’t keep and slink away with whatever they got first. If the virtual parts of augmented and virtual reality never improve because of this cycle, then there just won’t ever be a stable set of apps and programs to use on the very expensive hardware bought to facilitate it. Apple has the potential to fix the second part of that because it has final say on every app in the app store as well as the funding necessary to make new and exciting apps for the headset should it choose to do so, but the first part is going to take some serious reimagining of the space’s potential.  

In the face of all of that, what can a peripheral do to prove itself worthy of a consumer’s time? Does Apple really believe this headset is the future, or is it banking on customers buying it to use as a status symbol-slash-fashion statement? For that matter, if money is removed from the conversation, if you could just have one, would you want it, and use it if you got it?

What would you use it for?

How Did The Titan Communicate Underwater?

Elizabeth Technology July 6, 2023

For those out of the loop, the Titan was an experimental vessel meant to dive to depths of around 13,000 feet, specifically to visit the final resting place of the Titanic. Tragically, it imploded on the way down on its last voyage, taking all of its crew with it. The Titan was made of carbon fiber and titanium, designed to be steered with a plug-and-play controller, and used an electric engine. You may be wondering why everyone is questioning its navigation system and lack of backup comms.

Firstly, The Pinnacles Of Submersible Technology

To say the Titan was unusual in the field of submersibles is a vast understatement. So to truly get the basics, it’s better to look at everyone else’s submarines and submersibles first, alongside the physics of water. Water, especially salt water, is really good at absorbing radiation! It’s how life was able to survive in the water even when the Earth lacked an ozone layer. The ocean soaks up electromagnetic radiation and disperses it long before it can reach the depths submarines operate at.

This holds true even though most subs aren’t made to reach the deepest parts of an ocean. Strategically, 800 to 1,000 feet is plenty deep enough to hide from surface vessels, so most military subs don’t bother going any deeper. In exploration terms, unmanned vessels equipped with cameras are generally better for observing wildlife or underwater structures, although there are a variety of submarines and submersibles designed to hold people and reach greater depths. James Cameron famously reached the Challenger Deep, the deepest point of the Mariana Trench, in his submersible the Deepsea Challenger, and has made multiple dives to the Titanic.

How do those boats communicate? There are a variety of methods, but one of the easiest ways to communicate with others on the surface is to simply come up and put up an antenna. Air carries radio waves much more easily than water. Some craft use tethers tied to buoys with antennas attached, so they don’t have to come all the way up to the surface themselves. That allows them to stay further down, and further out of sight of surface vessels if so desired. At depth, options are limited. One such option was ELF: Extremely Low Frequency radio waves were used to summon certain submarines to the surface so they could receive longer instructions. ELF communications were a highly specific tool for a highly specific task during the Cold War – they ate up an enormous amount of energy and took a gigantic antenna to broadcast (56 miles of cable total) – and as a result they didn’t see much use, with the original US project site shut down in 2004.

How Did the Titan Get Instructions?

Using a buoy with an antenna attached or a tether is simply not feasible at the depths intended for the Titan. Neither is GPS, or WiFi, or ‘true’ text messaging. Instead, the Titan used acoustic pulses (otherwise known as sonar), where packets of data are sent down as sound waves, and a hydrophone on the receiving party’s vessel is able to catch and decipher the packet for the people inside. All of the messages the Titan team sent or received were meant to be short, coded messages with no room for confusion. A laminated sheet inside the vessel provided translation, according to a photo from David Pogue.

However, such a system, paired with few navigation tools on the craft itself, made it easy for the submersible to get lost. While the craft had sonar, it wasn’t the sort it could use to bounce off of obstacles to navigate, so the crew was entirely reliant on the surface vessel for eyes. Additionally, the two vessels could only talk when the surface ship was directly over the sub. Think of the difference between planning a route using a paper map of the highways and using a GPS – the CEO even compared it to a game of Battleship, where the top ship gave directional instructions using a grid system. Taking an unmarked exit on a highway with a paper map means re-orienting yourself on a landmark to get back on track, but underwater, there just isn’t much to orient with. The Titan was reliant on the topside vessel knowing where it was on their Battleship grid to steer it, and with no real landmarks outside of the sunken ship itself, getting ‘lost’ in the water at those depths happened surprisingly infrequently for how low-information that system was.

Unfortunately, even if the craft had remained intact, there just aren’t great ways to send signals back and forth at that depth and guarantee someone will receive them. Acoustics powerful enough to communicate with anything anywhere on the surface (not just directly overhead) eat up a lot of electricity, and so wouldn’t be a reliable backup if the sub had lost power. Radio waves are out until the craft ascends. ELF only works one way. Once any craft is deep enough in the water, it’s alone!

Podcasts Aren’t Actually so Easy

Elizabeth Technology June 27, 2023

There was a time when podcasts were an obscure form of entertainment. After all, in the early days of the internet, storage space for mobile devices was precious.

The Before Times

Podcasts used to be pretty rare, back when CDs were the main method of data storage. You could get okay-ish radio recordings of professionals who had advice to dispense on a CD, or you could listen to an entire album instead on that same CD.

Podcasts as a format just didn’t make sense. It’s like a radio show, but never aired live? It’s like a TV talk show, but with no footage? It’s… sort of like an audio book… but without premade content? What was it bringing to the table that was new, exactly? The podcast’s first form was the audio blog, and audio blogs did exist, but the people making them had to be pretty darn interesting to compete with the other entertainment available.

Especially with what a hassle it was to even get the things and store them!

It took until downloadable files could be accessed by anyone, in the 2000s, for podcasts to start growing in popularity. In the peak era of talk shows, sitting down to watch an interview was more convenient, and easier to parse. The format was tried and true! The interviewees were always interesting, and always previously vetted. Recording a talk show off of TV could be like a podcast, but that meant recording the entire thing, not just the audio, so stripping out the video just didn’t make sense if it was all already there. The same went for radio shows, which were already doing plenty for that niche. Format transfers were a pain for the average person with an average desktop.

Speaking of average desktops, recording equipment and studio space were also prohibitively expensive. If someone in 2004 wanted to record something, they’d have to either go to a specialty shop or settle for consumer-grade microphones from Best Buy. Their ‘free’ recording space, their house, wasn’t soundproofed unless they went out of their way to make it so. Echoes, interruptions, editing, distributing – this was all studio-level stuff at that point in time, and studios just weren’t interested. Talk shows were live, on the radio, and sometimes available for download on the station’s website if the station’s host company wanted to go through the effort. That was a very powerful if. As a result, the best of the best is what most people got: classic Abbott and Costello bits and tips from self-help guides who were actually professionally trained and licensed to help people. The difficulty of starting a show was both a blessing and a curse.

The Now

Now that high-quality microphones are cheaper than they used to be, and many people have the internet speeds necessary to upload hour-long segments, nearly anybody can start a podcast. Audacity, a sound-editing program, is free to download! OBS will let you record yourself for free. A decent-quality mic with a pop filter no longer costs as much as a gaming console. Of course people are going to try and get into the business.

The problems begin to arise when things like soundproofing, room noise, and echo aren’t considered. Inexperienced beginners set out in echo-y rooms with audible distractions popping in every now and again, and an entire ocean of them is competing for listeners’ attention. Even with the right set-up and a quiet place, they still have to jump the hurdles of adjusting their own mix, making or scripting an intro, cutting out dead space and breathing noises, editing the final file, and finally, uploading it. It sounds so simple to just ‘make a podcast’, but the hidden work often costs more effort than it’s worth.

Not to mention the marketing and the ads, which are why so many people try to jump into podcasts in the first place. Many people misinterpret ‘audio-only’ as ‘easy money’, but it’s really not. The effort to produce something as cleanly made as any of the top podcasts on Spotify is a full-time job in and of itself – and with so many new podcasts, content consumers aren’t going to settle for poor-quality ones anymore. This is bad news for hopefuls aiming at ad money and sponsorships.

The Money

Ad-reads took over YouTube after what is termed the ‘adpocalypse’. Essentially, YouTubers with good records and decent subscriber counts could be solicited to read an ad directly within the video, bypassing the Google Ads system altogether, as that system was much less profitable once advertisers pulled away en masse. The format, however, was tried and true in podcasting long before that. Many podcasts from the 2010s carried ad reads as standard, the same way radio shows did.

Ad-reads are a very good source of money. Incredibly good. Unlike Google Ads, the ads can never be pulled from the video or audio by a third party, which is good for the creator. The ad is also always tied to the content, unlike Google’s rotating reel of pre-roll ads, which is good for the advertiser! The ad is perpetually advertising for them, even if relationships with the creator crumble. Ad-reads are worth more money because of this stability, and as a result, they’re more difficult to attain than the standard YouTube Partnership.

The bigger the podcast, the more likely it is to be approached by an advertiser, and the more money it could potentially earn. Unfortunately, because so many podcasts are opaque about their total listener counts, it’s much harder to gauge how big a show needs to get before it can start pitching itself to advertisers. There’s also a sort of wariness around new and upcoming shows, because followers and download counts can be purchased from shady folks who specialize in bots. 5,000 subscribers might not be 5,000 sets of ears ready for advertisement – the efforts to cheat the system have made the people with money more wary, and raised the bar for new entrants along the way.

Longevity

Of course, the only consistent way to get those necessary followers is to produce consistently good content on a schedule. Not every podcast that does that succeeds, but all of the successful podcasts do that. One good episode? Easy! Two good episodes? Maybe! Three, or four, and then five when you really don’t feel like recording? Episode 6, when you’ve gotten a total of three listeners? It’s tough to find the motivation to continue. The NY Times says that between March and May of last year, only a fifth of existing podcasts released a new episode. That’s abysmal.

The question is whether a new show can keep it going in spite of the work, or in spite of a rocky start, and many just can’t. Luck doesn’t strike every attempt at a show, and podcasts are not as fun and easy as hosts make them seem. For some people, it’s easy to talk with friends for an hour. It may be easy to spend an entire night together gabbing about whatever the current events are. It’s not easy to guide the conversation using pre-written topics, day after day, week after week. How often did you spend two solid hours just talking to people before the pandemic struck? No breaks. Very little dead space. Long stretches of listening and no pauses once it’s your turn to respond.

I would wager most people overestimate the time they can talk about something before repeating themselves, which is why so many podcasts also feature friends and interviews, a niche that’s become overdone. Having another person to bounce info off of is a great idea, but so many podcasters treat interviews as a marketing method instead of an actual interview that sorting out interesting interviews is like finding a needle in a haystack.

And then there are the ‘friend group’ podcasts, which have the same core members week after week. Every issue with scheduling recording time, having a quiet studio, and finding relatable talking points is magnified by however many people are in the group. That being said, they are much easier to run (and more appealing to listeners) than single-person podcasts, or rotating interview podcasts if the host is mediocre. Most radio shows have two or three people for that exact reason. Even then, running out of content is still a very real threat, and if one of the members leaves? The show is as good as over.

Shows like My Brother, My Brother and Me rely on Yahoo Answers as well as audience send-ins to build out content. Beach Too Sandy, Water Too Wet does the same, but with reviews of various locations. Other podcasts with similar formats have all but consumed the niche, and now others trying to get their own podcast off the ground are having to do “X – But With a Twist!” style content. The number of dead shows with premises like the YouTuber Markiplier’s Distractible podcast, or the Joe Rogan Experience, is in the hundreds, because it’s so incredibly easy to make one episode and then bail. People starting podcasts now might only be able to get a reliable listener base if they bring one built up from other projects. Distractible, Very Really Good, Shmanners, etc. all come from people who have successful channels somewhere else.

Sources: https://www.nytimes.com/2019/07/18/style/why-are-there-so-many-podcasts.html

It’s Summer for Computers, Too

Elizabeth Technology June 22, 2023

Listen, sometimes machines get old, and they work too hard, and then they get so hot you don’t want them on your lap while you watch Netflix, so you resort to other methods of cooling your computer. There are right ways, and there are wrong ways.

DON’T: Put Your Machine in the Freezer or Fridge

It sounds like a good idea, but it’s really not. Condensation can form on the inside of the machine, which can then permanently break things as said condensation re-melts and drips onto other components inside your device. Plus, if it’s a systemic issue like a broken fan or overworked CPU, this isn’t actually fixing the issue. You’re going to be taking your machine in and out of the freezer forever!

Cold screws up glues over time, too, meaning internal elements can gradually wiggle their way loose.

As an unrelated hack, freezing gum can usually get it off the bottom of your shoe.

DON’T: Put Ice Packs, Popsicles, or Bags of Ice on or in the Machine

Condensation, once again, can ruin your machine if it drips into the wrong spot. However, ice bags have the added danger of leaking! Ice sometimes has sharp enough points to pierce its own bag. Popsicles, while usually sealed for safety, are not worth the risk of some sharp component in your machine piercing the bag full of sugary dyed liquid. If that doesn’t kill the machine, it will make you wish it had when the keyboard is too sticky to type on quickly.

DON’T: Run Every Program at Once

You shouldn’t be running Minecraft at a high render distance alongside high-settings Overwatch while also running a live 4K YouTube stream in your browser unless you’ve got a super-computer. If your machine even lets you get those programs open and running, but you notice it’s unusually, abysmally hot, those programs might be contributing. You can overload your CPU! If you can’t identify which program specifically is eating up all your CPU’s power, check the Task Manager. Windows devices have a Task Manager that lets you see how much of the RAM, the hard drive, and the CPU each program is using. Just hit Ctrl + Alt + Delete and you’ll reach a menu with Task Manager at the bottom. If you can’t narrow your issue down to a specific program, then restarting the computer may fix whatever background program has gotten stuck in the RAM. It’s a good idea to reboot regularly anyway!
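If you’d rather check from a script than from the Task Manager window, something like the following works. This is a minimal sketch, and it assumes the third-party psutil library is installed (pip install psutil) – that library is my addition, not something the article or Windows itself provides.

```python
import time
import psutil

# List the processes using the most CPU, similar to Task Manager's CPU column.

# Prime the counters; the first cpu_percent() call for a process always returns 0.0.
for proc in psutil.process_iter():
    try:
        proc.cpu_percent(interval=None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(1)  # let the counters accumulate for a second

usage = []
for proc in psutil.process_iter(['name']):
    try:
        usage.append((proc.cpu_percent(interval=None), proc.info['name']))
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

# Print the five hungriest processes.
for cpu, name in sorted(usage, key=lambda item: item[0], reverse=True)[:5]:
    print(f"{cpu:5.1f}%  {name}")
```

Either way, the goal is the same: find the program actually eating the CPU instead of guessing.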

Now that we’re past the don’ts, what should you do? You obviously can’t let it stay hot – that will slowly fry the hard drive. Excessive heat is worse for electronics than cold is, especially the kinds with batteries in them. You should take steps to cool off your machine if it’s getting ridiculously hot.

DO: Use a Fan

There’s a small fan inside of your computer already. If it’s not cutting it, then the next best step is to use a real fan, and just position the intake for your device in front of it. The extra air flow is just doing what the fan inside the device was already doing, but on a bigger scale! You might find that repositioning your computer so the fan will fit by the intake can help cool it down, too – computers in front of windows might be absorbing more heat than you realize.

DO: Use a Specially Designed Cooling Pad

Some companies sell cooling pads, pads that cool the device down externally. These are specially designed to avoid generating condensation inside the device, while still wicking away heat safely. If you can’t get a fan into the area it needs to be, a cooling pad is a solid second option. Unfortunately, due to the shape and size of PC towers, this is generally only feasible for laptops.

DO: Make Sure the Vents Are Clear

If the machine’s pretty young, and the programs on it aren’t too intense for its specs, the reason may be external. Check where its vents are! Especially for PCs. If the tower is pushed right up against the wall, it might not be able to get the airflow it needs. Also, don’t put stickers or decorations over vents. That’s also bad for the vent’s venting power.

Speaking of vents, make sure the vents are cleared of dust, too! Cleaning them improves efficiency.

DO: Restart Every Once in a While

Your computer is doing a lot of things in the background for you. Many programs are still doing things after you close them! Steam, a popular gaming platform, is almost always connected to the internet even when users aren’t looking. It does this at startup, and it keeps an eye on its own connection to let you know if you’ve lost internet. As such, it’s important to occasionally restart, so these programs don’t ‘get stuck’ eating processing power for their own little functions.

DO: Consider a Shop

If the computer’s hot enough to fry eggs, the odds are pretty good that something’s up with the CPU, the fan, or its own internal thermometer, depending on the age of the machine. If you’ve tried everything you can think of to cool it off, or to keep it from getting so hot in the first place, it might be time to visit a shop. At the very least, you should be keeping backups of your files. If the heat eventually kills the machine, a backup saves you a lot of money on very expensive data recovery.

Sources: https://www.crucial.com/support/system-maintenance-cooling

RugPulls are Still a Problem

Elizabeth Technology May 23, 2023

NFTs remain untrustworthy and unregulated.

The Classic RugPull

Believe it or not, when things are only worth as much as a group’s belief in them, not believing in those things makes them stop being worth anything. The first one to really hit pop culture as a rugpull might have been SquidCoin, a cryptocurrency that hit the spotlight for being the worst possible understanding of Squid Game’s messaging about money and capitalism. SquidCoin grew in value, and a 45-degree angle up is very attractive to new, inexperienced investors. Once the SquidCoin people made their money, they sold all of their holdings and left, leaving SquidCoin unsupported and the people who bought it with less money than they started with. The classic rugpull. It was a source of ridicule for people on the outside, who saw what had just happened with clear eyes – for people on the inside, it was easier to brush off this early warning and keep investing in crypto.

Ignoring the warning has not made the problem go away. Rugpulls continue to happen because NFTs and crypto hold next to no actual liability or accountability for their owners. The recent FTX fiasco and the punishment its CEO got are a good step in the right direction, but the majority of crypto schemes aren’t tied to national banks. While thinking that internet creators should be allowed to stay anonymous is certainly good and pure-hearted, it’s also allowed the growth of a culture that shames people for, say, revealing information that was already public (https://www.vice.com/en/article/akvn5a/bored-apes-buzzfeed-and-the-battle-for-the-future-of-the-internet) or for demanding to know where their money went. Anything less than total positivity and complete freedom for the creators of these projects is scorned. This is the sort of attitude that works for Banksy, not for financial institutions.

Doodled Dragonz

Doodled Dragonz is certainly one of the sourest rugpulls out there. Doodled Dragonz was an NFT project that promised to donate 100% of its profits to charities supporting critically endangered species (later choosing WWF instead), only to run off as soon as all the tokens were sold without making said donation. Abandoning a project doesn’t necessarily mean the NFTs generated by it are worthless (I think you can actually still buy other people’s Doodled Dragonz on SolSea), but it does mean the project’s not going to get any new support. If these become afflicted by link rot, they’ll just be gone forever. Besides, a market made purely of vibes isn’t going to support projects with no hype around them, which is ironic because these tokens are now actually scarce.

 In an NFT sense, that is.

Since you can still right-click/save.

Users who bought Doodled Dragonz were left with a cheap NFT in their wallets that didn’t even contribute to the charities it promised it would.

Later, the same team did the exact same thing under the alias of Balloonsville, a similar, cutesy project with balloons instead of dragons that also rugpulled. They then criticized the platform for letting this happen, which led to the platform (Magic Eden, an NFT marketplace that deals in SOL, or Solana cryptocurrency) promising not to allow anonymous projects anymore. Again, anonymity is super cool, but it has a lot of potential for abuse when literally anyone can promise to sell things with no real consequences. Think about it – no other market lets things be so completely untraceable as crypto. Even places like AliExpress can be held accountable by their payment platforms! It’s hard to overstate just how little accountability crypto creators have right now.

At least this time, buyers who bought the NFTs at the first launch were allowed a refund if they sold for less than they originally paid – the people who bought the tokens after the primary sale and tried to resell them were ineligible for the refund, though, so it still hurt a pretty good-sized chunk of people.

Literally named after the Guy

 MadoffCoin was clearly a scam from the get-go, but it promised it wasn’t. Does that mean anything, people wondered? In this totally anonymous, completely untraceable interaction, doesn’t a promise matter at all? The answer was no, although that was apparently only obvious to people outside the hype sphere. MadoffCoin rug-pulled almost immediately after starting up. The website’s been deactivated and the subreddit is close to dead. However, there’s a second MadoffCoin (spelled Madoff Coin this time) attempting to get started, promising that it’s designed to punish people who pull away from the project before everyone gets their investment back. For sure. Totally.

I’d hate to say this wouldn’t work twice, because it has, and it continues to – the problem is that not everyone interested in crypto is fully informed of the risks, by design. You don’t own the art you buy, and the coins are only expensive because of speculation. Even when buyers do understand that, their bluster often gets the best of them – a user on Tumblr compared what’s happening with MadoffCoin to Wile E. Coyote painting a tunnel himself and then running into it, insisting it’s still a real tunnel even as the Road Runner watches him make a fool of himself. As long as people keep ignoring warning signs and advocating for a lack of responsibility for the owners, this is going to continue to happen.

Sources:

https://www.vice.com/en/article/akvn5a/bored-apes-buzzfeed-and-the-battle-for-the-future-of-the-internet

https://web3isgoinggreat.com/?id=2022-02-06-2

Tetris

Elizabeth Technology May 18, 2023

Tetris, released in the 1980s (the first version came out in 1985, with other countries receiving it from 1986-1988), is one of the most viral games ever. It’s simple enough that children can play it, but complex enough to keep players of all ages entertained for hours. It doesn’t require that the player speak any one language – the mechanics are simple enough not to need instructions. And, most importantly, it’s fun. Winning is satisfying. It gets harder the longer you play, so you’re never bored with the difficulty.

Versions of Tetris exist everywhere now. The game itself is as endlessly versatile as eggs. Physics-based. Efficiency-based. Tetris games that want you to fill the board completely, like a puzzle. Tetris games that allow you to squeeze pieces in between gaps that are too small, and Tetris games that don’t. Tetris games that troll you. Competitive Tetris, where cleared lines are dumped onto your enemies. Tetris games where the Tetriminos have five blocks instead of four. The game is endlessly updateable, and the original remains the most ported game in video game history. Difficult but fair is the standard these games have chased since day one.

Tetris Effect

Some players develop what’s known as the Tetris Effect – they’ve played the game so long that it begins to seep into their dreams, and they unconsciously wait for blocks to start descending from somewhere whenever they aren’t occupied with another task. The Tetris Effect technically refers to any time a person devotes so much time to an activity that it starts to bleed into places it wouldn’t normally be – Rubik’s Cube speed-solvers sometimes involuntarily run through their algorithms in their heads, and chess players may find themselves trying to identify what piece a traffic bollard would be and how it could move on the board.

When you look at it that way, sea legs are part of the Tetris Effect. The Periodic Table in its solved state is as well! Tetris first put a name to the phenomenon because the game was so absorbing that people who weren’t accustomed to the effect were experiencing it for the first time.

Repetitive Games and PTSD

Simple puzzle games have benefits beyond immediate entertainment. Studies seem to suggest that repetitive games like Tetris or word games – something easy enough to be attention-absorbing – can help curb the effects of PTSD after a traumatic event, like a car crash. Specifically, games like Tetris help combat involuntary flashbacks. Treating PTSD with CBT after it develops shows promise, but intervening before it has a chance to really take root would be better. The study size in the initial research was small, but it shows promise: https://www.psych.ox.ac.uk/news/tetris-used-to-prevent-post-traumatic-stress-symptoms

DOOM (The Game) And Porting

Elizabeth Technology May 16, 2023

DOOM is an incredible game that is famous for running on everything. The game’s code only takes up 2.39 MB (it takes a little bit more to run it), and its method of recording player inputs as demos instead of video – which let anyone play back another player’s run in a time when recording games as video and uploading it usually looked like pixelated garbage – made it extremely popular among people who love speedrunning games competitively.

All that said, the original version of the game, run on an emulator, functions really well. What about the ports to other platforms?

The Times

Firstly, to ‘port’ anything in software terms means getting it ready to operate on a different system than the one it was first designed for. It’s the process of making the software portable.

Getting DOOM to play on anything is a trivial matter now. But back when DOOM was new and super cool, it wasn’t so easy to move it to handheld game devices or consoles. Picture a game made for the computer – you play it with your keyboard and mouse. To get it ready for the Xbox or the PlayStation, the developers have to change how it handles inputs. They may also have to change textures (a console usually plays on a TV screen, which is larger than a computer screen) and how the game handles loading. That takes work. And games weren’t objects of respect at that point. They were time wasters, something to keep the kids indoors if it was too hot or too rainy outside for them to play. A significant number of people involved in the game-making process felt that anything they helped produce just had to be playable, not good. The dropoff of Atari and the ocean of shovelware games lost to time gradually changed that attitude, but DOOM ports to other consoles were an unfortunate victim of it before that happened.
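As a toy illustration of that input problem – a hedged sketch in Python, nothing like the C code DOOM itself was written in, with made-up class and function names – a port essentially has to swap the input backend while leaving the game logic alone:

```python
# Illustrative only: the shared game logic reads a movement vector and doesn't
# care whether it came from a keyboard or a controller.

class KeyboardMouseInput:
    """PC backend: WASD keys."""
    def read_move(self, pressed_keys):
        dx = ("d" in pressed_keys) - ("a" in pressed_keys)
        dy = ("s" in pressed_keys) - ("w" in pressed_keys)
        return dx, dy

class GamepadInput:
    """Console backend: an analog stick, with a dead zone."""
    def __init__(self, dead_zone=0.15):
        self.dead_zone = dead_zone

    def read_move(self, stick_xy):
        x, y = stick_xy
        if abs(x) < self.dead_zone:
            x = 0.0
        if abs(y) < self.dead_zone:
            y = 0.0
        return x, y

def update_player(position, backend, raw_input):
    """Shared game logic: it only ever sees a movement vector."""
    dx, dy = backend.read_move(raw_input)
    return position[0] + dx, position[1] + dy

# The same game logic, driven by two different platforms' inputs.
print(update_player((0, 0), KeyboardMouseInput(), {"d", "s"}))
print(update_player((0, 0), GamepadInput(), (0.8, -0.05)))
```

Every backend like that has to be written, tested, and tuned per console, which is exactly the work a rushed port skimps on.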

Rush Jobs

Porting to other consoles was like rebuilding the game, and if you don’t respect the game, you’re going to build a facsimile of it good enough that kids will buy it and stop there.

Take the port to the Super NES, made in 1996 – the game literally does not have the functionality of saving. You have to beat each episode (episodes consist of nine levels each) in its entirety in a single sitting. Bizarrely, some of those episodes won’t let the player alter the game’s difficulty, so playing through on Easy the whole way is not going to happen. It might still have been better than the Sega adaptation two years later, which cut several textures as well as a full episode altogether to make room for the rest of the game! Yeah, you could save, but at what cost? Meanwhile, Atari’s port to its Jaguar console managed to make a passable copy of the game at the expense of only five levels and a lot of texture. But it could run multiplayer if you had a second Jaguar, so that already made it leagues more attractive than other ports at the time. It wasn’t good – it sounded bad and looked sort of ugly – but it was better.

Better Versions

Of course, DOOM had good copies as well! DOOM is surprisingly functional as an app on the Apple App Store. You can’t jump in DOOM, so the controls remained simple enough that players could still see most of their screen back in 2009 when the app released. A couple of years after most of these early ports, in 2001, Nintendo’s Game Boy Advance got a surprisingly playable copy of the original game. The PlayStation version from 1995 did a fantastic job of catching the spirit of the game instead of cutting things for time, even adapting some of the music and lighting so the console could handle it better. Eventually, in 2006, Xbox released a version of the game with multiplayer and 1080p as part of Xbox Live Arcade, and even the Nintendo Switch can play DOOM now.

This isn’t counting emulators that allow the player to play the game on their home computer as if it were the original – the hardware most computers have by default means the game runs as well as the emulator does.

You can see which companies understood the appeal of the game they were porting, in the sense that the companies who went out of their way to make a good version of a simple, violent videogame are still mostly competitive today. With the exception of Nintendo (and its first chopped-up version of the game) and Atari (and its functional multiplayer version), gaming companies that pushed DOOM to the side ended up pushed aside themselves.

Pong

Elizabeth Technology May 11, 2023

Pong is one of the earliest video arcade-style games, originally released in 1972 by Atari – it was actually their first game. The game was based on another tennis game found on a household console, the Odyssey, which was manufactured by their competitor Magnavox. Atari’s version was much more successful, and laid the first bricks in the road for video games as we know them today.

Sue Over Anything

Atari’s new tennis game got into hot water with Magnavox because they were both tennis games. That sounds funny now, but in the era of the first video games, the courts weren’t sure how to handle it. Atari believed it could have won, but fighting Magnavox would have cost more money than the company had at the time. Instead, they settled: Atari agreed to pay 1.5 million dollars split across eight payments, and to give Magnavox full information on everything Atari was doing for the next year, public or in development. Atari, as a result, delayed some of its products.

In terms of business dealings, the original creator figured Atari could produce the game itself (instead of licensing it out – this was the first game Atari both made and kept for themselves), but Atari couldn’t get any credit or loans to actually manufacture the things, because the cabinet looked like pinball at a glance, and banks associated pinball with the Mafia at the time. Eventually Wells Fargo extended Atari credit, and the arcade cabinets went into production at a rate of ten machines a day. Many of them failed quality testing. This was still their first game! Eventually Atari got it together, and even began shipping Pong internationally thanks to their success in the States.

Home Pong, the edition of Pong that gamers could play at home, sold so many units that it became Sears’s most popular item for the 1975 holiday season, a coveted position that led to dozens upon dozens of copycats entering the market. But it was too late – Atari had won, and won decisively. Pong was popular and fun among all ages, installed in bars and arcades, or even played at home.

The Age of CRTs

Many early CRT monitors didn’t have great resolution, and it’s not like the computers inside of the consoles of the time were powerful enough to display much anyway. Still, in spite of this, the creator aspired to make the game more interesting than the simple version found in the Magnavox device.

The paddle is designed so that the ball will bounce back at different angles depending on which segment of the paddle the ball hits. The ball goes faster the longer the players trade it back and forth. The game has a surprising amount of complexity given the simplicity of the tech put into it. Pong doesn’t run on ‘code’ as we understand that word today. The home version ran on a chip, but the arcade-cabinet version that kickstarted Atari ran on a printed circuit board that used transistor-transistor logic to determine where the ball was going to go. Remember – this is just three or so years after Neil Armstrong set foot on the moon, and Atari was certainly not working with NASA’s budget or their technology department. Part of the game, the way the paddles don’t reach the top of the screen, is due to those circuits. It’s a built-in bug, a flaw that the creator let slide because it made the game harder. Today, making a Pong game is a popular beginner’s exercise in coding languages like Python, done on machines dozens of times more powerful than the original.
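To make those two rules concrete, here’s a minimal sketch of the bounce in Python – the kind of thing that beginner’s exercise might include. The angle limit, speed multiplier, and function names are illustrative assumptions on my part, not details of the original board, which got similar behavior out of that transistor-transistor logic rather than trigonometry.

```python
import math

# Two of Pong's signature rules: the rebound angle depends on where the ball
# strikes the paddle, and the ball speeds up with every return.

MAX_BOUNCE_ANGLE = math.radians(60)  # steepest rebound, at the paddle's edge
SPEED_UP = 1.05                      # 5% faster after every hit

def bounce(ball_y, paddle_y, paddle_height, speed, moving_right):
    """Return the new (vx, vy, speed) after the ball hits a paddle."""
    # -1.0 at the top edge of the paddle, 0.0 at the center, +1.0 at the bottom.
    offset = (ball_y - paddle_y) / (paddle_height / 2)
    offset = max(-1.0, min(1.0, offset))

    angle = offset * MAX_BOUNCE_ANGLE
    speed *= SPEED_UP

    direction = 1 if moving_right else -1
    vx = direction * speed * math.cos(angle)
    vy = speed * math.sin(angle)
    return vx, vy, speed

# Example: a ball near the bottom of the left paddle rebounds steeply downward.
print(bounce(ball_y=120, paddle_y=100, paddle_height=60, speed=4.0, moving_right=True))
```

A real clone would fold this into the game loop; here it’s just the bounce rule in isolation.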

Truly, Pong was a pioneer.

Assigning Macros

Elizabeth Technology April 25, 2023

If you’re getting sick of having to, say, embolden and italicize words in your program over and over, have no fear – you can reduce the number of steps you have to take to do that (and many other tasks) using macros!

How To Make a Macro

The process is simple! To add a macro to a button on your mouse for use across the computer, follow these steps as listed by Microsoft (this document has pictures): https://support.microsoft.com/en-us/topic/how-do-i-create-macros-bd0f29dc-5b89-3616-c3bf-ddeeb04da2fb

To do so in Word, here: https://support.microsoft.com/en-us/office/create-or-run-a-macro-c6b99036-905c-49a6-818a-dfb98b7c3c9c

And Excel, here: https://support.microsoft.com/en-us/office/quick-start-create-a-macro-741130ca-080d-49f5-9471-1e5fb3d581a8

As with anything you do that could change the functionality of a button or mouse click, be very careful when assigning buttons certain actions! You don’t want to remove your ability to do something important (like right-clicking) by adding a macro that closes Word every time you try to paste something without using the keyboard.

Macros as a Malicious Entity

Word and Excel documents can come with macros designed to run as soon as the document is opened, and not every macro is harmless. Some do things like making hundreds of new documents, some can corrupt your drive, and many try to spread to the other documents on the computer when they’re opened. This is why recent editions of Microsoft Office products warn you that you shouldn’t open a document outside of Protected View unless you trust its source. An ordinary-looking .XLSM document can completely brick your hard drive if it comes with the macros to do it!

This is also why you should always verify the sender of an attachment before you open it, even a .pdf. Malicious attachments using macros can steal the contents of the target’s email address book and send those addresses malicious emails too, continuing the cycle and spreading the document until it gets somewhere with valuable information. An early version of this, a macro virus called “Melissa”, would bait users into opening the document in Word, and then hijack their Outlook to send its bait email to the first fifty contacts in the victim’s address book, as the victim (read more at the FBI’s site: https://www.fbi.gov/news/stories/melissa-virus-20th-anniversary-032519). Melissa itself may be obsolete, but the technique sure isn’t.

Worse, because the macro runs inside the application, it’s already compatible with anything that runs that application. Mac is not spared this time. A malicious macro can open hundreds of garbage Word docs on a Mac, too!

Maximalist Mouse – What Else Can You Use It For?

Elizabeth Technology April 20, 2023

You can bind keys on your keyboard, but you can also bind those extra keys on a gaming mouse, if you dare.

Gaming mice are designed with games that use hotbars in mind. A hotbar usually refers to the number keys across the top of the keyboard, sometimes including the F keys as well. Within the game, you can tie specific usable items to those number keys, and simply hit the right key in the heat of battle to use the item. However, keyboards meant for gaming usually have bigger keys than ones attached to laptops or designed for travel, and sometimes it’s difficult to use the hotbar while your character is still moving. If you need a health potion for your character but can’t contort your hand to hit the right hotbar key, a gaming mouse with those hotbar bindings instead can save the day!

How to Bind Keys Elsewhere

Gaming mice are designed for games, and many games that expect a gaming mouse will let you go into the settings and manually change the keys you need to press for certain actions, whether that’s to other keys on the keyboard or to the buttons on your gaming mouse. While most mice have two or three buttons (at least, mice designed for Windows), the sky is the limit!

Be careful doing this – you don’t want to override the primary function of the left or right click buttons, just the ones that shouldn’t already have another function attached.

1) Click Start, and then click Control Panel.

2) Double-click Mouse.

3) Click the Buttons tab.

4) Under Button Assignment, click the box for a button to which you want to assign a function, and then click the function that you want to assign to that button.

5) Repeat this step for each button to which you want to assign a function.

6) Click Apply, and then click OK.

7) Close Control Panel.

That’s it! Your buttons should be working.