AI is a strange thing. Thanks to the marketing efforts of the big tech companies pushing ‘AI’ on the public, ‘AI’ as a publicly understood concept has changed a lot from what it was ten years ago. AI stands for ‘Artificial Intelligence’, and it used to refer to software that used algorithmic programming to complete a task. Grand ideas of what AI is and what it can be are everywhere in pop culture, and pop culture ideas influence the decisions industries make all the time. Siri, Alexa, and Cortana from Halo are conceptual cousins, even if one of them is purely fiction.
Now, when someone says AI, they could be referring to a number of things: their digital assistant program, a video game enemy, something used to spot potatoes with soft spots before they make it to the shipping container, or something like ChatGPT.
There is AI. Then there is analytical AI, generative AI, adaptive AI, et cetera! The term has spread so far that basically any program that has ever been asked to make a choice could be considered an AI. ‘Algorithm’ has undergone a similar loosening – when you search for something, the algorithm decides what you see, so it’s very easy to assign malice to the company behind it when it stops showing you videos by a creator or content for a product you enjoy. The spectre of the algorithm changes the way people talk on TikTok, where the dominant theory holds that the algorithm frowns on explicit terms, so it’s safer to use goofy euphemisms like ‘unalive’ instead of ‘kill’.
Others argue there’s no way TikTok could identify words like that, and that if the platform really wanted those ideas gone, it would push down the euphemisms too. What’s the truth? Anecdotally, creators report that uncensored videos about mature topics do a little worse than videos where the words are censored – but whether that’s because viewers linger on censored videos a little longer trying to puzzle out what the poster means, or because the algorithm actively discourages the word ‘murder’ from appearing, nobody outside the company can see into the algorithm, so the theory may as well be true, right?
For this article, most of what I’m touching on is generative AI – programs like Sora, ChatGPT, Claude, and so on. These tools are not the knight in shining armor for a business that their big tech investors want them to be, but they’re also not the totally useless trash that detractors say they are. The reality is muddier than that, and with a few tips on how to use these things responsibly, you can avoid costly mistakes like the one that nearly got a lawyer disbarred (https://www.spellbook.legal/learn/lawyer-who-used-chatgpt), the one that put a man in the hospital with bromide poisoning (https://www.livescience.com/health/food-diet/man-sought-diet-advice-from-chatgpt-and-ended-up-with-bromide-intoxication), or the one that deleted an entire company database during a code freeze (https://www.pcmag.com/news/vibe-coding-fiasco-replite-ai-agent-goes-rogue-deletes-company-database).
What Are You Doing With It?
First and foremost, the task at hand might not need AI. It’s a common problem right now: ChatGPT may eventually hit a wall where its competence stops climbing, and even at its best, its error rate may remain worse than a person’s, requiring someone to check in each time it’s used and verify the facts of whatever it has written.
The AI’s habit of staying vague in order to stay technically right is another consideration for your business, because communication – email especially – often needs to be precise. For example: if I say Julius Caesar died over 100 years ago, I am technically correct, but I’m painting an inaccurate picture. Similarly, if you ask an AI to describe a puffball mushroom so you can tell whether the large white mushroom in your front yard is edible, it may not come back with enough detail to keep you from eating a young Amanita, a common and potentially deadly lookalike. That kind of vagueness can send you to the hospital, and it creates nightmares for lawyers and accountants when the AI was technically correct but so vague it doesn’t matter.
The biggest problem is that the AI does not weigh truthfulness; it weighs frequency. By design, it gravitates toward the average of all the content it has consumed, and while tweaks might make it sound more competent, it will always be fighting its own design.
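To make that ‘average of the corpus’ idea concrete, here is a deliberately toy sketch in Python. The word counts are invented and this is not how any real model stores its training data, but it captures the core issue: the statistically common continuation wins, whether or not it is true.

```python
from collections import Counter

# Invented corpus statistics: casual foraging posts say "edible" far more
# often than field guides say "deadly". Frequency, not truth, is the signal.
continuations = Counter({
    "edible": 900,
    "deadly": 100,
})

def most_likely_next_word(counts: Counter) -> str:
    """Greedy decoding: return whichever continuation appeared most often."""
    word, _ = counts.most_common(1)[0]
    return word

prompt = "The large white mushroom in your yard is probably"
print(prompt, most_likely_next_word(continuations))
# Prints "... edible" – the average of the corpus, not a verified fact.
```

No amount of polish changes what that function optimizes for; it can only ever tell you what the corpus tends to say.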
Does an AI Application Here Make Sense?
This is before even taking into account whether a tool already exists, or whether one has to be custom-fit to your purpose. Right now, getting your own AI to do a task no existing tool can do is a lot like getting a custom machine made for a shop. To use a metaphor: there are molds to make bicycle tires, and there are molds to make skateboard wheels – there were no molds to make Hoverboard tires until the company that owns Hoverboard commissioned them, and it probably cost a six-digit sum between planning and the actual creation, which itself required labor and custom tooling.
If Hoverboard couldn’t retrofit a bicycle tire mold to get the result they wanted, getting this machine made was their only choice!
AI is in a similar position right now. Companies that need specific accounting tasks done (using actuarial tables to disburse claims, taking in invoicing forms, pursuing accounts receivable that are overdue by X weeks, reading and sorting forms, et cetera) are finding that the generic tools designed to help employees draft emails or write copy get stuck and make stupid mistakes – mistakes that interns fresh out of high school stop making by their second week on the job. So these companies need their own AI for their tasks, but getting that is akin to getting Hoverboard tires made! It’s a hidden expense that crops up after a company has already invested in AI and found it lacking. In the long term, how much cheaper than another employee (plus their training) is the AI? How many years of a person’s labor would it take before the investment breaks even, and will it need expensive tweaks and modifications should internal processes ever change? A rough sketch of that math is below.
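Here is a back-of-the-envelope version of that break-even question. Every figure below is invented for illustration – substitute your own vendor quotes and salary data before drawing any conclusions.

```python
# All numbers are hypothetical placeholders, not real quotes.
custom_ai_build_cost = 250_000   # one-time commission for a bespoke tool
ai_annual_cost       = 40_000    # hosting, vendor support, re-tuning
employee_annual_cost = 65_000    # salary, benefits, amortized training

def years_to_break_even(build: float, ai_yearly: float, person_yearly: float) -> float:
    """Years until the AI's total cost drops below an employee's total cost."""
    yearly_savings = person_yearly - ai_yearly
    if yearly_savings <= 0:
        return float("inf")  # the AI never pays for itself
    return build / yearly_savings

print(years_to_break_even(custom_ai_build_cost, ai_annual_cost, employee_annual_cost))
# -> 10.0 years, before any expensive tweaks when internal processes change
```

Ten years is longer than many internal processes survive unchanged, which is exactly why the tweak-and-modification question matters.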
It’s possible this all works out mathematically for your business! But these are not the questions an industry moving at a breakneck sell-sell-sell pace is going to ask for you; these are questions you need to have at the ready when the subject of AI reaches management.
Consider the Ethics – Should You Be Using AI at All?
It’s not as if having a human somewhere in the system magically makes it more ethical. Every terrible crime in history was perpetrated by humans – many of the worst events in history were bureaucratic in nature, manufactured by people who simply did not care.
What a human in a system really provides is a point where behavior can be influenced by the prospect of punishment or praise. When someone embezzles money, they lose their position, they might face jail time or probation, and depending on the size and source of the theft, they might face public shame for it.
If a machine embezzles money, can the machine go to jail? When it’s something like AI, who gets the blame for the machine’s decision? Is it management, who bought the thing, even if they didn’t know the crime was happening or even that it could do that? Is it the programmers, who genuinely did not have any idea it would react that way when placed into that situation?
We’re already in this dilemma with character AIs, which are SO agreeable and friendly that they’ve been linked to more than one death by suicide (https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots). In an abstract way, ‘the company’ seems to be at fault, but the individual engineers and company figureheads are largely still chugging along running the business! It’s a cold comfort to parents who have lost a child to suicide that whoever was responsible (bullies, a boyfriend or girlfriend who egged them on, someone who sold them drugs, et cetera) might be jailed, but it is a comfort nonetheless – and it’s one not offered to parents whose children were taken from them by a machine that dragged them down. Responsibility is decentralized across the entire company while profit is concentrated in key leadership; it’s the sort of scheme Depression-era mob bosses and steel barons could only dream of!
Say you don’t think the ethics of using a machine instead of a person will come into play in your industry. You’d still want a person double-checking the output anyway. Remember the news story about a company with open job listings and an AI filtering the applicants? Nobody ever saw a single resume, because the filter auto-rejected literally every person who applied – including the manager who finally figured out what was going on. The entire HR department got fired for that! (https://www.ibtimes.co.uk/lazy-mediocre-hr-team-fired-after-managers-own-cv-gets-auto-rejected-seconds-exposing-system-1727202)
It was an expensive and embarrassing mistake for the company, and the hunt for someone responsible in a decentralized failure like this ended with management deciding that yes, the entire department was responsible, and getting rid of all of them – adding even more expense. Because an AI tricked them into thinking it was competent, management then had to replace a team so grossly incompetent it never found time to wonder why no applications were making it through. Would a human in the loop have stopped this? Maybe! Maybe one person would have been dismissed instead of the entire department. Sure, it wasn’t an ethics problem this time, but the same things that make AI prone to ethics problems also make it a giant management headache!
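The fix here didn’t need to be sophisticated. Below is a minimal sketch of the kind of aggregate sanity check a human reviewer – or even a scheduled job – could have run against the filter’s output. The function name and the 95% threshold are invented for illustration; the point is how little code it takes to catch a filter that rejects everyone.

```python
def check_rejection_rate(decisions: list, alert_threshold: float = 0.95) -> None:
    """Flag the screening filter if it rejects nearly every applicant."""
    if not decisions:
        return
    rejected = sum(1 for d in decisions if d == "reject")
    rate = rejected / len(decisions)
    if rate >= alert_threshold:
        # In production this should page a human, not just print.
        print(f"WARNING: {rate:.0%} of {len(decisions)} applicants rejected. "
              "Have a person review the filter before trusting another batch.")

# A month of output in which the AI rejected every single application:
check_rejection_rate(["reject"] * 400)
```

None of this replaces a human reading actual resumes; it just guarantees that a 100% rejection rate gets noticed in a day instead of a quarter.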
In Summary
The real problem with AI is that it’s being sold as the ultimate, perfect solution to the problem of humans, and it simply is not. No tool is perfect, and no tool is right for every job. A really well-built hammer is still going to be horrible at driving screws! So when shopping for an AI tool, consider the points in this article, and once you’ve determined that the tool is truly needed, that it will be sufficiently overseen, and that it isn’t going to bankrupt your company – go for it!

