You’ve seen Google try and fail to answer serious questions by serving up joke answers pulled from Reddit. The generative AI we have today is brilliant with emails but struggles to separate fact from fiction. Even the better tools are just relying on better sources – DuckDuckGo’s AI assistant seems to lean on Wikipedia when it can, rather than pulling from the open web like early Gemini did.
But the tool has potential, which is the most frustrating part of any discussion around it: used correctly, and used responsibly, generative AI does have its uses.
What does irresponsible use of generative AI look like? The web is raging over that right now. Some criticize people using it to make images, because no one can prove the training data didn’t include copyrighted images, or the work of artists who never consented to its use. Many criticize companies for using it to, say, review resumes, where it failed to actually look at any of them. Mycologists don’t like Google’s attempts to identify edible mushrooms, because it can’t discern white button mushrooms from Angel of Death mushrooms (something even humans struggle with!). And coders don’t like it when generative AI randomly pulls in files from the web – a mimicry of a common practice that makes it possible for bugs to slip in, since the AI doesn’t seem to understand the purpose of doing so.
Using AI irresponsibly turns it into a hurdle for humans to jump over, a danger to their lives, and an inconvenience. It causes harm and delay. People who don’t want their art stolen have to upload it behind filters, that HR debacle wasted the time of both the company and the applicants, and the risk of AI-generated malware grows by the day. It doesn’t have to be this way – it’s this way because advertising has completely skewed the average user’s expectations for AI.
None of this applies to analytical AI, which is a totally different ballgame. Analytical AI is the kind that manages accounting and spots hidden trends in things like bruised apples or disease outbreaks. Generative AI, which has assumed the broad mantle of just “AI” thanks to tricky marketing by people who want to sell it as the singularity, is not the same thing at all. It cannot do math. It doesn’t even have the capacity to consult a calculator.
Keeping all of this in mind, how can you use generative AI responsibly?
The bad news is that until the failure rate improves, any work a GenAI does will need human eyes to review it. Remember the lawyer whose brief cited cases that didn’t exist? If he’d looked it over and checked the facts, he’d have saved himself a very big blow to his reputation. If the HR department had also been filtering resumes by hand, they’d have caught the failure. Meeting minutes must be manually reviewed, and art should be labeled as AI-generated. Yes – this adds work. But! The good news is, it’s still less work than you would have had to do without AI. Any time saved at all is a win, and reviewing is less work than writing and then also reviewing. Take the win without trying to swing it into an overwhelming victory, and be happy that at least that much is done without your input!
Want help, or guidance setting up generative AI with your organization? Contact us today!