In the early days of Google’s AI answers, one question and its answer made the rounds on X (formerly Twitter) before Google pulled it down. To paraphrase, the question was, ‘how do I get cheese to stick better to my pizza?’ and Google’s AI answered, ‘you can add up to ¼ cup of glue to make your cheese stick better’. That’s incredible. The answer was traced back to a Reddit user (this Business Insider article traces it: https://www.businessinsider.com/google-ai-glue-pizza-i-tried-it-2024-5), and Google removed it almost immediately.
That one is easy and obvious; it’s so stupid that no reasonable adult would think the advice makes sense. Similarly, mushrooms have such a reputation for being dangerous that Google AI telling a user ‘you can taste mushrooms to ID them’ is also pretty easy to dismiss, as long as you’re not a kid who doesn’t know better. But what happens when it’s not stupid and easy? What happens when the end user doesn’t recognize that an AI summary has been pulled from a satire website? Does that get corrected? Is Google preemptively working on these things so they don’t pop up?
We don’t know, do we?
Google seems to be working backwards toward correctness instead of aiming for it from the start. Bad results have to be pulled. Inaccurate answers must be rated poorly by the end user. The end user must recognize the flaws in an answer in order to rate it poorly. But the user went to Google for that answer in the first place. Generally speaking, unless people are looking for sources, they’re going to Google because they don’t already know the answer. People don’t look up what they already know! Why would they?
The largest issues with AI are not the AI itself but its utilization: it’s used where it’s not ready, and it causes problems the entire time it’s there. Google’s Gemini was launched opt-out by default. Simply making the thing opt-in would have saved effort and ridicule: it would have made obvious that the tool was still being developed, rather than treating the public launch as part of its training.