Don’t Trust the AI Results Blindly, Please

Should you wash a stainless-steel-bodied car with lemon juice? Thanks to a campaign by Tumblr users, Google’s AI briefly reported that lemon juice, combined with salt and water, made a great cleaner. (You shouldn’t: salt in particular promotes pitting corrosion on stainless steel.) The fact of the matter is that these AI answers are built on crowdsourced web content, and crowdsourcing material without any moderation can be dangerous.

Consider the scale: Tumblr is just one platform. Raving about lemon juice as a cleaner on a single site was enough to make the “tip” pop up briefly when Googling ‘cybertruck lemon juice’. How else could this be used maliciously?

It’s a serious problem caused by scraping information from everywhere instead of identifying trusted sources. Worse, by building a system that is always fighting to stay up to date, Google has built a system that can be exploited pretty easily. If enough made-up websites report that a credit card company’s number is X when it’s actually Y, the AI will start promoting the incorrect number. That’s a boon for scammers: the average person may no longer call the number in a phishing email, but they will Google it and assume Google’s ‘official’ AI answer is correct!
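To see why that attack is so cheap, here’s a toy sketch (not Google’s actual pipeline, and the phone numbers are made up): an aggregator that naively trusts whichever claim appears most often across scraped pages gets poisoned the moment a campaign spins up enough fake pages repeating the wrong answer.

```python
# Toy illustration only: a naive "most frequent claim wins" aggregator,
# standing in for any system that trusts unmoderated crowdsourced data.
from collections import Counter

def pick_answer(scraped_claims):
    """Return the claim that appears most often across scraped pages."""
    counts = Counter(scraped_claims)
    answer, _ = counts.most_common(1)[0]
    return answer

# A handful of legitimate pages list the real support number...
legit = ["call 1-800-REAL-NUM"] * 5
# ...but a coordinated campaign publishes more pages listing a fake one.
poisoned = ["call 1-900-SCAM-NUM"] * 20

print(pick_answer(legit))             # call 1-800-REAL-NUM
print(pick_answer(legit + poisoned))  # call 1-900-SCAM-NUM
```

Nothing about the fake pages has to be convincing to a human; they only have to outnumber the honest ones on a topic nobody else is writing about.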

Sometimes the people behind these campaigns aren’t even doing it for monetary gain: the lemon juice campaign happened out of spite. What happens when one of the many, many doomer groups online decides killing or maiming people who don’t know better is funny? It’s now easy to start something horrid, because Google’s AI is apparently pretty easy to manipulate on any subject that doesn’t already have a ton of data behind it, the same tactic long used to push disinformation into poorly filled-out search keyword results pages. Plenty of sources warn against mixing household cleaners because bleach and ammonia produce toxic chloramine gas (commonly, if inaccurately, called mustard gas), but do enough of them say the same about chloroform, another chemical that gives off poisonous fumes? Enough to outweigh an online campaign pushing it as the perfect cleaner? And once Google fixes that, what will the next campaign be? The Google AI team has to fix these problems piecemeal, one at a time, because the genie is out of the bottle.

There is no perfect solution to the problem Google has created. Even if Google had excluded forums and social media sites from its AI data pool, it would still have run into the problem of fake business pages. And there’s a lot of valuable information on sites like Reddit that isn’t officially recorded anywhere else, mixed in with jokes and sarcasm; an AI that can’t detect sarcasm can’t safely sort one from the other. So why the rush to ship a product that keeps revealing how underbaked it is? Was this worth it?