The single biggest problem with AI right now is people abusing the shortcuts it offers. Much like real-life shortcuts, AI shortcuts involve wading through tall grass, walking across private property, and trying not to pick up ticks along the way. We’re seeing a wave of AI-generated literature hit the market, and it’s becoming concerning.
The first wave was children’s books. Children’s books? Whatever. Children’s books are a popular entry point for people looking to get into self-publishing, which comes with pros and cons. You may find a real gem of a book for a child going through something specific (many large publishing houses are reluctant to publish books like that because they may not sell well), but you may also find plenty of books written by people who think kids don’t care about quality: they whip up something quick and generic, buy some illustrations from an artist online, format for print-on-demand through services like Ingram Spark (or sometimes Amazon), and collect a teeny bit of money from passive sales.
For the people actively trying to make quality books for the love of the craft, this is just more background noise they’ll have to compete against; for the people looking to make a quick buck off a book, generative content has streamlined the process. AI removes steps at the expense of quality (and copyrightability).
The messaging an AI comes up with may be unsuitable for certain lessons, but so are many kids’ books: you’re expected to use them as a teaching guide, read in conjunction with other books, not as the single wellspring from which you teach a child morals. “The Giving Tree” and “Little Red Hen Bakes A Cake” teach opposite messages about sharing, and both of those were written by humans. You’re also generally expected to know what you’re getting into, meaning you should read the book yourself before you read it to a child. That alone filters out most of the issues you could run into with an AI-made book (bad message, bad or disturbing art, etc.). Yes, AI kids’ books are now all over the place, but kids’ books were all over the place before that, too.
When we get into serious guides, however, there’s an issue: any parent, guardian, or babysitter knows you can’t tell kids “Actually, Hitting Is Always Okay!”, but those same people wouldn’t know not to trust a fungus guidebook with bad advice in it. The problem moves past common sense into real expertise.
Mushrooms are delicious. Foraging for food is fun. Many people want to try fancy mushrooms that either can’t be farmed or can’t be transported, but lack the knowledge needed to tell a chicken-of-the-woods from a witch’s butter mushroom. Worse, if they think they have that knowledge and grab the wrong fungus by mistake, they can put themselves into liver failure and die.
The bad news is that almost every edible mushroom has an inedible-to-poisonous lookalike: even the grocery store staple, the white button mushroom, has wild doppelgangers that are incredibly poisonous. The good news is that a handful are tasty and have lookalikes that are more like cousins than twins, appearance-wise. Morels and puffballs, for instance, have lookalikes with obvious giveaways, so they’re generally pretty safe to forage if the forager has good instructions. Speaking of which, all foraging instructions need to be tailored to the region the reader will be foraging in, and need to account for invasives like the “angel of death” mushroom, which, as you may guess, is a poisonous lookalike that snuck up on unsuspecting foragers expecting a native edible.
A growing awareness of just how dangerous foraging can be, combined with a desire for unusual foods straight from the Earth with limited processing, is creating a huge demand for guides.
Thus sprang up a number of AI-generated fungus foraging guides, each with a nonexistent author’s name on the front. Again, kids’ books are one thing, but a foraging guide relying on generative content programs like ChatGPT is practically an unexploded landmine for anyone unlucky enough to buy it without realizing.
As reported in The Guardian (linked below), sometimes these guides will name an edible mushroom but give advice inappropriate for the reader’s region, even though the book is supposedly up-to-date.
Sometimes they advise tasting the mushroom to identify it, which is bad advice for two reasons: one, poisonous mushrooms don’t reliably taste bad or bitter, and two, with a handful of deadly lookalikes, as little as a single bite can kill you even if you make it to the hospital. Tasting a mushroom, deciding it tastes fine, and then dying later is exactly what people are trying to avoid when they buy mushroom guides. Taste is not one of the metrics by which expert mycologists determine species. ChatGPT thinks it is. Putting expert methodology and machine-generated guesswork on the same footing, without even a disclaimer, is a serious problem.
The worst that can happen when an AI writes a bad kids’ book or digital recipe is a few wasted dollars and some wasted time. An AI-generated field guide to mushrooms with no human supervision over the writing is borderline criminal negligence. The reader has no way of knowing the instructions are bad or inaccurate, because ChatGPT’s main goal, once again, is not to be correct but to sound human.