The internet is vast, and it is full of lies. It's full of out-of-character fan scripts and fanart, it's positively riddled with every kind of theory, from physics to conspiracy to show endings to gossip, and it's possible that more of it is blatant untruth than truth, where that distinction even applies.
It is very unfortunate that all of this is being turned into fuel for LLMs, since almost anything public-facing can be scraped and added to training data. It is very, very unfortunate that there's almost no practical way to prune out problem content; instead, LLMs must be trained and then fitted with guardrails, and even then researchers struggle to understand the results they get, because the machine is practically a black box.
The way the web was built, everyone has access to more or less every kind of content they can think to search for, especially adult content. Recently, legislation in the US has attempted to limit minors' ability to access 18+ content, which they can only reach because 18+ websites were never built with ID checks designed into the space, the way 18+ video rental stores are IRL. This is a serious problem: kids do become traumatized by early exposure, and they are naturally curious. The web has reached a breaking point where either A) everything must be made family-friendly, no matter what, to ensure this doesn't happen, or B) 18+ spaces must require an ID system, which is perfect fuel for a nightmare scenario where a single breach could mean endless blackmail material on basically anyone, forever. Most people want neither.
And, charging ahead as they always are, come LLMs, specifically the character-impersonation ones, which seemingly disregarded all of this until very recently, when it started to look like AI companies might get sued for emotional damages caused to underage users.
It may not seem strange at first that most character AIs aim to virtually date and flirt with the end user, until you consider their stated purposes. While some are advertised as virtual boyfriends or girlfriends, plenty of others simply say "talk to this character you like!", and the conversation turns into flirting anyway, even when the user is trying to establish a familial relationship with the bot, which many claim is possible.
For kids, this is confusing! Children are notoriously easy to mislead. Legally, children aren't allowed to enter into certain kinds of contracts, because they don't have the mental capacity to understand every possible consequence. While some things, like work contracts, are negotiable for minors between 14 and 18, other things are not, particularly dangerous or high-skill things like buying nicotine products or fireworks, or driving a car. Minors are also legally forbidden from dating adults, with the adult facing punishment for it; the very few exceptions are close-in-age allowances of a few years, mostly designed to protect couples where one partner becomes an adult while the other remains underage. The principle is simple: adults can outthink children.
Those character AIs are largely built off the writings of adults. By definition, anything that can pass the Turing Test is good enough to fool an adult. Putting these overly flirtatious, aggressive AIs in the hands of children has essentially invented a new kind of grooming out of thin air. Only recently have guardrails been put up in an effort to curb the amount of salacious content kids can ask for in the app, and that change has unfortunately already come at the cost of real children suffering the consequences. One kid, who became emotionally and romantically dependent on a Character.AI character modeled on Daenerys "Khaleesi" Targaryen from Game of Thrones, killed himself after asking Khaleesi if she wanted him to "join her". Khaleesi, who was an AI and didn't understand the implications of the question, agreed that she would like that. He's not the first, and he likely won't be the last.
Is Character.AI actually okay to give to kids? Given how much trouble developers have wrangling LLM outputs, it may never be truly safe. It's too smart, too easily able to cause emotional hurt, because it's far too good at mimicking a human response. And when another human does it, at least there exists some outside person who could have understood the question as a cry for help.