Discord, after receiving much pushback from its userbase, has decided to delay the age verification rollout it had planned for spring of this year.
The plan was to tie a verified age to each profile via a facial scan or a government ID. Ideally, this would keep minors away from predatory adults and away from content they aren’t legally allowed to see, like pornography or extremely graphic violence, which on the surface seems like a perfectly reasonable and actionable goal. However, there are concerns. Users in other countries (notably the U.K. and Australia) were already forced into verification by their governments, and presumably Discord realized the process handed it a lot of tasty, tasty data while also potentially limiting its liability.
So why is this a bad idea?
Again, on the surface, protecting kids is a worthwhile goal. It might even be achievable with the right online culture, the right safeguards, and the right moderators! But why does almost everyone actually using the service hate this, if it would theoretically be a good thing? Firstly, Discord’s poor record with breaches is an immediate concern when it’s asking for personally identifying information. Giving up your name and birth date is one thing, but in combination with a zip code, almost anyone can be narrowed down to a few possible matches. Discord already knows where you are if it has your credit card, and while you can cancel a stolen card, you can’t change your zip code so easily. Suppose Discord has another breach, someone buys your data off the list, and then forwards something you posted, something you’d normally never worry about your work or your family seeing, because now they can figure out who you are. Will that blow things up for you? Is your family not understanding about your art? Would simply venting that ‘customers suck at reading’ turn into a written warning at your work? That’s not even touching the nastier possible uses of your name and address: opening a credit card, ordering services in your name and then dipping on the bill, et cetera. All of this is even easier if you hand over your driver’s license. Discord having this information at all puts it at risk!
Secondly, it seems Discord’s original intention was to partner with a company that also supplies services to Palantir, which is looking to build profiles on essentially every person in the U.S. Facebook has already been building profiles toward the comparatively harmless end of advertising, and people are already uncomfortable that random companies can simply buy a profile of them as a potential customer and have a ton of data at their fingertips to manipulate them with. The risk of letting a government agency do the same is that the ends are not limited to trying to sell you something. Even something as harmless as criticizing the Senate for letting a Daylight Saving Time bill die might land you on a naughty list, and people on government naughty lists are not having a great time right now.
Thirdly, the odds are poor that this actually does anything for the safety of kids.
Let’s look at a pretty common scenario: parents let their child use social media (because if they don’t, their kid is not going to fit in, and a kid who doesn’t fit in faces a whole cascade of negative downstream effects that start with an increased risk of depression and get worse from there), and Discord is on the approved list.
It’s a bit of a confusing platform: the chronological feed and editable messages can make things seem shifty, because a bad message is easy to hide. But Discord is so popular right now that if your kid wants to learn how to engineer things, or code, or do fiber arts, or just keep in touch with friends who ended up in different classrooms when the grades changed, Discord is usually the ideal choice. It’s easy to set up a server, moderation is simple for the person who set it up, and it’s free.
If they’re really worried – like, really worried – they can join the platform too and check in every once in a while; an all-ages-safe server allows for this. If they’d rather let their kid off-leash with a bit more freedom, they can skip joining Discord and just check over their kid’s shoulder every once in a while instead. A parent taking an interest in what their child is doing used to be the default; kids in the first generation of internet users were exposed to horrors by parents who didn’t know or didn’t care, but those kids grew up into adults who should know better.
However, if the parents have gone completely hands-off, or someone notices something disturbing in a channel, Discord itself must step in and moderate at the admin level. In the past, Discord was willing to let communities self-moderate with volunteers, but it cannot guarantee beyond a shadow of a doubt that a volunteer moderator will catch every instance of bad or predatory behavior in front of children. There’s increasing pressure, both commercial and political, to keep children from participating online, to the degree that Australia has effectively banned children under 16 from joining even general content services like YouTube and family-oriented social media platforms like Facebook. They can’t sign in or leave comments; YouTube is still functional used this way, but apps like TikTok and Facebook are not. Tech doesn’t like this for obvious reasons, but when faced with an app getting investigated or outright banned in a given country, companies have to do something.
Discord’s solution is to “verify accounts”: unverified accounts behave like Teen accounts, which were already barred from joining 18+ channels. This prevents explicit drawn pornography from being posted in channels where kids under 18 can see it. In theory it also prevents dirty talk between parties of unverifiable ages, to the extent filters can catch it. But the one-off incidents, the requests to move to other platforms, and so on are not being monitored, and those are much more pressing problems. Discord isn’t invested in teaching basic internet safety; it’s invested in saving its own skin by kiddie-fying most of the platform and requiring sensitive information to access the parts that aren’t being turned into All-Ages-Only servers. And this does not necessarily root out all bad behavior. Your kids may be radicalized by other kids who are parroting unmoderated content that stays accessible because it’s not sexually explicit, just racist, or homophobic, or whatever else. This widespread focus on 18+ art specifically, when other dangers are also present, is goofy. Kids on TikTok are also very good with euphemisms; an adult pretending to be 14 and dirty-talking another 14-year-old may evade these filters simply by typing in a way the filters don’t recognize!
Then what? We’re trapped in a loop. Kids are a little dumb sometimes, yes. But if they’re given a good briefing on safety basics by their parents ahead of time, kids can interact safely with public internet spaces the way a twelve-year-old can safely buy a carton of milk at the grocery store, or an eight-year-old can cross the street to get to the park. If they know what appropriate boundaries look like and what to do when those boundaries are crossed, they’ll be safer than if they were shunted into a free-range pen with a “No Wolves Allowed” sign on the gate. That goes double when the pen is more focused on keeping “non-wolves” in than on keeping “wolves” out, which is how age-gating works on many sites right now: adults who won’t submit to ID checks are shoved into all-ages or explicitly kid-friendly servers. Platforms don’t focus on the education angle because, in my opinion, they know they can eventually mandate non-anonymity if they just keep saying “the last fix we did didn’t prevent enough child exposure” (because it won’t) and require stricter information from the user each time. They’ll never be lying, they’ll never solve the problem they claim to be solving, and they’ll gather all the advertising data they want along the way.

