X, formerly Twitter, joins a rash of businesses demanding the right to feed users’ works into the training engines of large language models (LLMs), colloquially known as ‘AI’. X’s policy asks more of the end user than similar policies introduced by other companies around the same time (Adobe, Facebook, and Microsoft among them), but it’s far from surprising. Every company wants its own proprietary LLM. Demand for AI is only growing, so if your business isn’t adding tools, you’re behind. Right?
Or is that really true? As for who’s actually demanding it, the answers are mixed. Do customers want this? Sure, if someone is willing to pay for Copilot or another premium access program, they’ll use it; they wouldn’t have bought it otherwise. The free tools are the real source of contention, especially on social media platforms: a lot of people feel that, over the course of a week, they woke up and logged in to a new popup demanding they use some tool they never asked for. Maybe one that could change their posting habits if they used it to rework posts before they went live. But, again, do they want to? Is this why they signed up for whatever social media platform they’re using? To have their writing habits scrutinized and sanitized by an AI assistant? The demand isn’t all coming from the consumer. The entire point of social media is to connect, and if you’re not connecting because something else is speaking for you, then what are you even doing there? Posting the half-hearted jokes or technically correct summaries the platform’s AI assist hands you? Why not have the AI scroll and like images at that point, too?
In theory, supplying better tools for ‘free’ (read: for an indirect cost) makes customers more likely to pick your platform over competitors’. In practice, cluttering the screen with even more options, especially ones that don’t (yet) improve the end user’s experience, bothers as many people as it pleases. And that’s assuming there was space to clutter! Sometimes, when there’s no room in the UI, businesses remove tools customers had already grown used to: Discord removed the ‘send’ button in favor of a button reserved for paid users, which understandably annoyed a lot of people. That’s before accounting for hardcore users’ natural resistance to change. Lately, a number of smaller, better-managed platforms with less demanding terms of service have sprung up after Facebook, Discord, and Twitter (now X) slowly became less and less usable, purely off the back of changes to their TOS or to the functions of the platforms themselves.
So why? Why shove AI in as an opt-out feature instead of an opt-in?
One possibility is that the extra data collected for the LLM also powers better-targeted advertising, and the utility of bots trained on real people for the purpose of tricking real people shouldn’t be underestimated. Twitter (now X) accounts hooked up to LLMs for engagement farming or flame-warring over politics also grew in popularity last year, and while not all of them were flawless enough to go undetected, a sizable number were human enough to pass the Turing test at a glance.
Another is that the investment a company makes in AI makes it very difficult to back out once the feature is added, a.k.a. the sunk-cost fallacy. Sometimes customers really do just need time to warm up to an unexpected change, so a company is never fully sure whether it misread demand until it’s too late and other things have already been built around the feature.
LLMs cost a lot of money to train and then implement in a service. Even if it’s only momentum driving the movement forward rather than demand, the alternative is ditching the feature and ‘losing’ the money spent getting it going. If they stay the course and they’re right, they’re on track; if they ditch and they were wrong to ditch, they’re seriously behind. Every product has the potential for this problem, and there isn’t a great solution in a world where consumers are pitted against hundred-million-dollar companies. Before it was AI, it was NFT compatibility, and before that, something else. The cycle continues as long as there’s hype for a product, and as long as people see something as ambiguous as ‘the future’ in the making, they’ll keep hyping it.