AI Advertising and Glasgow Wonka

It’s official – even small events are using AI for their ads, with mixed results. The Glasgow Wonka event is already going down in history as a Fyre Fest or DashCon-type fiasco, and just like its predecessors, its deceptive marketing drew in crowds it wasn’t prepared to serve.

The difference this time is that the organizers used AI-generated pictures to fabricate what the warehouse would supposedly look like when parents arrived (alongside an AI-generated script for the character actors). Odds are the person in charge had no illusions about being able to match the pictures they used.

Advertising

Up to this point, a lot of image-generating AI has been used to make pictures for the sake of having the picture: memes, hypothetical cosplays, theoretical movie costuming ideas, content one wishes one had the skill to make, cartoon characters drawn in a different style, et cetera. Some people used it for ads, of course, but because all of the big generators are unable to spell the words in the pictures they produce, the person making the ad still needed some editing and typesetting knowledge to put together something truly professional-looking. As such, it wasn’t really good for entire ads, although it was plenty for generating filler images for slideshows, or cute critters alongside human-written text for the petting zoo at the fair.

Now, the image generators are getting good enough to produce work that’s passable at a glance, if sometimes confusing and still riddled with spelling errors. The Glasgow Wonka event used a number of AI-generated illustration-style pictures to evoke the magic of Wonka, alongside a handful of pictures of what the inside of the warehouse was supposed to look like. One particular image, an upward shot of a walkway surrounded by fake grass and a lollipop forest, looks realistic enough (as in, “a company could set this up with plastic props, and people could walk through it without destroying it”) that if I didn’t know the image was fake and I was simply scrolling, I’d be convinced it was real. The total lack of watermarking makes it far too easy to believe these promotional pictures were taken at the actual event.

Whose Fault Is It?

The Glasgow event was lame inside. The actors were handed a borderline incomprehensible script that added a villain living inside the walls of the chocolate factory, possibly one a bit too scary for children, and at least one of those actors reports they still haven’t been paid. There was a single bouncy castle, a couple of decorative tapestries reusing the same pictures as the generated advertising, and the event was incredibly stingy with refreshments. I can guarantee that, at that price, the attendees had no reason to suspect it would be quite so lame. Who would have the audacity to charge that much and advertise with fake pictures? The pictures were realistic, for a warehouse event, after all. That’s the problem: this tool is too powerful, and it’s already being used to lie. The image generators are capable of spitting out photorealism now. Some online critics are blaming the parents who attended for not picking up on the AI images before bringing their kids, but I don’t think it’s actually their fault! A fake picture of an event is still a fake picture of an event, and the quality of the advertising pictures was borderline stock-image, not the mutated buildings and multi-pronged hands of AI generation past.

And who would question the pictures of what the inside of the warehouse was supposed to look like? Cartoon illustrations of the attractions riddled with nonsensical spelling errors are one thing; photorealistic shots of “the inside” are a totally different matter, and if anything, they shore up the rest of the advertising. Those shots are meant to be proof that the attractions actually exist. The completely believable photos turn this from a case of DashCon-style overadvertising into genuine deception. Do you wonder whether a restaurant’s website cover photo will look the same when you arrive at the physical location? Or whether a hotel room will look like what was advertised on your travel app? Probably not! The average consumer takes that kind of image at face value, because the average consumer is not used to being shown such blatant lies!

So whose fault is it? The person who misled them, or the people who were misled? We are past the point where AI looks obviously fake and bad. It can hardly spell anything right, but it does such a good job of producing picture-perfect fakes that the average person can no longer rely on their own instincts to spot them. Instead, a bigger focus needs to be put on making simulated images detectable to the average person: if the warehouse photos had carried some sort of watermark indicating they were made in a picture generator, it might have tipped some of the attendees off that something was fishy.
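
To make that concrete, here is a minimal sketch of what even a crude, visible disclosure label could look like, written in Python with the Pillow imaging library. This is purely illustrative: the file names, the label text, and the styling are hypothetical assumptions, and real disclosure schemes (invisible watermarks, provenance metadata baked in by the generator itself) are considerably more involved than stamping text onto a picture.

```python
# A minimal sketch of a visible "AI-generated" label, using Pillow.
# The file names and label text below are assumptions for illustration,
# not anything the generators or the Glasgow event actually used.
from PIL import Image, ImageDraw

def stamp_ai_label(in_path: str, out_path: str,
                   label: str = "AI-GENERATED IMAGE") -> None:
    """Overlay a semi-transparent label strip along the bottom of an image."""
    img = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Dark translucent strip across the bottom tenth of the image,
    # then the disclosure text on top of it.
    strip_top = int(img.height * 0.9)
    draw.rectangle([(0, strip_top), (img.width, img.height)],
                   fill=(0, 0, 0, 160))
    draw.text((10, strip_top + 10), label, fill=(255, 255, 255, 255))

    Image.alpha_composite(img, overlay).convert("RGB").save(out_path)

stamp_ai_label("warehouse_promo.png", "warehouse_promo_labeled.png")
```

Even something this simple changes the pitch: a parent scrolling past a labeled warehouse shot knows they are looking at a concept image, not a photo of the venue.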