Lensa AI is an app that uses AI combined with data scraped from pictures posted online to turn a user’s picture into a new, whimsical creation.
However, you should consider what you’re giving the app before you upload that selfie or pet pic – the app demands the rights to the photos you give it, meaning that once you upload that picture, Lensa can use it however it wants, wherever it wants. That’s actually pretty unusual as far as art-related apps go!
With other websites reliant on user photos, like DeviantArt or Instagram, the hosting company only needs permission to host and redistribute the image – as in, it needs to be able to upload your photo to your page and then show it to other people when they click on it. It doesn’t get permission to alter the picture itself or create new things out of it.
Lensa, on the other hand, is actively consuming the pictures it’s given. It needs those rights to be able to train itself. If you give a picture of your pet to Lensa, for example, your pet picture is now part of its training database and you can’t extract it. If you go on to make sellable prints of that pet image, Lensa could recreate it by accident for someone else hoping to sell the pictures Lensa makes for them, and there’s not much you could do to stop it. It’s up to the person making the request to catch accidental copyright infringement, and they may not realize they have your pet. It won’t be a one-for-one duplicate anyway – once filtered through the machine, it arguably becomes fair use, for better or worse, and your pet, with their own special patches of color and freckles and sparkly eyes or whiskers, may become just another stock image for some guy on Redbubble. You can complain that the image of your pet was stolen – to Redbubble. Nothing fed to the machine comes back out of it unscathed!
Worse still, it seems that Lensa is not actually asking permission to use huge swathes of its training database. It asks users of the app, yes… everyone else is sort of a mixed bag, because Lensa can get permission from the hosting platform and bypass asking individual artists entirely.
If someone has a distinctive, quirky, or recognizable art style, and they are cursed with a large fanbase, enough of that fanbase asking the AI to recreate that style may very well drive the artist out of art. After all, if a machine can do whatever the requestor wants, why bother going to the source for a commission? Just keep slapping the button to get art for free. Never mind what it was fed on; the machine created a new piece of art legally distinct from the original, so it doesn’t owe the artist anything for training it. It has scalped the artist’s work and is reselling it, yeah… but… free. And there’s a chance some of the work of other artists is in there too, so no single artist has the right to be outraged. As a bonus, the AI will gladly spin out ideas that the original artist wouldn’t have painted, whether for moral or legal reasons. Would that artist ever have made art of Andy Dick depicted as Saint Anthony? Well, with the power of AI, now they would!
Disrespect of Humans in the Craft
Why would you continue to produce art under those circumstances? Even if it’s ‘for the love of the craft’, the way people behave when given a machine that recreates years of skill in a few keystrokes can very easily make you fall out of love with the craft.
I watched as a Twitch streamer in the middle of creating an artwork was sent an AI’s iteration of her work, something a viewer made from a screenshot of her rough draft. The viewer had asked an AI to finish it, and then they sent that picture to the artist while she was still making the art. The sentiment contained in that action was “You can stop drawing now. The AI can take it from here.” Was that what they meant? If it wasn’t, we can’t tell! Another incident I witnessed, in a Twitter thread, involved a cute, stylized bat drawing, one the artist posted with an affectionate caption. This was her creation. But then a follower threw that picture into an AI program so they could send her back remixed images of the bat she drew. The artist was understandably insulted. The follower wasn’t another artist standing on equal footing with her and trading art – they had no art to trade. She literally couldn’t do the same back to that Twitter user because the Twitter user didn’t have any original works. They saw an idea and asked a machine to recreate it. Even a bad pencil sketch done by hand would have been leagues less weird and significantly better received.
The way people are using the AI, it’s like trying to finish a sentence only for someone else to keep interrupting with the word you’re most likely to say next. They think they’re being helpful, but the message that sends subliminally is “I want you to be done talking now”. Complaining about it, for some reason, leads to the AI’s fans saying it’s better if it talks over the people it’s learning from because it makes better sentence sounds and words good.
Pause a Minute
Aside from the failings of the machine (it still doesn’t understand hands or tangent lines), the lack of awareness it takes to pull a concept out of an artist’s hands so that a third party (the AI) can do what they want with it is really bizarre. It’s also a fantastic recipe for unearned bitterness.
To go on a bit of a tangent: writers on TV shows can’t read fan fiction (free fiction that fans write online about pre-existing shows) because they may accidentally incorporate something from a story they read. Admitting they read fan fiction at all is sometimes enough to cause legal trouble. If a fan fiction author can prove the show writer read their story – or even just that the show writer reads fan fiction and might have read their work – and the real show has an idea too similar to the fan fiction’s idea, it can end in a lawsuit and a lot of hurt feelings in the fan community. Show writers must completely abstain from that side of fan culture to avoid this happening. That way, nobody can say they took the idea instead of coming up with it themselves.
Back to the art, an artist just trying to finish a painting on a livestream now has to fend off people sending them “completed” versions because the people sending may try to claim some credit if it’s too similar to the actual finished product. This is a natural conclusion of treating AI art like it’s just like human-made art. Of course, the machine should be the one insulted if the final result is too similar, but the machine doesn’t speak or create for itself.
It’s one thing for a human to create fanart, or share art of another creator’s creation, like the bat. It’s another to get a machine to finish a piece the artist was still drawing, or use a machine to draw someone’s character instead of making their own art of it. The skill is not the issue, the sentiment is.
The End Goal
Assuming such a chaotic industry has an end goal in and of itself is giving too much agency to a mindless machine in an ocean of mindless machines, both real and metaphorical. AI creators want to make money. People want to make money off of the art the machine can make for them. Others want custom artwork without paying for it, others still want to preserve the creativity of artists who are long dead. But should they? If the artist is dead, do we need more of their art? Every good use is a double-edged sword; many of the bad uses are infringing on human artists’ good times. The entire thing is confusing and upsetting, and the people who are pro-machine are more often than not coming across as anti-human artist, even though the beast would not exist if it hadn’t been fed their work.
Look at the ArtStation boycott: the anti-AI protest image was so common on the website that machine-generated images were coming back with red crossbars where faces should have been, an artifact that could only have come from the machine scooping up pictures of the protest. At least, that’s what people thought – it turns out some of that was actually a trolling campaign by pro-AI art accounts, a joke. In theory, it takes a couple of weeks to incorporate scraped art into the final pictures, although the AI does accept and reproduce art from artists who have only recently started trending, so that timeline may not be entirely accurate. How that campaign was supposed to help their case, I have no idea, because it seemed to just confirm the worst suspicions of anti-AI users, and nobody caught on to the ‘trolling’ during the phenomenon’s five seconds of online fame. The machine is still snatching art right off the front page, however long it takes to use it, and somehow the AI creators and websites expect the artists who fuel them not to be so upset about it.
There is no end goal. These machines were initially made with the goal of dreaming – teaching a machine to appreciate art or identify a dog in an image was a message of hope for the computers we could one day build. Much like Blockchain, something that could have been great is instead being used to print money and win competitions that were intended for humans.
The Copyright Debate
The only things it can spit out are by definition recycled. Still, most of it qualifies as fair use… given the machine avoids taking too much from one particular image, which it very well might.
The music industry has been on this precipice for a while now, and yet it never crossed over into music AI making a new album in an artist’s style all by itself. This is because music copyright is much more tightly regulated than image copyright – the databases that music AIs are allowed to listen to and learn from are all free and public. The music AI creators are very aware of the copying issue (which should tell you that the art AI creators realistically should be too): the AI has a tendency to lift whole riffs and chunks of songs and put them wherever. They’ll tell you this themselves, because they know what their machines are doing. If the requestor doesn’t recognize the riff, they could be stumbling into copyright nightmare territory by publishing whatever the machine spat out, and it’s just not worth the risk. Why is visual art not getting this treatment?
Proving that an image is stolen when it’s part of a massive collage of millions of different images (also stolen with few exceptions) is really, really hard. For music, identifying a rhythm that’s too close to another one is pretty easy, as there are not infinitely many ways to recreate a riff. A face can come out identical to one an artist created for their ArtStation account, but the rest of the image not fitting combined with an opaque generating process means that nobody can definitively prove anything belongs to them no matter how similar it looks. The machine really might pull a face or a set of wings directly from training images to put into a generated picture, and nobody – not even the creators of the machine – could tell you that it did that or that it didn’t just happen to generate wings that looked exactly like something posted back in 2019. Without being able to interrogate it, without being able to see inside, nobody on the human artist side can do much but shout this information to the heavens.
The fight can’t even start until someone huge like Disney starts questioning why the machine can respond to phrases like ‘dog, pixar style’ and come out with something that looks like Dug from the movie ‘Up’.
This is such an obvious weak point that recently, a judge denied someone copyright protection for the AI-generated art they used to make a comic. The story, which was written by a human, was eligible; the art behind it, not made by a human, was not. This is a step in the right direction, because all that theft will ultimately come to nothing if corporations can’t squeeze money out of it or protect what they make from being transfigured into something they don’t want to be associated with. It will be reduced to a cottage industry run by people hoping to make a quick buck and then bail.