AI-generated art can be weird, beautiful, remarkably on point, hilariously off the mark and, at times, oddly terrifying. It lives comfortably in the uncanny valley.
Generative text-to-image models – DALL-E is a popular one – allow people to almost instantly create images from text prompts written in natural language. The machine learning models learn to replicate and riff by analyzing and processing millions of images of faces, animals, places, landscapes, artistic styles – you name it.
Using Dream by Wombo, for example, an AI-powered art app available on the App Store and Google Play, it’s possible to type in “Jenga as a metaphor for the vicissitudes of life” in the style of “line art” and get almost exactly that in a matter of seconds.
And it’s not hard to imagine the applications for media and marketing, including spitting out multiple variations of creative quickly, cheaply and easily.
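The same kind of generation is also available programmatically. As a rough illustration (separate from the apps mentioned above), here is a minimal sketch using OpenAI’s Python SDK, assuming the openai package is installed and an API key is configured; the prompt, model and image count are placeholders.

```python
# Minimal sketch of programmatic text-to-image generation with OpenAI's Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable are in place;
# the prompt, model and image count below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",  # DALL-E 2 allows multiple images per request
    prompt="Jenga as a metaphor for the vicissitudes of life, line art",
    n=2,               # request two variations of the same creative
    size="1024x1024",
)

# Each returned item carries a temporary URL for a generated image.
for image in response.data:
    print(image.url)
```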
Ai-ght?
But AI art is also sparking a lot of very tricky questions, such as: What is “real” art, and can robots create it? What does AI mean for the future of creativity? Will machines replace artists? Are we rolling out the red carpet for deepfakes? What are the ethics of training algorithms on images that are under copyright? Who owns the output?
In September, Getty Images announced a ban on uploading and selling AI-generated images over copyright concerns.
But Shutterstock, one of the largest stock photography sites in the world, is taking a more permissive approach. Last month, Shutterstock said it plans to sell DALL-E images through a partnership with OpenAI and launch a “contributor fund” that compensates creators when their art is downloaded and used to train text-to-image AI models.
“Our position is that we want to use it to create a new line of business and new opportunities for contributors,” said Chris “Skip” Wilson, who joined Shutterstock as VP of brand marketing in April after more than four years as a global brand and communications lead at Peloton. “It’s cool to be at the forefront of something so new – how often do you get to do that?”
AdExchanger caught up with Wilson at Web Summit in Lisbon earlier this month.
AdExchanger: What does it mean for a company like Shutterstock that it’s now possible for anyone to create specific images just by typing a few words? Like, I could type “pop art of a troll eating ice cream” into DALL-E and within seconds have multiple high-quality images of a troll eating ice cream drawn in the style of pop art.
SKIP WILSON: There’s been this general fear of oh no, the robots are coming, and they’re going to take all of the fun away from actual creators. But I don’t think that’s the case. I see it like this: We’re adding a new tool that allows us to be more expressive.
But isn’t generative AI art also a potential threat to Shutterstock’s business, like anyone can make anything?
I don’t see it as a threat, because there’s still a lot of finesse that’s needed to develop quality content that meets a particular need.
The algorithms aren’t perfect. It’s not like I can type in “Allison” and “France” and get a picture of you standing in front of the Eiffel Tower. My creative director recently showed me an image that was generated in response to the prompt “salmon swimming in a river” – and it was actual fillets of salmon, like from the grocery store, floating down a river.
And there are also certain categories that AI just can’t help with, like news. AI is not going to produce a late-breaking image from the war in Ukraine. It’s not going to produce an image from last night’s Beyoncé concert. In order to capture those things, a human touch is still required.
But what about the ethical implications?
There are a lot of questions here, including how to give artists credit for their work and their intellectual property.
What we’re excited about is being early to the conversation so we can help shape the policies and procedures. Because AI-generated content isn’t going away, right? And we think there’s an opportunity here to respond and be strategic as opposed to being reactionary.
What sort of policies are top of mind right now?
Intellectual property rights. Shutterstock is in the content business, but we’re also in the business of protecting intellectual property. It’s our job to make sure that the rights of our contributors remain intact.
That’s why we’ve set up a contributor fund, which provides what you might think of as a royalty licensing fee if someone’s native content was used to train AI models.
What’s interesting is that AI technology can also be used to help us identify IP issues, like images that look a little too similar. The same technology that is raising questions can be part of the solution.
Can someone upload AI-generated images and sell them on Shutterstock today?
Our policy is focused on securing and maintaining intellectual property rights, and as of now, there isn’t a way to track all of those random AI images. With that in mind, our existing policy states that if we can’t recognize or establish IP rights, we won’t sell that content.
This interview has been edited and condensed.