Adobe's thoughts on the ethics of AI-generated images (and paying its contributors for them)

"We're at a tipping point where AI is going to break trust in what you see and hear -- and democracies can't survive when people don't agree on facts. You have to have a baseline of understanding of facts," Dana Rao, Adobe's general counsel and chief trust officer, told me. And while that's not necessarily a new observation, the company's launch of its Firefly generative image creator and overall GenAI platform this week puts this in a different context.

Maybe more so than any other company, Adobe is deeply embedded in both the creative economy and the world of marketing. And while the Adobe Summit, the company's annual digital marketing event, unsurprisingly focuses on how generative AI can help marketers market more effectively, there was no escaping the discussions around AI ethics, especially in the context of Firefly. Indeed, Adobe itself put AI ethics in the spotlight because the company clearly believes that this is, in part, what allows it to differentiate its generative AI offerings from those of its competition.

Adobe Summit on Tuesday, March 21, 2023, in Las Vegas. Image Credits: David Becker/AP Images for Adobe

"I manage the AI ethics program. We have a really good relationship with the engineering team as we're developing new technologies," Rao explained. "We've been reviewing AI features for the last five years. Every single AI feature that goes to market goes through the review board." This team, it's worth noting, also ensures that the AI-generated results are not just commercially safe but also free of bias (to ensure that when you ask for images related to an occupation, for example, the results cover a broad demographic set).  

As Adobe used its Adobe Stock service to train the model (in addition to openly licensed and public domain images), the company doesn't have to worry about having the rights to these images. The photographers who contribute to Stock already have a commercial relationship with Adobe, after all, and are likely creating the kind of commercially safe images that Adobe's customers are looking for -- and that the company can then train its AI on. And since the licensing is clear, Adobe's users won't have to worry about breaking any copyright laws themselves.

"That stock database of images is the perfect place to go if you want to create something designed to be commercially safe. And we have the license for it -- a direct license with the contributor. And that helps on both the ethics side and the copyright side," Rao explained.


But that also raises questions about how to pay these contributors for the content they've licensed to Adobe, especially if services like Firefly take off. Today, stock photographers typically receive a royalty every time one of their photos is licensed on a platform like Adobe Stock. And while Adobe has the rights to use this content to train its model, Adobe Stock contributors will surely want to get paid for helping the company train these models, too. In the company's defense, it's been quite open about this question, though its answer remains vague. Rao didn't share specifics, but he did walk me through the company's thinking.

"What we've said is that we're really reviewing all the different ways you could possibly do this and we're going to do that through the beta," he said. "I think the number one thing is that we're committed. We feel it's the right thing to do. We're committed to compensating the people who are contributing their work to these databases. That's what we want to make sure. That's the message we want to get out there." 

He stressed that Adobe wants there to be a value exchange between the contributors and the company. There are lots of different ways Adobe could pay contributors for how their images influence the AI-generated content, he argued, but because it's hard to know exactly which images influenced how the model created a new result, it's equally hard for Adobe to decide how to compensate the creators behind every AI-generated image. Rao believes there may be proxies the company could use, though -- and the solution may actually be another AI system.


"Speculatively, you could use an AI to analyze an image and say: where do I think it came from? There's just a number of ways to imagine how you could come up with the right model. But there's no need right now for us to solve that problem while we're in beta," said Rao.

He also noted that Adobe could pay photographers when a user asks for an image that's specifically influenced by their individual style.

"When I think about a forward-looking [compensation] model, it's style. Right now, that's a negative. Artists don't want their style to be ripped off. But what if you can monetize it? What if we can say: you give us your assets, we'll plug them into Firefly, and then if someone says 'I want it to look like Dana Rao,' we pop up a message saying that for $2, you can get something in the style of Dana Rao -- all of a sudden, I get a new revenue stream," he explained. He also noted that it is now up to everybody who works in the creative economy to figure out new ways to make money.

That's for Adobe and its content partners to figure out, though. For users who want to use Firefly to create their own assets, it's not a problem they have to worry about. There are, however, some interesting questions around how -- or even if -- you can copyright AI-generated images.

"Where the copyright office is now -- and I think there's a decent chance that'll stick, because, technically speaking, it's almost nonsensical otherwise. Right now, they're saying that if you type in a text prompt, the resulting image: no one owns it. You need a human to add expression in order to get copyright," explained Rao. How much value a human would have to add to copyright an image, though, is still a bit unclear.

Adobe, together with a large number of partners, has long championed the Content Authenticity Initiative, which is developing standards and tools for tracking how an image was created and manipulated over time. And while this initiative has mostly focused on fighting deepfakes and misinformation, it may also come into play in this context, because it will allow companies to prove that they added their own expression to an AI-generated image.
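The initiative's core idea, cryptographically binding a record of who made an asset and how it was edited to the asset's content, can be illustrated with a toy sketch. To be clear, this is not the actual CAI/C2PA specification: the manifest fields, the HMAC-based signing, and the hardcoded key below are simplified assumptions for illustration only.

```python
import hashlib
import hmac
import json

# Stand-in for a real cryptographic identity; real provenance systems use
# public-key certificates, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def make_manifest(image_bytes: bytes, creator: str, actions: list) -> dict:
    """Bind a creator name and an edit history to an image's content hash."""
    payload = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "creator": creator,
        "actions": actions,  # e.g. ["generated:firefly", "edited:crop"]
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check the manifest is untampered and matches the image it describes."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(signature, expected)
    content_ok = claimed.get("content_sha256") == hashlib.sha256(image_bytes).hexdigest()
    return signature_ok and content_ok

img = b"\x89PNG fake image bytes"
manifest = make_manifest(img, "example-creator", ["generated:firefly", "edited:human-retouch"])
print(verify_manifest(img, manifest))         # True: image matches its recorded history
print(verify_manifest(img + b"x", manifest))  # False: content no longer matches
```

A verifier who trusts the signer can then read the `actions` list and see, for instance, that a human retouching step followed the AI generation, which is the kind of attestation that could matter for the copyright question above.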