Lauren Leffer: Generative artificial intelligence tools can now instantly produce images from text prompts. It’s neat tech, but could mean trouble for professional artists.
Rachel Feltman: Yeah, because those AI tools make it really easy to just instantly rip off someone’s style.
Leffer: That’s right. Generative AI, which is trained on real people’s work, can end up really hurting the very artists who enable its existence. But some have started fighting back with nifty technical tools of their own.
Feltman: It turns out that the pixel is mightier than the sword. I’m Rachel Feltman, a new member of the Science, Quickly team.
Leffer: And I’m Lauren Leffer, contributing writer at Scientific American.
Feltman: And you’re listening to Scientific American’s Science, Quickly podcast.
[Clip: Show theme music]
Feltman: So I have zero talent as a visual artist myself, but it seems like folks in that field have really been feeling the pressure from generative AI.
Leffer: Absolutely, yeah. I’ve heard from friends who’ve had a harder time securing paid commissions than ever before. You know, people figure they can just whip up an AI-generated image instead of paying an actual human to do the work. Some even use AI to deliberately mimic specific artists. But there’s at least one tiny spot of hope: a small way for artists to take back a scrap of control over their work and digital presence.
Feltman: It’s like a form of self-defense.
Leffer: Right, let’s call it self-defense, but it’s also a little bit of offense.
It’s this pair of free-to-use computer programs called Glaze and Nightshade, developed by a team of University of Chicago computer scientists in collaboration with artists. Both tools lay an algorithmic cloak over a digital image that changes how AI models interpret the picture but keeps it looking basically unchanged to the human eye.
Feltman: So once you slap one of these filters on your artwork, does that make it effectively off-limits to an AI training model?
Leffer: Yeah, basically. It can’t be used to train generative image models in the same way once it’s been “glazed” or “shaded” – which is what they call an image passed through Nightshade. And Nightshade specifically might actually mess up a model’s other training; it throws a wrench in the whole process.
Feltman: That sounds like karma to me. I’d love to hear more about how that works. But before we dig into the technical stuff, I have to ask: shouldn’t artists already be protected by copyright laws? Like, why do we need these technical tools to begin with?
Leffer: Yeah, great question. So right now, whether copyright law protects creative work from being used to train AI is this really big, unresolved legal gray area, kind of a floating question mark. There are multiple pending lawsuits on the subject, including ones brought by artists against AI image generators, and even one by The New York Times against OpenAI, because the tech company used the newspaper’s articles to train its large language models. So far AI companies have claimed that pulling digital content into training databases falls under the legal doctrine of fair use.
Feltman: And I guess as long as those cases are still playing out, in the meantime, artists just can’t really avoid feeding that AI monster if they want to promote their work online. Which, obviously, they have to do.
Leffer: Right, exactly. Glaze and Nightshade – along with similar tools out there, like Mist – aren’t permanent solutions. But they’re offering artists a little bit of peace of mind in the interim.
Feltman: Great names all around. How did these tools come to be?
Leffer: Let’s start with a little bit of background. Before we had generative AI, there was facial recognition AI. That laid the technical groundwork for adversarial filters, which adjust photos to prevent them from being recognized by software. The developers of Glaze and Nightshade had previously released one of these tools, called Fawkes, after the Guy Fawkes mask from V for Vendetta.
Feltman: Another great name.
Leffer: Yeah it’s very into, like, the tech-dystopia world.
Feltman: Totally.
Leffer: Fawkes cloaked faces, and in 2023 the research team started hearing from artists asking if Fawkes could help hide their work from AI, too. Initially, you know, the answer was no, but it did prompt the computer scientists to begin developing programs that could help artists cloak their work.
Feltman: So what do these tools do?
Leffer: Glaze and Nightshade do slightly different things, but let’s start with the similarities. Both programs apply filters: they alter the pixels in digital pictures in subtle ways that are confusing to machine learning models but (mostly) unobtrusive to humans.
Feltman: Very cool in theory, but how does it work?
Leffer: You know how, with optical illusions, a tiny tweak can suddenly make you see a totally different thing?
Feltman: Ah yes, like the infamous dress that was definitely blue and black, and not white and gold at all.
Leffer: Right there with you. Yeah, so optical illusions happen because human perception is imperfect; we have these quirks inherent to how our brains interpret what we see. For instance, you know, people have a tendency to see human faces in inanimate objects.
Feltman: So true, like every US power outlet is just a scared lil guy.
Leffer: Absolutely, yeah– power outlets, cars, mailboxes– all of them have their own faces and personalities.
Feltman: 100%.
Leffer: Computers don’t see the world the same way that humans do, but they have their own perceptual vulnerabilities. And the developers of Glaze and Nightshade built an algorithm that figures out those quirks and the best way to exploit them, and then modifies an image accordingly. It’s a delicate balancing act. You want to stump the AI model, but you also want to keep things stable enough that a human viewer doesn’t notice much of a change. In fact, the developers kind of got to that balanced point through trial and error.
Feltman: Yeah, that makes sense. It’s really hard to mask and distort an image without masking and distorting an image. So they’re able to do this in a way that we can’t perceive, but what does that look like from the AI’s perspective?
Leffer: Another great question. To train an image-generating AI model to pump out pictures, you give it lots of images along with descriptive text. The model learns to associate certain words with visual features – think shapes or colors, though really it’s picking up on things we can’t necessarily perceive, because it’s a computer. And under the hood, all of these associations are stored in what are basically multidimensional maps, where similar concepts and types of features are clustered near one another.
With the algorithms that underlie Glaze and Nightshade, the computer scientists strategically force associations between unrelated concepts: they move points on that multidimensional map closer and closer together.
Feltman: Yeah, I think I can wrap my head around how that would confuse an AI model.
Leffer: Yeah, it’s all still a little hand-wavy, because what it really comes down to is some complex math. Ben Zhao, the lead researcher at the University of Chicago behind these cloaking programs, said that developing the algorithms was akin to solving two sets of linear equations.
Feltman: Not my strong suit. So I will take his word for it.
Leffer: Me either. That’s why we’re at a podcast instead.
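[Editor’s note: For readers who want a feel for that math, here is a minimal, hypothetical sketch of the balancing act in Python with PyTorch. The toy feature_map, the decoy target embedding and the pixel budget are all invented stand-ins; the real tools optimize against the feature extractors of actual generative models. What the sketch shows is the core trade-off the researchers describe: push the image’s embedding toward the wrong spot on the map while capping how much any pixel may visibly change.]

```python
import torch

# Invented stand-in for a real image-feature extractor; the actual tools
# target the embeddings of real generative models, not this toy linear map.
torch.manual_seed(0)
feature_map = torch.nn.Linear(3 * 32 * 32, 64)

def embed(image):
    """Map a 3x32x32 image tensor to a 64-dimension feature vector."""
    return feature_map(image.flatten())

artwork = torch.rand(3, 32, 32)    # stands in for the artist's original image
target = torch.randn(64)           # embedding of a decoy "style" (invented)

delta = torch.zeros_like(artwork, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
budget = 0.03                      # cap on per-pixel change: the "subtle" part

for _ in range(200):
    opt.zero_grad()
    cloaked = (artwork + delta).clamp(0, 1)
    # Pull the cloaked image's embedding toward the decoy target...
    loss = torch.nn.functional.mse_loss(embed(cloaked), target)
    loss.backward()
    opt.step()
    # ...while keeping every pixel within the budget, so a human viewer
    # sees (almost) the same picture.
    with torch.no_grad():
        delta.clamp_(-budget, budget)

print("max per-pixel change:", delta.abs().max().item())
```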
Feltman: So why two tools? How are these different?
Leffer: Glaze came out first; it was kind of the entry, the foray, into this world. It’s very focused on cloaking an artist’s style. So this thing kept happening to prominent digital artists where someone would take an open-source generative AI model and train it on just that artist’s work. That gave them a tool for producing style mimics. Obviously this can mean fewer paid opportunities for the artist in question, but it also opens creators up to reputational threats. You could use one of these style mimics to make it seem like an artist had created a really offensive image, or something else they would never make.
Feltman: That sounds like such a nightmare.
Leffer: Absolutely, it’s in the same nightmare zone as deepfakes and everything else happening with generative AI right now. So because of that, Zhao and his colleagues put out Glaze, which tricks AI models into perceiving the wrong style. Let’s say your aesthetic is very cutesy, bubbly and cartoony. If you glaze your work, an AI model might instead see Picasso-esque cubism. It makes it way harder to train style mimics.
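[Editor’s note: Continuing the hypothetical sketch above, one plausible way to pick the “wrong style” is to choose a decoy whose embedding sits far from the artist’s own. The style names and embeddings below are invented for illustration; the transcript doesn’t describe how Glaze actually selects its target.]

```python
import torch

# Hypothetical: pick the decoy style farthest from the artist's own work
# on the model's multidimensional map, so the cloak pushes the image
# toward a maximally "wrong" region.
torch.manual_seed(1)
artist_center = torch.randn(64)    # centroid of the artist's embeddings (invented)
style_library = {                  # invented decoy-style embeddings
    "cubism": torch.randn(64),
    "watercolor": torch.randn(64),
    "pixel art": torch.randn(64),
}

decoy_name, decoy_embedding = max(
    style_library.items(),
    key=lambda item: torch.dist(artist_center, item[1]).item(),
)
print("cloak toward:", decoy_name)  # would serve as `target` in the earlier loop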
Feltman: Very cool. You mentioned that these tools can also play a little bit of offense against AI art generators. Is that where Nightshade comes in?
Leffer: That’s right. An image cloaked with Nightshade will teach AI to incorrectly associate not just styles but also fundamental ideas and images. As a hypothetical example, it would take only a few hundred Nightshade-treated images to retrain a model to think cats are dogs. Zhao says that hundreds of thousands of people have already downloaded and begun deploying Nightshade. And so his hope – and his co-researchers’ hope, and artists’ hope – is that, with all of these images out there, it will become costly and annoying enough for AI companies to weed through masked pictures that they’ll be more incentivized to pay artists willing to license their work for training instead of just trawling the entire web.
Feltman: And if nothing else, it’s just very satisfying.
Leffer: Yeah, it’s catharsis at some baseline level.
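[Editor’s note: Here is a toy illustration of that poisoning effect, under loud assumptions: Nightshade hides the bad signal inside the pixels of correctly captioned images, whereas this sketch cheats and flips the labels outright, and it uses a two-feature linear classifier instead of a generative model. What it demonstrates is the training-time outcome the transcript describes, a few hundred bad examples corrupting the “cat” concept.]

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
cat_center = torch.tensor([-2.0, 0.0])   # invented 2-D "image features"
dog_center = torch.tensor([2.0, 0.0])

def make_data(n, center, label):
    x = center + 0.5 * torch.randn(n, 2)
    return x, torch.full((n,), label)

def fit(model, x, y, steps=300, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model

def pct_called_dog(model, x):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == 1).float().mean().item()

# Train on clean data: cats labeled 0, dogs labeled 1.
cat_x, cat_y = make_data(1000, cat_center, 0)
dog_x, dog_y = make_data(1000, dog_center, 1)
model = fit(torch.nn.Linear(2, 2),
            torch.cat([cat_x, dog_x]), torch.cat([cat_y, dog_y]))

test_cats, _ = make_data(200, cat_center, 0)
print(f"before poisoning: {pct_called_dog(model, test_cats):.0%} of cats read as 'dog'")

# A few hundred poisoned samples: cat-like features captioned "dog."
poison_x, poison_y = make_data(300, cat_center, 1)
fit(model, poison_x, poison_y, steps=100)
print(f"after poisoning:  {pct_called_dog(model, test_cats):.0%} of cats read as 'dog'")
```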
Feltman: Yeah, so it sounds like the idea is to kind of even out the power differential between AI developers and artists, is that right?
Leffer: Yeah, these tools definitely tip the balance a little bit, but they’re certainly not a complete solution; they’re more of a stopgap. For one, artists can’t retroactively protect any art that’s already been hoovered up into AI training datasets; they can only apply these tools to newer work. Plus, AI technology is advancing super, super fast. I spoke with some AI experts who were quick to point out that neither Glaze nor Nightshade is future-proof. They could be compromised moving forward, and AI models could simply evolve into systems with different structures and architectures. Already, one group of machine-learning academics has partially succeeded at getting around the Glaze cloak.
Feltman: Wow, that was fast. That’s just a few months after it came out, right?
Leffer: Yeah, it’s quick, though that’s kind of the nature of digital security. As Zhao put it to me: “It’s always a cat-and-mouse game.”
Feltman: And I guess even if Glaze and Nightshade continue to work perfectly, it’s still unfair for artists to have to take those extra steps.
Leffer: Yes, absolutely, great point. I spoke with a professional illustrator, Mignon Zakuga, who’s been really enthusiastic about Glaze and Nightshade. She was involved in beta testing and still uses both cloaks regularly when she uploads her work. But even she said that passing images through the filters is not the greatest or easiest process. It can take a couple of hours, and even though the visual changes aren’t supposed to be noticeable, often they are, at least to her – and especially to her, as the artist who made the image. So Zakuga told me it’s a compromise she’s willing to deal with for now. But clearly, artists deserve better, more robust protections.
Feltman: Yeah, like – and I know this is wild – but what about actual policy or legislation?
Leffer: 100%. It would be great to get to a point where all of that is clarified in policy and law, but no one really knows what that should or will look like. Will copyright end up being enforced against AI? Do we need some whole new suite of protective laws? At the very least, programs like Glaze and Nightshade offer us a little more time to figure all of that out.
[Clip: Show theme music]
Leffer: Science Quickly is produced by Jeff DelViscio, Tulika Bose, Rachel Feltman, Kelso Harper and Carin Leong. Our show is edited by Elah Feder and Alexa Lim. Our theme music was composed by Dominic Smith.
Feltman: Don’t forget to subscribe to Science Quickly wherever you get your podcasts. For more in-depth science news and features, go to ScientificAmerican.com. And if you like the show, give us a rating or review!
Leffer: For Scientific American’s Science Quickly, I’m Lauren Leffer.
Feltman: I’m Rachel Feltman. See you next time!