This tool strips anti-AI protections from digital art

A new technique called LightShed undermines the protective tools artists use to keep their work from being ingested for AI training. It is the latest step in a technological cat-and-mouse game that runs alongside the legal and cultural battles artists and AI advocates have been fighting for years.

To create images, generative AI models must be trained on large amounts of visual material, and the data sets used for this training allegedly include copyrighted artwork used without permission. Artists worry that the models could mimic their styles and push them out of work.

It’s important to note that the researchers behind LightShed are not trying to steal artists’ work; rather, they don’t want to give people a false sense of security. “You won’t know if companies have ways to delete these poisons. But they will never tell you,” says Hanna Foerster, a PhD student and the paper’s lead author. And if they do, it could be too late to fix the problem.

AI models work in part by implicitly drawing boundaries between what they perceive as different categories of images. Glaze and Nightshade alter enough pixels to push an image across such a boundary without visibly affecting its quality, causing the model to perceive it as something it is not. These almost imperceptible alterations, called perturbations, interfere with the AI model’s understanding of the artwork.

Glaze causes models to misunderstand style (e.g., interpreting a photorealistic picture as a cartoon), while Nightshade makes them misinterpret the subject (e.g., seeing a picture of a cat as a dog). Glaze is used to defend an artist’s individual style; Nightshade is used to attack AI models that crawl the internet for art.
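The boundary-crossing idea described above can be illustrated with a toy example. The snippet below is a minimal sketch, not Glaze’s or Nightshade’s actual algorithm: it nudges each pixel of a flattened “image” by an imperceptible amount so that a hypothetical linear classifier flips its decision.

```python
import numpy as np

def perturb_to_flip(image, weights, bias, epsilon=0.02):
    """Shift each pixel by at most `epsilon` so a toy linear classifier
    flips its decision while the image looks essentially unchanged.
    This illustrates the general idea of an adversarial perturbation,
    not the real Glaze or Nightshade method."""
    score = image @ weights + bias                  # sign = predicted class
    direction = -np.sign(score) * np.sign(weights)  # push the score past zero
    return np.clip(image + epsilon * direction, 0.0, 1.0)

rng = np.random.default_rng(0)
weights = rng.normal(size=64)            # hypothetical classifier weights
image = rng.uniform(0.3, 0.7, size=64)   # flattened 8x8 "artwork"
bias = 0.1 - image @ weights             # place the image near the boundary

poisoned = perturb_to_flip(image, weights, bias)
before = image @ weights + bias          # positive: one category
after = poisoned @ weights + bias        # negative: the other category
max_change = np.abs(poisoned - image).max()  # at most epsilon per pixel
```

Even though no pixel moves by more than 0.02 on a 0-to-1 scale, the classifier’s decision flips, which is the essence of how these perturbations mislead a model without degrading the image for human viewers.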

Foerster collaborated with a team of researchers at the Technical University of Darmstadt and the University of Texas at San Antonio to develop LightShed, software that learns to identify where tools such as Glaze and Nightshade have applied this kind of digital poison to art, so that it can effectively clean it off. The group will present its findings at the Usenix Security Symposium, a leading global cybersecurity conference, in August.

The researchers trained LightShed on art pieces with and without Nightshade or Glaze applied. Foerster describes the process as teaching LightShed to reconstruct “just the poison on poisoned pictures”: by identifying a cutoff for how much poison will actually confuse an AI, it becomes easier to “wash off” only the poison. LightShed is extremely effective at this. While other researchers have found simple ways to subvert poisoning, LightShed appears more adaptable: it can apply what it has learned from one anti-AI tool, say Nightshade, to others such as Mist and MetaCloak without ever having seen them before. It has some trouble with very small doses of poison, but those are also less likely to disrupt an AI model’s understanding of the underlying artwork, making it a win for the AI, or a loss for the artists who use these tools, either way.
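Foerster’s description of reconstructing “just the poison” can be sketched in miniature. The example below is a hypothetical illustration, not the actual LightShed system (which trains a neural network): it treats the poison as a fixed additive pattern, estimates that pattern from pairs of clean and poisoned images, and subtracts the estimate from a new poisoned image.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fixed perturbation standing in for Glaze/Nightshade poison.
poison = 0.02 * np.sign(rng.normal(size=64))

# "Training data": the same artworks with and without the poison applied.
clean_train = rng.uniform(0.3, 0.7, size=(100, 64))
poisoned_train = clean_train + poison

# "Training": reconstruct the poison as the average residual.
learned_poison = (poisoned_train - clean_train).mean(axis=0)

# "Washing off" the poison from a previously unseen artwork.
new_clean = rng.uniform(0.3, 0.7, size=64)
restored = (new_clean + poison) - learned_poison

error = np.abs(restored - new_clean).max()  # near zero in this toy setup
```

In this simplified setting the recovered image is nearly identical to the clean original; the real system must instead learn to recognize much more complex, image-dependent perturbations.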

Glaze has been downloaded by 7.5 million people seeking to protect their artwork, many of them artists with small or medium followings and limited resources. Those who use tools like Glaze see them as an important technical line of defense, especially while regulations around AI training and copyright remain in flux. The LightShed authors view their work as a warning that tools like Glaze may not be permanent solutions. Foerster says it might take a few more attempts to come up with better ideas for protection.

The creators of Glaze and Nightshade appear to agree with this sentiment: the Nightshade website warned that the tool was not future-proof even before work on LightShed began. And Shan, a researcher who worked on both tools, believes that defenses like his still have value even if they can be circumvented.

“It is a deterrent,” Shan says, a way of warning AI companies that artists take their concerns seriously. The goal, he says, is to put up enough roadblocks that AI companies find it easier to simply work with artists. He believes “most artists understand this is temporary solution,” but that creating these obstacles against the unwanted use of their work is still valuable.

Foerster wants to use the knowledge she gained through LightShed to create new defenses, such as clever watermarks that persist with an artwork even after it has been run through an AI model. She doesn’t believe any such tool will protect artwork from AI forever, but she thinks it could help tip the scales back in the artists’ favor once again.
