Frequently Asked Questions (FAQ)

First, here are a few key properties about Glaze that might help users understand how it works.
  • Image-specific: The cloaks needed to prevent AI from stealing the style are different for each image. Our cloaking tool, run locally on your computer, will "calculate" the cloak needed given the original image and the target style (e.g. Van Gogh) you specify (a conceptual sketch of this calculation follows this list).
  • Effective against different AI models: Once you add a cloak to an image, the same cloak can prevent different AI models (e.g., Midjourney, Stable Diffusion, etc.) from stealing the style of the cloaked image. This is the property known as transferability. While it is difficult to predict performance on new or proprietary models, we have extensively tested and validated the performance of our protection against multiple AI models.
  • Robust against removal: These cloaks cannot be easily removed from the artwork (e.g., by sharpening, blurring, denoising, downsampling, or stripping metadata).
  • Stronger cloak leads to stronger protection: We can control how much the cloak modifies the original artwork, from introducing completely imperceptible changes to making slightly more visible modifications. Larger modifications provide stronger protection against AI's ability to steal the style.
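To make the "calculate" step above concrete, here is a minimal, conceptual sketch of cloak computation as a small optimization problem: nudge the image so that its features (as seen by a stand-in extractor) move toward the target style, while keeping every pixel change within a budget that corresponds to cloak strength. This illustrates the general idea only and is not the actual Glaze implementation; the feature extractor, loss, step count, and budget values below are all assumptions.

```python
# Conceptual sketch of cloak computation (NOT the actual Glaze implementation).
# Assumptions: a tiny CNN stands in for the image model's style encoder, and a
# style-transferred copy of the artwork stands in for the target style.
import torch
import torch.nn as nn

# Stand-in feature extractor; a real tool would use the image model's own encoder.
phi = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def compute_cloak(original, target_style, budget=0.05, steps=200, lr=0.01):
    """Find a small perturbation that moves the image's features toward
    the target style while staying within a per-pixel budget."""
    delta = torch.zeros_like(original, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    target_feat = phi(target_style).detach()
    for _ in range(steps):
        opt.zero_grad()
        # Pull the cloaked image's features toward the target style.
        loss = ((phi(original + delta) - target_feat) ** 2).mean()
        loss.backward()
        opt.step()
        # Keep the change small: a larger budget means a stronger cloak.
        with torch.no_grad():
            delta.clamp_(-budget, budget)
    return (original + delta).clamp(0, 1).detach()

# Toy usage with random tensors standing in for real images.
x = torch.rand(1, 3, 64, 64)        # original artwork
x_style = torch.rand(1, 3, 64, 64)  # the same artwork rendered in the target style
cloaked = compute_cloak(x, x_style)
```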

How could this possibly work against AI? Isn't it supposed to be smart?
This is a popular reaction to cloaking, and quite reasonable. We hear often in the popular press how amazingly powerful AI models are and the impressive things they can do with large datasets. Yet the Achilles' heel for AI models has been a phenomenon called adversarial examples: small tweaks in inputs that can produce massive differences in how AI models classify the input. Adversarial examples have been recognized since 2014 (here's one of the first papers on the topic), and numerous papers have since attempted to prevent them. It turns out it is extremely difficult to prevent adversarial examples from making their way into training datasets, and in a way, they are a fundamental consequence of the imperfect training of AI models. Numerous PhD dissertations have been written on this subject, but suffice it to say, this weakness remains extremely difficult to eliminate.
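For readers who want to see how little it takes to throw off a model, here is the classic Fast Gradient Sign Method from the adversarial-examples literature (Goodfellow et al., 2014). This is a generic textbook illustration, not Glaze's technique; the toy classifier and the epsilon value are placeholders.

```python
# Classic adversarial-example demo (FGSM). Generic illustration only;
# the toy classifier below stands in for any trained image model.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm(image, label, epsilon=0.03):
    """Return a copy of `image` nudged by at most `epsilon` per pixel in the
    direction that most increases the classifier's loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A tiny, sign-only step per pixel: barely visible to humans,
    # but it can flip the model's output.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)   # stand-in image
y = torch.tensor([3])          # its true label
x_adv = fgsm(x, y)
print((x_adv - x).abs().max()) # the change per pixel is at most epsilon
```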

The underlying techniques used by our cloaking tool draw directly from the same properties that give rise to adversarial examples. Could AI models evolve significantly enough to eliminate this property? It's certainly possible, but we expect that would require major changes to the underlying architecture of AI models. Until then, cloaking works precisely because of fundamental weaknesses in how AI models are designed today.

Can't you just take a screenshot of the artwork to destroy the image cloaks?
The cloaks make calculated changes to pixels within the images. The changes vary for each image, and while they are not necessarily noticeable to the human eye, they significantly distort the image for AI models during the training process. A screenshot of any of these images would retain the underlying alterations, and the AI model would still be unable to recognize the artist’s style in the same way humans do.
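A small sanity check makes this point concrete: if we treat a screenshot as a lossless re-capture of the displayed 8-bit pixels, the cloak's per-pixel alterations come back unchanged. The random "cloak" and the file name below are stand-ins for illustration only.

```python
# Toy check that re-rasterizing an image (what a screenshot effectively does)
# keeps the cloak's pixel-level changes. The perturbation and file name are
# placeholders, not Glaze's output.
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)

# Stand-in "cloak": a small per-pixel offset of at most 3 intensity levels.
cloak = rng.integers(-3, 4, original.shape, dtype=np.int16)
cloaked = np.clip(original.astype(np.int16) + cloak, 0, 255).astype(np.uint8)

# Simulate a screenshot: the screen shows these exact 8-bit pixels,
# and a lossless capture stores them back unchanged.
Image.fromarray(cloaked).save("screenshot.png")
recaptured = np.asarray(Image.open("screenshot.png"))

print(np.array_equal(recaptured, cloaked))  # True: the alterations survive
```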

Can't you just apply some filter, compression, blurring, or add some noise to the image to destroy image cloaks?
As counterintuitive as this may be, the high-level answer is that no simple tools can destroy the perturbation of these image cloaks. To make sense of this, it helps to first understand that cloaking does not use high-intensity pixels or rely on bright patterns to distort the image. It is a precisely computed combination of a number of pixels that do not easily stand out to the human eye, but can produce distortion in the AI's "eye." In our work, we have performed extensive tests showing how robust cloaking is to things like image compression and distortion/noise/masking injection.

Another way to think about this is that the cloak is not some brittle watermark that is either seen or not seen. It is a transformation of the image in dimensions that humans do not perceive, but very much in the dimensions in which deep learning models perceive these images. So transformations that rotate, blur, change resolution, crop, etc., do not affect the cloak, just as those operations would not change your perception of what makes a Van Gogh painting "Van Gogh."
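For those who want to probe this themselves, here is a sketch of the kind of robustness harness one could run: apply common "cleanup" operations to a cloaked image and measure how much of the cloak-induced change survives each one. The file paths, the transforms, and the simple pixel-difference metric below are assumptions for illustration; the published evaluation measures the shift in the AI model's feature space rather than raw pixel differences.

```python
# Sketch of a robustness check (not Glaze's published evaluation): run common
# "cleanup" operations on a cloaked image and see how much of the
# cloak-induced change remains afterwards.
import io
import numpy as np
from PIL import Image, ImageFilter

def jpeg(img, quality=75):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def blur(img, radius=1.0):
    return img.filter(ImageFilter.GaussianBlur(radius))

def downsample(img, factor=2):
    w, h = img.size
    small = img.resize((w // factor, h // factor), Image.LANCZOS)
    return small.resize((w, h), Image.LANCZOS)

def remaining_change(original, cloaked, transform):
    """How different the transformed cloaked image still is from the
    transformed original: 0 would mean the cloak was fully erased."""
    a = np.asarray(transform(original), dtype=np.float32)
    b = np.asarray(transform(cloaked), dtype=np.float32)
    return np.abs(a - b).mean()

original = Image.open("artwork.png").convert("RGB")          # placeholder path
cloaked = Image.open("artwork_cloaked.png").convert("RGB")   # placeholder path

for name, t in [("jpeg", jpeg), ("blur", blur), ("downsample", downsample)]:
    print(name, remaining_change(original, cloaked, t))
```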

How can cloaking be useful when I have so much uncloaked, original artwork online that I can't easily take down?
Cloaking works by shifting the AI model's view of your style in its "feature space" (the conceptual space where AI models interpret artistic styles). If you, like other artists, already have a significant amount of artwork online, then an AI model like Stable Diffusion has likely already downloaded those images and used them to learn your style as a location in its feature space. However, these AI models are always adding more training data in order to improve their accuracy and keep up with changes in artistic trends over time. The more cloaked images you post online, the more your style will shift in the AI model's feature space, moving closer to the target style (e.g., Van Gogh's style). At some point, when the shift is significant enough, the AI model will start to generate images in Van Gogh's style when asked for your style.
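Here is a toy numeric illustration of that shift, under the simplifying assumption that the model summarizes "your style" as the average feature vector of the images of yours it has scraped. All of the vectors and counts below are made up; the point is only to show how adding cloaked images, which sit near the target style in feature space, drags that average toward the target.

```python
# Toy illustration (not how any real model trains): treat "your style" as the
# average feature vector of your scraped images, and watch that average move
# toward the target style as more cloaked images are added.
import numpy as np

rng = np.random.default_rng(1)
dim = 8
your_style = rng.normal(0.0, 1.0, dim)     # where your uncloaked art sits
target_style = rng.normal(5.0, 1.0, dim)   # e.g. "Van Gogh" in feature space

uncloaked = your_style + rng.normal(0, 0.1, (100, dim))  # already-scraped art

for n_cloaked in [0, 50, 100, 300]:
    # Cloaked images look like your art to humans but sit near the target
    # style in the model's feature space.
    cloaked = target_style + rng.normal(0, 0.1, (n_cloaked, dim))
    learned = np.vstack([uncloaked, cloaked]).mean(axis=0)
    drift = np.linalg.norm(learned - target_style)
    print(f"{n_cloaked:3d} cloaked images -> distance to target style: {drift:.2f}")
```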

We found that as long as there is even a slight shift in the AI model's feature space, the AI model will start to add "artifacts" when generating art in your style. For example, the generated images may still resemble your style, but they will have noticeable aspects of Van Gogh blended in. Often, this is sufficient to prevent people from using AI to generate images in your style, because the generated images will have unnatural artifacts.