Generative AI models have changed the way we create and consume content, particularly images and art. Diffusion models such as MidJourney and Stable Diffusion are trained on large datasets of images scraped from the web, many of which are copyrighted, private, or sensitive in subject matter. Many artists have discovered significant numbers of their art pieces in training datasets such as LAION-5B, without their knowledge, consent, credit, or compensation.

To make matters worse, many of these models are now used to copy individual artists, through a process called style mimicry. Home users can take artwork from human artists, perform "fine-tuning" or LoRA training on models like Stable Diffusion, and end up with a model capable of producing arbitrary images in the "style" of the target artist whenever their name is invoked in a prompt. Popular independent artists find low-quality facsimiles of their artwork online, often with their names still embedded in the metadata from model prompts.

Style mimicry produces a number of harmful outcomes that may not be obvious at first glance. Artists whose styles are intentionally copied not only lose commissions and basic income, but also see low-quality synthetic copies scattered online dilute their brand and reputation. Most importantly, artists associate their styles with their very identity. Seeing the artistic style they spent years developing used to create content without their consent or compensation is akin to identity theft. Finally, style mimicry and its impact on successful artists have demoralized and disincentivized young aspiring artists. We have heard from art school administrators and art teachers about plummeting student enrollment, and from panicked parents concerned for the future of their aspiring artist children.

Glaze is a system designed to protect human artists by disrupting style mimicry. At a high level, Glaze works by analyzing the AI models that train on human art and using machine learning algorithms to compute a minimal set of changes to an artwork, such that the artwork appears unchanged to human eyes but reads to AI models as a dramatically different art style. For example, human eyes might find a glazed charcoal portrait in a realism style to be unchanged, but an AI model might see the glazed version as a modern abstract style, a la Jackson Pollock. So when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.
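
For the technically curious: roughly speaking, our research paper frames cloaking as an optimization problem. Given an artwork, compute a small perturbation that pulls the image toward a target style in the feature space of a text-to-image model's image encoder, while a perceptual-distance budget keeps the change nearly invisible to humans. Below is a minimal PyTorch sketch of that idea; the feature_extractor, the precomputed target (a style-transferred rendering of the artwork), and all hyperparameters are illustrative placeholders, not the shipped Glaze implementation.

    # Minimal sketch of Glaze-style "style cloaking" (illustrative only,
    # not the actual Glaze implementation).
    # x:      artwork as a (1, 3, H, W) tensor scaled to [-1, 1]
    # target: style-transferred rendering of x (e.g., x "as abstract art")
    # feature_extractor: frozen image encoder of a text-to-image model
    import torch
    import lpips  # perceptual similarity metric (pip install lpips)

    def compute_cloak(x, target, feature_extractor,
                      budget=0.05, alpha=30.0, steps=500, lr=0.01):
        percep = lpips.LPIPS(net='vgg')                   # proxy for human vision
        target_feat = feature_extractor(target).detach()  # where we want to land
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            cloaked = (x + delta).clamp(-1.0, 1.0)
            # Pull the cloaked image toward the target style in feature space...
            style_loss = (feature_extractor(cloaked) - target_feat).pow(2).mean()
            # ...while penalizing perceptual change beyond the visual budget.
            visibility = percep(cloaked, x).mean()
            loss = style_loss + alpha * torch.relu(visibility - budget)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return (x + delta).clamp(-1.0, 1.0).detach()

The key point is that the optimization target lives in the model's feature space rather than in any fixed pixel pattern, which is why the result looks unchanged to people but reads as a different style to the model.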

But you ask, why does this work? Why can't someone just get rid of Glaze's effects by 1) taking a screenshot/photo of the art, 2) cropping the art, 3) filtering out noise/artifacts, 4) reformatting/resizing/resampling the image, 5) compressing it, 6) smoothing out the pixels, or 7) adding noise to break the pattern? None of those things break Glaze, because it is not a watermark or hidden message (steganography), and it is not brittle. Instead, think of Glaze as a new dimension of the art, one that AI models see but humans do not (like UV light or ultrasonic frequencies), except the dimension itself is hard to locate, compute, or reverse engineer. Unless attackers know exactly the dimension Glaze operates on (it changes and is different for each art piece), they will find it difficult to disrupt Glaze's effects. Read on for more details on how Glaze works and samples of glazed artwork.
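
One way to see why these transformations fail, under the framing above, is to measure how far each one actually moves a glazed image in a model's feature space. The glazed image sits far from the original in that space (that displacement is the cloak); a transformation only "purifies" the image if it collapses that distance back toward zero. The rough sketch below runs that measurement; the chosen transforms and the feature_extractor (again a stand-in for a text-to-image model's image encoder) are assumptions for illustration, not a reproduction of any published attack evaluation.

    # Rough experiment: do common "cleanup" transforms undo a cloak?
    # (Illustrative only; feature_extractor is an assumed stand-in encoder.)
    import io
    from PIL import Image, ImageFilter
    from torchvision.transforms import functional as TF

    def jpeg(img, quality=75):
        """Round-trip the image through JPEG compression."""
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        return Image.open(buf).convert("RGB")

    # Transformations an attacker might hope will strip the perturbation.
    candidates = {
        "jpeg_q75": lambda im: jpeg(im, 75),
        "down_up_resize": lambda im: im.resize(
            (im.width // 2, im.height // 2)).resize((im.width, im.height)),
        "gaussian_blur": lambda im: im.filter(ImageFilter.GaussianBlur(radius=1)),
    }

    def feat(img, feature_extractor):
        return feature_extractor(TF.to_tensor(img).unsqueeze(0))

    def cloak_strength_after(original, glazed, feature_extractor):
        base = (feat(glazed, feature_extractor)
                - feat(original, feature_extractor)).norm()
        for name, transform in candidates.items():
            shifted = (feat(transform(glazed), feature_extractor)
                       - feat(original, feature_extractor)).norm()
            # A transform "purifies" only if this ratio collapses toward
            # zero; Glaze's claim is that it stays near 1.
            ratio = (shifted / base).item()
            print(f"{name}: {ratio:.2f} of original cloak strength")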

Risks and Limitations. Research in machine learning moves quickly, and there are inherent risks in relying on a tool like Glaze.

  1. Changes made by Glaze are more visible on art with flat colors and smooth backgrounds. The latest update (Glaze 2.0) has made great strides on this front, but we will continue to look for ways to improve.
  2. Unfortunately, Glaze is not a permanent solution against AI mimicry. Systems like Glaze face the inherent challenge of future-proofing (Radiya et al.). It is always possible that the techniques we use today will be overcome by a future algorithm, rendering previously protected art vulnerable. Glaze is thus not a panacea, but a necessary first step toward artist-centric protection tools that resist AI mimicry. Our hope is that Glaze and follow-up projects will provide protection to artists while longer-term (legal, regulatory) efforts take hold.
  3. Despite these risks, we designed Glaze to be as robust as possible, and have tested it extensively against known systems and countermeasures. We believe Glaze is the strongest available tool for artists to protect against style mimicry, and we will continue working to improve its robustness, updating it as necessary to protect against new attacks.
  4. Glaze is designed to protect against individualized mimicry, in which a model is fine-tuned on a new art style. It is less effective, or not effective at all, when an attacker tries to replicate a style already trained into base models like SDXL or SD3: styles like impressionism, Van Gogh, or popular sub-genres of anime like Genshin Impact.
  5. There are currently two known attacks against Glaze. The first is the IMPRESS paper, published at NeurIPS in late 2023, available here. The authors introduced a new method called IMPRESS that "purifies" images protected by tools like Glaze. They shared their paper and code with us. We had concerns about their methodology, and summarized our comments here. The second attack is the "noisy upscaler" attack, made public in June 2024 here. This attack showed significant results against Glaze (v1.1), Mist, and Anti-DreamBooth. Its authors also shared their findings with us. We summarized our analysis of their attack here, and used what we learned to make Glaze more resistant to it in v2.1.

Why Glaze? Because we have been doing research in adversarial machine learning, and this was an exceptional opportunity to make a strong positive impact; because, despite cries of "gatekeeping" and "elitism," the large majority of artists are independent creative people who choose art because it is their passion, and generally barely make a living doing so; and because legal and regulatory processes can take years to converge, and might come too late to prevent generative AI from destroying the human artist community.

Our goals. Our primary goals are to discover and learn new things through our research, and to make a positive impact on the world through them. I (Ben) speak for myself (though I believe the team feels the same) when I say that we are not interested in profit. There is no business model, no subscription, no hidden fees, no startup. We made Glaze free for anyone to use, but not open source, so as to raise the bar for adaptive attacks. Glaze is designed from day one to run without network access, so no data (or art) is sent back to us or anyone else. The only communication Glaze has with our servers is a periodic "ping" asking whether there are new software updates.

WebGlaze. One of the things we learned after first deploying Glaze in March 2023 was that we did not understand how artists typically work. Many work primarily on mobile devices, and few have access to powerful computers with GPUs. We received a LOT of feedback asking us to make Glaze more accessible. In August 2023, we deployed WebGlaze, a free web service that artists can use from their phone, tablet, or any device with a browser to have their art glazed on GPU servers we pay for in the Amazon AWS cloud. Like the rest of Glaze, WebGlaze is funded by research grants to ensure it remains free for artists.

If you run a Mac, an older PC, a non-NVidia GPU, or an NVidia GTX 1660/1650/1550, or you don't use a computer at all, then you should use WebGlaze. WebGlaze is, and will remain for the foreseeable future, invite-only. Any human artist who does not use GenAI tools can get a free invite. Just DM us at @TheGlazeProject on Twitter or Instagram, or email us (slower). Once you get your invite and create an account, just go to https://webglaze.cs.uchicago.edu to glaze your art. More info on WebGlaze is here.

For more detailed information on how Glaze works, we point you to other pages on this site, including:

  • Frequently Asked Questions (FAQ), on everything from potential countermeasures to future plans for updates
  • Release notes, documenting changes across versions of Glaze
  • User guide on installing, running/configuring, and uninstalling Glaze
  • Publications and Media Coverage. Read our research paper for all the technical details on Glaze, as well as major news coverage and interviews on Glaze, including the NYTimes, BBC, and major newspapers from Japan, Germany, the UK, India, and the rest of the world.

Samples. These are posted with permission from Sarah Andersen, ScarlettAndTeal, Karla Ortiz, Eva Toorenent, Jingna Zhang, and Bill Saltzstein. They represent a wide range of art styles, from black-and-white pencil-drawn cartoons, to flat-color illustrations, oils, and high-resolution photography.

[Image pairs: original and glazed versions of each artist's work]

  • Karla Ortiz: @kortizart, http://www.karlaortizart.com
  • ScarlettAndTeal: @scarlettandteal
  • Eva Toorenent: IG @evaboneva, https://www.evaboneva.com
  • Sarah Andersen: @SarahCAndersen, https://sarahcandersen.com
  • Jingna Zhang: @zemotion, https://www.zhangjingna.com, Cara.app/zemotion
  • Bill Saltzstein: https://www.billsaltphoto.com