
Mist v2 Fails to Defend Against Robust Mimicry Methods Like Noisy Upscaling

by Torts, December 13th, 2024

Too Long; Didn't Read

Mist v2, like Glaze 2.0, introduces visible perturbations but offers no meaningful improvement against robust mimicry, with Noisy Upscaling still easily bypassing protections.

Abstract and 1. Introduction

  2. Background and Related Work

  3. Threat Model

  4. Robust Style Mimicry

  5. Experimental Setup

  6. Results

    6.1 Main Findings: All Protections are Easily Circumvented

    6.2 Analysis

  7. Discussion and Broader Impact, Acknowledgements, and References

A. Detailed Art Examples

B. Robust Mimicry Generations

C. Detailed Results

D. Differences with Glaze Finetuning

E. Findings on Glaze 2.0

F. Findings on Mist v2

G. Methods for Style Mimicry

H. Existing Style Mimicry Protections

I. Robust Mimicry Methods

J. Experimental Setup

K. User Study

L. Compute Resources

F Findings on Mist v2

After responsibly disclosing our work to the defense developers, the authors of Mist brought to our attention the recent release of their latest version, Mist v2, with improved resilience (Zheng et al., 2023). As we did with Glaze 2.0 (see Section E), we reproduced some of our experiments with the latest protection to verify that robust mimicry still succeeds. The original Mist v2 implementation still uses the outdated Stable Diffusion 1.5; we switched to SD 2.1 to match our previous experiments[6].
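
Footnote [6] notes that both base models share the image encoder that the protections are optimized against, which is what makes this swap sound. As a minimal sketch of that check (not the paper's code; the model IDs, image path, and comparison below are illustrative assumptions), one can load each model's VAE with diffusers and compare the latents produced for a protected image:

```python
# Minimal sketch (not the paper's code): check whether SD 1.5 and SD 2.1 expose
# the same VAE image encoder, the component that Mist's perturbations are
# optimized against (footnote [6]). Model IDs and the image path are
# illustrative placeholders.
import numpy as np
import torch
from diffusers import AutoencoderKL
from diffusers.utils import load_image

vae_sd15 = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
vae_sd21 = AutoencoderKL.from_pretrained("stabilityai/stable-diffusion-2-1", subfolder="vae")

# Encode one protected image with both encoders and compare the latents.
img = load_image("protected_artwork.png").convert("RGB").resize((512, 512))  # placeholder path
x = torch.from_numpy(np.array(img)).float().permute(2, 0, 1)[None] / 127.5 - 1.0

with torch.no_grad():
    z15 = vae_sd15.encode(x).latent_dist.mean
    z21 = vae_sd21.encode(x).latent_dist.mean

print("max latent difference:", (z15 - z21).abs().max().item())
```

If the latents match, perturbations optimized against one base model's encoder target the other's encoder identically.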


Figure 22: Original artwork from @nulevoy and the resulting images after applying Noisy Upscaling to artwork protected with Glaze v2.0. See protected images in Figure 20.


Our findings, as with Glaze 2.0, show that the improved protections are still not effective against low-effort robust mimicry. More specifically, the latest version of Mist:


  1. introduces visible perturbations over the images. See Figure 23.


  2. does not improve protections against robust mimicry. See Figure 24.


  3. creates protections that are easily removable with Noisy Upscaling. See Figure 25 and the sketch after this list.
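
Noisy Upscaling is only referenced above by name. As a rough sketch of the underlying noise-then-upscale idea (the noise scale, upscaler choice, and file paths below are assumptions for illustration, not the paper's exact configuration), one can add Gaussian pixel noise to a protected image and then run an off-the-shelf diffusion upscaler over it:

```python
# Rough sketch of the noise-then-upscale idea behind Noisy Upscaling.
# The noise scale, resize resolution, upscaler choice, and file paths are
# assumptions for illustration, not the paper's exact configuration.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

# 1) Add Gaussian pixel noise to the protected image to drown out the
#    adversarial perturbation.
img = load_image("mist_protected_artwork.png")                    # placeholder path
arr = np.array(img).astype(np.float32)
noisy = np.clip(arr + np.random.normal(0.0, 25.0, arr.shape), 0, 255).astype(np.uint8)
noisy_img = Image.fromarray(noisy).resize((256, 256))             # small input for the 4x upscaler

# 2) Upscale with an off-the-shelf diffusion upscaler to recover clean detail.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
purified = pipe(prompt="an artwork", image=noisy_img).images[0]
purified.save("purified_artwork.png")
```

The intuition is that the added noise masks the small adversarial perturbation, and the upscaler then reconstructs clean high-frequency detail in its place.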


Figure 23: Comparison of perturbations introduced by Mist v1 and v2 on artwork from @nulevoy.


Figure 24: Comparison of robust style mimicry (Noisy Upscaling) on artwork from @nulevoy protected with both versions of Mist. Images in Figure 6 serve as a reference for the artistic style.


Figure 25: Original artwork from @nulevoy and the resulting images after applying Noisy Upscaling to artwork protected with Mist v2. See protected images in Figure 23.


Authors:

(1) Robert Honig, ETH Zurich ([email protected]);

(2) Javier Rando, ETH Zurich ([email protected]);

(3) Nicholas Carlini, Google DeepMind;

(4) Florian Tramer, ETH Zurich ([email protected]).


This paper is available on arXiv under a CC BY 4.0 license.

[6] Both models share the same encoder for which protections are optimized.