How Accurate Is AI at Mimicking Art Styles? Here's What Our Study Found

by Torts (@torts), December 13th, 2024

Too Long; Didn't Read

This section presents the results of a user study on style mimicry, focusing on quality vs. style fit, and providing artist-specific success rates and inter-annotator agreement.

Abstract and 1. Introduction

2. Background and Related Work

3. Threat Model

4. Robust Style Mimicry

5. Experimental Setup

6. Results

  6.1 Main Findings: All Protections are Easily Circumvented

  6.2 Analysis

7. Discussion and Broader Impact, Acknowledgements, and References

A. Detailed Art Examples

B. Robust Mimicry Generations

C. Detailed Results

D. Differences with Glaze Finetuning

E. Findings on Glaze 2.0

F. Findings on Mist v2

G. Methods for Style Mimicry

H. Existing Style Mimicry Protections

I. Robust Mimicry Methods

J. Experimental Setup

K. User Study

L. Compute Resources

C Detailed Results

C.1 Mimicry Quality Versus Style

This section includes the detailed results from our user study. As mentioned in Section 5, we ask users to assess quality and stylistic fit separately. Figures 16 and 17 show the results for each of these two evaluations separately (the results in the main body represent the average of the two). Finally, Table 1 includes numerical results for each scenario.
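To make the averaging concrete, here is a minimal Python sketch, not the authors' code: it computes a success rate from hypothetical pairwise votes for each question and then averages the two, as done for the numbers in the main body. The vote lists and the success_rate helper are illustrative assumptions.

```python
# Minimal sketch (hypothetical data, not the authors' pipeline).
# One boolean per pairwise comparison: True means the annotator preferred
# the robust-mimicry image over the matching clean-mimicry image.

def success_rate(votes):
    """Percentage of votes preferring the robust-mimicry image."""
    return 100.0 * sum(votes) / len(votes)

quality_votes = [True, False, True, True, False]  # quality question (Figure 16)
style_votes = [False, True, True, True, False]    # style question (Figure 17)

quality = success_rate(quality_votes)
style = success_rate(style_votes)
overall = (quality + style) / 2  # averaged figure reported in the main body
print(f"quality {quality:.0f}%, style {style:.0f}%, overall {overall:.0f}%")
```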


Figure 16: Quality evaluation. User preference ratings of all style mimicry scenarios but only for the quality question: “Based on noise, artifacts, detail, prompt fit, and your impression, which image has higher quality?”.


Figure 17: Style evaluation. User preference ratings of all style mimicry scenarios but only for the style question: “Overall, ignoring quality, which image better fits the style of the style samples?”.


Table 1: Success rates averaged across artists for all style mimicry scenarios. Higher percentages indicate more successful mimicry, and 50% would indicate perfect mimicry.

C.2 Results Broken Down per Artist

We next present the results obtained for each artist in each scenario. Table 2 shows the success rate for each method against each protection for all artists, and Table 3 includes the detailed success rates; a sketch of this per-artist aggregation follows the table.



Table 3: User preference ratings of all style mimicry scenarios S ∈ M for each artist A ∈ A, by name. Each cell states the percentage of votes that prefer an image generated under the corresponding scenario S and artist A over a matching image generated under clean style mimicry. Higher percentages indicate more successful mimicry and thus weaker protections.


(a) Quality


(b) Style
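The per-artist numbers in Tables 2 and 3 can be reproduced, in spirit, by grouping the same pairwise votes by scenario and artist before computing the preference percentage. The sketch below is our illustration under assumed field names, not the authors' code.

```python
# Hedged sketch: per-scenario, per-artist success rates as in Tables 2 and 3.
# The vote records and field names below are hypothetical.
from collections import defaultdict

votes = [
    {"scenario": "Noisy Upscaling vs. Glaze", "artist": "A1", "prefers_mimicry": True},
    {"scenario": "Noisy Upscaling vs. Glaze", "artist": "A1", "prefers_mimicry": False},
    {"scenario": "Noisy Upscaling vs. Mist", "artist": "A2", "prefers_mimicry": True},
]

counts = defaultdict(lambda: [0, 0])  # (scenario, artist) -> [preferred, total]
for v in votes:
    key = (v["scenario"], v["artist"])
    counts[key][0] += int(v["prefers_mimicry"])
    counts[key][1] += 1

for (scenario, artist), (preferred, total) in sorted(counts.items()):
    print(f"{scenario} | {artist}: {100.0 * preferred / total:.1f}%")
```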

C.3 Inter-Annotator Agreement

Figure 18: Inter-annotator agreement for generations from robust mimicry with Noisy Upscaling and generations from models finetuned on protected art directly (naive mimicry). We plot the percentage of comparisons for which the preferred option was selected by 3, 4 or 5 annotators, respectively. The graph shows a higher consensus for naive mimicry, since the differences are clearer, and more variance for robust mimicry.
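As an illustration of this agreement measure, the following sketch (made-up votes, with five annotators per comparison as implied by the figure) counts how often the preferred option was chosen by exactly 3, 4, or 5 annotators.

```python
# Illustrative sketch of the Figure 18 metric; the vote data is made up.
from collections import Counter

# Each inner list: five annotator votes for one comparison
# (True = preferred the robust-mimicry image).
comparisons = [
    [True, True, True, False, False],
    [True, True, True, True, False],
    [False, False, False, False, False],
    [True, True, False, False, False],
]

agreement = Counter()
for votes in comparisons:
    majority = max(sum(votes), len(votes) - sum(votes))  # size of the majority: 3, 4, or 5
    agreement[majority] += 1

for k in (3, 4, 5):
    share = 100.0 * agreement[k] / len(comparisons)
    print(f"preferred option chosen by {k}/5 annotators: {share:.0f}% of comparisons")
```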


Authors:

(1) Robert Hönig, ETH Zurich ([email protected]);

(2) Javier Rando, ETH Zurich ([email protected]);

(3) Nicholas Carlini, Google DeepMind;

(4) Florian Tramèr, ETH Zurich ([email protected]).


This paper is available on arXiv under a CC BY 4.0 license.