
The Science Behind AI’s Ability to Recreate Art Styles

by Torts, December 14th, 2024

Too Long; Didn't Read

Style mimicry in generative art can be achieved through methods like prompting, Img2Img, textual inversion, and finetuning, with finetuning being the most effective.

Abstract and 1. Introduction

  2. Background and Related Work

  3. Threat Model

  4. Robust Style Mimicry

  5. Experimental Setup

  6. Results

    6.1 Main Findings: All Protections are Easily Circumvented

    6.2 Analysis

  7. Discussion and Broader Impact, Acknowledgements, and References

A. Detailed Art Examples

B. Robust Mimicry Generations

C. Detailed Results

D. Differences with Glaze Finetuning

E. Findings on Glaze 2.0

F. Findings on Mist v2

G. Methods for Style Mimicry

H. Existing Style Mimicry Protections

I. Robust Mimicry Methods

J. Experimental Setup

K. User Study

L. Compute Resources

G Methods for Style Mimicry

This section summarizes the existing methods that a style forger can use to perform style mimicry. Our work considers only finetuning, since it is reported to be the most effective (Shan et al., 2023a).

G.1 Prompting

Well-known artistic styles contained in the training data (e.g. Van Gogh) can be mimicked by prompting a text-to-image model with a description of the style or the name of the artist. For example, a prompt can be augmented with "painted in a cubist style" or "painted by Van Gogh" to mimic those styles, respectively. Prompting is easy to apply and does not require changes to the model. However, it fails to mimic styles that are not sufficiently represented in the model's training data, which are often those of the most vulnerable artists.
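
As a minimal illustration (not from the paper), augmenting a prompt with a style descriptor takes only a few lines with the Hugging Face diffusers library; the checkpoint name and prompts below are illustrative assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load an off-the-shelf text-to-image model (checkpoint name is illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Augment the base prompt with a style descriptor or artist name.
base_prompt = "an astronaut riding a horse"
image = pipe(base_prompt + ", painted by Van Gogh").images[0]
image.save("astronaut_van_gogh.png")
```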

G.2 Img2Img

Img2Img creates an updated version of an image with guidance from a prompt. To do so, Img2Img runs t timesteps of the forward diffusion process on an image x to obtain the diffused image x_t. Then, Img2Img uses the model, guided by a prompt P, to reverse the diffusion process into the output image variation x_P. Analogous to prompting, a prompt suffices to transfer a well-known style, but Img2Img also fails for unknown styles.
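
A sketch of this workflow with diffusers' Img2Img pipeline is shown below; the checkpoint, file names, and strength value are assumptions, with the strength parameter standing in for the fraction of forward-diffusion timesteps t:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load the Img2Img variant of the pipeline (checkpoint name is illustrative).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# x: the input image. strength controls how far the image is diffused
# before the prompt-guided reverse process produces the variation x_P.
x = Image.open("input.png").convert("RGB").resize((512, 512))
x_P = pipe(
    prompt="the same scene, painted by Van Gogh",  # prompt P
    image=x,
    strength=0.6,  # higher = more diffusion, stronger stylistic change
).images[0]
x_P.save("variation.png")
```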

G.3 Textual Inversion

Textual inversion (Gal et al., 2022) optimizes the embeddings of n new tokens t = [t_1, ..., t_n] that are appended to image prompts P so that generations closely mimic the style of a given set of images. The token embeddings are optimized via gradient descent on the model's training loss so that P + t generates images that mimic the target style. Textual inversion requires white-box access to the target model, but enables the mimicry of unknown styles.
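
The core optimization loop might look as follows. This is a condensed sketch, not the paper's implementation: the checkpoint, placeholder token, hyperparameters, and the random tensor standing in for the target-style images are all assumptions.

```python
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
tokenizer, text_encoder = pipe.tokenizer, pipe.text_encoder
unet, vae, scheduler = pipe.unet, pipe.vae, pipe.scheduler

# Register one new token t_1 and make its embedding the only trainable weight.
tokenizer.add_tokens(["<target-style>"])
text_encoder.resize_token_embeddings(len(tokenizer))
new_id = tokenizer.convert_tokens_to_ids("<target-style>")
emb = text_encoder.get_input_embeddings()
for p in [*unet.parameters(), *vae.parameters()]:
    p.requires_grad_(False)
optimizer = torch.optim.AdamW([emb.weight], lr=5e-4)

batch = torch.randn(1, 3, 512, 512)  # stand-in for images in the target style

for step in range(1000):
    # Encode images to latents and add noise at a random timestep.
    latents = vae.encode(batch).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (latents.shape[0],))
    noisy = scheduler.add_noise(latents, noise, t)

    # Condition on a prompt P that includes the new token.
    ids = tokenizer(["a painting in the style of <target-style>"],
                    padding="max_length", truncation=True,
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids
    pred = unet(noisy, t, encoder_hidden_states=text_encoder(ids)[0]).sample

    # Standard diffusion training loss; zero every gradient except the
    # new token's so only its embedding is updated.
    loss = F.mse_loss(pred, noise)
    loss.backward()
    emb.weight.grad[torch.arange(len(tokenizer)) != new_id] = 0
    optimizer.step()
    optimizer.zero_grad()
```

After training, the forger simply includes the new token in any prompt; unlike finetuning, the model weights themselves remain untouched.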

G.4 Finetuning

Finetuning updates the weights of a pretrained text-to-image model to introduce a new functionality. In this case, finetuning allows a forger to "teach" the generative model an unknown style using a set of images in the target style and their captions (e.g. "an astronaut riding a horse"). First, all captions C_x are augmented with some special word, such as the name of the artist, to create prompts P_x = C_x + "by w*". Then, the model weights are updated to minimize the reconstruction loss of the given images conditioned on the augmented prompts. At inference time, the forger can append "by w*" to any prompt to obtain art in the target style.
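
For comparison with textual inversion, a condensed DreamBooth-style finetuning sketch is shown below; the checkpoint, the special word "w*", the hyperparameters, and the stand-in data are all assumptions, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
tokenizer, text_encoder = pipe.tokenizer, pipe.text_encoder
unet, vae, scheduler = pipe.unet, pipe.vae, pipe.scheduler

optimizer = torch.optim.AdamW(unet.parameters(), lr=2e-6)
unet.train()

batch = torch.randn(1, 3, 512, 512)        # stand-in for target-style images
captions = ["an astronaut riding a horse"]  # stand-in captions C_x

for step in range(800):
    latents = vae.encode(batch).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (latents.shape[0],))
    noisy = scheduler.add_noise(latents, noise, t)

    # Augment each caption with the special word: P_x = C_x + "by w*".
    prompts = [c + " by w*" for c in captions]
    ids = tokenizer(prompts, padding="max_length", truncation=True,
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids
    pred = unet(noisy, t, encoder_hidden_states=text_encoder(ids)[0]).sample

    # Reconstruction (denoising) loss; here the model weights themselves
    # are updated, unlike in textual inversion.
    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# At inference, appending "by w*" to any prompt yields the mimicked style.
```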


The authors of Glaze identify this finetuning setup as the strongest style mimicry method (Shan et al., 2023a). We validate the success of our style mimicry with a user study, detailed in Appendix K.1.


Authors:

(1) Robert Hönig, ETH Zurich ([email protected]);

(2) Javier Rando, ETH Zurich ([email protected]);

(3) Nicholas Carlini, Google DeepMind;

(4) Florian Tramèr, ETH Zurich ([email protected]).


This paper is available on arxiv under CC BY 4.0 license.