Russian Scientists Demo AI That Can Turn Simple Text Into Stunning Images by @autoencoder



Too Long; Didn't Read

Researchers have developed a text-to-image generation model called Kandinsky that uses a novel latent diffusion model to produce images that appear natural.

Authors:

(1) Anton Razzhigaev, AIRI and Skoltech;

(2) Arseniy Shakhmatov, Sber AI;

(3) Anastasia Maltseva, Sber AI;

(4) Vladimir Arkhipkin, Sber AI;

(5) Igor Pavlov, Sber AI;

(6) Ilya Ryabov, Sber AI;

(7) Angelina Kuts, Sber AI;

(8) Alexander Panchenko, AIRI and Skoltech;

(9) Andrey Kuznetsov, AIRI and Sber AI;

(10) Denis Dimitrov, AIRI and Sber AI.

Editor's Note: This is Part 3 of 8 of a study detailing the development of Kandinsky, the first text-to-image architecture designed using a combination of image prior and latent diffusion. Read the rest below.

3 Demo System

We implemented a set of user-oriented solutions in which the Kandinsky model is embedded as the core imaging service. This was necessary because of the variety of inference regimes, some of which require specific front-end features to work properly. Overall, we implemented two main inference resources: a Telegram bot and the FusionBrain website.


Figure 2: Examples of inference regimes using Kandinsky model.


FusionBrain is a web-based image editor with features such as loading and saving images, a sliding location window, erasing tools, zooming in/out, a style selector, etc. (cf. Figure 3). For image generation, the following three options are implemented on this side:


• text-to-image generation – the user inputs a text prompt in Russian or English, selects an aspect ratio from the list (9:16, 2:3, 1:1, 16:9, 3:2), and the system generates an image;


• inpainting – using the dedicated erasing tool, the user can remove an arbitrary part of the input image and fill it in, guided by a text prompt or without any guidance;


• outpainting – the input image can be extended with a sliding window that serves as a mask for subsequent generation (if the window intersects the image area, the empty part of the window is generated, with or without text-prompt guidance).
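To make the aspect-ratio option concrete, the listed ratios can be mapped to pixel dimensions suitable for a latent diffusion model. The ~768×768 target area and the multiple-of-64 snapping below are illustrative assumptions typical of such models, not documented settings of the system:

```python
import math

# The five aspect ratios offered by the FusionBrain interface.
ASPECT_RATIOS = {"9:16": 9 / 16, "2:3": 2 / 3, "1:1": 1.0, "16:9": 16 / 9, "3:2": 3 / 2}

def dims_for_ratio(name: str, target_area: int = 768 * 768, multiple: int = 64):
    """Return (width, height) approximating the chosen ratio at a fixed pixel
    budget, snapped to multiples of 64 (assumed latent-grid requirement)."""
    ratio = ASPECT_RATIOS[name]
    width = math.sqrt(target_area * ratio)
    height = width / ratio
    snap = lambda x: max(multiple, int(round(x / multiple)) * multiple)
    return snap(width), snap(height)

for name in ASPECT_RATIOS:
    print(name, dims_for_ratio(name))
```

With these assumptions, 1:1 yields 768×768 and 16:9 yields 1024×576, keeping the pixel budget roughly constant across ratios.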


Inpainting and outpainting are the main image-editing features of the model. Architectural details of these generation types can also be found in Figure 1.
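The sliding-window masking used for outpainting can be sketched as follows. The coordinate convention and the helper name are illustrative, not taken from the paper: the mask marks which window pixels must be generated because the existing image does not cover them.

```python
import numpy as np

def outpaint_mask(img_w, img_h, win_x, win_y, win_w, win_h):
    """Binary mask over the sliding window: 1 where new content must be
    generated, 0 where the existing image already covers the window.
    (win_x, win_y) is the window's top-left corner in image coordinates."""
    mask = np.ones((win_h, win_w), dtype=np.uint8)
    # Intersection of the window with the image, in image coordinates.
    x0, y0 = max(win_x, 0), max(win_y, 0)
    x1, y1 = min(win_x + win_w, img_w), min(win_y + win_h, img_h)
    if x0 < x1 and y0 < y1:  # the window overlaps the image
        mask[y0 - win_y : y1 - win_y, x0 - win_x : x1 - win_x] = 0
    return mask

# A 512x512 window shifted 256 px to the right of a 512x512 image:
# the left half is already known, the right half is to be generated.
m = outpaint_mask(512, 512, 256, 0, 512, 512)
print(m[:, :256].sum(), m[:, 256:].sum())
```

A mask like this, together with the overlapping image content, is what an inpainting-style diffusion step consumes to fill in the empty region.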


The Telegram bot offers the following image generation features (cf. Figure 2):


• text-to-image generation;


• image and text fusion – the user inputs an image and a text prompt to create a new image guided by the prompt;


• image fusion – the user inputs a main image and another 'guiding' image, and the system generates their fusion;


• image variations – the user inputs an image, and the system generates several new images similar to it.
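Fusion features of this kind are commonly realized by interpolating the image embeddings produced by the model's image prior before decoding. The sketch below shows only that interpolation step; the use of spherical interpolation (slerp) and the embedding size are assumptions common for unit-norm CLIP-style embeddings, not details confirmed by the paper:

```python
import numpy as np

def fuse_embeddings(main_emb, guide_emb, weight=0.5):
    """Spherical linear interpolation between two image embeddings; the
    blended embedding is then fed to the diffusion decoder in place of a
    single image's embedding. Slerp (rather than a plain average) keeps
    the result on the unit sphere of normalized embeddings."""
    a = main_emb / np.linalg.norm(main_emb)
    b = guide_emb / np.linalg.norm(guide_emb)
    theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return a  # embeddings already aligned; nothing to blend
    return (np.sin((1 - weight) * theta) * a + np.sin(weight * theta) * b) / np.sin(theta)

rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=768), rng.normal(size=768)
fused = fuse_embeddings(e1, e2, weight=0.5)
print(float(np.linalg.norm(fused)))  # stays on the unit sphere
```

The `weight` parameter controls how strongly the 'guiding' image influences the result, which matches the main-image/guiding-image roles described above.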


This paper is available on arxiv under CC BY 4.0 DEED license.