Exciting news in the world of AI and image editing: Drag Your GAN, a groundbreaking piece of research, was just introduced by Pan et al.!
This new approach lets you alter images simply by dragging points from A to B, changing the way we interact with image editing. It isn't just editing: it actually creates completely new images, letting you change object positions, subject poses, and more.
The AI realistically adapts the entire image, modifying the object's position, pose, shape, expression, and other elements of the frame.
🐶🌄 Edit expressions of dogs, make them sit, adjust human poses, or even alter landscapes seamlessly. Drag Your GAN offers an innovative and interactive way to experiment with image editing.
How does it work? Drag Your GAN leverages StyleGAN2, a state-of-the-art GAN architecture by NVIDIA. Rather than touching pixels directly, it works on the latent code the generator turns into the image: at each step it computes a loss that pulls the features at your handle points toward the target points, updates the latent code, then re-tracks where the handle points landed, and repeats until the points reach their destination.
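For the curious, here is a tiny sketch of that loop in PyTorch. To be clear, this is not the authors' code: the FeatureGenerator, sample_feature, and drag_edit names, the window size, step size, learning rate, and step count are all illustrative assumptions, and the toy generator stands in for a real pretrained StyleGAN2. It only shows the shape of the idea: one motion-supervision loss step on the latent code, followed by a nearest-neighbour point-tracking step.

```python
# A minimal sketch of a DragGAN-style editing loop (not the official code).
# `FeatureGenerator` is a toy stand-in for a pretrained StyleGAN2 whose
# intermediate feature map is what the real method supervises.
import torch
import torch.nn.functional as nnf

class FeatureGenerator(torch.nn.Module):
    """Toy stand-in: maps a latent code to a (1, C, H, W) feature map."""
    def __init__(self, dim=64, channels=16, res=32):
        super().__init__()
        self.fc = torch.nn.Linear(dim, channels * res * res)
        self.channels, self.res = channels, res

    def forward(self, w):
        return self.fc(w).view(1, self.channels, self.res, self.res)

def sample_feature(feat, points):
    """Bilinearly sample feature vectors at (x, y) pixel coordinates."""
    h, w = feat.shape[-2:]
    grid = points.clone().float()
    grid[:, 0] = grid[:, 0] / (w - 1) * 2 - 1   # x -> [-1, 1]
    grid[:, 1] = grid[:, 1] / (h - 1) * 2 - 1   # y -> [-1, 1]
    grid = grid.view(1, -1, 1, 2)
    return nnf.grid_sample(feat, grid, align_corners=True)[0, :, :, 0].T  # (N, C)

def drag_edit(gen, w, handles, targets, steps=50, lr=2e-3, step_size=3.0):
    """Move handle points toward target points by optimizing the latent code."""
    w = w.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    handles = handles.clone().float()
    with torch.no_grad():
        ref = sample_feature(gen(w), handles)  # features used for point tracking

    for _ in range(steps):
        feat = gen(w)
        d = targets.float() - handles
        d = d / (d.norm(dim=1, keepdim=True) + 1e-8)  # unit direction to targets
        # Motion supervision: features one small step toward the target should
        # match the (detached) features at the current handle positions.
        loss = (sample_feature(feat, handles + step_size * d)
                - sample_feature(feat, handles).detach()).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Point tracking: nearest-neighbour search in a small window for the
        # feature each handle had at the start.
        with torch.no_grad():
            feat = gen(w)
            for i in range(len(handles)):
                best_dist, best_pt = None, handles[i]
                for dx in range(-2, 3):
                    for dy in range(-2, 3):
                        cand = handles[i] + torch.tensor([dx, dy], dtype=torch.float)
                        dist = (sample_feature(feat, cand[None]) - ref[i]).norm()
                        if best_dist is None or dist < best_dist:
                            best_dist, best_pt = dist, cand
                handles[i] = best_pt
    return w.detach(), handles

# Toy usage: drag a single handle point 8 pixels to the right.
gen = FeatureGenerator()
w0 = torch.randn(1, 64)
w_edited, final_handles = drag_edit(
    gen, w0,
    handles=torch.tensor([[10.0, 12.0]]),
    targets=torch.tensor([[18.0, 12.0]]),
)
```

In the paper, the same loop runs on StyleGAN2's own feature maps and can also take a user-drawn mask so that only the selected region of the image is allowed to change.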
Even though the results are fantastic, as you will see below, it's essential to note that Drag Your GAN has some limitations. For now it can only edit generated images, i.e. images that fall within the distribution the GAN was trained on. Another limitation is that point tracking relies on the appearance (colors and contrast) around the selected points, so you cannot drag just anything: if you pick a point on a red car and drag it to another spot that is still on the red car, the model may not register that the point moved at all.
Can't wait to try it out?
The authors mention that the code should be available in June.
Tune in to the video to learn more about this new style of image manipulation with Drag Your GAN!