Nvidia this week unveiled its newest AI breakthrough in the form of a mind-blowing computer vision technique that can ‘inpaint’ parts of an image that have been deleted or modified. If you’re thinking Photoshop already does this, think again. This is something you have to see to believe.
Nvidia’s researchers explain how their novel deep learning method for inpainting images differs from existing techniques in a whitepaper published earlier this week:
Previous deep learning approaches have focused on rectangular regions located around the center of the image, and often rely on expensive post-processing. The goal of this work is to propose a model for image inpainting that operates robustly on irregular hole patterns, and produces semantically meaningful predictions that incorporate smoothly with the rest of the image without the need for any additional post-processing or blending operation.
As you can see in the above video, Nvidia’s technology doesn’t suffer from the same problems as current market techniques for filling in missing spaces in images. There’s no grainy degradation and there are no blurred edges that require fiddling with different brushes and levels of smoothness or opacity.
According to the researchers, they’re the first to successfully train a neural network to process irregularly shaped holes in images.
Nvidia’s AI does, almost instantly, what could take even a seasoned graphic designer minutes or even hours to accomplish. Under the hood, it uses a deep neural network built from “partial convolutions”: each convolution is conditioned on a binary mask that marks which pixels are valid, its output is renormalized by how much of the window actually held valid data, and the mask itself is updated after every layer — so the hole shrinks as the signal passes through the network, until the prediction blends seamlessly with the rest of the image.
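To make the idea concrete, here is a minimal, hedged sketch of a single partial-convolution step in NumPy. The function name and structure are illustrative only (this is not Nvidia’s code): the convolution sums only over valid pixels, rescales the result by the fraction of the window that was valid, and marks the output as valid whenever the window touched any real data.

```python
import numpy as np

def partial_conv2d(image, mask, kernel):
    """One partial convolution ('valid' padding, single channel) — a sketch.

    image  : 2-D float array of pixel values (hole pixels can hold anything).
    mask   : 2-D binary array, 1 = valid pixel, 0 = hole.
    kernel : 2-D float array of filter weights.
    Returns (output, updated_mask).
    """
    kh, kw = kernel.shape
    h, w = image.shape
    oh, ow = h - kh + 1, w - kw + 1
    out = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    window_size = kh * kw
    for i in range(oh):
        for j in range(ow):
            win = image[i:i + kh, j:j + kw]
            mwin = mask[i:i + kh, j:j + kw]
            valid = mwin.sum()
            if valid > 0:
                # Convolve only the valid pixels, then rescale so the
                # response is comparable no matter how many pixels the
                # window actually saw.
                out[i, j] = np.sum(win * mwin * kernel) * (window_size / valid)
                # The window touched real data, so this output is now valid:
                # this is the mask-update step that shrinks holes layer by layer.
                new_mask[i, j] = 1.0
            # else: window lies entirely inside a hole — output stays 0,
            # mask stays 0, and a deeper layer will have to fill it.
    return out, new_mask
```

Stacking such layers is what lets the network handle irregular holes: each pass converts the rim of a hole into valid pixels, so even a large, oddly shaped gap eventually disappears from the mask.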
This isn’t the first time we’ve had our minds blown by the amazing results of one of Nvidia’s AI projects. Last year we were awestruck by the company’s ability to create realistic photographs of people who don’t exist, and even more impressed with its AI that can change the weather or time of day in a video.
Reality is becoming more subjective with each breakthrough. There’s a pretty good chance that, within a couple of years, the only way to tell the difference between AI-generated imagery and ‘real’ pictures will be checking the digital signature or having individual pixels evaluated by a computer.
This story was written by Tristan Greene, and was originally published on The Next Web.