Image Style Transfer And Video Transformation In EbSynth

Written by maverickstudios | Published 2021/05/28
Tech Story Tags: ml | tutorial | data-science | machine-learning-tutorials | software-development | artificial-intelligence | machine-learning

TLDR: EbSynth (https://ebsynth.com/) lets you take an input video, paint a single frame of it, and then overlay that style (style transfer) onto the entire video. EbSynth expects three folders: video (the full PNG sequence from step #1), keyframe (the stylized image from step #2) and final (an empty folder for the painted output). The clip I used was 15 seconds long, about 380 PNG files in the sequence, and it took 5-6 minutes to run on a GTX 1060 (6 GB) card with the quality set to high.

Welcome to the first part in a series of tutorials where I test out EbSynth, use ML models/tools to create keyframes, and then use those keyframes to paint a video/GIF in the same style.
Image style transfer has been around for a few years, and there are a number of research papers that demonstrate the possibilities. What if the same concept is extended to video? Is it possible to paint a single frame of a video and then apply the same style to the entire video?
EbSynth provides this function. Much of the existing content on EbSynth demonstrates how to hand-paint a keyframe, but my goal here is to simplify the process for non-artists and to automate keyframe creation with ML models so that the entire workflow is ML based.

#1 – VIDEO TO PNG SEQUENCE

The first step is to convert your source video file into a PNG sequence. There are a number of tools that can do this, so I'll leave you to choose your preferred option. I used a 15 second clip of a motorcycle ride, which generated about 380 PNG files.
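If you don't already have a preferred tool, here is a minimal sketch that shells out to ffmpeg from Python (this assumes ffmpeg is installed and on your PATH; "ride.mp4" and the frame naming pattern are placeholders for your own clip):

import os
import subprocess

# Extract every frame of the clip as a numbered PNG into the "video" folder.
os.makedirs("video", exist_ok=True)
subprocess.run(
    ["ffmpeg", "-i", "ride.mp4", "video/frame_%04d.png"],
    check=True,
)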

#2 – IMAGE STYLE TRANSFER

For this step I used an image style transfer model based on a convolutional neural network. The model takes two inputs, a style image and a content image, and applies the style of the former to the latter. It has been pre-trained on several paintings, each with its own distinctive style. You can find the model here:
I took the first image in the PNG sequence (from step #1) and applied the “rain_princess” style. It's important to remember that you can't simply point the style path at any image you happen to have and get a different output: the model checkpoint files are pre-trained on specific styles and only support style transfer for those styles. Depending on the input file size, this step can take anywhere from a few seconds to a few minutes to run. Once you have your output file, you are ready for the next step.
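As an illustration only (the exact model I used isn't linked above), here is a sketch that stylizes the first frame with TensorFlow Hub's publicly available arbitrary-image-stylization model, which takes the same two inputs. Note that, unlike the checkpoint-based model described above, this one does accept any style image; the file paths below are assumptions matching the step #1 sketch.

import os
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load_image(path):
    # Load an image as a float32 tensor of shape [1, height, width, 3] in [0, 1].
    img = np.array(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return tf.constant(img[np.newaxis, ...])

content = load_image("video/frame_0001.png")   # first frame of the PNG sequence
style = load_image("rain_princess.jpg")        # placeholder path to any style image

hub_module = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = hub_module(content, style)[0]       # stylized frame, shape [1, h, w, 3]

os.makedirs("keyframe", exist_ok=True)
out = np.clip(stylized[0].numpy() * 255, 0, 255).astype(np.uint8)
# Save under the SAME name as the source frame - EbSynth relies on this later.
Image.fromarray(out).save("keyframe/frame_0001.png")

Whichever model you use, the important part is that the stylized output is saved with the same file name as the frame it came from, since EbSynth matches keyframes to the sequence by name.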

#3 – EBSYNTH

EbSynth (https://ebsynth.com/) allows you to take an input video file, paint a frame of the video and then overlay this style (style transfer) onto the entire video. EbSynth requires the following folder structure:
  • video – contains the full PNG sequence output from #1.
  • keyframe – contains the output image from the image style transfer in #2. Ensure that the keyframe's file name matches the name of the corresponding frame in the video folder (see the sketch after this list).
  • final – an empty folder that EbSynth will use to store the output files from the painting process.
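Before running EbSynth, it can help to sanity-check this layout. A short sketch, assuming the folder names above, is:

import os

frames = set(os.listdir("video"))
keyframes = set(os.listdir("keyframe"))

# Every keyframe must share its file name with a frame in the video sequence,
# otherwise EbSynth cannot line it up with the right point in the clip.
unmatched = keyframes - frames
if unmatched:
    raise SystemExit(f"Keyframes with no matching video frame: {sorted(unmatched)}")

# EbSynth will write its painted output here.
os.makedirs("final", exist_ok=True)
print(f"{len(frames)} frames and {len(keyframes)} keyframe(s) found - layout looks good.")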
Next, assign the folders in EbSynth so that it knows where to look for the PNG sequence (video folder), the keyframes used to paint the sequence (keyframe folder) and where to store the output sequence (final folder).
Once these have been set up, you are ready to run EbSynth. The clip I used was 15 seconds long, with 380 PNG files in the sequence, and it took 5-6 minutes to run on a GTX 1060 (6 GB) card with the quality set to high.

#4 – PNG SEQUENCE TO VIDEO

For the final step you will need to take the files from the final folder (the EbSynth output) and convert them into a video. There are a number of options for this; I went with Windows Photos and was able to import the PNG sequence and create the video.
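If you prefer a scriptable route, here is a sketch using ffmpeg again (assuming ffmpeg is installed; the frame-name pattern, frame rate and output name are placeholders you will need to match to your own output):

import subprocess

# Re-assemble the painted PNG sequence into an mp4.
subprocess.run(
    [
        "ffmpeg",
        "-framerate", "25",            # match the frame rate of your source clip
        "-i", "final/frame_%04d.png",  # adjust to the naming EbSynth actually produced
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",         # widest player compatibility
        "styled_output.mp4",
    ],
    check=True,
)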

#5 – THE RESULTS

As you can see, it produces a very interesting result. As the video continues, the colours bleed and the accuracy of the output drops. To improve this, I would recommend taking additional sample keyframes from throughout the video, applying the style transfer to each, and adding them to the keyframe folder. Just make sure that when you add these stylized frames, each keeps the same file name as the frame it was generated from.
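As a sketch of how that could be automated with the same TF Hub model from the step #2 sketch (the sampling interval and style image path are assumptions you should tune to your clip):

import os
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load_image(path):
    img = np.array(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return tf.constant(img[np.newaxis, ...])

hub_module = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
style = load_image("rain_princess.jpg")   # placeholder style image

os.makedirs("keyframe", exist_ok=True)
frames = sorted(f for f in os.listdir("video") if f.endswith(".png"))
for name in frames[::50]:                 # every 50th frame (~8 keyframes for a 380-frame clip)
    stylized = hub_module(load_image(os.path.join("video", name)), style)[0]
    out = np.clip(stylized[0].numpy() * 255, 0, 255).astype(np.uint8)
    # Keep the original frame name so EbSynth can match each keyframe to the sequence.
    Image.fromarray(out).save(os.path.join("keyframe", name))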

#6 – TAKING THIS FURTHER

I'm aiming for a better workflow and improved image resolution. Stay tuned for part 2, where I will aim to address both of these issues.
If you thought this was useful, or have a better workflow, let me know.
