It has been a while since I wrote my first tutorial about running deep learning experiments on Google’s GPU-enabled Jupyter notebook interface, Colab. Since then, several of my posts have walked through running Keras, TensorFlow, or Caffe on Colab with GPU acceleration.
One framework not pre-installed on Colab is PyTorch. Recently, I was checking out a video-to-video synthesis model that requires Linux to run, plus there are gigabytes of data and pre-trained models to download before I can take the shiny model for a spin. I wondered: why not give Colab a try, leveraging its awesome download speed and freely available GPU?
Let’s start by installing CUDA on Colab.
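Before anything else, make sure the notebook is using a GPU runtime (Runtime → Change runtime type → Hardware accelerator → GPU). A quick way to confirm, not shown in the original cells, is to query the driver; in Colab, prefix shell commands with `!`:

```shell
# Lists the attached GPU and driver version; fails if no GPU runtime is active.
nvidia-smi
```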
We will install CUDA 9.2. Why not other CUDA versions? Here are three reasons.
Back to installing: the Nvidia developer site will ask for the Ubuntu version on which you want to run CUDA. To find out, run this cell below in a Colab notebook.
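The cell itself wasn't preserved here; one command that reports this (its VERSION line matches the output below) is reading /etc/os-release. In Colab, prefix it with `!`:

```shell
# Print the OS release info; the VERSION line tells you which Ubuntu Colab runs.
cat /etc/os-release
```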
It returns the information you want.
VERSION="17.10 (Artful Aardvark)"
After that, navigate through the target platform selections, choose the installer type “deb (local)”, then right-click the “Download (1.2 GB)” button and copy the link address.
Back in the Colab notebook, paste the link after a wget command to download the file. A 1.2 GB file takes only about 10 seconds to download on Colab, which means there is no coffee break -_-.
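As a sketch (the URL below is a placeholder; paste the actual link you copied from the Nvidia download page):

```shell
# Download the CUDA deb(local) installer.
# Replace the placeholder URL with the link copied from the Nvidia site.
CUDA_DEB_URL="https://developer.nvidia.com/..."
wget "$CUDA_DEB_URL" -O cuda-repo.deb
```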
Run the following cell to complete the CUDA installation.
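That cell isn't reproduced here; assuming the deb (local) package for CUDA 9.2 on Ubuntu 17.10, it would look roughly like this (the package filename and key path follow Nvidia's standard deb install instructions and may differ for your download):

```shell
# Register the local CUDA repository from the downloaded package.
dpkg -i cuda-repo-ubuntu1710-9-2-local_9.2.148-1_amd64.deb
# Add the repository's signing key (the path is printed by the dpkg step above).
apt-key add /var/cuda-repo-9-2-local/7fa2af80.pub
# Refresh the package lists and install the CUDA toolkit.
apt-get update
apt-get install -y cuda
```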
If you see these lines at the end of the output, the installation was successful.
Setting up cuda (9.2.148-1) ...
Processing triggers for libc-bin (2.26-0ubuntu2.1) ...
Let’s continue with PyTorch.
It is very easy: go to pytorch.org, where a selector lets you pick how you want to install PyTorch. In our case, choose Linux, pip, and CUDA 9.2, and check your Python version by running python --version in a shell.
It will give you the line below to run, after which the installation is done!
pip3 install torch torchvision
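Once the install finishes, a quick sanity check (my own addition, not from the original cells) confirms PyTorch can see the GPU:

```python
import torch

# Print the installed PyTorch version and whether the CUDA backend is usable.
print(torch.__version__)
print(torch.cuda.is_available())  # True on a Colab GPU runtime
```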
Out of curiosity about how well PyTorch performs with the GPU enabled on Colab, let’s try the recently published Video-to-Video Synthesis demo, a PyTorch implementation of NVIDIA’s method for high-resolution, photorealistic video-to-video translation. The video demo that turns poses into a dancing body looks enticing.
Besides, the demo depends on custom-built CUDA extensions, which gives us a chance to test the installed CUDA toolkit.
The cell below does all the work, from getting the code to running the demo with the pre-trained model.
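The cell isn't reproduced here; a sketch along these lines assumes the NVIDIA/vid2vid repository layout at the time of writing (the download script paths and test flags are taken from its README and may have changed since, so check the repo before running):

```shell
# Clone the demo code (in Colab, prefix shell commands with "!").
git clone https://github.com/NVIDIA/vid2vid
cd vid2vid

# Fetch the example data and the pre-trained Cityscapes model.
# Script names are assumptions based on the repo's README.
python scripts/download_datasets.py
python scripts/street/download_models.py

# Run inference with the pre-trained model; flags follow the README example.
python test.py --name label2city_1024_g1 --label_nc 35 \
    --loadSize 1024 --n_scales_spatial 3 --use_instance --fg --use_single_G
```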
The generated frames go to the directory results/label2city_1024_g1/test_latest/images, and you can display one by running the cell below.
That wraps up this tutorial.
This short post showed you how to get GPU- and CUDA-backed PyTorch running on Colab quickly and for free. Unfortunately, the authors of vid2vid haven’t posted a testable edge-to-face or pose-to-dance demo yet, which I am anxiously waiting for. So far, it only serves to verify our installation of PyTorch on Colab. Feel free to connect with me on social media, where I will keep you posted on my future projects and other practical deep learning applications.
Here are some of my previous Colab tutorials.
Originally published at www.dlology.com.