If you've read some of my prior articles you might've thought I'd never write this one, huh? :) Well, here goes. A common question we get is "Can you use unikernels with K8S?" The answer is yes; however, there are caveats. Namely, unikernels come packaged as virtual machines, and in many cases k8s is itself provisioned on the public cloud on top of virtual machines. Also, you should be aware that provisioning unikernels under k8s incurs security risks that you would otherwise not need to deal with. These are greatly diminished because the guests are unikernels, not Linux guests, but still.

Now, if you have your own servers or you are running k8s on bare metal, this is how you'd go about running Nanos unikernels under k8s. For this article you need a real physical machine and OPS. While you could use nested virtualization, I wouldn't, because you are going to take a pretty significant performance hit. Google Cloud has this feature on some of their instances, and if you are on Amazon you might be able to perform this example on the "metal" instances (I haven't checked), although keep in mind both of these options will not be cheap compared to simply spinning up a t2 nano or micro instance, which you can do easily with unikernels.

We are going to run a Go unikernel for this example but you can use any OPS example to follow along. Here we have a simple Go webserver that sits on port 8083:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Welcome to my website!")
	})

	fs := http.FileServer(http.Dir("static/"))
	http.Handle("/static/", http.StripPrefix("/static/", fs))

	http.ListenAndServe(":8083", nil)
}
```

Ok, looks good. We can quickly build the image and ensure everything is working alright like so. We are using the 'nightly' build option here:

```shell
ops run -n -p 8083 goweb
```

`ops build` works here as well, but `run` will boot it for you so you can ensure it works locally first.
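With the guest up, a quick smoke test from another terminal is worth doing before going anywhere near k8s. A minimal sketch, assuming OPS is forwarding guest port 8083 to localhost as requested above:

```shell
# Assumes the unikernel from `ops run -n -p 8083 goweb` is still running
# and that OPS is forwarding guest port 8083 to localhost.
curl -i http://localhost:8083/ || echo "server not reachable; is the guest still running?"
```

If the guest is up you should get a 200 back with the welcome message from the handler.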
Now we'll need to put it into a format for k8s to use. First, we compress it with xz (`sudo apt-get install xz-utils`):

```shell
cp ~/.ops/images/goweb.img .
xz goweb.img
```

From there we need to put it somewhere k8s can import it from. I tossed it into a cloud bucket and, to keep this article as simple as possible, have left it open. Obviously, you don't want to do this in a real-life production scenario.

Now let's install kubectl:

```shell
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client
```

Now let's install minikube. I'm using minikube here to hopefully minimize the number of steps you need to do from a fresh install, but feel free to use whatever you want:

```shell
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
minikube start --vm-driver=kvm2
```

Then install the kvm2 driver. For this box I needed to install the libvirt suite of tooling:

```shell
sudo apt-get install libvirt-daemon-system libvirt-clients bridge-utils
```

Libvirt is this rather old and nasty library used to interact with KVM, although it has a ton of integrations and there aren't that many alternatives.

If you are having trouble after this step you can run this quick validation check to ensure everything is set up:

```shell
virt-host-validate
```

Also, ensure you are in the right group to interact with KVM:

```shell
groups
```

After getting all of this installed you might find the need to reset your session (the quickest way is to just log out and log back in). Next up, let's install the kubevirt operator. This is what really ties the room together.
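`virt-host-validate` is the real check, but if it isn't cooperating, the two host prerequisites that matter most for the kvm2 driver can be eyeballed by hand. A rough sketch (the messages are mine, not libvirt's):

```shell
# Rough manual version of the two checks that matter most for kvm2.
# virt-host-validate performs these and more, with better reporting.
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
  echo "CPU exposes VT-x/AMD-V"
else
  echo "no vmx/svm flags: no hardware virtualization (or it's masked, e.g. nested virt disabled)"
fi

if [ -e /dev/kvm ]; then
  echo "/dev/kvm present"
else
  echo "/dev/kvm missing: load the kvm_intel or kvm_amd module"
fi
```

If `/dev/kvm` exists but you can't open it, that's usually the group membership issue mentioned above.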
```shell
export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- - | sort -V | tail -1 | awk -F':' '{print $2}' | sed 's/,//' | xargs)
echo $KUBEVIRT_VERSION
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
```

Then let's create a resource:

```shell
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
```

Now let's install virtctl. Are we getting tired yet?

```shell
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
chmod +x virtctl
```

Then we'll import with CDI:

```shell
wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/storage-setup.yml
kubectl create -f storage-setup.yml
export VERSION=$(curl -s https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -o "v[0-9]\.[0-9]*\.[0-9]*")
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
kubectl get pods -n cdi
```

Ok! Whooh! If you got through all of that we are almost to the finish line. Let's grab a template for our persistent volume claim:

```shell
wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/pvc_fedora.yml
```

Now, edit the line to show where you stuffed the original disk image.
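If you'd rather script that edit than open an editor, a one-liner like this works. It's demonstrated here on a stub file standing in for `pvc_fedora.yml`, and both URLs are placeholders, not the manifest's real contents:

```shell
# Sketch: swap the CDI import endpoint in place with sed.
# pvc_stub.yml stands in for pvc_fedora.yml; both URLs are placeholders.
cat > pvc_stub.yml <<'EOF'
    cdi.kubevirt.io/storage.import.endpoint: "https://example.com/some-default/image.img.xz"
EOF

sed -i 's|storage.import.endpoint: ".*"|storage.import.endpoint: "https://storage.googleapis.com/my-bucket/goweb.img.xz"|' pvc_stub.yml
cat pvc_stub.yml
```

Run the same `sed` against the real `pvc_fedora.yml` with your own bucket URL.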
In my example it looks like this (again, this is just an example to keep things easy; you wouldn't/shouldn't do this in real life):

```yaml
cdi.kubevirt.io/storage.import.endpoint: "https://storage.googleapis.com/totally-insecure/goweb.img.xz"
```

Let's create it:

```shell
kubectl create -f pvc_fedora.yml
kubectl get pvc fedora -o yaml
```

You can check out the import as it happens, but wait until you see the success message:

```
cdi.kubevirt.io/storage.pod.phase: Succeeded
```

Now we can create the actual VM:

```shell
wget https://raw.githubusercontent.com/kubevirt/kubevirt.github.io/master/labs/manifests/vm1_pvc.yml
kubectl create -f vm1_pvc.yml
```

Now if you run:

```shell
kubectl get vmi
```

you should see your instance running. If you are on minikube you can now hit the webserver to verify it responds.

Wow! We just deployed a unikernel to K8S. Easy? Well, I'll let you decide that. Of course, if you are using a public cloud like AWS or GCP and you don't want to go through all of that, these two commands will get the same webserver deployed with a lot less hassle, more security, and more performance with less waste:

```shell
ops image create -c config.json -a goweb
ops instance create -z us-west2-a -i goweb-image
```

Until next time.