The lightweight Kubernetes OS k3OS has quickly been gaining popularity in the cloud-native community as a compact, edge-focused Linux distribution that strips away everything a traditional Kubernetes node doesn't need. While k3OS is picking up steam, it is still on the bleeding edge, and learning material for it remains fairly scarce.
In this blog post, I'll walk you through a demo of how I helped solve a puzzling issue with Powerflex's edge k3OS deployment.
For those who aren't familiar with it, k3OS is a lightweight Linux distribution developed by Rancher Labs. It is designed to abstract away as much of the OS maintenance of a Kubernetes cluster as possible, shipping only what is needed to run k3s.
Key features of k3OS include:

- k3s, Rancher's minimal Kubernetes distribution, baked in and started on boot
- configuration of the entire OS through a single cloud-init-style config file
- a stripped-down image with a small footprint and attack surface
- OS upgrades that can be managed from Kubernetes itself
By leveraging a lightweight Kubernetes operating system like k3OS, developers can more easily deploy applications to resource-constrained environments such as those at the edge of the network.
Now, before we jump into our demo, I want to give a brief overview of what Rancher is and how it relates to what we're trying to solve.
Rancher is an open-source Kubernetes management platform that can deploy and manage multiple Kubernetes clusters running anywhere, on any provider. It can provision Kubernetes from a hosted provider, provision compute nodes and then install Kubernetes onto them, or import existing Kubernetes clusters running anywhere.
Rancher adds significant value on top of Kubernetes, first by centralizing role-based access control (RBAC) for all of the clusters and giving global admins the ability to control cluster access from one location. It then enables detailed monitoring and alerting for clusters and their resources, ships logs to external providers, and integrates directly with Helm via the Application Catalog.
Let’s explore the use case of Powerflex a bit before we dive into the more technical parts. Powerflex maintains a large edge deployment built on top of k3OS while using GCP for the management plane. The architecture looks something like this:
Powerflex has sites scattered across the country, and the installation process for a node has to be fairly plug-and-play. When a site is being installed, Powerflex will ship a preconfigured box with k3OS out to the site, the engineer plugs it in and the box will run an init script that does the following:
This plug-and-play model allows field engineers to rapidly deploy k3OS nodes with little effort and setup. The optimization I’m going to explore today is using the cloud-init scripts to register nodes to Rancher via the API.
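To make that idea concrete, here is a rough sketch of the kind of Rancher API call such a cloud-init script could build. The Rancher URL, cluster ID, and credentials below are placeholders I made up for illustration, and the endpoint shape is an assumption based on Rancher's v3 API; check the API docs for your Rancher version before relying on it.

```shell
# Hypothetical sketch: compose the Rancher v3 API request a cloud-init
# script could use to obtain a cluster registration token.
# All values here are placeholders, not Powerflex's real configuration.
RANCHER_URL="https://rancher.example.com"
CLUSTER_ID="c-xxxxx"

ENDPOINT="${RANCHER_URL}/v3/clusterregistrationtokens"
BODY="{\"type\":\"clusterRegistrationToken\",\"clusterId\":\"${CLUSTER_ID}\"}"

# In the real script, this request would be sent with API credentials, e.g.:
#   curl -sk -u "$ACCESS_KEY:$SECRET_KEY" -X POST \
#        -H 'Content-Type: application/json' -d "$BODY" "$ENDPOINT"
# The JSON response includes a manifest URL the node can `kubectl apply`
# to register itself with Rancher.
echo "$ENDPOINT"
echo "$BODY"
```

The point of doing this from cloud-init rather than by hand is that the node registers itself the moment it boots, with no engineer in the loop.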
Let's dive into some of the technical setup. You're going to want to install VirtualBox and download the k3OS ISO here so you can boot it up in a bit. You'll also need a Rancher instance running on some type of cloud platform, which is outside the scope of this demo.
Just to reiterate, the things you'll need are:

- VirtualBox
- the k3OS ISO
- a running Rancher instance
Once you have VirtualBox installed, start it up and go through the initial process of creating the VM and attaching the k3OS ISO. Once you've done that, start the machine and you should be greeted with a nice little intro screen:
From here, you'll want to log in with the rancher user, and you'll then be asked whether you want to configure the system or install it. Choose to install and move through the steps until it asks whether you want to use a cloud-init file.
Before we move any further, I’m going to break down what the cloud-init file is doing here. Cloud-init is the industry standard multi-distribution method for cross-platform cloud instance initialization. It is supported across all major public cloud providers, provisioning systems for private cloud infrastructure, and bare-metal installations.
The file itself is pretty simple, and I would provide the raw code inline, but Medium's formatting makes that a bit difficult.
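For a sense of the shape, here is a minimal sketch of what a k3OS cloud-init file along these lines might look like. Every value in it (hostname, SSH key reference, Rancher URL, and the token in the import URL) is a made-up placeholder, not Powerflex's actual file, and the `run_cmd` approach is one assumed way to apply a Rancher import manifest at boot:

```yaml
# Hypothetical k3OS config sketch -- all values are placeholders.
hostname: edge-node-01
ssh_authorized_keys:
  - github:your-username          # placeholder key reference
run_cmd:
  # Pull a cluster import manifest from Rancher and apply it once
  # the node is up; the token in the URL is a placeholder.
  - "kubectl apply -f https://rancher.example.com/v3/import/abc123.yaml"
k3os:
  password: rancher               # demo-only password
```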
So let's head back over to VirtualBox and give k3OS the raw gist URL of the cloud-init file we just talked about.
Once you feed the OS the URL of your init file, k3OS will use it to make an authentication call to your Rancher server and get back the YAML needed to spin up a cluster. If everything goes correctly, a cluster should show up in your Rancher server once the cloud-init file executes.
That's really about all there is to it. Cloud-init enables a lot of really great automation, and I'm curious to hear how you all implement it in your workflows, especially with k3OS.
Thanks for reading and let me know what you think. Also, I want to throw a huge shoutout to Nick Kampe for helping me brainstorm this project!