
How to Build a Small K8s Cluster on a Single PC - Chapter 4 - Care About your System

by Eduardo Higueras, September 2nd, 2022

Too Long; Didn't Read

The fourth and last article in the series introducing my guide collection for building a small Kubernetes cluster on a single low-end PC. This piece covers the remaining guides, which deal with backing up and updating the system built in the previously reviewed walkthroughs, and closes the article series by pointing out some relevant topics not covered by my guides.



With the third article of this series, you got an idea of how my guides handle deploying applications and services on the particular Kubernetes cluster built with them. With the main goal of the guides achieved, what's left to talk about? Quite a bit, in fact: once you have a system such as this one up and running, you'll need to know how to protect its data and how to keep it up to date.

Chapter 04. Care about your system

In previous guides, I left indications on how to monitor each of the system's levels (host, virtual machine, and cluster), but monitoring alone is not enough to ensure that your system remains healthy or that it can survive incidents of any kind. Taking proper care of your system means regularly performing two routine but critical tasks on it: backups and updates. Below, I review the walkthroughs where I tackle these duties in a way that fits the setup built in my guide series.

Backups

As I've done with other long subjects, I've split the matter of backups into several guides, five in this case.

G037 - Backups 01 ~ Considerations

This guide serves as an introduction to the matter of backups, explaining the main concerns you should worry about. In other words, this document helps you get into the right mindset to better understand the related guides that follow.

And what are the main concerns covered in this particular guide?


There are four, each detailed in its own separate point:


  • What to back up, or singling out which data layer is present at each of the system's levels.
  • How to back up, or identifying the tools to use for each data layer.
  • Where to store the backups, or knowing beforehand which storage devices or spaces are available for holding them.
  • When to do the backups, or the criteria for scheduling each backup's execution.


One way or another, all these points are covered by each method detailed in the four guides that come after this G037 one.

G038 - Backups 02 ~ Host platform backup with Clonezilla

The first data layer to worry about is the one at the host level. It comprises the whole Proxmox VE setup and the virtual machines (and their virtual storage) running the K3s Kubernetes cluster. To make this kind of complete backup you need a specific tool capable of pulling it off, and I chose Clonezilla for the task since it's one of the most popular and capable of its kind.


This guide goes over all the concerns pointed out in the previous G037 guide, but in a more specific way centered on the kind of backup you can do with Clonezilla. It also tells you the basic steps for restoring a Clonezilla image. Bear in mind that this guide doesn't tell you how to use Clonezilla; that is something I left for its own appendix guide, which I describe right below.
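For orientation only, Clonezilla can also be driven from its shell through the ocs-sr script, which is what its interactive menus end up invoking. The lines below are a rough sketch from memory, not the procedure my guide follows: treat the image name, the device name, and especially the flags as assumptions to verify against Clonezilla's own generated command line or ocs-sr's help.

```bash
# Hedged sketch: save the whole /dev/sda disk into an image called
# "pve-host-backup" (flags recalled from memory; verify before use).
sudo ocs-sr -q2 -j2 -z1p -p true savedisk pve-host-backup sda

# Hedged sketch: restore that same image back onto /dev/sda. Clonezilla's
# menus print the full recommended option set for your case; prefer that.
sudo ocs-sr -p true restoredisk pve-host-backup sda
```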

G905 - Appendix 05 ~ Cloning storage drives with Clonezilla

Since I ended up referring to Clonezilla's backup and restoration process in a couple of different guides, I decided to leave the detailed handling of this tool in a separate document. This guide is a step-by-step walkthrough explaining how to run the tool and how to use it to carry out a basic backup and restoration procedure. It also points out certain particularities of the tool that are useful to know, such as how a Clonezilla backup's filesystem structure is organized.
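To give a feel for that structure, below is an illustrative listing of what a Clonezilla image directory typically contains; the exact file names depend on your disk layout, filesystems, and compression settings, so take it only as an example.

```bash
# Illustrative contents of a Clonezilla image directory (names will vary).
ls /path/to/backups/pve-host-backup/
# blkdev.list   clonezilla-img   disk   parts   Info-packages.txt
# sda-mbr       sda-pt.parted    sda-pt.sf
# sda1.ext4-ptcl-img.gz.aa   sda3.ext4-ptcl-img.gz.aa   ...
```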

G039 - Backups 03 ~ Proxmox VE backup job

Going one step up from the host level, you reach the Proxmox VE layer where the virtual machines live. The Proxmox VE system itself is already covered by the Clonezilla backups, so this guide worries only about how to back up and restore the virtual machines. Thankfully, Proxmox VE comes with its own integrated backup system, quite easy to use and more than enough for a small setup such as the one depicted in my guide series. This is the system I detail in this G039 guide, explaining things such as how to schedule these backup jobs, how (and how not) to restore the backups, and which directory stores the compressed backup files of your virtual machines.
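The guide drives these jobs from the Proxmox VE web console, but the same backup engine is also reachable from the host's shell as the vzdump command, and on a default local storage its archives end up under /var/lib/vz/dump. A minimal sketch, with a made-up VM ID:

```bash
# Hypothetical example: back up VM 101 as a compressed snapshot to the
# "local" storage (the guide itself schedules this from the web console).
vzdump 101 --storage local --mode snapshot --compress zstd

# Default location of the resulting backup archives on local storage.
ls /var/lib/vz/dump/
```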

Backing up data from the Kubernetes cluster with UrBackup

The last backup procedure explained in my guides is one that covers the data stored within the Kubernetes cluster itself: mainly user data, but also information produced by the applications themselves (configuration or log files, for instance). The method I detail uses the common combination of a backup server with client agents installed in the virtual machines. Those agents are responsible for gathering files into backups when they receive an order, scheduled or on demand, from the backup server they're connected to. The tool I chose for this was UrBackup, which offers all the necessary functionality to back up files found in the paths you configure on each client.


Since using UrBackup also required me to explain how to prepare the backup system itself, I divided this guide into two parts. The first deals with the UrBackup server's deployment, while the second explains the proper installation of the UrBackup clients and the configuration and scheduling of file backups.

G040 - Backups 04 ~ UrBackup 01 - Server setup

This guide is all about setting up a UrBackup server in a new Debian virtual machine within the Proxmox VE node. Here I take advantage of already having a virtual machine template of a regular Debian system, ready to be cloned into a new Debian VM in which to install the UrBackup server instance. As happened with the K3s nodes before, this new VM also requires some tinkering (hostname, enabling the second network card, hardening, etc.) and, more importantly, attaching to it a good amount of virtual storage capacity for the backups UrBackup will produce and keep.
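As a reminder of what that cloning step looks like from the Proxmox VE shell (the guide walks through the equivalent web console actions), here is a minimal sketch; the template ID, the new VM ID, and the name are placeholders.

```bash
# Hypothetical sketch: create a full clone of the Debian template (ID 100)
# as a new VM (ID 105) that will host the UrBackup server.
qm clone 100 105 --name urbackup-server --full

# Start the new VM once it's been adjusted (hostname, second NIC, extra disk).
qm start 105
```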


One major highlight of this whole setup is the use of the BTRFS filesystem. Debian 11 (the particular distribution used in my guides) supports it, and UrBackup can automatically take advantage of capabilities that, in certain respects, surpass what can be done with Ext4 LVM-based storage. Enabling BTRFS in the UrBackup system is not difficult: it's just a matter of installing the specific toolset and formatting the storage that will keep the backups as BTRFS.
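In rough terms, and assuming the dedicated backup disk shows up inside the VM as /dev/sdb (a placeholder; your device name and mount point will differ), that preparation boils down to something like this:

```bash
# Install the BTRFS user-space tools on the Debian VM.
sudo apt install btrfs-progs

# Format the dedicated backup disk as BTRFS (device name is a placeholder!).
sudo mkfs.btrfs /dev/sdb

# Mount it where the UrBackup server will be told to keep its backup storage,
# and add a matching /etc/fstab entry so the mount survives reboots.
sudo mkdir -p /mnt/urbackup-storage
sudo mount /dev/sdb /mnt/urbackup-storage
```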


The other noticeable aspect of this guide is the rather long process of properly configuring the UrBackup server itself, which includes things like firewalling in Proxmox VE, enabling HTTPS connections, and some other adjustments made within UrBackup's web console.

G041 - Backups 05 ~ UrBackup 02 - Clients setup and configuring file backups

For UrBackup to do its job, the server must be connected to clients that will gather the files to back up. Those clients are what you have to install and configure on each of the K3s node VMs of your Kubernetes cluster. The installation part is rather easy, with not much to configure in particular, and with the added convenience that the UrBackup server automatically discovers any clients in its vicinity and attaches them to itself by default.


Now, the part regarding the backups themselves has its own quirks in UrBackup. For starters, the most obvious one: there's the client side, and there's the server side. On each client machine, a particular command configures the paths you want to back up from that system. On the server side, you configure how you want to schedule those backups and what limitations they should have. The most important thing to remember here is that the UrBackup server, by default, executes backups automatically with a predetermined configuration that you must review. In this walkthrough, I go into detail about the options available for configuring backups on the server side, and I show you a rather simple configuration better suited to this guide series' setup.
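That client-side command is urbackupclientctl. The exact flags below are written from memory, so treat them as assumptions and confirm them against the help output of urbackupclientctl on your own clients:

```bash
# Assumed usage (verify with `urbackupclientctl --help`): register a directory
# on this client so the server includes it in file backups; the path is only
# an example.
sudo urbackupclientctl add-backupdir --path /etc/rancher

# Check this client's connection and backup status.
sudo urbackupclientctl status
```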


Regarding restorations of UrBackup backups, bear in mind that the way the backups are configured in this guide won't allow you the fast restorations possible with full image backups, such as the ones Proxmox VE makes of your virtual machines. With this UrBackup setup, you'll make copies of particular directories that you'll probably have to restore by hand on each recovered system, although I also mention the UrBackup client command for restoring files on client systems.
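That restore functionality also hangs from urbackupclientctl. Again, the subcommands and flags below are recalled from memory and should be double-checked against the client's help before you rely on them:

```bash
# Assumed usage: list the file backups the server holds for this client,
# then start a restore from one of them (the flag name is an assumption).
sudo urbackupclientctl browse
sudo urbackupclientctl restore-start -b <backup-id>
```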

Updates

As with backups, updating a system means applying procedures that come with their own particularities, depending on the component being updated and on its relationship with the other components. Because of this complexity, I've divided the subject of updates into four guides.

G042 - System update 01 ~ Considerations

Before you get hands-on with updating, you must first be aware of the implications that applying these processes will have on your system. Because of this, I wrote this guide to highlight the main notions you have to take into account when applying updates to the system you've created with my guides. I've organized these notions around the following three points.


  • What to update, or identifying all the components in the system and how they relate to each other. Put another way, you must know your system's component hierarchy to understand the effects of applying updates to it.
  • How to update, or learning the proper update procedure for each component and understanding how applying each update can affect your running system.
  • When to update, or deciding the right timing and order in which to apply each component's update to reduce your system's downtime. To know the proper updating order, it's very important to understand the hierarchy of components that make up your system.


A particular detail that I don't mention in this guide is that, in general, you want to have recent backups at hand before you apply updates to any layer of the system's component hierarchy. This way, if anything goes wrong, you can roll back much more easily. This is why I put the backup guides before the ones about updates: safety first!

G043 - System update 02 ~ Updating Proxmox VE

This guide explains how to update the Proxmox VE system through its web console. Nothing particularly difficult, since under the hood it just executes the corresponding apt commands, but you still have to pay attention to the procedure. It can spring little surprises on you, such as asking about configuration files changed by newer package versions, or re-enabling Proxmox VE services you had left disabled for security and performance reasons. This guide covers all of that, plus a proper order in which to perform the necessary steps, from doing a complete system backup with Clonezilla to restarting the virtual machines of your Kubernetes cluster.
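For reference, what the web console runs underneath is essentially the standard apt upgrade cycle on the Proxmox VE host, along the lines of this sketch (executed as root in the host's shell; the guide itself drives it from the web console):

```bash
# Refresh the package lists from the configured Proxmox VE and Debian repos.
apt update

# Apply the pending upgrades; for Proxmox VE hosts, dist-upgrade (or
# full-upgrade) is the usually recommended form rather than a plain upgrade.
apt dist-upgrade
```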

G044 - System update 03 ~ Updating VMs and UrBackup

There are four virtual machines in the setup explained in my guides. All of them are Debian 11 systems, so updating them means applying the same apt commands to all of them by hand. Of course, there are more advanced platforms and procedures that automate these tasks, but I don't cover them here (that would be a separate guide altogether!). This guide also explains how to deal with the fact that both the UrBackup and the K3s cluster services are running in the virtual machines.
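On each Debian VM, that manual update boils down to the usual apt cycle; a minimal sketch:

```bash
# Refresh the package lists and apply all pending upgrades on a Debian 11 VM.
sudo apt update
sudo apt full-upgrade

# Optionally, remove packages that are no longer needed after the upgrade.
sudo apt autoremove
```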


On the other hand, since the UrBackup server and clients are part of these virtual machines, I thought it proper to also detail their procedures in this G044 guide. As with their installation, the server has its own update procedure while the clients have a different one.

G045 - System update 04 ~ Updating K3s and deployed apps

This is probably the most complex update procedure in the whole system. While the previous updates don't really require you to worry about version incompatibilities or similar issues, in Kubernetes versioning becomes critical. If you're not careful, you might end up updating apps to versions that are incompatible with the Kubernetes engine running your cluster, and the opposite can also happen.


This problem is particularly relevant for low-level or critical services such as cert-manager or MetalLB, and it also determines the order in which you have to update the apps and the K3s software running your cluster. Of course, there's also the issue of the components within complex apps such as Nextcloud: updating an internal component to a newer version could cause an unexpected issue in another component that relies on it, and skipping a version when upgrading could break the app completely.


This guide, the last of both the update guides and the core guides, explains how to deal with these issues. It goes into detail about the critical components' versions and their compatibility with the latest Kubernetes releases, and also details the proper update order for the apps to mitigate versioning problems. With the applications updated, the next thing I explain is how to upgrade the K3s software itself, indicating two ways: a manual one that makes you "reinstall" the software on each K3s node, and another that centralizes the task in a Kubernetes deployment.
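For the manual route, K3s is commonly upgraded by re-running its official install script pinned to the desired release; a hedged sketch follows (the version string is a placeholder, and agent nodes additionally need their usual K3S_URL and K3S_TOKEN variables):

```bash
# Hedged sketch: upgrade a K3s server node by re-running the official install
# script pinned to a specific release (the version shown is only a placeholder).
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.24.4+k3s1" sh -

# Afterwards, check that every node reports the expected version.
kubectl get nodes -o wide
```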


Upgraded K3s software comes with a newer K8s engine, which implies that you also need to upgrade the kubectl command on your client system to keep it compatible with the K8s cluster you're connecting to. This is the very last thing I cover in this G045 guide: how to update that kubectl command. After this, you can consider the guide finished.
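Updating kubectl on a Linux client is essentially a matter of downloading the newer binary and replacing the old one, following the upstream download URL scheme; a minimal sketch:

```bash
# Download the latest stable kubectl binary for linux/amd64.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install it over the previous binary and verify the client version.
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```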

About the appendixes

In this and previous articles, I've referred to the most relevant appendix guides, which you can find among the last files of my project. The ones I haven't mentioned cover things that weren't necessary for building the system explained in the core guides, but that are somewhat interesting or useful on their own. So consider them just leftover notes that I attached as appendix guides to avoid losing them.

Concerns left uncovered

Congratulations if you've followed my whole guide series to the end. That means you've built a small K3s system on which you can practice your Kubernetes skills, but be aware that there's still a lot of ground left to cover regarding this technology. Let me point out a few of the things you may want to learn about.


  • Security: I've barely mentioned anything about this, but it's a crucial matter that affects, in particular, how apps are deployed in K8s clusters.
  • Resource management: in Kubernetes, it's very important to keep a grip on the CPU and RAM used by each deployed app or service. This can be controlled by declaring particular attributes in their deployment resources (see the sketch after this list), but it's a nuanced matter that I didn't know well when I wrote my guides.
  • Better monitoring or administration tools: my guides show you the standard manual way of handling things in Kubernetes, but nowadays there are many projects (beyond the official K8s dashboard) that offer a better user experience for K8s cluster administration.
  • App image administration: every time you deploy a new application (or a newer version of it) in a K8s cluster, the Kubernetes system has to download and store the corresponding image. Those images are files that eat up space on your K8s nodes, so you need to prune the old ones to free space, as also sketched below.
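To give a flavor of those last two points, here is a hedged sketch with made-up names and values: it declares CPU/RAM requests and limits on a container, and then prunes unused images on a K3s node (the --prune flag depends on the crictl version bundled with your K3s, so verify it first).

```bash
# Hypothetical example: a Deployment declaring CPU/RAM requests and limits
# for its container (all names, images, and values are placeholders).
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: nginx:1.23
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
EOF

# On a K3s node, remove container images not used by any running container
# (assumes the bundled crictl supports the --prune flag).
sudo k3s crictl rmi --prune
```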


Two last things I'd like to tell you. One is that you won't be able to deploy certain technologies or products in such a small Kubernetes setup, mainly because of the hardware limitations. The other is something you may have already realized yourself, but I still want to point it out: if you happen to have a bunch of idle computers you can use (Raspberry Pis, NUCs, or whatever), you could ignore all the guides regarding Proxmox VE and just try to build the K3s cluster by adapting my corresponding guides to your hardware (remember that I used two network cards in the VMs, by the way).


I'll take my leave from this article here. Hold fast to that K8s rudder and good luck!



Also published here.