In the second article of this series, I reviewed my guides (find the link to them at the end!) on how to build a K3s-based Kubernetes cluster with a few virtual machines. This time, I'll give you a tour of the walkthroughs where I show you how to deploy apps or services in such a cluster using the official Kustomize method.
You can deploy all sorts of apps or services in your K3s cluster: from purely in-memory microservices (which were the original aim of Kubernetes) to more complex applications that need to persist data to storage. Regardless of what you want to deploy, you'll have to do it following some criteria or organizational scheme. In my case, I chose to keep using Kustomize projects for my deployments (as I did in previous guides). The reasons for this decision are mainly two:
Kustomize support comes built into the kubectl command, so you don't need to install any extra tooling to use it.
Fine then, we've got the cluster and decided on the deployment procedure. So, what would be a good example of a deployment? I thought of something useful: a personal file cloud server such as Nextcloud, a lightweight Git server like Gitea and, as the cherry on top, a Prometheus-based monitoring stack.
You'll see that in my guides I followed essentially the same procedure to declare the components in their Kustomize projects, while taking into account each one's nuances. You'll also notice that each deployment guide is split into several parts, which is necessary since each project is made up of several Kustomize subprojects, one per main component.
This particular document is more of a warning page than a guide by itself, and it's about being careful with your host's resources: the K3s cluster may be lightweight for a Kubernetes setup, but even when "idle" it already has a non-negligible number of processes consuming your RAM and CPU. It also reminds you of methods to monitor the current state of your system's resources.
The guide that explains how to declare and assemble the Nextcloud deployment is split into five parts.
This is the starting point for the Nextcloud deployment guide. In this part, the whole Nextcloud setup gets planned out, before declaring anything in YAML files. Here I decided which database to use, whether and how to use Redis, and also how to organize the different storage requirements of each component. This is nothing more than a small outline, but it's the very first thing you must do to figure out what steps to follow next and in what order.
This guide also describes the earliest step you have to take, setting up all the storage space you'll need later for your Nextcloud components. In this case, this means attaching new virtual storage drives to the K3s worker/agent VMs, then preparing them for later use by specific components.
There's another important aspect covered in this guide, which is related to the internal networking of the cluster. Within a Kubernetes cluster, certain components, namely services and pods, have their own IPs that make them reachable internally. These internal IPs are dynamically assigned by the Kubernetes system itself, but you can also set static ones for the services or pods you deploy. Here you'll see an example of how to choose internal static IPs just for the services you'll deploy in the following guides.
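As a rough idea of what that looks like, here's a minimal sketch of a Service declared with a fixed internal IP. The name, namespace and address are hypothetical, with the IP picked from K3s's default service range (10.43.0.0/16):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-db        # hypothetical name, not necessarily the one used in the guide
  namespace: nextcloud
spec:
  clusterIP: 10.43.100.10   # static internal IP, must fall within the cluster's service CIDR
  selector:
    app: nextcloud-db
  ports:
    - port: 3306
      targetPort: 3306
```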
Redis is a very popular in-memory data store that Nextcloud can use as a session cache. This guide explains how to set it up as just that, an in-memory service that uses no storage whatsoever. To be able to monitor it with Prometheus, the guide also shows you how to deploy another service that exports Prometheus-style statistics from the Redis instance, using the sidecar pattern, where two (or more) containers are put within the same pod. All this configuration gets declared as a Kustomize project within Nextcloud's larger one, meaning that you won't deploy this Redis service on its own.
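To give you an idea of the sidecar pattern, here's a minimal sketch of a Deployment running Redis together with a metrics exporter in the same pod. The names, namespace and images are illustrative, not necessarily what the guide uses:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-redis
  namespace: nextcloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nextcloud-redis
  template:
    metadata:
      labels:
        app: nextcloud-redis
    spec:
      containers:
        - name: server                # the Redis instance itself, purely in-memory
          image: redis:alpine
          ports:
            - containerPort: 6379
        - name: metrics               # sidecar exposing Prometheus-style stats for Redis
          image: oliver006/redis_exporter:alpine
          ports:
            - containerPort: 9121
```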
Nextcloud requires a proper database to register its operations and the files it stores, and I chose MariaDB. In this case, storage space is required for saving data, so you'll see here how to reserve it with the proper Kubernetes resources and link it to the MariaDB service. And, as with Redis, I show you how to declare another service in a sidecar container to export Prometheus stats from the MariaDB instance. This requires creating a specific user in MariaDB for that service, something I also explain how to do within the very same Kustomize subproject for MariaDB. Also like Redis, you won't deploy this MariaDB project directly, not only because it's part of the larger Nextcloud one, but because it wouldn't work: the storage it requires isn't available yet at this point.
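As an illustration of how that storage gets claimed, here's a minimal sketch of a PersistentVolumeClaim that the MariaDB deployment could mount. The name, namespace, size and storage class are assumptions, since the exact values depend on how you prepared the storage:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data                 # hypothetical name
  namespace: nextcloud
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage    # assumes manually provisioned local PVs; adjust to your setup
  resources:
    requests:
      storage: 4Gi
```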
In this part, you get into the matter of the Kustomize project for the Nextcloud server instance itself. Like the other components, it also requires a specialized service to export Prometheus stats that, again, will be set up in a sidecar container. The particular difficulty this Nextcloud server has is that it requires a web server to present its contents. The most common options are Apache (the one recommended for Nextcloud) and Nginx. In this guide I detail the configuration necessary for an Apache-based instance, enabling secure HTTPS connections using the self-signed certificate explained in one of the guides reviewed in my previous article. Don't get confused, the Nextcloud server instance is just another component of the whole Kustomize project, which is yet to be finished.
G911 - Appendix 11 ~ Alternative Nextcloud web server setups
As I've pointed out before, setting up Nextcloud with the Apache web server is just one possibility. You can also use Nginx and, going even further, you can set up different ways to expose the Nextcloud service to the outside world. You can just assign a static IP to the service serving the Nextcloud instance, or you can use a Traefik IngressRoute to channel the connections to Nextcloud through Traefik. In this appendix guide, you'll find some of those alternative ways of configuring access to the Nextcloud service.
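As a taste of the Traefik option, below is a minimal sketch of an IngressRoute routing HTTPS traffic to a Nextcloud Service. The hostname, service name and secret are placeholders, and depending on your Traefik version the apiVersion may be traefik.io/v1alpha1 instead:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nextcloud
  namespace: nextcloud
spec:
  entryPoints:
    - websecure                        # Traefik's default HTTPS entry point in K3s
  routes:
    - match: Host(`nextcloud.your.domain`)
      kind: Rule
      services:
        - name: nextcloud-server       # placeholder for the Service in front of the Nextcloud pod
          port: 443
  tls:
    secretName: nextcloud-certificate  # placeholder for the secret holding your self-signed cert
```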
This is the final part of the Nextcloud deployment guide, where you add the missing pieces and tie everything up in one major Kustomize project. What are those pieces still pending? The storage spaces that previously declared components (the database, the Nextcloud service) have claimed for themselves, the namespace under which all the components will be deployed in the cluster, and the secret of the self-signed certificate used in HTTPS connections.
In particular, one has to be careful with the storage since, in this setup, each storage space is already attached to a particular virtual machine, and that also has to be configured properly in the declaration of the Persistent Volume resources that represent those spaces in the Kubernetes cluster.
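To illustrate the point, here's a minimal sketch of a local Persistent Volume pinned to a specific node. The hostname, path, size and storage class are assumptions you'd replace with the values matching your own VMs and drives:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-db-pv          # hypothetical name
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/nextcloud-db      # mount point of the virtual drive attached to that VM
  nodeAffinity:                  # ties this volume to the node where the drive actually exists
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k3sagent01     # placeholder hostname of the agent VM holding the drive
```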
So, to complete this last stage you'll have to declare the Persistent Volumes and Namespace resources, set up the main Kustomize project where you'll call all the previous subprojects and the other remaining resources, deploy the main project and, finally, copy the secret of your self-signed certificate, which should already be created in your cluster, into the namespace of this Nextcloud deployment. This whole procedure is, of course, detailed in this last part of the Nextcloud deployment guide.
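For orientation, the main kustomization.yaml tying the pieces together could look roughly like this (the file layout and names are hypothetical); it then gets deployed with the usual kubectl apply -k pointed at its folder:

```yaml
# kustomization.yaml of the main Nextcloud project (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: nextcloud

resources:
  - resources/namespace.yaml            # the Namespace declared in this final part
  - resources/persistent-volumes.yaml   # the Persistent Volumes claimed by the components
  - components/cache-redis              # the Redis subproject
  - components/db-mariadb               # the MariaDB subproject
  - components/server-nextcloud         # the Nextcloud server subproject
```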
As with Nextcloud, I also split the Gitea deployment guide into five parts, since the procedure in both cases is very similar.
Gitea is a lightweight source version control platform that is similar to Nextcloud in several ways: Gitea stores user files, needs a database, can use Redis as an in-memory cache, and can also expose Prometheus metrics. So, as with Nextcloud, in this first part you decide which components to use, prepare the required storage space, and also consider on which K3s agent node to deploy your Gitea platform. It's important not to forget this detail, since at this point you'll have already deployed the Nextcloud platform on one of your available agent nodes, so you should use the remaining one.
A very important detail that differs from how the Nextcloud platform was deployed is that, to connect the major Gitea components with each other, instead of using internal static IPs you'll see how to reach them by their internal FQDN or DNS record. K3s comes with the CoreDNS service, which acts as the internal DNS registry for all the services and pods active within the cluster.
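The naming pattern CoreDNS resolves is <service>.<namespace>.svc.cluster.local, so one component can reach another by name rather than by IP. As a purely hypothetical illustration (the names, namespace and environment variable are placeholders, not necessarily what the guide uses), a Gitea container could point at its database like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-server
  namespace: gitea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea-server
  template:
    metadata:
      labels:
        app: gitea-server
    spec:
      containers:
        - name: server
          image: gitea/gitea
          env:
            - name: GITEA__database__HOST
              # resolved by CoreDNS: <service>.<namespace>.svc.cluster.local
              value: gitea-db.gitea.svc.cluster.local:5432
```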
Gitea can use Redis as a memory cache, so it's a good idea to deploy this service to help improve Gitea's performance. This deployment is identical to the one declared for the Nextcloud platform, apart from setting a different password for connecting to this Redis instance.
Gitea requires a database to run and, like Nextcloud, it can use MariaDB. But since Gitea is also compatible with PostgreSQL, I thought it would be better to show how this stage is done with a different database engine. It's basically the same although, as you might expect, the configuration file and the initialization script are very different from the MariaDB ones. In particular, the initialization script is far more elaborate, since it creates a couple of users with very different permission sets. On the other hand, PostgreSQL also requires, like MariaDB, an extra service that exports its stats in Prometheus format. This service is deployed as a sidecar container, again as was done for Nextcloud's MariaDB.
The last major component to declare is the Gitea server, which is easier to deploy than the Nextcloud server. Gitea comes with web server capabilities and can also provide its own stats in Prometheus format, which saves you the extra complexity of configuring either a web server like Apache or a sidecar container with a Prometheus exporter, as is needed for other components.
In this fifth and last part, you'll put all the components together in the corresponding main Kustomize project for your Gitea platform. You'll declare the required Persistent Volumes and Namespace resources, deploy Gitea and enable the certificate for this platform, all in a very similar way as you saw for the Nextcloud platform. After this, there's an extra configuration step you'll need to do in Gitea to finish its installation, but it's just a peculiarity of this product that doesn't take long to sort out, and it's also explained in this part.
This is the last Kubernetes deployment I detail in my guides. It covers the deployment of a Prometheus-based monitoring stack in six parts, just a bit longer than the two previous ones.
While Gitea and Nextcloud are quite similar in nature, a Prometheus-based monitoring stack is a completely different platform made up of a different set of components. Still, this platform also stores data, so once again you'll need to reserve some storage space for it right in this first part. And again, you'll have to consider where to deploy this monitoring stack. This time, I'll show you how to split the components between the two K3s agent nodes.
The Kube State Metrics service specializes in reporting details of all the Kubernetes objects running in a cluster, in particular details that are not reachable through the native Kubernetes monitoring components. In other words, it provides extra information about what's going on in your cluster. This service is small and doesn't require any storage whatsoever, although its deployment has a particular complication regarding the security role it needs to read all that it can from the cluster, a detail also covered in this part.
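For context, giving a service like Kube State Metrics read access to the whole cluster is done with a ClusterRole bound to its ServiceAccount. A heavily abridged sketch (the names, namespace and resource list are illustrative; the real role covers many more resources) would look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
  - apiGroups: [""]
    resources: ["nodes", "pods", "services", "namespaces", "persistentvolumes"]
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
  - kind: ServiceAccount
    name: kube-state-metrics
    namespace: monitoring       # placeholder namespace for the monitoring stack
```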
This is another in-memory-only service, one that exposes Linux system-level metrics from the K8s cluster nodes. It requires neither storage nor any particular security configuration, at least in the way it's set up in this part, so its deployment declaration is rather simple compared to the previous ones.
This is the heart of the monitoring stack, the Prometheus server itself. The most complex part of this deployment is properly setting up the main configuration file of Prometheus. It's with that file that Prometheus finds the sources of stats present in the cluster, including the Kube State Metrics and the Prometheus Node Exporter instances. In this walkthrough, you'll see how I make Prometheus find those sources by their internal FQDN or DNS name.
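To illustrate, the scrape configuration can reference those services by their CoreDNS names rather than by IP. A minimal, hypothetical excerpt of prometheus.yml (the job names, namespace and ports are assumptions) could look like this:

```yaml
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: kube-state-metrics
    static_configs:
      - targets: ['kube-state-metrics.monitoring.svc.cluster.local:8080']
  - job_name: node-exporter
    static_configs:
      - targets: ['prometheus-node-exporter.monitoring.svc.cluster.local:9100']
```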
On the other hand, there's another component very common in Prometheus installations that I don't cover in this guide, the Alertmanager. The problem is not with the component itself, but that this cluster setup has no service where the alerts captured and forwarded by Alertmanager would be consumed. The only thing you'll find in this part is an example configuration file for Alertmanager rules, which you can use as a starting point to investigate this matter further on your own.
I must also mention that you'll see here how to make this service reachable through Traefik for external network traffic. Nextcloud and Gitea had their services assigned static IPs, but in this case you'll learn how to enable access to the Prometheus server with a DNS name through Traefik's external IP.
Grafana is a better (and more complex) graphical interface for the Prometheus server, which otherwise only offers its own, more basic one. Even though it's just a graphical interface, Grafana also uses some storage space, something you'll see claimed in this part. On the other hand, the configuration set for Grafana in this deployment is very simple, and it also makes this service accessible only through Traefik.
The final part of this monitoring stack deployment has the expected declarations of Persistent Volumes and Namespace resources, plus an extra security role that has to be set cluster-wide to give Prometheus access to metrics from resources like nodes or pods. After all this, the main Kustomize project is declared and then deployed in the cluster like the others in previous guides, and finally the self-signed certificate's secret is enabled in the corresponding namespace for this deployment.
Prometheus doesn't require any post-installation action to run, but Grafana will ask you to change your administrator password. Then you'll need to set Grafana up so it can show the stats gathered by your Prometheus server. In this part, I give you a brief explanation of how to enable a Prometheus data source and also a generic Prometheus dashboard.
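As a side note, Grafana can also provision data sources declaratively from a file instead of through its web interface. A minimal sketch of such a provisioning file, with a placeholder URL standing in for your Prometheus service, looks like this:

```yaml
# datasources.yaml, dropped under Grafana's provisioning/datasources/ directory
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus-server.monitoring.svc.cluster.local:9090  # placeholder service FQDN
    isDefault: true
```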
When you reach this point, you'll have deployed quite a number of services in your K3s cluster. They'll consume your system's resources (and degrade its performance), so you'd better keep an eye on them!
This particular guide is just a reminder of the ways you can monitor your Proxmox VE host, your virtual machines, and your K3s cluster. More importantly, I also review all the relevant logs you can check when you detect that something is going wrong. Also, I tell you how to get remote shell access to the containers running in your cluster with the kubectl command, and give you hints about how different things look from inside a container.
With the whole setup completed, what's left to talk about? The two tasks that no system nowadays can escape from: the dreaded software updates and the always sensitive matter of backups. So my next, and last, article in this series will review the guides explaining how to carry out those procedures in my Kubernetes setup, plus a few other details worth bringing up.