If you are working with Kubernetes, you should use Helm to manage the YAML resources for your applications. Helm is a package manager for deploying applications on a cluster: it gives you a catalogue of ready-made applications (charts) to install, and you can also create your own and publish them in your own chart museum. Just like on an operating system, life without a package manager would be much harder.
Helm charts for an easier Kubernetes installation (image source: https://helm.sh)
I am using Helm day by day, and these are the things I found out while creating new charts or modifying existing ones. If I had known all of this before starting with Helm, things would have been smoother: I would have wasted less time searching for error messages and solutions to all sorts of problems, or at least spent less time refactoring.
As prerequisites, I assume you already have kubectl set up and have access to a cluster, no matter whether it was created on a cloud platform or is just minikube. I also expect that you have some knowledge of how to use Helm, have installed some charts already, and feel confident about getting more involved with Helm.
Before starting any work it is a good idea to take a look at the list of packages found in the Helm stable repository. There is a lot to learn from those charts, and we will use them as examples to illustrate a couple of ideas you can apply in your own charts. The stable repo is added automatically when Helm is installed, so you can start downloading and using any of those charts right away.
Starting a chart is not very complicated: you only need to create some files inside a folder, like Chart.yaml, values.yaml and the actual resources (like a deployment) in the templates folder. You might be tempted to create them one by one by hand, as it does seem like a good exercise, or to build your own scaffolding for them while learning Go, Node.js or some other language. But that is not a good idea. Instead, always use the helm create command, simply because it is always up to date with the recommended practices. For example, if you run helm create test-chart right now, the labels it sets are the latest recommended ones, like app.kubernetes.io/name, app.kubernetes.io/instance or app.kubernetes.io/managed-by. If you browse some charts from the stable repo you can see that they don't use such labels, simply because other recommendations were in place when they were created.
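To give an idea, the _helpers.tpl generated by a recent Helm client defines a labels block roughly like this (abridged; the exact output depends on your Helm version):

app.kubernetes.io/name: {{ include "test-chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}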
Also, if you look at the deployment it created (in the templates/deployment.yaml file), you can see other interesting things: for example, it is a good idea to parameterise the image repository and tag for your container (especially the tag, because it will change quite often), but it is not common to parameterise things like liveness or readiness probe paths. If in the future it becomes useful to parameterise other values, you can be sure the generated templates will be updated. So by using helm create to start your chart you make sure the latest best practices are followed, with one condition: have the latest Helm client version installed.
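For reference, the image section of the generated deployment looks roughly like this (abridged from a recent helm create output):

image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}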
It is a good practice to lint your charts before trying to install them. Linting applies the templating (which happens on the client) and verifies that the output is well-formed YAML.
Sometimes some resources are excluded by default when the templating is applied. For example, a web API can have an ingress rule added to enable outside communication only when some variable is set to true (and by default it is false). That ingress will not be linted, because the resource will not be rendered. For such cases the lint command can take other values files instead of the default one (and the variable can be set to true in those files).
A good practice is to create a ci folder on the same level as your templates one and put the additional values files you want to verify there. So if you have a file called ingress-enabled-values.yaml in your ci folder, just run helm lint --values ci/ingress-enabled-values.yaml
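Such a values file can be as small as this (assuming the chart guards its ingress with an ingress.enabled value, as the helm create scaffold does):

ingress:
  enabled: true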
In the stable charts you can see that several have the ci folder, and I noticed that more and more have been adding it lately (this folder is not used just for linting, but mainly for test releases, for which I intend to write a separate article).
We will take a look at the Postgresql chart and see what it will install for us, without actually doing the installation, just by using two flags: --dry-run and --debug
helm install stable/postgresql --name standalone --dry-run --debug
What will be displayed on the screen are all the YAML files that will be sent to the tiller (the Helm server component) in order to be applied on the cluster. You can also add your local values file (using the -f values.yaml flag) to override some of the default settings, which is very useful when you want to check how you can modify the resources to deploy, without installing the application yet.
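For example, with a hypothetical override file (the keys must match ones defined in the chart's values.yaml):

# my-values.yaml (hypothetical override)
persistence:
  size: 20Gi

helm install stable/postgresql --name standalone -f my-values.yaml --dry-run --debug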
Even better, this way of rendering the chart locally and verifying its output can be found in a library that you can use for unit testing. Take a look at it, and if you find it useful, add it to your chart pipelines. Unfortunately, right now the library looks like it is not maintained anymore.
context is important for subcharts (image source: https://podfanatic.com)
This is a very important thing to grasp when using subcharts. To explain what the context means, we will take a look at the kong chart. Kong is a nice API gateway built on nginx, which can be extended with a series of plugins. Kong uses Postgres or Cassandra to keep its state, so they are listed as requirements in the chart.
If you look at the Postgres chart's templates helper file, you will see a few functions there that use the .Chart.Name value, and you can also see that the chart name is postgresql. So whenever we use {{ template "postgresql.fullname" . }} we should get a value with postgresql inside, because .Chart.Name is part of the function.
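The helper in question looks roughly like this (abridged from the stable/postgresql chart's _helpers.tpl):

{{- define "postgresql.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}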
But when the template is used inside the kong chart, with postgresql as a subchart, .Chart.Name will actually point to the kong chart name. That is the context now: we are no longer under postgresql, in our example we are under the kong chart. So using {{ template "postgresql.fullname" . }} will actually produce a name with kong inside (again, when used in the kong chart itself, not in the postgresql subchart). You can try it out with:
helm install stable/kong --name apigateway
Right now this issue is solved by adding another function with the postgresql name hardcoded. This means code duplication, and we have to remember not to use the function defined in the postgresql chart, but the hardcoded one instead. But until a better solution is provided by Helm, this is what the community uses.
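In the kong chart the workaround looks roughly like this (a sketch based on the pattern used in the stable charts; the exact helper name may differ):

{{- define "kong.postgresql.fullname" -}}
{{- printf "%s-%s" .Release.Name "postgresql" | trunc 63 | trimSuffix "-" -}}
{{- end -}}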
Check installation on a separate test namespace
Before saving the chart to your own chart museum, do a real installation of it in a test namespace. This Helm release should not affect any other applications running in the cluster, which is why it is good to first create a test namespace with a random name. After the installation is done, check that all the pods are in a Running state, that all the config maps and secrets are attached, that all ingress rules are connected to services, that resources and limits are defined, and so on.
Try also an upgrade of the current release with some different values, and check that the upgrade went OK, following the same rules as above.
At the end, clean up the release and also delete the test namespace. This is a good functional test that your chart is ready to be used by other developers. Most likely such a test needs to be part of your publish-to-chart-museum pipeline, along with the static analysis done by helm lint.
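A minimal sketch of such a check, using Helm 2 syntax to match the commands above (the names are placeholders):

NAMESPACE=chart-test-$RANDOM
kubectl create namespace $NAMESPACE
helm install ./test-chart --name test-release --namespace $NAMESPACE
kubectl get pods --namespace $NAMESPACE
helm upgrade test-release ./test-chart --values ci/ingress-enabled-values.yaml
helm delete test-release --purge
kubectl delete namespace $NAMESPACE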
Other useful helm commands
Here are a few helm commands that are useful, but you might not be very familiar with them:
helm template [LOCAL-CHART-NAME] -x templates/file.yaml
Useful for big charts, when the --debug and --dry-run flags produce a very large output that is hard to follow. Using -x with the path to the resource you want rendered makes checking and debugging easier.
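For example, to render only the deployment of the test-chart created earlier:

helm template ./test-chart -x templates/deployment.yaml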
helm history [RELEASE]
— you can use this to see the revisions of a release; we need it in order to be able to call helm rollback [RELEASE] [REVISION] in case something went wrong with our installation or upgrade.
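For example, using the apigateway release installed earlier (the revision number will depend on your release history):

helm history apigateway
helm rollback apigateway 1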
helm upgrade --reuse-values
— the --reuse-values flag is very useful when you want to make a small change to a release without running the helm install/upgrade again with all the parameters. For example, when you have a typo in your installation and you fix it, but just don't want to wait another 5 minutes for your pipeline to run.
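A quick sketch, reusing the release name from the earlier example and assuming the chart exposes an image.tag value (the tag here is a placeholder):

helm upgrade standalone stable/postgresql --reuse-values --set image.tag=11.5.0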
helm upgrade --install
— this is helpful especially when running the command in pipelines: you don't need to write code to check if the release already exists, it will do this for you. If the release is found, it will do an upgrade with the new values supplied; if there is no release, it installs it.
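So a pipeline can run the same idempotent command on every build, for example:

helm upgrade --install apigateway stable/kong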
helm install/upgrade --debug
— I know I already mentioned the --debug flag above when doing a fake installation, but it is so helpful that I will mention it again. It is important to add it to your command especially when running in a pipeline, because it allows you to see what actually happened during an install or upgrade. Without it the output is not very informative, and it doesn't help you much when trying to understand what went wrong in the pipeline and what values were supplied to the command.
helm install/upgrade --set name=sandbox
— you have the possibility to override values in an installation/upgrade by using the --set flag. This is useful for setting a value that is calculated during the pipeline, like the git commit hash. That said, I would not recommend relying on this flag, especially in your pipeline, because the value stays hidden from your repository. Instead, after calculating it, commit it to a values.yaml file in your repository and use that one. This way you have the history of changes in your repo and you follow the gitops approach.
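To contrast the two approaches (the release name, chart path and image.tag value are assumptions for the sake of the example):

# discouraged: the deployed tag is visible only in the pipeline logs
helm upgrade --install myapp ./myapp-chart --set image.tag=$GIT_COMMIT

# preferred: write the tag into a values file, commit it, then run
helm upgrade --install myapp ./myapp-chart -f values.yaml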
And that's it; now you should be better prepared for working with Helm charts. If you have other ideas about useful Helm features, do let me know and I will add them to the article (along with credits).
If you liked this and want more food for thought you can also register for my newsletter for people interested in Cloud Native technologies. I am sending it at least once a month.