Simplify your Kubernetes deployments
If you know what the title means, you’re likely looking to get right into the meat of this tutorial. I’ll keep the introduction brief.
Helm is a tool for templating Kubernetes resources and applying them to a Kubernetes cluster. It advertises itself as the “npm of k8s”, which is a description I have found thoroughly unhelpful. Instead, read this article — it explains it beautifully.
From this point onwards, I’m going to assume you’re familiar with what a “helm chart” is. If you’re not, read the linked article.
You’ve got some options when you want to deploy your applications with Helm. You either have a helm chart per application or a helm chart for a group of applications. For example, you either have your auth-service-helm-chart or your java-applications-helm-chart. What are the pros and cons of each?
Chart per application:
Pros: You can implement logic specific to an app or service within your chart.
Cons: If you’ve got microservices (and in k8s, chances are you do), you’re going to end up with a lot of disparate charts all over the place. Lots of repetition, and it’s difficult to manage at scale. Creating new applications also requires more effort: you’ve got to wire up a chart correctly each time.

Shared chart:
Pros: One chart is easier to manage, and your charts are all in one place (a chart repo).
Cons: You’re going to need to be very careful that specific applications don’t bleed into the shared chart’s logic. Everything needs to be generic.
As such, I opted for the shared chart approach. This created a new problem — where the hell do we host a chart? Enter the helm s3 plugin.
To configure your local helm CLI to use this plugin, run the following command:
helm plugin install https://github.com/hypnoglow/helm-s3.git
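Once that completes, you can confirm the plugin registered correctly:

```shell
# Lists installed helm plugins; "s3" should appear in the output.
helm plugin list
```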
A Helm repository needs an index.yaml at its root. You can use the helm CLI to initialise it, but it’s easier to wire this up with a spot of terraform. Instead of running two commands or depending on a correctly configured helm CLI, run one command and rely exclusively on terraform.
resource "aws_s3_bucket" "helm_central" {
  bucket = "my-helm-central-bucket"
  acl    = "private"
}

resource "aws_s3_bucket_object" "object" {
  bucket = aws_s3_bucket.helm_central.bucket
  key    = "charts/index.yaml"
  source = "/path/to/my/files/index.yaml"
}
This terraform requires a file called index.yaml in a local directory. The file I used looks like this:
apiVersion: v1
entries: {}
Note: this will create a “charts” key prefix where your bundled charts will go. Also, your S3 bucket will not be accessible from the internet; you’ll need to regulate access through IAM roles.
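For reference, the s3 plugin can also create that empty index for you. Assuming the bucket already exists, this is the one-command alternative to the terraform object above:

```shell
# Writes an empty index.yaml under the given prefix,
# marking it as a usable helm repository.
helm s3 init s3://my-helm-central-bucket/charts
```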
Now that you’ve got a bucket, you need to inform your local Helm CLI that the S3 bucket exists and is a usable Helm repository. To do this, make use of the s3 plugin:
helm repo add my-charts s3://my-helm-central-bucket/charts
Note: wherever you’re running the helm command from will need appropriate IAM access (read, write, or both) to this S3 bucket.
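As a sketch, that access could be expressed in terraform like this (the policy name and the choice to grant both read and write are assumptions; tighten it to read-only for consumers):

```hcl
# Hypothetical IAM policy for the chart bucket. Attach it to whichever
# role runs helm (a CI runner, a developer role, etc.).
data "aws_iam_policy_document" "helm_repo_access" {
  # Listing the bucket lets the plugin enumerate charts.
  statement {
    actions   = ["s3:ListBucket"]
    resources = [aws_s3_bucket.helm_central.arn]
  }

  # GetObject for pulling charts; PutObject for pushing them.
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["${aws_s3_bucket.helm_central.arn}/charts/*"]
  }
}

resource "aws_iam_policy" "helm_repo_access" {
  name   = "helm-repo-access"
  policy = data.aws_iam_policy_document.helm_repo_access.json
}
```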
Pull down an existing chart, package it up and push it to your new repository.
# This will download the tar.gz from the stable central repository.
helm fetch stable/rabbitmq

# This will push that tar.gz into your private repository.
helm s3 push rabbitmq-<version>.tgz my-charts
If that is successful, congratulations! You’ve just wired up your very own chart repository.
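To actually consume a chart from the new repository, something like the following should work (Helm 2 syntax, to match helm fetch above; the release name my-rabbit is just an example):

```shell
# Refresh your local cache of each repository's index.
helm repo update

# Install the rabbitmq chart from your private repository.
helm install my-charts/rabbitmq --name my-rabbit
```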
For more articles and general rambling about technology, follow me on Twitter!