In the last several years, Kubernetes has become the “go-to” standard for managing and orchestrating containerized workloads. Thanks to its vendor-agnostic nature, you can run Kubernetes almost anywhere, and in fact all the major cloud vendors offer a managed Kubernetes service (AWS EKS, Google GKE, and Azure AKS).
One of Kubernetes’ key advantages is how easily you can manage multiple environments and workloads in a single cluster by separating it into logical areas using namespaces. This post dives into how we can manage this with Terraform, using it both to provision the cluster and to manage the namespaces.
When using Kubernetes as a team, you usually want an isolated environment for each developer, branch, or pull request. There are a few ways to achieve that with Kubernetes: one is to create a full-blown cluster for each of those, but the approach we’re focusing on here is the Kubernetes namespaces feature.
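For example, a minimal sketch of carving out a namespace per developer with Terraform might look like this (the developer variable is purely illustrative):

variable "developer" {
  description = "Developer this namespace belongs to (illustrative)"
  type        = string
}

# Each developer gets an isolated logical area within the shared cluster
resource "kubernetes_namespace" "developer" {
  metadata {
    name = "dev-${var.developer}"
  }
}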
While the namespaces feature is quite powerful, it’s not without its complications, especially with a large team of developers: How many namespaces do I have on my cluster? Can I remove them? Does anyone use them? Can I schedule them to automatically shut down overnight and on weekends? Can I have policies on who can run what, and where? Fortunately, with a good management platform, a lot of this can be alleviated.
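To give a flavor of that last policy question, namespace-scoped RBAC can also be expressed in Terraform; here’s a hedged sketch using the kubernetes provider’s role resources (all names are illustrative):

# Limit what a developer can run inside a specific namespace (illustrative names)
resource "kubernetes_role" "developer" {
  metadata {
    name      = "developer-role"
    namespace = "dev-alice"
  }
  rule {
    api_groups = ["", "apps"]
    resources  = ["pods", "deployments"]
    verbs      = ["get", "list", "create", "delete"]
  }
}

resource "kubernetes_role_binding" "developer" {
  metadata {
    name      = "developer-binding"
    namespace = "dev-alice"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Role"
    name      = kubernetes_role.developer.metadata.0.name
  }
  subject {
    kind      = "User"
    name      = "alice"
    api_group = "rbac.authorization.k8s.io"
  }
}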
Kubernetes is a cloud-native technology deployed at wide scale, and it’s quite common to manage the provisioning of clusters and their underlying infrastructure using Terraform. Where you have many clusters to manage (Dev, Staging, Production, etc.), Terraform lets you maintain a consistent configuration for the cluster and underlying infrastructure while reliably creating as many clusters as you like from that same configuration.
Terraform’s multi-cloud approach lets you use whichever cloud provider you wish, including the native managed services I mentioned above.
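As a sketch of what that consistency looks like in practice, the same module configuration can simply be instantiated once per environment. The module source and inputs below are illustrative, assuming the community terraform-aws-modules/eks module:

# One module definition, stamped out per environment (inputs are illustrative)
module "eks_dev" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "dev-cluster"
  vpc_id       = var.dev_vpc_id
  subnets      = var.dev_subnet_ids
}

module "eks_staging" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "staging-cluster"
  vpc_id       = var.staging_vpc_id
  subnets      = var.staging_subnet_ids
}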
Similarly, there are a few ways to manage namespaces, but using Terraform is probably the best overall. The main advantages of using Terraform are:
- Use the same configuration language to provision the Kubernetes infrastructure and to deploy applications into it.
- Drift detection — `terraform plan` will always show you the difference between reality at a given time and the config you intend to apply.
- Full lifecycle management — Terraform doesn’t just create resources; it offers a single command for creating, updating, and deleting tracked resources without needing to inspect the API to identify those resources.
- Synchronous feedback — while asynchronous behaviour is often useful, sometimes it’s counter-productive, as the job of identifying operation results (failures or details of a created resource) is left to the user. For example, you don’t have the IP/hostname of a load balancer until it has finished provisioning, so you can’t create any DNS record pointing to it.
- Graph of relationships — Terraform understands relationships between resources, which can help in scheduling. For example, if a Persistent Volume Claim claims space from a particular Persistent Volume, Terraform won’t even attempt to create the PVC if creation of the PV has failed (see the sketch below).
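To illustrate that last point, here’s a hedged sketch of a PV/PVC pair; because the claim references the volume, Terraform orders their creation accordingly (names and the host_path source are illustrative):

resource "kubernetes_persistent_volume" "example" {
  metadata {
    name = "example-pv"
  }
  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes = ["ReadWriteOnce"]
    persistent_volume_source {
      host_path {
        path = "/tmp/example"
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "example-pvc"
  }
  spec {
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "1Gi"
      }
    }
    # Binding directly to the PV gives Terraform an explicit dependency edge:
    # if the PV fails to create, the PVC won't even be attempted
    volume_name = kubernetes_persistent_volume.example.metadata.0.name
  }
}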
I’ve taken the Terraform code from a simple AWS EKS Terraform example, popular for its simplicity, to provision a new EKS cluster in my AWS account.
To run the actual Terraform, I’m using env0, which allows me to quickly deploy and manage my environments based on Terraform templates. If you’re just getting started with env0, you should first get your organization set up and create a new template (check out the getting started documentation for help there).
Now we run the template with env0, which provisions the EKS cluster, and grab the outputs:
Now that our EKS cluster is set up, we can create a new template for the namespaces and our deployment using the cluster name from the outputs.
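The plumbing between the two templates is just an output on the cluster side and a variable on the namespace side; a minimal sketch (exactly how the cluster template computes its name will vary):

# In the EKS cluster template: expose the cluster name as an output
output "cluster_name" {
  value = local.cluster_name # illustrative; reference however your template names the cluster
}

# In the namespace template: accept the cluster name as a variable
variable "cluster_name" {
  type = string
}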
First, we use the following code to create a new deployment with an nginx container, and to authenticate against the Kubernetes cluster that we’ve created:
resource "kubernetes_deployment" "nginx" {
metadata {
name = "scalable-nginx-example"
labels = {
App = "ScalableNginxExample"
}
namespace = random_string.namespace_name.result
}
spec {
replicas = 1
selector {
match_labels = {
App = "ScalableNginxExample"
}
}
template {
metadata {
labels = {
App = "ScalableNginxExample"
}
}
spec {
container {
image = "nginx:1.7.8"
name = "example"
port {
container_port = 80
}
resources {
limits {
cpu = "0.5"
memory = "512Mi"
}
requests {
cpu = "250m"
memory = "50Mi"
}
}
}
}
}
}
}
provider "aws" {
version = "~> 2.0"
region = var.region
}
data "aws_eks_cluster" "eks_cluster" {
name = var.cluster_name
}
data "aws_eks_cluster_auth" "eks_cluster_auth" {
name = var.cluster_name
}
provider "kubernetes" {
host = data.aws_eks_cluster.eks_cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.eks_cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.eks_cluster_auth.token
load_config_file = false
}
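The deployment above places itself in a namespace named by random_string.namespace_name, and the template is also responsible for creating that namespace; a minimal sketch of those two pieces might look like this:

# Generate a unique, DNS-safe name for the ephemeral namespace
resource "random_string" "namespace_name" {
  length  = 8
  special = false
  upper   = false
}

# Create the namespace that the nginx deployment above is placed in
resource "kubernetes_namespace" "example" {
  metadata {
    name = random_string.namespace_name.result
  }
}

In practice you may prefer to point the deployment’s namespace attribute at kubernetes_namespace.example.metadata.0.name instead of the random string directly, so Terraform knows the namespace must exist before the deployment is created.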
So now let’s create (and run) a new template using that Terraform code, which will create the namespace and run the deployment:
Running kubectl to get the namespace and the deployment, we can see the resources we’ve created inside the cluster:
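For example (the namespace name is whatever random value your run produced):

kubectl get namespaces
kubectl get deployments --namespace <your-random-namespace>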
You can see that we use the aws_eks_cluster and aws_eks_cluster_auth data sources to get the authentication data, which is what lets us create the namespace and the deployment in the cluster.
Another way to achieve this is to import the state of the Kubernetes cluster you created, but we took the simpler approach here. You can read more about it here.
So now that we have a template, we can create as many namespaces and deployments as we want, as well as leverage some of the more advanced features of our management platform.
Scheduling
First, I want to make sure those resources aren’t running overnight and on weekends, so let’s set up scheduling for those namespaces:
Now, env0 will automatically destroy those namespaces at midnight on each workday and redeploy them at 9 AM when I start my workday.
You can even do the same for the entire Kubernetes cluster to save even more money.
Dev Team Environments
Now that we have an easy, self-service way to provision environments, we want to set up appropriate permissions so everyone on the team can deploy their own environments. In most cases, we want the EKS cluster itself to be managed by the DevOps team, while the rest of the dev team provisions their own individual environments.
You can achieve this by managing different projects with different permissions for each user.
So I will give our DevOps team the ability to provision the Kubernetes cluster by assigning them a “Deployer” role, while the developers are assigned the “Planner” role, so they can request changes to the cluster but need the DevOps team’s approval to apply them.
For the ephemeral namespaces project, the developers can have a “Deployer” role, so they can create environments whenever they like, while scheduling keeps everything cost-effective and ensures no resource is left behind.
Even though this is a simple example of provisioning an EKS cluster and a deployment into it, we’ve struck a balance between self-service access to resources, governance, and cost-efficient management of those cloud resources, without losing control.
In addition, you can achieve this with any other managed Kubernetes service, like AKS or GKE, or even a self-managed cluster.
You can also leverage env0’s API and CLI to fully automate a per-pull-request environment, so each PR gets its own namespace and deployment, and once the PR is merged to master, those resources are automatically torn down. You can read more about this concept in a great blog post by Avner Sorek.
There is a lot more you can achieve with Kubernetes and Terraform using env0, so go ahead and try it yourself.
Previously published at https://www.env0.com/blog/kubernetes-environments-using-namespaces-and-terraform