Getting Started Provisioning an AWS EKS Kubernetes Cluster with Terraform

by Jacob Martin, January 7th, 2022

AWS EKS provides managed Kubernetes clusters as a service. If you're on AWS and want to avoid getting into the details of setting up a Kubernetes cluster from scratch, EKS is the way to go!

In this guide, you will learn how to provision an AWS EKS Kubernetes cluster with Terraform. Let’s start with the basics.


Step 1 - Install Terraform Locally

First, install Terraform locally (the commands below assume Homebrew on macOS):

brew install terraform


as well as the AWS CLI:

brew install awscli


and kubectl:

brew install kubernetes-cli


If you’re on a different operating system, you can find the respective installation instructions in the official documentation for Terraform, the AWS CLI, and kubectl.
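
Once everything is installed, you can verify that all three tools are available on your PATH:

terraform version
aws --version
kubectl version --client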



Step 2 - Configure the AWS CLI

Now, you’ll need to configure your AWS CLI with access credentials to your AWS account. You can do this by running


aws configure


and providing your Access Key ID and Secret Access Key. You will also be asked for a default region; for the purposes of this guide, we will use us-east-2. Terraform will later use these credentials to provision your AWS resources.
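
The interactive prompts look like this (the key values below are the placeholder examples from the AWS documentation, not real credentials):

> aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-2
Default output format [None]: json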


Step 3 - Get the Code

You can now clone HashiCorp's example repository, which contains everything you need to set up EKS:


git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster/


Inside you’ll see a few files, the main one being eks-cluster.tf:


module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.cluster_name
  cluster_version = "1.20"
  subnets         = module.vpc.private_subnets

  tags = {
    Environment = "training"
    GithubRepo  = "terraform-aws-eks"
    GithubOrg   = "terraform-aws-modules"
  }

  vpc_id = module.vpc.vpc_id

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                          = "worker-group-1"
      instance_type                 = "t2.small"
      additional_userdata           = "echo foo bar"
      asg_desired_capacity          = 2
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
    },
    {
      name                          = "worker-group-2"
      instance_type                 = "t2.medium"
      additional_userdata           = "echo foo bar"
      additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
      asg_desired_capacity          = 1
    },
  ]
}


It uses the EKS Terraform module to set up an EKS cluster with two worker groups (the actual nodes running your workloads): one with two small machines and one with a single medium machine.
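
The eks module also references module.vpc, which is defined in the repository's vpc.tf and creates the network the cluster runs in. Simplified, it looks roughly like this (the name and CIDR ranges reflect the example repository at the time of writing and may differ in the current version; the EKS-specific subnet tags are omitted for brevity):

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "education-vpc"
  cidr = "10.0.0.0/16"

  # Spread the subnets across the region's availability zones.
  azs             = data.aws_availability_zones.available.names
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

  # Worker nodes live in the private subnets and reach the
  # internet through a single shared NAT gateway.
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
}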


Step 4 - Run Terraform

You can now create all of those resources using Terraform. First, run


terraform init -upgrade


to initialize the Terraform workspace and download any modules and providers which are used.


In order to do a dry run of the changes to be made, run


terraform plan -out terraform.plan


This will show you that 51 resources will be added, as well as their relevant details. You can then run terraform apply with the resulting plan to actually provision the resources:


terraform apply terraform.plan


This may take a few minutes to finish. You might get a “timed out” error, in which case just repeat both the terraform plan and terraform apply steps.


In the end, you will get a list of outputs with their respective values printed out. Make note of your cluster_name.
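
If you lose track of it, you can print any output again later with the standard terraform output command:

terraform output cluster_name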


Step 5 - Connect with Kubectl

In order to use kubectl, which is the main tool for interacting with a Kubernetes cluster, you have to give it credentials for your EKS cluster. You can do this by running


aws eks --region us-east-2 update-kubeconfig --name <output.cluster_name>


Make sure to replace <output.cluster_name> with the relevant value from your Terraform apply outputs.
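
To confirm that kubectl is now pointing at the new cluster, you can check the active context and the control plane endpoint:

kubectl config current-context
kubectl cluster-info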


Step 6 - Interact with Your Cluster

You can now view the nodes of your cluster by running


> kubectl get nodes -o custom-columns=Name:.metadata.name,nCPU:.status.capacity.cpu,Memory:.status.capacity.memory
Name                                       nCPU   Memory
ip-10-0-1-23.us-east-2.compute.internal    2      4026680Ki
ip-10-0-2-8.us-east-2.compute.internal     1      2031268Ki
ip-10-0-3-128.us-east-2.compute.internal   1      2031268Ki


The command is this long because it defines custom columns; with them, we can see that there are indeed two smaller nodes and one bigger node.


Let’s deploy an Nginx instance to see if the cluster is working correctly.


kubectl run --port 80 --image nginx nginx


You can check its status by running:


> kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m46s


And finally set up a tunnel from your computer to this pod:


kubectl port-forward nginx 3000:80


If you open http://localhost:3000 in your browser, you should see the default Nginx welcome page greet you.
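
Once you're done exploring, stop the tunnel with Ctrl+C and delete the test pod:

kubectl delete pod nginx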


Step 7 - Clean up

In order to destroy the resources we’ve created in this session, run


terraform destroy


Terraform will list everything it is about to delete and ask for confirmation; type yes to proceed. This may again take up to a few minutes.


I hope this guide helped you on your Kubernetes journey on AWS!

