Building an AWS EKS Fargate Cluster with Terraform


by Anadi Misra, January 20th, 2022

Too Long; Didn't Read

Fargate is a service by AWS for running serverless workloads on Kubernetes. We use the AWS EKS Terraform module to deploy the EKS cluster. Terraform is an open-source Infrastructure as Code tool by HashiCorp that lets you define AWS infrastructure via a descriptive DSL. We'll start by deploying the Amazon VPC via Terraform; there are three recommended approaches for deploying a VPC to run EKS. There's additional configuration on the `kubectl` side as well, which we'll skip in this blog.


Fargate is a service by AWS for running serverless workloads on Kubernetes. With Fargate you don't have to manage VMs as cluster nodes yourself; each pod is provisioned as a node by Fargate itself. It differs from Lambda in that you still manage the Kubernetes cluster and the runtime for all the workloads you run in it. That said, I believe it's most suitable for teams that run containerized microservices and want to do away with managing Kubernetes infrastructure themselves.

While Lambda prices on a combination of requests, CPU, and memory, Fargate pricing is based on just the CPU and memory of the nodes running in the cluster, in addition to a fixed monthly cost for the EKS cluster itself. If you want to go serverless without vendor lock-in, Fargate is a good option; hence we at Digité prefer running our microservices in the Fargate model.

Managing such an infrastructure manually is certainly not feasible, so we rely on Infrastructure as Code (IaC) to manage and operate it. Terraform has been our tool of choice for various reasons, from ease of learning to its robust design. Terraform is an open-source IaC tool by HashiCorp that lets you define AWS infrastructure via a descriptive DSL, and it has been popular in the DevOps world since its inception.

In this blog, I'll share how we've used Terraform to deploy an EKS Fargate cluster.

VPC

We'll start by deploying the Amazon VPC via Terraform. There are three recommended approaches for deploying a VPC to run EKS Fargate; let's look at each of them:

  • Public and Private Subnets: the pods run in private subnets while load balancers, whether Application or Network, are deployed in the public subnets. One public and one private subnet is deployed in each availability zone of the region for availability and fault tolerance. This is the deployment model we'll follow in this blog.
  • Public Subnets Only: both the pods (or nodes) and the load balancers run in public subnets; three public subnets are deployed in three different availability zones within the region. All nodes get a public IP address, and a security group restricts inbound and outbound traffic to the nodes. To be honest, I've never figured out why anyone would need this :-)
  • Private Subnets Only: both pods and load balancers run in private subnets only, one created in each availability zone of the region. Naturally, you have to configure a NAT Gateway, an Egress-Only Gateway, a VPN, or Direct Connect to be able to access the cluster. There's additional configuration on the `kubectl` side as well, which we'll skip in this blog.

VPC subnets need certain tags that allow EKS Fargate to deploy internal load balancers to them and provision nodes; let's look at the tags first. Both public and private subnets need the cluster tag (replace `cluster-name` with the name of your cluster):

  • Key: `kubernetes.io/cluster/cluster-name`
  • Value: `shared`

The following tags let EKS Fargate decide where auto-provisioned Elastic Load Balancers are deployed, and also allow you to control whether Application or Network Load Balancers are configured in public or private subnets:

  • Private subnets:
    • Key: `kubernetes.io/role/internal-elb`
    • Value: `1`
  • Public subnets:
    • Key: `kubernetes.io/role/elb`
    • Value: `1`

The VPC configuration, therefore, is as follows. We'll use the AWS VPC Terraform module for this purpose, as it offers easier configuration via declarative properties instead of having to write all the resources yourself.

module "vpc" {
  source                        = "terraform-aws-modules/vpc/aws"
  version                       = "3.4.0"
  name                          = "vpc-serverless"
  cidr                          = "176.24.0.0/16"
  azs                           = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets               = ["176.24.1.0/24","176.24.3.0/24","176.24.5.0/24"]
  public_subnets                = ["176.24.2.0/24","176.24.4.0/24","176.24.6.0/24"]
  enable_nat_gateway            = true
  single_nat_gateway            = true
  enable_dns_hostnames          = true
  manage_default_security_group = true
  default_security_group_name   = "vpc-serverless-security-group"
  
  public_subnet_tags = {
    "kubernetes.io/cluster/vpc-serverless" = "shared"
    "kubernetes.io/role/elb"               = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/vpc-serverless" = "shared"
    "kubernetes.io/role/internal-elb"      = "1"
  }

  tags = {
    "kubernetes.io/cluster/vpc-serverless" = "shared"
  }

}

There's a single NAT gateway managing all outbound traffic from the nodes running in the private subnets. We also have to keep the `enable_dns_hostnames` option set to `true` so that any ALBs we configure in the future can be assigned hostnames for canonical DNS mapping.

EKS Cluster

We'll use the AWS EKS Terraform module to deploy the EKS Fargate cluster. A basic configuration looks like this:

module "eks-cluster" {
  source                        = "terraform-aws-modules/eks/aws"
  version                       = "17.1.0"
  cluster_name                  = "eks-serverless"
  cluster_version               = "1.21"
  subnets                       = flatten([module.vpc.public_subnets, module.vpc.private_subnets])
  cluster_delete_timeout        = "30m"
  cluster_iam_role_name         = "eks-serverless-cluster-iam-role"
  cluster_enabled_log_types     = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
  cluster_log_retention_in_days = 7

  vpc_id = module.vpc.vpc_id

  fargate_pod_execution_role_name = "eks-serverless-pod-execution-role"
  // Fargate profiles here
}

Fargate Profiles and CoreDNS

A basic configuration like the one above will deploy the EKS cluster; however, you need to create Fargate profiles, which define which pods run on Fargate. A Fargate profile defines selectors and a namespace to run the pods in, along with optional tags. You also have to set a pod execution role name, which allows the EKS infrastructure to make AWS API calls on the cluster owner's behalf. A Fargate profile can have up to five selectors.
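For illustration, here's what a profile for application workloads could look like inside the module block. The profile name, the `applications` namespace, and the `app = "backend"` label are hypothetical placeholders, not values from the cluster above:

```hcl
# Hypothetical example profile -- namespace, name, and label are
# placeholders; adapt them to your own workloads
fargate_profiles = {
  applications-fargate-profile = {
    name = "applications"
    selectors = [
      {
        # only pods in this namespace carrying this label run on Fargate
        namespace = "applications"
        labels = {
          app = "backend"
        }
      }
    ]
    subnets = module.vpc.private_subnets
  }
}
```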

While Fargate takes care of provisioning nodes as pods for the EKS cluster, the cluster still needs a component that provides DNS-based service discovery between its workloads. CoreDNS is that plugin for EKS Fargate and, like any other workload, it needs a Fargate profile to run. So we'll add both the plugin and the profile configuration to our Terraform code.

First, let's update the profile configuration

fargate_profiles = {
    coredns-fargate-profile = {
      name = "coredns"
      selectors = [
        {
          namespace = "kube-system"
          labels = {
            k8s-app = "kube-dns"
          }
        },
        {
          namespace = "default"
        }
      ]
      subnets = module.vpc.private_subnets
    }
  }

We're essentially saying: run the pods in the `kube-system` namespace that carry the label `k8s-app = kube-dns` (that is, CoreDNS) on Fargate. Let's also add the CoreDNS add-on to the configuration:

resource "aws_eks_addon" "coredns" {
  addon_name        = "coredns"
  addon_version     = "v1.8.4-eksbuild.1"
  cluster_name      = "eks-serverless"
  resolve_conflicts = "OVERWRITE"
  depends_on        = [module.eks-cluster]
}
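One operational note: on a Fargate-only cluster, the CoreDNS deployment ships with an `eks.amazonaws.com/compute-type: ec2` annotation that can leave its pods pending, since there are no EC2 nodes to schedule them on. If you hit this after applying, removing the annotation lets the pods schedule onto Fargate (this assumes `kubectl` is already pointed at the new cluster):

```shell
# Remove the compute-type annotation so CoreDNS pods can schedule on Fargate;
# ~1 is the JSON-Pointer escape for "/" in the annotation key
kubectl patch deployment coredns \
  -n kube-system \
  --type json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
```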

Conclusion

At this stage the setup is simple enough to bundle into a single module. Here's what the file structure looks like:

cluster
├── main.tf
├── outputs.tf
├── providers.tf
├── terraform.tf
├── terraform.tfvars
├── variables.tf
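As a sketch, `outputs.tf` can expose the handful of values you'll typically need downstream; the output names on the right-hand side assume the module names and versions used above (VPC 3.4.0, EKS 17.1.0):

```hcl
# outputs.tf -- sketch; assumes the module names and versions used above
output "vpc_id" {
  value = module.vpc.vpc_id
}

output "private_subnets" {
  value = module.vpc.private_subnets
}

output "cluster_id" {
  value = module.eks-cluster.cluster_id
}

output "cluster_endpoint" {
  value = module.eks-cluster.cluster_endpoint
}
```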

The `providers.tf` file defines the AWS provider, taking AWS CLI credentials as variables that you declare in the `variables.tf` file.
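A minimal `providers.tf` along these lines might look as follows; the variable names (`region`, `access_key`, `secret_key`) are assumptions and must match whatever you declare in `variables.tf`:

```hcl
# providers.tf -- sketch; the variable names are assumptions, declare
# them in variables.tf and set their values in terraform.tfvars
provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}
```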

If you're saving Terraform state in a remote backend, you can define its configuration in the `terraform.tf` file:

terraform {

  backend "s3" {
    bucket         = "swiftalk-iac"
    dynamodb_table = "swiftalk-iac-locks"
    key            = "vpc/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
  }
}

That's it! Run `terraform init` followed by `terraform apply` to get an EKS Fargate cluster up and running in minutes!
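The end-to-end workflow from this directory looks something like the following; the `update-kubeconfig` step assumes the cluster name and region used above:

```shell
terraform init    # download providers/modules and configure the S3 backend
terraform plan    # review what will be created
terraform apply   # create the VPC and the EKS Fargate cluster

# point kubectl at the new cluster (name and region from the config above)
aws eks update-kubeconfig --name eks-serverless --region us-east-1
```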
