# AWS EKS + Terraform + Cloudskiff do the job

In this article I explain how to spin up an AWS EKS cluster in 2 min of work, and get Terraform code out of it for reproducibility and easy cleanup. That's done with Cloudskiff, a CI/CD for infrastructure as code.

Setting up new environments in EKS is a little tedious, and requires a lot of point-and-click work if you do it through the console. Plus if something messes up, or you just want to shut it all down, you end up with a shitload of work cleaning up your AWS account and getting rid of now-useless services. And AWS didn't make that simple (who designed that CLI again? And no, you can't delete your VPC, there's a NAT gateway attached to it. And no, there is no automated cleanup function).

The AWS team doesn't really want to add easy cleanup functions :) https://github.com/aws/aws-cli/issues/1721

Describe everything as Terraform code, and you get a really easy way to deploy your new dev environment, a way that is reproducible and easy to clean up. And it makes it simpler to do things cleanly, with your environment neatly set up in a VPC for isolation.

## Enter Terraform

Writing, optimizing and running Terraform code is a little tricky, and if you have your infra described as code, you might as well manage it in a CI/CD system like any other code. Right?

That's why Cloudskiff is building a CI/CD for infrastructure as code:

- Day 1: makes getting started with infrastructure as code more approachable.
- Day 2+: streamlines versioning, acts as the central place for automation, and enables collaboration around your templates and deployments.

We're talking about AWS here, but Cloudskiff connects to other cloud providers too.

So let's dive into it. Start the timer, and let's see how we launch a small dev cluster in 2 min of work. Cloudskiff will also generate basic but clean Terraform code for you, that you can then reuse and upgrade to evolve your environment.

## 1. Create a Cloudskiff account

Easy, it's here.
## 2. Create a Cloudskiff IAM user in your AWS account

Sign into the AWS management console, then create a new AWS IAM user for Cloudskiff. I called mine cloudskiff.

Hit Add user, then select Programmatic access.

We will create a new set of policies for this user to secure things up. Cloudskiff needs access to EKS, EC2 and IAM. I created an easy, copy-paste-friendly permission set right there [regularly updated permissions here].

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*", "eks:*", "autoscaling:*"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["iam:*"],
      "Resource": ["*"]
    }
  ]
}
```

You don't need to add tags. Let's create this user now. Once you've created your user, save your access key and secret key, we'll need them soon.

## 3. Add AWS to Cloudskiff

Great! We've created a new cloudskiff IAM user. Now let's grant the Cloudskiff platform access using that user.

- Open the Cloudskiff app
- Navigate to the Integrations tab on the left
- Select AWS
- Enter your credentials, select your favourite AWS region
- Save.

Keep the keys handy, we'll need them later to configure our local AWS profile.

## 4. Add permissions on a new infra-as-code Github repo

Cloudskiff will generate Terraform code for your infrastructure and save it in your repo. So we need to create a Github repo that we want it to push to.

- Create a new private Github repository. Let's call it cloudskiff-dev-eks
- Go to Cloudskiff's integrations tab and select Github
- Connect your Github account

Note: Cloudskiff only needs access to the specific repo where your Terraform code will be stored.

## 5. Cool. Let's deploy an EKS cluster

The setup is complete. You'll only have to do steps 1, 2, 3, 4 once. Now let's see how we can launch an EKS cluster.

Move to the Cloudskiff dashboard. That's where you will monitor all your clusters, and launch new ones. Hit New Project.

Select Templates. Templates are preconfigured EKS clusters that help you get started. You still have access to the Terraform behind it.
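If you like to double-check things in code, here is a small local sketch (my own addition, not part of the tutorial: the `allowed_actions` helper is hypothetical) that parses the policy document above and confirms it grants the four services Cloudskiff needs:

```python
import json

# The permission set from the tutorial: Cloudskiff needs EC2, EKS,
# autoscaling and IAM access.
POLICY = """{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": ["ec2:*", "eks:*", "autoscaling:*"], "Resource": "*"},
    {"Effect": "Allow", "Action": ["iam:*"], "Resource": ["*"]}
  ]
}"""

def allowed_actions(policy_json):
    """Collect every action the policy explicitly allows."""
    actions = set()
    for stmt in json.loads(policy_json).get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        acts = stmt.get("Action", [])
        # IAM allows "Action" to be a single string or a list of strings
        actions.update([acts] if isinstance(acts, str) else acts)
    return actions

required = {"ec2:*", "eks:*", "autoscaling:*", "iam:*"}
missing = required - allowed_actions(POLICY)
print("missing actions:", missing or "none")  # → missing actions: none
```

It also catches malformed JSON early (a stray comma in a hand-edited policy fails at `json.loads` instead of at user-creation time in the console).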
- Pick a name for your project
- Select AWS as the provider
- Select your usual region
- We'll deploy a small cluster of t3.nano machines scaling between 1 and 3. You can always come back to it and launch something more serious afterwards :)
- Enter your ssh public key (`cat ~/.ssh/id_rsa.pub` to get it real quick on most systems). It should look like `ssh-rsa BLABLABLA`.
- Select your brand new cloudskiff-dev-eks repo
- Hit Save

You should land back on your dashboard, and tadaaaam: our project is there.

## 6. Hit Deploy!

Our project will start, and we can monitor the progress in the Logs.

## 7. Relax and check out the Terraform code we've generated

See that github logo? Hit it and you'll land on your cloudskiff-dev-eks repository. The Terraform code that is executing right now has been stored on that repo. That means it is versioned, traced, and in case there is trouble you can roll back to older versions. GitOps becomes easier.

Meanwhile, AWS is doing its thing, starting the EKS cluster, VPCs and autoscaling groups described in this Terraform.

I am guessing you already use AWS routinely and have the AWS CLI set up. If not, let's take a look at that.

## 8. Setup your local environment

All you need to do is:

1. Create a cloudskiff profile in your AWS credentials file, so that you can access your machines with your IAM.

```
[default]
aws_access_key_id = ..     # you probably already have something here
aws_secret_access_key = .. # here too

[cloudskiff]               # Create this
aws_access_key_id = ..     # told you we'd need that later
aws_secret_access_key = ..
```

2. Set your local environment variables `$AWS_PROFILE` and `$KUBECONFIG`. `KUBECONFIG` should contain the path to your kubeconfig file. We will download it from Cloudskiff later, so let's just prepare a folder to save it, for example `$HOME/code/cloudskiff/config/aws`, and define `KUBECONFIG` to point to the kubeconfig file.
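If you set up machines often, the credentials-file step can be scripted. A minimal sketch of my own (the `add_cloudskiff_profile` helper is not part of Cloudskiff, and I'm assuming `configparser`'s INI handling is close enough to the AWS credentials format, which it is for simple files like this one):

```python
import configparser
import os
import tempfile

def add_cloudskiff_profile(credentials_path, access_key, secret_key):
    """Add (or update) a [cloudskiff] profile in an AWS-style credentials
    file, leaving any existing profiles untouched."""
    config = configparser.ConfigParser()
    config.read(credentials_path)  # silently skips a missing file
    if not config.has_section("cloudskiff"):
        config.add_section("cloudskiff")
    config["cloudskiff"]["aws_access_key_id"] = access_key
    config["cloudskiff"]["aws_secret_access_key"] = secret_key
    with open(credentials_path, "w") as f:
        config.write(f)

# Demo against a scratch file; for real use, point it at ~/.aws/credentials
# and pass the keys you saved in step 2.
demo_path = os.path.join(tempfile.mkdtemp(), "credentials")
add_cloudskiff_profile(demo_path, "AKIA..", "..")
print(open(demo_path).read())
```

Because the existing file is read back before writing, your `[default]` profile survives the edit.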
```shell
export AWS_PROFILE=cloudskiff
# the path to where you will store your kubeconfig file
export KUBECONFIG=$HOME/code/cloudskiff/config/aws/kubeconfig-dev-cluster
```

## 9. Connect

Wait a few minutes (10–15) for AWS to assign resources. At some point, your project will be deployed (you will see a green ball in the UI).

1. Get the kubeconfig from the Cloudskiff dashboard
2. Rename it to kubeconfig-dev-cluster. Then move it to `~/code/cloudskiff/config/aws/kubeconfig-dev-cluster`, or the place of your choosing, as long as it matches your `$KUBECONFIG`
3. Check: `echo $AWS_PROFILE; echo $KUBECONFIG`. It should output something like:

```
cloudskiff
~/code/cloudskiff/config/aws/kubeconfig-dev-cluster
```

4. Now run `kubectl get nodes`, or `k9s` if you prefer. You're in! Cluster deployed!

## (10. Destroy)

To destroy your cluster and clean up everything, well: just hit the Destroy button on Cloudskiff. Everything will be cleaned up automatically and ready for a re-deploy!

## Debrief

Reading this, you might think I took more than 2 min, because I sprayed screenshots everywhere. Thinking about it, most of the things I did were just one-off for setup:

1. (only once) Create a Cloudskiff account
2. (only once) Create an IAM user
3. (only once) Add it to Cloudskiff
4. (only once) Add permissions on a new infra-as-code Github repo
5. Select 5 options
6. Press a Deploy button
7. (only once) Make sure my local AWS profile was configured
8. Get a kubeconfig
9. Connect!

Only 5, 6, 8 and 9 are steps you need to do for each deployment, and they are mostly buttons to press or single lines of command.

I hope you liked that! We haven't looked in detail at the Terraform code together, so I will keep that for an upcoming post.

Don't hesitate to reach out about this tutorial! I am a Product Manager at Cloudskiff, and write about infra in my spare time.
You can find other posts on Venturebeat and Twitter. To contact me, use <my-weird-first-name>@cloudskiff.com

Previously published at https://www.cloudskiff.com/how-to-launch-eks-cluster-terraform