If you're reading this, you probably know what a CI/CD pipeline is. If you don't, no big deal -- it's just yet another acronym. CI/CD is short for "continuous integration and continuous delivery", which, when we boil it down, simply means that everything between your git commit and a deployment happens auto-magically so you can rest your fingers and watch some Netflix. In this example, I'm going to focus on deploying a static HTML website, but you could really do this for any sort of web application front-end. We'll be highlighting how to do this using AWS, Terraform, and GitLab.

Let me just explain real quick why we'd want to do this. First, it makes your life easier. You write the code, make a commit, push, and your pipeline deletes old files from your S3 bucket, uploads the new files, sets permissions, and then invalidates your CloudFront distribution, so all you have to do is go look at your changes in production. Second, it removes human error. You can't forget to upload files or invalidate a CloudFront distribution if your code does it for you. Third, it speeds up your deployments, because you no longer have to do everything manually. For all of these reasons, I set up a pipeline for every one of my personal and professional projects. Enough of me rambling; let's get to work.

## Prerequisites

- A GitLab account
- An AWS account
- Terraform is installed
- A Keybase account

## Set up the infrastructure

I like to make things as easy and reproducible as possible, so we'll be using Terraform to build out the infrastructure. If you don't have Terraform installed, then you didn't read the prerequisites (tsk tsk...) and I'm going to need you to install it.

For the website's infrastructure, all we'll need is an S3 bucket and a CloudFront distribution. We'll just use the default CloudFront certificate, since we aren't connecting this distribution to our domain name in this example. Before we go any further, I would recommend creating a repository for all of your Terraform configs.
It will just keep things more organized if you decide that you want to continue using Terraform to configure your infrastructure. Anyways... create a file named `main.tf` and paste this into it. You can change the `bucket_name` variable to whatever you want; just make sure you set it consistently later on in another file (you'll see, as I've highlighted where it needs to be adjusted in the code snippet).

```hcl
variable "bucket_name" {
  default = "website.example.com" // change this
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "bucket" {
  bucket = "${var.bucket_name}"
  acl    = "private"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${var.bucket_name}/*"
    }
  ]
}
EOF

  website {
    index_document = "index.html"
    error_document = "index.html"
  }
}

locals {
  s3_origin_id = "S3-${var.bucket_name}"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.bucket.bucket_regional_domain_name}"
    origin_id   = "${local.s3_origin_id}"
  }

  wait_for_deployment = false
  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  price_class = "PriceClass_100"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  custom_error_response {
    error_code            = 403
    error_caching_min_ttl = 0
    response_code         = 200
    response_page_path    = "/index.html"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```

Now run the config to create the infrastructure:

```shell
terraform init
terraform apply -auto-approve
```

Awesome!
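As a side note -- this block is my own addition, not part of the config above -- you can also have Terraform print the distribution's domain name after the apply, so you can browse the site immediately without digging through the AWS console. The output name `cloudfront_domain` is just my choice:

```hcl
// Optional addition to main.tf: expose the CloudFront URL after `terraform apply`.
output "cloudfront_domain" {
  value = "${aws_cloudfront_distribution.s3_distribution.domain_name}"
}
```

After the next `terraform apply`, the domain shows up in the outputs, and you can also fetch it any time with `terraform output cloudfront_domain`.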
All of our infrastructure is set up in AWS, and now we just need to set up our GitLab runner!

## Configure our Runner

For this part, you'll need a GitLab repository to work with, so create a new one before proceeding. Now that you have a repository, let's set up our runner. We're going to use a shared runner from GitLab. They're free to use for up to 2,000 minutes of deployments per month -- and they're enabled by default. I've found that they're somewhat slow, and it stinks that you're throttled at 2,000 minutes, so I usually use my own runners spun up in Kubernetes or on EC2 instances, but let's save that for another tutorial.

We need to supply the runner with an AWS IAM user it can use to deploy to S3. Create another Terraform config with this content:

```hcl
variable "keybase_user" {
  description = "A keybase username to encrypt the secret key output."
  default     = "dannextlinklabs" // change this
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_iam_user" "gitlab_ci" {
  name = "gitlab-ci"
}

resource "aws_iam_access_key" "gitlab_ci" {
  user    = "${aws_iam_user.gitlab_ci.name}"
  pgp_key = "keybase:${var.keybase_user}"
}

resource "aws_iam_user_policy" "gitlab_ci" {
  name = "gitlab-ci-policy"
  user = "${aws_iam_user.gitlab_ci.name}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::website.example.com/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:*",
      "Resource": "*"
    }
  ]
}
EOF
}

output "access_key" {
  value = "${aws_iam_access_key.gitlab_ci.id}"
}

output "secret_access_key" {
  value = "${aws_iam_access_key.gitlab_ci.encrypted_secret}"
}
```

The `Resource` in the second policy statement needs to be set to your bucket name. Also make sure you set the keybase user to your own Keybase username, because Terraform uses that to encrypt the IAM user's secret access key.
This won't work if you don't have a Keybase account, or if you try to use my Keybase account. Run the config:

```shell
terraform init
terraform apply -auto-approve
```

The Terraform config returns an access key and an encrypted secret key for this user. We need to decrypt the secret key with this command (this is why you needed to use your own Keybase user):

```shell
terraform output secret_access_key | base64 --decode | keybase pgp decrypt
```

Now that we have the access key and the secret key for our GitLab user, we just need to supply these to our runner by adding them to the variables section in the CI/CD settings. We need to set three variables in GitLab:

- `AWS_ACCESS_KEY_ID` - the access key that Terraform returned to us
- `AWS_SECRET_ACCESS_KEY` - the secret key we just decrypted
- `AWS_DEFAULT_REGION` - us-east-1

## Create our super-duper simple website

We have an empty repository set up with our GitLab runner enabled, so let's give it something to deploy. Create an `index.html` like this:

```html
<html>
  <body>
    <h1>Super cool website!</h1>
    <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>
  </body>
</html>
```

Commit that and push it to the repo, and that will be all we need for the "design".

## Run a Deployment

GitLab CI/CD is based around a file called `.gitlab-ci.yml`. GitLab requires that file to be located in the root of your project.
Our file needs to look like this:

```yaml
stages:
  - deploy-s3
  - deploy-cf

variables:
  AWS_BUCKET: website.example.com # change this if necessary

deploy_s3:
  image: python:3.6
  stage: deploy-s3
  tags:
    - docker
    - gce
  before_script:
    - pip install awscli -q
  script:
    - aws s3 sync . s3://$AWS_BUCKET/ --delete --acl public-read
  only:
    - master

deploy_cf:
  image: python:3.6
  stage: deploy-cf
  tags:
    - docker
    - gce
  before_script:
    - pip install awscli -q
  script:
    - export distId=$(aws cloudfront list-distributions --output=text --query 'DistributionList.Items[*].[Id, DefaultCacheBehavior.TargetOriginId]' | grep "S3-$AWS_BUCKET" | cut -f1)
    - while read -r dist; do aws cloudfront create-invalidation --distribution-id "$dist" --paths "/*"; done <<< "$distId"
  only:
    - master
```

This `.gitlab-ci.yml` file sets up two stages: deploy-s3 and deploy-cf. The first stage uploads our website to the S3 bucket, and the second invalidates the CloudFront distribution for that bucket in order to present the new changes to our website! This simple configuration is all you need for a complete CI/CD pipeline for your business' website.

Commit and push those changes, and then check out the Pipelines section under the CI/CD tab in that project. You should see a deployment running.

Success! You have a working CI/CD pipeline. Now, whenever you commit code to the master branch, GitLab will auto-magically upload and distribute your changes!

Daniel Slapelis is a DevOps Engineer with NextLink Labs, a Pittsburgh-based DevOps and full stack engineering company.