How To Setup Continuous Integration Pipeline By Using Terraform And GitLab CI

by Emmanuel Sys, December 13th, 2020


Terraform is a fantastic tool for managing your cloud infrastructure, especially if your assets are hosted on multiple cloud providers.

Depending on the scale of your organization, you usually start by running Terraform commands locally, while keeping your code in a Git repository and your remote state in a proper shared online backend. Everything goes well until you realize that having a central codebase is great, but it needs to go hand in hand with an “out of the developer’s laptop” way of running these Terraform commands.

Developers are now used to doing CI for code and application development. But what about Terraform? How can you apply continuous integration and delivery techniques to your Terraform code?

In this article, we will demonstrate how to build a complete Terraform pipeline using GitLab CI.

Implementing the Pipeline

In this pipeline, we will consider 3 stages:

  • Check will run some quality checks against the code.
  • Plan generates and saves the Terraform execution plan to be reviewed by a team member.
  • Apply, where a team member manually triggers a job to apply the changes to the infrastructure if the plan review is OK. In addition, this stage only runs on the master branch.

We could also consider an additional stage for reverting the last applied set of changes. But as these operations can be sensitive, I personally keep them out of CI.

Setup

To implement this pipeline, you need to configure some secrets. Most people use a remote bucket to store the Terraform state, and access to this storage requires specific credentials depending on your cloud provider.

Let’s demonstrate using Google Cloud Platform and a storage bucket.

  • First, create a new service account for GitLab and save the credential file.
  • Give this service account the appropriate rights on the Terraform bucket (Storage Owner).
  • Configure the service account credentials as a file variable in your GitLab project CI/CD settings.

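For reference, here is a rough sketch of these steps using the gcloud and gsutil CLIs. The project, bucket, and service account names are placeholders for this example, and the exact role may differ in your setup:

# Create a dedicated service account for GitLab CI (names are examples)
gcloud iam service-accounts create gitlab-terraform --display-name "GitLab Terraform CI"

# Save the credential file, to be registered as a GitLab file variable
gcloud iam service-accounts keys create gitlab-terraform.json \
  --iam-account gitlab-terraform@my-project.iam.gserviceaccount.com

# Grant the service account access to the Terraform state bucket
gsutil iam ch \
  serviceAccount:gitlab-terraform@my-project.iam.gserviceaccount.com:roles/storage.admin \
  gs://my-terraform-state-bucket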

The process is similar if you are using AWS: you have to configure the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables with the credentials of an appropriately scoped user.

Initialize the GitLab CI Pipeline

Create a .gitlab-ci.yml file at the project root with the following content:

stages:
  - terraform:check
  - terraform:plan
  - terraform:apply

.base-terraform:
  image:
    name: "hashicorp/terraform"
    entrypoint: [""]
  before_script:
    # set the google service account credentials from the variables
    - export GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}
    - terraform version
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH
      changes:
        - terraform/**

This base job will be inherited by all the subsequent jobs and injects the credentials. We also take care of triggering our pipeline only if Terraform files are changed, in the context of a merge request or a push on a branch.

Static Check for Terraform Code

The first thing we want is to ensure that the pushed code is correctly formatted, as per Terraform’s fmt command, and has no syntax errors, courtesy of the validate command:

tf-fmt:
  stage: terraform:check
  extends: .base-terraform
  script:
    - terraform fmt -check -recursive terraform/
  needs: []

tf-validate:
  stage: terraform:check
  extends: .base-terraform
  script: |
    for d in $(ls terraform); do
      terraform init -input=false -backend=false terraform/$d
      terraform validate terraform/$d
    done
  needs: []

These checks are the first jobs in our pipeline. They run in parallel to save some time. Note that the validate command requires Terraform to be initialized, which is why the tf-validate job runs init without a backend.

Generate Plan

Next, we want to generate the Terraform plan. We need to initialize Terraform first and then, if we are using the workspace feature, select the appropriate workspace. Finally, we call the plan command, with optional tfvars files if required.

tf-plan:
  stage: terraform:plan
  extends: .base-terraform
  variables:
    STACK: "terraform/mystack"
    WORKSPACE: myworkspace
    VARS: "-var-file=my.tfvars"
  script:
    - terraform init -input=false ${STACK}
    - terraform workspace select ${WORKSPACE} ${STACK}
    - terraform plan -out=${WORKSPACE}.tfplan ${VARS} ${STACK}
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" || $CI_COMMIT_BRANCH
      changes:
        - terraform/**
        - my.tfvars
  artifacts:
    name: ${WORKSPACE}
    paths:
      - ./*.tfplan
      - .terraform
    expire_in: 1 week

There are some interesting things going on here:

  • First, we save the plan as a file to feed it into the apply command in the next stage. This is the only way to be absolutely sure that what we planned is exactly what will be applied to our infrastructure.
  • For the same reason, we save the .terraform folder to keep all the providers and modules at the exact same versions as those used to generate the plan. In Terraform 0.14, the dependency lock file will solve this problem more elegantly (see the sketch after this list).
  • We save these files using the GitLab CI artifacts keyword to make them available to jobs in later stages.
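As a rough sketch of that Terraform 0.14+ alternative (paths are examples, not part of the pipeline above), you would commit the dependency lock file instead of passing the whole .terraform folder around:

# Run init once in the stack directory: it creates/updates .terraform.lock.hcl
cd terraform/mystack
terraform init -input=false

# Commit the lock file so every CI job installs the exact same provider versions
git add .terraform.lock.hcl
git commit -m "Pin Terraform provider versions"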

Applying the changes

The last thing to do is to apply the changes.

As explained before, we want this step to be manual, after reviewing the plan. The reason is that infrastructure changes may be critical or destructive, and automatic validation with no human eye may be impossible to implement.

tf-apply:
  stage: terraform:apply
  extends: .base-terraform
  variables:
    PLAN_FILE: myworkspace.tfplan
  script:
    - terraform apply -auto-approve ${PLAN_FILE}
  environment:
    name: gcp
    url: https://console.cloud.google.com
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
      changes:
        - terraform/**
        - my.tfvars
      when: manual

We get back our saved plan and .terraform folder from the previous stage and simply run the Terraform apply command in non-interactive mode. This job is created only for the master branch.
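As a side note, you could make this artifact dependency explicit with GitLab’s needs keyword. This is not part of the pipeline above, just an optional refinement:

tf-apply:
  # ...same job definition as above...
  needs:
    - job: tf-plan
      artifacts: true   # fetch only the tf-plan artifacts and start as soon as tf-plan finishes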

Going further

If you wish, you can surface key metrics from the Terraform plan directly in your GitLab merge request UI using the Terraform MR integration.
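A minimal sketch of this integration, assuming jq is available in the job image (it is not included in the hashicorp/terraform image, so you may need to install it in before_script): the tf-plan job converts the saved plan into the JSON summary expected by GitLab and exposes it as a terraform report artifact.

tf-plan:
  # ...same job as above, with two additions...
  script:
    - terraform init -input=false ${STACK}
    - terraform workspace select ${WORKSPACE} ${STACK}
    - terraform plan -out=${WORKSPACE}.tfplan ${VARS} ${STACK}
    # Summarize the plan as the {create, update, delete} JSON expected by the MR widget
    - terraform show -json ${WORKSPACE}.tfplan | jq -r '([.resource_changes[]?.change.actions?]|flatten)|{"create":(map(select(.=="create"))|length),"update":(map(select(.=="update"))|length),"delete":(map(select(.=="delete"))|length)}' > plan.json
  artifacts:
    reports:
      terraform: plan.json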

Or you can add a testing step for your infrastructure by implementing some Terratest test cases, especially against a “staging” infrastructure environment before deploying to production.
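As a sketch, such a step could be wired into the pipeline as an extra job. The test/ directory, the Go image tag, and the additional terraform:test stage are assumptions for this example, not part of the pipeline above:

tf-test:
  stage: terraform:test        # add this stage to the stages list
  image: golang:1.15
  script:
    # Run the Terratest test cases, which typically live in a Go module under test/
    - cd test
    - go test -v -timeout 30m
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - terraform/**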

Conclusion

Even if automating Terraform operations is perhaps less trendy than automating application deployments, it’s a must-have to secure your process and keep your IaC repository as the single source of truth. As this automation is quick and easy to implement, there is no excuse not to do it now!

Also published at https://medium.com/swlh/continuous-integration-for-terraform-using-gitlab-ci-4f7f2f835e81