Serverless API with Terraform, GO and AWS, Part 1

by Daniel, November 26th, 2021
Knowing how to build a REST API with the latest tech is cool. But you know what is even cooler? Being able to deploy it to the cloud! I'll walk you through the process of building a simple, serverless REST API using GO, AWS Lambda, API Gateway and DynamoDB, and then deploying it all to the AWS cloud with Terraform.

If you want to follow along, you'll need an AWS account with a user that has programmatic access enabled, plus GO and Terraform installed. Installing these dependencies is relatively easy and there are plenty of online resources to guide you through it. I would also recommend looking into tfenv to manage multiple versions of Terraform locally.
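For example, pinning the Terraform version with tfenv looks roughly like this (a quick sketch; the version matches the one used below):

# install and activate the Terraform version used in this article
tfenv install 1.0.11
tfenv use 1.0.11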

For this project I'm using go v1.17.1 and terraform v1.0.11. Assuming all dependencies are installed, I can start crafting.

Business-specific logic will be kept in the api directory and infrastructure in the iac (Infrastructure As Code) directory.

serverless-api/
  |-api/
    |-lambdas/
    |-internal/
  |-iac/
    |-prerequisites/
    |-modules/
    |-api/

Since I'll be using go to develop the api, I'll be placing my lambdas in /lambdas and all shared/reusable packages in the /internal directory. In the above diagram there's an iac/prerequisites directory, and this is what I'll be working on first.

Terraform has the ability to store its state remotely in a variety of backends, and since deployment will happen on AWS, I'll use S3 for that. This will allow me to easily manage the lifecycle of the application, as well as allow other participants to share and update the same state. That way everything and everyone will stay in sync.
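To make the goal concrete, here's a minimal sketch of the kind of backend block the api stack will use in part 2. The bucket and table names are assumptions derived from the defaults defined further below, not final configuration:

# a sketch of an S3 backend block; names are assumptions based on
# the defaults used later in this article
terraform {
  backend "s3" {
    bucket         = "project-123-remote-state"
    key            = "api/terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "project-123-tf-statelock"
  }
}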

The prerequisites stack is kept separate because it has to be deployed first: the main application's terraform > backend configuration block cannot contain interpolated values and is initialised before Terraform parses any variables, so the bucket and table it points at must already exist. Let's start with the main config file:

# iac/prerequisites/main.tf

# ----------------------------------------------------
# configure aws provider
# ----------------------------------------------------
provider "aws" {
  region  = var.region
}

# ----------------------------------------------------
# create dynamo db for deployments locking 
# ----------------------------------------------------
resource "aws_dynamodb_table" "terraform_statelock" {
  name           = local.ddb_name
  read_capacity  = 20
  write_capacity = 20
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# ----------------------------------------------------
# create S3 bucket that will be used for the backend
# ----------------------------------------------------
resource "aws_s3_bucket" "remote_state" {
  bucket        = local.state_bucket_name
  force_destroy = local.destroy_bucket
  acl           = "authenticated-read"

  versioning {
    enabled = true
  }

  tags = {
    Env = local.env
  }
}

Here I'm creating an S3 bucket to hold the shared state and a DynamoDB table for locking deployments. Locking is needed for cases when two or more people try to deploy at almost exactly the same time, which is where things can get out of sync. An important thing to mention is the DynamoDB hash_key: for locking to work, it has to be explicitly named LockID, and it's case sensitive.

A quick note before moving on: file names do not matter. Terraform reads all configuration files in the top-level directory. Nested directories are treated as completely separate modules and will not be automatically included in the configuration. You don't even have to split your logic into variables.tf, locals.tf and main.tf like I do; it could all live in a single file. It is simply my personal preference to organise configuration by splitting logical units into separate files.

Next, variables.tf, locals.tf and versions.tf:

# iac/prerequisites/variables.tf

variable "region" {
  default     = "eu-central-1"
  description = "region where all the resources will be deployed"
}

variable "prefix" {
  default     = "project-123"
  description = "organization or service name, has to be unique"
}

variable "ddb_statelock_table" {
  default     = "tf-statelock"
  description = "name of dynamo db table for terraform state locking"
}

variables.tf is the way to define program-specific variables, add validations and set defaults. All these values can be conveniently overridden by exporting the same variable name with a TF_VAR_ prefix added to it. Say I'd like to use a different region in a CI/CD pipeline: I'd export TF_VAR_region=us-east-1 and this value would take priority.
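As an aside, here's what such a validation could look like. This block is my own sketch, not part of the project's code; it simply constrains the region variable to values the TF_VAR_region example above would still satisfy:

# hypothetical validation block for the region variable
variable "region" {
  default     = "eu-central-1"
  description = "region where all the resources will be deployed"

  validation {
    condition     = can(regex("^(eu|us)-", var.region))
    error_message = "Region must start with eu- or us-."
  }
}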

# iac/prerequisites/locals.tf

locals {
  env               = terraform.workspace == "default" ? "dev" : terraform.workspace
  state_bucket_name = "${var.prefix}-remote-state"

  # name of the DynamoDB table referenced by main.tf, built from existing variables
  ddb_name          = "${var.prefix}-${var.ddb_statelock_table}"

  # allow force-destroying the state bucket only outside prod/staging
  destroy_bucket    = !contains(["prod", "staging"], local.env)
}

locals.tf is used to dynamically compute values and avoid duplicating logic across the rest of the configuration.
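The env local is driven by Terraform workspaces. As a quick illustration (standard Terraform commands, shown here purely for context):

# the "default" workspace maps to env = "dev" via the ternary above
terraform workspace show

# creating and switching to a "staging" workspace makes env = "staging"
terraform workspace new staging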

# iac/prerequisites/versions.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.66.0"
    }
  }

  required_version = "~> 1.0.11"
}

versions.tf is like package.json in Node.js or go.mod in GO. Here I can specify and lock the version of the provider used in the stack, as well as define the version of Terraform that's required to build and deploy my infrastructure.

With all that in place I could try to deploy the prerequisites stack to AWS, but there's a problem: Terraform will not have access to the AWS API. To solve it, I need to export the secret and access keys that I got from AWS when my user with programmatic access was created:

# replace ******** with appropriate values
export AWS_SECRET_ACCESS_KEY=********
export AWS_ACCESS_KEY_ID=********

Time for some action:

cd iac/prerequisites

# initialise terraform project
terraform init

# plan deployment, should output:
# Plan: 2 to add, 0 to change, 0 to destroy.
terraform plan

# deploy, should output:
# Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
terraform apply -auto-approve

One important thing to mention is that my user has full access rights to S3, DynamoDB, API Gateway and Lambda, attached via the AWS management console. I find it a bit easier to manage this way. If you run into errors, say not authorized to perform: dynamodb:CreateTable, go back to the AWS console and ensure that your user has the appropriate permissions attached.
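If you'd prefer to avoid clicking through the console, attachments like these can also be expressed in Terraform. This is a hypothetical sketch; the deploy-user name and the choice to manage IAM in code are my assumptions, not part of this project:

# hypothetical: attach the AWS managed policies this stack needs;
# "deploy-user" is an assumed user name
resource "aws_iam_user_policy_attachment" "deploy_permissions" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
  ])

  user       = "deploy-user"
  policy_arn = each.value
}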

After running the above commands, it's always a good idea to navigate back to the AWS management console and verify that all resources have been created. It's also a good idea to destroy and recreate the stack to make sure everything is in order:

# destroy the stack
# should print out: Destroy complete! Resources: 2 destroyed.
terraform destroy -auto-approve

# plan the stack
terraform plan

# deploy the stack
terraform apply -auto-approve

Sometimes Terraform fails to execute. Most of the time the solution is simply to rerun the last Terraform command. If that doesn't help, your favourite search engine is always one click away, and I'm pretty sure there will be plenty of answers on how to solve the issue.

I hope you have learned something useful. In part 2 I'll be creating the configuration for the api infrastructure, which is where the prerequisites backend created above will be used. You can find the source code and track the progress of the project here.

Got inspired? Share and inspire others!