1,048 reads

CI/CD Pipeline for NodeJS Lambdas on AWS using Jest, Serverless Framework, Github, and TravisCI

by Olalekan Sogunle, November 13th, 2020

Too Long; Didn't Read

An experienced software developer skilled in databases, Ruby on Rails, JavaScript, cloud infrastructure, and AWS wanted to start a serverless project, and one of the primary headaches, aside from architecting the workflow and deciding which Lambda functions need to be created, is a simple yet effective CI/CD workflow. GitHub serves as the code repository, and the Serverless Framework creates and deploys the functions to AWS. The build phase on Travis runs Jest tests on the functions; if all tests pass, it deploys the code to AWS Lambda using the Serverless CLI.


I wanted to start a serverless project, and one of the primary headaches, aside from architecting the workflow and deciding which Lambda functions need to be created, is a simple yet effective CI/CD workflow.

AWS's typical suggestion is to use a combination of CodeCommit, CodeBuild, and CodeDeploy, glued together with CodePipeline. But I already use GitHub for other projects, so setting up CodeCommit would be unnecessary. The same applies to CodeBuild, whose functions I currently perform with TravisCI, where I run both integration and feature specs.

This will be a NodeJS project, and I want to fully test my functions in the build phase using Jest. I also know I can adapt TravisCI to perform CodeDeploy's role of deploying code changes to production. But I still want my serverless functions to run on AWS Lambda. So here is my plan:

I use GitHub for my code repository and the Serverless Framework for creating and deploying my functions to AWS, while running the build phase on Travis. The build phase on Travis runs Jest tests on the functions; if all tests pass, it deploys the code to AWS Lambda using the Serverless CLI, depending on the working branch. I will create two environments: one for staging and the other for production. The staging branch deploys to my staging environment, while the master branch deploys to production. Other branches will only run Jest tests on the Lambda code and will not deploy.

All code used in this hands-on is available here on GitHub. Alright then, here we go.

Setting up the SLS Functions and GitHub Repo

Create a new repository for our project on GitHub. I call mine lambda-cicd. I will clone the repository to my local workspace with:

$ git clone [email protected]:<github username>/<project name>.git

Mine is 

$ git clone [email protected]:lekansogunle/lambda-cicd.git 

Next, create two serverless AWS functions. The idea is to have multiple functions in the same project, each deploying to AWS Lambda as needed. I call my two functions init-user and create-post. Hypothetically, the first function creates a new user, while the second creates blog posts for that user. We will mostly stick to the boilerplate code and not implement the details of these functions. Install the Serverless CLI with:

$ npm install -g serverless

I will create my two functions with the SLS CLI command for creating AWS Lambda services.

$ sls create --template aws-nodejs --path myService

Replace myService with init-user and create-post for the two functions:

$ sls create --template aws-nodejs --path init-user
$ sls create --template aws-nodejs --path create-post

Test My Services with Jest

Now that we have all the functions, I will add the Jest testing framework to the project and perform very simple testing of my functions. At the root of the project directory do:

$ yarn add --dev jest

This creates a package.json at the root directory and installs Jest. For Jest to pick up our tests, we add a __test__ directory inside each service's directory, and this test folder will hold test files ending in .test.js. For consistency, since our handlers are named handler.js by default by SLS, I will name the tests handler.test.js as well.
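For reference, after running yarn add --dev jest, the root package.json will look roughly like this; a sketch, not the exact generated file, and the version number will differ:

```json
{
  "private": true,
  "devDependencies": {
    "jest": "^26.6.3"
  }
}
```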

Next, I will go into the handler functions created for both services and change the message text to something different. For init-user, I return Hello from init user as the message. My test asserts that the function returns this exact output. Of course, things get far more complicated once the functions are fully implemented. My project file structure looks like this so far:

├── create-post
│   ├── __test__
│   │   └── handler.test.js
│   ├── handler.js
│   └── serverless.yml
├── init-user
│   ├── __test__
│   │   └── handler.test.js
│   ├── handler.js
│   └── serverless.yml
├── package.json
└── yarn.lock

Before writing the tests, now is a good time to make a commit. Add a .gitignore file to the root of the project containing node_modules; we don't want to commit our node_modules. Make a commit and push the changes so far. From your root folder, run the tests and you will get two failures saying:

$ yarn run jest
Your test suite must contain at least one test.

I ended up having my create-post Lambda function and its Jest test look like this:

// create-post/handler.js
'use strict';

module.exports.createPost = event => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Hello From Create Post!',
        input: event,
      },
      null,
      2
    ),
  };
};

// create-post/__test__/handler.test.js
const handler = require('../handler');

test('correctly create post', () => {
  expect(handler.createPost({foo: 'bar'})).toStrictEqual({
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Hello From Create Post!',
        input: {foo: 'bar'},
      },
      null,
      2
    ),
  });
});

Notice I changed the function name from hello to createPost and removed async from the function for now. In the test, I use {foo: 'bar'} as a dummy event, then assert the exact response from our function with toStrictEqual. Running the tests now will pass.

Set Up Two AWS IAM Accounts

For the next step, we will have to create two AWS IAM accounts: one for the staging environment and the other for our production environment. Here is a guide to creating IAM users. You will have to namespace the accounts with the environment they belong to (production and staging) so as to clearly identify them in the future, for example production-user and staging-user. Also follow the guide to make sure you grant the correct access rights. As you proceed, carefully note the Access Key Id and Secret Access Key for each user, and save these credentials in a secure place to be used in the coming steps.

Configure TravisCI

Before we configure TravisCI, we will commit everything we have and create a new branch. Call it staging and push this branch to Github. This branch will be used to create a staging deployment to AWS from TravisCI.

If you do not already have an account, create one with TravisCI and install the GitHub App integration into your GitHub account. Then visit https://travis-ci.com/account/repositories. Here we can connect our GitHub repositories to Travis for running continuous integration jobs. Find your repository and click on its settings.

The most important section here is the environment variables section. There, we can set a name, value, and branch for each credential.

For example, for the staging AWS_ACCESS_KEY_ID, we set the name as AWS_ACCESS_KEY_ID, set the value as the one noted from our AWS IAM staging user in the previous section, and select the branch as staging.

We do this for all four credentials; the SLS CLI will automatically pick up these variables as its credentials when they are present on the job machine. Then we add another variable, STAGE_NAME, to let Serverless know which stage we are deploying to. Its value is staging for the staging branch and production for the master branch. We then have a total of six environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and STAGE_NAME, each defined once for the staging branch and once for the master branch.

Next, we create a .travis.yml file at the root of our project. Then we create a bin directory and, in there, a deploy bash script. The .travis.yml file holds our TravisCI-specific configuration, and the bin/deploy script holds the SLS deploy commands. The .travis.yml and bin/deploy files look like the following:

# .travis.yml
language: node_js
node_js:
  - "12"
cache:
  directories:
    - node_modules
    - ${SERVICE_PATH}/node_modules
install:
    - yarn global add serverless
    - travis_retry yarn install
    - cd ${SERVICE_PATH}
    - travis_retry yarn install
    - cd -
script:
    - yarn run jest
jobs:
  include:
    -
      name: "Deploy User Init API"
      env:
        - SERVICE_PATH="init-user"
    -
      name: "Deploy Create Post API"
      env:
        - SERVICE_PATH="create-post"
deploy:
  provider: script
  script: bash bin/deploy
  on:
    all_branches: true
    condition: $TRAVIS_BRANCH =~ ^(master|staging)$

#!/usr/bin/env bash
# bin/deploy

cd ${SERVICE_PATH}
serverless deploy -s ${STAGE_NAME}
cd -

Next, make sure our deploy script is executable:

$ chmod u+x bin/deploy

The most important parts of our .travis.yml file specify the install commands to run both at the project root and in each function directory. It also installs the Serverless CLI globally for deploying. The script section runs the Jest tests in our build, and their outcome determines whether Travis proceeds to deploy the build. If a test fails, TravisCI will not deploy.

Otherwise, the build is deployed once the tests pass. We include two sections under jobs so that TravisCI creates two separate build jobs, one for each function; that is why each job sets a different SERVICE_PATH env var naming one of the two Lambda directories in our project. In the deploy section, we specify a script strategy and state which command to run when TravisCI is ready to deploy our functions to AWS. Lastly, Travis checks all branches and deploys only those matching the condition that the branch name is either master or staging:

$TRAVIS_BRANCH =~ ^(master|staging)$
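To see what that condition matches, here is a small bash sketch you can run locally (the branch names are just examples; Travis evaluates the same regex against $TRAVIS_BRANCH):

```shell
# Mimic Travis's deploy condition: deploy only on master or staging.
branch_deploys() {
  [[ "$1" =~ ^(master|staging)$ ]]
}

for branch in master staging feature/new-api; do
  if branch_deploys "$branch"; then
    echo "$branch: deploy"
  else
    echo "$branch: tests only"
  fi
done
```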

The deploy script changes directory into the SERVICE_PATH specified for each build job, runs the serverless deploy command, and changes directory back out to the root.

Configure Serverless

Next, we will change some things in the serverless.yml files. First, we add:

custom:
  stage: ${opt:stage, self:provider.stage}

This tells Serverless to take the stage option passed to the CLI command and use that stage when deploying. If it is not present, Serverless falls back to the default under provider.stage, which is dev.

Next, confirm that the runtime value under provider is nodejs12.x; we will keep this uniform with the Node version in our TravisCI config. Lastly, change the handler from handler.hello to handler.initUser and handler.createPost in the appropriate serverless.yml files. The init-user serverless.yml looks like this:

service: init-user
frameworkVersion: '2'
provider:
  name: aws
  runtime: nodejs12.x
functions:
  hello:
    handler: handler.initUser
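The create-post serverless.yml is analogous; a sketch, assuming the same provider settings as the init-user file above:

```yaml
service: create-post
frameworkVersion: '2'
provider:
  name: aws
  runtime: nodejs12.x
functions:
  hello:
    handler: handler.createPost
```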

Finally, Launch Lambdas 🚀

Commit what we have and push to the staging branch. You will notice a new build start on Travis with two jobs. Once the jobs pass, I will create a pull request and merge into the master branch to deploy our production Lambdas. Wait for the master branch build to go green and deploy the production Lambdas.

And on AWS, you will be able to see both your staging and production lambda functions deployed.

All the code for this project is available on GitHub here, and the TravisCI builds are available here. If you have come this far, I hope this follow-along has helped you smoothly create a Lambda CI/CD workflow. Let me know how the process works for you!

Also published at https://medium.com/@OlalekanSogunle/create-a-ci-cd-pipeline-for-nodejs-lambdas-on-aws-using-jest-serverless-framework-github-and-d4c68dc77793