Photo by Marion Michele on Unsplash

Today we are going to start building the dream backend for your startup with Test Driven Development and Serverless.

Traditional API servers have come a long way, but today's fast-moving projects should seriously consider serverless, and the sooner, the better. When shipping as soon as possible is requirement #1, the common side effect is that the codebase and infrastructure become harder to maintain once the product and the team grow.

Serverless architectures help mitigate this situation in many ways:

Lambda functions encourage you to write granular, clean and specific API operations. They also force you to decouple your code from the local architecture (URLs, ports, etc.). Combined with TDD, you can develop faster and leaner.

No server means no sysadmin. More time to spend on your project.

In short, what we are about to explore is:

A clean serverless API project
Ready for Test Driven Development
Connecting to a cloud database (MongoDB Atlas)
Using secret management
With automated deployment
Using staging environments

So let's get right to it!

Start me up!

First, make sure that you have NodeJS on your system and install the Serverless framework:

npm i -g serverless (Windows)
sudo npm i -g serverless (Linux and MacOS)

Cloud accounts

Create an account on Amazon Web Services and open the IAM Management Console when you are done. You need to add a new user.

Amazon AWS IAM Console

Give it a meaningful name and enable “Programmatic access” for it.

IAM user creation

For now, we are just developing, so we will attach the AdministratorAccess policy and keep working on the project.

IMPORTANT: When the project is ready for production, get back to the lambda IAM user and check this article on how to apply the Least Privilege Principle.

AWS IAM user just created

Our AWS user is now ready. Copy the keys shown on the screen and run in the console:

serverless config credentials --provider aws --key <the-access-key-id> --secret <the-secret-access-key>

Done!
Your serverless environment is ready to connect to Amazon and do its magic. Now let's get a database to connect to.

MongoDB Atlas

The choice of a particular database is out of the scope of this article. Because MongoDB is one of the most popular NoSQL databases, we will open an account at MongoDB Atlas.

Select a provider and a region that suit your needs. Since our code will be running on AWS Lambda, it makes sense to select AWS as the provider, and the same region where we will deploy our lambdas.

Check any additional settings and choose a name for your cluster. Wait a few minutes for it to provision.

MongoDB cluster just created

When it is ready, open the Security tab of the cluster and add a new user. Since we are just testing, enter a username/password of your choice and select “Read and write to any database”.

IMPORTANT: When you are ready for production, you should apply the Least Privilege Principle again and restrict privileges to just your app's database. You should also create different users for the production and staging environments.

DB user creation

Keep the user and password and get back to the cluster overview. Click on the “Connect” button. Next we need to whitelist the IP addresses allowed to connect to the database. Unfortunately, on AWS Lambda we have no predictable way to determine the IP address that will connect to MongoDB Atlas, so the only choice is to go for “Allow access from anywhere”.

Connection settings

Finally, click on “Connect your application”, choose version 3.6 and copy the URL string for later.

Connection URL

Let's code!

Enough accounts, now let's get our hands dirty. Open the console and create a NodeJS project in the folder my-api:

serverless create --template aws-nodejs --path my-api
cd my-api

Let's invoke the default function:

serverless invoke local -f hello

(output)
{"statusCode": 200,"body": "{\"message\":\"Go Serverless v1.0! Your function executed successfully!\",\"input\":\"\"}"}

Ok, running.
Let's create the package.json and add a few dependencies:

npm init -y
npm i mongodb
npm i -D serverless-offline serverless-mocha-plugin

Next, let's define our environment. The serverless CLI just created the serverless.yml file for us. Clean it up and edit it like this:

serverless.yml

A few things to note here:

We are using NodeJS 8.10 in order to get the modern Javascript goodies.
We have defined the same region that we previously chose on MongoDB Atlas.
A hello function is added by default in handler.js.
We are adding two plugins at the bottom (from the dependencies we installed before). In a Daft Punk fashion, they will help us develop better, faster, stronger.

Now we can start listening for incoming HTTP requests on our local computer:

serverless offline start

Serverless offline listening on port 3000

So if we open our browser and visit http://localhost:3000, we will get the output of the hello function, which is attached to the / path by default. The output is a message, in addition to the HTTP headers and parameters that the function gets.

The cool thing is that if we edit handler.js so that hello returns something different, you'll notice that refreshing the browser will show the updated content. Live reload out of the box!

API

Before we jump into our TDD environment, let's tidy up our project a bit and arrange our files and routes.

mkdir handlers test
rm handler.js
touch handlers/users.js

The routes we will be supporting are:

GET /users
GET /users/<id>
POST /users
PUT /users/<id>
DELETE /users/<id>

So let's edit serverless.yml and define them. Edit the functions block to contain these lines:

functions:

As you can see, each key inside the functions block is the name of a Lambda function. Note that handler: handlers/users.list would translate into: use the list JS function inside handlers/users.js.

Time to TDD!

The serverless CLI provides a command to add new tests for each function.
Let's check it:

serverless create test -f listUsers
serverless create test -f getUser
serverless create test -f addUser
serverless create test -f updateUser
serverless create test -f removeUser

The test folder now contains a dummy spec file for each of our Lambda functions. As all our user-related functions point to handlers/users.js, it is better that the specs keep the same structure.

So let's discard the isolated specs, combine them into a single file and code the full user spec in test/users.spec.js:

rm test/*

As you can see, instead of importing just one function wrapper, we import them all and define test cases for the whole set. You may also note that we are not performing HTTP requests. Rather, we have to pass the parameters as they would be received by the function. If you are interested in writing specs like an HTTP client would, check this library.

What happens now if we run the test suite?

serverless invoke test

You guessed it! Booom. And that's because… we haven't coded the handlers yet! But as you know: in TDD, specs are written before the application logic.

So, as our runtime is NodeJS 8, we can take advantage of ES6/ES7 features to write cleaner code in our lambda functions. Let's implement the (still undefined) functions in handlers/users.js:

Note that unlike a traditional NodeJS server, the connection to the database needs to be opened and closed at each execution. This is due to the nature of how serverless works: there is no process running all the time. Instead, instances are created and destroyed as external events occur.

Also note that all routes make sure to always close the DB connection. Otherwise, the internal NodeJS event loop would keep the process alive until the timeout was reached, and you might incur higher charges.

And finally, note that thanks to using NodeJS v8, we can return our response instead of using callbacks with an error-first parameter.

So now our specs are ready and our implementation is there.
The moment of truth:

Test execution

Yay! Here is our first serverless API. As you may see, local execution times are not particularly impressive, but keep the following in mind:

Our goal is not single-request speed. Rather, we aim for massive concurrent scalability, easy maintainability and future-proof code.
Latencies are mainly due to the time spent by our local computer connecting to the remote database. Tests with 3 or 4 DB requests will experience much higher latencies than code running within the datacenter.
We are using the lightest implementation possible (mongodb instead of mongoose, and plain JS on top of Serverless instead of Express/Connect).

If you switch to a local MongoDB server, running the tests takes around 80ms. We'll check performance again when our code is deployed.

Secret management

We are almost ready to deploy, but first we need to deal with an important aspect: keeping credentials out of the codebase.

Luckily for us, Serverless supports Simple Systems Manager (SSM) since version 1.22. This means that we can store key/value data on AWS and have it automatically retrieved whenever Serverless needs to resolve a secret.

So, first of all let's get back to our handlers/users.js file, copy the current URL string and replace:

const uri = "mongodb+srv://lambda:lambda@myapp...."

with:

const uri = process.env.MONGODB_URL

Next, let's add an environment block to provider in serverless.yml:

provider:
  name: aws
  runtime: nodejs8.10
  stage: prod
  region: eu-central-1
  environment:
    MONGODB_URL: ${ssm:MY_API_MONGODB_URL~true}

This will bind the MONGODB_URL environment variable to the MY_API_MONGODB_URL SSM key at deploy time, and ~true will decrypt the contents.

Finally, let's grab the string we just copied and store the credential in our SSM:

pip install awscli  # install the AWS CLI if necessary
aws configure       # confirm the key/secret, define your region
aws ssm put-parameter --name MY_API_MONGODB_URL --type SecureString --value "mongodb+srv://lambda:xxxxxx@myapp-.....mongodb.net/my-app?retryWrites=true"

If you are part of a bigger team, read here.

NOTE: At the time of writing, there is an issue with the serverless CLI retrieving SSM variables. If you encounter warnings or errors, check here and here.

Deploying

Stay with me, we are almost done! Let's deploy our code to the cloud. We already updated serverless.yml before, so:

serverless deploy

Ready! You can also manage your functions here (select the appropriate region).

If you call the URL corresponding to the listUsers function, you will see that the latency is under one third of what it was from our computer.

This happens because now the lambda function and the DB server are in the same region (i.e. the same datacenter), so connection latencies between them are considerably lower. Our round trip to Amazon will always be there, but now DB connections will not.

Production

As already mentioned during the article, when your API is ready to ship:

Remove the administrative permissions from your IAM user and refer to this page for insights on how to grant fine-grained privileges.
Restrict the privileges of the DB user to only the database of your application, not just any.
Deployments should only be made by project maintainers. The rest of the developers shouldn't need to configure any IAM Lambda credentials.

Cleaning

Deploying a Lambda function will involve (at least) three different services from Amazon.
If you intend to wipe an existing Lambda function you need to:

Delete the function from AWS Lambda
Delete the corresponding bucket from S3
Delete the corresponding stack from CloudFormation

Wrap up

I hope you enjoyed reading the article as much as I enjoyed writing it. If you want more, clap, comment, share and smile 🙂!

If you want to experiment with the code of the article, feel free to clone the starter repo from GitHub:

https://github.com/ledfusion/serverless-tdd-starter/tree/part-1

BTW: I am available to help you grow your projects. Feel free to find me on https://jordi-moraleda.github.io/

Bonus track #1: Staging

If you are like most of us, you will need at least 3 environments for your project: development, staging and production.

For the database, developers could use a local instance of MongoDB to speed connections up, but for staging and production we need to provide completely independent database environments.

Head back to MongoDB Atlas and create two different user accounts (my-app-prod and my-app-staging) with access to two different databases (my-app-prod and my-app-staging, respectively).

Let's remove the key we created before:

aws ssm delete-parameter --name MY_API_MONGODB_URL

And create two, for production and staging:

# PROD
aws ssm put-parameter --name MY_API_MONGODB_URL_prod --type SecureString --value "mongodb+srv://my-app-prod:xxxxxx@myapp-.....mongodb.net/my-app-prod?retryWrites=true"

# STAGING
aws ssm put-parameter --name MY_API_MONGODB_URL_staging --type SecureString --value "mongodb+srv://my-app-staging:xxxxxx@myapp-.....mongodb.net/my-app-staging?retryWrites=true"

Here they are:

AWS Systems Manager

Now let's edit serverless.yml > provider and do some magic.

First of all, the stage field tells Serverless where to deploy the lambdas. If you deploy like below:

serverless deploy --stage prod

then provider.stage will be "prod". If we just ran serverless deploy, provider.stage would default to "dev".

Second, the value of MONGODB_URL is evaluated in two steps, depending on the environment. If we are on "prod" or "staging", Serverless will fetch MY_API_MONGODB_URL_prod or MY_API_MONGODB_URL_staging respectively from SSM and use it. If our IAM user does not have that key, MONGODB_URL will default to "mongodb://localhost:27017".

Ta da! This allows our development team to code and test against a local database, while code running on AWS will get the remote connection string.

Deployment to staging

There they are: staging and prod are ready to go!

Bonus track #2: Automated tasks

As the title suggests, we are aiming for nirvana, not just zen. Having to type repetitive commands can be a bit of an overhead, so let's finish our show with a clean set of commands to work with the project.

As a recap, the actions we may perform during the lifecycle of the project are:

Run the app locally
Run the test suite
Run the test suite and deploy to staging if successful
Run the test suite and deploy to production if successful

So let's define these actions in our package.json:

"scripts": {
  "deploy": "npm test && sls deploy --stage staging",
  "deploy:prod": "npm test && sls deploy --stage prod",
  "start": "serverless offline start",
  "test": "serverless invoke test"
},

Now the team can run the API with npm start and test with npm test, while the project maintainer can deploy to staging with npm run deploy and to production with npm run deploy:prod. Nobody will interfere with unintended settings, data or environments.

After this whole exercise of integrations, you should have a backend that is easy to code, test, upgrade and maintain. It should work well with agile teams and fast-moving companies, and I hope that it does for you!

Thanks for reading.
If you liked it, do not miss episode #2:

If TDD is Zen, adding Serverless brings Nirvana (part #2)