Updated: this post was updated on 15/09/2017 following the release of Serverless framework 1.22.0, which introduced support for SSM Parameter Store out of the box. Go to the end of the post to see how it compares with other approaches.
AWS Lambda announced native support for environment variables at the end of 2016. But even before that, the Serverless framework had supported environment variables, and I was using them happily as the team I was on at the time migrated our monolithic Node.js backend to serverless.
However, as our architecture expanded we found several drawbacks with managing configurations with environment variables.
The biggest problem for us was the inability to share configurations across projects, since environment variables are function-specific at runtime.
The Serverless framework has the notion of services, which is just a way of grouping related functions together. You can specify service-wide environment variables as well as function-specific ones.
A sample serverless.yml that specifies both service-wide as well as function-specific environment variables.
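For illustration, a minimal serverless.yml along those lines might look like the following (the service, function, and variable names here are made up):

```yml
service: user-api              # illustrative service name

provider:
  name: aws
  runtime: nodejs6.10
  environment:                 # service-wide environment variables
    STAGE: dev
    LOG_LEVEL: info

functions:
  get-user:
    handler: handler.getUser
    environment:               # function-specific environment variables
      USERS_TABLE: users-dev
```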
However, we often found that configurations needed to be shared across multiple services. When these configurations changed, we had to update and redeploy all of the functions that depended on them, and just tracking those dependencies across the many GitHub repos maintained by different members of the team was becoming a challenge in itself.
For example, as we were migrating from a monolithic system piece by piece whilst delivering new features, we weren’t able to move away from the monolithic MongoDB database in one go. It meant that lots of functions shared MongoDB connection strings. When one of these connection strings changed — and it did several times — pain and suffering followed.
Another configuration value we often shared was the root URL of intermediate services. Since we were a social network, many of our user-initiated operations depended on relationship data, so many of our microservices depended on the Relationship API. Instead of hardcoding the URL to the Relationship API in every service (one of the deadly microservice anti-patterns), it should be stored in a central configuration service.
When you need to configure sensitive data such as credentials, API keys or DB connection strings, the rules of thumb are:
If you’re operating in a heavily regulated environment, then point 3 might be more than just good practice; it might be a regulatory requirement. I know of many fintech companies and financial juggernauts where access to production credentials is tightly controlled and available only to a handful of people in the company.
Whilst efforts such as the serverless-secrets-plugin deliver on point 1, they couple one’s ability to deploy Lambda functions with one’s access to sensitive data — i.e. whoever deploys the function must have access to the sensitive data too. This might be OK for many startups where everyone has access to everything, but ideally your process for managing access to data should be able to evolve with the company’s needs as it grows.
My team outgrew environment variables, and I started looking at other popular solutions in this space — etcd, Consul, etc. But I really didn’t fancy these solutions because:
This was 5 months before Amazon announced SSM Parameter Store at re:Invent 2016, so at the time we built our own Configuration API with API Gateway and Lambda.
Nowadays, you should just use the SSM Parameter Store because:
In short, it ticks all our boxes.
You can create secure strings that are encrypted by KMS.
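For instance, creating a secure string with the AWS CLI can be as simple as this (the parameter name and value are made up; without --key-id, the account's default SSM key is used for encryption):

```bash
aws ssm put-parameter \
  --name /user-api/dev/mongodb-connection-string \
  --value "mongodb://user:password@example-host:27017/users" \
  --type SecureString
```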
You can see a history of all the changes to these parameters.
You have fine-grained control over what parameters a function is allowed to access.
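With the Serverless framework, for example, you might give a function read access to just its own parameter hierarchy (the region, account ID, and parameter path below are placeholders):

```yml
provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action: ssm:GetParameters
      Resource: arn:aws:ssm:us-east-1:123456789012:parameter/user-api/dev/*
```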
There are a couple of service limits to be aware of:
Having a centralised place to store parameters is just one side of the coin. You should still invest effort into making a robust client library that is easy to use, and supports:
Here is one such client library that I put together for a demo; a sketch of the idea is shown below. To use it, you create config objects with the loadConfigs function. These objects expose properties that return the config values as Promises (hence the yield, which is the magic power we get with co). You can also give different config values different cache expirations.
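A rough sketch of that idea, assuming the AWS SDK for Node.js and co (the parameter names, the default expiry, and the exact shape of the helper are illustrative, not the actual demo code):

```js
const co = require('co');
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

const DEFAULT_EXPIRY = 3 * 60 * 1000; // reload configs every 3 minutes by default

// keys: array of SSM parameter names; expiryMs: how long to cache the values for
function loadConfigs(keys, expiryMs = DEFAULT_EXPIRY) {
  let cache = { expiration: 0, items: {} };

  // fetch (and decrypt) all the parameters in one call and refresh the cache
  const reload = co.wrap(function* () {
    const resp = yield ssm
      .getParameters({ Names: keys, WithDecryption: true })
      .promise();

    const items = {};
    resp.Parameters.forEach(p => { items[p.Name] = p.Value; });
    cache = { expiration: Date.now() + expiryMs, items };
  });

  const getValue = co.wrap(function* (key) {
    if (Date.now() > cache.expiration) {
      yield reload();
    }
    return cache.items[key];
  });

  // expose each key as a property whose getter returns a Promise of the value
  const config = {};
  keys.forEach(key => {
    Object.defineProperty(config, key, { get: () => getValue(key) });
  });
  return config;
}

module.exports = { loadConfigs };
```

Inside a handler, usage would then look something along these lines (assuming the sketch above lives in a local configs.js module):

```js
const co = require('co');
const { loadConfigs } = require('./configs');

const configs = loadConfigs(['/user-api/dev/mongodb-connection-string'], 60 * 1000);

module.exports.handler = co.wrap(function* (event, context, callback) {
  const mongoDbUri = yield configs['/user-api/dev/mongodb-connection-string'];
  // ... use mongoDbUri to talk to the database, then respond
  callback(null, { statusCode: 200 });
});
```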
If you want to play around with using SSM Parameter Store from Lambda (or to see this cache client in action), then check out this repo and deploy it to your AWS environment. I haven’t included any HTTP events, so you’d have to invoke the functions from the console.
Update 15/09/2017: the Serverless framework released version 1.22.0, which introduced support for SSM parameters out of the box.
With this latest version of the Serverless framework, you can specify the value of environment variables to come from SSM Parameter Store directly.
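For example, something along these lines would populate an environment variable from a parameter at deployment time (the function name and parameter path are illustrative):

```yml
functions:
  get-user:
    handler: handler.getUser
    environment:
      MONGODB_URI: ${ssm:/user-api/dev/mongodb-connection-string}
```

Note that the value is resolved when you deploy, and the resolved value is what ends up in the function's environment variables.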
Compared to many of the existing approaches, it has some benefits:
However, it still falls short on many fronts (based on my own requirements):
Since the decrypted values end up as environment variables (i.e. in process.env), an attacker who compromises the function will be able to easily find them during the initial probe (go to the 13:05 mark in this video, where I gave a demo of how easily this can be done).
Of course, your requirements might be very different from mine, and I certainly think it’s an improvement over many of the approaches I have seen. But personally, I still think you should:
Hi, my name is Yan Cui. I’m an AWS Serverless Hero and the author of Production-Ready Serverless. I have run production workloads at scale on AWS for nearly 10 years, and I have been an architect or principal engineer in a variety of industries, ranging from banking, e-commerce and sports streaming to mobile gaming. I currently work as an independent consultant focused on AWS and serverless.
You can contact me via Email, Twitter and LinkedIn.