Update: there have been some recent changes to Secrets in ECS. For the newest approach, hit up part two of this article below, and to figure out how we got there, keep on reading.
Last year I wrote an article about the pain of Secrets Management in ECS, but the world’s now a brighter place with…medium.com
Containers on ECS are great…but when it comes to secrets and config management, ECS is still rough around the edges. Apps need secrets, and the key question is how to get them into Docker containers in a way that’s both secure and compatible with ECS.
For local development, we can run Docker images and simply pass in an env-file with the variables we want our containerised app to use. This makes changing our secrets and config flexible. You just change the value in the file and restart your container. No re-building necessary! But when we start getting to staging or production environments, it gets a little harder. We have secrets and credentials to manage, and we want only our application to have access to these for obvious reasons. Let’s talk about some of the options available for getting these values into ECS both easily and securely.
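For instance, the local flow might look like this (the file name, variable names, and image tag are all illustrative):

```shell
# Write an env-file with the variables our app needs (values are dev-only examples)
cat > app.env <<'EOF'
DB_HOST=localhost
DB_PASSWORD=local-dev-password
EOF

# Pass the whole file to the container at run time; to change a value,
# edit app.env and restart the container -- no rebuild required:
#   docker run --env-file app.env my-app:latest
```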
Bake into Docker
We can bake our config and secrets into our Docker image at build time. The advantage is that you can start your container without having to worry about configuration at all, but that comes with a cost. It’s now impossible to change config without re-building your image. Managing config between development, staging and production environments becomes painful. The image you’ve now tested in staging won’t be the same image you deploy to production due to the rebuild that’s required. To be honest, this isn’t really a viable option for the majority of use cases.
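As a sketch of why this hurts (the base image, file layout, and values here are purely illustrative), the config ends up hard-coded at build time:

```dockerfile
FROM node:10-alpine
# Config and secrets are fixed into the image layers at build time --
# changing any of these means rebuilding and re-deploying the image
ENV DB_HOST=prod-db.internal
ENV DB_PASSWORD=baked-in-secret
COPY . /app
CMD ["node", "/app/server.js"]
```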
ECS Environment Variables
This method is somewhat analogous to how we work locally: we pass in our config as environment variables that get set in the running container. Unlike local development, though, ECS has no support for env-files. What ECS does allow is defining environment variables when you specify the containers in your task definition, or when you actually run an instance of the task definition.
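In the task definition this looks something like the following snippet (container name, image, and values are illustrative):

```json
{
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "my-app:latest",
      "environment": [
        { "name": "DB_HOST", "value": "prod-db.internal" },
        { "name": "DB_PASSWORD", "value": "visible-to-anyone-with-console-access" }
      ]
    }
  ]
}
```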
The key downside is that AWS doesn’t support any kind of secure parameters in these variables. Secrets passed this way appear in plain text in the AWS console, so our app credentials are viewable and usable by anyone with console access. There goes our security.
Docker Fetches from SSM Parameter Store
This is the technique that AWS seems to be promoting and the cleanest, most secure of the options so far (given the lack of other viable options). SSM Parameter Store allows for secure storage of configuration data and secrets. Because of its hierarchical structure, we can also set granular rules over which applications have access to which set of secrets. Say we create two parameters under a shared path, e.g. /application/secret/DB_PASSWORD and /application/secret/API_KEY.
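For instance, two such parameters could be created with the CLI (the names and values are illustrative; these commands run against your AWS account):

```shell
# Store each secret as an encrypted SecureString under the shared path
aws ssm put-parameter --name /application/secret/DB_PASSWORD \
    --value 'super-secret-password' --type SecureString
aws ssm put-parameter --name /application/secret/API_KEY \
    --value 'abc123' --type SecureString
```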
By giving our containers a Task Role with SSM permissions, they can use the CLI to call out to SSM on startup to fetch their required config. And if we specify a specific path in the Role’s resources, such as /application/secret/, only parameters under that path are accessible to the container.
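A Task Role policy scoped this way might look like the following (the account ID and region are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParametersByPath"],
      "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/application/secret/*"
    }
  ]
}
```

If the parameters are SecureStrings, the role also needs kms:Decrypt on the KMS key used to encrypt them.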
To get this working in a Docker image, you first need to install the awscli (via Python) and jq. Then, modify your container’s entrypoint to first call a script that fetches its config before running its primary task. An example is a script that calls get-parameters-by-path with a specific path, and then sets the retrieved parameters in the container as environment variables.
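A minimal sketch of such a script, assuming awscli and jq are available in the image (the parameter path and variable-naming scheme are illustrative):

```shell
#!/usr/bin/env bash
# Fetch all parameters under a path from SSM and export each one as an
# environment variable named after the last segment of the parameter name.
load_ssm_params() {
  local path="$1"
  while IFS=$'\t' read -r name value; do
    # e.g. /application/secret/DB_PASSWORD -> export DB_PASSWORD=<value>
    export "${name##*/}=$value"
  done < <(aws ssm get-parameters-by-path \
             --path "$path" --with-decryption --output json |
           jq -r '.Parameters[] | [.Name, .Value] | @tsv')
}
```

Calling `load_ssm_params /application/secret/` at the top of the entrypoint, before `exec`-ing the application, makes the fetched values available as ordinary environment variables. Note that get-parameters-by-path returns paginated results, so a production version would also follow NextToken.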
This method works well due to the security that SSM offers with its SecureString parameter type and fine-grained permissions. It’s the best way we have right now, but I can’t say I’m happy with it. Firstly, we’re bloating our slim, clean Docker image with dependencies our application doesn’t really require. And in a sense we’re ruining the purity of Docker, as we have to pull from an outside source on startup rather than just running a single task like a container does best. It’s a painful compromise that’s needed until AWS provides us with more support in this area.
With more and more people moving to containerised applications on AWS, hopefully some more flexibility around secrets management will arise. There are feature requests to give first-class support to SSM Parameter Store in the task definitions themselves, to prevent Docker having to call out on startup. Others are using S3 to store an encrypted version of what’s essentially an env-file, although this still requires calling out to KMS for decryption. For now, security beats ease-of-use, but there’s no reason ECS can’t continue to be refined in the future to allow developers both.
As of November 2018, Amazon have now officially released support for injecting sensitive data in ECS containers via SSM! This is an awesome first step based on user feedback from the feature request shown in the links. There’s currently no support for the Fargate launch type, or for pulling multiple parameters given a single path, but hopefully we can see these advancements over the coming months.
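With this new support, a container definition can reference a parameter directly via a secrets block, something like the following (the ARN is a placeholder):

```json
{
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "my-app:latest",
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/application/secret/DB_PASSWORD"
        }
      ]
    }
  ]
}
```

ECS fetches and injects the value at container launch via the task execution role (which needs permission to read the parameter), so the image itself no longer needs the awscli or jq baked in.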
Amazon ECS enables you to inject sensitive data into your containers by storing your sensitive data in AWS Systems…docs.aws.amazon.com
Thanks to my colleague Stas Vonholsky for a great blog on managing secrets with Amazon ECS applications. -- As…aws.amazon.com