At my current gig, we’ve got 50+ services, 6+ environments, and a rough count of 3.5k parameters used by services and humans across our environments. We used to use Vault, but after a year with it, we finally settled on Parameter Store since we’re hosted on AWS. Parameter Store was a clear winner for us, as it gives us all the security we want by leveraging IAM policies and KMS keys. And… I didn’t have to run Vault myself.
If you’re considering migrating or have already started using Parameter Store, here are a few tips from my experience managing secrets with it.
One really annoying bit is that I can’t easily search for parameters across the whole Parameter Store. If you’ve tried to do this in the UI, it’s a bit of a pain between searching by regex and/or path. For example, I can’t search for all parameters containing “db_” (i.e. *db_*).
A useful workaround is to do a quick search of Parameter Store from the CLI. It requires fetching all the parameters, but it lets you apply a plain regex across the available paths, which you can’t quite do in the UI. It’s a little slow, but it gives you a quick preview of the secrets you have available:
aws ssm describe-parameters \
  --output text \
  | egrep '^PARAMETERS' \
  | awk '{print $5}' \
  | egrep $REGEX
/dev/myApp/foo
/dev/myApp/bar
...
There’s probably a better way of doing this.
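One option that might work better: assuming a reasonably recent AWS CLI, `get-parameters-by-path` can list every parameter name in one (paginated) call, and you can filter client-side. A sketch, where `filter_params` is a made-up helper and the demo uses canned names:

```shell
#!/bin/bash
# filter_params applies an extended regex to newline-separated parameter names.
filter_params() { grep -E "$1"; }

# In practice you would feed it from SSM (not run here):
#   aws ssm get-parameters-by-path --path / --recursive \
#     --query 'Parameters[*].Name' --output text \
#     | tr '\t' '\n' | filter_params 'db_'

# Local demo with canned names:
printf '/dev/myApp/db_host\n/dev/myApp/api_key\n' | filter_params 'db_'
# → /dev/myApp/db_host
```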
Try to avoid scattering your DB credentials among different keys in Parameter Store; keep them together. It’s easier to audit which users/roles have accessed the credentials when there’s only one place to get them from.
/$env/databases/$appDb/host     = mydb.somewhereinaws.com
                      /user     = read-only
                      /password = 'a good password'
                      /port     = 5432
                      /dbname   = 'orders'
                      /scheme   = 'postgres'
connection_string = $scheme://$user:$password@$host:$port/$dbname
This would let you control who has access to which sets of databases via IAM policies. You can also script it to automatically log you into a database without you ever needing to see the password. Here’s an example of how you might do it in bash:
dbInfo=$(aws ssm get-parameters \
  --names "/dev/dbs/$dbName/database" \
          "/dev/dbs/$dbName/host" \
          "/dev/dbs/$dbName/password" \
          "/dev/dbs/$dbName/port" \
          "/dev/dbs/$dbName/scheme" \
          "/dev/dbs/$dbName/user" \
  --region "$REGION" \
  --with-decryption \
  --query 'Parameters[*].Value' \
  --output text)

# values come back tab-separated; this assumes they arrive in the same
# (alphabetical) order as the names listed above
IFS=$'\t' read -r database database_host database_pass database_port database_scheme database_user <<< "$dbInfo"

if [ "$database_scheme" = "mysql" ]; then
  MYSQL_PWD=$database_pass mysql -h "$database_host" -P "$database_port" -u "$database_user" "$database"
else
  PGPASSWORD=$database_pass psql -h "$database_host" -p "$database_port" -U "$database_user" -d "$database"
fi
This is the convention we’ve used for our parameters which has scaled ok.
/$environment_name/databases/$database_name/{host,port,pass,user}
/$environment_name/databags/$service_name/{all,my,server,creds}
/$environment_name/other_sensitive_info/{foo,bar,baz}
We borrowed databags from our days using Chef cookbooks & encrypted data bags. It basically just means that there’s a bunch of parameters under keys belonging to a service.
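To keep names consistent with this convention, a tiny helper can build them for you. This is just a sketch; `param_path` and the example service/key names are hypothetical:

```shell
#!/bin/bash
# param_path builds a name under the convention: /$env/databags/$service/$key
param_path() {
  printf '/%s/databags/%s/%s\n' "$1" "$2" "$3"
}

# You could then create a service secret like so (not run here):
#   aws ssm put-parameter --name "$(param_path dev myService api_token)" \
#     --type SecureString --value 's3cr3t' --overwrite
param_path dev myService api_token
# → /dev/databags/myService/api_token
```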
This lets us scope access in a few different dimensions via IAM policies. We can say that a developer should have access to database secrets in a dev environment, but only service secrets in prod.
{ "Effect": "Allow", "Action": [ "ssm:GetParameters" ], "Resource": [ "arn:aws:ssm:us-east-2:xxxx:parameter/dev/databases/*", "arn:aws:ssm:us-east-2:xxxx:parameter/prod/databags/myService/*" ] }
We used to use the certificate cookbook to install certs on our hosts, which required us to keep our certificates stored in this format:
{ "id": "mail", "cert": "-----BEGIN CERTIFICATE-----\nMail Certificate Here...", "key": "-----BEGIN PRIVATE KEY\nMail Private Key Here...", "chain": "-----BEGIN CERTIFICATE-----\nCA Root Chain Here..." }
But you can’t stuff this JSON into a single parameter due to the size limit of 4,096 characters. This is more of a nuisance when you’re migrating secrets over, but it’s easy to work around. It will just, unfortunately, require you to update any scripts that expect the previous format of the secret.
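One way to work around it: store each PEM part (cert, key, chain) as its own SecureString parameter instead of one oversized JSON blob. The sketch below only prints the `put-parameter` commands (a dry run); the `/dev/certs` paths and file names are hypothetical:

```shell
#!/bin/bash
# cert_put_commands prints one put-parameter command per PEM part.
# Drop the echo (or pipe into sh) once you trust the output.
cert_put_commands() {
  local name="$1"
  for part in cert key chain; do
    echo "aws ssm put-parameter --name /dev/certs/$name/$part" \
         "--type SecureString --value file://$name.$part.pem --overwrite"
  done
}

cert_put_commands mail
```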
Password rotation is an inherently hard problem, but one strategy is to use pointers (or what AWS now calls labels) to secrets. For example, let’s say you want to rotate an encryption key that multiple services rely on.
You could define your structure like:
/dev/databags/my-key/current = 2    (index of the actual secret)
/dev/databags/my-key/1       = 'secret 2018'
/dev/databags/my-key/2       = 'secret 2019'
Your applications would need to know to follow the pointer from current and read the intended secret. From there, you’d have to regenerate a config file and bounce your service.
Fortunately, you can now do this natively on AWS with labels on parameters. Rotating passwords can also be done using Lambda and Secrets Manager; check out this walkthrough.
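With native labels, consumers fetch a parameter by a name:label selector instead of following a hand-rolled pointer. A sketch, assuming you’ve already attached a current label with aws ssm label-parameter-version; the function just prints the command it would run:

```shell
#!/bin/bash
# get_labeled_secret prints a get-parameter command that resolves a label
# (the "name:label" selector). Drop the echo to actually run it.
get_labeled_secret() {
  local name="$1" label="$2"
  echo "aws ssm get-parameter --name ${name}:${label}" \
       "--with-decryption --query Parameter.Value --output text"
}

# Dry run against the hypothetical key from above:
get_labeled_secret /dev/databags/my-key current
```

To rotate, you re-point the label at a new version, and consumers pick it up without any path changes.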
The docs indicate that parameters are versioned, and they are, but only while they exist. If you accidentally delete a parameter, its history is gone with it.
The consensus seems to be that you should export your Parameter Store database to S3 or DynamoDB, but I haven’t come across tools in this space yet.
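In the meantime, even a cron-able snapshot is better than nothing. A sketch that just prints the two commands involved (the bucket name is hypothetical, and note the export contains decrypted values, so the bucket needs to be encrypted and tightly scoped):

```shell
#!/bin/bash
# backup_cmds prints a two-step snapshot: dump every parameter to JSON,
# then copy the dump to S3. Dry run: pipe into sh to actually execute.
backup_cmds() {
  echo 'aws ssm get-parameters-by-path --path / --recursive --with-decryption --output json > params-backup.json'
  echo 'aws s3 cp params-backup.json s3://my-param-backups/backup-$(date +%F).json'
}

backup_cmds
```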
secretly — for exporting secrets into your environment, written in Python — https://github.com/energyhub/secretly
parameter-store-exec — similar to secretly, written in Go — https://github.com/cultureamp/parameter-store-exec
confd — for rendering secrets into config files — https://github.com/kelseyhightower/confd
consul-template-plugin-ssm — for use as part of a consul-template — https://github.com/hellofresh/consul-template-plugin-ssm
chamber — for managing secrets, including Parameter Store — https://github.com/segmentio/chamber
Originally published at www.intricatecloud.io on October 20, 2018.