It was evening, and I had nothing to do. I started thinking about creating something new, innovative, and valuable for the open-source community. Then I remembered that many years ago, one of my clients wanted a self-hosted password manager for his small IT consulting company, and we considered using Rattic. However, around the time we installed and tested it, the repository was archived. I tried to find alternatives, but didn’t succeed. To be honest, I don’t even remember how that story ended.

This time, I looked into popular password managers and found them very expensive. For startups, the cost is usually around $4–5 per user per month, billed annually. That means for 40 team members, the price comes to $5 × 40 × 12 = $2,400 a year. On the other hand, some popular password managers offer self-hosted solutions, but they require running everything through Docker Compose. Personally, I don’t believe Docker Compose is a good choice for production — I call it “run and pray.” You still have to maintain the underlying infrastructure, manage Docker containers, and keep Docker and Docker Compose themselves up to date. That sounded like a headache I wasn’t willing to take on.

Long story short - challenge accepted!

The first design was simple, using an API gateway, Lambda functions, and Parameter Store.

After I built the first version, I ran into more questions and challenges:

- Password sharing within a team — requires storing some metadata somewhere to know who shared what and with whom.
- Defining ownership of a secret — how should that work? Maybe just by adding a prefix, e.g. /JohnSnow/secret1, where /JohnSnow/ is the owner.
- Most important and most challenging — secret encryption.

For sharing, I already had an idea: just use DynamoDB to store metadata. It’s serverless and cheap, even if the project grows to a hundred users. Encryption, however, raised bigger issues. I considered several approaches:

- Encrypt with a private key — but how would sharing work then?
- Encrypt with a passphrase — but if you share the secret, you also need to share the passphrase. I didn’t like that idea.
- Use a private key per group (e.g., managers, team1, team2, etc.) — but then, if you want to share an existing password with another group, you’d need to re-encrypt it with their group key. That also felt wrong.

At that point, I realized my password manager idea wasn’t going to work, and I gave up on it. In the meantime, I shifted my attention to preparing for the AWS Developer exam — and that’s where I discovered Cognito Identity Pools.
Amazon Cognito identity pools provide temporary AWS credentials for users who are guests (unauthenticated) and for users who have been authenticated and received a token. An identity pool is a store of user identifiers linked to your external identity providers.

And it's bingo! I can use the identity pool to get temporary credentials on the client (meaning the browser) and encrypt/decrypt passwords using a KMS key. Everything works from the browser, at the client level.

And the next plan was this: I created a Cognito User Pool and an Identity Pool, then updated the KMS key policy to allow only kms:Encrypt and kms:Decrypt for Cognito. After receiving a JWT token, I used it to make an API call to KMS and successfully encrypted/decrypted a password.

To handle sharing and ownership, I added DynamoDB. Each Lambda calls a metadata Lambda to retrieve ownership information. Everything seemed to be working, but I still had a feeling that something wasn’t quite right.

Should I be using Secrets Manager? I initially skipped this idea because at $0.40 per secret, the costs would add up quickly. My goal was to keep everything within the AWS Free Tier.

Should I be using Parameter Store? What’s the real benefit? To me, it just adds another AWS API call for every Lambda execution, which impacts performance. So I decided to drop Parameter Store entirely and keep everything in DynamoDB.

Now I was confident I was on the right track. I created a minimal POC of the application where I could create, delete, and retrieve data from DynamoDB. It worked fast and securely. Later, I added backend functionality to manage Cognito users and groups. I decided to use one Lambda function per piece of functionality, since that approach is easier to maintain. In total, I ended up with 15 Lambda functions — some to manage secrets in DynamoDB, and others to handle users and groups in Cognito.

Frontend:

Then it was time to think about the frontend. The only requirement was that it should support building as a static website. I didn’t spend much time — I just went ahead and chose React. The first version was very simple; about a month later, I changed the theme to black by default and added support for switching it to white.

Security:

- Each user has MFA enabled by default.
- You can whitelist or block the countries you want to provide access to.
- To make API calls to the backend, you must provide a JWT token. The token is generated by Cognito and validated twice: by the API Gateway and by a Lambda function.

Penetration testing:

Of course, I was interested in penetration testing and asked a security engineer to try to hack the application. And guess what? She managed to find one vulnerability: to be more specific, a regular user could make an API call using their credentials and perform actions that should be available only to the admin user. The fix was easy: I just added a check in the Lambdas to verify that the caller is in the Admin group.

Infrastructure as code:

Since I’ve spent most of my career as a DevOps engineer, I wanted to make the whole setup as simple as possible using an IaC tool. I started with AWS CDK — I didn’t have much experience with it, so it seemed like a good opportunity to learn.
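To give a feel for that first experiment, here is a rough sketch of what one Lambda behind an API Gateway route, backed by a DynamoDB table, could look like in CDK v2 (TypeScript). The construct names, file paths, table schema, and runtime are my own illustrative assumptions, not the project's actual code:

```typescript
// Hypothetical CDK v2 sketch: one DynamoDB table, one Lambda, one API route.
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

export class RunaVaultStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Table for secrets and sharing metadata (schema is an assumption)
    const secretsTable = new dynamodb.Table(this, 'SecretsTable', {
      partitionKey: { name: 'user_id', type: dynamodb.AttributeType.STRING },
      sortKey: { name: 'site', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST, // pay only for what you use
    });

    // One Lambda per piece of functionality, e.g. "create secret"
    const createSecretFn = new lambda.Function(this, 'CreateSecretFn', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/create_secret'),
      environment: { TABLE_NAME: secretsTable.tableName },
    });
    secretsTable.grantReadWriteData(createSecretFn);

    // API Gateway route in front of the Lambda
    const api = new apigateway.RestApi(this, 'RunaVaultApi');
    api.root
      .addResource('secrets')
      .addMethod('POST', new apigateway.LambdaIntegration(createSecretFn));
  }
}
```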
But soon I realized I had been a bit too optimistic: setting everything up with CDK would take much longer than I expected. So I switched to Terraform instead. In the first versions, I used public Terraform modules for Lambda, API Gateway, DynamoDB, and other resources. But then I asked myself: do I really want to keep all these module versions up to date? The answer was no. To avoid that headache, I decided to rewrite everything from scratch. I also moved repeatable code (like JWT token validation in Lambdas) into Lambda layers.

A Lambda layer is a .zip file archive that contains supplementary code or data. Layers usually contain library dependencies, a custom runtime, or configuration files. There are multiple reasons why you might consider using layers:

- To reduce the size of your deployment packages. Instead of including all of your function dependencies along with your function code in your deployment package, put them in a layer. This keeps deployment packages small and organized.
- To separate core function logic from dependencies. With layers, you can update your function dependencies independent of your function code, and vice versa. This promotes separation of concerns and helps you focus on your function logic.
- To share dependencies across multiple functions. After you create a layer, you can apply it to any number of functions in your account. Without layers, you need to include the same dependencies in each individual deployment package.
- To use the Lambda console code editor. The code editor is a useful tool for testing minor function code updates quickly. However, you can’t use the editor if your deployment package size is too large. Using layers reduces your package size and can unlock usage of the code editor.
- To lock an embedded SDK version. The embedded SDKs may change without notice as AWS releases new services and features. You can lock a version of the SDK by creating a Lambda layer with the specific version needed. The function then always uses the version in the layer, even if the version embedded in the service changes.
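In my case, the main thing the layer shares is the JWT-validation helper. As a minimal sketch of what such a shared module could look like, assuming Node.js Lambdas and the aws-jwt-verify library (the module name, environment variables, and token use are illustrative assumptions, not necessarily the project's actual code):

```typescript
// Hypothetical shared module that lives in a Lambda layer
// (packaged under nodejs/node_modules/ so every function can import it).
import { CognitoJwtVerifier } from "aws-jwt-verify";

// Pool and client IDs are assumed to arrive via environment variables.
const verifier = CognitoJwtVerifier.create({
  userPoolId: process.env.USER_POOL_ID!,
  tokenUse: "id", // assumption: the frontend sends the Cognito ID token
  clientId: process.env.USER_POOL_CLIENT_ID!,
});

// Verifies the JWT that API Gateway already checked once, and returns its claims.
export async function verifyToken(authHeader?: string) {
  const token = (authHeader ?? "").replace(/^Bearer\s+/i, "");
  return verifier.verify(token); // throws if the token is invalid or expired
}

// The fix from the penetration test: admin-only Lambdas also check group membership.
export function requireAdmin(claims: { "cognito:groups"?: string[] }) {
  if (!claims["cognito:groups"]?.includes("Admin")) {
    throw new Error("Forbidden: caller is not in the Admin group");
  }
}
```

Because a layer's contents are mounted under /opt at runtime (with nodejs/node_modules on the module path for Node.js runtimes), every function that has the layer attached can import this module without bundling its own copy.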
On a diagram it looks like this: one file, or a whole library, can be shared across multiple functions.

And the Terraform main.tf looked like:

```hcl
module "runa_vault" {
  source = "../modules/"

  domain_name     = "runavault.example.com"
  frontend_domain = "runavault.example.com"
  api_domain      = "api.runavault.example.com"
  cognito_domain  = "auth.runavault.example.com"

  geo_restriction_type      = "whitelist"
  geo_restriction_locations = ["UA", "GB"]

  cognito_groups = ["Admin", "Users", "Managers"]

  cognito_users = {
    "admin" = {
      email       = "admin@example.com"
      groups      = ["Admin"]
      given_name  = "Admin"
      family_name = "Admin"
    }
    "manager" = {
      email       = "manager@example.com"
      groups      = ["Managers", "Users"]
      given_name  = "Manager"
      family_name = "Manager"
    }
    "alone" = {
      email       = "alone_user@example.com"
      groups      = []
      given_name  = "alone"
      family_name = "user"
    }
  }
}
```

You need to choose domains for the frontend, the backend, and Cognito, and create an initial user or users with their groups defined. The full documentation is in the repository. Once you’ve created the config, simply run "terraform apply", and all users will get notifications to activate their accounts and enable MFA.

If you have any questions, please feel free to comment here or create issues/discussions on GitHub: https://github.com/RunaVault/RunaVault

I would be very thankful for your "stars" on the repo.