Google Workload Identity Federation, OIDC, Dex and TypeScript: Connect your B2B to Gemini and Vertex

Written by josejaviasilis | Published 2025/09/26
Tech Story Tags: gcp | google-gemini | dex | docker | workload-identity-provider | openid-connect | google-artifact-registry | hackernoon-top-story

TL;DR: This guide will teach you how to set up an identity provider using Dex (open source) and generate OIDC tokens against it. It covers how to pull images from Artifact Registry and connect to Vertex AI, all through a VPS (Hetzner).

This guide will be thorough and skimmable so you can access the pieces you want directly. There will be an accompanying GitHub repo (Coming soon) that you can use to go through this.

Index:

  • About

  • Errata

  • What This Guide Covers and Does Not.

  • What about LLMs and tools? (Gemini, Claude, Claude Code, Cursor, etc.)

  • Definitions

  • Requirements

  • What is Google Cloud Workload Identity Federation?

  • Architecture Overview

  • Configure a custom OIDC Provider (OpenID Provider) using Dex, an identity provider.

  • How to connect Docker to pull images with it. There's also some JavaScript code we'll use to connect directly to it and issue tokens.


About

Setting up an enterprise Google Cloud Platform (GCP) account brings some challenges (like accessing Gemini via Vertex AI from an external service such as Vercel, DigitalOcean, or Hetzner).

Previously, you’d create a service account that would hold the permissions to the services (e.g., Vertex AI) and generate a long-lived key (no expiration date) that you’d use to connect to GCP.

This isn’t recommended: if the key is compromised, it remains valid indefinitely and is difficult to rotate.

Google recommends that you use short-lived tokens through Workload Identity Federation and OpenID Connect to communicate with the services. You still use a service account, but you generate a short-lived token instead.

The caveat is that this configuration isn’t straightforward. Documentation is scarce, and it’s a mess. It took me a month and a half of part-time work to get it right.

Don’t worry: this guide will teach you how to do this step by step.

We will configure an Open ID Provider (Dex) that will be used to issue tokens, and will connect Docker and our app so we can pull images from Artifact Registry and connect to Vertex AI, all through a VPS (Hetzner).

Errata

I’ve done this to the best of my abilities. Feel free to reach out to suggest improvements and/or fix mistakes!

What This Guide Covers and Does Not.

Covers:

  • Setting up an external identity provider using Dex that authenticates against Google Cloud.
  • Pulling an image from Artifact Registry using Docker and OpenID Connect/Workload Identity Federation.
  • Issuing an OIDC token from a NodeJS application using google-auth-library.

Doesn’t cover:

  • How to Initialize Servers in an External VPS or Google Cloud.
  • How to upload files to your VPS.
  • How to use IaC (Infrastructure as Code) to set up Google Cloud Services.
  • How to set up GitHub actions to communicate with GCP.

This guide will cover how to set up an identity provider using Dex (Open Source), and generate OIDC tokens against it, so you can connect through any other provider in the world.

What about LLMs and tools? (Gemini, Claude, Claude Code, Cursor, etc.)

They helped a lot, but couldn’t figure it out on their own. They didn’t know how to mix the logic required for Google Workload Identity Federation with Dex.

Point the LLM to this page to help you set it up.

Definitions

Google Cloud Platform (GCP)

The cloud solution that we will use to authenticate with OIDC. Shortened as GCP for this article.

Workload Identity Federation:

It is a way for workloads (apps, services, CI/CD pipelines, VMs, containers, etc.) running outside of a cloud provider to authenticate securely to that provider’s APIs without using long-lived service account keys.

Instead of giving your app a static key file (which is a significant security risk if leaked), WIF lets the cloud provider trust an external identity provider (IdP) such as GitHub Actions, GitLab, Kubernetes, or any OIDC/SAML compatible provider.

In other words, it generates tokens with specific, scoped permissions (e.g., Cloud Storage get, list, upload) that you can safely send with your requests to access the services.
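To make the exchange concrete, here is a sketch (not from this guide's repo) of the request body that the Workload Identity Federation flow sends to Google's STS endpoint. The endpoint and field names come from the STS API; the project number, pool, and provider are placeholder values that we define later in this guide.

```typescript
// Sketch of the STS token-exchange request used by Workload Identity
// Federation. The endpoint and field names come from Google's STS API;
// the project number, pool, and provider values are placeholders.
const STS_TOKEN_URL = "https://sts.googleapis.com/v1/token";

function buildStsExchangeBody(opts: {
  projectNumber: string;
  poolId: string;
  providerId: string;
  subjectToken: string; // the JWT issued by our identity provider (Dex)
}): Record<string, string> {
  // The audience must name the exact workload identity pool provider.
  const audience =
    `//iam.googleapis.com/projects/${opts.projectNumber}` +
    `/locations/global/workloadIdentityPools/${opts.poolId}` +
    `/providers/${opts.providerId}`;
  return {
    grantType: "urn:ietf:params:oauth:grant-type:token-exchange",
    audience,
    scope: "https://www.googleapis.com/auth/cloud-platform",
    requestedTokenType: "urn:ietf:params:oauth:token-type:access_token",
    subjectToken: opts.subjectToken,
    subjectTokenType: "urn:ietf:params:oauth:token-type:jwt",
  };
}

const stsBody = buildStsExchangeBody({
  projectNumber: "1016670781645",
  poolId: "hetzner-pool",
  providerId: "hetzner-provider",
  subjectToken: "<jwt-from-dex>",
});
```

A POST of this body to the STS endpoint returns a federated access token, which can then be traded for short-lived service-account credentials.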

Identity Providers

It is a central hub that a user or machine (our code) can use to “log in” or authenticate against one or more services. In our case, it’s the system that will be used to generate a valid token that will be exchanged with GCP to give access to our applications.

You’ll see later on how it works in detail.

Example of Identity Providers

Developer Platforms (OIDC-native): These have specific endpoints that you can pass to GCP and generate the required tokens automatically.

  • GitHub (GitHub Actions OIDC tokens)
  • GitLab (CI/CD pipelines with OIDC)
  • Bitbucket (pipelines OIDC)

Other providers:

  • Okta
  • Auth0
  • Azure Active Directory
  • Google Identity Platform
  • Ping Identity
  • AWS IAM Identity Center
  • Keycloak
  • Dex (The one that we will be using)

Dex

https://dexidp.io/

This is the identity provider that we will use. We configure it with a username and password, and through a REST endpoint, we call it using our tool of choice (cURL, wget, JavaScript’s fetch/XHR/axios, etc.). It will then return a JWT that we can use to communicate with GCP.
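As a sketch of that REST call: Dex's token endpoint lives at `<issuer>/token` and, with the password connector we configure later, accepts the OAuth2 password grant. All concrete values below are hypothetical placeholders.

```typescript
// Hypothetical sketch of requesting a JWT from Dex with the OAuth2
// password grant. The /token path follows Dex's OIDC discovery document;
// all concrete values passed in are placeholders.
function buildDexTokenRequest(opts: {
  issuer: string;       // e.g. "https://auth.yourdomain.com"
  clientId: string;     // the static client ID we register in Dex
  clientSecret: string;
  email: string;        // the static password user's email
  password: string;
}) {
  const body = new URLSearchParams({
    grant_type: "password",
    username: opts.email,
    password: opts.password,
    scope: "openid email groups",
  });
  return {
    url: `${opts.issuer}/token`,
    method: "POST" as const,
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      // Client credentials sent via HTTP Basic auth.
      Authorization:
        "Basic " +
        Buffer.from(`${opts.clientId}:${opts.clientSecret}`).toString("base64"),
    },
    body,
  };
}
```

The JSON response includes an `id_token` field: that is the JWT we later hand to GCP.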

Why?

  • It’s open source and can be freely hosted.
  • It only requires a single, simple YAML file.
  • You don’t need Kubernetes to run it (although it is popular in that environment).

We will cover how to implement a connection between Google Cloud and an external source (such as a VPS) using Google Cloud Workload Identity Federation, Dex (an identity provider), and more.

Hetzner

https://www.hetzner.com/

It’s the cloud provider chosen for its impressive price/performance ratio. Plus, it has a very simple UI and IaC (Infrastructure as Code) plugins that make it a breeze to set up.

Doppler

https://www.doppler.com/

It's a Secrets Manager. Something similar to HashiCorp Vault or AWS Secrets Manager.

Doppler allows us to store our environment variables in different projects and branches, which enables all our team members to share them. We can protect, fork, isolate, and copy them without sharing .env files across teams.

Even for working solo, it has a very generous free tier that will cover all your needs. I’ve been using it since last year to manage all my passwords for all my services.

Although entirely optional, this post assumes that you’re using Doppler to load the environment variables (you can load a .env file alongside the commands and you’ll be good to go).

Requirements

  1. A Google Cloud Platform (GCP) account

  2. Have a server you can connect to (SSH): Virtual Machine - VPS, Bare Metal, etc.

  3. A domain name (.com, .ai, .app, .io etc.) pointing to your server.

  4. Basic Knowledge of:

    1. Docker
    2. Google CLI
    3. Bash
  5. I assume:

    1. You have a working GCP project.
    2. You have Artifact Registry enabled with a Docker image.
    3. You have installed Docker on your VPS.

Install the Google CLI on your machine

Official Download Installation Link:

https://cloud.google.com/sdk/docs/install

Our first task is to configure the required permissions and enable the services. We use the Google CLI for this.

These are the locations from which you can install it:

(Note that additional steps are omitted from this article.)

Windows:

Download the installer here

MacOS:

Install Brew (in case you haven’t):

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Install Google CLI:

brew install --cask gcloud-cli

Linux:

(Download the file)

curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-x86_64.tar.gz

Extract it:

tar -xf google-cloud-cli-linux-x86_64.tar.gz

Create a Project in Google Cloud or use an existing one

Go to: https://console.cloud.google.com/

Select your account and proceed to create a new project if you haven’t already done so.

Init Google Cloud CLI

After installing it, open a terminal and run:

gcloud init 

This will begin the connection process.

The CLI operates by linking your Google account. This account must have the permissions required to perform administrative operations. In other words, if you’re just starting out, use your main account.

Choose the project you’ve created.

After authenticating, the CLI will ask you to choose the project you’d like to work on. This is for convenience.

This can be changed later on with:

gcloud config set project PROJECT_ID

OR, you can always pass the --project PROJECT_ID flag to all the gcloud commands.

Enable the Services:

You will need to enable these.

In your terminal, run:

gcloud services enable \
  iamcredentials.googleapis.com \
  sts.googleapis.com \
  iam.googleapis.com \
  cloudresourcemanager.googleapis.com

Environment Variables

Used across the entire post. Replace these with your own.

export PROJECT_ID="spiritual-slate-445211-i1"
export PROJECT_NUMBER="1016670781645"
export POOL_ID="hetzner-pool"
export PROVIDER_ID="hetzner-provider"
export SERVICE_ACCOUNT_ID="hetzner"
export SERVICE_ACCOUNT="$SERVICE_ACCOUNT_ID@${PROJECT_ID}.iam.gserviceaccount.com"
export DEX_ISSUER="https://auth.yourdomain.com"
export ROLE_NAME="hetzner_role"

You can get the Project ID and Project Number from the main page. Select your project from the picker in the upper-left corner.

Configure Workload Identity Federation

How it works in GCP:

  1. You create an identity pool that will have a series of providers or clients connecting to it.
  2. These providers are what Dex’s token will authenticate against.
  3. We restrict access using an “attribute mapping”: GCP parses the JWT from Dex and compares its fields against the mapping. If one of them doesn’t match, the token is rejected.

Think of the pool as a group of possible authentication methods that you let GCP know that other 3rd parties can authenticate against.
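As a mental model only (GCP performs this check server-side), the attribute mapping behaves roughly like this toy sketch:

```typescript
// Toy simulation of the provider attribute mapping
// "google.subject=assertion.sub,attribute.email=assertion.email,
//  attribute.groups=assertion.groups". GCP evaluates this server-side;
// this sketch only illustrates the idea.
type DexClaims = { sub?: string; email?: string; groups?: string[] };

function applyAttributeMapping(claims: DexClaims) {
  // google.subject is mandatory; a token without a 'sub' claim is rejected.
  if (!claims.sub) throw new Error("rejected: JWT has no 'sub' claim");
  return {
    "google.subject": claims.sub,
    "attribute.email": claims.email ?? "",
    "attribute.groups": (claims.groups ?? []).join(","),
  };
}
```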

Create the Pool

Run this command:

gcloud iam workload-identity-pools create "$POOL_ID" \
  --project="$PROJECT_ID" \
  --location="global" \
  --display-name="Hetzner Workload Identity Pool" \
  --description="Pool for Hetzner VPS to access GCP services via OIDC"

Example:

gcloud iam workload-identity-pools create "hetzner-pool" \
  --project="spiritual-slate-445211-i1" \
  --location="global" \
  --display-name="Hetzner Workload Identity Pool" \
  --description="Pool for Hetzner VPS to access GCP services via OIDC"

Example PowerShell:

gcloud iam workload-identity-pools create "hetzner-pool" `
  --project="spiritual-slate-445211-i1" `
  --location="global" `
  --display-name="Hetzner Workload Identity Pool" `
  --description="Pool for Hetzner VPS to access GCP services via OIDC"

Create the OIDC Provider

This will be the central connecting point with Google Cloud. This will be the entry point that validates the JWT from Dex.

Be careful with the --attribute-mapping flag. It tells GCP which JWT claims to compare for OIDC. While testing, I recommend starting broad and then narrowing down the permission scopes as you become more knowledgeable.

# Create OIDC provider pointing to your DEX instance
gcloud iam workload-identity-pools providers create-oidc "$PROVIDER_ID" \
  --project="$PROJECT_ID" \
  --location="global" \
  --workload-identity-pool="$POOL_ID" \
  --display-name="Hetzner Dex OIDC Provider" \
  --description="OIDC provider using Dex for Hetzner VPS authentication" \
  --attribute-mapping="google.subject=assertion.sub,attribute.email=assertion.email,attribute.groups=assertion.groups" \
  --issuer-uri="$DEX_ISSUER"

Example:

# Create OIDC provider pointing to your DEX instance
gcloud iam workload-identity-pools providers create-oidc "hetzner-provider" \
  --project="spiritual-slate-445211-i1" \
  --location="global" \
  --workload-identity-pool="hetzner-pool" \
  --display-name="Hetzner Dex OIDC Provider" \
  --description="OIDC provider using Dex for Hetzner VPS authentication" \
  --attribute-mapping="google.subject=assertion.sub,attribute.email=assertion.email,attribute.groups=assertion.groups" \
  --issuer-uri="https://auth.yourdomain.com"

This will inform GCP that the JWT’s subject and email must match the mapped attributes for the token to be considered valid.

Service Account Configuration

Yes, you still need to create a service account with the final permissions that will be used to access your GCP resources.

The difference is that we will impersonate this account using short-lived tokens versus using the permanent service account key.
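The impersonation step boils down to one API call: the federated STS token is presented to the IAM Credentials API, which returns a short-lived access token for the service account. A sketch of that request (endpoint and fields from the generateAccessToken API; the email is this guide's example account):

```typescript
// Sketch of the IAM Credentials generateAccessToken request that swaps a
// federated token for a short-lived service-account access token.
function buildImpersonationRequest(serviceAccountEmail: string) {
  return {
    url:
      "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/" +
      `${serviceAccountEmail}:generateAccessToken`,
    body: {
      scope: ["https://www.googleapis.com/auth/cloud-platform"],
      lifetime: "3600s", // short-lived: the token expires after one hour
    },
  };
}

const impersonation = buildImpersonationRequest(
  "hetzner@spiritual-slate-445211-i1.iam.gserviceaccount.com"
);
```

The federated token from STS goes in this request's Authorization header; the response contains the access token your workload actually uses against GCP services.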

Create a service account.

gcloud iam service-accounts create "$SERVICE_ACCOUNT_ID" \
  --project="$PROJECT_ID" \
  --display-name="Hetzner VPS Service Account" \
  --description="Service account for Hetzner VPS containers"

Example:

gcloud iam service-accounts create "hetzner" \
  --project="spiritual-slate-445211-i1" \
  --display-name="Hetzner VPS Service Account" \
  --description="Service account for Hetzner VPS containers"

PowerShell:

gcloud iam service-accounts create "hetzner" `
  --project="spiritual-slate-445211-i1" `
  --display-name="Hetzner VPS Service Account" `
  --description="Service account for Hetzner VPS containers"

Assigning the roles to the service account

You can assign the GCP’s predefined Roles by visiting this link:

https://cloud.google.com/iam/docs/roles-permissions

I’ve opted for a more granular approach which sets the permissions directly to a custom role, which helps me reduce the attack surface area.

OR - Assigning granular roles or permissions:

This approach takes some back and forth with GCP (403 Forbidden errors here and there). But you’ll have a more secure infra in the end.

You can find the granular permissions in this link:

https://cloud.google.com/iam/docs/permissions-reference

We start by creating a custom role.

Create a custom role for the service account - with minimum permissions

To create a custom role, we need a YAML file that will hold each permission:

Create a file named “roles-hetzner.gcp.yml” and copy and paste the following (Note this file lives locally in your machine or repo)

# To update the role:  gcloud iam roles update hetzner_role --project=spiritual-slate-445211-i1 --file=./roles-hetzner.gcp.yml
title: Hetzner GCP Roles
description: |
  This policy ensures that the Hetzner VPS has the required permissions to access all
  the Google Cloud services needed for running Docker containers with the same
  functionality as when they were deployed on Cloud Run.
stage: GA
# https://cloud.google.com/iam/docs/permissions-reference
includedPermissions:
  # === IAM Permissions ===
  # For service account creation and management
  - iam.serviceAccounts.getAccessToken
  - iam.serviceAccounts.signBlob # Required for signed URLs

  # === Artifact Registry Permissions ===
  # For Docker image storage and management
  - artifactregistry.repositories.get
  - artifactregistry.repositories.list
  - artifactregistry.repositories.downloadArtifacts
  - artifactregistry.packages.get
  - artifactregistry.packages.list
  - artifactregistry.versions.get
  - artifactregistry.versions.list
  - artifactregistry.dockerimages.get
  - artifactregistry.dockerimages.list
  - artifactregistry.tags.get
  - artifactregistry.tags.list
  - artifactregistry.files.get
  - artifactregistry.files.list
  - artifactregistry.files.download
  - resourcemanager.projects.get
  - artifactregistry.attachments.get
  - artifactregistry.attachments.list

  # === Cloud Storage Permissions ===
  # For bucket and object operations
  - storage.objects.create
  - storage.objects.delete # Optional: only include if you need to delete images
  - storage.objects.get
  - storage.objects.list
  - storage.objects.update
  - storage.objects.getIamPolicy
  - storage.objects.setIamPolicy

  # === Vertex AI Permissions ===
  # For AI/ML platform operations
  - aiplatform.endpoints.predict
  - aiplatform.endpoints.get
  # Permissions for operations management
  - aiplatform.operations.list

And then execute the command:

gcloud iam roles create "$ROLE_NAME" --project="$PROJECT_ID" --file="./roles-hetzner.gcp.yml"

Example:

gcloud iam roles create hetzner_role --project="spiritual-slate-445211-i1" --file="./roles-hetzner.gcp.yml"

Refresher:

The --file parameter points to a path relative to your terminal’s current working directory (for example, /Users/joseasilis/Documents/programming/alertdown/libs/infrastructure/src/pulumi).

Since the file is in the same location, we can do:

gcloud iam roles create "hetzner_role" --project="spiritual-slate-445211-i1" --file="roles-hetzner.gcp.yml"

Updating the role

To make changes to the role, update it like so:

gcloud iam roles update hetzner_role --project=spiritual-slate-445211-i1 --file="./roles-hetzner.gcp.yml"

How to get granular with it:

  1. Search for the service you’d like to use (e.g., Vertex AI) and click it.

  2. Using an LLM or the Editor/Viewer roles, ask for the roles you’d like to grant. Then search for the role directly and pick the permissions you want.

For example, I found out that the Vertex AI User role has the aiplatform.endpoints.predict permission that lets me call Gemini.

If you get a permission wrong, you will receive a 403 Forbidden from Google Cloud.

  3. Add the permission to the YAML file and update the role.

Enabling missing services

You can enable missing services by executing:

gcloud services enable <service-api>.googleapis.com

Attaching the role to the service account

gcloud iam service-accounts add-iam-policy-binding \
 "$SERVICE_ACCOUNT" \
 --project="$PROJECT_ID" \
 --role="projects/$PROJECT_ID/roles/$ROLE_NAME" \
 --member="serviceAccount:$SERVICE_ACCOUNT"

gcloud iam service-accounts add-iam-policy-binding \
 "hetzner@spiritual-slate-445211-i1.iam.gserviceaccount.com" \
 --project="spiritual-slate-445211-i1" \
 --role="projects/spiritual-slate-445211-i1/roles/hetzner_role" \
 --member="serviceAccount:hetzner@spiritual-slate-445211-i1.iam.gserviceaccount.com"

Connecting the Service Account to Workload Identity Federation and allowing Service Account Impersonation

Independent of the two steps above, we attach the roles/iam.workloadIdentityUser role to the service account itself, which allows identities from the pool to impersonate it. (You can opt to add the permissions directly instead.)

gcloud iam service-accounts add-iam-policy-binding \
  "$SERVICE_ACCOUNT" \
  --project="$PROJECT_ID" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/$PROJECT_NUMBER/locations/global/workloadIdentityPools/$POOL_ID/*"

Which becomes:

gcloud iam service-accounts add-iam-policy-binding \
  hetzner@spiritual-slate-445211-i1.iam.gserviceaccount.com \
  --project="spiritual-slate-445211-i1" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/1016670781645/locations/global/workloadIdentityPools/hetzner-pool/*"
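If you script this setup, the --member string is easy to get wrong. A small helper (hypothetical, mirroring the command above) makes its shape explicit:

```typescript
// Builds the principalSet member string used in the binding above. The
// trailing "/*" lets any identity in the pool impersonate the service
// account; narrow it to a specific subject for tighter control.
function principalSetForPool(projectNumber: string, poolId: string): string {
  return (
    `principalSet://iam.googleapis.com/projects/${projectNumber}` +
    `/locations/global/workloadIdentityPools/${poolId}/*`
  );
}
```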

Inside the VPS - Docker Compose

While I recommend you have a separate server for the OpenID provider (Dex), I decided to co-host my app and Dex in a single server to reduce costs.

Now we’re moving to the VPS (e.g., Hetzner). We will git pull (or scp) our scripts to the server (the Docker Compose file and some shell files), which will help us:

  1. Bootstrap the Dex service and set up the IDP.
  2. Authenticate against Artifact Registry with Docker.
  3. Set up a TLS certificate for our HTTPS domain.
  4. Configure Nginx with zero-downtime deployments for a Remix/React-Router app.

Docker Compose:

services:
  # Certificate management (initial + renewal)
  certbot:
    image: certbot/dns-cloudflare
    restart: unless-stopped
    volumes:
      # Map certbot data directly to avoid double nesting
      - ./certbot/conf:/etc/letsencrypt
      - ./certbot/www:/var/www/certbot
      - ./certbot/logs:/var/log/letsencrypt
      - ./certbot-scripts/certbot-manager.sh:/certbot-manager.sh:ro
      - /run/secrets/cloudflare.ini:/etc/cloudflare/cloudflare.ini:ro
    environment:
      - EMAIL=${CERTBOT_EMAIL:[email protected]}
    entrypoint: ['/bin/sh', '/certbot-manager.sh']
    networks:
      - shared-network

  nginx:
    image: nginx:stable-alpine
    restart: unless-stopped
    ports:
      - '80:80' # HTTP for certbot challenges and redirects
      - '443:443' # HTTPS for auth.mydomain.com
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certbot/www:/var/www/certbot:ro
      - ./certbot/conf:/etc/letsencrypt:ro
    depends_on:
      - dex
    networks:
      - shared-network

  # Dex OIDC server
  dex:
    build:
      context: ./dex
      dockerfile: Dockerfile
    restart: unless-stopped
    networks:
      - shared-network
    volumes:
      - ./dex:/etc/dex
      - ./dex/data:/data
      - ./dex/entrypoint.sh:/entrypoint.sh:ro
      - ./dex/config.yaml:/etc/dex/config.yaml:ro
    environment:
      - DOPPLER_TOKEN=${DOPPLER_TOKEN}
      - DEX_GCP_STATIC_CLIENT_ID=${DEX_GCP_STATIC_CLIENT_ID}
      - DEX_GCP_STATIC_CLIENT_SECRET=${DEX_GCP_STATIC_CLIENT_SECRET}
      - DEX_GCP_STATIC_PASSWORD_EMAIL=${DEX_GCP_STATIC_PASSWORD_EMAIL}
      - DEX_GCP_URL=${DEX_GCP_URL}
    healthcheck:
      test: ['CMD', 'wget', '-qO-', 'http://localhost:5556/healthz']
      interval: 5s
      timeout: 5s
      retries: 3
      start_period: 3s
    expose:
      - '5556'
    # Main application
  app:
    image: us-east1-docker.pkg.dev/myfirstproject/remix-app-artifact-repo-production/remix-app-production:latest
    restart: unless-stopped
    networks:
      - shared-network
    working_dir: /app/apps/remix-app-vite
    command: ['doppler', 'run', '--', 'bun', 'server.ts']
    expose:
      - '8080'
    environment:
      - DOPPLER_TOKEN=${DOPPLER_REMIX_TOKEN}
    healthcheck:
      test:
        [
          'CMD',
          'sh',
          '-c',
          'test ! -f /tmp/drain && curl -f --max-time 5 http://localhost:8080/health',
        ]
      interval: 5s
      timeout: 10s
      retries: 3
      start_period: 3s # Allow time for app startup
    labels:
      # Configure docker-rollout pre-stop hook for graceful shutdown
      - 'docker-rollout.pre-stop-hook=sleep 7'

networks:
  shared-network:
    driver: bridge

The Docker Compose setup is made up of 5 parts:

  1. Certbot - Used to generate a TLS certificate for us. We need to issue an HTTPS connection because Google will connect to our service and verify the authenticity of the request, and because it’s security 101. Nginx then handles the certificate. I’m using Cloudflare as a proxy. This version of Certbot connects to our Cloudflare account, manages the DNS verification for us, and generates the certificate automatically.
  2. Nginx - Will consume the TLS certificate for both: Dex (IDP), and the React Router app. It will also serve as a load balancer when I’m trying to perform zero-downtime deployments with Docker Rollout.
  3. Dex - The star of the show. It will create the Identity Provider that will issue our OIDC tokens that will be used to exchange for short-lived credentials to impersonate the service account. The beastly characteristic of Dex is that it is configured with a single YAML file.
  4. App - The NodeJS Remix/React Router Docker image hosted in Artifact Registry. It was built in a CI/CD pipeline and pushed there (outside the scope of this tutorial). We will configure Docker later on to pull it from Google using Workload Identity Federation and OIDC. It has a health check that Docker Rollout uses to kill the old container (see below).
  5. The startup shell script - docker compose up won’t cut it. We need to make sure certificates are issued first, and it was easier to handle the orchestration with a shell script. Some kung-fu was required to get Nginx up and running (Dex needed to boot before the app).

Docker Rollout (A bit outside scope, but I wanted to include it anyway)

It’s a single-file script created by Wowu https://github.com/Wowu/docker-rollout that helps us have zero-downtime Docker container updates without the need for Kubernetes.

To install, execute in the VPS the following:

# Create directory for Docker cli plugins
mkdir -p ~/.docker/cli-plugins

# Download docker-rollout script to Docker cli plugins directory
curl https://raw.githubusercontent.com/wowu/docker-rollout/main/docker-rollout -o ~/.docker/cli-plugins/docker-rollout

# Make the script executable
chmod +x ~/.docker/cli-plugins/docker-rollout

Using it is very simple: wherever we would run docker compose up for a service, we run docker rollout instead.

To deploy an updated image, pull the new image from Artifact Registry with docker compose pull app and then execute docker rollout app. It will automatically switch containers without downtime.

Your service cannot have container_name and ports defined in docker-compose.yml, as it's not possible to run multiple containers with the same name or port mapping.

How does it work?

Docker Rollout will spin up a new instance of the app (that’s why we don’t use a container_name) while keeping the old one intact. Once the new service is up and running, an empty, extensionless sentinel file called drain is added to the container’s temporary directory: /tmp/drain (this could also be an endpoint call).

This forces the health check to fail and signals Docker rollout to kill the old one and keep the new one alive.

Part of the magic is done via Nginx. It automatically load balances the upstream servers (there will be two app instances when making the switch), and once it detects one failing with a health check, it will stop serving it and redirect to the new one.
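The article's actual nginx.conf is mounted from ./nginx/nginx.conf and not reproduced here; a minimal sketch of the relevant upstream idea, with hypothetical server names, domain, and certificate paths, could look like this:

```nginx
# Hypothetical sketch only. With Docker's embedded DNS, "app" can resolve
# to multiple app containers; nginx picks up the addresses when the
# config is (re)loaded and balances between them.
upstream app_backend {
    server app:8080 max_fails=3 fail_timeout=5s;
}

server {
    listen 443 ssl;
    server_name app.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/app.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://app_backend;
        # When the draining container starts failing, retry the next upstream.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```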

For other kinds of apps that aren’t placed behind a load balancer, like a Temporal.io worker (outside the scope of this tutorial), you’ll need to add a mechanism to drain the worker and let docker rollout make the change automatically.

Doppler

I stated that Doppler is a service responsible for handling all our environment variables. It’s straightforward to get started: You create an account with them, install the CLI, and add your environments to their service.

You'll see down below that Dex will ask for a bcrypt hash. These hashes contain dollar signs ($) which need to be escaped in bash. Don't escape them in Doppler. We will handle them using a file replacement mechanism instead.

This took me a solid 7 hours to get right.

CLI installation

You will need to install Doppler in the VPS or the server that you are currently using.

You can install it using the following command:

 curl -Ls --tlsv1.2 --proto "=https" --retry 3 https://cli.doppler.com/install.sh | sudo sh

Generate a Doppler Token to connect to the Doppler Service

Dex Configuration

Dex is a “Federated OpenID Connect Provider”. In other words, it acts as a middleman for connecting to identity providers (think: Sign In With Google, Sign In With GitHub). OpenID Connect is an authentication protocol based on OAuth 2.0 that allows apps and users to get user profile information through a REST-like API.

It is provided as a Docker image, and we will configure it using Docker Compose.

What I love about Dex is its simplicity. You only need a single YAML file that will hold the entire configuration. It will do wonders with a few lines of code.

Create a config.yaml file and place it in your VPS in ~/dex/config.yaml (Create the directory with mkdir ~/dex)

# dex/config.yaml - Configuration for Google Cloud Workload Identity Federation
issuer: $DEX_GCP_URL

storage:
  type: sqlite3
  config:
    file: /var/dex/dex.db

web:
  # Listen on HTTP, assuming a reverse proxy handles TLS termination.
  http: 0.0.0.0:5556

# Enable the password database to allow authentication with static passwords.
enablePasswordDB: true

# Define a static user for authentication.
# This user will be used to exchange a Dex ID token for a GCP token.
staticPasswords:
  - email: $DEX_GCP_STATIC_PASSWORD_EMAIL
    hash: $DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED
    username: $DEX_GCP_STATIC_PASSWORD_EMAIL
    userID: '722ba69a-3cba-4007-8a24-2611d4c4d5f9'

# The `staticClients` list contains OAuth2 clients that can connect to Dex.
staticClients:
  # This is the client for Google Cloud Workload Identity Federation.
  # The `id` MUST be the full resource name of the GCP Workload Identity Provider.
  # This value will be the `aud` (audience) claim in the OIDC token.
  - id: $DEX_GCP_STATIC_CLIENT_ID
    secret: $DEX_GCP_STATIC_CLIENT_SECRET
    name: 'Google Cloud Workload Identity Federation'
    # Redirect URIs are not used in the token-exchange flow.
    redirectURIs: []

# This section configures OAuth2 behavior.
oauth2:
  # Use the built-in password database as the connector for the password grant type.
  # This allows the static user defined above to authenticate.
  passwordConnector: local
  # By default, Dex supports the necessary grant types, including 'token-exchange'
  # and the response types 'code', 'token', and 'id_token'.
  # Explicitly defining them is not necessary unless you need to restrict them.
  skipApprovalScreen: true

If we replace it with our environments, we get:

# dex/config.yaml - Configuration for Google Cloud Workload Identity Federation
issuer: https://auth.yourdomain.com

storage:
  type: sqlite3
  config:
    file: /var/dex/dex.db

web:
  # Listen on HTTP, assuming a reverse proxy handles TLS termination.
  http: 0.0.0.0:5556

# Enable the password database to allow authentication with static passwords.
enablePasswordDB: true

# Define a static user for authentication.
# This user will be used to exchange a Dex ID token for a GCP token.
staticPasswords:
  - email: [email protected]
    hash: $2y$10$k2IZomh1UUyDlNrsuoNiFuNlOIn5Siw738AdFA6ukcMu07H0uGQ7K
    username: [email protected]
    userID: '722ba69a-3cba-4007-8a24-2611d4c4d5f9'

# The `staticClients` list contains OAuth2 clients that can connect to Dex.
staticClients:
  # This is the client for Google Cloud Workload Identity Federation.
  # The `id` MUST be the full resource name of the GCP Workload Identity Provider.
  # This value will be the `aud` (audience) claim in the OIDC token.
  - id: //iam.googleapis.com/projects/1016670781645/locations/global/workloadIdentityPools/hetzner-pool/providers/hetzner-provider
    secret: 28XhU2xcgQnusLRlG4nZlUZbFdn3lfof21jvvbTG970
    name: 'Google Cloud Workload Identity Federation'
    # Redirect URIs are not used in the token-exchange flow.
    redirectURIs: []

# This section configures OAuth2 behavior.
oauth2:
  # Use the built-in password database as the connector for the password grant type.
  # This allows the static user defined above to authenticate.
  passwordConnector: local
  # By default, Dex supports the necessary grant types, including 'token-exchange'
  # and the response types 'code', 'token', and 'id_token'.
  # Explicitly defining them is not necessary unless you need to restrict them.
  skipApprovalScreen: true

Here’s the breakdown:

The Issuer

issuer: $DEX_GCP_URL

It’s the final URL that you will authenticate against. Define a unique path or subdomain (you will need to update your DNS records to match this; we’ll see it later on).

In other words, it will become something like this:

issuer: https://auth.mydomain.com

The Storage

storage:
  type: sqlite3
  config:
    file: /var/dex/dex.db

Dex needs to store its state so it can handle refresh tokens, invalidate current JWTs, and more. Fortunately, we can keep it simple by using SQLite, which it will generate for us.

About TLS certificates

Dex isn’t handling the TLS configuration; Nginx is (you’ll see it in the config).

In other deployments, you can let Dex handle the certificates directly by pointing it to the cert files. However, that will be outside the scope of this tutorial.

The Web port

web:
  # Listen on HTTP, assuming a reverse proxy handles TLS termination.
  http: 0.0.0.0:5556

This is the port that gets exposed within the Docker container. We’ll see later on how we’ll map it out to our Nginx container.
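As a rough sketch of that wiring (the service names, images, and port mappings here are assumptions for illustration - check them against the repo’s actual docker-compose.yml), the relevant Compose pieces look like:

```yaml
# Sketch only - names and images are assumptions, not the repo's actual file.
services:
  dex:
    image: dexidp/dex:latest
    expose:
      - "5556"        # reachable as dex:5556 on the Compose network; not published to the host
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"     # nginx terminates TLS and proxies auth traffic to dex:5556
```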

Generating a static password:

enablePasswordDB: true

# Define a static user for authentication.
# This user will be used to exchange a Dex ID token for a GCP token.
staticPasswords:
  - email: $DEX_GCP_STATIC_PASSWORD_EMAIL
    hash: $DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED
    username: $DEX_GCP_STATIC_PASSWORD_EMAIL
    userID: '722ba69a-3cba-4007-8a24-2611d4c4d5f9'

# The `staticClients` list contains OAuth2 clients that can connect to Dex.
staticClients:
  # This is the client for Google Cloud Workload Identity Federation.
  # The `id` MUST be the full resource name of the GCP Workload Identity Provider.
  # This value will be the `aud` (audience) claim in the OIDC token.
  - id: $DEX_GCP_STATIC_CLIENT_ID
    secret: $DEX_GCP_STATIC_CLIENT_SECRET
    name: 'Google Cloud Workload Identity Federation'
    # Redirect URIs are not used in the token-exchange flow.
    redirectURIs: []

# This section configures OAuth2 behavior.
oauth2:
  # Use the built-in password database as the connector for the password grant type.
  # This allows the static user defined above to authenticate.
  passwordConnector: local
  # By default, Dex supports the necessary grant types, including 'token-exchange'
  # and the response types 'code', 'token', and 'id_token'.
  # Explicitly defining them is not necessary unless you need to restrict them.
  skipApprovalScreen: true

This will allow us to provide a username and password to authenticate against Dex, which will generate the JWT for us to authenticate against GCP.

This setup is also used in other scenarios to authenticate against other providers on your behalf. That means if I provide the username and password, and I have an OAuth connection with GitHub, Microsoft, or Google, Dex will connect to those services on our behalf and return a token for them. But again, this is outside the scope of this tutorial.

staticClients:

  • id - the magical part:

This is critical. It specifies the aud (audience) claim of the generated JWT, and Google will compare against it to validate the request! I cannot tell you how much time I spent getting this right.

We need to provide a full URI to our hetzner-provider:

//iam.googleapis.com/projects/1016670781645/locations/global/workloadIdentityPools/hetzner-pool/providers/hetzner-provider

Note that it starts with // (two slashes, no scheme).
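If the token exchange fails with an audience mismatch, decoding the token locally is the quickest sanity check. Here’s a small, self-contained sketch showing how to inspect the aud claim; the token below is a dummy built inside the script purely for illustration - a real one comes from Dex’s /token endpoint:

```shell
#!/bin/sh
# Sketch: inspect the `aud` claim of an OIDC token before sending it to GCP.
# The token here is a stand-in built locally; a real token comes from Dex.

# base64url-encode without padding (what JWTs use)
b64url() { printf '%s' "$1" | base64 | tr '+/' '-_' | tr -d '=\n'; }

AUD='//iam.googleapis.com/projects/1016670781645/locations/global/workloadIdentityPools/hetzner-pool/providers/hetzner-provider'

# Build a dummy unsigned JWT (header.payload.signature) with that audience.
header=$(b64url '{"alg":"none"}')
payload=$(b64url "{\"aud\":\"$AUD\"}")
jwt="$header.$payload."

# Decode the payload: undo the URL-safe alphabet, restore base64 padding.
p=$(printf '%s' "$jwt" | cut -d. -f2 | tr '_-' '/+')
pad=$(( (4 - ${#p} % 4) % 4 ))
while [ $pad -gt 0 ]; do p="$p="; pad=$((pad - 1)); done
decoded=$(printf '%s' "$p" | base64 -d)

echo "$decoded"
```

The aud printed here must match the staticClients id in Dex and the full resource name of your Workload Identity Provider, character for character.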

  • password:

    Use your password manager and generate a strong password. Avoid using any dollar signs, so escaping isn’t an issue. This will be used to protect your API from anyone accessing your service!

    a-strong-password-generated-using-a-password-generator
    

You will pass this password when sending an HTTP payload to Dex later on!

You pass Dex the bcrypt hash of the password above in the staticPasswords.hash field.

staticPasswords:

  • email: the [email protected] → This can be anything. Since this will not be propagated upward in the OpenID chain (i.e., used by another provider), it is only used to authenticate against the service. So you can provide something non-existent

  • userID: a random UUID. You can use ULIDs or anything. This field isn’t used in our case.

  • hash: the bcrypt hash of the password, which you generate using:

    htpasswd -bnBC 10 "" <-the-password-for-reference->
    

E.g:

htpasswd -bnBC 10 "" a-strong-password-generated-using-a-password-generator

It generates:

$2y$10$s4ETxkQQeuJu4Kp58O607u.wiqlHnkyV8LkFK1g4cMKGFU959uusq

(Note that it starts with a dollar sign)

A bcrypt hash contains three dollar signs, as shown above. These must be escaped correctly if you inject the hash through bash/shell environment variables; as mentioned in the Dex section, this quickly becomes problematic. I’ll show you how to overcome this using envsubst.
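Here’s a minimal demonstration of the problem, using a shortened stand-in ($2y$10$abc is not a real bcrypt hash):

```shell
#!/bin/sh
# Demonstration: why bcrypt hashes and shell interpolation don't mix.
# '$2y$10$abc' is a shortened stand-in for a real bcrypt hash.

single='$2y$10$abc'    # single quotes: the hash survives verbatim
double="$2y$10$abc"    # double quotes: $2, $1, $abc expand (usually to nothing)

printf 'single-quoted: %s\n' "$single"
printf 'double-quoted: %s\n' "$double"
```

In the double-quoted version, the shell silently expands $2, $1, and $abc, mangling the hash with no error, which is exactly the failure mode that makes env-var injection of bcrypt hashes so painful.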

Creating an entry point for Dex:

#!/bin/sh

set -e

echo "🔧 Dex Custom Entrypoint - Starting initialization..."

# Export all current environment variables
echo "📝 Exporting current environment variables..."
export $(printenv | grep -v '^_' | cut -d= -f1)

# Check if doppler is configured
if [ -z "$DOPPLER_TOKEN" ]; then
    echo "❌ Error: DOPPLER_TOKEN environment variable is required"
    exit 1
fi

echo "🔐 Fetching DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED from Doppler..."

# Get the bcrypt hashed password from doppler
DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED=$(doppler secrets get DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED --plain)

if [ -z "$DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED" ]; then
    echo "❌ Error: Failed to retrieve DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED from Doppler"
    exit 1
fi

# Export the retrieved password hash
export DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED

# Apply environment variable substitution to config
echo "🔄 Applying environment variable substitution..."
envsubst < /etc/dex/config.yaml > /tmp/config.yaml


# Cleanup function
cleanup() {
    echo "🧹 Cleaning up temporary files..."
    if [ -f "/tmp/config.yaml" ]; then
        rm -f /tmp/config.yaml
        echo "✅ Removed /tmp/config.yaml"
    fi
    echo "👋 Dex container shutdown complete"
    exit 0
}

# Set up signal handlers for graceful shutdown
trap cleanup SIGTERM SIGINT SIGQUIT

echo "🚀 Starting Dex server..."

# Start dex server with processed config in background
dex serve /tmp/config.yaml &
DEX_PID=$!

# Wait for dex process to finish
wait $DEX_PID

This file serves three purposes:

  1. Load all the environment variables from the Doppler VPS project.

  2. Fetch the bcrypt password hash in plain format, so Doppler does not add any escaping:

    DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED=$(doppler secrets get DEX_GCP_STATIC_PASSWORD_SECRET_BCRYPT_HASHED --plain)
    
  3. Use envsubst to generate a temporary config.yaml with all of our environment variables substituted, which we serve via dex serve /tmp/config.yaml. This was the only mechanism I found that worked. We remove this temporary file in the cleanup function.

When using an .env file, this script isn’t needed. You can boot Dex directly. However, downloading the hashed bcrypt from Doppler became a mess. This solution nailed it.

Nginx:

I don't think it needs any introduction. NGINX is an HTTP web server, reverse proxy, content cache, load balancer, TCP/UDP proxy server, and mail proxy server. It's one of the backbones of the entire web.

We will use it as the main entry point for our main Docker application and our DEX service. It will also be responsible for handling the TLS connections for us.

However, this configuration is somewhat challenging.

Nginx will fail to start unless every upstream it references is resolvable, i.e., all the services are up and running. Below, I will show you an automation script that leverages two Nginx configurations to achieve zero downtime.

We use two configurations because our main application image lives in Artifact Registry, and pulling it requires Dex to be up for authentication.

To overcome this, we:

  1. Load a Dex-only configuration first.

  2. Authenticate Docker against it, generating an OIDC token that authenticates in GCP.

  3. Pull the image from Artifact Registry.

  4. Provide a second configuration that includes our app, and we reload it in real-time using nginx -s reload

    nginx.conf.init (Initial Dex only config):

    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    
    events {
      worker_connections 1024;
    }
    
    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
    
      # Basic server for returning a maintenance page
      server {
        listen 443 ssl http2 default_server;
        server_name _; # Catch-all
    
        ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
    
        # Return a 503 for all requests to the main app
        location / {
          default_type application/json;
          return 503 '{"status": "initializing", "message": "Application is starting up, please try again shortly."}';
        }
      }
    
      # HTTPS server for auth.yourdomain.com (dex)
      server {
        listen 443 ssl http2;
        server_name auth.yourdomain.com;
    
        ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
    
        location / {
          proxy_pass http://dex:5556;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_redirect off;
        }
    
        location /healthz {
          access_log off;
          proxy_pass http://dex:5556/healthz;
        }
      }
    
      # HTTP server for redirects and ACME challenges
      server {
        listen 80;
        server_name www.yourdomain.com yourdomain.com auth.yourdomain.com;
    
        location /.well-known/acme-challenge/ {
          root /var/www/certbot;
        }
    
        location / {
          return 301 https://$host$request_uri;
        }
      }
    }
    
    

    nginx.conf.prod (Full config)

    # https://www.digitalocean.com/community/tutorials/how-to-upgrade-nginx-in-place-without-dropping-client-connections
    # https://www.f5.com/company/blog/nginx/avoiding-top-10-nginx-configuration-mistakes
    user nginx;
    # Number of CPU in the server - cat /proc/cpuinfo
    worker_processes auto;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    
    events {
      worker_connections 4096;
      use epoll;
      multi_accept on;
    }
    
    http {
      include /etc/nginx/mime.types;
      default_type application/octet-stream;
    
      # Performance optimizations
      sendfile on;
      tcp_nopush on;
      tcp_nodelay on;
      keepalive_timeout 30;
      keepalive_requests 1000;
      server_tokens off;
    
      # Buffer optimizations
      client_body_buffer_size 128k;
      client_max_body_size 50m;
      client_header_buffer_size 1k;
      large_client_header_buffers 4 4k;
      output_buffers 1 32k;
      postpone_output 1460;
    
      # Gzip compression
      gzip on;
      gzip_vary on;
      gzip_min_length 1024;
      gzip_proxied any;
      gzip_comp_level 6;
      gzip_types
      text/plain
      text/css
      text/xml
      text/javascript
      application/json
      application/javascript
      application/xml+rss
      application/atom+xml
      image/svg+xml;
    
    
      # Rate limiting
      limit_req_zone $binary_remote_addr zone=auth:10m rate=10r/m;
      limit_req_zone $binary_remote_addr zone=api:10m rate=100r/m;
    
      # Dynamic upstream for app service (supports docker rollout)
      upstream app_server {
        # Use Docker's internal DNS for service discovery
        # This allows nginx to discover all containers for the 'app' service
        server app:8080 max_fails=3 fail_timeout=5s;
    
        # Configure keepalive connections
        # Idle keepalive connections cached per worker (roughly double the servers in the upstream block)
        keepalive 4;
        keepalive_requests 1000;
        keepalive_timeout 60s;
      }
    
      # Dynamic upstream for dex service (supports docker rollout)
      upstream dex_server {
        server dex:5556;
      }
    
      # HTTP server for redirects and ACME challenges
      server {
        listen 80;
        server_name www.yourdomain.com yourdomain.com auth.yourdomain.com;
    
        location /.well-known/acme-challenge/ {
          root /var/www/certbot;
        }
    
        location / {
          return 301 https://$host$request_uri;
        }
      }
    
      # HTTPS server for yourdomain.com (main app)
      server {
        listen 443 ssl http2;
        server_name www.yourdomain.com yourdomain.com;
    
        ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_session_cache shared:SSL:50m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
        ssl_buffer_size 8k;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    
        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
        limit_req zone=api burst=20 nodelay;
    
        # Handle static assets specifically
        location ~* \.(js|mjs|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot|map)$ {
          proxy_pass http://app_server;
          proxy_http_version 1.1;
          proxy_set_header Connection "";
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
    
          # Container draining support - retry on different upstream if current fails
          proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
          proxy_next_upstream_tries 2;
          proxy_next_upstream_timeout 10s;
    
          # Optimized timeouts for faster failover during rollouts
          proxy_connect_timeout 3s;
          proxy_send_timeout 10s;
          proxy_read_timeout 10s;
    
          # Prevent caching of broken responses
          proxy_buffering off;
    
          # Static asset headers
          expires 1y;
          add_header Cache-Control "public, max-age=31536000, immutable";
          add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
        }
    
        location / {
          proxy_pass http://app_server;
          proxy_http_version 1.1;
          proxy_set_header Connection "";
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
    
          # Container draining support - retry on different upstream if current fails
          proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
          proxy_next_upstream_tries 2;
          proxy_next_upstream_timeout 15s;
    
          # Optimized timeouts for faster failover during rollouts
          proxy_connect_timeout 5s;
          proxy_send_timeout 30s;
          proxy_read_timeout 30s;
    
          # Proxy buffering
          proxy_buffering on;
          proxy_buffer_size 64k;
          proxy_buffers 4 64k;
          proxy_busy_buffers_size 64k;
        }
    
        # Health check endpoints
        location /health {
          access_log off;
          proxy_pass http://app_server/health;
    
          # Fast failover for health checks
          proxy_connect_timeout 2s;
          proxy_send_timeout 5s;
          proxy_read_timeout 5s;
          proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
          proxy_next_upstream_tries 2;
          proxy_next_upstream_timeout 5s;
        }
    
      }
    
      # HTTPS server for auth.yourdomain.com (dex)
      server {
        listen 443 ssl http2;
        server_name auth.yourdomain.com;
    
        ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_session_cache shared:SSL:50m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;
        ssl_buffer_size 8k;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    
        add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
        limit_req zone=auth burst=5 nodelay;
    
        location / {
          proxy_pass http://dex_server;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_redirect off;
    
          # Container draining support - retry on different upstream if current fails
          proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
          proxy_next_upstream_tries 2;
          proxy_next_upstream_timeout 10s;
    
          # Optimized timeouts for faster failover during rollouts
          proxy_connect_timeout 5s;
          proxy_send_timeout 15s;
          proxy_read_timeout 15s;
        }
    
        # Health check for Dex
        location /healthz {
          access_log off;
          proxy_pass http://dex_server/healthz;
    
          # Fast failover for health checks
          proxy_connect_timeout 2s;
          proxy_send_timeout 5s;
          proxy_read_timeout 5s;
          proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
          proxy_next_upstream_tries 2;
          proxy_next_upstream_timeout 5s;
        }
      }
    }
    
    
    

Bonus!

F5 is the company behind NGINX, and they have a fantastic cookbook that helps you better understand how to configure it. It works for both the free and the paid versions.

https://www.f5.com/content/dam/f5/corp/global/pdf/ebooks/NGINX_Cookbook-final.pdf

Configuring Docker to connect to Artifact Registry using Workload Identity Federation and OIDC.

We now need to tell Docker to connect to GCP via Workload Identity Federation. We use a Credential Helper for that. We are going to configure this manually.

This will be composed of 3 files:

  • fetch-id-token.sh (This authenticates against DEX and generates a JWT)
  • fetch-google-oidc-token.sh (It takes the JWT from fetch-id-token.sh and generates an OIDC token from Google Cloud)
  • docker-credential-gcr (This extensionless file orchestrates the two scripts above and sends the OIDC token from fetch-google-oidc-token.sh to GCP in order to impersonate the service account)

Don’t blindly copy and paste these files, as many of them have hardcoded values like the current non-root user in the VPS: localuser

All of these files need to have execution access:

chmod +x ~/scripts/fetch-id-token.sh
chmod +x ~/scripts/fetch-google-oidc-token.sh
chmod +x /usr/local/bin/docker-credential-gcr

Configuring the credential helper within Docker:

Navigate to ~/.docker/config.json or create the file if it doesn’t exist, and add the following:

{
    "credHelpers": {
        "us-east1-docker.pkg.dev": "gcr"
    }
}

This tells Docker to use the gcr credential helper to authenticate whenever it pulls or pushes images to that registry.

Docker then executes a binary named docker-credential-<helper> (extensionless, e.g. docker-credential-gcr).

This is a naming convention used by Docker; the binary is located via $PATH when needed.
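To make the contract concrete, here’s a toy helper (placeholder values, not the real one below) that answers the get subcommand the way Docker expects: the registry host arrives on stdin, and a {"Username", "Secret"} JSON object goes back on stdout:

```shell
#!/bin/sh
# Toy credential helper illustrating the protocol Docker uses.
# Docker runs `docker-credential-<name> get`, writes the registry host
# on stdin, and reads a {"Username","Secret"} JSON object from stdout.
# All values here are placeholders, not real credentials.

helper() {
  cmd="$1"
  read -r server_url            # Docker sends the registry host on stdin
  case "$cmd" in
    get)
      printf '{"Username":"oauth2accesstoken","Secret":"dummy-token-for-%s"}\n' "$server_url"
      ;;
    *)
      echo "unsupported command: $cmd" >&2
      return 1
      ;;
  esac
}

# Simulate Docker invoking the helper for Artifact Registry:
echo "us-east1-docker.pkg.dev" | helper get
```

The real helper below follows this exact shape, except that the Secret is a freshly minted Google Cloud access token rather than a dummy string.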

The docker-credential-gcr (/usr/local/bin/docker-credential-gcr):

#! /bin/bash
#
# This file is extensionless.
# Update all the /home/localuser/scripts directory with your own!
# A custom Docker credential helper for Workload Identity Federation.
# This script performs the full OIDC-to-Google-Access-Token exchange.
#
# Prerequisites:
# 1. `curl` and `jq` must be installed.
# 2. The custom OIDC token must be available in a file.
# 3. The script must be executable (`chmod +x /path/to/this/script.sh`).
# 4. This script is copied to  /usr/local/bin/docker-credential-gcr (This is a file)

# set -euo pipefail
PROJECT_ID="spiritual-slate-445211-i1"


# Add logging to a file for debugging
exec 2>> /tmp/docker-credential-gcr.log
echo "$(date): Starting docker-credential-gcr" >> /tmp/docker-credential-gcr.log

set -a
source /home/localuser/.env
set +a

# Step 1: Get the ID Token
echo "$(date): Getting ID token" >> /tmp/docker-credential-gcr.log
if ! SUBJECT_TOKEN=$(/home/localuser/scripts/fetch-id-token.sh 2>> /tmp/docker-credential-gcr.log); then
    echo "$(date): ERROR - Failed to get ID token" >> /tmp/docker-credential-gcr.log
    exit 1
fi

echo "$(date): Got ID token: ${SUBJECT_TOKEN:0:20}..." >> /tmp/docker-credential-gcr.log

# Step 2: Get federated token
echo "$(date): Getting federated token" >> /tmp/docker-credential-gcr.log
if ! FEDERATED_ACCESS_TOKEN=$(/home/localuser/scripts/fetch-google-oidc-token.sh "$SUBJECT_TOKEN" 2>> /tmp/docker-credential-gcr.log); then
    echo "$(date): ERROR - Failed to get federated token" >> /tmp/docker-credential-gcr.log
    exit 1
fi

if [ -z "${FEDERATED_ACCESS_TOKEN}" ] || [ "${FEDERATED_ACCESS_TOKEN}" == "null" ]; then
  echo "Error: Failed to get federated token from STS." >&2
  echo "{}"
  exit 1
fi

# Check required environment variable for service account
if [[ -z "${GCP_SERVICE_ACCOUNT:-}" ]]; then
    echo "$(date): ERROR - GCP_SERVICE_ACCOUNT not set" >> /tmp/docker-credential-gcr.log
    exit 1
fi

# Step 3: Use the federated token to impersonate the Service Account.
# This calls the IAM Credentials API to get a final, usable Google Cloud access token.
# GCP_SERVICE_ACCOUNT must be the full service-account email,
# e.g. hetzner@${PROJECT_ID}.iam.gserviceaccount.com
IAM_API_URL="https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${GCP_SERVICE_ACCOUNT}:generateAccessToken"
IAM_PAYLOAD='{"scope": ["https://www.googleapis.com/auth/cloud-platform"]}'

IAM_RESPONSE=$(curl -s -X POST "${IAM_API_URL}" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${FEDERATED_ACCESS_TOKEN}" \
  -d "${IAM_PAYLOAD}")


GOOGLE_ACCESS_TOKEN=$(echo "${IAM_RESPONSE}" | jq -r .accessToken)

if [ "${GOOGLE_ACCESS_TOKEN}" == "null" ]; then
  echo "Error: Failed to get Google access token via impersonation. Response: ${IAM_RESPONSE}" >&2
  echo "{}"
  exit 1
fi

# Step 4: Output the credentials in the JSON format that Docker expects.
# The "Username" must be "oauth2accesstoken".
# The "Secret" is the final Google Cloud access token.
cat <<EOF
{
  "Username": "oauth2accesstoken",
  "Secret": "${GOOGLE_ACCESS_TOKEN}"
}
EOF

This file orchestrates the following two scripts. It also logs to /tmp/docker-credential-gcr.log to aid debugging when things fail.

fetch-id-token.sh (~/scripts/fetch-id-token.sh - create the scripts directory):

#!/usr/bin/env bash
# set -euo pipefail

# Load environment variables from .env and Doppler
set -a
source .env 2>/dev/null || true
set +a

# Check if any required variables are empty
if [[ -z "$DEX_GCP_URL" || -z "$DEX_GCP_STATIC_CLIENT_ID" || -z "$DEX_GCP_STATIC_CLIENT_SECRET" || -z "$DEX_GCP_STATIC_PASSWORD_EMAIL" || -z "$DEX_GCP_STATIC_PASSWORD_SECRET" ]]; then
    echo "Error: One or more required DEX_GCP_* environment variables are empty!" >&2
    echo "Make sure DOPPLER_TOKEN is set and you have access to the required secrets." >&2
    exit 1
fi

BASIC_AUTH_HEADER=$(echo -n "$DEX_GCP_STATIC_CLIENT_ID:$DEX_GCP_STATIC_CLIENT_SECRET" | base64 -w 0)

# Request OIDC token from Dex using the correct password grant flow
response=$(curl --silent --location "$DEX_GCP_URL/token" \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --header "Authorization: Basic $BASIC_AUTH_HEADER" \
  --data-urlencode 'grant_type=password' \
  --data-urlencode "username=$DEX_GCP_STATIC_PASSWORD_EMAIL" \
  --data-urlencode "password=$DEX_GCP_STATIC_PASSWORD_SECRET" \
  --data-urlencode 'scope=openid email profile groups' 2>&1)

curl_exit_code=$?
if [[ $curl_exit_code -ne 0 ]]; then
    echo "Error: Failed to call Dex API (curl exit code: $curl_exit_code)" >&2
    echo "Response: $response" >&2
    exit 1
fi

# Check if response contains an error
if echo "$response" | grep -q '"error"'; then
    echo "Error: Authentication failed" >&2
    echo "Response: $response" >&2
    exit 1
fi

# Extract the id_token from the JSON response
id_token=$(echo "$response" | jq -r '.id_token')

# Validate we got a token
if [[ -z "$id_token" || "$id_token" == "null" ]]; then
    echo "Error: Failed to get id_token from Dex. Response: $response" >&2
    exit 1
fi

# Output the token
echo "$id_token"

fetch-google-oidc-token.sh (~/scripts/fetch-google-oidc-token.sh)

#!/usr/bin/env bash
set -euo pipefail

# Check if ID_TOKEN is provided as argument
if [[ $# -eq 0 ]]; then
  echo "Error: ID_TOKEN parameter is required" >&2
  echo "Usage: $0 <ID_TOKEN>" >&2
  exit 1
fi

ID_TOKEN="$1"

# Validate ID_TOKEN is not empty
if [[ -z "$ID_TOKEN" ]]; then
  echo "Error: ID_TOKEN parameter cannot be empty" >&2
  exit 1
fi


# Check required environment variables
: "${DEX_GCP_STATIC_CLIENT_ID:?DEX_GCP_STATIC_CLIENT_ID environment variable not set}"

# Build the audience URL
AUDIENCE="$DEX_GCP_STATIC_CLIENT_ID"

# Exchange ID_TOKEN for ACCESS_TOKEN using Google STS
echo "Calling Google STS with audience: $AUDIENCE" >&2
response=$(curl --silent --location 'https://sts.googleapis.com/v1/token' \
  --header 'Content-Type: application/json' \
  --data "{
    \"grant_type\": \"urn:ietf:params:oauth:grant-type:token-exchange\",
    \"subject_token_type\": \"urn:ietf:params:oauth:token-type:id_token\",
    \"subject_token\": \"$ID_TOKEN\",
    \"audience\": \"$AUDIENCE\",
    \"requested_token_type\": \"urn:ietf:params:oauth:token-type:access_token\",
    \"scope\": \"https://www.googleapis.com/auth/cloud-platform\"
  }")

curl_exit_code=$?
# Check if curl succeeded
if [[ $curl_exit_code -ne 0 ]]; then
  echo "Error: Failed to call Google STS API (curl exit code: $curl_exit_code)" >&2
  echo "STS Response: $response" >&2
  exit 1
fi


# Extract the access_token from the JSON response
access_token=$(echo "$response" | jq -r '.access_token')

# Validate we got a token
if [[ -z "$access_token" || "$access_token" == "null" ]]; then
  echo "Error: Failed to get access_token from Google STS. Response: $response" >&2
  exit 1
fi

# Output the access token
echo "$access_token"
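The script above interpolates $ID_TOKEN straight into a JSON string, which works because JWTs are base64url and contain no quotes. If you prefer to avoid manual interpolation altogether, a jq-built body (a sketch with placeholder values; same fields as the script) is more robust:

```shell
#!/bin/sh
# Sketch: build the STS token-exchange body with jq instead of manual
# string interpolation. ID_TOKEN and AUDIENCE are placeholder values here.
ID_TOKEN="dummy.jwt.token"
AUDIENCE="//iam.googleapis.com/projects/1016670781645/locations/global/workloadIdentityPools/hetzner-pool/providers/hetzner-provider"

sts_body=$(jq -n --arg tok "$ID_TOKEN" --arg aud "$AUDIENCE" '{
  grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
  subject_token_type: "urn:ietf:params:oauth:token-type:id_token",
  subject_token: $tok,
  audience: $aud,
  requested_token_type: "urn:ietf:params:oauth:token-type:access_token",
  scope: "https://www.googleapis.com/auth/cloud-platform"
}')

printf '%s\n' "$sts_body"

# The body would then be sent the same way the script does:
#   curl --silent 'https://sts.googleapis.com/v1/token' \
#     --header 'Content-Type: application/json' --data "$sts_body"
```

jq guarantees the payload is valid JSON even if a value ever contains characters that would break hand-rolled quoting.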

What these files do in summary:

  1. Fetch a JWT from Dex.
  2. Send the JWT to Google to generate an OIDC Token.
  3. Use the OIDC Token to impersonate the service account.
  4. Return the token as the Secret (password) field of the credential helper’s JSON response, which Docker uses to authenticate.

About these files:

  • The docker-credential-gcr (Note that this file is extensionless) must be placed in: /usr/local/bin/
    • The full path should be: /usr/local/bin/docker-credential-gcr
    • This makes it accessible for Docker to authenticate against Artifact Registry
  • In my setup, I have a non-sudo user called localuser. I’ve added the fetch-id-token.sh and fetch-google-oidc-token.sh inside the localuser’s home directory:
    • /home/localuser/scripts/
    • The scripts dir is one I created myself.
  • You must update docker-credential-gcr to match your directories!

Putting everything together - The smart-start.sh

As I’ve mentioned, orchestrating this is a bit tricky. Since everything is hosted on a single server behind a single Nginx process, we need to (recap):

  1. Generate the TLS certificate
  2. Load Nginx with Dex.
  3. Let Docker authenticate against Dex and GCP - Pull the image from Artifact Registry
  4. Reload Nginx config to serve the app.

For this, we’ve created a helper smart-start.sh which also requires executable access.

You can place this in /home/localuser or in the directory containing the docker-compose.yml file:

smart-start.sh

#!/usr/bin/env bash
#
#  smart-start.sh – one-shot bootstrap for the Hetzner VPS
##
#  Usage examples:
#    DOPPLER_TOKEN="dp.st.prd.xyz" ./smart-start.sh              # Full deployment
#    DOPPLER_TOKEN="dp.st.prd.xyz" SKIP_CERT=1 ./smart-start.sh  # Skip certificates

set -euo pipefail

###############################################################################
# Helpers
###############################################################################
c_blue='\033[0;34m'; c_green='\033[0;32m'; c_yellow='\033[1;33m'; c_red='\033[0;31m'; c_nc='\033[0m'
log(){ printf "${c_blue}ℹ︎  %s${c_nc}\n" "$*"; }
ok (){ printf "${c_green}✔  %s${c_nc}\n" "$*"; }
warn(){ printf "${c_yellow}⚠  %s${c_nc}\n" "$*"; }
die (){ printf "${c_red}✖  %s${c_nc}\n" "$*" >&2; exit 1; }

###############################################################################
# Environment & sanity
###############################################################################

log "Pre-flight checks"

[[ $EUID -eq 0 ]] && die "Run as an unprivileged user with docker group membership."
command -v docker          >/dev/null || die "Docker missing."
command -v doppler         >/dev/null || die "Doppler CLI missing."
docker compose version     >/dev/null || die "Docker Compose plugin missing."
docker ps                  >/dev/null || die "User cannot talk to the Docker socket."

# Validate DOPPLER_TOKEN
[[ -z "${DOPPLER_TOKEN:-}" ]] && die "DOPPLER_TOKEN environment variable is required."


# Helper function to run docker commands with doppler
docker_compose() {
  DOPPLER_TOKEN="$DOPPLER_TOKEN" doppler run -- docker compose "$@"
}
docker_rollout() {
  DOPPLER_TOKEN="$DOPPLER_TOKEN" doppler run -- docker rollout "$@"
}

# # Helper function to check if service is running
service_running() {
  local service="$1"
  docker_compose ps --services --filter "status=running" | grep -q "^${service}$"
}

# Helper function to wait for service health
wait_for_service_health() {
  local service="$1"
  local timeout="${2:-60}"
  local count=0

  log "Waiting for $service health check (timeout: ${timeout}s)..."

  while [ $count -lt $timeout ]; do
    if docker_compose ps "$service" --format "table {{.Health}}" 2>/dev/null | grep -q "healthy"; then
      ok "$service is healthy"
      return 0
    fi

    # If service is not running, try to check if it's still starting
    if ! service_running "$service"; then
      warn "$service is not running, checking if it exited..."
      docker_compose logs --tail=10 "$service"
      return 1
    fi

    sleep 2
    count=$((count + 2))
  done

  warn "$service health check timeout after ${timeout}s"
  docker_compose logs --tail=20 "$service"
  return 1
}

ok  "Environment looks good."

###############################################################################
# TLS (Let's Encrypt via dns-01, can be skipped)
###############################################################################
if [[ ${SKIP_CERT:-0} -eq 0 ]]; then
  log "Phase 1  – TLS certificates"
  cert_live="certbot/conf/live/mydomain.com/fullchain.pem"
  if [[ ! -f $cert_live ]] || ! openssl x509 -checkend 604800 -noout -in "$cert_live" >/dev/null 2>&1
  then
      log "Obtaining / renewing wildcard cert via Cloudflare"
      [[ -f /run/secrets/cloudflare.ini ]] || die "Cloudflare credentials missing at /run/secrets/cloudflare.ini (chmod 600)."
      mkdir -p certbot/{conf,logs,www}
      docker run --rm \
        -v "$PWD/certbot/conf:/etc/letsencrypt" \
        -v "$PWD/certbot/logs:/var/log/letsencrypt" \
        -v /run/secrets/cloudflare.ini:/cloudflare.ini:ro \
        certbot/dns-cloudflare certonly \
        --dns-cloudflare \
        --dns-cloudflare-credentials /cloudflare.ini \
        --dns-cloudflare-propagation-seconds 300 \
        --email "${CERTBOT_EMAIL:-[email protected]}" --agree-tos --no-eff-email \
        -d mydomain.com -d "*.mydomain.com" \
        --non-interactive --rsa-key-size 4096 # -d "*.mydomain.com" requests a wildcard cert
  fi
  ok "TLS certificate ready."
else
  warn "SKIP_CERT=1 → skipping Let's Encrypt"
fi


###############################################################################
# Bring up auth services (nginx + dex)
###############################################################################
log "Phase 2 – Bring up auth services (nginx + dex)"

  # Use the initial maintenance config first
  log "Switching to initial NGINX config..."
  cp nginx/nginx.conf.init nginx/nginx.conf

  # Start dex first
  log "Starting dex service..."
  docker_compose up -d dex

  if ! wait_for_service_health "dex" 60; then
    die "Dex failed to become healthy"
  fi

 # Start nginx with the init config or reload if already running
if service_running "nginx"; then
    log "Nginx already running, reloading with maintenance config..."
    if ! docker_compose exec nginx nginx -t -c /etc/nginx/nginx.conf; then
        die "New nginx config is invalid"
    fi
    if ! docker_compose exec nginx nginx -s reload; then
        die "Nginx reload failed - config may be invalid"
    fi
else
    # Start nginx normally
    docker_compose up -d nginx
fi

  # Give nginx a moment to start and verify it's running
  sleep 5
  if ! service_running "nginx"; then
    docker_compose logs nginx
    die "Nginx failed to start"
  fi

  # Test auth endpoint
  log "Testing auth endpoint availability..."
  for i in {1..30}; do
    if curl -fs "https://auth.mydomain.com/healthz" >/dev/null 2>&1 || \
       curl -fs "$DEX_GCP_URL/healthz" >/dev/null 2>&1; then
      ok "Auth endpoint is accessible"
      break
    fi
    sleep 2
    [[ $i -eq 30 ]] && warn "Auth endpoint not yet accessible (may still be starting)"
  done

###############################################################################
# Pull and start app services
###############################################################################
log "Phase 3 – Pull and start app service"

  log "Pulling app image..."
  if ! docker_compose pull app; then
    warn "App image pull failed (check WIF setup) – continuing with cached image if available."
  fi

  log "Starting app service..."
  docker_rollout app

  if ! wait_for_service_health "app" 180; then
    warn "App service health check failed. The app may not be accessible."
    # Even if health check fails, we proceed to switch nginx config
  fi

  # Switch to the final production nginx config
  log "Switching to production NGINX config..."
  cp nginx/nginx.conf.prod nginx/nginx.conf

  # Reload nginx to apply the new configuration
  log "Reloading nginx with production config..."
  if docker_compose exec nginx nginx -s reload; then
    ok "Nginx reloaded successfully with production config."
  else
    warn "Nginx reload failed. Check logs."
    docker_compose logs nginx
  fi

# Spin up certbot manager to listen for certbot renewals
log "Phase 4 – Starting certbot manager"
docker_compose up -d certbot

###############################################################################
# Health probes
###############################################################################
log "Comprehensive health probes"

  # Internal service health checks
  declare -A internal_services=(
    ["dex"]="dex"
    ["app"]="app"
  )

  for name in "${!internal_services[@]}"; do
    service="${internal_services[$name]}"
    if service_running "$service"; then
      if docker_compose ps "$service" --format "table {{.Health}}" 2>/dev/null | grep -q "healthy"; then
        ok "Internal $name service ✓"
      else
        warn "Internal $name service ✗ (not healthy)"
      fi
    else
      warn "Internal $name service ✗ (not running)"
    fi
  done

  # External endpoint health checks
  declare -A external_probes=(
    ["Auth (HTTPS)"]="curl -fs --connect-timeout 10 https://auth.alertdown.ai/healthz"
    ["App (HTTPS)"]="curl -fs --connect-timeout 10 https://alertdown.ai/health"
    ["App (HTTPS/nginx-health)"]="curl -fs --connect-timeout 10 https://alertdown.ai/nginx-health"
    ["HTTP redirect"]="curl -fs --connect-timeout 5 http://alertdown.ai/ | grep -q '301'"
  )

  for name in "${!external_probes[@]}"; do
    if timeout 15 bash -c "${external_probes[$name]}" >/dev/null 2>&1; then
      ok "External $name ✓"
    else
      warn "External $name ✗"
    fi
  done

  # Service status summary
  echo
  log "Service Status Summary:"
  docker_compose ps --format "table {{.Service}}\t{{.Status}}\t{{.Health}}"

  echo -e "\n${c_green}🚀  AlertDown stack deployment complete:"
  echo -e "${c_green}   Auth: https://auth.alertdown.ai${c_nc}"
  echo -e "${c_green}   App:  https://alertdown.ai${c_nc}"

  if ! service_running "app" || ! docker_compose ps "app" --format "table {{.Health}}" 2>/dev/null | grep -q "healthy"; then
    echo -e "${c_yellow}   Note: App service may still be starting up${c_nc}"
  fi

Running the script:

To run this, you execute:

DOPPLER_TOKEN=<insert-doppler-vps-token-here> bash smart-start.sh

E.g.:

DOPPLER_TOKEN=dp.stasdasdsad22OmPSi bash smart-start.sh

Breaking the script apart:

#!/usr/bin/env bash
#
#  smart-start.sh – one-shot bootstrap for the Hetzner VPS
##
#  Usage examples:
#    DOPPLER_TOKEN="dp.st.prd.xyz" ./smart-start.sh              # Full deployment
#    DOPPLER_TOKEN="dp.st.prd.xyz" SKIP_CERT=1 ./smart-start.sh  # Skip certificates

set -euo pipefail

Selects bash, shows examples, and enables strict mode:

  • -e exit on error, -u fail on unset vars, -o pipefail catch pipe failures.

1) helper: colors + logger functions

c_blue='\033[0;34m'; c_green='\033[0;32m'; c_yellow='\033[1;33m'; c_red='\033[0;31m'; c_nc='\033[0m'
log(){ printf "${c_blue}ℹ︎  %s${c_nc}\n" "$*"; }
ok (){ printf "${c_green}✔  %s${c_nc}\n" "$*"; }
warn(){ printf "${c_yellow}⚠  %s${c_nc}\n" "$*"; }
die (){ printf "${c_red}✖  %s${c_nc}\n" "$*" >&2; exit 1; }

Colored output helpers for info/ok/warn/error (die also exits).

2) environment & sanity checks

log "Pre-flight checks"

[[ $EUID -eq 0 ]] && die "Run as an unprivileged user with docker group membership."
command -v docker          >/dev/null || die "Docker missing."
command -v doppler         >/dev/null || die "Doppler CLI missing."
docker compose version     >/dev/null || die "Docker Compose plugin missing."
docker ps                  >/dev/null || die "User cannot talk to the Docker socket."

# Validate DOPPLER_TOKEN
[[ -z "${DOPPLER_TOKEN:-}" ]] && die "DOPPLER_TOKEN environment variable is required."
  • Ensures:
    • not root (expects user in docker group),
    • docker, doppler, docker compose plugin available,
    • The user can reach the Docker socket,
    • DOPPLER_TOKEN is set.

3) Doppler-wrapped Docker and service helpers

docker_compose() {
  DOPPLER_TOKEN="$DOPPLER_TOKEN" doppler run -- docker compose "$@"
}
docker_rollout() {
  DOPPLER_TOKEN="$DOPPLER_TOKEN" doppler run -- docker rollout "$@"
}

service_running() {
  local service="$1"
  docker_compose ps --services --filter "status=running" | grep -q "^${service}$"
}

wait_for_service_health() {
  local service="$1"
  local timeout="${2:-60}"
  local count=0

  log "Waiting for $service health check (timeout: ${timeout}s)..."

  while [ $count -lt $timeout ]; do
    if docker_compose ps "$service" --format "table {{.Health}}" 2>/dev/null | grep -q "healthy"; then
      ok "$service is healthy"
      return 0
    fi
    if ! service_running "$service"; then
      warn "$service is not running, checking if it exited..."
      docker_compose logs --tail=10 "$service"
      return 1
    fi
    sleep 2
    count=$((count + 2))
  done

  warn "$service health check timeout after ${timeout}s"
  docker_compose logs --tail=20 "$service"
  return 1
}

ok  "Environment looks good."

It:

  • docker_compose / docker_rollout: Runs Docker commands with env injected by Doppler.
  • service_running: Checks if a service is in “running” state.
  • wait_for_service_health: Polls Docker Compose health for a service with a timeout; logs on failure.

4) TLS via Let’s Encrypt (dns-01) — skippable with SKIP_CERT=1

if [[ ${SKIP_CERT:-0} -eq 0 ]]; then
  log "Phase 1  – TLS certificates"
  cert_live="certbot/conf/live/alertdown.ai/fullchain.pem"
  if [[ ! -f $cert_live ]] || ! openssl x509 -checkend 604800 -noout -in "$cert_live" >/dev/null 2>&1
  then
      log "Obtaining / renewing wildcard cert via Cloudflare"
      [[ -f /run/secrets/cloudflare.ini ]] || die "Cloudflare credentials missing at /run/secrets/cloudflare.ini (chmod 600)."
      mkdir -p certbot/{conf,logs,www}
      docker run --rm \
        -v "$PWD/certbot/conf:/etc/letsencrypt" \
        -v "$PWD/certbot/logs:/var/log/letsencrypt" \
        -v /run/secrets/cloudflare.ini:/cloudflare.ini:ro \
        certbot/dns-cloudflare certonly \
        --dns-cloudflare \
        --dns-cloudflare-credentials /cloudflare.ini \
        --dns-cloudflare-propagation-seconds 300 \
        --email "${CERTBOT_EMAIL:[email protected]}" --agree-tos --no-eff-email \
        # We tell Certbot to generate wildcard certificates. 
        -d mydomain.com -d "*.mydomain.com" \
        --non-interactive --rsa-key-size 4096
  fi
  ok "TLS certificate ready."
else
  warn "SKIP_CERT=1 → skipping Let's Encrypt"
fi
  • If not skipping:
    • checks if alertdown.ai cert exists and is valid for ≥7 days.
    • uses certbot/dns-cloudflare to issue/renew wildcard certs via dns-01 with Cloudflare creds at /run/secrets/cloudflare.ini.
    • stores certs under certbot/conf.
  • Note: the -d flags use mydomain.com, but the cert path uses alertdown.ai. Make sure both match your real domain.

5) Phase 2 — Bring up the auth stack (Nginx + Dex)

log "Phase 2 – Bring up auth services (nginx + dex)"

log "Switching to initial NGINX config..."
cp nginx/nginx.conf.init nginx/nginx.conf

log "Starting dex service..."
docker_compose up -d dex

if ! wait_for_service_health "dex" 60; then
  die "Dex failed to become healthy"
fi

if service_running "nginx"; then
  log "Nginx already running, reloading with maintenance config..."
  if ! docker_compose exec nginx nginx -t -c /etc/nginx/nginx.conf; then
      die "New nginx config is invalid"
  fi
  if ! docker_compose exec nginx nginx -s reload; then
      die "Nginx reload failed - config may be invalid"
  fi
else
  docker_compose up -d nginx
fi

sleep 5
if ! service_running "nginx"; then
  docker_compose logs nginx
  die "Nginx failed to start"
fi

log "Testing auth endpoint availability..."
for i in {1..30}; do
  if curl -fs "https://auth.mydomain.com/healthz" >/dev/null 2>&1 || \
     curl -fs "$DEX_GCP_URL/healthz" >/dev/null 2>&1; then
    ok "Auth endpoint is accessible"
    break
  fi
  sleep 2
  [[ $i -eq 30 ]] && warn "Auth endpoint not yet accessible (may still be starting)"
done

It:

  • Swaps nginx to a maintenance/initial config.
  • Starts dex, waits until container health = healthy.
  • Ensures nginx is up; validates config and reloads if already running.
  • Probes auth health at https://auth.mydomain.com/healthz or $DEX_GCP_URL/healthz.
  • Note: again mydomain.com vs your actual domain; unify.

6) Phase 3 — Pull & start app service, then switch nginx to prod

log "Phase 3 – Pull and start app service"

log "Pulling app image..."
if ! docker_compose pull app; then
  warn "App image pull failed (check WIF setup) – continuing with cached image if available."
fi

log "Starting app service..."
docker_rollout app

if ! wait_for_service_health "app" 180; then
  warn "App service health check failed. The app may not be accessible."
fi

log "Switching to production NGINX config..."
cp nginx/nginx.conf.prod nginx/nginx.conf

log "Reloading nginx with production config..."
if docker_compose exec nginx nginx -s reload; then
  ok "Nginx reloaded successfully with production config."
else
  warn "Nginx reload failed. Check logs."
  docker_compose logs nginx
fi

It:

  • Pulls app image (warns if pull fails; uses cached image).
  • Deploys app via docker rollout (zero-downtime rollout tool).
  • Waits up to 180s for app health.
  • Swaps nginx to production config and reloads.

7) Phase 4 — start certbot manager (for renew hooks)

log "Phase 4 – Starting certbot manager"
docker_compose up -d certbot

8) Comprehensive health probes + summary

log "Comprehensive health probes"

declare -A internal_services=(
  ["dex"]="dex"
  ["app"]="app"
)

for name in "${!internal_services[@]}"; do
  service="${internal_services[$name]}"
  if service_running "$service"; then
    if docker_compose ps "$service" --format "table {{.Health}}" 2>/dev/null | grep -q "healthy"; then
      ok "Internal $name service ✓"
    else
      warn "Internal $name service ✗ (not healthy)"
    fi
  else
    warn "Internal $name service ✗ (not running)"
  fi
done

declare -A external_probes=(
  ["Auth (HTTPS)"]="curl -fs --connect-timeout 10 https://auth.alertdown.ai/healthz"
  ["App (HTTPS)"]="curl -fs --connect-timeout 10 https://alertdown.ai/health"
  ["App (HTTPS/nginx-health)"]="curl -fs --connect-timeout 10 https://alertdown.ai/nginx-health"
  ["HTTP redirect"]="curl -fs --connect-timeout 5 http://alertdown.ai/ | grep -q '301'"
)

for name in "${!external_probes[@]}"; do
  if timeout 15 bash -c "${external_probes[$name]}" >/dev/null 2>&1; then
    ok "External $name ✓"
  else
    warn "External $name ✗"
  fi
done

echo
log "Service Status Summary:"
docker_compose ps --format "table {{.Service}}\t{{.Status}}\t{{.Health}}"

echo -e "\n${c_green}🚀  AlertDown stack deployment complete:"
echo -e "${c_green}   Auth: https://auth.alertdown.ai${c_nc}"
echo -e "${c_green}   App:  https://alertdown.ai${c_nc}"

if ! service_running "app" || ! docker_compose ps "app" --format "table {{.Health}}" 2>/dev/null | grep -q "healthy"; then
  echo -e "${c_yellow}   Note: App service may still be starting up${c_nc}"
fi

It:

  • checks container “running” + “healthy” for dex and app.
  • hits external HTTPS health endpoints + HTTP→HTTPS redirect.
  • prints a one-line docker compose status table and friendly success banner.
  • notes if app is still starting.

Beware:

Please inspect this file before proceeding. There are other hardcoded values within it that you need to update.

Hooray 🥳

With this, you’ll have Dex up and running with a Docker image pulled from Artifact Registry!


Authenticating using Postman

If you run into issues, you can always use cURL or Postman to debug the token exchanges.

We need to perform 3 token exchanges:

  • Dex
  • Google STS (Security Token Service)
  • Google IAM Credentials

Authenticating against Dex


We’ll cover this in code further down. In shell, the Basic auth header is generated like this:

BASIC_AUTH_HEADER=$(echo -n "$DEX_GCP_STATIC_CLIENT_ID:$DEX_GCP_STATIC_CLIENT_SECRET" | base64 -w 0)
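The same header can be built in TypeScript; this mirrors what the SubjectTokenSupplier in the Node section further down does (the helper name here is illustrative, not from the repo):

```typescript
// Build the HTTP Basic auth header Dex expects:
// base64("clientId:clientSecret"), sent as `Authorization: Basic <value>`.
function basicAuthHeader(clientId: string, clientSecret: string): string {
  return Buffer.from(`${clientId}:${clientSecret}`).toString('base64');
}
```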

The generated fetch request:

const myHeaders = new Headers();
myHeaders.append("Content-Type", "application/x-www-form-urlencoded");
myHeaders.append("Authorization", "Basic Ly9pYW0uZ29vZ2xlYXBpcy5jb20vcHJvamVjdHMvMTA4MzM1MDgwNzQ5OC9sb2NhdGlvbnMvZ2xvYmFsL3dvcmtsb2FkSWRlbnRpdHlQb29scy9oZXR6bmVyLXBvb2wvcHJvdmlkZXJzL2hldHpuZXItcHJvdmlkZXI6YS1zdHJvbmctcGFzc3dvcmQtZ2VuZXJhdGVkLXVzaW5nLWEtcGFzc3dvcmQtZ2VuZXJhdG9y");

const urlencoded = new URLSearchParams();
urlencoded.append("grant_type", "password");
urlencoded.append("username", "[email protected]");
urlencoded.append("password", "pleaseuseastrongerpassword");
urlencoded.append("scope", "openid email profile groups");

const requestOptions = {
  method: "POST",
  headers: myHeaders,
  body: urlencoded,
  redirect: "follow"
};

fetch("https://auth.yourdomain.com", requestOptions)
  .then((response) => response.text())
  .then((result) => console.log(result))
  .catch((error) => console.error(error));

And now in CURL:

curl --location 'https://auth.yourdomain.com/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Authorization: Basic Ly9pYW0uZ29vZ2xlYXBpcy5jb20vcHJvamVjdHMvMTA4MzM1MDgwNzQ5OC9sb2NhdGlvbnMvZ2xvYmFsL3dvcmtsb2FkSWRlbnRpdHlQb29scy9oZXR6bmVyLXBvb2wvcHJvdmlkZXJzL2hldHpuZXItcHJvdmlkZXI6YS1zdHJvbmctcGFzc3dvcmQtZ2VuZXJhdGVkLXVzaW5nLWEtcGFzc3dvcmQtZ2VuZXJhdG9y' \
--data-urlencode 'grant_type=password' \
--data-urlencode '[email protected]' \
--data-urlencode 'password=pleaseuseastrongerpassword' \
--data-urlencode 'scope=openid email profile groups'
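If the next step (STS) rejects the exchange, the first thing to check is the ID token's claims: iss must match your Dex issuer and aud must match the Workload Identity audience. A small hypothetical TypeScript helper (not part of the repo) to decode the payload segment of the JWT Dex returned:

```typescript
type JwtClaims = { iss?: string; aud?: string; exp?: number; email?: string; [k: string]: unknown };

// Decode (not verify!) a JWT's payload so you can eyeball iss/aud/exp.
function decodeJwtPayload(jwt: string): JwtClaims {
  const segments = jwt.split('.');
  if (segments.length !== 3) throw new Error('Not a JWT: expected 3 dot-separated segments');
  // JWT segments are base64url-encoded JSON.
  const payload = Buffer.from(segments[1], 'base64url').toString('utf8');
  return JSON.parse(payload) as JwtClaims;
}
```

This only inspects the token; it does not validate the signature (STS does that against Dex's JWKS).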

Authenticating against the Google STS API:

With the ID token in hand, you exchange it against Google STS (Security Token Service), which trades an external identity token for a federated Google Cloud access token:

Where:

  • ALERTDOWN_AUTH_ID_TOKEN => the JWT that came from Dex (use the id_token, not the access_token).
  • AUDIENCE => //iam.googleapis.com/projects/1016670781645/locations/global/workloadIdentityPools/hetzner-pool/providers/hetzner-provider
  • Note the scope!

Here’s the fetch command:

const myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");

const raw = JSON.stringify({
  "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
  "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
  "subject_token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IjE1NjdmY2MyMTVhMmQxNGY2MzVhZmY1ZjI5MjNmZDkzMTVkZWNkNzQifQ.eyJpc3MiOiJodHRwczovL2F1dGguYWxlcnRkb3duLmFpIiwic3ViIjoiQ2lRM01qSmlZVFk1WVMwelkySmhMVFF3TURjdE9HRXlOQzB5TmpFeFpEUmpOR1ExWmprU0JXeHZZMkZzIiwiYXVkIjoiLy9pYW0uZ29vZ2xlYXBpcy5jb20vcHJvamVjdHMvMTA4MzM1MDgwNzQ5OC9sb2NhdGlvbnMvZ2xvYmFsL3dvcmtsb2FkSWRlbnRpdHlQb29scy9oZXR6bmVyLXBvb2wvcHJvdmlkZXJzL2hldHpuZXItcHJvdmlkZXIiLCJleHAiOjE3NTMyNTc4NDUsImlhdCI6MTc1MzE3MTQ0NSwiYXRfaGFzaCI6IjJ5bW1mQzFfUUZaYnRibTdCLWZuS3ciLCJlbWFpbCI6ImF1dGhAYWxlcnRkb3duLmFpIiwiZW1haWxfdmVyaWZpZWQiOnRydWUsIm5hbWUiOiJhdXRoQGFsZXJ0ZG93bi5haSJ9.kyI61WGR5m0h4YvflYV4sCHDY0G7ix7R4m59ITvE_Bq3oIwjbH2NxAMzmtPbUp9kCcsbosAeJTcfJWj2n03-LtRZKc1WjELrFytlnSDgt1KeNCqYYWsdG5eUORzYgvfl9ayNqf7QgDPc3Sr7XQElfk07F-uJPAGPssUXY-qxos6lZHrmComzEWkWqfbuq5e-cvLsBP6TmFAt58B2XKAcSLYSuFrp8eMDaCZ7zQ12z9NR9q0N7u7cVKsJT2429I27fh6LrQsthMaaDMEKzfhY-HskbmcvYO_z4U2M1plYvXAqJRJzGGrkArPSGklxvfFS6gIqLSI7MLnzzsJKpTUt1w",
  "audience": "//iam.googleapis.com/projects/1083350807498/locations/global/workloadIdentityPools/hetzner-pool/providers/hetzner-provider",
  "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
  "scope": "https://www.googleapis.com/auth/cloud-platform",
  "options": "{\"serviceAccount\": \"[email protected]\"}"
});

const requestOptions = {
  method: "POST",
  headers: myHeaders,
  body: raw,
  redirect: "follow"
};

fetch("https://sts.googleapis.com/v1/token", requestOptions)
  .then((response) => response.text())
  .then((result) => console.log(result))
  .catch((error) => console.error(error));

Here’s the full cURL command:

curl --location 'https://sts.googleapis.com/v1/token' \
--header 'Content-Type: application/json' \
--data-raw '{
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
    "subject_token": "eyJhbGciOiJSUzI1NiIsImtpZCI6IjE1NjdmY2MyMTVhMmQxNGY2MzVhZmY1ZjI5MjNmZDkzMTVkZWNkNzQifQ.eyJpc3MiOiJodHRwczovL2F1dGguYWxlcnRkb3duLmFpIiwic3ViIjoiQ2lRM01qSmlZVFk1WVMwelkySmhMVFF3TURjdE9HRXlOQzB5TmpFeFpEUmpOR1ExWmprU0JXeHZZMkZzIiwiYXVkIjoiLy9pYW0uZ29vZ2xlYXBpcy5jb20vcHJvamVjdHMvMTA4MzM1MDgwNzQ5OC9sb2NhdGlvbnMvZ2xvYmFsL3dvcmtsb2FkSWRlbnRpdHlQb29scy9oZXR6bmVyLXBvb2wvcHJvdmlkZXJzL2hldHpuZXItcHJvdmlkZXIiLCJleHAiOjE3NTMyNTc4NDUsImlhdCI6MTc1MzE3MTQ0NSwiYXRfaGFzaCI6IjJ5bW1mQzFfUUZaYnRibTdCLWZuS3ciLCJlbWFpbCI6ImF1dGhAYWxlcnRkb3duLmFpIiwiZW1haWxfdmVyaWZpZWQiOnRydWUsIm5hbWUiOiJhdXRoQGFsZXJ0ZG93bi5haSJ9.kyI61WGR5m0h4YvflYV4sCHDY0G7ix7R4m59ITvE_Bq3oIwjbH2NxAMzmtPbUp9kCcsbosAeJTcfJWj2n03-LtRZKc1WjELrFytlnSDgt1KeNCqYYWsdG5eUORzYgvfl9ayNqf7QgDPc3Sr7XQElfk07F-uJPAGPssUXY-qxos6lZHrmComzEWkWqfbuq5e-cvLsBP6TmFAt58B2XKAcSLYSuFrp8eMDaCZ7zQ12z9NR9q0N7u7cVKsJT2429I27fh6LrQsthMaaDMEKzfhY-HskbmcvYO_z4U2M1plYvXAqJRJzGGrkArPSGklxvfFS6gIqLSI7MLnzzsJKpTUt1w",
    "audience": "//iam.googleapis.com/projects/1083350807498/locations/global/workloadIdentityPools/hetzner-pool/providers/hetzner-provider",
    "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": "https://www.googleapis.com/auth/cloud-platform",
    "options": "{\"serviceAccount\": \"[email protected]\"}"
}'
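To keep the audience string consistent with your project number, pool, and provider IDs, you can assemble the STS exchange body programmatically. A sketch mirroring the request above (the function name is illustrative):

```typescript
// Build the body for POST https://sts.googleapis.com/v1/token,
// keeping the audience derived from the same IDs used everywhere else.
function buildStsExchangeBody(opts: {
  idToken: string;
  projectNumber: string;
  poolId: string;
  providerId: string;
  serviceAccount: string;
}): Record<string, string> {
  const audience = `//iam.googleapis.com/projects/${opts.projectNumber}/locations/global/workloadIdentityPools/${opts.poolId}/providers/${opts.providerId}`;
  return {
    grant_type: 'urn:ietf:params:oauth:grant-type:token-exchange',
    subject_token_type: 'urn:ietf:params:oauth:token-type:id_token',
    subject_token: opts.idToken,
    audience,
    requested_token_type: 'urn:ietf:params:oauth:token-type:access_token',
    scope: 'https://www.googleapis.com/auth/cloud-platform',
    // The options field is a JSON string, not a nested object.
    options: JSON.stringify({ serviceAccount: opts.serviceAccount }),
  };
}
```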

Google IAM Credentials

This is the last part! With this, we obtain the short-lived access token that impersonates our service account.

Fetch:

const myHeaders = new Headers();
myHeaders.append("Accept", "application/json");
myHeaders.append("Content-Type", "application/json");
// The bearer is the token we receive from Google STS
myHeaders.append("Authorization", "Bearer ya29.d.c0ASRK0GZhTkZi8JGyqpYLR_m7ikJXVUiOL0mOA4jtgydbRyRyNusJu0up5PGOebyIYAGZoKHkreDgaukv7DbNZm3OT6JPVB5HRW_KiqZZyUfgnQRloLrcTPOyf6bDl_yHyzEjVlOy-FYPs98RSt2C-POO3R8NZC-7jEYYzqVlimcNgt7E4QBmeSsxhXShegLCjlDcgSBOob3Kb_-Q-xeTvtkflhYo7jPFiE8ChJEqnIIxL6CzritqgOsrGB4VBNuR0BhghbrH_Sa4q7ELTMclLuRb_PuTJ1YtxW1Ia7az3ENOQSF1gxKyeubBAAvKGhtfEDJYVj3mkkPUjXYUTJHPzwV6HRNF2Rls3XWfPZpiTILbbBp1bPt45Up0-9cY6NMie9b1hL9FmtCHxJOzmxDPWWDhi9KDAsRx0OQkJKJWAtl_jNX7HbjNiMRXPF-WMqR3TQia9DEBsEqWNGSMg67uFCbxbtz0DWVjjQS9VlErmK_h1pwdwhaSMWSW5_Qz-VmZ5E2K2uPdPDMLWgdDjk01b4mVXVsDJGpIj2W-9D3C8yzV5avJcAhuWpDCjOtUTUxo5LKDvjZAFbPBaIa5-LfM54UlIvMnem2y9sgIIVxptmR1gslv0PJMgCyeYDjxlCcvQqv8W5IuFw61O7zIp3FlmU8EwcbMGdfs8PRulgLjep2jL9HhxBo8zAcz1aJgVnV0kHFPfYSTVsBUQcNHYCOBmmp_NkrgKW8Tjs1e7buzZpKz4Op6Up2rqMa69ArwasSkTiQ1cGZxM9_hEt9lt9bCjvlyns9JWcSxdBrRptk4fFXBhQyYbZQ90WW2BBxfTCopa_QKcwAiNqVgKHQZlxuGeVgXVuwpBdNqKMGT29BjuiqGlz_64fmLhPr37ivSjdt6YyCKB1S6-_sjDMFUAC73HtCyaMcF8WsNdbQDgBzvNYfZMg_YnsHkQdClj7EwOMymOna9hOZyKHbtFP89IZ852yzTUqEEGAn3WAaneiBbYQjjB1WtMtSGTTGVmggK65NG1HOjiuI8R7Ik0Dgx4kOSQQYD6fHVLcYPignZIzxm8TZhIcuBPIRKm5FHb9LD69lFtpDkJ8w0vfC2bAMz-Y15u-mla8jIkVTDi_3i1mYmJh8cCm0wwEDCBWEtNOJ5zwNFsYMHQwDso-9nc1cM6P57JXsq7g");

const raw = JSON.stringify({
  "scope": [
    "https://www.googleapis.com/auth/cloud-platform"
  ]
});

const requestOptions = {
  method: "POST",
  headers: myHeaders,
  body: raw,
  redirect: "follow"
};

fetch("https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken", requestOptions)
  .then((response) => response.text())
  .then((result) => console.log(result))
  .catch((error) => console.error(error));

cURL:

curl --location 'https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer ya29.d.c0ASRK0GZhTkZi8JGyqpYLR_m7ikJXVUiOL0mOA4jtgydbRyRyNusJu0up5PGOebyIYAGZoKHkreDgaukv7DbNZm3OT6JPVB5HRW_KiqZZyUfgnQRloLrcTPOyf6bDl_yHyzEjVlOy-FYPs98RSt2C-POO3R8NZC-7jEYYzqVlimcNgt7E4QBmeSsxhXShegLCjlDcgSBOob3Kb_-Q-xeTvtkflhYo7jPFiE8ChJEqnIIxL6CzritqgOsrGB4VBNuR0BhghbrH_Sa4q7ELTMclLuRb_PuTJ1YtxW1Ia7az3ENOQSF1gxKyeubBAAvKGhtfEDJYVj3mkkPUjXYUTJHPzwV6HRNF2Rls3XWfPZpiTILbbBp1bPt45Up0-9cY6NMie9b1hL9FmtCHxJOzmxDPWWDhi9KDAsRx0OQkJKJWAtl_jNX7HbjNiMRXPF-WMqR3TQia9DEBsEqWNGSMg67uFCbxbtz0DWVjjQS9VlErmK_h1pwdwhaSMWSW5_Qz-VmZ5E2K2uPdPDMLWgdDjk01b4mVXVsDJGpIj2W-9D3C8yzV5avJcAhuWpDCjOtUTUxo5LKDvjZAFbPBaIa5-LfM54UlIvMnem2y9sgIIVxptmR1gslv0PJMgCyeYDjxlCcvQqv8W5IuFw61O7zIp3FlmU8EwcbMGdfs8PRulgLjep2jL9HhxBo8zAcz1aJgVnV0kHFPfYSTVsBUQcNHYCOBmmp_NkrgKW8Tjs1e7buzZpKz4Op6Up2rqMa69ArwasSkTiQ1cGZxM9_hEt9lt9bCjvlyns9JWcSxdBrRptk4fFXBhQyYbZQ90WW2BBxfTCopa_QKcwAiNqVgKHQZlxuGeVgXVuwpBdNqKMGT29BjuiqGlz_64fmLhPr37ivSjdt6YyCKB1S6-_sjDMFUAC73HtCyaMcF8WsNdbQDgBzvNYfZMg_YnsHkQdClj7EwOMymOna9hOZyKHbtFP89IZ852yzTUqEEGAn3WAaneiBbYQjjB1WtMtSGTTGVmggK65NG1HOjiuI8R7Ik0Dgx4kOSQQYD6fHVLcYPignZIzxm8TZhIcuBPIRKm5FHb9LD69lFtpDkJ8w0vfC2bAMz-Y15u-mla8jIkVTDi_3i1mYmJh8cCm0wwEDCBWEtNOJ5zwNFsYMHQwDso-9nc1cM6P57JXsq7g' \
--data '{
    "scope": [
        "https://www.googleapis.com/auth/cloud-platform"
    ]
}'

The Bearer token is the ya29.d token we receive from Google STS.


Issuing an OIDC Token from NodeJS

This process isn’t as cumbersome as Docker’s, but it’s still hairy and sparsely documented. Google builds the authentication flow right into google-auth-library, but it still requires some configuration.

Here’s how:

0. The OIDC Config (We pass it from Doppler):

OIDC_CLIENT_ID="//iam.googleapis.com/projects/1016670781645/locations/global/workloadIdentityPools/hetzner-pool/providers/hetzner-provider"
OIDC_CLIENT_SECRET="This is the password that we pass in base64 format in the header"
OIDC_ISSUER="https://auth.mydomain.com"
OIDC_PASSWORD="This is the unhashed bcrypt password"
OIDC_POOL_ID="hetzner-pool"
OIDC_PROVIDER_ID="hetzner-provider"
OIDC_USERNAME="DEX_GCP_STATIC_PASSWORD_EMAIL -> the email address we used in Dex"

1. Install the required packages: google-auth-library

pnpm add google-auth-library @google-cloud/vertexai

2. Create an Auth.ts:

import {
  ExternalAccountClientOptions,
  SubjectTokenSupplier,
} from 'google-auth-library';
import { GoogleOIDCFetchException } from './google.auth.exception';

export type OIDCToken = {
  access_token: string;
  token_type: 'bearer';
  expires_in: number;
  id_token: string;
};

export type OidcConfigParams = {
  oidcIssuer: string;
  oidcClientId: string;
  oidcClientSecret: string;
  oidcUsername: string;
  oidcPassword: string;
  googleProjectNumber: string;
  oidcPoolId: string;
  oidcProviderId: string;
  googleServiceAccount: string;
};


class OidcTokenSupplier implements SubjectTokenSupplier {
  readonly #config: OidcConfigParams;

  constructor(config: OidcConfigParams) {
    this.#config = config;
  }


  async getSubjectToken(
  ): Promise<string> {
    // our Dex URL
    const tokenUrl = `${this.#config.oidcIssuer}/token`;
    const basicAuthHeader = Buffer.from(`${this.#config.oidcClientId}:${this.#config.oidcClientSecret}`).toString('base64');
    const urlencoded = new URLSearchParams();
    urlencoded.append("grant_type", "password");
    urlencoded.append("username", this.#config.oidcUsername);
    urlencoded.append("password", this.#config.oidcPassword);
    urlencoded.append("scope", "openid email profile groups");

    const response = await fetch(tokenUrl, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Authorization': `Basic ${basicAuthHeader}`,
      },
      body: urlencoded,
    });

    if (!response.ok) {
      throw new GoogleOIDCFetchException(
        `Failed to get OIDC token: ${response.status} ${response.statusText}`,
        {
          status: response.status,
          statusText: response.statusText,
          body: await response.text(),

        }
      );
    }

    const data = await response.json() as OIDCToken;
    return data.id_token;
  }
}

export function getOidcConfig(
  params: OidcConfigParams
): ExternalAccountClientOptions {
  const audience = `//iam.googleapis.com/projects/${params.googleProjectNumber}/locations/global/workloadIdentityPools/${params.oidcPoolId}/providers/${params.oidcProviderId}`;


  return {
    type: 'external_account',
    audience,
    subject_token_type: 'urn:ietf:params:oauth:token-type:id_token',
    token_url: 'https://sts.googleapis.com/v1/token',
    subject_token_supplier: new OidcTokenSupplier(params),
    service_account_impersonation_url: `https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/${params.googleServiceAccount}:generateAccessToken`,
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  };
}

We implement the SubjectTokenSupplier interface and use it to point at our Dex deployment.

The getOidcConfig function transforms our input into the ExternalAccountClientOptions shape that google-auth-library (and every Google client package built on it) expects. This mirrors the Postman flow above, with the difference that the STS and IAM endpoints are baked in.

3. Vertex AI configuration:

We can then consume it in Vertex AI:

import { Part, VertexAI } from '@google-cloud/vertexai';
import { getOidcConfig, OidcConfigParams } from './auth';

type GoogleVertexLLMConstructorConfig = {
  projectId: string;
  location: string;
  oidcConfig: OidcConfigParams;
};

export class GoogleVertexLLM {
  #client: VertexAI;
  #projectId: string;
  #location: string;

  constructor(config: GoogleVertexLLMConstructorConfig) {
    this.#projectId = config.projectId;
    this.#location = config.location;

    this.#client = new VertexAI({
      project: this.#projectId,
      location: this.#location,
      googleAuthOptions: {
        credentials: getOidcConfig(config.oidcConfig),
      },
    });
  }

  async generateChat<T>(systemPrompt: string, userPrompt: string): Promise<T> {
    const parts: Part[] = [
      {
        text: systemPrompt,
      },
    ];

    const model = this.#client.getGenerativeModel({
      model: 'gemini-2.0-flash',
      systemInstruction: {
        role: 'system',
        parts,
      },
    });

    const chat = model.startChat({
      generationConfig: {
        temperature: 0,
        topP: 0.95,
        maxOutputTokens: 8192,
        responseMimeType: 'application/json',
      },
    });

    // Send the user prompt and parse the JSON body (responseMimeType above).
    const result = await chat.sendMessage(userPrompt);
    const text =
      result.response.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
    return JSON.parse(text) as T;
  }
}

The client then performs all three token exchanges for us automatically!

WE DID IT

Omgosh. It was crazy. It took a while, but we were able to connect to the services and issue a rotating token!

Hopefully, it won’t take you long.

If there’s a part that wasn’t clear from the article, let me know!

You can always find me on Twitter/X and/or LinkedIn @javiasilis


Written by josejaviasilis | Talks about: Web Development, Startup Engineering, and latest trends
Published by HackerNoon on 2025/09/26