Microservice architecture is nowadays almost a standard for backend development. An API gateway is an excellent way to connect a group of microservices to a single API accessible to users. API gateways are available from cloud providers such as AWS, Azure, Google Cloud Platform, and Cloudflare. Kong is a scalable API gateway built on open source and can be an excellent alternative if you don't want to have your system locked into a particular vendor.

This tutorial shows an example using the Kong API gateway, Ory Kratos, and Ory Oathkeeper. The illustration below shows the final architecture we are going to build in this guide.

The full source code for this tutorial is available on GitHub.

## What we will use

Kong gateway can be an excellent solution for an ingress load balancer and API gateway if you do not want the vendor lock-in of a cloud API gateway in your application. Kong uses OpenResty and Lua. OpenResty extends Nginx with Lua scripting to use Nginx's event model for non-blocking I/O with HTTP clients and remote backends like PostgreSQL, Memcached, and Redis. OpenResty is not an Nginx fork, and Kong is not an OpenResty fork. Kong uses OpenResty to enable its API gateway features.

Ory Oathkeeper acts as an identity and access proxy for our microservices. It allows us to proxy only authenticated requests to our microservices, so we don't need to implement middleware to check authentication. It can also transform requests, for example, convert session auth into a JWT for a back-end service.

Ory Kratos is the authentication provider; it handles all first-party authentication flows: username/password, forgot password, MFA/2FA, and more. It also provides OIDC/social login capabilities, for example, "Login with GitHub".

## Building simple microservices

Let's say we have two microservices: hello and world. They are pretty simple and serve only to test our API gateway, but you can switch them out for more complex components.

The "World" microservice exposes a /world API endpoint and returns a simple JSON message:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Response struct {
	Message string `json:"message"`
}

func helloJSON(w http.ResponseWriter, r *http.Request) {
	response := Response{Message: "World microservice"}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	json.NewEncoder(w).Encode(response)
}

func main() {
	http.HandleFunc("/world", helloJSON)
	log.Fatal(http.ListenAndServe(":8090", nil))
}
```

The "Hello" microservice exposes a /hello API endpoint and returns a simple JSON message:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Response struct {
	Message string `json:"message"`
}

func helloJSON(w http.ResponseWriter, r *http.Request) {
	response := Response{Message: "Hello microservice"}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	json.NewEncoder(w).Encode(response)
}

func main() {
	http.HandleFunc("/hello", helloJSON)
	log.Fatal(http.ListenAndServe(":8090", nil))
}
```

We now want to secure access to these microservices and let only authenticated users reach these endpoints. Okay. Let's start hacking, shall we?
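Before we put any gateway in front of them, it's worth confirming that both services answer on their own. Here is a minimal check in Go; the URLs and ports are assumptions (both services listen on :8090 by default, so run one of them on another port, or in separate containers, and adjust accordingly):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Hypothetical local addresses for the two services; adjust to your setup.
	urls := []string{
		"http://127.0.0.1:8090/hello",
		"http://127.0.0.1:8091/world",
	}
	for _, u := range urls {
		resp, err := http.Get(u)
		if err != nil {
			log.Fatalf("GET %s: %v", u, err)
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			log.Fatalf("reading %s: %v", u, err)
		}
		// Each service should print its JSON message, e.g. {"message":"Hello microservice"}.
		fmt.Printf("%s -> %s", u, body)
	}
}
```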
## Ory Kratos setup

Follow the Quickstart guide to set up Ory Kratos. In this tutorial, you only need a docker-compose file with the following configuration:

```yaml
postgres-kratos:
  image: postgres:9.6
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=kratos
    - POSTGRES_PASSWORD=secret
    - POSTGRES_DB=kratos
  networks:
    - intranet
kratos-migrate:
  image: oryd/kratos:v0.8.0-alpha.3
  links:
    - postgres-kratos:postgres-kratos
  environment:
    - DSN=postgres://kratos:secret@postgres-kratos:5432/kratos?sslmode=disable&max_conns=20&max_idle_conns=4
  networks:
    - intranet
  volumes:
    - type: bind
      source: ./kratos
      target: /etc/config/kratos
  command: -c /etc/config/kratos/kratos.yml migrate sql -e --yes
kratos:
  image: oryd/kratos:v0.8.0-alpha.3
  links:
    - postgres-kratos:postgres-kratos
  environment:
    - DSN=postgres://kratos:secret@postgres-kratos:5432/kratos?sslmode=disable&max_conns=20&max_idle_conns=4
  ports:
    - "4433:4433"
    - "4434:4434"
  volumes:
    - type: bind
      source: ./kratos
      target: /etc/config/kratos
  networks:
    - intranet
  command: serve -c /etc/config/kratos/kratos.yml --dev --watch-courier
kratos-selfservice-ui-node:
  image: oryd/kratos-selfservice-ui-node:v0.8.0-alpha.3
  environment:
    - KRATOS_PUBLIC_URL=http://kratos:4433/
    - KRATOS_BROWSER_URL=http://127.0.0.1:4433/
  networks:
    - intranet
  ports:
    - "4455:3000"
  restart: on-failure
mailslurper:
  image: oryd/mailslurper:latest-smtps
  ports:
    - "4436:4436"
    - "4437:4437"
  networks:
    - intranet
```

Some notes on the network architecture:

- HTTP :4433 and :4434 are the public and admin APIs of Ory Kratos.
- HTTP :4436 is Mailslurper, a mock email server. You can get an activation link by accessing http://127.0.0.1:4436.
- HTTP :4455 is the UI that allows one to start sign-up/login/recovery flows.

After running `docker-compose up` you can open http://127.0.0.1:4455/welcome to test your configuration.

## Configuring Ory Oathkeeper

Now we can start configuring our gateways for this example. Kong is the entry point for the network traffic; Ory Oathkeeper would be accessible from the internal network only in this case. Let's review our architecture diagram from before:

Oathkeeper checks sessions and proxies traffic to our microservices while Kong provides ingress load balancing. We can even set up Round-Robin DNS to have a more robust configuration for our service.
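The key mechanism behind the protection we are about to configure is the Kratos session endpoint: any component can decide whether a request is authenticated by forwarding its cookies to `/sessions/whoami`. Oathkeeper's `cookie_session` authenticator does exactly this via the `check_session_url` setting shown below. The following Go sketch illustrates the idea; it is not Oathkeeper's actual implementation, and the Kratos address is the Quickstart default:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// isAuthenticated forwards the incoming request's cookies to Ory Kratos'
// whoami endpoint and reports whether they belong to an active session.
func isAuthenticated(r *http.Request) (bool, error) {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:4433/sessions/whoami", nil)
	if err != nil {
		return false, err
	}
	// Forward the ory_kratos_session cookie (and any others) as-is.
	req.Header.Set("Cookie", r.Header.Get("Cookie"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	// Kratos answers 200 for an active session and 401 otherwise.
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	http.HandleFunc("/check", func(w http.ResponseWriter, r *http.Request) {
		ok, err := isAuthenticated(r)
		if err != nil {
			log.Println(err)
			http.Error(w, "session check failed", http.StatusBadGateway)
			return
		}
		fmt.Fprintf(w, "authenticated: %v\n", ok)
	})
	log.Fatal(http.ListenAndServe(":8091", nil))
}
```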
Here is how we configure the access rules for Ory Oathkeeper:

```yaml
- id: "api:hello-protected"
  upstream:
    preserve_host: true
    url: "http://hello:8090"
  match:
    url: "http://oathkeeper:4455/hello"
    methods:
      - GET
  authenticators:
    - handler: cookie_session
  mutators:
    - handler: noop
  authorizer:
    handler: allow
  errors:
    - handler: redirect
      config:
        to: http://127.0.0.1:4455/login

- id: "api:world-protected"
  upstream:
    preserve_host: true
    url: "http://world:8090"
  match:
    url: "http://oathkeeper:4455/world"
    methods:
      - GET
  authenticators:
    - handler: cookie_session
  mutators:
    - handler: noop
  authorizer:
    handler: allow
  errors:
    - handler: redirect
      config:
        to: http://127.0.0.1:4455/login
```

The Ory Oathkeeper configuration:

```yaml
log:
  level: debug
  format: json

serve:
  proxy:
    cors:
      enabled: true
      allowed_origins:
        - "*"
      allowed_methods:
        - POST
        - GET
        - PUT
        - PATCH
        - DELETE
      allowed_headers:
        - Authorization
        - Content-Type
      exposed_headers:
        - Content-Type
      allow_credentials: true
      debug: true

errors:
  fallback:
    - json
  handlers:
    redirect:
      enabled: true
      config:
        to: http://127.0.0.1:4455/login
        when:
          - error:
              - unauthorized
              - forbidden
            request:
              header:
                accept:
                  - text/html
    json:
      enabled: true
      config:
        verbose: true

access_rules:
  matching_strategy: glob
  repositories:
    - file:///etc/config/oathkeeper/access-rules.yml

authenticators:
  anonymous:
    enabled: true
    config:
      subject: guest
  cookie_session:
    enabled: true
    config:
      check_session_url: http://kratos:4433/sessions/whoami
      preserve_path: true
      extra_from: "@this"
      subject_from: "identity.id"
      only:
        - ory_kratos_session
  noop:
    enabled: true

authorizers:
  allow:
    enabled: true

mutators:
  noop:
    enabled: true
```

Ory Oathkeeper now looks for a valid `ory_kratos_session` cookie in the request and proxies only authenticated requests. It redirects to the login UI if no session cookie is available.

## Adding Kong

Now all that is needed is to configure Kong:

```yaml
services:
  kong-migrations:
    image: "kong:latest"
    command: kong migrations bootstrap
    depends_on:
      - db
    environment:
      <<: *kong-env
    networks:
      - intranet
    restart: on-failure
  kong:
    platform: linux/arm64
    image: "kong:latest"
    environment:
      <<: *kong-env
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_PROXY_LISTEN: "${KONG_PROXY_LISTEN:-0.0.0.0:8000}"
      KONG_ADMIN_LISTEN: "${KONG_ADMIN_LISTEN:-0.0.0.0:8001}"
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_PREFIX: ${KONG_PREFIX:-/var/run/kong}
      KONG_DECLARATIVE_CONFIG: "/opt/kong/kong.yaml"
    networks:
      - intranet
    ports:
      # The following two environment variables default to an insecure value (0.0.0.0)
      # according to the CIS Security test.
      - "${KONG_INBOUND_PROXY_LISTEN:-0.0.0.0}:8000:8000/tcp"
      - "${KONG_INBOUND_SSL_PROXY_LISTEN:-0.0.0.0}:8443:8443/tcp"
      - "127.0.0.1:8001:8001/tcp"
      - "127.0.0.1:8444:8444/tcp"
    healthcheck:
      test: ["CMD", "kong", "health"]
      interval: 10s
      timeout: 10s
      retries: 10
    restart: on-failure:5
    read_only: true
    volumes:
      - kong_prefix_vol:${KONG_PREFIX:-/var/run/kong}
      - kong_tmp_vol:/tmp
      - ./config:/opt/kong
    security_opt:
      - no-new-privileges
  db:
    image: postgres:9.6
    environment:
      POSTGRES_DB: kong
      POSTGRES_USER: kong
      POSTGRES_PASSWORD: kong
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "kong"]
      interval: 30s
      timeout: 30s
      retries: 3
    restart: on-failure
    networks:
      - intranet
  hello:
    # ... the hello and world microservice containers continue here (see the full source on GitHub)
```

The docker-compose file creates three containers:

- A db container with a PostgreSQL database to store the configuration of services/routes for our API gateway.
- A kong-migrations container to run migrations against the database.
- A kong container that exposes port 8000 for proxying traffic and port 8001 with the admin API.
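Because the next step talks to the admin API, it can be convenient to wait until Kong reports itself healthy before pushing configuration. A small helper in Go (a sketch; `/status` is a standard Kong admin API endpoint, and the address matches the `127.0.0.1:8001` port mapping above, so adjust it if you changed the compose file):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// Poll the Kong admin API until it answers, or give up after ~60 seconds.
	const statusURL = "http://127.0.0.1:8001/status"
	for i := 0; i < 30; i++ {
		resp, err := http.Get(statusURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("Kong admin API is ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("Kong admin API did not become ready")
}
```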
As the last step, we need to create a service for Kong and configure routes.

```bash
#!/bin/bash

# Creates a secure-api service
# and proxies network traffic to oathkeeper
curl -i -X POST \
  --url http://localhost:8001/services/ \
  --data 'name=secure-api' \
  --data 'url=http://oathkeeper:4455'

# Creates routes for the secure-api service
curl -i -X POST \
  --url http://localhost:8001/services/secure-api/routes \
  --data 'paths[]=/'
```

## Testing

You can open http://127.0.0.1:8000/hello or http://127.0.0.1:8000/world in your browser, and there are two possible scenarios:

1. You receive `{"message": "Hello microservice"}` (or `{"message": "World microservice"}`).
2. The browser redirects you to http://127.0.0.1:4455/login.

## Further steps

- Configure the id_token mutator to have the identity accessible as a JWT for your microservices (see the sketch below).
- Configure the password policy to better suit your use case.
- Add two-factor authentication.
- Consider using authentication based on subrequest results instead of having an additional reverse proxy inside your network. Kong auth request can be an excellent plugin to use Oathkeeper as a decision API for Kong.
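As a taste of the first item: if you switch the `noop` mutator to `id_token`, Oathkeeper forwards the identity as a signed JWT, which your microservice can then read. The sketch below only decodes the token payload to keep it dependency-free; a real service must verify the signature against Oathkeeper's JWKS, and the header name and claim layout used here are assumptions based on the mutator's defaults:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strings"
)

// subjectFromRequest extracts the "sub" claim from a Bearer JWT.
// NOTE: this skips signature verification and is for illustration only.
func subjectFromRequest(r *http.Request) (string, error) {
	raw := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
	parts := strings.Split(raw, ".")
	if len(parts) != 3 {
		return "", fmt.Errorf("not a JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return "", err
	}
	var claims struct {
		Subject string `json:"sub"`
	}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return "", err
	}
	return claims.Subject, nil
}

func main() {
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		sub, err := subjectFromRequest(r)
		if err != nil {
			http.Error(w, "missing or malformed token", http.StatusUnauthorized)
			return
		}
		// Greet the authenticated identity by its Kratos identity ID.
		fmt.Fprintf(w, `{"message": "Hello %s"}`, sub)
	})
	log.Fatal(http.ListenAndServe(":8090", nil))
}
```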