Architecting a Highly Scalable Golang API with Docker Swarm & Traefik

by Mohamed Labouardy, November 1st, 2017

This post will show you how to setup a Swarm Cluster, deploy a couple of microservices, and create a Reverse Proxy Service (with Traefik) in charge of routing requests on their base URLs.

If you haven’t already created a Swarm cluster, you can use the shell script below to set one up with 3 nodes (1 Manager & 2 Workers).
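
A minimal sketch of such a setup.sh, based on docker-machine (the driver, the machine names, and the Swarm port are assumptions to adapt to your environment):

#!/bin/bash
# setup.sh - sketch of a 3-node Swarm cluster (node-1 = manager, node-2 & node-3 = workers)

# Create the 3 machines (the virtualbox driver is an assumption)
for node in node-1 node-2 node-3; do
  docker-machine create --driver virtualbox $node
done

# Initialize the Swarm on the manager
MANAGER_IP=$(docker-machine ip node-1)
docker-machine ssh node-1 "docker swarm init --advertise-addr $MANAGER_IP"

# Retrieve the worker join token and join the 2 workers
TOKEN=$(docker-machine ssh node-1 "docker swarm join-token -q worker")
for node in node-2 node-3; do
  docker-machine ssh $node "docker swarm join --token $TOKEN $MANAGER_IP:2377"
done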

Issue the following command to execute the script:

chmod +x setup.sh

./setup.sh

Once the script finishes, we have a Swarm cluster with 3 nodes:
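
You can verify by listing the Swarm nodes from the manager (the IDs below are placeholders):

docker-machine ssh node-1 "docker node ls"

ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
aaaaaaaaaaaaaaaaaaaaaaaaa *   node-1     Ready    Active         Leader
bbbbbbbbbbbbbbbbbbbbbbbbb     node-2     Ready    Active
ccccccccccccccccccccccccc     node-3     Ready    Active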

Our example microservice application consists of two parts: the Books API and the Movies API. For both parts, I have prepared images that can be pulled from Docker Hub.

The Dockerfiles for both images can be found on my GitHub.
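
If you want to pull them ahead of time, you can do so with docker pull (the repository names below are illustrative, check Docker Hub for the exact ones):

docker pull mlabouardy/books-api
docker pull mlabouardy/movies-api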

Create a docker-compose.yml file with the following content:
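
What follows is a minimal sketch rather than the exact file: the image names, the port the Go APIs listen on, and the Traefik version are assumptions. The points after it describe the parts that matter.

version: "3.3"

services:
  traefik:
    image: traefik:1.4
    ports:
      - "80:80"        # standard HTTP traffic
      - "8080:8080"    # web dashboard
    networks:
      - traefik-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    configs:
      - source: traefik-config
        target: /etc/traefik/traefik.toml
    deploy:
      placement:
        constraints:
          - node.role == manager

  books:
    image: mlabouardy/books-api        # assumed image name
    networks:
      - traefik-net
    deploy:
      placement:
        constraints:
          - node.role == worker
      labels:
        - "traefik.port=5000"                       # port the Go API listens on (assumed)
        - "traefik.docker.network=api_traefik-net"  # stack-prefixed network name
        - "traefik.frontend.rule=PathPrefix:/books"

  movies:
    image: mlabouardy/movies-api       # assumed image name
    networks:
      - traefik-net
    deploy:
      placement:
        constraints:
          - node.role == worker
      labels:
        - "traefik.port=5000"
        - "traefik.docker.network=api_traefik-net"
        - "traefik.frontend.rule=PathPrefix:/movies"

networks:
  traefik-net:
    driver: overlay

configs:
  traefik-config:
    file: config.toml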

  • We use an overlay network named traefik-net, on which we add the services we want to expose to Traefik.
  • We use constraints to deploy the APIs on the workers and Traefik on the Swarm manager.
  • The Traefik container is configured to listen on port 80 for the standard HTTP traffic, but also exposes port 8080 for a web dashboard.
  • Mounting the Docker socket (/var/run/docker.sock) allows Traefik to listen to Docker daemon events and reconfigure itself when containers are started or stopped.
  • The label traefik.frontend.rule is used by Traefik to determine which container to use for which request path.
  • The configs part creates a configuration file for Traefik from config.toml, which enables the Docker backend (a sketch of this file follows below).
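
A sketch of what config.toml could contain for Traefik 1.x (the exact options are assumptions):

# config.toml - Traefik listens on :80, serves the dashboard on :8080,
# and watches the Docker daemon in Swarm mode
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

[web]
address = ":8080"

[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
swarmMode = true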

In order to deploy our stack, we should execute the following command:

docker stack deploy --compose-file docker-compose.yml api

Let’s check the overlay network:

docker network ls
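
You should see an overlay network prefixed with the stack name (the ID below is a placeholder):

NETWORK ID     NAME              DRIVER    SCOPE
ddddddddddd    api_traefik-net   overlay   swarm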

Traefik configuration:

docker config ls
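
The configuration is also created with the stack name as a prefix (ID and dates are placeholders):

ID                          NAME                 CREATED              UPDATED
eeeeeeeeeeeeeeeeeeeeeeeee   api_traefik-config   About a minute ago   About a minute ago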

To display the configuration content:

docker config inspect api_traefik-config --pretty

And finally, to list all the services:

docker stack ps api
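
With the constraints above, the output should look roughly like this (IDs are placeholders):

ID        NAME            IMAGE                   NODE     DESIRED STATE   CURRENT STATE
fffffff   api_traefik.1   traefik:1.4             node-1   Running         Running about a minute ago
ggggggg   api_books.1     mlabouardy/books-api    node-2   Running         Running about a minute ago
hhhhhhh   api_movies.1    mlabouardy/movies-api   node-3   Running         Running about a minute ago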

In the list above, you can see that the 3 containers are running on node-1, node-2 & node-3.

If you point your favorite browser (not you, IE) to the Traefik dashboard URL (http://MANAGER_NODE_IP:8080), you should see that the frontends and backends are well defined.

If you check http://MANAGER_NODE_IP/books, you will get a list of books:
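
For example, with curl (the response payload is illustrative):

curl http://MANAGER_NODE_IP/books

[{"id":1,"title":"The Go Programming Language"},{"id":2,"title":"Docker in Action"}]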

If you replace the base URL with /movies:
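
Again with curl (illustrative response):

curl http://MANAGER_NODE_IP/movies

[{"id":1,"title":"The Matrix"},{"id":2,"title":"Interstellar"}]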

What happens if we want to scale out the books & movies APIs? We can do that with the docker service scale command:
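
For example (the replica counts are arbitrary; the service names are prefixed with the stack name):

docker service scale api_books=4 api_movies=5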

We can confirm that the new replicas are up and running:
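
For instance with docker service ls (IDs are placeholders):

docker service ls

ID        NAME          MODE         REPLICAS   IMAGE                   PORTS
fffffff   api_traefik   replicated   1/1        traefik:1.4             *:80->80/tcp, *:8080->8080/tcp
ggggggg   api_books     replicated   4/4        mlabouardy/books-api
hhhhhhh   api_movies    replicated   5/5        mlabouardy/movies-api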

Traefik automatically recognized that we started more containers and made them available to the right frontend.

The Swarm manager has decided to schedule the new containers on node-2 (3 of them) and node-3 (4 of them), using the Round Robin strategy.