Architecting a Highly Available and Scalable Wordpress Using Docker Swarm, Traefik & GlusterFS… by@eon01



Docker is a powerful tool, but learning how to use it the right way can take a long time, especially with the rapidly growing and sometimes confusing ecosystem of containers. That is why I had the idea to start writing Painless Docker.

Painless Docker is a complete and detailed guide (for beginners and intermediate levels) to create, deploy, optimize, secure, trace, debug, log, orchestrate & monitor Docker and Docker clusters in order to create high quality microservices applications.

This article is covered in more detail in the bonus chapters of the Painless Docker Book.

Using Docker, Docker Swarm, Amazon RDS (Aurora) + EC2, GlusterFS & Traefik, we are going to create a highly available and scalable WordPress cluster.

Preparing The Infrastructure

I am using Amazon EC2 machines, but you can use your preferred infrastructure.

Start by creating two (EC2) machines in two different availability zones.

In this tutorial, I am using:


The first machine will be the manager of the Swarm cluster and the second one will be the worker.

This is just an example; how many managers you create depends on how available you want your cluster to be. You may create 3 managers (or more) for more availability.

The manager will have a public IP because it will receive all of the incoming requests and pass them to Docker, which will handle the internal load balancing between the containers living on the manager (the same machine in this case) and the containers living on the worker.

Don’t forget to add an Elastic Block Store to each machine.

We should have two instances:

Using lsblk, we can verify that the EBS volume is attached to our machine:

xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 10G 0 disk

On each machine, create the filesystem:

sudo mkfs.xfs /dev/xvdb

Creating A Trusted Pool Using GlusterFS

GlusterFS is a scale-out network-attached storage file system.

Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks.

GlusterFS was developed originally by Gluster, Inc. and then by Red Hat, Inc., as a result of Red Hat acquiring Gluster in 2011.

Let’s start by installing GlusterFS:

apt-get  install -y glusterfs-server

If you are using another OS or another Linux distribution, adapt this command to your needs.


Notice: I created a machine with no public IP (the worker); this machine will not be able to reach the Internet and install GlusterFS unless you set up an Amazon NAT instance. If you are just running some tests or are not familiar with AWS, create a machine with a public IP or create an EIP and assign it to the machine.

Create a mount target (brick):

mkdir -p /glusterfs/bricks/

Notice: A brick is a directory on an underlying disk filesystem. If one of the bricks goes down because of a hardware failure, the replicated data remains available on the other bricks.

Use the /etc/fstab file to permanently mount the EBS volume on the new brick:

/dev/xvdb /glusterfs/bricks/ xfs    defaults        0 0

Now run mount /dev/xvdb in order to apply the modifications that we added to the fstab file.
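Since this entry has to be added on every node, it can help to script it. Here is a minimal, hedged sketch (the add_fstab_entry helper is my own, not a standard tool); it only appends the line if it is not already present:

```shell
#!/bin/sh
# Append an fstab entry only if it is not already there (idempotent).
# Arguments: device, mount point, fstab path (defaults to /etc/fstab).
add_fstab_entry() {
    dev="$1"; mnt="$2"; fstab="${3:-/etc/fstab}"
    entry="$dev $mnt xfs defaults 0 0"
    # -F: match the literal line, -q: exit status only, -s: no error
    # message if the file does not exist yet
    if ! grep -qsF "$entry" "$fstab"; then
        echo "$entry" >> "$fstab"
    fi
}

# Example run against a scratch file instead of the real /etc/fstab:
add_fstab_entry /dev/xvdb /glusterfs/bricks/ /tmp/fstab.test
```

Running it twice leaves a single entry, so it is safe to re-run in a provisioning script.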

Under each /glusterfs/bricks/ mount point, create a directory to be used for the GlusterFS volume:

mkdir /glusterfs/bricks/

Let’s add these lines to the /etc/hosts file on each server; we will need this in a following step: node1 node2

node1 is the manager and node2 is our worker. In order to test this configuration, you can execute: ping node2 from node1:

PING node2 ( 56(84) bytes of data.
64 bytes from node2 ( icmp_seq=1 ttl=64 time=0.857 ms

From the manager (node1), type the following command to establish Gluster cluster nodes trust relationship:

gluster peer probe node2

You should have peer probe: success as an output, otherwise check your firewall settings or tail your logs.
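If the probe fails intermittently, it can help to script the check. The following is a rough sketch (the count_connected_peers helper is my own); it reads the output of gluster peer status on stdin and counts the peers reported as connected, so it can be tested without a live cluster:

```shell
#!/bin/sh
# Count peers that `gluster peer status` reports as connected.
count_connected_peers() {
    # Each healthy peer prints a "State: Peer in Cluster (Connected)" line
    grep -c 'Peer in Cluster (Connected)'
}

# On a live node you would pipe the real command into it:
#   gluster peer status | count_connected_peers
```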

Now we have a working GlusterFS trusted pool.

A GlusterFS storage pool is a trusted network of storage servers (node1 and node2 in our case). When we started the first server, the storage pool consisted of that server alone. When we added the additional storage server node2 to the storage pool (using the probe command from node1, the storage server that is already trusted), we created a trusted storage pool of 2 servers.

An Example Of A GlusterFS Architecture

Now, from the manager server, we should create a two-way mirror (replicated) volume that we will call booksfordevops_com-wordpress using:

gluster volume create booksfordevops_com-wordpress replica 2 node1:/glusterfs/bricks/ node2:/glusterfs/bricks/

You will get this output:

volume create: booksfordevops_com-wordpress: success: please start the volume to access data

You can see the volume if you type gluster volume list.


Now you should start it using gluster volume start booksfordevops_com-wordpress:

If everything is ok, you will get a similar output to this:

volume start: booksfordevops_com-wordpress: success

Our volume should be healthy but you can check its status using gluster volume status:

Status of volume: booksfordevops_com-wordpress
Gluster process TCP Port RDMA Port Online Pid
Brick node1:/glusterfs/bricks/booksfordevop 49152 0 Y 1868
Brick node2:/glusterfs/bricks/booksfordevop 49152 0 Y 17591
NFS Server on localhost N/A N/A N N/A
Self-heal Daemon on localhost N/A N/A Y 1894
NFS Server on node2 2049 0 Y 17612
Self-heal Daemon on node2 N/A N/A Y 17613
Task Status of Volume booksfordevops_com-wordpress
There are no active volume tasks

Now that the GlusterFS server side is set up, we need to set up the client side. Let’s create the directory to be used by the client on each node of our cluster:

mkdir -p /data/booksfordevops_com-wordpress

We need to mount the shared volume on each node: node1 mounts it from node2, and node2 mounts it from node1.

On node1, add this line at the end of /etc/fstab file:

node2:/booksfordevops_com-wordpress       /data/booksfordevops_com-wordpress       glusterfs     defaults,_netdev  0  0

On node2, add this line at the end of /etc/fstab file:

node1:/booksfordevops_com-wordpress       /data/booksfordevops_com-wordpress       glusterfs     defaults,_netdev  0  0

Then on both hosts, type mount -a.
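To sanity-check the shared storage, you can write a marker file through one mount point and read it back through the other. Here is a minimal sketch (the check_shared_mount helper is my own; on a real cluster you would run the write on node1 and the read on node2):

```shell
#!/bin/sh
# Write a marker file under one mount point and verify the same content
# is visible under another. Returns 0 when the content matches.
check_shared_mount() {
    src="$1"; dst="$2"; marker="gluster-smoke-test"
    echo "hello-from-$(uname -n)" > "$src/$marker"
    # A replicated GlusterFS volume exposes the same file on both mounts
    [ -f "$dst/$marker" ] && \
        [ "$(cat "$dst/$marker")" = "$(cat "$src/$marker")" ]
}

# On node1 (after node2 has mounted the same volume):
#   check_shared_mount /data/booksfordevops_com-wordpress \
#                      /data/booksfordevops_com-wordpress && echo "replicated"
```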

Creating Our Swarm Cluster

The next step is to install Docker on both hosts:

curl -fsSL | sh

Then initialize the Swarm cluster:

docker swarm init

Execute the last command on the manager and you will get a command to execute on the worker:

docker swarm join \
--token XXXXXX-x-xxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxx \

If everything is ok, the worker will join the cluster:

This node joined a swarm as a worker.
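Swarm join tokens follow a recognisable SWMTKN-1-… shape, so a quick syntactic sanity check is possible before pasting a token around. This small helper is my own sketch, not a Docker feature:

```shell
#!/bin/sh
# Syntactic sanity check for a Swarm join token:
# tokens look like SWMTKN-1-<cluster-secret>-<join-secret>.
is_swarm_token() {
    echo "$1" | grep -Eq '^SWMTKN-1-[a-z0-9]+-[a-z0-9]+$'
}

# On the manager, the current worker token can be re-printed with:
#   docker swarm join-token -q worker
```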

Deploying Our Application

In this tutorial, we want to create a Wordpress blog. We could host the MySQL or MariaDB database in our GlusterFS trusted pool, but in my specific case I created an Aurora database.

We are going to host the Wordpress files in the storage pool we created, mounted at /data/booksfordevops_com-wordpress.

This is the Docker Compose v3 file that we are going to deploy:

version: '3'
services:
  wordpress:
    image: wordpress:4.7.3-php7.1-apache
    ports:
      - 8000:80
    networks:
      - booksfordevops_com-network
    volumes:
      - /data/booksfordevops_com-wordpress:/var/www/html
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: any
networks:
  booksfordevops_com-network:
    driver: overlay

Notice: For security reasons, do not put your docker-compose.yml file in the same directory as your Wordpress files in /data/booksfordevops_com-wordpress, otherwise it will be publicly accessible.

In order to deploy our website, we should execute this command:

docker stack deploy --compose-file=docker-compose.yml booksfordevops_com
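After a deploy, docker service ls shows a REPLICAS column like 1/1. As a rough sketch (the unconverged_services helper is my own, not part of Docker), you can parse that output to spot services that have not reached their desired replica count:

```shell
#!/bin/sh
# Read `docker service ls --format '{{.Name}} {{.Replicas}}'` output on
# stdin and print the names of services whose running count has not
# reached the desired count (e.g. "0/1").
unconverged_services() {
    while read -r name replicas; do
        running="${replicas%/*}"   # part before the slash
        desired="${replicas#*/}"   # part after the slash
        [ "$running" = "$desired" ] || echo "$name"
    done
}

# On the manager:
#   docker service ls --format '{{.Name}} {{.Replicas}}' | unconverged_services
```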

We can use a similar command to the following one in order to deploy the Wordpress app:

docker network create -d overlay booksfordevops_com-network
docker service create --name booksfordevops_com_wordpress \
--publish 8000:80 \
--mount type=bind,source=/data/booksfordevops_com-wordpress,target=/var/www/html \
-e WORDPRESS_DB_HOST=db_host \
-e WORDPRESS_DB_NAME=db_name \
--replicas 1 \
--network booksfordevops_com-network \
wordpress:4.7.3-php7.1-apache

But putting it all together in a Docker Compose v3 file is a more organised way to deploy our app.

You may have some permission problems with your fresh Wordpress installation; you will need to execute these commands:

cd /data/booksfordevops_com-wordpress
chown www-data:www-data  -R *
find . -type d -exec chmod 755 {} \;  
find . -type f -exec chmod 644 {} \;
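These commands can be wrapped in a small reusable function; here is a hedged sketch (the fix_wp_perms name and its owner parameter are my own, and the defaults mirror the commands above):

```shell
#!/bin/sh
# Reset ownership and the usual Wordpress permissions (755 on
# directories, 644 on files) under a given document root.
fix_wp_perms() {
    root="$1"; owner="${2:-www-data:www-data}"
    chown -R "$owner" "$root"
    find "$root" -type d -exec chmod 755 {} \;
    find "$root" -type f -exec chmod 644 {} \;
}

# Run as root on each node:
#   fix_wp_perms /data/booksfordevops_com-wordpress
```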

On both servers, you can now see that the Wordpress files are mounted from the service containers to the host volume:

ls -lrth /data/booksfordevops_com-wordpress
-rw-r--r--  1 www-data www-data  418 Sep 25  2013 index.php
-rw-r--r-- 1 www-data www-data 3.3K May 24 2015 wp-cron.php
-rw-r--r-- 1 www-data www-data 364 Dec 19 2015 wp-blog-header.php
-rw-r--r-- 1 www-data www-data 1.6K Aug 29 2016 wp-comments-post.php
-rw-r--r-- 1 www-data www-data 3.0K Aug 31 2016 xmlrpc.php
-rw-r--r-- 1 www-data www-data 5.4K Sep 27 21:36 wp-activate.php
-rw-r--r-- 1 www-data www-data 4.5K Oct 14 19:39 wp-trackback.php
-rw-r--r-- 1 www-data www-data 30K Oct 19 04:47 wp-signup.php
-rw-r--r-- 1 www-data www-data 3.3K Oct 25 03:15 wp-load.php
-rw-r--r-- 1 www-data www-data 34K Nov 21 02:46 wp-login.php
-rw-r--r-- 1 www-data www-data 2.4K Nov 21 02:46 wp-links-opml.php
-rw-r--r-- 1 www-data www-data 16K Nov 29 05:39 wp-settings.php
-rw-r--r-- 1 www-data www-data 20K Jan 2 18:51 license.txt
-rw-r--r-- 1 www-data www-data 7.9K Jan 11 05:15 wp-mail.php
-rw-r--r-- 1 www-data www-data 7.3K Jan 11 17:46 readme.html
drwxr-xr-x 18 www-data www-data 8.0K Mar 6 16:00 wp-includes
drwxr-xr-x 9 www-data www-data 4.0K Mar 6 16:00 wp-admin
-rw-r--r-- 1 www-data www-data 2.7K Mar 19 19:41 wp-config-sample.php
-rw-r--r-- 1 www-data www-data 3.2K Mar 19 19:41 wp-config.php
drwxr-xr-x 4 www-data www-data 52 Mar 19 19:51 wp-content

If you check the created brick on each server, you will find the same files:

ls -lrth /glusterfs/bricks/

At this step, we have a working Wordpress installation that you can reach using the IP address of the manager and port 8000.

Adding Traefik

Træfɪk is an HTTP reverse proxy and load balancer made to deploy microservices; it supports Docker and Docker Swarm (and other backends like Mesos/Marathon, Consul, Etcd, Zookeeper, BoltDB, Amazon ECS and REST APIs).

Let’s create a service to run Traefik on the manager ( --constraint=node.role==manager ). The reverse proxy will run on a separate network ( --network traefik-net).

docker network create -d overlay traefik-net
docker service create --name traefik \
--constraint=node.role==manager \
--publish 80:80 \
--publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik:v1.1.0-rc1 \
--docker \
--docker.swarmmode \
--docker.domain=traefik

Updating Wordpress Service

In order to make our Wordpress website work with Traefik, we are going to update its docker-compose file and attach it to the Traefik network traefik-net. I also added some Traefik-related labels like traefik.port and traefik.frontend.rule:

version: '3'
services:
  wordpress:
    image: wordpress:4.7.3-php7.1-apache
    ports:
      - 8001:80
    networks:
      - booksfordevops_com-network
      - traefik-net
    volumes:
      - /data/booksfordevops_com-wordpress:/var/www/html
    deploy:
      mode: replicated
      replicas: 2
      labels:
        traefik.port: 80
        traefik.frontend.rule: ","
      restart_policy:
        condition: on-failure
networks:
  booksfordevops_com-network:
    driver: overlay
  traefik-net:
    external: true

Update the deployment using:

docker stack deploy --compose-file=docker-compose.yml booksfordevops_com

You can check that the configured domain is accessible using a simple curl with a Host header; in my case:

curl -H

You can also go to the health dashboard in order to see things like the response time and the status codes of our application.

What We Learned

We saw how to build a highly available Wordpress website, with storage and computing distributed across two different availability zones.

Within each EC2 machine, we can scale Wordpress to more than one container and gain another level of resilience.

Our reverse proxy can check the health of each container and redirect traffic to the working ones.

We used modern tools and technologies like:

  • Amazon RDS
  • Amazon EC2
  • GlusterFS
  • Docker
  • Docker Swarm
  • Traefik

Our website is working; you can subscribe and wait for the Books For DevOps release.

Connect Deeper

This article is part of Painless Docker Book: Unlock The Power Of Docker & Its Ecosystem.

Painless Docker is a practical guide to master Docker and its ecosystem based on real world examples.

Painless Docker aims to be a complete and detailed guide to create, deploy, optimize, secure, trace, debug, log, orchestrate & monitor Docker and Docker clusters. Through this book you will learn how to use Docker in development and production environments, and the DevOps pipeline between them, in order to build modern microservices applications.

If you resonated with this article, please subscribe to DevOpsLinks: An Online Community Of Diverse & Passionate DevOps, SysAdmins & Developers From All Over The World.

You can find me on Twitter, Clarity or my blog and you can also check my books: SaltStack For DevOps & The Jumpstart Up.

If you liked this post, please recommend and share it to your followers.