In this example we are going to see the deployment of a Redis cluster and a PHP guestbook application that uses it as its data store.
We assume that you have already set up a Platform9 cluster with at least one node, and the cluster is ready.
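To confirm that kubectl is pointed at the right cluster and that the nodes are ready, you can run:

$ kubectl get nodes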
Let’s start with the Redis parts.
Redis is an in-memory key-value store that is mainly used as a cache. To set up clustering with data replication, we need a Redis instance that acts as master, together with additional instances as slaves. The guestbook application can then use this setup to store data; the Redis masters will propagate writes to the slave nodes.
We can initiate a Redis master deployment in a few different ways: using the kubectl tool, the Platform9 UI, or the Kubernetes UI. For convenience we use kubectl, as it is the tool most commonly used in tutorials.
First we need to create a Redis cluster deployment. Looking at the Redis documentation here, setting up a cluster requires a few configuration properties. We can leverage Kubernetes ConfigMaps to store them and reference them in the deployment spec.
We need to save a script and a redis.conf file that are going to be used to configure the master and slave nodes.
Create the following config, redis-cluster.config.yml, with these values:
$ cat redis-cluster.config.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster-config
data:
  update-ip.sh: |
    #!/bin/sh
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${IP}/" /data/nodes.conf
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    appendonly yes
We define a script that inserts the pod's IP into the nodes.conf file. This fixes an issue with Redis as referenced here: pod IPs change across restarts, so the address Redis recorded for the local node goes stale. The script runs as the container entrypoint, so it executes every time a new redis container starts.
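For illustration, here is roughly what the myself line in nodes.conf looks like before and after the script runs; the node ID, IPs, and slot range are made-up values:

# before (stale IP left over from a previous pod):
07c37dfeb235213a872192d90877d0cd55635b91 10.244.1.5:6379@16379 myself,master - 0 0 1 connected 0-5460
# after update-ip.sh runs with IP=10.244.2.8:
07c37dfeb235213a872192d90877d0cd55635b91 10.244.2.8:6379@16379 myself,master - 0 0 1 connected 0-5460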
Then we have the redis.conf, which applies the minimal cluster configuration.
Apply this spec into the cluster:
$ kubectl apply -f redis-cluster.config.yml
Then verify that it exists in the list of configmaps:
$ kubectl get configmaps
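The output should include an entry like the following, with 2 data keys for the two files we defined (AGE will vary):

NAME                   DATA   AGE
redis-cluster-config   2      10s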
Next we need to define a spec for the Redis cluster instances. We can use a Deployment or a StatefulSet; a StatefulSet is the better fit here, as it gives each instance a stable identity. We define six instances, which the bootstrap step below will split into three masters and three slaves:
Here is the spec: redis-cluster.statefulset.yml
$ cat redis-cluster.statefulset.yml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.7-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/update-ip.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster-config
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
In the above spec we defined a few things:
- An IP environment variable that the update-ip.sh script from the ConfigMap needs. It is populated with the pod-specific IP address using the Downward API.
- Shared volumes: the ConfigMap we defined earlier, mounted at /conf, and a persistent data volume mounted at /data.
- Two container ports: 6379 for clients and 16379 for the cluster gossip protocol.
With this spec we can deploy the Redis cluster instances:
$ kubectl apply -f redis-cluster.statefulset.yml
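Before bootstrapping, wait until all six pods are running. Both of the following commands use the names and labels from the spec above:

$ kubectl get pods -l app=redis-cluster
$ kubectl rollout status statefulset/redis-cluster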
Once we verify that the pods are ready, we need to perform the last step, which is bootstrapping the cluster. Consulting the documentation here for creating the cluster, we need to exec into one of the instances and run the redis-cli --cluster create command. For example, taken from the docs:
$ redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
--cluster-replicas 1
To do that in our case, we need to get the local pod IPs of the instances and feed them to that command.
We can query the IP using this command:
$ kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}'
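This prints a space-separated list of addresses; the IPs below are examples and will differ in your cluster:

10.244.1.5:6379 10.244.1.6:6379 10.244.2.7:6379 10.244.2.8:6379 10.244.3.9:6379 10.244.3.10:6379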
So if we save them in a variable, we can pass them at the end of the redis-cli command:
$ POD_IPS=$(kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
Then we can run the following command:
$ kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $POD_IPS
If everything is OK, you will see the following prompt. Enter ‘yes’ to accept and continue:
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
........
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Then we can verify the cluster state by running the cluster info command:
$ kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:28
cluster_stats_messages_pong_sent:34
cluster_stats_messages_sent:62
cluster_stats_messages_ping_received:29
cluster_stats_messages_pong_received:28
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:62
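We can also check how the roles were assigned across the pods:

$ kubectl exec -it redis-cluster-0 -- redis-cli cluster nodes

Each line of the output shows a node's ID, its pod IP and ports, its role (master or slave), and the hash slots it serves.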
Before we continue deploying the guestbook app, we need to offer a unified service frontend for the Redis Cluster so that it’s easily discoverable in the cluster.
Here is the service spec: redis-cluster.service.yml
$ cat redis-cluster.service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  type: ClusterIP
  selector:
    app: redis-cluster
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
We expose the cluster as redis-master here, as the guestbook app will be looking for a service host with that name to connect to. The selector routes traffic to the pods labeled app: redis-cluster from our StatefulSet.
Once we apply this service spec, we can move on to deploying and exposing the Guestbook Application:
$ kubectl apply -f redis-cluster.service.yml
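As a quick sanity check that the service resolves and routes to the cluster, we can ping it through redis-cli, which ships in the redis image we deployed:

$ kubectl exec -it redis-cluster-0 -- redis-cli -h redis-master ping
PONG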
The guestbook application is a simple PHP script that shows a form for submitting a message. On startup it attempts to connect to the redis-master host for writes and the redis-slave hosts for reads. With the GET_HOSTS_FROM environment variable set to env, it reads those hostnames from two environment variables: REDIS_MASTER_SERVICE_HOST for the master and REDIS_SLAVE_SERVICE_HOST for the slave.
First, let’s define the deployment spec below:
php-guestbook.deployment.yml
$ cat php-guestbook.deployment.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 150m
            memory: 150Mi
        env:
        - name: GET_HOSTS_FROM
          value: env
        - name: REDIS_MASTER_SERVICE_HOST
          value: "redis-master"
        - name: REDIS_SLAVE_SERVICE_HOST
          value: "redis-master"
        ports:
        - containerPort: 80
The code of the gb-frontend image is located here.
Next is the associated service spec:
---
apiVersion: v1
kind: Service
metadata:
  name: guestbook-lb
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: guestbook
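Save the service spec to a file and apply both specs; the service filename below is our own choice, and any name works:

$ kubectl apply -f php-guestbook.deployment.yml
$ kubectl apply -f php-guestbook.service.yml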
Note: NodePort will assign a random port (by default in the 30000-32767 range) on the public IP of every node. This gives us a public host:port pair where we can inspect the application. Here is a screenshot of the app after we deployed it:
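To find the assigned port, query the service; the values below are illustrative:

$ kubectl get svc guestbook-lb
NAME           TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
guestbook-lb   NodePort   10.21.2.143   <none>        80:31898/TCP   1m

Here the application would be reachable at http://<node-public-ip>:31898.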
Once we have finished experimenting with the application, we can clean up the resources by issuing kubectl delete statements. A convenient way is to delete by labels, although in the specs above only the pod templates carry labels, so the controllers and services are easiest to delete by name. For example:
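# these names match the specs used throughout this guide
$ kubectl delete deployment guestbook
$ kubectl delete service guestbook-lb redis-master
$ kubectl delete statefulset redis-cluster
$ kubectl delete configmap redis-cluster-config
# PVCs created by the volumeClaimTemplate follow the data-<statefulset-name>-<ordinal> pattern:
$ for i in 0 1 2 3 4 5; do kubectl delete pvc data-redis-cluster-$i; done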
**Please note I am an employee of Platform9 and my team helped contribute to this guide**