It’s time to install a MongoDB Replica Set on a Kubernetes cluster on Azure and try to kill it in all possible ways!

Starring: Helm, StatefulSets, PersistentVolumes, PersistentVolumeClaims, Azure.

## Kubernetes Cluster configuration

Let’s install a cluster with 1 master and 3 nodes, all running Linux, using ACS Engine. I described the detailed steps for installing a cluster with ACS Engine in my article “Kubernetes Adventures on Azure — Part 3 (ACS Engine & Hybrid Cluster)”. Please refer to it for details; here I only list the commands quickly.

### Creation of the Resource Group

This is needed to group all resources for this tutorial in a single logical group, so that everything can be deleted with a single command at the end.

```
az group create --name k8sMongoTestGroup --location westeurope
```

### Cluster provisioning

I usually create an SSH key pair for my tests on ACS; please check my article on how to do it. I changed the examples/kubernetes.json file to use the previously created SSH key pair, a dnsPrefix and a servicePrincipalProfile (here following the “Deploy a Kubernetes Cluster” suggestions from Microsoft). My kubernetes.json file, with my changes (dnsPrefix, keyData, clientId, secret) filled in, is:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.7"
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "ivank8stest",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v2",
        "availabilityProfile": "AvailabilitySet"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          { "keyData": "ssh-rsa yourpubkeyhere" }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "yourappclientid",
      "secret": "yourappsecret"
    }
  }
}
```

Create your cluster with:

```
acs-engine deploy --subscription-id <your-subscription-id>
```

Wait for the cluster to be up and running:

```
INFO[0010] Starting ARM Deployment (k8sMongoGroup-2051810234). This will take some time...
INFO[0651] Finished ARM Deployment (k8sMongoGroup-2051810234).
```
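While the ARM deployment runs, you can poll the provisioning state of the resource group from another terminal. A minimal sketch, assuming the Azure CLI 2 is installed and logged in, and using the resource group name created above (the `rg_state` helper name is mine):

```shell
# Print the provisioning state of a resource group ("Succeeded" when done).
# Assumes the Azure CLI 2 (az) is installed and logged in.
rg_state() {
  az group show --name "$1" --query properties.provisioningState -o tsv
}

# Example (commented out: needs a live Azure subscription):
# rg_state k8sMongoTestGroup
```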
Connect to it using the kubeconfig file generated during deployment in the _output folder:

```
export KUBECONFIG=~/acs/acs-engine/_output/ivank8stest/kubeconfig/kubeconfig.westeurope.json
```

The following commands can be used to determine when the cluster is ready:

```
kubectl cluster-info
kubectl get nodes

NAME                        STATUS    AGE       VERSION
k8s-agentpool1-33584487-0   Ready     46m       v1.7.4
k8s-agentpool1-33584487-1   Ready     46m       v1.7.4
k8s-agentpool1-33584487-2   Ready     46m       v1.7.4
k8s-master-33584487-0       Ready     46m       v1.7.4
```

Now, if you want to use a UI to check your cluster status, you can open the Kubernetes Dashboard: run kubectl proxy and then open a browser at http://127.0.0.1:8001/ui.

## Helm MongoDB Charts

Helm is the package manager for Kubernetes. It simplifies installation and maintenance of products and services like MongoDB, Redis, RabbitMQ and many others. We will use it to install and configure a MongoDB Replica Set.

### Helm installation

Prerequisites, installation steps and details can be found in the article from Microsoft, “Use Helm to deploy containers on a Kubernetes cluster”. With all prerequisites in place, Helm installation is as simple as running:

```
helm init --upgrade
```

### Clone the charts repository

Let’s clone the charts repository so we can examine and change the MongoDB chart files before deploying everything on our cluster:

```
git clone https://github.com/kubernetes/charts.git
```

Now go into the /charts/stable/mongodb-replicaset folder. Here you will find all the artifacts composing a Helm Chart. If needed, you can change the values.yaml file to tailor the installation to your needs. For now let’s try a standard installation. Run the following command and wait for the output:

```
helm install .
```
```
NAME:   foppish-angelfish
LAST DEPLOYED: Sun Sep 10 20:42:42 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                                   CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
foppish-angelfish-mongodb-replicaset   None         <none>        27017/TCP   5s

==> v1beta1/StatefulSet
NAME                                   DESIRED   CURRENT   AGE
foppish-angelfish-mongodb-replicaset   3         1         5s

==> v1/ConfigMap
NAME                                         DATA      AGE
foppish-angelfish-mongodb-replicaset         1         5s
foppish-angelfish-mongodb-replicaset-tests   1         5s

NOTES:
...
```

DONE! The MongoDB Replica Set is up and running! Helm is ultra easy and powerful!

### MongoDB installation test

From the output of helm install, pick up the NAME of your release and use it in the following command:

```
export RELEASE_NAME=foppish-angelfish
```

Here we follow a different path from the Helm Chart notes: let’s open an interactive shell session with the remote Mongo server!

```
kubectl exec -it $RELEASE_NAME-mongodb-replicaset-0 -- mongo
```

OUTPUT:

```
MongoDB shell version v3.4.8
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.8
type "help" for help
...
...A LOT OF WARNINGS (I will check these in a future post to clean them up, if possible)...
...
rs0:PRIMARY>
```

### Who is the primary?

In theory Pod 0 should be the primary, as you can see from the rs0:PRIMARY> prompt. If this is not the case, run the following command to find the primary:

```
rs0:SECONDARY> db.isMaster().primary
```

Take note of the primary Pod, because we are going to kill it soon; connect to it using the same kubectl exec command used above.
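The Pod names we are using are stable by design: a StatefulSet gives each replica a predictable name of the form `<name>-<ordinal>`, reachable through the headless service at `<pod>.<service>.<namespace>.svc.cluster.local`. A small sketch of how those in-cluster addresses are built (the `pod_addr` helper is mine; namespace `default` and port 27017 are assumptions matching this install):

```shell
# Build the stable in-cluster address of replica $2 of release $1,
# following the StatefulSet + headless-service naming convention.
# Namespace "default" and port 27017 are assumptions matching this install.
pod_addr() {
  release="$1"; ordinal="$2"
  echo "${release}-mongodb-replicaset-${ordinal}.${release}-mongodb-replicaset.default.svc.cluster.local:27017"
}

pod_addr foppish-angelfish 0
# -> foppish-angelfish-mongodb-replicaset-0.foppish-angelfish-mongodb-replicaset.default.svc.cluster.local:27017
```

This is why clients can keep a fixed connection string even while Pods are killed and recreated.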
## Failover testing

We need to create some data to check persistence across failures. Since we are already connected to the mongo shell, creating a document and leaving the session is as simple as:

```
rs0:PRIMARY> db.test.insert({key1: 'value1'})
rs0:PRIMARY> exit
```

Use the following command to monitor changes in the replica set:

```
kubectl run --attach bbox --image=mongo:3.4 --restart=Never --env="RELEASE_NAME=$RELEASE_NAME" -- sh -c 'while true; do for i in 0 1 2; do echo $RELEASE_NAME-mongodb-replicaset-$i $(mongo --host=$RELEASE_NAME-mongodb-replicaset-$i.$RELEASE_NAME-mongodb-replicaset --eval="printjson(rs.isMaster())" | grep primary); sleep 1; done; done'
```

OUTPUT:

```
foppish-angelfish-mongodb-replicaset-0 "primary" : "foppish-angelfish-mongodb-replicaset-0.foppish-angelfish-mongodb-replicaset.default.svc.cluster.local:27017",
foppish-angelfish-mongodb-replicaset-1 "primary" : "foppish-angelfish-mongodb-replicaset-0.foppish-angelfish-mongodb-replicaset.default.svc.cluster.local:27017",
foppish-angelfish-mongodb-replicaset-2 "primary" : "foppish-angelfish-mongodb-replicaset-0.foppish-angelfish-mongodb-replicaset.default.svc.cluster.local:27017",
...
```

### Kill the primary!

Here it is:

```
kubectl delete pod $RELEASE_NAME-mongodb-replicaset-0
```

MongoDB will start an election and another Pod will become primary:

```
foppish-angelfish-mongodb-replicaset-1 "primary" : "foppish-angelfish-mongodb-...
```

In the meantime, Kubernetes will immediately take corrective action, instantiating a new Pod 0.

### Kill ’em all

Now we have to simulate a real disaster: let’s kill all the Pods and see the StatefulSet magically recreate everything, with all data available.

```
kubectl delete po -l "app=mongodb-replicaset,release=$RELEASE_NAME"
kubectl get po --watch-only
```

After a few minutes our MongoDB replica set will be back online, and we can test it again to see if our data is still there.
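The grep inside the monitoring loop above simply pulls the `"primary"` line out of printjson(rs.isMaster())'s output. If you only want the bare host name, a small parser can be layered on top; this is a sketch (the `primary_of` helper is mine) that assumes the `"primary" : "host:port",` formatting shown in the monitor output:

```shell
# Extract the bare primary host from printjson(rs.isMaster()) output.
# Assumes the `"primary" : "host:port",` formatting shown in the monitor output.
primary_of() {
  grep '"primary"' | sed 's/.*"primary" : "\([^"]*\)".*/\1/'
}

# Example with a captured fragment of the monitor output:
sample='"primary" : "foppish-angelfish-mongodb-replicaset-1.foppish-angelfish-mongodb-replicaset.default.svc.cluster.local:27017",'
echo "$sample" | primary_of
# -> foppish-angelfish-mongodb-replicaset-1.foppish-angelfish-mongodb-replicaset.default.svc.cluster.local:27017
```

Comparing this value before and after killing Pod 0 makes the election visible at a glance.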
## Final check and clean up

Run the following command to verify that the key we created is still there:

```
kubectl exec $RELEASE_NAME-mongodb-replicaset-1 -- mongo --eval="rs.slaveOk(); db.test.find({key1:{\$exists:true}}).forEach(printjson)"
```

As always, you can delete everything with a single Azure CLI 2 command:

```
az group delete --name k8sMongoTestGroup --yes --no-wait
```

## How to expose this replica set externally?

This is a topic for a future post. It seems as trivial as a kubectl expose, but it is not so easy. If you expose the existing service, it will be load balanced across the 3 Pods behind it, and this is wrong: we need 3 load balancers exposing 3 services, one for each Pod in the StatefulSet. Moreover, we have to activate authentication and SSL to follow best practices from a security perspective.

I will find the best way to do it while playing with Helm, Kubernetes, MongoDB and Azure!
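As a preview of where that future post will have to go, one common pattern is a separate LoadBalancer Service per Pod, each selecting a single replica. Newer Kubernetes versions label each StatefulSet Pod with its own name via `statefulset.kubernetes.io/pod-name`, which makes a per-Pod selector possible. A hypothetical, untested sketch for replica 0 (the Service name is mine; verify the label exists on your cluster version):

```yaml
# Hypothetical per-Pod Service: one of these per replica (0, 1, 2).
# Relies on the statefulset.kubernetes.io/pod-name label that newer
# Kubernetes versions add to StatefulSet Pods; verify on your cluster.
apiVersion: v1
kind: Service
metadata:
  name: foppish-angelfish-mongodb-replicaset-0-external
spec:
  type: LoadBalancer
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    statefulset.kubernetes.io/pod-name: foppish-angelfish-mongodb-replicaset-0
```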